[04:06:59] I am hosting a MediaWiki from Apache using DocumentRoot "/var/www/mediawiki", making my wiki the "base URL": all visits to the root directory of the website bring me to the wiki. But now I'd like to host some alternative files on my web server, and I was hoping to receive guidance on the safest way to perform the migration
[04:07:29] I'd like to be able to access the wiki now at "site.com/wiki" instead of "site.com"
[04:16:54] I found my answer.. wondering if I can set up a redirect from old URLs to the new location
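A minimal sketch of the Apache side of that migration, assuming the non-wiki content moves into a new document root (/var/www/html is illustrative, not from the discussion); the matching MediaWiki change is setting $wgScriptPath = "/wiki" in LocalSettings.php:

    # Serve the rest of the site from a new document root (illustrative path)
    DocumentRoot "/var/www/html"

    # Leave the wiki files where they are, but expose them under /wiki
    Alias /wiki "/var/www/mediawiki"
    <Directory "/var/www/mediawiki">
        Require all granted   # Apache 2.4 syntax
    </Directory>

    # Permanently redirect old root-based wiki URLs to the new prefix;
    # anchoring on index.php (the old entry point) avoids redirect loops
    RedirectMatch permanent "^/index\.php(.*)" "/wiki/index.php$1"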
[09:23:48] What does (?x: do in js regexes?
[09:25:14] non-capturing group or something IIRC, so it doesn't count as a backreference when you use numeric groups in replacements
[09:26:15] I thought that was just normal (?:
[09:26:15] without the x
[09:27:09] hmmm, you're right, I don't know what that x does there
[09:28:47] bawolff: it's the modifier for "extended" regex
[09:28:50] ignores whitespace
[09:29:08] bawolff: but, that is not supposed to be supported in JS.
[09:29:13] http://php.net/manual/en/regexp.reference.internal-options.php
[09:29:32] ok
[09:29:32] thanks
[09:29:32] ah, javascript!
[09:29:39] https://regex101.com/
[09:29:43] I thought it was PHP, haven't seen that, sorry
[09:30:02] MatmaRex: Hmm, well it's used in the UploadWizard code
[09:30:17] heh
[09:30:27] where?
[09:31:43] mw.FlickrChecker.js
[09:31:54] in the checkFlickr function
[09:35:32] bawolff: that's (?:x and not (?x:
[09:35:48] Oh
[09:35:49] which just matches a literal 'x'
[09:36:00] I totally reversed that in my mind
[12:09:20] Hi
[12:09:21] I installed MediaWiki yesterday, and today I saw a lot of pages like timberland-sneakers-ladies-24vy66.html in the root directory of my wiki, what's going on?
[12:12:55] Maybe your server was hacked
[12:13:51] bawolff: certainly, but only in the root of the wiki
[12:15:21] it seems to be a security issue with MediaWiki
[12:18:24] Well, that's a possibility, but it could also be a million other things
[12:32:15] maxagaz: look at the access_log of your server to see what IP accessed the server and what URLs they used. Compare timestamps of the files to identify the requests that created those files
[12:32:52] that's assuming those files were created via the web server and not by someone with ssh access
[12:40:19] Vulpix: I stopped it by removing the indes.php files I found at the root
[12:40:42] this file "indes.php" was listed by ps
[12:42:08] well, stopping the hacked script is good, but you should find the cause that allowed someone to compromise the server
[12:42:46] also, I'd reinstall the server from scratch (taking a backup of relevant information) because other files on the server may be compromised as well
[12:53:56] Krinkle: Hi. Does this look familiar? - https://phabricator.wikimedia.org/T100058
[13:41:24] Hello everyone :) Quick question: is the Hackathon in Lyon open to "casual visitors"?
[13:41:59] * bawolff doesn't know. Maybe
[13:42:54] XaS: Try asking in #wmhack
[13:42:57] I live nearby and I thought I might come around for an hour, maybe take a few pictures and have a chat with some developers
[13:43:02] * bawolff feels like nobody would mind
[13:43:04] oh right, thx
[13:43:15] XaS: I'm pretty sure that's cool, but I'm not an organizer
[14:44:15] how would you download each and every page's info
[14:44:16] like
[14:44:20] every page's title
[14:44:23] maybe some snippet
[14:48:28] Shibe: what is your goal?
[14:48:45] to download each page's title and put it into a dictionary to search for it sooper fast
[14:49:48] is this for WMF wikis?
[14:49:51] Shibe: you can use the MediaWiki API https://www.mediawiki.org/wiki/API:Allpages
[14:50:35] codezee: there may be better options
[14:51:01] !dump
[14:51:02] For information on how to get dumps from Wikimedia Wikis, see http://meta.wikimedia.org/wiki/Data_dumps . For a how-to on importing dumps, see https://www.mediawiki.org/wiki/Manual:Importing_XML_dumps .
[14:51:33] Betacommand: I guess you mean if the info of a wiki has to be retrieved from within the wiki itself?
[14:52:33] codezee: how about for a specific wiki
[14:52:35] like wiki.roblox.com
[14:52:44] I tried this http://wiki.roblox.com/api.php?action=query&list=allpages&apfrom=* but it's not listing everything
[14:54:42] that needs pagination. You can get up to 500 on each request: http://wiki.roblox.com/api.php?action=query&list=allpages&apfrom=&aplimit=500
[14:55:14] then add apcontinue= with the return of the apcontinue element to get the next 500 pages, and so on
[14:55:38] thanks
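A short Python sketch of that continuation loop; the requests library is the only assumption beyond what was said above:

    import requests

    API = "http://wiki.roblox.com/api.php"  # any MediaWiki api.php endpoint

    def all_page_titles(api_url):
        """Yield every page title, following apcontinue until exhausted."""
        params = {
            "action": "query",
            "list": "allpages",
            "aplimit": 500,   # the per-request maximum mentioned above
            "format": "json",
            "continue": "",   # opt in to the modern continuation format
        }
        while True:
            data = requests.get(api_url, params=params).json()
            for page in data["query"]["allpages"]:
                yield page["title"]
            if "continue" not in data:
                break
            params.update(data["continue"])  # carries apcontinue onward

    titles = list(all_page_titles(API))
    print(len(titles), "pages")

Each response's continue block is copied back into the next request, which is the apcontinue handoff described at 14:55.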
[15:00:34] In php's var_dump, what does ∫(0) signify?
[15:03:00] richa, https://github.com/Stype/mwoauth-php/blob/master/MWOAuthClient.php
[15:04:54] bawolff, ... that's just somewhere in the var_dump output? can I see an example?
[15:05:23] If you were at the hackathon you'd be able to see it directly on my screen :P
[15:06:07] Krenair: https://dpaste.de/aHEw
[15:07:57] nevermind
[15:08:42] might just refer to an integer?
[15:09:30] although not for all of them... interesting
[15:09:46] It was actually &int( ..., and Firefox was interpreting that as ∫ and replacing the entity reference
[15:10:47] ah
[15:46:24] Is it possible for a page name to have literal underscores?
[15:46:33] I mean, not converted to spaces.
[15:50:17] !displaytitle
[15:50:17] See .
[15:50:21] hmm
[15:50:47] Celelibi: it isn't, but you can display the page title with underscores, which basically achieves your effect
[15:51:53] Ah, yes. Thanks.
[19:26:27] hi. what's the sorting algorithm for the autocomplete of the search field in Wikipedia?
[19:47:13] salty-horse: it's based on page popularity somehow. i think it might be by the number of incoming internal links
[19:47:38] Wikimedia wikis are running the CirrusSearch extension: https://www.mediawiki.org/wiki/Special:MyLanguage/Extension:CirrusSearch
[19:48:39] yep. https://www.mediawiki.org/wiki/Help:CirrusSearch#Search_suggestions
[19:51:48] MatmaRex, thanks. I'm wondering because "doub" completes to NSFW terms
[19:52:29] heh
[19:53:41] salty-horse: there used to be related issues with the Wikimedia Commons image search, with searches such as "toothbrush" coming up with NSFW imagery. the search algorithm was tweaked somehow to give these lower weight or something
[19:53:56] MatmaRex, so.. NOTABUG? :)
[19:54:38] dunno. it might be a good idea to file a bug anyway, perhaps there's something that can be reasonably done
[19:57:39] MatmaRex, a bug against which product?
[19:58:17] salty-horse: Search
[19:59:38] MatmaRex, but is this a "MediaWiki" or a "Wikipedia" issue?
[20:00:06] salty-horse: probably both?
[20:04:42] MatmaRex, here? http://en.wikipedia.org/wiki/Wikipedia:Village_pump_%28technical%29
[20:06:55] salty-horse: i was thinking Phabricator, actually. https://phabricator.wikimedia.org/maniphest/task/create/
[20:07:17] but your message should be seen on the village pump too
[20:09:12] Phabricator doesn't have a "search" project, just sprints
[20:10:19] I'm pretty sure it does
[20:13:32] "Search-and-Discovery"?
[20:13:45] it's purple and has a "group" icon
[20:14:12] what's the issue you're reporting?
[20:18:48] Krenair, searching for "doub" in Wikipedia gives a suggestion for an NSFW article in the first results
[20:19:26] You might try CirrusSearch
[20:19:49] If it can somehow be argued as a technical issue
[20:25:03] Krenair, I think it's a policy issue, if at all, so maybe the Village Pump?
[20:25:19] How is it a policy issue
[20:25:20] ?
[20:26:03] You mean censoring search results based on NSFW-ishness
[20:26:34] There's a feature to do that, but it's very politically contentious
[20:26:50] or a feature to downgrade anyway, not really a safe search
[20:28:59] bawolff, I'm not advocating for removing it. However, if Wikipedia has a policy regarding it, I'd like to help out by reporting it
[20:29:21] salty-horse: It's kind of a complex issue politically.
[20:29:40] bawolff, so I'm staying away. Thanks for the help :)
[20:30:03] more controversial on Commons than on Wikipedia
[20:31:37] https://en.wikipedia.org/wiki/Wikipedia:Content_disclaimer is kind of the policy
[20:32:38] If you're not aware of what you're getting into... I doubt the enwiki village pump will be too kind
[20:33:04] :D
[23:32:46] hello. is there some kind of sample source for a default install, to see some nice formatting skills? :D
[23:32:53] some kind of sample you can copy-paste into your own wiki
[23:33:15] to inspire people about the possibilities of formatting, front pages and categories
[23:46:18] hey guys!
[23:46:38] I want to do the following:
[23:46:46] 'delete all pages containing this specific text'
[23:47:02] or rather
[23:47:16] I want to delete all pages that are in a specific category.
[23:47:24] how do I easily do this?
[23:48:12] Volund: You generally need an extension, or custom js
[23:48:50] which extension would you recommend? I've found DeleteBatch, but for that I need to figure out how to make a text file with the pages I want anyway.
[23:50:03] HellTiger: I don't really think so (beyond the help page for formatting things). Feel free to make something like that
[23:50:58] thanks bawolff
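One way to build the text file DeleteBatch wants is to pull the category members from the API first. A hedged Python sketch; the api.php URL and category name are placeholders, and requests is assumed:

    import requests

    API = "https://example.org/w/api.php"   # placeholder: your wiki's api.php
    CATEGORY = "Category:Pages to delete"   # placeholder category name

    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": CATEGORY,
        "cmlimit": 500,
        "format": "json",
        "continue": "",
    }
    titles = []
    while True:
        data = requests.get(API, params=params).json()
        titles += [m["title"] for m in data["query"]["categorymembers"]]
        if "continue" not in data:
            break
        params.update(data["continue"])  # cmcontinue for the next batch

    # one title per line, the format the batch-deletion tools expect
    with open("to_delete.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(titles))

The resulting list can then be pasted into the Special:DeleteBatch form, or fed to MediaWiki's own maintenance/deleteBatch.php script on the server.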