[00:13:29] I'm getting spam despite using QuestyCaptcha. Is there a way to see if the spammers are actually providing the right captcha answers? [00:14:46] is there documentation regarding what types of licenses should be defined at the Licenses page? [00:14:55] or is that up to each individual site admin to pick some [00:16:49] teward: Up to individuals, but you probably want something like CC licenses. [00:17:14] JordiGH: any idea where I can grab a predefined list of such licenses for a copy-paste into my MediaWiki:Licenses page? [00:17:20] hate to ask but i'm lazy and don't want to type everything :) [00:17:30] Sorry, no idea. [01:05:19] Having problems installing VisualEditor on 1.26.2. Using git head fails as it complains of needing 1.27. Fair enough. [01:06:00] But using the 1_26 branch gives a problem where parts of VisualEditor are looking for TableContext.js but it has been renamed TableLinkContext.js [01:06:40] git log shows it was renamed at some point, but maybe that branch caught it mid-change? [01:10:17] Once that is fixed, the deprecated PreviewWidget pops up. Grrr. [01:23:00] Ah, the submodule was gumming things up. Solution: git checkout REL1_26; git submodule update [01:29:53] G_SabinoMullane, sorry, missed this message earlier [01:30:06] yes, you should always update the lib/ve submodule after updating your copy of VE [01:46:32] Gotcha. On the plus side, it is working now and looks great [01:59:42] cool, good to hear :) [05:45:22] Ah... quick Q, what do I do if I accidentally submitted a mailing list post without a title? :/ [05:57:40] josephine_l: you live with it :) It's OK; it happens to everyone. [05:58:33] ori: Haha, oops, I actually submitted a second post with a title. :x Hope that wasn't too spammy. [05:58:55] no, that's fine too. [05:59:05] Thanks ori! :) [10:48:49] hi folks [11:02:45] Today I updated my MediaWiki from 1.24 to 1.26 and after some problems with $wgScriptPath and $wgArticlePath it now apparently works.
But I am facing another mysterious problem. Subpages (like: https://wiki.mysite.com/Home/subofHome) are rendered without any styling. The inspector shows a 404 on a second and a third HTML file that it tries to download. But it is not trying to download any CSS/JS. Hope anyone could help me by sharing their knowledge. [11:03:17] Hi I want to contribute to wikimedia through Outreachy 2016. I would really appreciate if someone can guide me through it. [11:03:24] tranquillo: sounds like a misconfiguration in your webserver [11:07:30] @RobotsOnDrugs which kind do you think? [11:07:50] that question doesn't make any sense [11:15:58] Even in my head it makes sense. :D Ok, which kind of misconfiguration of my server do you think could be the problem? [11:19:32] it's difficult to say without knowing what your configuration is [11:20:08] if you're using nginx, i might be able to help in #nginx, otherwise i have no idea [11:21:22] nope, I am using Apache on an Uberspace [11:25:27] deepikadutta: #wikipedia-en is probably a better place for that [11:25:56] (type "/join #wikipedia-en") [11:27:49] Could it be a problem with the .htaccess? Because one 404 request goes to: https://wiki.mysite.com/subofHome/load.php?debug=false&lang=de&modules=mediawiki.legacy.commonPrint%2Cshared%7Cmediawiki.sectionAnchor%7Cskins.modern&only=styles&skin=modern [11:28:46] can anyone tell me how i can access or edit the code for the Special:FormEdit page in MediaWiki? [11:31:12] Actually, I have a form on my site using which one can add data. But I want the user to be signed in before using the form. So if he clicks on the tab "Add data" without signing in, he must be redirected to the sign-in page, and post sign-in, again redirected to the "Add data" page. [11:31:15] Ash___, that'd be SemanticForms right? [11:31:28] yes [11:31:50] you'd be looking for some file like SpecialFormEdit.php under the SemanticForms extension [11:32:20] ok [11:32:45] @Krenair, can you answer the second query as well?
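The subpage 404s tranquillo describes usually happen when resource URLs get resolved relative to the subpage path (note the 404 hitting /subofHome/load.php rather than the real script directory). A minimal LocalSettings.php sketch, with all paths assumed for illustration rather than taken from tranquillo's actual setup, that keeps resource URLs absolute and separate from article URLs:

```php
<?php
// LocalSettings.php fragment: the paths below are assumptions for
// illustration, not tranquillo's real configuration.
$wgServer      = "https://wiki.mysite.com"; // canonical server name
$wgScriptPath  = "/w";   // absolute path where index.php/load.php live
$wgArticlePath = "/$1";  // pretty article URLs at the document root
```

With distinct, absolute paths, https://wiki.mysite.com/Home/subofHome no longer causes the skin's CSS/JS to be requested from /subofHome/load.php.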
[11:33:20] query kart_ Hello sir, I wanted to contribute to Outreachy 2016. I would really appreciate if you could guide me through. [11:34:32] deepikadutta, hi and welcome! Thanks for your interest! [11:34:36] deepikadutta, have you checked https://www.mediawiki.org/wiki/Outreachy/Round_12 ? [11:35:00] Ash___, don't know enough about that extension, sorry [11:35:25] no problem, Krenair, thanks [11:36:31] andre__: hi, thanks for your response. Yes I have checked the link but can't quite figure out where I should start from. [11:37:26] deepikadutta: Ah, nice. What exactly is unclear on that page? Please tell us so we can improve it. :) [11:39:17] andre__: As a beginner, I can't really be clear as to what my first step should be. [11:41:22] deepikadutta, Have you followed [[Outreach programs/Life of a successful project#Coming up with a proposal]] ? [11:41:41] "Find, or create, a Phabricator project task that interests you." [11:44:52] andre__: I will go through your suggestions and get back to you in a while :) [11:47:35] deepikadutta: if a sentence or a specific step on the wiki is unclear, please tell us the sentence so we can help. Thanks! [11:49:27] query prikshit_ listen [11:51:13] deepikadutta: I'm not sure what you're trying, but if you want to run an IRC command, you have to prefix it with a slash. Writing "query" and a username in a public channel will just ping the person. [11:52:14] andre__: ya, just realised that the key did not work. [11:53:08] Actually, I have a form on my site using which one can add data. But I want the user to be signed in before using the form. So if he clicks on the tab "Add data" without signing in, he must be redirected to the sign-in page, and post sign-in, again redirected to the "Add data" page. I am using Semantic Forms, and the "Add data" form is opened using Special:FormEdit [12:01:32] Hmm... Freenode appears to be broken. [12:01:52] svip: it's upgrade all the things day [12:02:06] Oh. [12:02:13] Fun!
[12:17:23] andre__: I looked through the list of projects and have found one which interests me. What do you suggest is the fastest way to get a mentor response, [12:17:31] mailing list?? [12:28:07] deepikadutta: Please see the link in https://www.mediawiki.org/wiki/Outreachy/Round_12#Project_ideas to "Answering your questions" [12:52:26] Actually, I have a form on my site using which one can add data. But I want the user to be signed in before using the form. So if he clicks on the tab "Add data" without signing in, he must be redirected to the sign-in page, and post sign-in, again redirected to the "Add data" page. I am using Semantic Forms, and the "Add data" form is opened using Special:FormEdit [13:08:05] marnin [13:10:25] Hello again, short question: how do I enable ParserFunctions (like: https://www.mediawiki.org/wiki/Help:Extension:ParserFunctions#.23expr)? [13:13:04] Install the extension! [13:19:52] it seems MW 1.26 already contains the extension (ParserFunctions). And I can see it in the extensions dir. And I tried to enable it with: "wfLoadExtension( 'ParserFunctions' );" in LocalSettings.php [13:33:20] Actually, I have a form on my site using which one can add data. But I want the user to be signed in before using the form. So if he clicks on the tab "Add data" without signing in, he must be redirected to the sign-in page, and post sign-in, again redirected to the "Add data" page. I am using Semantic Forms, and the "Add data" form is opened using Special:FormEdit [13:36:11] tranquillo: It doesn't contain it, but the tarball IS shipped with it [13:36:34] I suspect you need to use the old loading method, not the new one [13:57:44] okay, require_once( "$IP/extensions/ParserFunctions/ParserFunctions.php"); works fine. Merci [18:01:11] We're getting a spam attack, despite us using QuestyCaptcha and despite just having changed the captcha answers. [18:01:24] I'm wondering if the attack is circumventing our captcha. How can I find out?
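For the ParserFunctions exchange above: as the log shows, wfLoadExtension() failed while the old entry point worked, which suggests the bundled 1.26 copy had not yet been converted to extension registration (no extension.json). A LocalSettings.php sketch contrasting the two loading methods:

```php
<?php
// LocalSettings.php fragment.
// New-style registration needs an extension.json in the extension
// directory; the ParserFunctions copy in the 1.26 tarball apparently
// did not ship one, so this line fails:
// wfLoadExtension( 'ParserFunctions' );

// Old-style loading, which is what worked for tranquillo:
require_once "$IP/extensions/ParserFunctions/ParserFunctions.php";
```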
[18:08:11] JordiGH: If your CAPTCHAs are being bypassed it is because your site is being targeted for attack by actual real humans. [18:08:38] Extension:AbuseFilter can assist with blocking them after they register. [18:11:27] Trela: I doubt it's a human attack. The text looks auto-generated and the captchas require them to execute Octave code. [18:12:05] It's also way too frequent and coming from just a few IP addresses. It could be a computer-assisted human. [18:12:58] Is there some logging where I can see what kind of POST generated the spam pages? [18:19:14] https://www.mediawiki.org/wiki/Manual:Logging_table [18:21:37] But that doesn't tell me what POST corresponds to the creation of pages, does it? If, for example, I suspect a mediawiki vuln that allows captcha bypassing, how would I reassure myself? [18:36:52] I need to completely remove deleted spam pages. I tried deleteArchivedRevisions.php but they're still visible. Any ideas? [18:44:14] JordiGH: QuestyCaptcha is a very short-term solution [18:44:32] if you're still following the same basic patterns for the questions, the bot authors can easily code an algorithm that interprets them and puts in the answer [18:44:48] Skizzerz: You think the bots are actually running Octave? [18:44:49] you need to rotate questions frequently in order for such a system to be good [18:44:59] someone went through the trouble of writing a spambot that targets Octave? [18:45:00] emulating it possibly [18:45:31] Skizzerz: Is there no easy way to confirm that the spammers are indeed typing the captcha correctly? [18:45:37] but sure, if it's "run this code and type in what you get", machines are quite proficient at that [18:46:07] you can capture request bodies and look at them for odd stuff [18:46:13] How do I do that? [18:46:35] how much traffic does your site get? [18:46:47] Not much by humans. [18:46:58] total, including bots/spiders [18:47:06] Probably still not much.
[18:47:27] We run on a single instance on Dreamhost with one of their cheapest hosting options and everything works fine. [18:47:46] i.e. not running into CPU or database shortage or something like that. [18:49:05] set up $wgDebugLogFile to point to somewhere on the disk [18:49:09] ideally not web-accessible [18:49:19] the file will get very large very quickly, don't keep it on for too long [18:49:47] among tons of other things, the responses submitted to the captchas should be in there [18:50:03] Okay, I've done that before. I had forgotten if that's what logged HTTP requests. [18:50:29] Thanks. [18:50:54] Is deleting a page really this difficult? O_o I can't find anything after 30 minutes of googling and trying things. [18:51:52] slester: ? [18:52:08] what skin? [18:52:16] if vector, dropdown arrow to the top right of the page, click Delete [18:52:24] you need to be logged into an administrator account [18:52:37] Not just deleting it, that's easy. But there's a spam page I want to no longer be accessible [18:52:47] deleting it still shows the title/reason/etc. [18:52:55] so if you say delete but don't mean delete, what do you mean instead? :) [18:52:57] ah [18:52:59] ah [18:53:02] so you want log deletion too [18:53:03] Let me finish typing? [18:53:07] sec [18:53:07] I want it obliterated. [18:53:38] Btw, I want this too. [18:53:39] there's a (disabled by default) feature to delete specific revisions and log entries [18:53:40] The answer isn't deleteArchivedRevisions either, because that didn't actually do anything. [18:53:48] I want the whole page gone forever. [18:53:57] so you could delete the deletion log entry so that nobody can see the pagename, etc. [18:53:58] Not specific revisions or specific log entries. 
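Skizzerz's debug-log suggestion, as a config fragment. The path here is an assumption, and the warnings from the log apply: keep the file out of the web root, and turn the setting off again soon, since the file grows very quickly:

```php
<?php
// LocalSettings.php fragment. The path is an example, not a recommendation.
// The debug log records, among much else, submitted form data, so the
// captcha answers POSTed by the spammers should show up in it.
$wgDebugLogFile = "/home/wiki/logs/mw-debug.log"; // not web-accessible
```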
[18:54:11] then what you are asking for is impossible in base mediawiki [18:54:30] (by design; mediawiki is set up so that all actions are ostensibly reversible) [18:54:43] some extensions let you delete pages permanently, look into those [18:54:45] so blissfully ignorant of any abuse that may happen? :P [18:54:46] OK. [18:54:52] still haven't found any, though [18:55:25] not so much blissfully ignorant as maintaining an audit trail and allowing reversal of bad actions [18:55:49] Why would anyone want a spam page kept around, though? Who cares about auditing that? [18:56:23] https://www.mediawiki.org/wiki/Extension:DeletePagesForGood might work, I've never used it [18:56:47] it likely doesn't remove related log entries though [18:58:26] actually looks like it does [18:58:54] so assuming it works, that probably does what you want [19:09:23] Skizzerz: doesn't exist for my version of MediaWiki :< [19:11:10] what's your version? [19:12:20] 1.22 [19:12:36] I'd also recommend upgrading MediaWiki then [19:12:43] 1.22 is EOL and has unfixed security issues [19:13:10] if you don't like upgrading all that often, you should probably track the LTS branches [19:13:23] those receive security updates for a longer period of time than most releases [19:14:01] True, but I'm not in control of that sadly. [19:14:55] No shell access? [19:15:23] Well, it's a high-traffic wiki and my superiors don't want to take it down or make possibly breaking upgrades with our existing setup. [19:16:09] Mediawiki is pretty stable across upgrades, IME. [19:16:28] And upgrading it is just a matter of symlinking to the latest tarball you just unpacked. [19:16:30] At the minimum there's a new way of loading extensions. [19:22:30] Hoi, I just upgraded from 1.25.2 to 1.26.2 and now all style sheets are gone [19:23:10] looks like my installation has a chmod problem... because if I chmod 0777 the MediaWiki dir, all seems to be fine [19:23:30] can anyone help?
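The tarball-and-symlink upgrade flow mentioned above, sketched as shell commands. Every path and version here is an assumption, and maintenance/update.php still has to run after switching:

```shell
# Sketch only: adjust paths and versions to the actual setup.
cd /srv
tar xzf mediawiki-1.26.2.tar.gz                 # unpack the new release
cp mediawiki-1.25.2/LocalSettings.php mediawiki-1.26.2/
ln -sfn mediawiki-1.26.2 wiki                   # point the docroot symlink
php wiki/maintenance/update.php --quick         # apply schema changes
```

On the chmod question that follows: a blanket 0777 is risky; directories readable by the web server user (typically 0755) are normally enough for styles to load.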
[21:25:39] How might I pull all reference information from a wikipedia page? I can see that I could, for example, query the sections, get the section number of References, then pull the wikitext for that section. But is there some kind of quicker path to doing so? I’m mostly interested in querying the first/last names for all references on a page. [21:44:07] mixwhit: The wikitext content of references generally isn't in the References section, it's inline at the place where the reference is used [21:44:45] mixwhit: You could explore using the Parsoid HTML of the page ( https://en.wikipedia.org/api/rest_v1/page/html/Albert_Einstein ) which has a bunch of information encoded in the HTML including reference stuff [21:46:40] JordiGH: if the spam wave you're getting is Indian IPs who want you to know about QuickBooks tech support help numbers really badly, everybody's getting those right now [21:47:04] I believe they are using humans to get past captchas and then running scripts against the sites [21:47:29] because they behave in intelligent patterns - IP block their land line ISP and they'll try to use their cell phone provider for example [21:47:37] and then after that they'll start using botnet nodes. [21:48:39] besides IP blocks we had to add new title blacklist entries and a new AbuseFilter. [21:49:27] .*(((customer|tech(nical)?) (support|care|help))|help desk|service) (((tele)?phone number)|number).* <- works pretty well; could probably be optimized because I'm terrible at PCRE [21:50:22] for AbuseFilter we don't let new accounts create any article that looks like it has a phone # in it also. [21:50:30] .*[1]?[+-\.\s]*(800|844|855|888)[+-\.\s]*[\d]{3}[+-\.\s]*[\d]{4}.* [21:50:56] QuasAtWork: Ah, yes, that's the one. [21:51:08] So it's a mechanical turk? [21:51:29] it's all coming out of the company that owns quickbooks247.com and a few other borderline fraudulent sites. 
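QuasAtWork's two patterns above can be smoke-tested offline before going into TitleBlacklist or AbuseFilter. A small harness using Python's re module, whose syntax accepts these particular expressions; the IGNORECASE flag mirrors TitleBlacklist's default case-insensitive matching, and the sample titles are made up:

```python
import re

# Patterns quoted verbatim from the channel.
SUPPORT_RE = re.compile(
    r".*(((customer|tech(nical)?) (support|care|help))|help desk|service)"
    r" (((tele)?phone number)|number).*",
    re.IGNORECASE,
)
PHONE_RE = re.compile(
    r".*[1]?[+-\.\s]*(800|844|855|888)[+-\.\s]*[\d]{3}[+-\.\s]*[\d]{4}.*",
    re.IGNORECASE,
)

def looks_like_support_spam(title: str) -> bool:
    """True if a page title matches either spam pattern."""
    return bool(SUPPORT_RE.search(title) or PHONE_RE.search(title))

# Made-up examples:
# looks_like_support_spam("QuickBooks Customer Support Phone Number")  -> True
# looks_like_support_spam("Call 1-800-555-0199 today")                 -> True
# looks_like_support_spam("Albert Einstein")                           -> False
```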
[21:51:56] but yeah I believe they have humans in their employ due to the actions they have taken against us. [21:55:36] the worst thing is that at the same instant they create the pages on your wiki they also use a site called googlepings.com to spray every search engine with referrers to it [21:55:54] QuasAtWork: Thanks, that's helpful. I'll stfw for AbuseFilter and follow your suggestion. Right now we've disabled anonymous edits, which is sad. [21:55:55] I got a warning from google webmaster tools, and was able to trace back the external referrers [21:56:34] cool, no prob [21:56:37] I was guessing that they probably were doing that to get higher up in search rankings for fraud purposes. [21:56:43] yup [21:57:09] It's a pretty old scam, too, this QuickBooks scam. I see references to it from 2011. [21:57:43] QuasAtWork: Can you share your IP blocks? [21:57:44] yeah I was digging around a lot the other night [21:57:52] trying to see if it could be reported to anybody who would care [21:58:15] seems it's still legal to be as destructive as you want to wikis :P [21:58:18] yeah hang on... [21:58:25] I bet Indian authorities have bigger problems than a few people scamming the elderly in America and Europe. [21:58:59] http://doomwiki.org/wiki/Special:BlockList?wpTarget=&wpOptions[]=userblocks&wpOptions[]=tempblocks&wpOptions[]=addressblocks&limit=50 [21:59:08] at least I assume you're able to see that. [21:59:20] Ooh, Doom wiki! [21:59:36] if they're run by humans, you can probably set up something to redirect them to a malicious site [21:59:52] And the world goes blind. [22:00:07] those keygen search sites that fill your computer with malware in one second :P [22:00:18] I'm pretty sure the humans breaking the captchas are just hapless foot soldiers. [22:02:06] RFC meeting starting now in #wikimedia-office regarding REST API versioning [22:33:46] Does MediaWiki have a way to show the IP address of a registered user?
[22:34:15] A user made a post under his username - does MediaWiki somewhere store his IP? [22:34:17] you need an extension IIRC [22:34:21] for Special:CheckUser [22:34:39] I know of CheckUser - it has not been in use. [22:34:51] that's all it does: show IPs. [22:34:58] other than that you'd have to database dive. [22:35:11] In which way? [22:35:17] uhh.. hmm [22:35:31] not too familiar with the SQL schema. [22:36:29] When I am looking at the revision table https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Revision_table [22:36:40] I see a user ID and the user name. But no IP... [22:37:41] apparently that data isn't stored unless you have CheckUser installed. [22:37:50] seems to go into a table called cu_changes [22:38:31] is it possible to disable the viewing of the Special:ListGroupsRights page for standard users? [22:39:46] teward: why? it has nothing sensitive on it [22:40:15] p858snake|L2: just asking if possible, is all ;) [22:41:12] it would be possible to put a hack in, but really not much point [22:41:25] they could still call the info via API i guess as well [22:41:54] p858snake|L2: With RewriteRules, sure. [22:42:32] QuasAtWork: Yeah, I am actually looking at existing edits - and CheckUser had not been installed when they were done. [22:43:00] I think you are out of luck but I'm not an expert either ;) [22:43:29] Hmm...the page table only has the titles and so on, but no information on any user at all. [22:43:41] The text table is out as well. [22:44:19] The formats in logging, filearchive, oldimage and so on seem to be identical: Either IP address _or_ user ID and username. [22:44:33] Aaaarrrgghhhhh! [22:44:44] there's some old spiel on WP about one of the benefits of registering being not having your IP tracked [22:44:57] pretty sure we have it linked from our policies & guidelines :P [22:45:27] or maybe did before we installed CheckUser out of necessity [22:45:41] And it would not even be true - all wikis are actually using CheckUser.
[22:46:25] Joergi: last resort is to look at access_log (if you still have them) [22:46:31] yeah the guy who wrote that probably predated the extension [22:46:36] or possibly even extensions existing at all [22:46:53] Vulpix: Very old edits, from a time when MW 1.16.2 was current - and older. [22:47:08] Vulpix: I would be glad, if I only had a _complete_ database. [22:47:29] Vulpix: I don't even _think_ about logs or something. :-( [22:48:30] it would be strange to care about those IPs after such a long time [22:48:56] I don't suppose failing an edit due to Titleblacklist is logged somewhere :> [22:49:07] can't seem to find it if it is [22:49:48] QuasAtWork: What are you thinking about? [22:50:06] Vulpix: I do not have the user table. [22:50:07] oh I just saw a new acct register that hasn't done anything yet [22:50:11] That is the problem. [22:50:12] suspicious. [22:50:36] I figured it might be sitting there pounding the blacklist. [22:50:36] So for old edits my idea was to just replace the usernames with IP addresses and that's it. [22:50:57] That way I might be able to run the installation although a few thousand usernames are missing... [22:52:45] doesn't revision have the user name? that should be sufficient, why do you need the IP used by that user? [22:53:24] The revision table has not only the usernames, but also their IDs. [22:53:47] But that doesn't help; I do not know these people's passwords, settings and so on... [22:53:56] This installation is just so badly broken. [22:54:12] Have I mentioned that someone added new users on top of the missing ones? [22:54:59] whoops, yeah, that will be a nightmare [22:55:26] That's why I think it's easier to make these old revisions appear without using the user table. [22:55:42] Or...can I change user IDs? [22:55:57] I mean associating a username with a different ID? [22:57:18] yes, you can change rev_user to 0 [22:59:00] I mean: Can I move a user from one user_id to another user_id?
[22:59:26] Like Carl first has user_id 2 and after that has user_id 10002. [22:59:56] Would that break things even worse? [22:59:56] you can, using https://www.mediawiki.org/wiki/Extension:UserMerge [23:00:19] Does it have a way to select the user_id? [23:00:24] using that extension things shouldn't break [23:00:51] IDK [23:01:08] I could "select" the user_id by creating a bunch of dummy users, of which I know the user_ids... [23:01:09] of course, both users must exist on the user table [23:02:53] oh, incrementing all user IDs... that's not a job for that extension [23:03:09] you should edit the database directly [23:03:45] but you should find all instances of the user ID in the database to change them all [23:04:55] That will be these: https://git.wikimedia.org/blob/mediawiki%2Fextensions%2FUserMerge/master/MergeUser.php#L205 [23:12:51] I have a template and want to sum up variables from it IF they are actually set: {{#if:{{{stat2HPmin}}}|+{{formatnum:{{{stat2HPmin}}}|R}}}} || {{#expr:{{formatnum:{{{stat1HPmin}}}|R}}{{#if:{{{stat2HPmin}}}|+{{formatnum:{{{stat2HPmin}}}|R}}}}}} [23:13:06] but it gets me a { punctuation error when the template variable is not set [23:13:13] but actually I catch it with the #if? [23:13:33] is there a guide to making file license infoboxes like MediaWiki's site has? [23:13:44] or is it basically reverse-engineering the required templates and such [23:16:38] oh I see, you just set the template vars empty, but they need to be set, else the {{{var}}} string appears [23:54:42] !infobox [23:54:42] https://en.wikipedia.org/wiki/Template:Infobox [23:54:48] No goddamn it [23:54:56] !wpinfobox [23:55:00] !botbrain [23:55:00] Hello, I'm wm-bot. The database for this channel is published at http://bots.wmflabs.org/~wm-bot/db/%23mediawiki.htm More about WM-Bot: https://meta.wikimedia.org/wiki/wm-bot [23:57:26] !infobox?
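On the { punctuation error in the #expr discussion above: an unset parameter expands to the literal text {{{stat2HPmin}}}, which #expr cannot parse. Giving each parameter a default, as discovered at [23:16:38], avoids this; a sketch using the parameter names from the message:

```wikitext
{{#if: {{{stat2HPmin|}}} | +{{formatnum:{{{stat2HPmin}}}|R}} }} ||
{{#expr: {{formatnum:{{{stat1HPmin|0}}}|R}}{{#if: {{{stat2HPmin|}}} | +{{formatnum:{{{stat2HPmin|0}}}|R}} }} }}
```

{{{stat2HPmin|}}} expands to an empty string when the parameter is not passed, so the #if branch is skipped and #expr only ever sees numbers.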
[23:57:32] !infobox [23:57:32] https://en.wikipedia.org/wiki/Template:Infobox [23:58:55] !wm-bot-ping [23:58:55] pong [23:59:08] Yeah, that's the spirit! :-)))