[01:04:43] Isarra: 25 Compatibility fixes * 3 hours is not 50 :)
[01:06:10] MaxSem: Have you SEEN what the compatibility problems are with skinning?
[01:06:21] Sometimes you have to rewrite pretty much an entire extension!
[01:06:33] And Echo keeps coming back over and over...
[01:06:41] I'm talking about arithmetic, Isarra
[01:06:48] Oh!
[01:06:56] Sorry.
[01:07:00] Thanks.
[01:07:47] How do I disable Structured Discussions on my own user page?
[01:11:42] Man, I wish I knew.
[02:10:52] Heh.
[02:13:43] zzo left, I knew the answer to that question too :(
[02:36:54] Skizzerz: What is it?
[02:37:56] https://www.mediawiki.org/wiki/Extension:StructuredDiscussions#Enabling_or_disabling_StructuredDiscussions ?
[02:40:44] Ivy: for a single page, change its content model back to wikitext
[02:41:34] idk how well that works if you have existing topics on the page, you may have to delete it first
[02:41:43] and then recreate it with the wikitext content model
[02:42:43] Hmmm.
[02:43:19] Oh right, user talk pages weren't auto-converted.
[02:43:23] Just a lot of other talk pages, I guess.
[02:43:45] "Structured Discussions" ... lordy.
[08:03:55] How do I install a specific library with composer?
[08:04:11] For some reason, it won't install /wrappedstrings, despite an extension using it.
[08:04:31] Sorry, Wikimedia/WrappedString
[08:06:52] Hi
[08:09:01] Svip: does the extension declare it in composer.json?
[08:09:33] bawolff: I used require and that seemed to work.
[08:09:54] bawolff: But to your question, no, it doesn't.
[08:09:58] I guess it just assumes it's installed.
[08:10:03] I think you need to run composer update --no-dev in the extension directory (maybe, I don't really know)
[08:10:25] wrappedstring is required by newer versions of MediaWiki I think
[08:10:47] so maybe the ext just assumed you are using a newer MediaWiki
[09:34:57] hey guys, for some odd reason my wiki is redirecting me to a malformed URL after I try to go to a page that's auto-suggested from the search.
[09:35:09] It works fine when I'm doing a search for a keyword though
[09:36:28] but when I go to an auto-completed result, it will basically throw me to http://domain.local/wiki/http://domain.local/wiki/index.php/Pagename
[09:38:40] any idea where I might have misconfigured something? I tried installing cirrus but the plugins are not being called from LocalSettings anymore after I disabled them, so this basically persists in the non-cirrus-enabled wiki
[09:44:06] flying_sausages: by "plugins" you mean MediaWiki extensions?
[09:44:16] yes sorry
[09:45:03] could you try to rephrase your issue because I don't really understand the problem?
[09:45:30] Hah let me try again, I went to fetch my coffee in the meantime :)
[09:45:34] :)
[09:45:50] Basically, when I use the search field, it is partially broken
[09:46:15] with or without cirrus?
[09:46:17] When I search for a keyword and click e.g. "Containing... 'keyword'", it works fine and throws me to the search page
[09:46:25] this is currently for both
[09:46:29] with and without
[09:47:28] However when I try to search for the name of a page I know exists, and I am offered the auto-complete to go directly to the page, clicking this item forwards me to the "doubled-up" link, where the domain and wiki address are included twice
[09:48:06] autocomplete "top right box" on every page is normally using the opensearch api
[09:48:09] This is literally what I get as a URL when I look for the Kaspersky page
[09:48:13] http://vmamesys05.ame.local/wiki/http://vmamesys05.ame.local//wiki/index.php/Kaspersky
[09:48:16] strange...
[09:48:56] same if you click on a result in the autocomplete results?
[09:49:17] that is the second case I described above, is it not?
[09:49:31] let me see if I can make a screen recording
[09:50:11] in the autocomplete you have basically 3 choices: 1/ hit enter (go), 2/ click on a result, 3/ click on "search for pages containing..."
[09:54:05] dcausse, here you can see what exactly goes wrong https://www.useloom.com/share/69dbfa36cb914b47a23d8446d5a67bc9
[09:54:31] basically cases 1 and 3 work, case 2 does not
[09:56:15] flying_sausages: can you activate the "inspect mode" of your browser and see what kind of api request you send when typing in the search box
[09:56:58] it should be something like: https://en.wikipedia.org/w/api.php?action=opensearch&format=json&formatversion=2&search=something_you_type&namespace=0&limit=10&suggest=true
[09:58:00] alright let me give that a try
[10:02:30] dcausse, sorry I'm not entirely sure what exactly I should be looking for, I am using Chrome 63
[10:03:35] I've got my Inspect window up but I'm not sure which pane contains the info I'm searching for
[10:03:54] flying_sausages: in Chrome: right click, then open "Inspect", open the "Network" tab, type some chars in the search box, and look at the URL sent
[10:04:35] it should start with "api.php?...", copy paste the full URL
[10:05:26] got it
[10:05:38] got one for each character of "hello"
[10:05:44] yes
[10:05:54] the middle one points to http://vmamesys05.ame.local/wiki/api.php?action=opensearch&format=json&formatversion=2&search=hel&namespace=0&limit=10&suggest=true
[10:06:05] which seems ok to me
[10:06:29] the first three calls errored out but the 4th and 5th succeeded with a 200
[10:06:53] open the URL that errored in the browser to have a look at the error
[10:08:52] so e.g. the page I get thrown to after searching Kaspersky?
[10:09:35] If so, this is the 1st URL, which gives a 302
[10:09:40] Request URL: http://vmamesys05.ame.local/wiki/index.php?search=Kaspersky&title=Special%3ASearch&go=Go
[10:09:54] and the second 404s
[10:09:58] Request URL: http://vmamesys05.ame.local/wiki/http://vmamesys05.ame.local/wiki/index.php/Kaspersky
[10:12:05] flying_sausages: it seems like an apache/redirects misconfiguration but I'm not very knowledgeable in this area :/
[10:14:42] flying_sausages: What is the value of $wgServer, $wgArticlePath and $wgScriptPath in your LocalSettings.php?
[10:15:10] dcausse, that's alright, thanks for your efforts regardless :)
[10:15:15] bawolff, let me check
[10:15:54] $wgServer = "http://vmamesys05.ame.local"; $wgScriptPath = "/wiki";
[10:16:10] ok, that's good
[10:16:22] can't find anything containing "article" in my LocalSettings...
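For reference, a minimal LocalSettings.php sketch of how these three settings usually fit together on a setup like the one quoted above; the host name is taken from the discussion, while the explicit $wgArticlePath line is only illustrative, since (as noted just below) leaving it unset is fine:

    // LocalSettings.php -- illustrative sketch, not the asker's actual file
    $wgServer     = "http://vmamesys05.ame.local";  // protocol + host only, no path or trailing slash
    $wgScriptPath = "/wiki";                        // URL path where index.php and api.php live
    // $wgArticlePath is optional; when unset, page URLs typically fall back to
    // "$wgScriptPath/index.php/$1", i.e. /wiki/index.php/Pagename as seen in the log above.
    // Spelling it out explicitly would look like this:
    // $wgArticlePath = "/wiki/index.php/$1";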
[10:16:40] it's ok if $wgArticlePath is not set, if it's not then it uses the default, which is fine
[10:16:57] let me try with $wgArticlePath = '/wiki/$1';
[10:18:19] or more like $wgArticlePath = '/wiki/index.php/$1';
[10:18:28] I mostly just wanted to eliminate possible causes, it's totally ok if it's not set
[10:18:29] still nada
[10:18:50] alright I'll get rid of it again
[10:20:06] So if it's not those variables, then the next most likely cause is apache rewrite rules, like dcausse was suggesting
[10:20:41] hmm, let me check that config
[10:20:49] flying_sausages: Do you have a .htaccess file in MediaWiki's config with rewrite rules?
[10:21:06] Or otherwise have an apache config with either rewrite rule directives or alias directives?
[10:21:08] no .htaccess in the mediawiki folder
[10:21:15] Or equivalent for whatever web server you are using
[10:21:25] I have a redirect for the apache virtualhost
[10:22:42] Could you copy and paste the config you have to dpaste.de?
[10:23:58] this is the only config I have enabled that does anything with the wiki https://pastebin.com/raw/RmbkBXbR
[10:24:44] our DNS server returns the same IP either with or without the .ame.local btw
[10:27:19] sorry bawolff, didn't see you wanted it through this: https://dpaste.de/czGF
[10:29:39] sorry, got disconnected there
[10:30:31] flying_sausages: Hmm, well that looks correct. I would try maybe commenting out the redirect directives just to eliminate the possibility, but I don't see anything obviously wrong
[10:31:07] And beyond that, I'm out of ideas
[10:35:01] bawolff, just did that, still not working :'(
[10:47:58] Does the mediawiki-api mailing list have a corresponding newsgroup? Some mailing lists list it on their listinfo page, but this one does not.
[10:50:29] Sveta: gmane used to do the newsgroup thing. They were a totally independent thing. But I think they shut down
[10:51:14] bawolff: gmane continues to deliver messages.
[10:51:28] bawolff: wikitech-l uses it for instance,
[10:51:56] Oh hmm, looks like the guy shut it down, and then it got bought out, and now there are new owners
[10:52:45] bawolff: I discovered newsgroups yesterday, they seem to be nicer because I can add messages without subscribing to a list.
[10:53:18] bawolff: haven't tried that yet. However the prospect of not clogging my email inbox with hundreds of messages also sounds attractive.
[10:53:20] Well they are totally external, so if mediawiki-api isn't on the list, you could probably ask them to add it
[10:53:45] I only have 22,502 unread emails...
[10:53:47] hardly clogged
[10:53:54] Kidding? :)
[10:54:09] No. And that doesn't even include phabricator mail
[10:54:28] If you ask me, that's massive, not "hardly clogged".
[10:54:40] "hardly" means "barely, nearly not".
[10:54:45] I gave up trying to clear my inbox a long time ago
[10:54:55] missing
[10:55:14] You can also subscribe to a list, and send messages to it, but disable mail delivery, if you want to send messages but not get everything
[10:55:19] but then it's hard to reply to emails
[10:55:36] Yes, it is hard to browse the emails in the first place.
[10:55:52] A newsreader UI to threaded discussions is handy.
[10:57:58] I actually don't see why. Usenet and email are very similar. Anything you do for a newsreader UI you should be able to do for a mailing list UI
[10:58:05] most email clients now support threading
[10:59:21] I guess one advantage of a news reader is that it downloads email headers without downloading message contents.
[10:59:37] This means I can glance at the email subject and ignore a message (or a whole thread).
[11:00:08] In some news readers I can then unhide the thread later.
[11:00:35] On the other side, when using a mailing list, storing all messages in your inbox, for at least a brief moment of time, is mandatory. Unless you are OK with replying to emails in some magical ways.
[11:01:07] That's ... handy for someone who skips replying to 80% of a mailing list's messages.
[11:03:47] That's technically not true. There's a TOP command in POP3 to only download headers ( https://tools.ietf.org/html/rfc1939#page-11 ). Of course whether anyone implements that is a different question
[11:08:35] OK
[12:33:20] What is the sampling of the query function? I.e. if I used the exturlusage query on a link and get 300 pages, are they the most recent pages, or a randomly selected group from all the Wikipedia pages that use the link? Are they representative?
[12:36:22] jubugon: I think it's ordered by page_id
[12:36:44] jubugon: Which would be roughly (but not exactly) corresponding to when the page was created
[12:37:28] jubugon: But that may change in the future
[12:38:19] Hi! Thanks for your response.
[12:38:24] Is there any place that this is documented?
[12:39:21] This is an unofficial behaviour and may change without notice
[12:39:40] and I didn't actually test this
[12:40:21] jubugon: Sorry, I'm wrong
[12:40:27] Sorry, I'm not sure I understand what unofficial behaviour means? As in, does the API not have a clear protocol?
[12:41:21] As in the sort order of results, when it is not specifically documented, is undefined behaviour
[12:42:24] jubugon: I was incorrect, the sort order should be first alphabetical in reverse domain order, and then use el_id (which is when the link was stored)
[12:42:36] I was totally wrong on the page_id thing. I'm not sure where I got that idea from
[12:42:57] No worries, I appreciate you looking into this for me
[12:43:30] What is reverse domain order?
[12:44:14] http://sub.example.com/foo/bar becomes http://com.example.sub/foo/bar
[12:46:21] Note that in Special:Linksearch the behaviour is different when searching for IP addresses (e.g. http://127.0.0.1/foo )
[12:47:17] Do you know why the sample sizes are different?
[12:47:45] I.e. I get about 300 pages for bbc.co.uk but 500 for rt.com when I query exturlusage
[12:51:18] What's the exact query you are using?
[12:52:01] are you using the namespace parameter?
[12:53:31] parameters = { 'action': 'query', 'format': 'json', 'continue': '', 'list': 'exturlusage', 'euquery': link, 'eulimit': 'max'} wp_call = requests.get(mediawikiURL, params=parameters)
[12:53:43] Nope, just euquery
[12:54:30] Umm, so unless there are just simply only 300 links to bbc but 500 links to rt.com, I don't see why that would be when the eunamespace parameter is not set
[12:55:29] I don't think that's possible as obviously bbc.co.uk is linked to thousands if not millions of articles
[12:56:27] yeah, one would assume
[12:56:31] So the API is sampling from them somehow and I end up with 374 bbc links and 438 rt links
[12:56:41] But we don't really know why?
[12:56:49] Can you try specifically setting eunamespace=*
[12:58:04] what does = * mean?
[12:58:11] jubugon: Are you doing http://bbc.co.uk or https://bbc.co.uk ? It's possible that most of the links might be one instead of the other
[12:58:22] I mean actually *
[12:58:31] It means return results from all namespaces
[12:59:03] Just "bbc.co.uk"
[12:59:07] Ok I will run that
[12:59:12] but actually, try doing euprotocol=https
[12:59:41] if you don't set euprotocol it will only return plain http links and not any of the https links
[13:00:34] Is 'eunamespace': '*' correct?
[13:00:43] I got a syntax error for 'eunamespace': *
[13:06:04] Sorry, ignore that. I checked the sandbox and see that 'eunamespace': '*' is the correct one
[13:06:18] Running the code now
[13:08:38] Bizarrely I am now only getting 2 bbc articles?
[13:08:41] rt.com 47, bbc.co.uk 2, wikileaks.org 382
[13:14:00] jubugon: you're doing this on English Wikipedia?
[13:14:11] Yep
[13:14:26] So I get different results for http and https
[13:14:31] Trying to combine them now
[13:14:46] But still don't think this is an exhaustive list
[13:15:06] Tbh I don't need an exhaustive list, I just would like to understand what the sample is that I'm getting
[13:15:36] Well there should be 83,544 entries for https://bbc.co.uk and 608,319 for http://bbc.co.uk
[13:16:30] err wait
[13:16:52] Hmm so I'm getting rt.com: 680, bbc.co.uk: 812, wikileaks.org: 619
[13:17:06] I included a bunch of stuff like bbc-now.co.uk and news.bbc.co.uk
[13:18:01] So you think that when you get 80000+ this is an exhaustive list rather than a sample?
[13:18:02] jubugon: Ok, I think that's the problem. Try searching for *.bbc.co.uk
[13:19:08] When I search for exactly bbc.co.uk (no subdomains) I get 3 for https and 444 for http
[13:19:37] rt.com 680, *.bbc.co.uk 860, wikileaks.org 667
[13:20:38] Maybe you have more access than I do?
[13:20:39] I get a total of 607,603 for http://*.bbc.co.uk and 83,903 for https://*.bbc.co.uk
[13:20:49] I'm using a different system for querying than the API
[13:20:56] to get total counts
[13:21:09] I'm using https://quarry.wmflabs.org/
[13:21:23] but it should all use the same underlying data and ultimately give the same answers
[13:22:11] does it use the same query as the MediaWiki API?
[13:22:57] SQL syntax is different to Python syntax?
[13:22:58] Well quarry lets you make your own queries
[13:23:16] so it can use the same as the API, but you can also do other things if you want
[13:23:57] umm, err actually, I think in my query I forgot to remove records related to page deletions
[13:24:15] but that should have only a minor effect
[13:25:05] Sorry but I've only started Python a few months ago (as you can probably tell) and I don't know any other languages
[13:28:17] How many do you get from rt.com?
[13:28:25] I may not actually need subpages
[13:29:36] As long as I have an exhaustive list of all the links to bbc.co.uk and rt.com
[13:30:30] https://rt.com = 56, http://rt.com = 3800, https://*.rt.com = 2714, http://*.rt.com = 4449
[13:33:16] jubugon: Don't forget, if there are more than eulimit results, then the API will send back a continue parameter for fetching the next page of results
[13:33:28] Hello, I managed to finally find out where my Facebook like button and Google Analytics were coming from. Does anyone know if there were features for these in skins (specifically Vector), or might it be that someone has manually edited the skins to add them in? And if so, is that a standard practice (to me it doesn't feel like it, since there is even no documentation)
[13:35:03] "git log" would tell you about past features. Very unlikely though.
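Picking up the exturlusage thread above: a rough Python sketch of the kind of loop that combines euprotocol, a wildcard euquery, eunamespace and the continue parameter bawolff mentions; the endpoint and the example domain here are placeholders rather than the asker's actual code.

    import requests

    API = "https://en.wikipedia.org/w/api.php"  # placeholder endpoint

    def pages_linking_to(domain, protocol="http"):
        """Yield titles of pages with external links to the domain (one protocol at a time)."""
        params = {
            "action": "query",
            "format": "json",
            "list": "exturlusage",
            "euquery": "*." + domain,   # wildcard form discussed above, to include subdomains
            "euprotocol": protocol,     # http and https results are indexed separately
            "eunamespace": "*",         # all namespaces
            "eulimit": "max",
            "continue": "",
        }
        while True:
            data = requests.get(API, params=params).json()
            for hit in data.get("query", {}).get("exturlusage", []):
                yield hit["title"]
            if "continue" not in data:       # no further pages of results
                break
            params.update(data["continue"])  # carry continuation tokens into the next request

    # e.g. distinct pages linking to any bbc.co.uk URL over plain http:
    # print(len(set(pages_linking_to("bbc.co.uk"))))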
[13:35:57] @bawolff OK thank you for your time
[13:36:12] Hope that helped
[13:36:45] harmaahylje: Vector never contained a Facebook like button or Google Analytics
[13:36:57] harmaahylje: It's most likely someone manually edited your skin file
[13:37:07] We strongly discourage people from doing that, but sometimes people do
[13:37:36] There do exist some extensions to add Facebook like buttons (e.g. the ShareThis extension)
[13:37:52] bawolff: thanks
[13:38:02] that's what I suspected
[13:38:15] We always discourage people from manually patching skin files, because it makes it impossible to upgrade, and if they make a mistake, it's very hard to debug because nobody knows what your custom changes are
[13:38:21] Someone added a few lines in there and left no comments or anything. This project is starting to feel like a disaster..
[13:38:29] exactly!
[13:38:54] It is impossible to find if you don't have the original copy of the whole setup
[13:39:18] At least put it in a different file instead of editing the original source code without mentioning it, aargh
[13:39:59] or at least put it in version control
[13:42:27] I wish I had a git repository of the project :)
[13:43:00] I was looking into the extensions all the time, as I am new to MediaWiki. Had no idea someone would go and edit the original skin files
[13:43:19] an upgrade would have been a disaster if this went unnoticed
[13:44:28] Oh yeah, I am finding more fancy things in there. Oh lord.
[13:44:39] Probably worth it to do a diff between what you have and the official release of MediaWiki for whatever version you have
[13:47:02] bawolff: that's a good idea. I am working on 1.19.0
[13:47:12] will do exactly that
[13:48:24] https://releases.wikimedia.org/mediawiki/1.19/
[13:48:59] except that only goes to 1.19.9 for some reason
[13:49:22] oh yeah, now I remember
[13:49:27] originally I had to get it from git
[13:49:37] let's see if I still have that somewhere
[13:50:17] The git version will be slightly different in that it doesn't have extensions bundled
[13:50:42] how about skins?
[13:51:00] but otherwise will be the same. 1.19.0 is before we started using composer for libraries so they should all be bundled already in git
[13:51:20] 1.19 is before we moved skins to a separate repo, so they should all be there
[13:51:28] right, seems like the skins are in
[13:51:45] thank you very much bawolff
[13:51:56] glad to help
[13:52:24] I think that the mysteries finally start to unfold. The bad thing is that I will have to build all these functionalities from scratch, it seems.
[13:52:38] (by using extensions I guess)
[15:27:42] Oh, hmm. Who broke the updater :P
[15:28:40] * bawolff plays the git blame game
[15:30:54] bawolff: ? WFM
[15:53:14] quick question: I am not able to link to an external HTTPS site using the source editor or visual editor. Is this normal?
[15:54:09] What do you mean by not able?
[15:54:35] if I use the [] to link via the source editor, it just ignores it
[15:54:59] in the visual editor, when I use https:// instead of http://, the 'done' link becomes disabled.
[15:55:24] if I change it back to http://, the 'done' becomes enabled
[15:55:37] it's like some setting in my MediaWiki doesn't allow https external links
[15:56:39] my only extensions are parser and visualeditor
[16:33:03] This is a fun one: I'm trying to write ContentHandler code to validate that a page's title is consistent with its content. Is that a thing?
[16:41:23] jacobharris: probably someone removed https from $wgUrlProtocols ?
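For context on that suggestion: $wgUrlProtocols is the list of link prefixes MediaWiki will turn into external links, and https:// is in it by default, so the symptom usually means the default list was overwritten somewhere. A hedged LocalSettings.php sketch, illustrative rather than the asker's actual configuration:

    // LocalSettings.php -- illustrative sketch only
    // An override like this would drop https:// and produce exactly the symptom above:
    // $wgUrlProtocols = [ 'http://' ];
    // If an extra protocol is needed, append to the defaults instead of replacing them:
    $wgUrlProtocols[] = 'example://';   // placeholder scheme, purely for illustration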
[17:14:12] Skizzerz: I read that in MSSQL a unique index doesn't ignore rows with null values like MySQL does. Is that true? And if so, why do the unique indexes on the change_tag table not break as soon as you've tagged two log entries (null ct_rev_id) or two revisions (null ct_log_id)?
[17:31:54] I have something going on that is displaying a Category, and an image related to the category. Mediawiki seems to change the name of the category from "A-something" to "A_something". Is there a way around this?
[17:32:14] for the link of the category (image)
[17:36:07] never mind, I just figured out how this works. The template had the wrong text
[17:50:15] @tgr that was exactly the problem. It is fixed.
[18:42:52] "If your MediaWiki installation uses a memory cache, such as APC, memcached or Redis, then the user object is cached. Thus after making SQL changes you must flush the cache before a user can log in with the new password." <- how do you actually DO this?
[18:42:59] I have tried everything, it won't purge
[19:05:01] anomie: they would break, yes
[19:05:20] I don't have any production wikis running MSSQL so I don't really catch that kind of stuff :(
[19:06:36] Skizzerz: Thanks.
[19:07:14] you could work around that by making a filtered index
[19:08:20] e.g. CREATE INDEX /*i*/change_tag_rev ON /*_*/change_tag (ct_rev_id) WHERE ct_rev_id IS NOT NULL;
[19:08:27] whatever the index name is supposed to be
[19:10:05] Quasar`: what sort of object cache are you using?
[19:10:58] well, we have php5-fpm running; the cache type is set to CACHE_ACCEL in LocalSettings; that's the extent of what I know
[19:12:12] anomie: see above for fix to make the MSSQL indexes behave like MySQL in that regard (not sure if you saw that follow-up or not)
[19:12:26] Quasar`: restarting php-fpm should do the trick then
[19:12:33] tried that
[19:12:46] didn't see any change :/
[19:13:15] and you tried logging in and it didn't work?
[19:13:20] yeah
[19:13:38] try resetting your password again
[19:13:49] if you reset your password via mediawiki methods, that cache stuff shouldn't apply to you
[19:14:03] that's only if you directly modify the user_password hash via SQL query I believe
[19:14:19] if you can't do the email-based password reset, use the changePassword.php maintenance script
[19:14:37] well that's the problem here; it's not my user account that needs to be reset, and our server's email is being blacklisted by the user's email account provider.
[19:15:05] Well, there's nothing the software can do about that
[19:15:09] so use the maintenance script :)
[19:15:17] that script can change an arbitrary user's password
[19:15:22] alright I'll try the script again
[19:15:49] make sure to pick something >8 characters long
[19:16:32] nope; not working - "Incorrect username or password entered. Please try again." is the only response I can get out of it.
[19:16:43] pass in --help
[19:16:46] it'll explain the flags to use
[19:17:44] if you are still not getting it, copy/paste the command you are running
[19:18:03] php changePassword.php --user=Esselfortium --password=
[19:18:16] Skizzerz: Thanks. I saw mention of that when I was researching MSSQL's unique index null behavior, too.
[19:18:27] it says it's successful when I run it
[19:18:34] ok
[19:19:32] did that password have symbols in it or was it just letters/numbers?
[19:19:43] a couple of dollar signs
[19:20:19] do it without any symbols
[19:20:52] * Quasar` facepalms
[19:20:57] ok that worked
[19:21:07] a dollar sign in a shell is variable expansion
[19:21:11] didn't realize $ was special in the middle of strings heheh
[19:21:21] thanks.
[19:21:36] if you wrap the password in single quotes it should work as well
[19:21:43] --password='thing$$ie'
[19:22:02] but easier to just not have them at all and then reset it on-wiki afterwards :)
[19:22:19] I'll try to remember that from now on ;)
[20:13:59] Can anyone help point me to the OOUI common.less location?
[22:23:17] Hi. I need some help with my VisualEditor. Can anyone here help?
[22:23:51] what's the problem? there's #mediawiki-visualeditor but asking here may work too
[22:24:16] I get "Unknown error, HTTP status 500" when saving the page or creating it.
[22:24:24] I also have curl installed.
[22:24:35] Check your HTTP server logs.
[22:25:16] MediaWiki also provides debug modes which output error messages to the web; this might help.
[22:26:47] where would the logs be located? I am on an nginx webserver
[22:27:48] What OS? On Linux, that could be in `/var/log/nginx`.
[22:28:06] Ubuntu. I am there now.
[22:30:01] which file would it be though
[22:32:07] Which files are present?
[22:32:59] Jo__, which files does that directory contain?
[22:36:28] there are no error logs about my error
[22:36:55] I have zlib also
[22:41:09] I've written an abuse filter, and it matches the edit but does not catch it!
[22:42:14] The problem occurs only on a single page.
[22:47:02] Here is the filter: https://ar.wikipedia.org/wiki/خاص:مرشح_الإساءة/100
[22:50:36] Jo__, what files does that directory contain? Could you pastebin them all?
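On the "debug modes" mentioned above, a minimal sketch of the usual temporary switches in LocalSettings.php for turning an opaque "HTTP status 500" into a readable error; the log path is an arbitrary example, and these should only be left on while debugging:

    // LocalSettings.php -- temporary debugging sketch, not Jo__'s actual configuration
    $wgShowExceptionDetails = true;          // show the exception behind generic 500 errors
    $wgDebugLogFile = "/tmp/mw-debug.log";   // arbitrary example path for MediaWiki's debug log
    error_reporting( E_ALL );                // also surface PHP warnings and notices
    ini_set( 'display_errors', 1 );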