[01:19:41] API:Main Page says that "there are plans to remove all formats except for JSON"--is there a timeline for this?
[01:20:53] That seems like something that would be a pipe dream
[01:21:17] the RFC here is split: https://www.mediawiki.org/wiki/Requests_for_comment/API_roadmap
[01:21:43] iirc, AWB doesn't use the JSON format
[01:22:32] AWB?
[01:22:52] fhocutt: that was ages ago.. hahah
[01:23:04] maybe if someone actually gets to cleaning it up
[01:23:31] imaleaf: I didn't see an updated version, is there one?
[01:23:49] (not that that's a showstopper, just that it'd need to be "fixed" before dropping whatever AWB is using)
[01:24:07] SamB__, AWB?
[01:24:26] !AWB
[01:24:26] There is no such key, you probably want to try: !autowikibrowser,
[01:24:35] um
[01:24:38] !autowikibrowser
[01:24:39] AutoWikiBrowser (often abbreviated AWB) is a semi-automated MediaWiki editor designed to make tedious repetitive tasks quicker and easier. https://en.wikipedia.org/wiki/Wikipedia:AutoWikiBrowser
[01:24:41] why not just show me that one then?
[01:24:50] thanks
[01:26:44] fhocutt: no, it still accepts xml
[01:27:02] that's actually how I still talk to the api
[01:27:10] yeah, I'm just wondering when/if that's going to change
[01:27:24] I'm working on figuring out which of the libraries on API:Client code are decent
[01:27:53] sorry, I haven't looked at them too much internally, but from working with them, I'd say the xml one is pretty solid
[01:30:11] xml is kind of a pain...
[01:30:26] lots of bugs related to people forgetting some characters are invalid
[01:30:54] but from a client side, using either the xml or json encodings is "sane". Using other encodings, not really
[01:31:08] *I should say, bugs on our end, not on the client end
[01:31:56] You'd probably have to ask either anomie or yurik if the xml format is going to go away ever. I feel like that is a "we wish we could, but we can't" type of thing
[01:35:32] <^d> !AWB alias autowikibrowser
[01:35:33] Created new alias for this key
[01:36:14] <^d> We should remove wddx.
[01:36:21] <^d> I grepped the api logs a while back.
[01:36:46] <^d> (Almost) nobody uses it. The most common URL is the &format=wddx example from action=help
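For readers of this log: the difference being discussed is just the format parameter. A minimal sketch with Python's requests library (the endpoint and page title are arbitrary examples, not from the discussion above):

```python
import requests

API = "https://www.mediawiki.org/w/api.php"
params = {
    "action": "query",
    "titles": "API:Main page",
    "prop": "revisions",
    "rvprop": "content",
    "format": "json",  # the favoured encoding; "xml" is the other "sane" one
}
r = requests.get(API, params=params)
print(r.json())  # with format=xml you would parse r.text with an XML parser instead
```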
[06:11:29] say "cleaning up [[$title]]" while doing maintenance scripts [06:12:15] can you write on [[Manual:Coding conventions]] why sleep() is bad [06:12:47] (side note, I did find a few instances of sleep() in MediaWiki core, didn't write that myself of course) [06:14:16] I see 40 usages [06:14:40] the majority are in maintenance scripts, which makes sense [06:16:10] yeah would it be alright if I use it in a maintscript [06:16:30] meh, I assume no one would really freak out about that [06:16:42] there's still 25 usages in core though [06:17:31] databases, cache, and some from includes [06:18:06] if it's that evil, I have to wonder why it's used in database [06:18:26] imaleaf: how did you find usage of it [06:18:39] phpstorm :p [06:18:44] you could run a grep though [06:19:28] well, I mean in stuff like wfWaitForSlaves it makes sense [06:19:45] I am curious why it's evil though, as well [06:20:01] because mediawiki doesn't really execute anything in parallel, so I don't see race conditions [06:20:27] the database obviously takes care of any sort of conflicts itself, being a server [06:27:58] https://bugzilla.wikimedia.org/show_bug.cgi?id=65665 may warrant "critical" IMHO [06:45:18] looking for comments on a draft of a proposed standard for MW API client libraries: https://www.mediawiki.org/wiki/API_talk:Client_code#Proposed_.22gold_standard.22_criteria [06:45:54] was there loss of data from the db [06:46:08] "critical" bugs are only for major loss in data [06:47:22] "only" is an overstatement :) [06:48:26] Mithrandir, do you have a moment to take a look? [06:50:59] no answers? :) [06:51:49] fhocutt: I'm not familiar with the background of your project, what's the point of a "gold standard"? [06:52:23] legoktm, background is here: https://www.mediawiki.org/wiki/Evaluating_and_Improving_MediaWiki_web_API_client_libraries [06:53:10] basically, right now if someone wants to use an API client library, they have to go hunting on API:Client code and there's not much information there for them to decide what to use [06:53:47] the idea is to have a standard for library maintainers to shoot for--later this summer I'll be contributing to one of the listed libraries myself to try to get it there [06:54:06] neat [06:54:19] I wish PHP was on your list of languages though, that's the one that probably needs the most work... [06:54:26] xD [06:54:48] unfortunately, I have zero experience with it [06:55:27] there are like 2 base PHP frameworks, and different people have forked them as necessary to make it work when people went inactive and it's just a huge mess [06:55:32] but if anyone else wants to work on those, I have learned quite a bit about how the API works recently! [06:55:38] ah, I see [06:55:43] yeah [06:55:45] "Instances of terrible hacks/extreme cleverness are clearly marked as such in comments" I'm not sure how good of an indicator that is [06:56:00] I don't think hacks are necessarily a bad thing [06:56:38] there are plenty of bugs in mediawiki that hacks are necessary [06:56:39] not necessarily bad, but if you're just looking at the code and you aren't seeing that hack, it would be nice to have a marker for somewhere your corner case might be failing [06:56:56] there are two base frameworks within mediawiki? 
[07:02:07] Withoutaname: so yes, we have something that *looks* like loss of revision text from the DB: https://bugzilla.wikimedia.org/show_bug.cgi?id=65665
[07:02:45] (I can't believe it really is such. Probably "just" some hyper-aggressive caching.)
[07:03:54] hm, so it's happening on mw.org too apparently
[07:04:18] Though even Special:Export is unable to fetch that revision
[07:04:54] legoktm: you don't think it's straightforward? I just think the documentation leaves a few massive, gaping, CAVERNOUS holes
[07:05:13] like, you know, examples on how to actually use it with curl. that took me three days to figure out
[07:05:24] bug 1?
[07:05:32] but it's not too hard to actually use after that
[07:05:43] imaleaf: I think it depends on your use case, and how reliable you want your code to be. Just look at pywikibot's connection layer and all the random cases we catch and handle
[07:06:19] legoktm: I really don't think I'll have the patience to look at that. but I'll be submitting something soon that interacts with the api, it hasn't had any issues and might serve as a good example
[07:06:34] basically uploading/editing pages, and images
[07:06:51] worked for 58k images and 8k pages just fine..
[07:07:02] imaleaf: especially to someone who's not a power user, it does not make sense to have to dig into query/info/revisions when you just want the page text
[07:07:03] *800
[07:07:14] it's not necessary to do that anymore
[07:07:22] there's extracts, yeah
[07:07:25] you can just request the page name, not the revision
[07:07:49] well, I suppose we can -all- agree that the documentation needs work
[07:08:02] ah, I mean, having page text categorized under prop=revisions is super unintuitive for a lot of people
[07:08:03] and I for one don't see why the api has to be separate from the main access point
[07:08:13] totally with you on the docs.
[07:08:21] if I remember correctly, you don't have to say that anymore
[07:08:37] hm, then API:Sandbox hasn't been updated
[07:09:15] it's possible. I think I discovered that just by looking in the source itself. but I do think it would be nice if people could just query index.php and it routed them, without having a separate api
[07:09:33] but then again, I wish for many things :P
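The "extracts" shortcut mentioned above comes from the TextExtracts extension (installed on Wikimedia wikis), which adds prop=extracts so you get plain page text without digging through prop=revisions. A hedged sketch; the page title is an arbitrary example:

```python
import requests

API = "https://www.mediawiki.org/w/api.php"
r = requests.get(API, params={
    "action": "query",
    "titles": "MediaWiki",
    "prop": "extracts",   # provided by TextExtracts, not core
    "explaintext": 1,     # plain text rather than limited HTML
    "format": "json",
})
pages = r.json()["query"]["pages"]  # keyed by page id in this response format
print(next(iter(pages.values()))["extract"])
```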
[07:09:57] fhocutt: hiya
[07:10:10] hey, Mithrandir
[07:10:46] [12:08:02 AM] and I for one don't see why the api has to be separate from the main access point <-- what do you mean?
[07:10:50] fhocutt: I'd add "packaged for the various distros" to the "easy to install" list.
[07:11:01] ah, yes
[07:11:35] (distros of the language?)
[07:12:06] no, like Ubuntu, Debian, Fedora, etc.
[07:12:08] distro packaging is nice, but hard to do if you don't know people who can review it. I think things like pypi/cpan/composer should be preferred over distro packaging
[07:12:16] legoktm: people have to go to api.php as opposed to just sending their request to the main access point. this makes a big break between "access" and "api", which I don't think is necessary
[07:12:27] because, in reality, index.php -is- an api
[07:12:28] legoktm: sure, but this is the gold standard, not the bare minimum
[07:12:46] so now we are breaking some requests into one php access point, and some into another
[07:12:59] Mithrandir: also for Windows?
[07:13:01] but we have an actual routing class, we don't need to do this
[07:13:11] fhocutt: well, for windows you just have pypi, etc.
[07:13:27] I don't think anybody is doing anything with packaging language extensions as MSIs
[07:13:28] Mithrandir: I can't think of any python mw-api libraries that would meet the rest of the criteria *and* are packaged for distros.
[07:13:49] legoktm: maybe we don't have any that fulfill the gold standard, then?
[07:14:02] I'd say your gold standard is too high in that case.
[07:14:10] distro packaging is not an easy thing
[07:14:10] Mithrandir: numpy and scipy do, but they are huge
[07:14:17] it's a gold standard, not a brass standard. :-P
[07:14:21] nor does it make sense
[07:14:56] as a framework developer, why would I spend time getting people from debian and fedora to review my package when I can put it on pypi and any OS can install it right away?
[07:15:44] distro packaging is nice for stuff you have to compile and build, but for interpreted languages it doesn't really make sense
[07:15:44] because it'll need to be packaged for the distro if any software distributed by the distro should be able to use it.
[07:15:52] I disagree, a lot.
[07:16:10] and I disagree about distro packaging being harder than non-distro packaging, but that's clearly not something people agree on.
[07:16:47] it takes me about 5 minutes to upload something to pypi. I'm pretty sure getting it into debian would take much longer ;)
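For a sense of scale, a hedged sketch of the five-minute PyPI route: a minimal setup.py for a hypothetical client library (all names here are illustrative), uploaded in that era with `python setup.py register sdist upload`:

```python
from setuptools import setup

setup(
    name="mw-api-client-example",        # hypothetical package name
    version="0.1.0",
    description="Example MediaWiki web API client",
    py_modules=["mw_api_client_example"],  # hypothetical single-module library
    install_requires=["requests"],
    url="https://www.mediawiki.org/wiki/API:Client_code",
)
```

Distro packaging, by contrast, means a policy-compliant source package plus a sponsor or reviewer, hence the disagreement above.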
[07:17:59] and yeah, it does need to be distro-packaged for stuff to depend on it, but what do you plan on getting distributed by the distro that depends on an api library? maybe I'm missing a use case
[07:18:48] as a trivial example, integration with an editor so you can do file → open → mediawiki url.
[07:19:09] iirc, gedit can have plugins written in python.
[07:19:24] interesting, didn't know that.
[07:19:48] I'm not saying such exist today, but it's a reasonable and non-contrived use-case, IMO. :-)
[07:19:59] sure, I agree with that :)
[07:20:55] legoktm: doesn't mediawiki have a dedicated debian maintainer? I'd contact them for help :)
[07:21:17] it does help get something out, I believe. if debian stable doesn't support it out of the box, I'm always wary
[07:21:29] it's kind of a marker for reliable software
[07:21:47] anyways, I think it would be a good idea to check how many libraries will meet your gold standard, so that at the end you don't have a super nice gold standard and no libraries that meet it :P
[07:21:56] legoktm: that is a fair point.
[07:22:19] the flip side to "it takes five minutes to put something on pypi" is "random crap will end up there, so you have to check every library to make sure it's not a trap".
[07:22:31] yes!
[07:22:35] ^^
[07:22:48] Mithrandir, yes, precisely. See: API:Client code. :)
[07:22:48] sure, that's what this project is for ;)
[07:22:50] fhocutt: xD
[07:23:02] imaleaf: no clue, I'm a fedora person :P
[07:23:04] fhocutt: I wonder if it'd make sense to add a criterion for "it should be easy to get help with the library, via lists, irc or forums"?
[07:23:19] legoktm: I actually haven't figured out what you're working on, care to enlighten me?
[07:23:19] Mithrandir, yes, definitely
[07:23:32] and no comment :p
[07:23:39] legoktm: how do I roll back a database change
[07:23:40] ...how about also "community has a code of conduct"?
[07:23:46] i think i broke my vagrant install
[07:24:06] Withoutaname: $dbw->rollback() if you're in mediawiki... if the request already finished it's probably too late
[07:24:15] you can run "vagrant destroy" to reset your VM though
[07:24:25] but that will clear everything
[07:24:50] fhocutt: I think that might be a bit too steep a requirement. Most smaller projects won't have that.
[07:25:05] gold standard? :P
[07:25:10] well it's popping up "Notice: Undefined property blah blah blah" on Special:RecentChanges
[07:25:17] maybe they should, but they don't today.
[07:25:36] fhocutt: how many API libraries even have a real "community"?
[07:26:22] Mithrandir, legoktm, true, but I'd also like to not be pointing users to a place where they'll be verbally abused for inexperience
[07:26:22] imaleaf: I'm just discussing the project, but aside from that I'm a pywikibot dev/bot runner, and I've written at least 2 python-mw-api frameworks over the years
[07:26:43] ahhh, I see. I've actually read a bit of what you did then
[07:27:07] it helped me when I was learning how to interact with the api :)
[07:27:38] fhocutt: AGF, but also, I don't see how a lack of a code of conduct implies that
[07:28:05] legoktm: it's not that a lack implies that, it's that the presence suggests it is less likely
[07:30:00] maybe
[07:30:52] I think that's covered ok by "Library maintainers are responsive and courteous"
[07:31:05] maybe extend to "Library maintainers and community"?
[07:31:23] +1
[07:32:09] Mithrandir: if it's a gold standard, I'd like to have something like "there's a public statement of intent to be responsive and courteous", better if it lists specific no-goes
[07:32:46] for some people that definitely lowers barriers to asking questions & interacting in a space
[07:34:37] (slightly as a response to what legoktm is getting at): would it make sense to have a "We'd like to see this from libraries, but none of them provide it today, so we're adding it to this platinum standard list"?
[07:35:23] possibly! It's good to have things to shoot for
[07:35:33] to be clear, codes of conduct don't (or shouldn't) list what not to do, but what you should do - https://www.python.org/psf/codeofconduct/
[07:35:39] better than having a
[07:35:47] a problem with having a standard so high none reach it is you'll not inspire maintainers to fix that, they'll go "I don't care about this evaluation, it uses impossible criteria"
[07:35:53] yeah, definitely
[07:36:04] I think having an "aim for this" list is a good idea
[07:36:25] and once you have a couple who reach that bar, move it to the gold standard list.
[07:37:51] legoktm: there are a few problems with that--one is that serially abusive people abuse loopholes, and another is that one person's "tactful" is another's "I didn't swear at you so I can insult you all you want"
[07:37:54] for a bunch of those libraries, I suspect they don't even have a home page as such. They have a github repo, docs on readthedocs with a pointer to IRC and maybe a mailing list, and their package on pypi/cpan/npm.
[07:38:04] but getting back to the rest of the standards
[07:38:14] Mithrandir, that's correct
[07:38:25] ...maybe docs on read the docs. Maybe.
[07:38:36] well, yes, if they have docs. ;-)
[07:38:51] I like the approach of gradually raising the bar
[07:38:55] under easy to use, have a "uses language idioms"?
[07:39:06] yes
[07:40:26] to some extent that might make it slightly harder for people who are absolute newbies to use (since they need to learn both the language with its idioms and the API at the same time), but it will make it much easier for somebody who knows the language, but not the library and MW API.
[07:41:06] definitely.
[07:41:31] and that's a good learning experience for newbies, probably--if they want to learn the language, that is
[07:41:41] right
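A hedged illustration of what "uses language idioms" could mean for a Python client: expose paged results as a generator, so the continuation bookkeeping (which most of the reviewed libraries skip, as noted later in the log) stays inside the library. The site object and its get() helper are hypothetical; query-continue is the API's continuation format of the time:

```python
def all_pages(site):
    """Yield every page title, following API continuation transparently."""
    params = {"action": "query", "list": "allpages", "aplimit": "max"}
    while True:
        data = site.get(**params)          # hypothetical JSON-returning helper
        for page in data["query"]["allpages"]:
            yield page["title"]
        if "query-continue" not in data:   # 2014-era continuation marker
            break
        params.update(data["query-continue"]["allpages"])

# Callers see a plain Python iterable:
#   for title in all_pages(site):
#       print(title)
```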
[07:41:55] this one bugs me slightly: "Issues/bugs are dealt with in a determined period of time [quantify]"
[07:42:14] eheh
[07:42:17] yeah, I wanted something to say "responsive" but I'm not sure what's a good way to phrase that
[07:42:21] some bugs are easy, some are genuine "I don't know how to fix this/It will require rewriting everything"
[07:42:30] right
[07:43:18] you could say that responding to it with "this is hard to fix since it will require redoing the entire library +wontfix" is "dealing with" it, but I'm not sure if "dealt with" is the best term there?
[07:43:30] maybe "responded to"
[07:43:38] yeah, might be a better term.
[07:44:34] I'd possibly put "reasonable amount of time" and not quantify, but I can see arguments in favour of quantifying too.
[07:45:34] it should be achievable, but deadlines can be motivating and I'm not sure that there's agreement on "reasonable"
[07:45:58] that is, I'm sure that there's not agreement :)
[07:46:29] they can also be demotivating. If you missed the deadline by a day, is it worse to miss it by two days?
[07:46:46] it's a bit of the classic "how do projects run late?" "one day at a time"
[07:46:57] hm, true
[07:47:45] one can set expectations without saying "14 days". "a couple of weeks", for instance.
[07:48:00] right, that counts as "quantified" to me
[07:48:19] so if you've failed to respond to a bug within a month, that's clearly over "a couple of weeks". Three weeks is probably stretching it. 16 days is ok.
[07:48:23] something like that?
[07:48:31] it's the difference between "a couple of weeks" and "a month or two" that I care about, yeah
[07:50:13] ok, that's fair enough. I read it initially as basically an SLA.
[07:50:13] which it isn't.
[07:50:55] SLA?
[07:52:03] service level agreement.
[07:52:35] ah, no, not at all.
[07:52:41] "we guarantee 99% availability for this service, if we can't do that, we'll give you a pro-rata refund"
[07:53:05] just a "maybe think about not leaving this for 3 months?" nudge.
[07:54:43] one question though, should the community have that communicated or should the gold standard say "bugs should be responded to within a couple of weeks"?
[07:55:00] the latter, I think
[07:55:13] it will to some degree depend on the size of the project.
[07:55:13] ...
[07:55:38] right, pywikibot has a lot more resources than some of the tiny ones
[07:55:47] I suggest looking into the guidelines of established FLOSS projects instead of reinventing the wheel, some things I read above are weird.
[07:56:14] For instance I hear fedora has some system to replace package maintainers *if* they're really unresponsive. (Removing or slapping a volunteer maintainer when you have no replacement would be masochistic, I suppose.)
[07:56:54] Nemo_bis: I am super new to FLOSS, where would I look for those guidelines?
[07:57:01] Probably on google.com
[07:58:24] ...ah, there we go, floss maintainers just get dental stuff :P
[07:59:54] this is a much looser network than http://fedoraproject.org/wiki/Policy_for_nonresponsive_package_maintainers seems designed for.
[08:01:06] fhocutt: I've read quickly through your comments about the various libraries and it seems none of them are up to the gold standard?
[08:01:16] Mithrandir, correct
[08:01:47] some of them are closer than others
[08:02:15] most of them don't handle continuations
[08:02:30] some of them have tests, some don't
[08:03:17] none of them have complete unit tests, do they?
[08:03:21] very few of them cover wikibase; I'm not sure if that's a problem or not
[08:03:31] I haven't checked for completeness on any of them
[08:03:46] I suspect that a handful of them might.
[08:04:15] complete is a high bar
[08:04:54] (and sometimes ends up with stupid tests, like tests for setters and getters in java)
[08:05:09] ah, huh
[08:05:43] at least for trivial setters and getters, testing them is pretty pointless. :-)
[08:05:54] if not complete, what would you want to see?
[08:07:03] "all MW API functions have tests"? Something like that.
[08:07:18] * fhocutt notes
[08:07:28] all "real functions". I'm not sure how to phrase it.
[08:07:51] "reasonable test coverage", but that needs qualitative, not quantitative review.
[08:08:03] and reasonable people can disagree about what that means.
[08:08:08] right.
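One data point for "reasonable test coverage": a hedged sketch of a test with some substance, in contrast to the setter/getter tests dismissed above. It feeds canned API responses to the hypothetical all_pages() generator sketched earlier and checks that continuation is followed:

```python
import unittest
from unittest import mock

# assumes the all_pages() generator from the earlier sketch is in scope

class ContinuationTest(unittest.TestCase):
    def test_follows_query_continue(self):
        # Two canned responses: the first carries a continuation marker.
        responses = [
            {"query": {"allpages": [{"title": "A"}]},
             "query-continue": {"allpages": {"apcontinue": "B"}}},
            {"query": {"allpages": [{"title": "B"}]}},
        ]
        site = mock.Mock()
        site.get.side_effect = responses
        self.assertEqual(list(all_pages(site)), ["A", "B"])
        self.assertEqual(site.get.call_count, 2)

if __name__ == "__main__":
    unittest.main()
```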
[08:08:42] I need to head off soon, any last comments for now?
[08:09:21] I don't think I have anything more. I think having that gold standard there is good, and while I'm picking at many of the individual points, the collective seems well thought out to me.
[08:10:00] thank you!
[08:10:32] and thanks for taking a look. I'll email the rest of the mentors with a pointer to this and the "best libraries" notes.
[08:10:41] sounds great.
[08:45:23] Yay, Flash is dying! http://www.webperformancetoday.com/2014/05/21/stop-presses-average-web-page-actually-gotten-smaller-bigger/
[09:57:35] Hey, anyone here??
[10:00:29] What happened to the sidebar at mediawiki.org? O_o
[10:00:57] ??
[10:02:05] Param_: I wasn't responding to you. Did you need help with anything?
[10:05:06] did something change? it seems to be functioning
[10:05:56] I think it's monobook
[10:06:30] It's happening to me in Modern as well
[10:08:01] Weird... it looks like somehow most revisions from 2007-2008 are gone. ex: https://www.mediawiki.org/w/index.php?title=MediaWiki%3AMw-extensions&diff=229287&oldid=64636
[10:08:14] "One revision of this difference (229287) was not found."
[10:21:15] FunPika: when did they disappear? can you file a bug? or has one already been filed?
[10:22:33] I'm not sure when exactly they disappeared, or even what the precise range of missing revisions is
[10:22:47] Someone already filed a bug about the missing sidebar links, and I just updated it with what I observed
[10:22:48] https://bugzilla.wikimedia.org/show_bug.cgi?id=65665
[10:26:12] FunPika: have we run rebuildmessages.php yet
[10:29:58] actually, it could still be flushing the cache after flaggedrevs was removed
[10:34:46] it is not just MediaWiki: messages that look messed up; something else I found was https://www.mediawiki.org/w/index.php?title=User_talk:147.222.7.24&action=history
[10:35:01] An IP was warned, adding 222 characters to their user talk... it appears blank now
[10:36:50] so objects from the text table were destroyed?
[10:37:41] is there a sysadmin with access to mw's logs
[10:40:46] it's just mw.org, right? hopefully it didn't leak to production wikis
[10:41:59] I checked some revisions in that date range on enwiki and meta, didn't see any problems there
[11:17:49] good afternoon
[11:23:52] Anyone?
[11:29:27] Edokter: https://bugzilla.wikimedia.org/show_bug.cgi?id=65665
[11:29:43] You can switch your user language to Portuguese for the meanwhile ^^
[11:30:57] har :)
[11:37:04] Thanks for the bug. I found that purging the message makes it reappear
[11:40:01] I had only tried null editing, hm.
[11:41:30] Added notes to bug: purge only works for default messages. msgs with history will show only the msg name after purge
[11:45:45] Hi!
[11:45:57] I have a problem with the ConfirmAccount extension
[11:46:38] is there anyone?
[11:47:41] just ask
[11:48:06] So what Edokter found only means that the fallback from DB to default messages works. (Phew!)
[11:50:33] Ok cool! I have installed the ConfirmAccount extension. It works pretty well. I have put my email in the ConfirmAccount.config.php file as $wgConfirmAccountContact = 'hello@me.org';
[11:51:07] and when a new user submits the register form I receive the email to accept it
[11:52:00] BUT the email seems html formatted: I read some strange stuff like: "
[11:54:31] nkint: that's a bit too generic, paste the whole message source somewhere maybe
[11:55:05] There might be some overescaping, is the message still readable and usable?
[11:55:45] this is the content of the email I received
[11:55:48] "Claudia" has requested an account and is awaiting confirmation. The e-mail address has been confirmed. You can confirm the request here: "http://wiki.wemake.cc/Speciale:ConfermaAccount".
[11:56:06] it is readable, it is usable, but it is weird
[11:57:16] just to know: MediaWiki 1.22.5, PHP 5.3.3-7+squeeze18 (apache2handler), MySQL 5.1.73-1, Lua 5.1.5
[12:01:52] anyone?
[12:01:56] ideas?
[12:03:39] Ok, not too hard to fix, I expect. Please file a bug on bugzilla.wikimedia.org, MediaWiki extensions > ConfirmAccount
[12:11:05] oh, it seems there's already a bug:
[12:11:05] https://bugzilla.wikimedia.org/show_bug.cgi?id=54621
[12:15:23] Had a small doubt: what exactly is the purpose of the current ContentLanguage functions associated with the title in core? I mean, it is always set to the wiki language
[13:16:01] hey
[13:18:09] i'm new to wikimedia
[13:18:39] and i have found something to work on
[13:18:54] like this bug
[13:19:06] https://bugzilla.wikimedia.org/show_bug.cgi?id=16691
[13:19:32] I have a performance issue with MW 1.22.6, a page is taking 7-15 seconds to load, yet we have APC running ($wgMainCacheType = CACHE_ACCEL) -- what could cause this?
[13:19:58] how do i get the code and work on it?
[13:25:29] i'm new to wikimedia and i have found something to work on, like this bug
[13:25:37] https://bugzilla.wikimedia.org/show_bug.cgi?id=16691
[13:25:50] how do i get the code and work on it?
[13:26:34] Hi surbhi
[13:26:34] surbhi: For code, see this: https://www.mediawiki.org/wiki/Gerrit/Getting_started
[13:26:38] hi!
[13:28:46] surbhi: For getting started: https://www.mediawiki.org/wiki/How_to_become_a_MediaWiki_hacker and https://www.mediawiki.org/wiki/Developer_hub
[13:30:00] You don't really need to read the whole of it, just understand the basic functioning of the software.
[13:30:30] okay
[13:31:05] when i try cloning the core i get this error
[13:31:18] The authenticity of host ....... can't be established.
[13:38:39] Regarding performance, the trace shows Linker::makeImageLink and PPFrame_DOM::expand being very slow.
[13:54:16] hi
[13:54:37] on install, Project namespace: it takes the wiki's name; can i change it later?
[13:55:02] i think i found the answer
[13:55:15] found it
[13:55:15] :D
[14:15:10] architect?
[14:15:10] :X
[14:28:06] hey ..
[14:28:40] whenever i run this: ssh <username>@gerrit.wikimedia.org -p 29418
[14:28:43] i get
[14:29:12] The authenticity of host '[gerrit.wikimedia.org]:29418 ([............]:29418)' can't be established.
[14:29:41] then finally
[14:29:43] v
[14:29:45] Permission denied (publickey).
[14:30:12] what could be the problem?
[14:30:29] please help
[14:30:30] appie, you need to put your ssh public key in your gerrit settings
[14:31:18] hey
[14:31:21] i have already added that
[14:31:28] still the prob
[14:31:40] still the problem
[14:31:52] In https://gerrit.wikimedia.org/r/#/settings/ssh-keys right?
[14:32:22] yes
[14:32:48] appie, make sure you ran ssh-add on your private key
[14:33:40] can i generate $wgSecretKey manually?
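The $wgSecretKey question gets no answer in the log. For reference, it just needs to be a long unguessable random string, which the installer normally generates for you. A hedged way to produce one by hand, in Python for consistency with the other sketches here:

```python
import binascii
import os

# 32 random bytes rendered as 64 hex characters, e.g. for LocalSettings.php:
#   $wgSecretKey = "<the printed value>";
print(binascii.hexlify(os.urandom(32)).decode("ascii"))
```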
[14:34:36] you mean to run ssh-add .ssh/id_rsa
[14:35:15] i have done that too ... and got "Identity added"
[14:35:29] "ssh-add -l" shows it?
[14:36:08] appie, btw, what is your username?
[14:36:21] astuti
[14:36:35] okay, so you're running "ssh astuti@gerrit.wikimedia.org -p 29418" right?
[14:36:50] yeah
[14:37:12] Have you tried it since I got you to run ssh-add?
[14:37:15] @Krenair yes
[14:37:30] Hello, i want to ask how to use parsoid for the visual editor, because i did it with the nodejs way
[14:38:11] again the same
[14:38:17] kyriakoshadj, can you go to #mediawiki-visualeditor?
[14:38:24] permission denied
[14:38:40] yes
[14:39:15] not relevant, but my client thinks the question mark is part of the channel name
[14:39:27] i went there
[14:39:58] kyriakoshadj, #mediawiki-visualeditor, not #mediawiki-visualeditor? .
[14:40:08] ok
[14:41:33] ok i went there
[14:41:44] appie, and it was the contents of ~/.ssh/id_rsa.pub that you copied to gerrit, right?
[14:41:58] yup
[14:42:14] Hey, i'm trying to update a mediawiki-1.16 to 1.22.6 and i get a "MagicWord.php: Error: invalid magic word ''" error after the upgrade - i tried disabling all the extensions but still the same - any ideas / directions?
[14:43:25] didn't know 1.22.6 was out
[14:44:56] llwy-ar-lawr: http://www.mediawiki.org/wiki/Download
[14:45:49] when I set up my wiki the one you had up there was 1.22.4
[14:45:56] I see you're moving along
[14:47:42] @Krenair
[14:48:00] thanks, i will try doing the things again
[14:48:16] thanks a lot
[15:22:11] Can someone turn off wm-bot4 in #mediawiki-feed? It's duplicating wikibugs
[15:24:57] hi. Our wiki is readable only by authenticated users. But how can we make the RC rss feed available w/o login? Use a token, is there a way
[15:25:01] ?
[15:37:25] hey legoktm
[15:37:36] hi Withoutaname
[15:37:46] do you think you could help me
[15:37:50] what's up?
[15:38:44] I want to manually log an entry using LogPage::addEntry(), but I don't know what to pass through $params
[15:39:19] I don't know why, but it keeps popping up errors
[15:53:34] hey
[15:53:47] anyone know what's the mediawiki version i should get for php 5.22?
[15:53:53] Withoutaname: did you try https://www.mediawiki.org/wiki/Manual:Logging_to_Special:Log ?
[15:54:19] biberao: yes, the release notes know
[15:56:17] any idea why "{{SITENAME}}" is displayed and not the value defined in $wgSitename? (fresh after upgrade from 1.16 to 1.22.6)
[15:58:08] I think so, but it's still acting odd
[16:01:05] damn it
[16:01:11] even with the old version it doesn't work
[16:01:14] arrr
[16:04:08] Nemo_bis, what is 'customname' supposed to represent?
[16:05:56] Withoutaname: better ask on talk
[17:02:24] I'm trying to decide between using site.org/wiki vs site.org/w urls. thoughts? the audience will be generally tech savvy
[17:05:34] IanKelling: site.org/wiki would probably be better, to identify the site as a "wiki". Note that you could have both urls if you set up url rewriting for "short urls"
[17:05:41] https://www.mediawiki.org/wiki/Manual:Short_URL
[17:05:53] maybe just copy whichever WP uses?
[17:06:06] It honestly doesn't matter very much
[17:06:41] for SEO maybe
[17:06:42] I noticed #emacs didn't seem to have any opinion; I only suggest doing what WP does for consistency
[17:07:05] precisely because nobody seems to actually have any strong feelings
[17:07:48] * bawolff is convinced that SEO is secretly a scam
[17:07:57] thanks. ya. i don't have very strong feelings either. /wiki is standard
[17:08:17] If your site has good content it will be popular, if it doesn't it won't be :)
[17:08:18] "de facto" standard :P
[17:08:33] bawolff: well, if by SEO you mean something like "design the site to actually say what stuff is so people and search engines alike can navigate it"
[17:08:34] vulpix: yes
[17:08:38] that's not a sham
[17:08:53] IanKelling: To be specific, the most common is /wiki/<Page_title> for short URLs and /w/index.php?foo=... for long
[17:09:23] yes, i'm using short urls for sure
[17:10:12] SamB: Yeah, that's great, but people do some rather crazy things in the name of SEO
[17:10:18] having "wiki" in the URL may give a hint to people who look at those things that the site can be edited by other people (sometimes requiring account creation), and it also helps in SEO when people search for "(subject) wiki"
[17:10:48] oh, yes Vulpix. you've convinced me
[17:11:05] bawolff: yeah, pretty much any SEO advice which doesn't work out to "make your site more useful and accessible" is fairly dumb
[17:11:10] if the domain name is already in the form of wiki.domain.com then it would be redundant
[17:11:17] when people see a url, they will know it's a wiki
[17:11:19] Do search engines really match against pseudo-directory names in the url when doing keyword searches?
[17:11:32] because search engines are likely to deliberately change to undermine such advice
[17:12:17] bawolff: I expect they pay some attention to it, yes
[17:12:32] bawolff: yes, otherwise most blogging systems wouldn't use urls like example.com/look/at/my/first/blog/entry or example.com/look-at-my-first-blog-entry
[17:12:56] Or you can be Commons, and have the search engines change their behaviour to better index you ;)
[17:12:57] even news sites
[17:13:13] I always assumed that was more for humans
[17:13:16] Vulpix: isn't that for humans
[17:13:37] (but search engines are supposed to look at the stuff for humans)
[17:13:56] do humans really look at the URLs? Current web browsers even grey them out!
[17:14:03] be Commons?
[17:14:30] Vulpix: isn't that just so that the domain name can be in black?
[17:14:43] the "important" part of the domain name, I mean
[17:14:45] ya, i look all the time. it's greyed but still perfectly readable
[17:15:26] no idea, I haven't done enough research on that yet
[17:42:55] Hello, is there a mediawiki oauth handshake helper for php like the one for python?
[17:50:06] I assume there are OAuth libraries for PHP.
[17:50:15] That could tie into MediaWiki's OAuth implementation fairly easily.
[17:51:01] I have a problem, my browser is a plugin so it would be independent from any wiki software
[17:51:09] * my app
[21:24:09] is tyler romeo here
[21:26:11] Withoutaname: He's not usually on irc
[21:26:38] he comments a lot on code reviews though
[21:26:52] When he is on irc he usually uses the nickname parent5446
[21:27:10] Withoutaname: he doesn't usually sit here, but if you want to talk to him "live" you can probably ask him by email and he'll come :)
[21:27:25] thx
[21:27:32] Indeed he does do a lot of code review. He's one of our most active volunteer code review people
[21:28:05] was he a professional developer elsewhere
[21:33:29] bawolff: Hi
[21:34:08] kunalg: hi
[21:34:21] Withoutaname: I think he's currently attending university
[21:36:27] I wanted to ask regarding the Page Language selector. https://www.mediawiki.org/wiki/User:SPQRobin/Page_language#Progress What is your view regarding the implementation? Here is the progress so far; I am slightly unsure about the implementation.
[21:38:28] And I had a small question: would just making the current method Title::getPageLanguage() work with a database link?
[21:49:50] kunalg: Can I get back to you in a couple minutes after I finish writing this bug report
[21:50:09] bawolff: Sure :)
[22:06:08] kunalg: for "Made a function to update the value of the page language for all the existing Wiki pages (which is currently set as the default wiki language) in the table."
[22:06:34] Have you considered making null mean "equal to the page content language"?
[22:06:56] That way somebody could change the content language of a wiki without updating millions of rows in the database
[22:09:21] bawolff: I didn't quite get it. Basically every existing page has the page language by default as the wiki language. I am just reading it using the getContentLanguage() method.
[22:10:25] None of the page languages should be null if we get them for the existing wikis this way. Please correct me if I am wrong.
[22:10:40] kunalg: Ok, so if you add a page_language field in the database, you could have its default value be null, and then the field would only have a value if it's something other than the default
[22:11:43] then when you call $title->getPageLanguage() [or however you would fetch the page language], you could do something along the lines of: if the page_language field in the db is set, return that, otherwise fall back to $wgContLang
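The null-default scheme being suggested is small enough to sketch. In Python rather than MediaWiki's PHP, for consistency with the other sketches in this log; the names are illustrative, not MediaWiki's actual schema:

```python
def get_page_language(page_lang_field, wiki_content_language):
    """page_lang_field is NULL (None) unless a user overrode the default.

    Storing NULL for "use the wiki default" means changing the wiki's
    content language never requires touching millions of rows.
    """
    if page_lang_field is not None:
        return page_lang_field       # explicit per-page override
    return wiki_content_language     # fall back to the $wgContLang equivalent

assert get_page_language(None, "en") == "en"  # default pages follow the wiki
assert get_page_language("pt", "en") == "pt"  # overridden page keeps its language
```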
[22:12:59] Ok. Got it.
[22:14:35] And what about the interface part? There is already page content language implemented in core. So, if we can set it somehow and link it to the database, things should work out, right?
[22:15:21] I'd worry about the backend first, and the interface second
[22:15:48] But basically yes, once you hook up the database to $title->getPageLanguage(), things will work; you just need a way to set it
[22:17:01] You could make a very simple special page as a first approach, and then come up with something fancier after (like Nemo_bis suggested on the talk page)
[22:17:34] Ok. Using a special page to set languages for all pages, right?
[22:18:45] I meant for a specific page.
[22:19:06] e.g. you could have Special:ChangeLanguage/Pagename_here which has a language selector on it
[22:19:27] Which could serve as an interim interface until it's more concrete what a "good" interface would be
[22:19:34] Oh. Ok.
[22:20:04] And one last question. Is having a hook a good idea in this case?
[22:20:53] a hook to allow extensions to override the language choice for a page?
[22:21:57] I was thinking much lower currently. A hook to set the language in the database.
[22:22:23] hooks can't really set things in the database (directly, anyway)
[22:24:09] You would want to keep the hook at the higher level (i.e. PageContentLanguage) to override the db choice (probably)
[22:24:10] Yeah. But I meant using the hook ArticleSave and then saving it in the DB. Is it sort of a temporary implementation?
[22:25:42] No, I wouldn't do that. I'd suggest having a default value in the db (null), to say the user hasn't changed this page from the default, and only inserting something in the database if the user has changed the language to something different than what the default is
[22:26:39] Ok. Thank you.
[22:48:53] hi
[22:49:01] Hi biberao
[22:53:53] Premature end of
[22:53:57] Premature end of Premature end of script headers: index.php
[22:54:02] damn it sorry
[22:54:17] Premature end of script headers: index.php <- has this ever happened to you?
[23:09:35] 2014-05-23 - 15:53:53 Premature end of
[23:09:38] This is perfect.
[23:09:48] biberao: Anyway, no, I haven't seen that before
[23:11:49] back
[23:11:54] i lost connection
[23:11:56] damn php :D
[23:17:16] bye, good night