[00:00:06] namespace = Image
[00:00:33] will this generate actual images or just a list of titles?
[00:01:20] O_o
[00:02:25] you'll have to mess with it some, but you should be able to use it to include the actual images
[00:02:25] (INVALID) Reverting Edit Problem - https://bugzilla.wikimedia.org/show_bug.cgi?id=14538 (brion)
[00:03:05] i'll try - thanks darkcode
[00:03:37] (INVALID) Redirect wm08reg root to registration form - https://bugzilla.wikimedia.org/show_bug.cgi?id=14545 +comment (brion)
[00:03:52] i really appreciate it
[00:04:40] (mod) Remove bugs.wikipedia.org redirect - https://bugzilla.wikimedia.org/show_bug.cgi?id=14546 +comment (brion)
[00:04:49] (mod) Remove "bugs.wikipedia.org" redirect to Bugzilla - https://bugzilla.wikimedia.org/show_bug.cgi?id=14378 (brion)
[00:05:21]
[00:05:24] category = Foo
[00:05:27] escapelinks = false
[00:05:50] mode = userformat
[00:06:25] (mod) document.write() breaks under application/xhtml+xml in skins/common/wikibits.js - https://bugzilla.wikimedia.org/show_bug.cgi?id=2186 (brion)
[00:06:27] listseparators=[[%PAGE%]]
[00:06:32] namespace = Image
[00:06:34]
[00:06:41] or something like that
[00:07:54] (DUP) Cannot substitute templates withing tags - https://bugzilla.wikimedia.org/show_bug.cgi?id=14550 +comment (brion)
[00:08:03] (mod) Pre-save transform skips extensions using wikitext (gallery, references, pipe trick, subst, signatures) - https://bugzilla.wikimedia.org/show_bug.cgi?id=2700 +comment (brion)
[00:09:17] (mod) Create proofread page namespaces on Greek Wikisource - http://bugzilla.wikimedia.org/show_bug.cgi?id=14732 +comment (andreas.schwab)
[00:09:26] (WONTFIX) Option to remove log entries from watchlist - https://bugzilla.wikimedia.org/show_bug.cgi?id=14552 +comment (brion)
[00:10:35] getting a warning that says no results
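Assembled into one query, the DPL fragments Darkcode pastes above (00:05-00:06) would look roughly like the sketch below. This is a sketch only: the tag form and the exact listseparators syntax vary between DPL versions, and "Foo" is just the placeholder category from the conversation. With namespace = Image, [[%PAGE%]] should transclude each file rather than only link its title, which is what the "include the actual images" suggestion relies on; the "no results" warning at 00:10 usually just means the category/namespace combination matched no pages. The DPL demo wiki linked at 00:24 documents the userformat placeholders in more detail.

    <DPL>
      category       = Foo
      namespace      = Image
      mode           = userformat
      escapelinks    = false
      listseparators = [[%PAGE%]]
    </DPL>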

[00:15:21] (mod) Preferred video format in the user preferences - https://bugzilla.wikimedia.org/show_bug.cgi?id=14565 (brion)
[00:17:20] (WONTFIX) Create the 'MediaWiki default' user on wiki creation, or on upgrade if user does not exist. - https://bugzilla.wikimedia.org/show_bug.cgi?id=14569 +comment (brion)
[00:20:38] (INVALID) New API to give latest dump file release - https://bugzilla.wikimedia.org/show_bug.cgi?id=14584 +comment (brion)
[00:23:31] Hey Darkcode, do you know how to create a table in DPL?
[00:24:31] take a look at http://semeb.com/dpldemo/index.php?title=Main_Page
[00:24:51] it gives plenty of examples of different uses for it
[00:25:01] Thanx!
[00:25:49] (mod) Special:Linksearch should be case insensitive - https://bugzilla.wikimedia.org/show_bug.cgi?id=14591 (brion)
[00:32:52] (mod) Problem with project talk namespace in Hungarian Wikinews, Wikisource and Wikiquote - http://bugzilla.wikimedia.org/show_bug.cgi?id=14599 (brion)
[00:36:00] (mod) tags in front of all parser functions - https://bugzilla.wikimedia.org/show_bug.cgi?id=14607 (brion)
[00:38:38] (WONTFIX) Change undo into its own seperate group permission - https://bugzilla.wikimedia.org/show_bug.cgi?id=14616 +comment (brion)
[00:39:04] *Charitwo bops brion
[00:41:28] (DUP) Show log items on Special:Contributions - https://bugzilla.wikimedia.org/show_bug.cgi?id=14619 +comment (brion)
[00:41:34] (mod) Add logs entries to user contributions - https://bugzilla.wikimedia.org/show_bug.cgi?id=3716 +comment (brion)
[00:42:23] (REOPENED) Add logs entries to user contributions - https://bugzilla.wikimedia.org/show_bug.cgi?id=3716 +comment (brion)
[00:44:06] brion: http://www.youtube.com/watch?v=gR2Uml4aUF0
[00:44:55] If I have $foobar = "== Foo Bar =="; and I want to properly output that from MonoBook.php, what should I use?
[00:45:51] I tried using $wgOut->addWikiText and $this->msgWiki; the first does nothing, the second adds < and > around it but it does properly translate
[00:46:13] actually sorry, no, it doesn't translate
[00:47:02] if an image is in a page that is in a category, does it mean that that image is in a category?
[00:47:09] brion: meow! deadly and beautiful! :)
[00:48:35] no. an image is only in a category if the category is added to the Image: page
[00:49:34] how do i add existing images to a category?
[00:50:08] StarDustLilly: http://en.wikipedia.org/wiki/Wikipedia:Categorization#How_to_categorize_an_article - works the same for images
[00:52:13] the same way as you would any page
[00:54:36] do i do this on the image page?
[00:55:52] click
[00:55:53] edit
[00:56:16] and add the category
[00:57:09] Why, in MonoBook.php, if I use $this->msgWiki does it append < and > to the start and end of the text? how can I disable this?
[00:58:58] substr($this->msgWiki, 1 ,-1); :)
[01:00:16] Well, I don't want to do a bunch of substr or preg_replace's all over the place, that was my only concern
[01:00:29] Thank You!
[01:00:53] Ya i understand. I do not know anything about how it works.
[01:01:08] hmm, well, thanks for the input anyways
[01:01:21] It was more of a smartass answer anyhoo
[01:01:29] :P
[01:01:35] better than no reply at all, lol
[01:02:09] internet chat really is rarely used for chatting anymore
[01:02:35] or is that relay chat, oh well
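Coming back to the MonoBook.php question at 00:44: below is a minimal sketch of one way to emit arbitrary wikitext from inside the skin template, assuming a 1.12-era MediaWiki. $this->msgWiki() expects the name of an interface message rather than raw wikitext, which is likely why the unknown "message" comes back wrapped in < and >; parsing the string explicitly avoids that.

    // Sketch only; hypothetical placement inside MonoBookTemplate::execute().
    global $wgOut;
    $foobar = "== Foo Bar ==";        // the wikitext from the question above
    echo $wgOut->parse( $foobar );    // OutputPage::parse() returns the rendered HTML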
[01:28:17] <_Vi> How to upgrade from mediawiki 1.7 to mediawiki 1.12 without reading all database entries? (such as adding rows to tables)
[01:29:05] <_Vi> Tables were created using xml2sql and are very large and stored on a transparently compressed filesystem.
[01:38:26] I'm sorry, was that a question?
[01:38:35] <_Vi> How to upgrade from mediawiki 1.7 to mediawiki 1.12 without reading all database entries? (such as adding rows to tables)
[01:38:37] <_Vi> Tables were created using xml2sql and are very large and stored on a transparently compressed filesystem.
[01:39:16] <_Vi> How to upgrade from 1.7 to 1.12 without changing the database (or with minor changes)
[01:39:30] _Vi: Please stop repeating your question, you're merely spamming at this point
[01:39:31] It won't work unfortunately, the schema has changed.
[01:39:44] AFAIK at least.
[01:39:49] <_Vi> Dantman|Coding, I was asked to.
[01:40:31] You're going to have to use update.php or manually run queries... There is no way to run MW with one schema on MW that requires another schema... We change the schema for a reason
[01:41:09] <_Vi> Any process that touches each entry in any of the big tables takes at least 30 hours.
[01:41:55] New features, or performance tweaks... Those are the reasons for updating, and it's pretty stupid to ask 'Can I have these new features, but only update half the stuff needed for them?'
[01:42:02] Then I would suggest you disable the software during that time, so your users see a "Currently upgrading" message instead of an error.
[01:42:09] <_Vi> So there's no switch like "v1.7 compatibility".
[01:42:16] Of course not
[01:42:16] Nope
[01:42:19] <_Vi> I use this wiki exclusively.
[01:42:45] That would be insanely ugly inside of the code, prone to insane breakage, and slow down the development of MW
[01:42:52] <_Vi> It is created from a Wikipedia dump for use on my laptop when I'm offline.
[01:43:22] If that's the case, then why not just import a new dump into a new wiki?
[01:43:27] <_Vi> Maybe it is possible to provide some "emulation" on the database level....
[01:43:48] Do you think it will take you less than those 30hrs to do that?
[01:43:58] <_Vi> It is _very_ slow and the computer is thrashing heavily during this process, especially when the index doesn't fit in memory.
[01:44:42] <_Vi> HDD is rather slow + dynamic compression of the filesystem. (It saves more than 70% of space).
[01:45:34] <_Vi> OK, can I get a progress indication of the update process? Will the database be usable while upgrading?
[01:45:46] <_Vi> I also need resume capability.
[01:46:35] <_Vi> The computer also may crash during this, because it is a development computer too. /* Should I just stay with 1.7? */
[01:55:41] Anyone know of an extension that will let you tag images with keywords, then subsequently search by those keywords?
[02:03:31] ImageTagging, Semantic MediaWiki?
[02:05:35] Not quite, but it will provide a base for the code. I'm looking for something that will let people put in words about the image like beach, water, sand, blue, etc. and let people search for an image like that. But ImageTagging definitely provides the building blocks for me to just make it myself.
[02:06:12] thanks
[02:07:57] Eh?
[02:08:47] Nothing. Thanks :)
[02:08:49] ImageTagging was sorta good for tagging people (Wikia actually disabled it, so it's going to have issues), I'd think SMW would work better... Just create a UI for it
[02:09:28] SMW would be better with just the generic tagging... ;) And it already has a query language
[02:10:02] I'll look into it, never heard of it before.
[02:10:20] It would be interesting if you did something like extend SemanticForms to work with InlineForms
[02:11:20] I'm only limited by my time and supply of caffeine, but I need to get this extension done one way or the other :)
[02:12:11] ^_^ Resources is always my issue
[02:14:47] Oh wow, SMW is good. I'm gonna have to tinker with this.
[02:28:06] Hmm... I think I finally found my spot... Parser::getTemplateDom should be the perfect spot to introduce a new hook
[02:39:04] Mmm... finally ParserBeforeTranscludeTemplate
[02:42:49] *G goes and reads up on adding hooks
[03:05:31] Oh god, I can't stand this machine
[03:09:21] shinjiman * r37077 /trunk/tools/planet/zh/config.ini:
[03:09:21] +Wikimedia Hong Kong
[03:09:21] +Yuyu
[03:22:24] ^_^ Yay, I have the perfect hook
[03:22:33] http://dev.wiki-tools.com/wiki/TransTest ;) heh
[03:36:25] dantman * r37078 /trunk/phase3/ (RELEASE-NOTES docs/hooks.txt includes/parser/Parser.php):
[03:36:25] New hook ParserBeforeTranscludeTemplate:
[03:36:25] This hook allows for modification of the title and text of a template which is being transcluded.
[03:36:25] Use of this hook will allow extensions to create features such as TransWiki for an alternative to ScaryTransclusions.
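For readers following the r37078 commit above, this is roughly what registering a handler for the new ParserBeforeTranscludeTemplate hook might look like. It is a sketch only: the parameter list is an assumption pieced together from the commit description ("modification of the title and text of a template which is being transcluded") rather than copied from docs/hooks.txt, and wfMyTransWikiFetch and the "Remote/" prefix are hypothetical names.

    // LocalSettings.php or an extension setup file - a sketch, not the actual r37078 interface.
    $wgHooks['ParserBeforeTranscludeTemplate'][] = 'wfMyTransWikiFetch';

    // Assumed signature: the handler may rewrite the title and wikitext of the
    // template about to be transcluded, e.g. to pull it from a remote wiki.
    function wfMyTransWikiFetch( $parser, &$title, &$text ) {
        if ( strpos( $title->getText(), 'Remote/' ) === 0 ) {
            $text = "''(placeholder text fetched from another wiki)''";
        }
        return true; // true = continue normal processing
    }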
[03:37:39] got to leave now
[03:49:42] $wgUser is exposed to all extensions right?
[03:50:30] it's a global object
[03:50:31] http://www.mediawiki.org/wiki/Manual:%24wgUser
[03:50:34] <^demon> G: $wgUser is global
[03:50:58] thanks
[03:52:24] i have some questions about interfacing with wikipedia. i'm trying to build a commandline application for linux that will output the body of an article to the commandline interface. is this the right channel to ask that question?
[03:52:53] more specifically i'm wondering if wikipedia provides any way of isolating the body of the article from the rest of the page
[03:53:21] what is your datasource? Are you trying to pull data off the wikipedia web sites or a local mirror?
[03:53:43] gpowers: i was planning on getting the data from the net, via wget
[03:53:48] amirman: you can use action=render via index.php
[03:53:55] or use action=parse via the api
[03:54:08] (if you want the rendered wikitext that is)
[03:54:10] http://en.wikipedia.org/wiki/Foo?action=render
[03:54:22] http://en.wikipedia.org/w/api.php?action=parse&page=Foo
[03:54:39] sparkla: this is my first script really, so i don't really know what you mean
[03:54:44] well
[03:54:47] do you want [[foo]]
[03:54:54] that is cool! :)
[03:54:59] or do you want foo
[03:55:19] (wikitext or finished html)
[03:56:33] i guess i want the [[foo]] to show up as just foo
[03:57:17] well, compare:
[03:57:21] raw wikicode: http://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&titles=foo
[03:57:26] rendered html: http://en.wikipedia.org/w/api.php?action=parse&page=Foo
[03:58:42] if you're going to show it as plain text, you probably want to use the rendered html and strip all tags, \<[^>]*\>
[03:58:57] hmm
[03:59:30] see http://en.wikipedia.org/w/api.php for more options
[03:59:57] hehe, thanks, i'm just deep in thought wondering how i could use those two that you sent me
[04:00:30] i'm a total newbie when it comes to writing a script, i thought this one would be a great start since wikimedia has such a good api
[04:02:48] Splarka: the raw wikicode link seems to be the right one, but i don't know. i wonder, if my script tries to read that page, it's probably not going to see it as the simple text that i see, is it?
[04:04:22] the raw wikicode isn't expanded, so you won't see templates (infoboxes, boilerplates, navigation boxes, etc)
[04:04:56] Splarka: the rendered html link shows me "<p>" about 1000 times interspersed throughout the text of the article and everywhere else, i'm wondering how i could isolate just the text of the article from all that
[04:06:04] yes... that is escaped html
[04:06:05]

[04:06:48] convert &lt; to <, &gt; to > (quote, amp, etc as well), and then regex out \<[^>]*\>
[04:07:29] *^demon wants to write an action for the API that strips all wikicode and returns the raw text.
[04:07:49] *amirman wants ^demon to succeed
[04:08:22] yeah, i don't need any links or anything like that, i just want plain readable text of the article
[04:08:36] *^demon is also aware that it is after midnight and no such thing will be happening tonight
[04:14:42] lol
[04:16:41] what would it be, action=sparse? heh
[04:17:06] <^demon> Splarka: action=plaintext?
[04:18:59] I think it would be somewhat futile
[04:19:18] some extensions stick in
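Putting Splarka's pointers together, here is a rough sketch of the command-line fetch-and-strip script amirman is after. It assumes PHP CLI with allow_url_fopen enabled, uses the api.php?action=parse call quoted at 03:57 with format=php added so the response is easy to read from PHP, and applies the crude \<[^>]*\> strip suggested above, so the output is readable but lossy (tables, infoboxes and reference markers degrade badly). PHP's strip_tags() would do the same job as the regex.

    <?php
    // plaintext.php - sketch: print an article body as plain text.
    // Usage: php plaintext.php "Foo"
    $title = isset( $argv[1] ) ? $argv[1] : 'Foo';
    $url = 'http://en.wikipedia.org/w/api.php?action=parse&format=php&page=' . urlencode( $title );

    $data = unserialize( file_get_contents( $url ) );   // format=php gives a PHP-serialized response
    $html = $data['parse']['text']['*'];                 // rendered HTML of the article body

    $text = preg_replace( '/<[^>]*>/', ' ', $html );     // regex out the tags, as suggested above
    $text = html_entity_decode( $text );                 // then convert &lt;, &gt;, &amp; etc. back
    $text = trim( preg_replace( '/\s+/', ' ', $text ) ); // collapse leftover whitespace

    echo $text, "\n";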