[00:09:49] 03rainman * r31134 10/branches/lucene-search-2.1/ (28 files in 10 dirs): Fuzzy queries and suggestions over all namespaces [00:11:10] hi [00:11:10] how do i google for mediawiki-features without hitting tons of wikis? [00:11:10] right now i'm searching for some way to count occurrences of a symbol in a page. for example like this: [00:11:10] ;voted pro/contra: {{count|pro}}{{count|contra}} [00:11:10] {{pro}} me, {{pro}} her, {{contra}} him [00:19:44] Giszmo: other than limiting your searches to domains like mediawiki.org and wikimedia.org (many things are still on meta.wikimedia.org for historical reasons) I don't see what to do [00:38:39] thanx BrokenArrow. didn't find much though [00:38:46] cu [01:22:10] 03(NEW) trackback.php XML parse error? - 10https://bugzilla.wikimedia.org/show_bug.cgi?id=13086 15enhancement; normal; MediaWiki: General/Unknown; (dsimonto) [01:59:05] I'm trying to set up a wiki, yet something in the setup form (with the passwords) is messing up [01:59:13] can someone help? [02:04:02] btw guys, thanks for linking my article in mediawiki that was a surprise [02:04:38] anyone? [02:31:37] 03(mod) Special page listing changes to all pages linking *to* a given page - 10https://bugzilla.wikimedia.org/show_bug.cgi?id=6528 +comment (10spidermannequin) [02:39:55] 03(NEW) Organize Whatlinkshere by section (and support transclusion) - 10https://bugzilla.wikimedia.org/show_bug.cgi?id=13087 15enhancement; normal; MediaWiki: Templates; (spidermannequin) [02:48:47] does anyone know of a text parser that searches for a wildcard expression and outputs the case to xml? for instance
*
| * could be additional tags or other things and so on.. [02:52:32] Wiredtape: is this for PHP or JavaScript? [02:53:16] i have a bunch of txt/html files that I want to scan through and output information based on what i find in the case [02:53:38] ok [02:53:53] any chance you know of anything like that? :) [02:53:56] use regex ;) [02:55:28] :) [02:56:10] preg_match() for PHP, string.match() for javascript, no idea what for other languages... [02:57:15] yeah.. only I would need to create quite a wildcard facility for that.. as preg_match('blablabla*blablab') won't work afaik.. [02:57:47] .* [02:58:04] preg_match('/blablabla.*blablab/') [02:58:11] would that work? [02:58:15] yes [02:58:26] hmm.. let's see, thanks Skizzers :) [02:58:31] . matches any one character, * says zero or more of the previous thing [02:58:57] .* therefore matches any number of any combination of characters, leaving only enough to finish matching everything after [02:59:39] is there any way for me to get .* afterwards? [02:59:47] or whatever was in there? [02:59:58] yeah, surround it in () [03:00:05] and then $1? [03:00:09] although you'd probably want it lazy then [03:00:30] Wiredtape: not in PHP [03:00:33] lazy? [03:02:22] yes, preg_match('/aaa(.*)b/', 'aaaaaaaaabbbbababaaaab', $matches) would give 'aaaaaabbbbababaaaa' as your first group, but preg_match('/aaa(.*?)b/', 'aaaaaaaaabbbbababaaaab', $matches) would only give out 'aaaaaa' [03:02:50] http://us.php.net/preg_match for more info on calling the captured bits [03:03:01] you mean $matches[0] would have 'aaaaaa'? [03:03:14] no, $matches[1] would [03:03:15] I'm actually looking at it right now.. [03:03:34] $matches[0] would contain the entire match [03:03:52] ok.. now, does preg_match look at spaces as well? [03:04:41] yes [03:06:01] Skizzerz, thanks! one last q, where can i find the different expressions preg_match uses? (like .*).. [03:06:38] http://en.wikipedia.org/wiki/Regular_expression seems like a good source [03:06:53] great.. 
thanks a lot :) [03:07:59] (you're|ur) (very)? ?welcome (:\) )+ [03:08:07] :) lol [03:16:33] Wiredtape: try using str_replace, strpos, or substr [03:16:41] Wiredtape: preg_match can get really slow if you call it a lot [03:17:23] xtine, I looked into those earlier.. can they handle expressions as well? [03:17:50] well you want to run a find and replace right? [03:18:08] with wildcards.. and to be able to access those wildcards.. [03:18:18] you can mix it up so you can set what you want to do [03:18:32] it's just if you are worried about speed [03:18:46] if this is just for a small thing, no worries [03:19:08] but if you are doing a lot of text crunching, preg_replace can become very expensive [03:19:38] so you're saying make string searches and the preg_match as a case option? this is a good idea, though i'm not really worried about speed , this should run on my local machine and not on the server. [03:20:13] btw, anyone know where I can find info about the xml model i need to make for special:import to work? [03:21:27] xtine, thanks :) [03:21:35] nps [03:21:56] i had a web dev friend harp on me how it's not good practice to just call preg_replace when you can use other more efficient functions to essentially do the same thing [03:22:16] :) [03:22:23] of course i don't know what level you are working at [03:22:47] i think this is the "i dont care" level :) [03:22:50] lol [03:23:20] i'm just trying to extract some content from a large database, and then import it into my wiki.. [03:23:21] :) [03:23:26] ah... [03:23:37] I don't really care about efficiency atm.. [03:24:10] yeah, do what's easiest for you now [03:24:24] but it's good to know if you ever run into situations where you need to speed things up. [03:25:20] true, hence the thank you :) [03:26:08] nps [03:26:48] though i still can't find what the hell # means in regex.. [03:27:31] or P for that matter.. [03:34:23] anyone know vbulletin? 
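The greedy vs. lazy capture behaviour explained above can be sketched in JavaScript (the chat uses PHP's preg_match, but the regex semantics are the same here; the sample string is taken straight from the conversation):

```javascript
// Greedy: .* consumes as much as possible while still letting
// the trailing 'b' match, so the capture runs to the last 'b'.
const s = 'aaaaaaaaabbbbababaaaab';

const greedy = s.match(/aaa(.*)b/);
// greedy[0] is the whole match, greedy[1] the captured group
// (mirroring PHP's $matches[0] and $matches[1]).

// Lazy: .*? consumes as little as possible, stopping at the first 'b'.
const lazy = s.match(/aaa(.*?)b/);

console.log(greedy[1]); // 'aaaaaabbbbababaaaa'
console.log(lazy[1]);   // 'aaaaaa'
```

The lazy form is usually what you want when extracting a field between known delimiters, which is the use case described above.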
[03:40:37] Wiredtape: installed it, admined it, haven't hacked it before. [03:41:31] xtine, i'm specifically looking to find out if there's a way to view a single post by post id and if there's a way to get the raw text? [03:41:37] without admin rights.. [03:41:54] any chance you know of a way? :) [03:42:23] Wiredtape: you mean by like just having the url [03:42:36] and then just grabbing text? [03:42:51] well.. it could be a simple html as well.. [03:43:04] the closest i found was printthread.php?t= well you can view a post just by its id [03:43:12] which at least strips all of the skin.. [03:43:25] ah, you mean just one particular post [03:43:29] not the thread associated with it [03:43:36] yeah.. [03:44:12] you can view a single post.. [03:44:46] like /forum/showpost?p=xxxx [03:44:51] xxxxx being the number of the post [03:44:58] ok, let me see, one sec.. :) [03:45:37] actually [03:45:57] it's showpost?p=xxx&postcount=yyy [03:46:02] that works, but any chance you can see the same thing in print format? [03:46:04] xxx being thread number, post number yyy [03:46:12] ah [03:46:18] then in that way [03:46:22] you would have to make a script :) [03:46:33] unless there is some extension that does that [03:46:45] :) well, showpost.php works fine, it's pretty simple to strip from that.. [03:46:54] in the source of a post [03:46:58] you got a [03:47:00] now to only make a script to get showpost.php?p=N [03:47:10] so if you are going to strip [03:47:19] i saw.. but i also need a title, which makes it only a tad more difficult.. but not by much.. [03:47:20] you can easily strip out from the message comment tag [03:47:34] Hi, I just upgraded and 1.11.1 broke my home rolled extensions. No error, just redirect to main page, or "The action specified by the URL is not recognized by the wiki" [03:47:44] then in php there is a function that you can strip_tags to get rid of html :) [03:47:50] Any ideas? [03:47:57] xtine :) thx again.. 
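The scraping idea above boils down to fetching showpost.php?p=N and stripping the markup; PHP's strip_tags (mentioned in the chat) has no built-in JavaScript counterpart, so here is a minimal sketch of the same idea. The sample HTML and the stripTags helper are illustrative only, not vBulletin's actual markup:

```javascript
// Minimal tag stripper in the spirit of PHP's strip_tags().
// Note: regex-based stripping is fine for a quick local script,
// but it is not robust against all real-world HTML.
function stripTags(html) {
  return html.replace(/<[^>]*>/g, '');
}

// Hypothetical fragment of what showpost.php?p=N might return:
const post = '<div class="post"><b>Title</b>: some post text</div>';
console.log(stripTags(post)); // 'Title: some post text'
```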
[03:48:31] iharding, explain a bit more about these extensions... [03:48:59] xtine, having a bit of a staying/leaving dilemma? [03:49:15] pressed wrong key, exited room instead of my browser tab [03:49:17] XD [03:49:18] They are special page extensions using a "3 file extension" template I found. Simple case is generating a form, creating a page based on a template. [03:49:24] Works with 1.10.0 [03:49:48] and you don't get an error but are just redirected to the main page? [03:49:51] I see a caveat about "hooks" not returning a value, but I don't think I'm doing that... [03:50:02] that would cause a break.. [03:50:07] Yes, or I get the No such action message [03:50:27] no such action? hmm.. i've never heard of that before.. [03:50:43] I can paste code somewhere if you like. I'm sure it will be obvious to a php person... [03:50:49] or you can go here and see... [03:50:51] rafb.net [03:51:03] and the link please.. [03:51:15] http://gmpartswiki.com and click on the My Part List link [03:51:32] The other one is just listed under special pages now, called Create Part. [03:53:31] iharding, the problem is either $wgArticlePath in LocalSettings or your htaccess file [03:53:45] take a look at http://gmpartswiki.com/w/Special:PartList [03:54:18] whereas the link on the left links to /wiki/index.php/Special:PartList/ [03:54:51] Does anybody else watch their wiki's Recent Changes RSS/Atom feed using Google Reader? [03:55:45] http://rafb.net/p/VkfgYU62.html [03:55:52] I've noticed a problem where changes show up multiple times in Reader's feed history. I just wondered if that happened to anybody else. [03:56:27] It doesn't seem to happen to any of the other feeds I watch, just my wiki feed. [03:57:05] Code is for this page [03:57:10] http://gmpartswiki.com/w/Special:CreatePart [03:57:36] iharding, the problem isn't in the extension's code [03:57:37] That one actually creates the form, but the action is not found. 
Gah [03:58:29] I'm using the same LocalSettings though, and all else seems to work... [03:58:54] :) as i said check your articlepath and htaccess.. [04:01:14] $wgArticlePath = "/w/$1"; [04:01:30] and script path? [04:01:41] Alias /w /usr/pkg/share/httpd/htdocs/wiki/index.php [04:01:41] Alias /index.php /usr/pkg/share/httpd/htdocs/wiki/index.php [04:02:39] iharding, ok what creates the partlist portlet? [04:02:51] did you put it directly into the skin? [04:03:22] I put that link in the skin, but that's a red herring. The Create Part extension shows the problem [04:03:41] http://gmpartswiki.com/w/Special:CreatePart [04:04:14] The form gets generated, but any submissions go to the search page. That's not supposed to happen, and didn't with 1.10 [04:04:27] I think I missed a return statement or something, and 1.10 was more forgiving. [04:05:50] iharding, i'm pretty sure this has more to do with proper linking than with a missing return statement.. though I can't tell for sure, the only thing I can tell is that if I click on the part links on your sidebar the link points to an invalid location: /wiki/index.php/Special:.../ [04:08:12] a mediawiki installation is slightly less usable out of the box than wikipedia, because it's lacking a lot of useful templates. is there some way to get commonly useful templates, such as ambox, as opposed to the ones which are specific for an online encyclopedia, such as most of the infoboxes, apart from manually copying them (and all templates they depend on, css code and images) from wikipedia? [04:08:36] sorry if this is a common request. i wasn't able to find anything about it in the faqs. [04:09:04] ehn, i was actually thinking of something like this.. but the only option i can think of is a nice list of useful templates and then using special:export [04:09:07] ehn: there's a template sharing project to get templates sync'd within WMF but it's stalled last i checked. [04:09:29] [[WP:TSP]] or something [04:09:36] checking... 
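Pulling templates out of Wikipedia via Special:Export, as suggested above, is mostly a matter of building the right request. A minimal sketch of constructing such a URL; the parameter names (pages, templates, curonly) follow MediaWiki's export form, but treat the exact parameter set as an assumption rather than a guaranteed API:

```javascript
// Build a Special:Export URL for a list of pages.
// templates=1 asks MediaWiki to also include transcluded templates,
// which helps with the dependency-chasing problem mentioned above.
function buildExportUrl(wiki, pages) {
  const params = new URLSearchParams({
    pages: pages.join('\n'), // one page title per line
    templates: '1',
    curonly: '1',            // current revision only
  });
  return `https://${wiki}/wiki/Special:Export?` + params.toString();
}

const url = buildExportUrl('en.wikipedia.org', ['Template:Ambox']);
console.log(url);
```

The resulting XML dump can then be fed to Special:Import on the destination wiki; CSS and images still have to be copied separately.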
[04:10:19] yeah, it's pretty much the same shape from last i checked [04:10:35] meta sends you to en.wp which sends you to meta [04:11:54] respective talk pages are similarly inactive [04:12:05] would probably also have to be an extension sync and tidy sync [04:12:18] erm? [04:13:13] many templates on wikimedia won't work without Tidy enabled (if the html tags are split via transclusion or parserfunction) [04:13:38] {{#if:1|}}
foobar
[04:13:46] playing around with special:export/import for now... :/ [04:13:47] and many won't work without the proper extensions enabled/installed [04:13:59] ohhh, didn't get tidy was an ext [04:14:07] so what's "extension sync" then? [04:14:08] it isn't [04:14:11] 03laner * r31135 10/trunk/extensions/SmoothGallery/ (4 files): [04:14:11] * Refactored code [04:14:11] ** Broke apart parsing and rendering [04:14:11] ** Moved most functions into classes [04:14:11] ** Fixed broken gallery checks [04:14:12] ** Removed unneeded global $wgSmoothGalleryArguments [04:14:18] CIA-6: quit that! [04:14:23] tidy is a localsettings option [04:14:26] but it isn't enabled by default IIRC (for example, Wikia doesn't have it) [04:14:30] jeremyb: thats a bot ;) [04:14:35] i know! [04:14:39] lol [04:14:55] Splarka: so why is it on anywhere? just for the templates? [04:15:14] cleaner cleanup of messy wikicode/html I guess [04:15:21] but it conflicts with some things and is buggy [04:15:31] does it improve UA compat? [04:15:41] Splarka: i'm sure it cleans up my messy html [04:16:06] a broken table inside a table in wikicode renders amazingly bad, column-one ends up inside the bodycontent [04:16:09] i don't see how the example template you gave would be a problem anyway [04:16:29] {{#if:1|}}
foobar
won't work without tidy [04:16:42] the sanitizer escapes the inner TD to <td> [04:17:09] sanitizer? [04:17:34] http://svn.wikimedia.org/viewvc/mediawiki/trunk/phase3/includes/Sanitizer.php [04:19:34] i must say that there is too much project related lingo involved :) i have no idea how a new mw user can learn all this stuff... [04:19:38] https://bugzilla.wikimedia.org/show_bug.cgi?id=9252 <-- here was a bug with tidy [04:20:13] or rather, a bug in mediawiki only visible when tidy was enabled [04:21:31] well g'night! [04:21:34] it is very annoying, MediaWiki should fork tidy and make it core and merged with the sanitizer (and not optional), rather than try to support two types of template syntax forever [04:22:14] *Splarka grumps [04:22:35] *jeremyb grumps [04:22:56] harrumph! [04:23:35] anyway, jeremyb, at least the CIA here reports naughty things immediately, instead of 25 years later [04:24:01] naughty? [04:24:02] Oh, we waterboarded some prisoners and tried to assassinate castro [04:24:16] and why are you so much faster than everyone else? [04:24:34] that's what all the girls ask me [04:24:36] Here's the prob... http://www.gossamer-threads.com/lists/wiki/mediawiki/104782 [04:24:45] hardy har har [04:31:29] *jeremyb wonders how wmf demographics compare with wikia [04:31:35] specifically geography [04:34:46] wikia keeps detailed stats via google analytics... [04:34:59] hosted? [04:35:05] private [04:35:18] ad revenue and all [04:35:19] erm... hosted? :P [04:35:37] i.e. on google's servers or on wikia servers? [04:35:49] the stats? 
on google's [04:35:57] so hosted [04:36:09] well, not so mcuh [04:36:13] ^much [04:36:17] more like, generated [04:36:34] mozilla has outgrown the internal version of a few analytics solutions and is now moving to google hosted [04:36:52] (was urchin) [04:36:54] they add the GA js to each page view, users view a page and their browser loads the remote .js which google sees and records their visit [04:37:26] i think it's more complicated than that [04:37:29] and then the wikia staff with access to that GA account can click and view the infos... would you call that hosted? [04:37:41] it queries all kinds of info about the browser via js [04:37:52] yes i would [04:38:02] I'd call it "offsite" [04:38:44] anyway, I wrote a temp JS web-bug for Wikia to log activity like action=, namespace, anon/logged, time, and browser type (before they switched to GA), but no geographical info was ever taken [04:38:55] as for Wikimedia... [04:39:29] the toolserver does do the localization watchlist message... but I don't know if that data is saved anywhere [04:39:52] localization? [04:40:07] you know there's a toolserver project just for tracking page stats right? [04:40:08] if (wgPageName == "Special:Watchlist") [04:40:08] addOnloadHook((function (){document.write('