[00:02:32] do you guys know what error this means? Error, Setup.php must be included from the file scope, after DefaultSettings.php [00:02:46] oh wait, nevermind [00:04:49] augh, nope same error :-( [00:07:18] means move Setup.php after DefaultSettings.php [00:08:17] o_O [00:10:49] darkcode: err [00:11:24] darkcode: no, the problem was i forgot to make my LocalSettings.php file completely [00:11:45] ok [00:18:08] hey SudoKing [00:18:30] hey [00:26:51] is there any simple way of finding "wanted" pages that are close to already existing pages? ie. "tree" exists, but 3 pages point to "trees" [00:28:38] and do this for all "wanted" pages? [00:34:28] is "none of these things" a singular noun phrase or plural? [00:35:29] TimStarling: either, i think (works with "it is..." and "they are..."). but i'm not a native :) [00:35:54] naught101: no. "close to" is a difficult and expensive thing for a database to think about. [00:36:40] Duesentrieb: yeah, I know. I was thinking something with a spell checker might work, but I guess "trees" is a real word [00:37:08] TimStarling: none of these things ARE [00:38:24] naught101: you'd need a special index, based on stemming (porter works well enough for english) or soundex. doable, but not trivial, and pretty much impossible to get right for a multi-language environment [00:39:02] or perhaps a thesaurus in the search engine [00:39:25] archivist: bwwwaaaahahaha! [00:39:35] sorry :P [00:40:05] archivist: make sure you never have to deal with proper names, abbreviations, or composite terms [00:40:18] *archivist imagines wikipedians editing the thesaurus........ 
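The special-index idea Duesentrieb describes — stem each existing title, then look "wanted" titles up by the same stem — can be sketched roughly as below. The stemmer here is a toy suffix-stripper standing in for a real Porter stemmer; all names and rules are illustrative, not anyone's actual code:

```python
def crude_stem(word):
    """Toy stand-in for Porter stemming: fold a few plural forms."""
    if word.endswith("ies") and len(word) > 4:
        return word[:-3] + "y"          # stories -> story
    if word.endswith("s") and not word.endswith("ss") and len(word) > 3:
        return word[:-1]                # trees -> tree
    return word

def near_matches(existing_titles, wanted_titles):
    """Map each wanted title to existing titles sharing its stem."""
    index = {}
    for title in existing_titles:
        index.setdefault(crude_stem(title.lower()), []).append(title)
    return {w: index.get(crude_stem(w.lower()), []) for w in wanted_titles}
```

With an index like this, the "3 pages point to trees" case resolves to the existing "tree" page in one dictionary lookup, though as noted it falls apart in a multi-language environment.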
[00:40:29] naught101: thanks, but the sentence was looking ugly either way, so I rewrote it [00:40:34] archivist: actually, extracting a thesaurus from wikipedia is my diploma thesis [00:40:37] *Duesentrieb goes back to coding [00:41:50] Duesentrieb, I have been messing with my own search engine and keep thinking I need to add a specialised thesaurus [00:42:26] I was just writing a reply to the search engine thread on the village pump, as luck would have it [00:42:36] http://en.wikipedia.org/wiki/Wikipedia:Village_pump_%28technical%29#Why_is_the_Wikipedia_search_engine_so_bad.3F [00:43:27] TimStarling: apropos... where can i find info on the "new and improved" search stuff? [00:44:22] fuzzy keyword matching would be so awesome [00:44:48] you could try asking rainman-sr [00:44:57] at least for titles [00:45:04] rainman-sr: hii! [00:45:21] rainman-sr: wake up! [00:45:22] using backlinks/anchor text for indexing would help a LOT [00:46:05] http://ls2.wikimedia.org/search?dbname=enwiki&query=mitochondria&ns0=1 [00:46:20] Duesentrieb: anchor text is inapplicable for a wiki [00:46:22] rainman-sr: can you build an n-gram backed index for titles? [00:47:01] domas, i was thinking of adding that... you could do domas~ and then domas would expand to similarly-spelled words [00:47:08] so that we could have intelligent 404 pages [00:47:37] since i already do spellchecking with ngrams, metaphones, phrases, and whatnot [00:48:41] Duesentrieb: we are already using backlinks, but again, wikipedia is not the internet, a page is not linked according to its popularity, but generality.. you cannot link X from Y just because you like it [00:49:01] so that won't help too much either [00:49:53] rainman-sr: count how many times the word is referred to without a link too! :) [00:50:25] have a Search namespace to allow suggestions to be included for search terms? 
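The n-gram-backed title index rainman-sr mentions for `domas~`-style expansion might look something like this rough sketch: titles are broken into character trigrams and scored against the query with the Dice coefficient. The cutoff value is an arbitrary assumption, and a real index would precompute and invert the gram sets rather than scan every title:

```python
def ngrams(s, n=3):
    s = f"  {s.lower()} "  # pad so word edges get their own grams
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def suggest(query, titles, cutoff=0.4):
    """Rank titles by character-trigram overlap with the query."""
    q = ngrams(query)
    scored = []
    for title in titles:
        g = ngrams(title)
        score = 2 * len(q & g) / (len(q) + len(g))  # Dice coefficient
        if score >= cutoff:
            scored.append((score, title))
    return [title for _, title in sorted(scored, reverse=True)]
```

A misspelled query like "mitochondira" still shares most trigrams with "Mitochondrion", which is what makes this usable for the "intelligent 404 pages" idea.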
[00:50:40] anyway, correcting spelling mistakes now would be one of the most visible improvements [00:50:43] rainman-sr: i would actually argue that backlinks in wikipedia are even more meaningful than on the web. they don't express popularity, but general relevance. which is precisely what you want for a search result. [00:50:46] because that is what people always point out [00:51:04] rainman-sr: also, i didn't mean so much for ranking, as for finding keywords. [00:51:22] domas: yeah.. my current engine is still kinda buggy with spellchecks, but it should be ready by the end of the month [00:51:26] you have to distinguish the use of link text for relevance, and the use of link count for popularity [00:51:31] rainman-sr: [[United States of America|USA]] would make USA a keyword for "United States of America". [00:51:48] google of course does both [00:51:59] rainman-sr: actually, let's talk again when i've finished my diploma - then i'll have statistics and data to convince you :P [00:52:05] but we could use hit count for popularity instead of link count [00:52:15] rainman-sr: but on a related note - can i have a look at what you are doing? would be interesting. [00:52:20] Duesentrieb: yeah .. that's true .. but, people most of the time just enter nonsense there.. would need some filtering.. (I can tell because i tried it) [00:52:33] nonsense where? [00:52:52] ignore everything with a frequency < 3 [00:52:55] should work fine [00:53:55] damn, why when I see 'bt' does it mean backtrace for me and not bittorrent? [00:54:16] Duesentrieb, i've looked at that, but the data was noisy.. some of it would be useful, but a lot of it is already encoded as redirects [00:54:24] same reason wp means wikipedia to me, and not word press :) [00:55:09] TimStarling: yeah, but that won't be particularly easy to efficiently integrate into the lucene ranking algorithm [00:55:23] rainman-sr: if you include linktrails, you would get inflected forms, which might be useful. 
from my experiments, i didn't see that much noise actually, especially not if you use some cut-off. [00:56:21] Duesentrieb, I agree, you could get acronyms and inflected forms [00:56:31] rainman-sr: also, i'd do it two-way: the anchor text would become a (strong) keyword for the link target, and a (weaker) keyword for the link source. if you then also generate keywords from bold/italic text, headings, and, say, the first paragraph, you can probably ignore the rest of the text without much loss :) [00:57:07] it's possible google even includes hit count statistics in their rankings, you know for a while they were sampling outgoing clicks, maybe they still do [00:57:36] and of course there's a lot of sites that voluntarily send all their hit count data to google [00:57:58] rainman-sr: yes, acronyms, short/casual forms, etc, is what i'm going to use this info for. and i imagine it would be useful for search, even including popular typos. [01:02:00] Duesentrieb: I've experimented with anchor text as additional scores.. didn't find that it provides an additional source of information.. in most cases, all meaningful acronyms were redirects, in other cases people entered words that made sense in the context of the original article, but don't describe the target article very well [01:03:59] TimStarling, i would think that links from blogs are more than enough to calculate popularity.. one problem is that articles contain a lot of text.. sometimes it's hard to tell if something is a relevant part of the article or not [01:04:05] could try to associate search terms with page link clicks to refine the returned search results [01:04:12] i suppose for a search engine you would not want the more fuzzy combinations. "americans" -> "20th century american authors" or something. in my context (automatic disambiguation), it does make sense. 
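The two ideas above — harvesting anchor text like `[[United States of America|USA]]` as keywords for the link target, and Duesentrieb's "ignore everything with a frequency < 3" filter for noise — could be combined in a sketch like this. The wikitext link regex is deliberately simplified (no templates, linktrails, or interwiki prefixes):

```python
import re
from collections import Counter

# Matches [[Target]] and [[Target|anchor text]]; ignores section links.
LINK = re.compile(r"\[\[([^|\]#]+)(?:\|([^\]]+))?\]\]")

def anchor_keywords(pages, min_count=3):
    """Count anchor texts per link target across a corpus of wikitext
    pages, dropping anchors seen fewer than min_count times as noise."""
    counts = {}
    for text in pages:
        for target, anchor in LINK.findall(text):
            kw = (anchor or target).strip().lower()
            counts.setdefault(target.strip(), Counter())[kw] += 1
    return {t: {k: n for k, n in c.items() if n >= min_count}
            for t, c in counts.items()}
```

After enough pages, "usa" would surface as a keyword for "United States of America" without any manual redirect, which is the keyword-finding (rather than ranking) use Duesentrieb argues for.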
[01:04:54] darkcode: yes - quality feedback is a very effective way to improve ranking [01:05:13] from my IR class i remember figures of about 30% improvement [01:05:20] (not sure on what - f-measure, possibly) [01:05:24] Duesentrieb, yeah, that's a good point, could expriment with that.. [01:06:26] rainman-sr: her's my personal scrap book for my diploma project: http://brightbyte.de/page/WikiWord [01:06:58] i also collected far too many papers to read - http://www.citeulike.org/user/brightbyte [01:09:41] would be nice if you could exclude results like you can with google by using -word [01:10:22] darkcode, well you can.. [01:10:53] and if you could use quotes to search for results that include only the words together [01:11:02] like "two words" [01:11:08] you can do that as well.. [01:11:16] not that I have seen [01:11:25] neither change the results returned [01:12:26] unless it was a recent change? [01:12:41] maybe 6 months.. [01:14:25] darkcode: try following queries: two, two -bbc, "two bbs" [01:15:08] and "two bbc" and "bbc two" [01:17:20] ok well does seem to work kind of, doesn't seem to do a good job of highlighting the words that match though when using double quotes [01:18:00] or sorting by the ones with an exact match first [01:18:47] Duesentrieb: interesting, i already build a table of related words, or rather related titles, might be of interest to you.. btw, one problem with expansion like american -> "american writers" is that you need to be certain of context.. don't know how big a problem that would be.. [01:19:02] darkcode, don't look at the highlighted text.. it's (almost) random... [01:19:33] that desperately needs fixing... [01:19:38] now the only thing missing is a way to search only a specific set of pages, then Wikibooks wouldn't need to rely on google [01:20:29] english wikibooks at least makes use of google to allowing searching within only a specific book [01:20:43] darkcode, yeah, you can kinda of that as well.. but only kindof.. 
if you write categories directly in the article (and not with templates), you can search within categories [01:21:33] but that's not a very elegent solution ... [01:21:58] I was thinking of something like Special:Search/?search=words [01:23:39] then could use Special:Search/{{BOOKNAME}}?search=words [01:23:45] darkcode, hmm and what if the title prefix is not unique? Harry Potter and Harry Potter 2? [01:24:28] could use Special:Search/Harry Potter/ to limit it to Harry Potter [01:24:42] like you could with Special:Prefixindex [01:24:45] rainman-sr: the context for disambiguation would be provided by cooccurrance of links. doable, but can get quite expensive. will have to look into that [01:25:40] darkcode: well fine then, put it into bugzilla.. it's doable, just need somebody to invest time into it.. [01:26:07] or I could look into doing it myself, already submitted 3 bugs with patches [01:26:51] yeah, that would be even better :) [01:27:02] which might happen after I get done making {{#difftime:}} [01:32:15] darkcode: http://www.mediawiki.org/wiki/How_to_become_a_MediaWiki_hacker [01:32:20] have you read that? [01:33:14] TimStarling: supplying patches *is* a good way to get commit access :P [01:33:38] it's also a good way to get discouraged and leave [01:33:56] because it's easy to think that someone will care when you attach a patch to a bug report [01:34:01] which they usually won't [01:34:10] *Dashiva nods [01:34:49] well so far 2 got applied and the 3rd with modification [01:34:49] Maybe it's a subtle way of ensuring only jaded old grumpies get access ;) [01:34:53] so I think I've done well [01:39:20] fun stuff, the annoying little bugs link doesn't work right [01:39:49] You may not search, or create saved searches, without any search terms. [01:39:49] Please press Back and try again. 
[01:42:13] Hello, I have just upgraded from 1.8.3 to 1.11.0 and things went a little wacky, It looks like everything is fine but my localsettings needs adjustments, can anyone help? [01:49:13] well, if you're managing to get them committed, you're probably past the major hurdle [01:49:34] did you try running the upgrade.php script from the maintenance directory? [01:50:13] err update.php [01:50:25] i.e. working out how we operate [01:51:12] as I recall you committed one of them yourself Tim [01:51:22] Yes I ran the upgrade.php script from the maintenance dir [01:52:46] hello [01:52:48] have you tried http://www.mediawiki.org/wiki/Manual:Upgrading ? [01:52:54] when i click on install, i get this error [01:52:56] Creating tables... using MySQL 4 table defs...Query "" failed with error code "Query was empty (localhost)". [01:53:02] does someone know how to fix it? [01:53:28] system is ipod touch, lighttpd, php 5.2, mysql 4.1 [01:53:30] Yes I went through all the upgrade instructions step by step the used putty to run the script. [01:55:02] well http://www.mediawiki.org/wiki/Manual:LocalSettings.php lists all the settings available, maybe that can help you? [01:55:55] what version of mediawiki crashx? [01:56:01] the most recent [01:56:28] i get this error when clicking on install in the initial configure screen [01:56:41] wiki 1.9.3 [01:56:57] I can navagate to almose all the other pages but not the front Main page [01:58:33] I figured this was going to be pretty hard. I have a 4 language wiki with interwiki links and commons etc.. [01:58:47] crashx I could be mistaken but I think it requires MySQL 5.x to run [01:59:20] So I started on the pt language site first to see how the upgrade would go. [01:59:23] what happens when you try to navigate to the front page silly? 
[02:00:29] mediawiki.org says 4.0 or higher :/ [02:00:30] The page cannot be displayed [02:00:52] Hi, I'm having problems with the parser function #if in tables [02:01:11] The Main Page does show in the all pages list though [02:01:32] the solution "$wgUseTidy=true" results in a strange warning [02:01:59] did anyone meet the problem "Warning: proc_open() [function.proc-open]: open_basedir restriction in effect. File(/dev/null) is not within the allowed path(s):"...and so on? [02:02:21] you trying to include rows or columns depending if a condition is true qubodup? if so you need to add a template like Template:! which contains "|" only and use that [02:03:06] I'm using html code [02:03:45] darkcode: good solution though, I guess I'll do that but for html tags (i prefer those) [02:04:15] well it should only be a problem if using wiki tables qubodup [02:05:11] well, < and > get rendered to their non-rendered equivalent, the solution for that was to set "$wgUseTidy=true" [02:05:41] but that resulted in the error message you can see in full size at http://libregamewiki.org/ right now [02:06:17] crashx I don't know enough about the ipod touch nor lighttpd to know if those could be a source of problems or not but it does sound like a problem with MySQL rather then anything else [02:06:44] darkcode: I also copied the error message to v [02:06:45] http://meta.wikimedia.org/wiki/Talk:ParserFunctions/Archive_3#Solution [02:07:57] Ok I can navagate through all the special pages and create, preview articles but no Main page [02:08:37] The Random page link also leads you to a The page cannot be displayed error. All the rest is fine [02:09:00] does the page just not exist or is it now showing the links, layouts, etc. commonly associated with it silly? 
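The workaround darkcode describes for `{{#if:}}` inside wiki tables — a Template:! containing only a pipe, so the parser function doesn't mistake table pipes for its own argument separators — looks roughly like this; the `extra` parameter name is illustrative:

```wikitext
<!-- Sketch: a conditional table row. Raw "|" would end the #if
     argument, so table pipes are supplied via {{!}} instead. -->
{| class="wikitable"
|-
| always shown
{{#if: {{{extra|}}}
| {{!}}-
{{!}} row shown only when {{{extra}}} is set
}}
|}
```

As noted in the channel, this only matters for wikitable syntax; HTML `<table>` markup avoids the pipe conflict entirely.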
[02:09:26] 03(NEW) Add a CSS class to target IP user pages - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12509 15enhancement; normal; MediaWiki: General/Unknown; (rememberthedot) [02:10:11] 03(mod) Add a CSS class to target IP user pages - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12509 (10N/A) [02:10:26] 03(mod) Add a CSS class to target IP user pages - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12509 (10N/A) [02:10:45] hmm what's your wgArticlePath, wgScript and wgScriptPath set to in LocalSettings.php, silly? [02:10:46] the main page Does exist i see it in the all pages link [02:11:03] ok let me check please [02:11:38] I mean is the msg about it not existing a 404 error, or is it a msg within mediawiki with all the sidebars and stuff showing? [02:12:40] $wgArticlePath = "$wgScript/$1"; [02:13:25] $wgScript = "$wgScriptPath/index.php";$wgRedirectScript = "$wgScriptPath/redirect.php"; [02:14:40] those look correct, what about wgScriptPath? [02:15:04] $wgScriptPath = "/wiki"; [02:15:15] Ok I am finding out something [02:15:41] Any page with interwiki links or bringing in images from my commons dosent work [02:17:03] using php as an apache module or cgi? [02:17:20] Nope I was wrong only pages bringing in images from the commons dont work [02:17:42] qubodup I thought that was fixed already the translation, but maybe I was wrong, guess putting them inside a template couldn't hurt [02:18:02] ok [02:18:39] I don't know anything about how that works to be any help silly, so I don't know what has changed [02:19:05] so I guess the main page uses images from commons [02:20:36] Ok I have a 4 language wiki with commons, just like Wikipedia, so that images are stored in the commons and appear in all four languages wikis [02:21:19] Maybe my logic is wrong here but, the upgrade seemed to work.. so should I upgrade the commons wiki, and hope it all works?? 
[02:21:43] a quick search turns up http://www.mediawiki.org/wiki/Manual:%24wgUseSharedUploads [02:22:21] I was just going to say I think that $wgUseSharedUploads may work differently in 1.11.0 [02:22:53] and apparently http://www.mediawiki.org/wiki/Manual:%24wgSharedLatin1 was removed [02:23:08] oh no [02:23:17] So how does it work now? [02:23:27] Every language has its own images? [02:25:03] well wgSharedLatin1 was for setting is the repository used latin1 names [02:25:10] I guess now it must be all in unicode [02:25:36] so maybe upgrading your commons wiki would fix it, but possibly break it for the rest [02:25:42] Ok I can live with this, as long as shared uploads still works. [02:26:23] hmm [02:26:28] one other thing to try first [02:26:40] I think I have to set the character set but I forget all this now [02:26:46] Ok great [02:27:01] rebuildImages.php in the maintenance directory [02:27:44] or rebuildall.php [02:27:54] ok THANKS but on which wiki, the language or the commons? [02:28:09] the language one [02:28:20] ok Ill try that. lets see [02:29:59] 03(NEW) Can't access WikimaniaTeam wiki - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12510 04BLOCKER; normal; Wikimedia: General/Unknown; (meno25wiki) [02:33:02] 03(NEW) zh-hant centralnotice text shows up as grey box when using variant=zh-hant - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12511 normal; normal; Wikimedia: Language setup; (kbblogger) [02:33:03] I tried rebuildimages and it didnt work. rebuildall.php sounds a little risky but what do i got to loose [02:33:21] irs rebuildImages.php [02:33:28] err its [02:33:52] capital I [02:34:32] yes I copied it from the chat and I got 0 of 0 rows updated [02:34:43] ok [02:35:05] well maybe the update script runs all those [02:35:27] should i try the rebuild all? 
[02:35:37] 0 of 0 rows updated [02:35:47] sorry 0 of 0 rows updated [02:36:00] is an accidental copy to the chat window [02:36:35] no because I took a quick look it only updates recent changes, and text index from a look at it so wouldn't effect images I don't think [02:37:03] Well I am relieved because I think that there is only one error in the upgrade process. Everything else looks fine [02:37:18] I figured I would have 4 or 5 things to work through [02:37:30] ya [02:37:55] I'm going to guess its abandonment of latin1 might be the reason its not working [02:38:26] would at least eliminate one potential source of problems [02:38:40] Yes I am almost sure this is the problem. [02:38:54] 03(mod) Add a CSS class to target IP user pages - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12509 (10N/A) [02:39:06] So what do i do change the character set in phpMyAdmin?? [02:39:37] What do I change it to, just Unicode? [02:41:37] utf8 I think, but not sure [02:42:37] well I cant get into plesk for some reason.. But do I change this using phpMyAdmin? [02:43:42] upgrading the commons wiki would probably be the safest thing to do [02:44:18] ok thanks, you have been very helpful. Have a nice evening.. [02:44:37] yw, have a nice evening too [02:44:56] *darkcode wonders where all the developers wondered off to [02:45:23] thanks [02:45:56] lunch [02:46:25] were you asking a development-related question? 
[02:47:00] no I was just thinking a developer would know more about the answers then I to three people's questions [02:48:12] you seemed to be handling it well enough [02:48:20] like if upgrading a wiki from 1.8.3 to 1.11.0 would break a common image repository and if upgrading the common image repository would fix it [02:48:25] ok [02:49:26] I only have a limited amount of time to spend each day answering questions on #mediawiki [02:49:45] everyone does [02:50:08] the character set fot the lang wiki "is" UTF-8 Unicode (utf8) and the connection collation is utf8_general_ci, does this sound correct? [02:50:12] so if someone else is handling it, I'm happy to stay away [02:50:52] that is both on the language and the commons wikis [02:50:54] regarding the upgrade: it would probably work [02:51:06] sorry to interupt [02:51:11] best to upgrade both of them within a short period of time [02:51:32] Yes I upgraded and everything is fine except on pages that bring in an image from the commons [02:51:43] ok well there you go silly, confirmation from someone who knows better then I [02:51:45] although I have not upgraded the commons wiki yet [02:52:04] best to try upgrading the commons wiki [02:52:11] what sort of commons configuration is it? [02:52:26] is there a shared database? I think it's best if you have a shared database [02:52:38] I guess it is a common commons configuration not trying to be smart [02:53:05] Nope... I was told to not have shared databases. [02:53:13] This was a year ago though [02:54:11] maybe that has changed [02:55:33] hmmm, there is support for it [02:56:23] so you've set $wgSharedUploadDirectory and $wgSharedUploadPath, but not $wgSharedUploadDBname, is that right? 
[02:56:26] It would be a bit hard but I could start from scratch and set up the wikis again in a shared database if you think this would be advantagous [02:57:07] Yes I have set $wgSharedUploadDBname [02:57:47] I have a four language wiki working fine on 1.8 commons and all [02:57:54] right, all three then? [02:58:09] So I decided to upgrade one language to see if it would be fairly easy [02:58:20] Yes three still work [02:58:31] no, all three globals, not all three wikis [02:58:49] the pt I upgraded and it works on the interwiki links but not on commons images [02:59:09] I have one global and four languages [02:59:12] what happens when you try to use a commons image? is it a red link? [02:59:38] this is a global: $wgSharedUploadDirectory [02:59:48] no, I get a the page cannot be displayed on the 1.11.0 upgrade. the others bring in the image fine [02:59:51] here are another 2: $wgSharedUploadPath, $wgSharedUploadDBname [03:00:14] is the wiki public? [03:01:07] Yes [03:01:13] link? [03:01:15] it is public [03:01:15] what is the URL? [03:01:30] I am using all these $wgUploadNavigationUrl$wgUseSharedUploads = true;$wgSharedUploadPath$wgSharedUploadDirectory$wgHashedSharedUploadDirectory$wgFetchCommonsDescriptions$wgSharedUploadDBname$wgSharedUploadDBprefix$wgRepositoryBaseUrl [03:01:36] sorry [03:01:55] $wgUploadNavigationUrl [03:02:02] $wgRepositoryBaseUrl [03:02:27] getting the url [03:02:49] http://en.etnopedia.org/wiki/index.php/Main_Page [03:03:42] pt.etnopedia.org I take it? [03:03:50] right [03:04:02] can you give me the URL of a page with a non-working commons image? [03:04:11] http://pt.etnopedia.org/wiki/index.php/P3%A1gina_principal [03:04:18] xchat? 
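The shared-upload globals silly lists could fit together in LocalSettings.php roughly as below. Every value here is made up for illustration — real paths and names depend on the install — and `$wgSharedUploadDBname` is only useful when the wikis can read each other's databases:

```php
<?php
// Illustrative sketch of a Commons-style shared upload setup
// (hostnames, paths, and the DB name are placeholders):
$wgUseSharedUploads         = true;
$wgSharedUploadPath         = 'http://commons.example.org/images';
$wgSharedUploadDirectory    = '/var/www/commons/images';
$wgHashedSharedUploadDirectory = true;
$wgFetchCommonsDescriptions = true;
$wgSharedUploadDBname       = 'commonswiki'; // omit for the HTTP-only variant
$wgRepositoryBaseUrl        = 'http://commons.example.org/wiki/Image:';
```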
[03:04:19] http://pt.etnopedia.org/wiki/index.php/P%C3%A1gina_principal [03:04:21] ya [03:04:38] I built the whole thing in spanish on http://www.etnopedia.org and now am migrating all the Spanish over to this wiki [03:05:19] ok, well the error message on the page darkcode just gave is pretty easy to explain [03:05:29] it's saying that you haven't upgraded your commons database yet [03:05:50] so I suggest fixing that first [03:05:54] ok well thats why im called [03:06:34] Thank you Tim and darkcode I will do this [03:06:56] I was just a little nervous about upgrading all before i could see one work fine [03:07:14] It took me about three months to get this all working [03:07:17] you could disable commons, then it would work fine [03:08:09] Yes but part of the purpose of this wiki is to share photos [03:08:53] You mentioned using a shared database, I was told to use seperate ones, Do you recommend a shared? [03:10:30] you have a shared database already, it's just a different kind of shared database to the one you're thinking of [03:10:37] you don't need to change anything [03:10:39] just upgrade [03:11:31] Well the reason I ask is that our spanish only wiki has quite a few articles and we have not begun the migration yet [03:12:05] we also plan to add double the articles in english so I can change it to be correct now, before it gets too big [03:12:53] I really want to do it right and so if you recommend a shared database i can start over. [03:12:59] I've already answered that [03:13:28] Ok thanks, do not change anything just upgrade [03:13:34] i get it [03:13:37] yes [03:13:44] ok thank you again [03:21:26] can anyone tell me why internet explorer is terrible? [03:21:59] That could take a while :) [03:22:06] ha [03:22:17] do you got forever? [03:22:43] i have until it stops breaking my wiki pages [03:22:48] and i don't have to use it anymore [03:23:06] Do you have a link to your wiki? 
[03:23:15] Most breakage issues have to do with css [03:23:15] http://commons.bahaikipedia.org/ [03:23:33] it's fine in firefox... [03:23:39] What's breaking? [03:23:55] at the top ie cuts off the English/French bar [03:24:12] the two top boxes get mashed together [03:25:01] Ah k [03:25:44] i copied over wikipedia's homepage and it doesn't break [03:26:07] I can't figure it out [03:26:30] Can you perchance take a screenshot of what it looks like in IE? [03:26:33] I don't have it on this comp [03:26:47] sure [03:27:34] try changing things like "+.7em" to "0.7em" [03:27:45] A seat of the pants guess would be that it has to do with "-moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial;" [03:27:50] (i didn't know you could remove it) lol [03:27:51] But I can't see exactly how [03:28:40] 1 second [03:29:27] http://img120.imageshack.us/img120/3893/88978315sb6.png [03:29:53] simply, IE doesn't understand those things [03:29:54] i'm donig that now darkcode [03:30:21] so those would only have an effect in firefox and mozilla, amidaniel [03:31:02] darkcode: Yes, I'm well aware :) But they don't seem to be anything that would cause overlap issues if ignored [03:34:41] You might try removing the explicit margin jazz in the style and fake it with
, etc. [03:34:53] IE gets confused by css very easily :) [03:35:19] i noticed that wikipedia seems to have it solved [03:35:30] i just can't figure out exactly how they do it [03:37:02] you have an empty table that has a negative margin and then the lang-mp template has 0 margin [03:37:09] that's probably what's doing it [03:37:42] ok [03:37:46] where? lol [03:38:13] just before end of header section [03:38:31] the whole "style="font-size" bit? [03:38:35] Ah, indeed, you do [03:39:27] how can i fix it? [03:39:50] Remove this table: {|style="width:100%;background:none;margin:-.8em 0 -.7em 0" [03:40:11] The table right after the "Current number of files" one [03:40:28] the whole thing? [03:40:41] to "end of header section" [03:40:47] There, did it for you :) [03:41:13] heh [03:41:14] :) [03:41:31] ahh that did it :D [03:41:36] thanks guys :) [03:41:55] no problem :) [03:43:05] and I added a bottom-margin so there is spacing again [03:43:13] err margin-bottom [03:43:30] :D [03:43:31] great [03:43:48] i was trying to figure out how to get spacing [03:43:49] lol [03:44:17] now whoever is still living in the dark ages will see my site correctly [03:46:16] That would be about 80% of the internet users out there :) [03:46:36] :D [03:46:39] looks good [03:46:48] you could add a firefox button to your site that shows up on every page to encourage people to switch to firefox [03:47:04] heh [03:47:10] or just leave the page broken hah [03:47:36] with a "this site looks best in firefox" message [03:47:42] heh yah [03:47:52] "the internet looks better" [03:48:19] of course I've always been tempted to create a website that checks for the presence of IE and denies access, a reverse of sites that require IE [03:48:29] haha [03:48:45] yah i've run in to ones that require IE [03:48:49] really annoying [03:49:46] most don't really require IE [03:50:09] just use the useragent switcher extension to masquerade as IE [03:50:23] and they'll usually let you in [03:51:25] lol [04:02:46] hmm 
there seems to be a bug in bugzilla [04:03:25] says you can use relative days, but things like "1 year ago" aren't working for me [04:03:31] err relative dates [04:08:11] thanks again darkcode :) [04:27:21] ok Upgrading the commons and the other languages worked, Thanks again! [04:33:24] {{REVISIONTIMESTAMP}} updates on null edit, is that expected behavior? [04:33:37] if ( in_string( '<onlyinclude>', $text ) && in_string( '</onlyinclude>', $text ) ) { [04:33:45] $this->despair(); [04:33:46] } [04:34:28] updates to what? [04:34:43] to the date of the null edit [04:35:09] *shrug* guess not [04:35:22] is amusing anyway [04:35:47] do you think anyone will mind if I remove onlyinclude? [04:36:13] hey, maybe I could transform it to noinclude/includeonly on save [04:36:16] can you migrate all usages of it with the appropriate <noinclude>/<includeonly>? [04:36:17] hah [04:36:23] GMTA/FSD [04:38:01] so do people actually use it? [04:39:51] nobody knows [04:40:02] I wonder if #wikipedia knows [04:41:19] #wikipedia doesn't usually know much ... [04:55:55] I used it once but I didn't inhale [05:03:08] 03(mod) UTF-8 encoding problem related to the "user=" parameter at « count_edits » - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12053 (10gangleri) [05:07:58] 03(NEW) Add quicklinks after IPs, to common registries/queries - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12512 15enhancement; normal; MediaWiki extensions: CheckUser; (FT2.wiki) [05:12:12] certainly isn't the most commonly used tag, but it is used, though i would guess on less than .5% of templates... [05:12:34] it's a pretty obscure tag [05:17:40] 03(mod) Add quicklinks after IPs, to common registries/queries - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12512 +comment (10JSchulz_4587) [05:22:48] I think I've got a way to implement it without too much hassle, give me a minute to try it out... 
[05:33:59] TimStarling: http://en.wikipedia.org/w/index.php?title=Wikipedia:Abuse_reports/IP/79.x.x.x&curid=14978706&diff=182080364&oldid=182066795 :) [05:34:08] I haven't seen funny vandalism in months [06:00:54] 03(mod) MediaWiki should support printable UTF-8 wiki project codes - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12508 summary; +comment (10gangleri) [06:04:58] 03(NEW) ajaxwatch.js applies ID to wrong element, only processes one; tooltip doesn't update - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12513 trivial; normal; MediaWiki: User interface; (voyagerfan5761) [06:05:34] 03(mod) consistency between site setup and pages of the site - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12499 (10gangleri) [06:07:20] 03aaron * r29280 10/trunk/extensions/FlaggedRevs/FlaggedRevs.php: Message renamed [06:15:14] 03aaron * r29281 10/trunk/extensions/FlaggedRevs/ (3 files in 2 dirs): [06:15:14] * Remove egrant log action type [06:15:14] * Fix rights name typo [06:18:33] Millions and millions of bottles! 
[06:23:48] 03(mod) ajaxwatch.js applies ID to wrong element, only processes one; tooltip doesn't update - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12513 (10voyagerfan5761) [06:32:43] 03(mod) ajaxwatch.js applies ID to wrong element, only processes one; tooltip doesn't update - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12513 +comment (10voyagerfan5761) [06:47:14] hi [06:47:25] I have just installed v1.11.0 [06:47:30] and I can't work out how to make 'create page' appear in the navigation [06:47:47] I guess that you want to edit MediaWiki:Sidebar [06:48:00] i have looked at the group permission [06:48:06] kewlies [06:48:16] do I have to edit the php file [06:48:21] or is there a wizard for it [06:49:29] /index.php?title=MediaWiki:Sidebar&action=edit [06:49:32] on the wiki itself [06:51:15] ok yeah kewl sorry I am new to this php code [06:51:42] so I have gone there ... i tryed adding - ** createpage-url|createpage didn't seem to work [06:52:31] !sidebar [06:52:31] --mwbot-- To edit the navigation menu on the left, edit [[MediaWiki:Sidebar]] using its special syntax. For more details, see . [06:52:56] did it not update, or just not make a useful link? [06:54:40] it didnt make a usefule link [06:55:46] you can either make the link directly, eg: [06:56:19] ** Main Page|the main page [06:56:31] ** http://www.google.com|Google homepage [06:57:10] or you can create the named messages in the MediaWiki namespace -> "** createpage-url|createpage" would reference MediaWiki:Createpage-url and MediaWiki:Createpage ... and in them put the link and text respectively [06:57:37] yeah i just want a link that allows for someone to create a new page [06:57:47] and I am not sure of the code to do this [06:58:19] ahh, well, what would the page name be? 
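Splarka's two sidebar forms side by side — a direct link and a message-pair reference — in MediaWiki:Sidebar syntax; the Project:Create_a_new_page target is the hypothetical intermediary page suggested later in the conversation:

```wikitext
* navigation
** Main Page|the main page
** http://www.google.com|Google homepage
** Project:Create_a_new_page|Create a page
** createpage-url|createpage
```

The last line only works once MediaWiki:Createpage-url (the link) and MediaWiki:Createpage (the label) exist as pages in the MediaWiki namespace.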
[06:58:34] I have tried every combination of the same command as the permission array for it, createpage, createpage-url or Create Page like you suggested, and it keeps coming up with page not found [06:58:34] you'll probably have to link to an intermediary page, like Project:Create_a_new_page [06:58:47] and on that page, put in an <inputbox> and some instructions [06:59:16] because you can't link to a blank edit page without knowing the title first [06:59:30] (well, without some fancy extension made to do that, I guess) [06:59:49] so you can't make a simple "create a new page" link [06:59:55] !inputbox [06:59:55] --mwbot-- http://www.mediawiki.org/wiki/Extension:Inputbox [07:01:35] (mod) Add quicklinks after IPs, to common registries/queries - http://bugzilla.wikimedia.org/show_bug.cgi?id=12512 +comment (FT2.wiki) [07:04:26] cej: http://test.wikipedia.org/wiki/MediaWiki:Sidebar [07:04:29] see here for example [07:04:57] instructions can be put on the intermediary page, and above the editbox on the edit page with a preload= [07:15:03] thanks Splarka, worked like a charm [07:17:40] np [08:26:53] siebrand * r29282 /trunk/phase3/ (21 files in 2 dirs): Localisation updates for core messages from Betawiki (2008-01-05 9:18 CET) [08:27:13] TimStarling: I love JSTOR [08:30:59] Hi. I'd like to fix the fact that our front page is often out of date (a day behind), with day-dependent templates not reflecting the new date. [08:31:02] I'm aware of the purge command, but would like to know if there is a way to automatically update the cached version at some stage on the new day so that it will always work. [08:31:05] Currently we always seem to be "behind the time". [08:31:28] (until somebody uses the purge command) [08:31:40] grondin * r29283 /trunk/extensions/ConfirmEdit/ConfirmEdit.i18n.php: Updating french language from Betawiki (2008-01-05 09:30) [08:32:01] af_alias, so maybe you write a small script that will call the purge command every night at 00:00:05?
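The sidebar-plus-intermediary-page approach Splarka walks through above can be sketched as two config fragments. This is an illustration, not cej's actual setup: the page name "Project:Create a new page" and the labels are placeholder assumptions, and the second fragment requires Extension:Inputbox to be installed. First, the entry in MediaWiki:Sidebar:

```
** Project:Create a new page|Create page
```

Then, on the Project:Create a new page wiki page itself, an inputbox that lets the visitor type a title and jump to the blank edit page:

```
<inputbox>
type=create
width=30
buttonlabel=Create page
</inputbox>
```

This works around the limitation noted in the log: a plain sidebar link cannot open a blank edit page because the title isn't known yet, so the inputbox collects the title first.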
:) [08:32:32] enhydra: So no "automatic" solution? [08:32:51] how does one write such a script anyway? One needs to push the button or be logged in, not so? [08:32:53] I suspect that there is a complicated one that I don't know about... [08:33:06] heh [08:33:16] I had an idea I suggested to Wikinews... [08:33:40] af_alias, all you need is to retrieve http://path-to-your-wiki.com/wiki/Main_Page?action=purge in any way [08:33:42] have logged in users randomly make an action=purge ajax call via site-wide JS script to the main page, like 1:1000 chance on every visit [08:34:10] (anonymous users can't action=purge without a GET followed by a POST usually) [08:34:27] enhydra: That doesn't work, does it? There is a button you need to push for confirmation [08:34:42] only if anonymous, afaik [08:34:43] Splarka: My understanding is that the page is cached separately for non-logged in users [08:34:54] af: a purge is a purge [08:35:02] the caching is different, but a logged in purge purges it for anon too [08:35:31] currently if logged in, I see the updated one; on another machine, I still see the old page (browsing anon) [08:35:40] do you have server access? [08:35:45] ? [08:35:51] can you edit LocalSettings.php ? [08:35:54] no [08:35:58] I'm just a contributor [08:36:03] ahh well [08:36:10] anyway, try the action=purge on the logged in machine [08:36:17] and then check it on the other machine right after [08:36:40] or ask the people that have access to LocalSettings.php to enable ... [08:36:43] $wgGroupPermissions['*']['purge'] = true [08:37:06] who has access to that? [08:37:25] how should we know?
you haven't told us which wiki ^_^ [08:37:27] ok, purging as logged in updates it for the anon user as well [08:37:31] usually a server owner and people who have so-called "shell access" [08:37:52] af.wp - I didn't realise it makes a difference [08:38:02] The issue with the ajax solution is that we're a small community - mostly in the same timezone [08:38:03] there are a lot of personal wiki installs on the web [08:38:08] ahh [08:38:10] ah ok [08:38:33] wikipedia survives by rabid caching, so as not to make the servers melt [08:39:17] a script that calls a get and then a post is possible too, I think... [08:39:24] yah [08:39:31] there's special code in MediaWiki that reduces the cache time of the main page. [08:39:32] or a simple bot script, but then you just have to find someone who can run it [08:39:53] JeLuF: elaborate for great justice ^_^ [08:39:55] real problem in a community with only 3 really active non-technical people [08:40:01] pardon? [08:40:25] reduces by how much? [08:40:44] something like 10 minutes [08:41:32] Is this active on all WikiMedia projects? [08:55:02] I thought so... I'm searching for it [09:04:15] siebrand * r29284 /trunk/extensions/ (14 files in 14 dirs): Localisation updates for extension messages from Betawiki (2008-01-05 9:43 CET) [09:07:18] can't find it [09:08:03] Hello [09:08:18] hello [09:09:59] I wonder if there is some kind of article rename with tagging or something similar. I mean an article named "Tetris" could be automatically renamed to Tetris_thenameofthetag [09:10:24] (NEW) horizontal scrollbar in IE when wikitable width = 100% - http://bugzilla.wikimedia.org/show_bug.cgi?id=12514 trivial; low; MediaWiki: Page rendering; (steppres) [09:10:40] (mod) horizontal scrollbar in IE when wikitable width = 100% - http://bugzilla.wikimedia.org/show_bug.cgi?id=12514 (steppres) [09:20:01] siebrand: do you review those localisation updates for bad HTML before you commit them?
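The nightly purge script suggested earlier in this thread ("call purge command every night at 00:00:05") can be sketched in Python. This is a minimal sketch, not the script anyone in the channel actually ran: the wiki URL is the placeholder from the log, and, per the discussion, an anonymous purge presents a confirmation form on a plain GET, so the sketch confirms with a POST.

```python
import urllib.request

# Placeholder URL taken from the log; substitute your own wiki's main page.
MAIN_PAGE = "http://path-to-your-wiki.com/wiki/Main_Page"

def purge_url(page_url: str) -> str:
    # Append MediaWiki's purge action to a page URL.
    return page_url + "?action=purge"

def purge(page_url: str) -> int:
    # Anonymous users get a confirmation form on a plain GET, so send an
    # empty POST body to confirm the purge in a single request.
    req = urllib.request.Request(purge_url(page_url), data=b"")
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Run nightly from cron (cron has minute granularity, so "just after midnight" rather than exactly 00:00:05). As noted in the log, a cleaner alternative is asking whoever has LocalSettings.php access to grant anonymous users the purge right, which removes the confirmation step entirely.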
[09:20:38] TimStarling: usually about once a week, based on localisation checks Nikerabbit generates. [09:20:56] siebrand * r29286 /trunk/extensions/ (13 files in 13 dirs): Revert 29285. Many encoding issues were introduced. Let's keep it at UTF-8. [09:21:03] TimStarling: but mostly Nikerabbit does those fixes, which I then commit. [09:21:27] TimStarling: there can be short periods with minor issues. [09:22:38] and you don't have any trust checks on your editors, do you? [09:22:56] mostly I'm thinking about malicious javascript [09:23:34] TimStarling: ah. Well, that would almost certainly be caught, either directly on Betawiki, or in pre-commit checks [09:23:51] rotem * r29287 /trunk/phase3/RELEASE-NOTES: Module changerights is currently disabled. [09:24:10] TimStarling: we have basically only 3 users committing: me, Nikerabbit and Grondin. [09:24:14] do you review the diffs for malicious javascript before you commit them? [09:24:44] TimStarling: I review the diffs manually before committing. There shouldn't be any javascript in them... [09:25:00] ok [09:25:41] TimStarling: so if you ever spot it, shoot it on sight, unless the script makes sense and is explicitly mentioned in the commit message. [09:26:21] TimStarling: I think I have only once committed some css to a localisation file. [09:26:43] TimStarling: that was to circumvent a ltr/rtl issue (in ku or kk, IIRC) [09:27:01] I don't review them, I was wondering whether I should [09:27:17] but ideally a check for malicious javascript and web bugs should probably be automated [09:27:28] that way we won't have anything like that slipping through [09:27:50] TimStarling: 'monobook.css' in MessagesKu_latn.php [09:28:40] TimStarling: I use a shell export for MessagesXx.php now, so I guess I can ask Nikerabbit to put a check in there on the export file.
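The automated check for "malicious javascript and web bugs" that TimStarling and siebrand discuss above could be sketched like this. This is a hypothetical illustration, not the actual Translate-extension export code: the function name and the pattern list are assumptions about what a naive pre-commit scan over exported message text might flag.

```python
import re

# Patterns that commonly indicate embedded scripts or web bugs in
# localisation message text. Illustrative, not exhaustive.
SUSPICIOUS = [
    re.compile(r"<\s*script", re.IGNORECASE),     # inline <script> tags
    re.compile(r"\bon\w+\s*=", re.IGNORECASE),    # event handlers: onclick=, onload=, ...
    re.compile(r"javascript\s*:", re.IGNORECASE), # javascript: URLs
]

def flag_messages(messages: dict) -> list:
    """Return the keys of messages that look like they contain script."""
    return [key for key, text in messages.items()
            if any(p.search(text) for p in SUSPICIOUS)]
```

A check like this would run over each exported MessagesXx.php before commit and fail loudly on any hit, matching siebrand's "shoot it on sight" policy; legitimate exceptions (like the ltr/rtl CSS workaround mentioned in the log) would need to be whitelisted explicitly.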
[09:28:48] (see export.php in extensions/Translate/) [09:29:19] (afk) [09:29:37] TimStarling: if you have an easy check, feel free to put it in :) [09:30:48] grondin * r29288 /trunk/extensions/ (13 files in 13 dirs): Localisation updates for extensions from Betawiki (2008-01-04 09:43 UTC) [09:40:06] Someone here? [09:41:16] Forrester, I think that yes [09:41:22] just you, 167 idle machines, and 1 sarcastic reply (just ask your question ^_^) [09:41:29] hi enhydra :) [09:41:33] so [09:41:38] er.... [09:41:45] you see [09:41:46] http://de.wikipedia.org/wiki/Bild:Toni_Ruettimann.jpg [09:41:48] ? [09:42:31] at the bottom: [09:42:33] Verwendung ("Usage") [09:42:35] Die folgenden Artikel benutzen diese Datei: ("The following articles use this file:") [09:42:46] it means what pages use that image [09:42:57] yes, and? [09:43:27] i want to get a list of that [09:43:30] for javascript [09:43:45] while on that page or while elsewhere? [09:43:46] and i am sure i need the element id to do that [09:43:56] while i am on that page [09:44:44] document.getElementsByTagName('ul')[1].getElementsByTagName('li') should get the array of li there [09:45:18] (although it would break if someone added a