[00:01:41] anyone have any ideas why my interwiki links from version 1.4.15 wiki to 1.7.1 wiki work but are displayed as if they didn't? example: http://pse.tunk.org/wiki/Test [00:02:29] Zet: well, one hint is: what the links point to is not relevant. they could point anywhere. [00:02:43] Zet: do other interwiki links on that wiki work as expected? [00:03:00] well I don't have other interwiki links on there [00:03:13] then make some [00:04:11] I made one, the same happens [00:05:40] oh wait [00:05:43] it's just the CSS [00:05:53] interwiki links have a different class [00:06:05] and for some reason somebody edited that class to be the same as broken links [00:11:15] Woohoo... everything went smoothly in my install. WOOT! [00:19:40] ok yeah, all is cool now [00:19:51] bye [00:24:19] 03(mod) Addition of "LIKE" parameter to doQuery() - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12127 (10innocentkiller) [00:24:52] <^demon|busy> If anyone with a good mind for the QueryPages can take a look at that, I'd like some input. [00:39:10] Hello. i don't know if this is the right place for me to come for help, but i sure do need it! I managed to screw up a client's wiki to an appalling degree, and I don't know how to fix it. [00:39:41] this morning it was just ugly, now it's incoherent and linked to absolutely nothing. [00:40:38] i almost think i should try to reinstall the mediawiki program and start from scratch. i downloaded the latest version to his server, in its own directory. but beyond that, i am lost. i just don't know what to do! [00:42:15] Hi, I'm having trouble installing an extension on my wiki. When I add a "require_once" in my LocalSettings, the wiki disappears, giving no error message, just a blank page. Here is my MW version: http://www.teachanddiscover.net/wiki//index.php?title=Special:Version [00:43:51] lrbabe: you're going to want to display errors inline or take a look at the error log [00:44:38] where can I find the error log? [00:44:40] lrbabe: are you adding the require_once to the END of the file? [00:44:56] yes, to the end, but before the ?> [00:45:40] mind linking to the extension? (there might be an error in the code or something) [00:46:44] lrbabe: error log location depends on your hosting environment [00:46:59] Skizzerz, I hope not, this extension is used on Wikiversity: http://www.mediawiki.org/wiki/Extension:Quiz [00:48:03] ok, is the file for the extension located in the correct area? (aka is it in /extensions/Quiz instead of just /extensions?) [00:49:40] Could it produce a blank page without displaying any error? [00:51:04] I wonder if it could be a problem with the PHP version (4.4.2) [00:51:45] if it's not the right path? I think it depends on how the error messages are set up and such, but normally it displays some sort of fatal error message [00:52:00] does anyone know how to properly configure the $wgImportSources entry? [00:52:16] yes [00:52:30] first of all, you have to know what's in your interwiki map [00:53:00] Skizzerz: ok, thanks, I'll try to take a look at my PHP conf [00:53:01] then, you can set $wgImportSources = array( 'wikipedia', 'wiktionary' ); [00:53:11] or whatever interwikis you want in the array [00:53:36] i see [00:53:45] so, those urls are in the database then [00:54:09] <^demon|away> In the interwiki table.
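[Editor's note: a minimal sketch of the debugging step Skizzerz suggests here, using the same two lines the channel's !errors factoid gives later in this log. The include path assumes the extensions/Quiz layout asked about above; adjust for your install.]

    # at the very top of LocalSettings.php, to surface the fatal error
    # hiding behind the blank page:
    error_reporting( E_ALL );
    ini_set( 'display_errors', 1 );

    # the include itself goes at the end of the file, before any closing ?>
    require_once( "$IP/extensions/Quiz/Quiz.php" );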
[00:55:13] cool [00:55:46] ok, that seems to be working [00:55:54] DAPence: I've found http://www.mediawiki.org/wiki/Extension:Special_page_to_work_with_the_interwiki_table to be helpful in managing and examining the table [00:56:26] cool [00:56:27] thanks [00:57:22] i still wish you could also import images and templates and such, but, just a little editing i guess [00:59:44] import templates just like normal pages [00:59:55] oh? [01:00:10] i see [01:00:35] there really is no difference between a page in the Template: namespace versus something in another namespace besides the fact that they're easier to transclude on pages [01:00:59] what an age we live in [01:01:01] thanks [01:01:37] np :) [01:02:48] Does anyone have info on the configuration of the new page patrol on en.wb? [01:03:10] ie. Is it different from enwiki? [01:05:30] is there a way to tell it to not export/import templates? [01:05:44] not sure [01:05:50] why? [01:07:21] DAPence: If you don't want the content of a page to be imported, use <noinclude>. If you don't want the {{...}} in a page to be expanded, use <nowiki>. [01:07:45] Algo: isn't that for inclusion, not importing [01:08:06] I thought that's what we were talking about; sorry =) [01:08:16] it's fine :) [01:08:21] Import doesn't work like that... to import a template, you have to ask for it. It doesn't happen automatically [01:08:38] Are we really talking about Special:Import? I think DAPence may be confused [01:08:58] yes, we're talking about Special:Import, at least, I think we still are... [01:10:00] DAPence: do you mind explaining what you mean when you said "is there a way to tell it not to export/import templates"? [01:10:04] Well it doesn't import templates automatically or anything. You import them like any other page [01:10:04] no, i was just wondering if you could exclude templates and images from the export [01:10:25] or, the call for a template [01:10:27] can't export images themselves anyway, just the page they're on [01:10:48] ahh... you mean you don't want them to show up as redlinks afterwards [01:10:57] yes, that's it [01:10:59] no can do; you have to remove that after import [01:11:00] and you can't specify what to include content-wise on the export :) [01:11:10] ok, cool [01:11:24] thanks [01:12:02] well, since i've destroyed this wiki install ... is there a way to mass remove entries, or just redo the install [01:12:18] mass remove entries? [01:12:37] You can delete the database tables and re-run the installation [01:12:39] of articles? settings? [01:12:52] well, i have a test wiki here to check out these import methods [01:13:05] hmm [01:13:09] most changes you make to LocalSettings.php can simply be reverted without harming anything [01:13:36] no big deal, mediawiki installs in like 20 seconds [01:13:37] Info on new page patrol anyone?? please? [01:13:49] thanks for the help [01:14:42] Mike_lifeguard: i'd assume they're the same version, but I don't know [01:15:12] I assume they're the same version (of the extension) as well... but it seems that they're configured differently. [01:15:30] I'd like to confirm/disconfirm and ask about changing configuration depending on those answers. [01:15:38] Not urgent, but still... [01:18:55] Mike_lifeguard: i'd recommend opening a request of some sort on bugzilla, people with access to those files seem to be there frequently [01:23:02] meh.
a bug report isn't really needed just to get info [01:24:30] well, it could contain the request for configuration changes as well [01:24:48] yeah but we don't know if we want it changed because we don't know how it's configured [01:25:20] we can't really make heads nor tails of its behaviour. It used to look exactly the same, but now it's acting differently. [01:25:29] Mike_lifeguard, what is it you want to know? [01:25:32] we'll just wait; there's no big emergency [01:25:45] are admin pages autopatrolled? bot pages too? [01:25:58] and can users patrol their own pages? [01:26:00] I assume it uses the ordinary autopatrol permission for that. [01:26:08] !patrol [01:26:12] !patrolling [01:26:25] @search patrol [01:26:25] Results: [] [01:26:31] Well, anyway, by default I think sysops are autopatrol plus patrol, and no one else is anything. [01:26:31] guess not -_- [01:26:51] autopatrol = your edits are automatically patrolled, patrol = you can patrol edits (I think not including yours, not sure). [01:27:11] This distinction was made around 1.10ish; prior to that you were allowed to patrol your own revisions and there was no autopatrol. [01:27:47] ok. it seems that my pages (admin) are not autopatrolled (as seen by another admin) [01:28:00] apparently the same is true of other admin-created pages [01:28:09] Hmm, maybe gmaxwell didn't add that. [01:28:11] isn't there a patrol log? or is that disabled on wm wikis? [01:28:22] There's a patrol log, if patrolling is enabled. [01:29:13] yeah. let me check the log for those pages. . . [01:29:33] Mike_lifeguard: you appear to have autopatrol on en.wb [01:30:14] ok. I just found a page of mine that was not autopatrolled; it was patrolled by someone else [01:30:20] http://en.wikibooks.org/w/index.php?title=Special%3ALog&type=patrol&user=Mike.lifeguard&page=User+talk%3AAly4halo for example [01:30:27] so... can a page be patrolled more than once? [01:30:57] http://en.wikibooks.org/w/index.php?title=Special:Log&page=WP:TW ?? [01:30:57] No, I don't think so. [01:30:58] shouldn't be possible... but it might happen by some fluke or something... I really don't know [01:31:19] that page was created by me, but patrolled by darklama(!) [01:31:28] By default, bots and sysops have autopatrol. [01:31:53] It's possible that either 1) that default is not kept on WM wikis for some reason, or 2) there's a bug that prevents it from working for new-page patrol. [01:32:00] I suspect the latter, let me see. [01:32:16] every time I go to Special:Newpages, I see pages by admins which haven't been autopatrolled; it almost seems like it's only happening for me, since Mike.lifeguard and another admin haven't noticed [01:32:55] except for in the patrol log [01:32:59] Actually, I had noticed previously that pages by WK (for example) were patrolled immediately (I assume they were autopatrolled) [01:33:32] if it says (automatic) next to the patrol log entry, it's an autopatrol [01:33:45] and yet I have marked pages created by WK as patrolled [01:34:07] Are there any logged autopatrols? [01:34:10] What wiki is this anyway? [01:34:14] Yeah, so there's definitely something funky going on. [01:34:14] wikibooks [01:34:20] yes there are autopatrols in the log [01:34:21] (en.wikibooks) [01:35:36] Hmm, that rules out one hypothesis. [01:36:31] could it be by namespace? the autopatrolled example ^up there was User talk: and the one that was not was in the mainspace...? [01:37:04] can it be restricted by namespace? [01:37:07] Seems unlikely.
Do further data substantiate the hypothesis? [01:37:15] nope. but i'm looking [01:37:41] *Skizzerz checks 1.12 DefaultSettings.php... [01:38:12] Incidentally, it seems like you can only patrol pages you author if you have autopatrol; patrol alone means you can't. [01:38:20] Which is moot, in the default config. [01:38:52] I see in the code that there are two paths, one checks $wgUseRCPatrol and one checks $wgUseRCPatrol || $wgUseNPPatrol. [01:39:15] I suspect the former path is sometimes being followed for new pages. [01:39:35] Darklama: does [[Random page]] look autopatrolled to you? [01:41:30] I just created that page. In Special:Newpages, it's not highlighted, but there's no log entry for it (auto or otherwise) [01:42:31] no it doesn't [01:43:19] it appears to be patrolled, just by the fact that adding ?hidepatrolled=1 to the end of recent changes hides it [01:44:01] I don't see it in Special:Newpages with or without it hidden [01:44:05] but then again, I can't see the ! marks, so I can't make sure besides that [01:44:28] Special:Newpages may be cached [01:44:38] not sure though [01:44:52] finally shows up [01:45:23] it must be autopatrolled, won't let me mark it as patrolled [01:45:26] "Random page" is marked as patrolled in the database. [01:45:52] http://en.wikibooks.org/w/index.php?title=Special:Log&page=Random_page [01:45:54] Are you sure? [01:45:56] still not showing up in patrol log [01:46:02] oh. k [01:46:04] but it's definitely marked as patrolled [01:47:02] seems like there are still bugs to work out to me [01:47:04] *Skizzerz is guessing some sort of bug in the software... considering the job queue is empty, so it can't be a pending task, and it hasn't shown up after I cleared cache and action=purge'd the page [01:47:49] Mike_lifeguard, mysql> SELECT rc_patrolled FROM recentchanges WHERE rc_namespace=0 AND rc_title='Random_page'; gives rc_patrolled = 1 (1 row in set (0.01 sec)) [01:48:00] But no matching patrol log. That's definitely a bug. [01:48:11] Job queue isn't used for patrolling. [01:48:25] ok. shall we report it to bugzilla, or you? [01:48:27] I'll try to reproduce it locally. [01:48:37] Well, if I can figure it out and commit a fix right here, there's no need to use Bugzilla. [01:48:48] If not, sure, I'll file the bug. [01:48:53] oh. cool beans [01:50:08] Works as expected locally, at least the first time I try it. [01:50:33] It appears fairly reproducible, though, on enwikibooks. [01:55:50] if it's the same extension as enwiki, why would it be a problem here, but not there? [01:56:58] Who says it isn't a problem there? [01:58:10] I assume it isn't... maybe it is [01:59:43] Then, what difference is there between "locally" and "on enwikibooks" that would account for the difference in behaviour? [02:00:21] *amidaniel wonders what extension they're chatting about [02:00:24] perhaps it has something to do with sharing servers? (/me is assuming the "locally" doesn't get as many hits as a wmf site) [02:00:30] new page patrol [02:01:01] Ah [02:01:04] actually, I don't think that the base of that is an extension, but rather built into the software. There might be an extension to change the UI... [02:01:24] but I really don't know since I don't have those rights :P [02:01:41] amidaniel, autopatrol doesn't seem to be working consistently on enwikibooks.
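[Editor's note: for reference, a minimal LocalSettings.php sketch of the patrol configuration under discussion; the variable and right names ($wgUseRCPatrol, $wgUseNPPatrol, patrol, autopatrol) are the ones the participants name, and the grants mirror the stated defaults (sysops get both, bots get autopatrol).]

    $wgUseRCPatrol = true;   # patrol flags on recent changes
    $wgUseNPPatrol = true;   # patrol flags on Special:Newpages
    $wgGroupPermissions['sysop']['patrol']     = true;  # may mark edits patrolled
    $wgGroupPermissions['sysop']['autopatrol'] = true;  # own edits marked automatically
    $wgGroupPermissions['bot']['autopatrol']   = true;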
[02:01:55] *amidaniel peeks [02:01:57] amidaniel, some pages not getting autopatrolled, some pages getting autopatrolled, at least one page getting autopatrolled with no log entry. [02:02:40] How do we know it was autopatrolled if there was no log entry? O.o [02:02:42] It's all in the core software, by the way, no extensions. [02:02:59] amidaniel, it was created within the last hour, and rc_patrolled=1. So it got patrolled somehow. [02:03:15] Ah, okay :) [02:03:46] amidaniel, I'm looking at the enwikibooks database on the toolserver, but do you have any idea what log_params is for patrolling? It seems to be a bitfield? [02:03:52] Hmm .. I don't see any autopatrols in the (last 500 entries) of the patrol log [02:04:14] Simetrical: Is log_params in revision or page? [02:04:16] amidaniel, there are, on enwikibooks. [02:04:27] amidaniel, er, it starts with "log_". It's in logging. :) [02:04:36] *amidaniel smacks himself [02:04:48] revision and page don't know about patrolling at all, anyway. It's all in recentchanges for some reason. [02:05:03] I guess everything that's too old is "obviously" patrolled. [02:05:27] (That's why we have inconsistent behavior if you visit a page with or without an rcid parameter . . . kind of messy.) [02:05:38] Anyway, I tried a simple test on localhost, but it autopatrolled my new page as expected. [02:06:00] So I'm going to try to get a list of places where it screwed up, from the toolserver (since happily, enwikibooks doesn't suffer from weeks of replag like enwiki). [02:06:00] Simetrical: well, it is called 'recent changes patrol' ;) [02:06:17] uh, enwiki replag hasn't been weeks for ages [02:06:17] flyingparchment, nice design principle, use the name to justify the functionality. [02:06:25] Well, I haven't checked for ages, so that works out. [02:07:44] *amidaniel gives flyingparchment a box of cookies for reducing the enwiki replag to 0 :D [02:08:08] Alright .. just a sec, I'm in the middle of a million things at once [02:08:42] . . . LogPage has both a static method *and* a member variable named actionText? I guess it's not ambiguous, since functions aren't first-class in PHP . . . [02:10:13] Simetrical: log_params looks like a revid to me [02:11:32] D'oh. [02:11:40] That would explain things, wouldn't it. :) [02:11:49] *Simetrical stabs untyped integers [02:11:56] So then what stored whether it's an autopatrol? [02:12:17] Ah, second param. [02:12:29] Second param? [02:12:36] I was trying to figure that out mahself [02:12:47] SELECT * FROM logging WHERE log_type='patrol' AND log_params LIKE '%\n%' LIMIT 10; [02:12:50] Then it works nicely. [02:13:03] Actually I think there's a third param, too. [02:13:06] Oh, that's right, we do that whole CR-delimited params field [02:13:08] *amidaniel shivers [02:13:15] :) [02:14:01] Of course if you split it to another table, not sure it would be efficient to query it anyway, at least not in conjunction with main-table stuff. [02:14:05] But that's neither here nor there. [02:14:16] (put it in a where clause, I mean, not just join it) [02:14:21] Hmm .. okay, so I figured when the second log_param is 1 that means it's an autopatrol, right? [02:14:27] Why then am I getting anons back? [02:14:35] Oh, oh, I see. [02:14:39] No, they're all three lines. [02:14:45] Gah, wait, misread it [02:14:48] You need, like, 0\n1 or something. [02:15:00] Yeah, that works fine [02:15:03] %\n1 would be best. [02:15:44] Okay, one thing, I thought newpage patrol only worked in ns-main? [02:15:52] Why am I seeing many entries in userspace?
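[Editor's note: a hypothetical sketch of reading one patrol-log row in PHP, following what the discussion works out: log_params holds one value per line, the first value is a revision id, and (as settled a bit further below) the auto flag is the third value; the middle value is never identified in this conversation.]

    $params = explode( "\n", $row->log_params );
    $revId  = intval( $params[0] );   # revision that was patrolled
    $auto   = !empty( $params[2] );   # third value: 1 = autopatrolled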
[02:16:45] Both auto and non-auto [02:17:11] Evidently it works in all namespaces. [02:17:59] Ah, indeed ... okay, ignore that :) [02:18:10] *Simetrical tries to left join logging to recentchanges to find more mysteriously patrolled pages [02:18:11] new mediawiki install, logged in, preference page gives me a white blank screen. Any suggestions? [02:18:30] Okay, and autopatrol right is sysop, correct? [02:18:33] !errors | timothywcrane_ [02:18:33] timothywcrane_ : To see PHP errors, add this to the very top of LocalSettings.php: error_reporting(E_ALL); ini_set("display_errors", 1); Fatal PHP errors usually go to Apache's error log - also check the error_log setting in php.ini (or use phpinfo). For more details in wiki error reports, set $wgShowExceptionDetails = true; and $wgShowSQLErrors = true; For full debug output, set $wgDebugLogFile to some path you like. [02:18:41] amidaniel, and bot, by default. [02:19:34] Okay [02:19:45] thank you for the bot lead, Simetrical [02:21:38] *amidaniel watches his sql syntax grow longer and longer and longer ... [02:24:04] Alright, so I see only four sysops who have ever autopatrolled anything [02:25:21] What were the instances where pages weren't autopatrolled? [02:25:53] *Simetrical scrolls up [02:26:01] You could query for them. :) [02:26:40] *amidaniel strains his brain to come up with a query for that [02:27:06] amidaniel, http://en.wikibooks.org/wiki/WP:TW [02:27:18] I really wish I could reproduce locally. [02:27:22] *Simetrical tries again [02:29:46] Ugg .. how do I stick a field into a LIKE string? [02:29:54] Um, CONCAT? [02:29:56] That's not good. :) [02:30:39] Can't reproduce locally at all. [02:30:52] I don't see any rhyme or reason to it. [02:30:56] *Simetrical kicks inconsistent bugs [02:31:08] No worky [02:31:26] Shall I file a bug report, or do you want to do some more digging first? [02:31:49] Any idea why a template would display the information twice? see: http://pastie.caboo.se/122394 for an example [02:32:15] D'oh .. if I could spell that is [02:32:17] *amidaniel stabs self [02:33:40] here is the template code: http://pastie.caboo.se/122396 [02:34:58] and the page where the template is used: http://pastie.caboo.se/122399 [02:35:01] pastebin? [02:36:14] url plz [02:38:01] Parse error: syntax error, unexpected T_IF in /home/content/t/i/m/timothywcrane/html/mediawiki/LocalSettings.php on line 13 [02:40:17] Gahh .. no wonder that query was taking forever .. patrol hasn't been enabled forever :D [02:42:00] Alright, the only one I can find is Chocolate Chip Coffee Cake [02:42:05] Is that the one we already had? [02:42:10] to find line 13 I count all lines, not counting those that start with #, right? [02:43:52] That page hasn't been mentioned before. What are we looking at WRT that page? [02:44:52] you count all lines, including those starting with #, starting at 1 [02:47:10] thnx [02:48:40] amidaniel, the only what we already had? [02:48:40] but it is a commented (#) line, could it still produce an error? [02:49:01] it shouldn't... [02:49:38] do I count all lines [02:49:59] uh, I think so [02:50:04] just use an editor with a 'go to line' function [02:50:09] I know this sounds a bit dumb... but how do I enable SVGs for my wiki? I'm trying to port the help files and they include one. [02:50:10] (don't all editors except notepad have that?) [02:50:27] flyingparchment, last I checked even Notepad has it. [02:50:30] (notepad is bad for editing anyway) [02:50:31] Ctrl-G, I think. [02:50:37] At least some versions have it.
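[Editor's note: a minimal LocalSettings.php sketch answering the SVG question above, assembled from this conversation: svg has to be added to the allowed file types, and a rasterizer has to be configured ($wgSVGConverter set to 'ImageMagick', or to 'rsvg', which Simetrical notes below is what Wikimedia uses).]

    $wgFileExtensions[] = 'svg';   # allow .svg uploads
    $wgSVGConverter = 'rsvg';      # or 'ImageMagick', whichever is installed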
[02:50:40] oh, never knew that [02:50:54] Try notepad++ for a good coder's notepad [02:50:55] thank you, I have never edited my files before. [02:51:03] gedit has it [02:51:08] found the line [02:51:44] if( defined( 'MW_INSTALL_PATH' ) ) { [02:51:56] Aha, I tracked down a relatively interesting bug, I think .. just a sec [02:52:11] gedit has the line number of the cursor displayed in the lower right at all times. [02:52:12] j_smith: I think SVG is automatic, are you having problems with it? [02:52:17] *SVG support [02:52:20] I'm a total newb, but learn fast, plz bear [02:52:31] Simetrical: by default? surely that's too confusing for gnome ;-P [02:52:32] ".svg" is an unwanted file type [02:52:33] Skizzerz, it's not, it has to be enabled. [02:52:34] List of allowed file types: png, gif, jpg, jpeg [02:52:53] weird, it enabled it on my wiki via the install process... [02:53:02] flyingparchment, there are three pieces of interface feedback it provides by default: line, column, and insert/overwrite. All in the lower right. [02:53:07] I see it with my own eyes! [02:53:09] oh, right [02:53:15] i can't read.. i thought you meant on the left [02:53:31] you have to invoke statusbar [02:53:34] Someone really ought to convert the help-image to PNG to avoid this for noob-sysops like me. :) [02:53:35] Skizzerz, not if you installed the tarball from mediawiki.org. [02:53:40] of course, I have ImageMagick (or w/e it's called)... that might have something to do with it (or not) [02:53:44] I think. [02:53:46] Hmm, no, right. [02:53:50] Maybe yes, if it detects ImageMagick. [02:53:51] I take it back. [02:53:59] j_smith, what help-image? [02:54:18] Image:PD-icon.svg [02:54:19] Image:Wikimedia-logo.svg [02:54:21] Image:Tools.svg [02:54:26] from http://www.mediawiki.org/wiki/Help:Copying [02:54:28] vs. [02:54:43] Only one I've found ... but there's an entry in the patrol log with a bad revid [02:55:17] I don't have ImageMagick installed, and I hope to avoid installing it if I can. [02:55:45] Or .. something screwy [02:55:51] Would I need ImageMagick to make those files work, Simetrical? [02:55:56] amidaniel, only one you've found of what? [02:56:30] The edit that created WB:MEDIA has the revid 1023360, which was patrolled here [02:56:34] *amidaniel is really confused ... [02:56:47] j_smith, those icons are from Wikipedia or Commons or something. Of course you're going to have to install extra fancy stuff to get Wikimedia material to work properly. You definitely need at least ImageMagick for SVGs, but rsvg is what Wikimedia uses, so that would work best for a Wikimedia dump. [02:56:54] amidaniel, what are you looking for again? [02:57:08] don't use imagemagick for svg, it's crap [02:57:14] Meh. I'll just convert them to PNG. [02:57:21] Anyway, I already found a bug, the one that was patrolled with no entry in the log. [02:57:34] Not to mention that autopatrol doesn't work at all, most of the time. [02:57:54] errr... oh, dangit. *sigh* [02:59:07] Simetrical: Trying to find the cause of the bug .. and in my search am finding many more bugs :) [02:59:29] Nothing from Whiteknight is being autopatrolled [02:59:49] Oh, wait a minute ... isn't there a user preference to disable autopatrolling? [03:00:27] That would explain something, wouldn't it? [03:00:28] Is there? [03:00:30] I don't think so. [03:00:37] if there is, I'm not seeing it [03:00:39] I don't recall seeing any such thing in the code. [03:01:13] what?
stuff from me gets autopatrolled [03:01:22] unless you are talking about a different whiteknight [03:01:31] wknight8111: I presume it's you .. on wikibooks? [03:01:43] i thought Mike_lifeguard was the one with autopatrol problems [03:01:46] yes, it is [03:02:08] at least, autopatrol worked on a test page I made, i haven't checked it after that [03:02:09] yeah. well we're not really sure what the problem is, are we? [03:02:27] apparently darklama can patrol pages I've created (and yours too) [03:02:47] and we found at least one that I made that's not patrolled at all (not even autopatrolled) [03:03:06] Alright, unless the query's bad, here is every page I can find that was created by a sysop that wasn't autopatrolled: http://mediawiki.pastey.net/78091 [03:03:20] (sorry for having everything duplicated in the table .. I was lazy :D) [03:04:00] almost all of those pages were created before the patrol feature went live [03:04:11] Or, rather, everything created by a sysop that was manually patrolled .. not necessarily everything that was ever autopatrolled [03:04:12] at least, before I was aware that it was live [03:04:27] amidaniel, you need a left join on logging if any sysop's un-autopatrolled new page hasn't been patrolled yet. [03:04:56] But that's probably not an important case. [03:05:28] wknight8111: Ah, yes .. it appears someone went through and patrolled a lot of old pages [03:05:31] Lemme tweak [03:05:45] ug_group != 'sysop'? Won't that just return pages created by bots or bureaucrats? [03:05:49] When did it go live? [03:05:53] Surely you want ug_group = 'sysop'. [03:06:00] O.O [03:06:02] *amidaniel dies [03:07:17] that explains why only WK and SB Johnny showed up [03:07:24] *amidaniel waits [03:07:28] amidaniel, mysql> SELECT MIN(log_timestamp) FROM logging WHERE log_type='patrol'\G gives MIN(log_timestamp): 20071116180443 (1 row in set (0.00 sec)) [03:07:34] Ah, more like it [03:08:00] amidaniel, also, remember, autopatrolling is the third argument, not the second. [03:08:14] Third argument? hm? [03:08:44] I only see two log_params [03:08:57] Here's the latest attempt: http://mediawiki.pastey.net/78092 [03:09:15] Which includes WP:TW, so we're getting closer :D [03:10:52] If I adjust for Simetrical's timestamp, I get http://mediawiki.pastey.net/78093 [03:12:13] Bletch .. with namespace: http://mediawiki.pastey.net/78094 [03:12:15] amidaniel, here's what I got. http://mediawiki.pastey.net/78095 [03:12:19] Which is correct, though? [03:13:08] Yours is wrong, anyway. http://en.wikibooks.org/w/index.php?title=Special:Log&page=User_talk:IdaKnow [03:13:12] Your autopatrol check is off. [03:13:28] "AND log_params LIKE '%\n0'" ... how are you checking the revid in the log? [03:13:41] Ugg .. fucking sql [03:13:42] Um, why do I need to check the revid in the log? [03:13:51] I just want to know that it's not an autopatrol. [03:13:52] Oh, wait. [03:14:03] You need to know the log entry concerns that revision [03:15:04] Hmm. [03:15:19] Oh, shit, it is the third param [03:15:42] Right. [03:15:51] Aha, I caught a whole lot more. [03:16:08] These are ones with no patrol logs at all. [03:16:30] You're right about the revision, bleh. [03:16:45] *Simetrical ignores it for the moment [03:17:08] Okay, that should be right now: http://mediawiki.pastey.net/78096 [03:17:15] Those are the ones with a manual patrol log [03:17:25] I got those too. [03:17:26] Same list. [03:17:32] But there are other buggy ones.
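[Editor's note: a PHP sketch of the query shape this exchange converges on, with both corrections folded in: the revision id is matched as the first log_params value (via CONCAT, suggested earlier), and the auto flag is read as the third value, so a trailing 0 means manually patrolled. Table and column names are the real ones used in this log; the participants' actual queries lived in the long-dead pastey links.]

    // every new page created by a sysop that someone patrolled by hand
    $sql = "
        SELECT rc_title, rc_user_text
        FROM recentchanges
        JOIN user_groups ON ug_user = rc_user AND ug_group = 'sysop'
        JOIN logging ON log_type = 'patrol'
                    AND log_namespace = rc_namespace
                    AND log_title = rc_title
                    AND log_params LIKE CONCAT(rc_this_oldid, '\\n%\\n0')
        WHERE rc_new = 1";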
[03:17:41] There are tons that are autopatrolled with no log. [03:18:15] linky? [03:18:57] Try this: SELECT rc_this_oldid, rc_user_text, rc_namespace AS ns, rc_title FROM recentchanges LEFT JOIN logging ON rc_namespace=log_namespace AND rc_title=log_title WHERE rc_new=1 AND rc_patrolled=1 AND log_type IS NULL LIMIT 100; [03:19:04] Oh, wait. [03:19:09] Hmm, is that correct? [03:19:12] *Simetrical inspects it [03:19:34] Every new, patrolled entry from recentchanges with no corresponding log entry, sounds right. [03:19:57] Well, I need a further left join condition to include all cases, I think. [03:20:27] mysql> SELECT COUNT(*) FROM recentchanges LEFT JOIN logging ON rc_namespace=log_namespace AND rc_title=log_title AND log_type='patrol' WHERE rc_new=1 AND rc_patrolled=1 AND log_type IS NULL\G gives COUNT(*): 154 [03:21:16] http://mediawiki.pastey.net/78097 [03:21:52] Oh, wait. [03:21:59] No, I don't need the date cutoff, never mind. [03:22:05] Nothing should be patrolled before patrolling was enabled. [03:22:08] Wow... importing is a lot more work than i'd expect. [03:22:20] A lot of those are before it was enabled [03:22:41] that's probably not true. When it was first enabled, I went through and patrolled a bunch of pages in each namespace so we knew where patrolling came into effect [03:22:52] Oh no, misread it [03:23:12] Yeah, looks about right. [03:23:24] Mike_lifeguard, well, right. I misspoke: nothing should be patrolled *without a patrol log* ever, before or after it came into effect. [03:23:28] So all the ones I listed are errors. [03:23:32] ahh. yes [03:23:39] http://mediawiki.pastey.net/78098 [03:24:00] Simetrical: Indeed .. at least, I should think so :) [03:24:18] So, we clearly have found a bug .. now, the question that remains is how do we fix it? :D [03:24:34] I have no idea, until we can reproduce it. [03:24:46] Which is boring work, which is why brion usually does it. [03:25:09] hehehe [03:25:09] It would be interesting to run these queries on other wikis. [03:25:11] *Simetrical tries enwiki [03:25:16] Oooh ... fun! [03:26:05] *amidaniel waits [03:26:22] enwiki is where you always learn the importance of query optimization :D [03:26:39] *amidaniel is used to running his queries on databases of like 10 rows :) [03:27:07] *Simetrical tries EXPLAIN first [03:27:15] Looks like it should be fast. [03:27:34] Well, not *too* fast, evidently. [03:27:53] Seemingly scanning a few 10k of rows to find matching ones. [03:27:56] Oh well. [03:27:58] *Simetrical waits [03:28:33] *amidaniel suddenly realizes the lack of necessity of both of us running the query on the same wiki at the same time and goes to run it elsewhere [03:28:58] *Simetrical continues to wait [03:29:20] Geez, it's only like 70,000 rows max according to EXPLAIN, and no filesorts or anything. What's taking so long here? [03:30:31] Holy shit [03:30:32] 2589 rows in set (54.63 sec) [03:30:36] on dewiki [03:31:11] For which? [03:31:17] 30 rows of not-autopatrolled-at-all. [03:31:20] On enwiki. [03:31:21] 2589 rows in set (54.63 sec) [03:31:24] woops [03:31:33] Erm, of nolog [03:32:16] PatrolLog::record is maybe unreliable? [03:32:30] Hmm, it can fail on bad input. [03:32:49] Could be ... would have thought we'd have noticed it elsewhere [03:32:55] With deletion or blocking or something [03:32:55] If $change isn't an object, or not a RecentChange, and can't construct it, for instance. [03:32:58] No, this is patrol-only.
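[Editor's note: a hypothetical reconstruction, not the actual MediaWiki source, of the failure modes Simetrical describes here and just below: PatrolLog::record can bail out silently on bad input, which would leave an edit marked patrolled with no log row.]

    // sketch of the suspected shape of PatrolLog::record()
    function patrolLogRecordSketch( $change, $auto = false ) {
        if ( !( $change instanceof RecentChange ) ) {
            $change = RecentChange::newFromId( $change ); // guessed constructor
            if ( !$change ) {
                return false; // bad input: no log entry is ever written
            }
        }
        $title = Title::makeTitleSafe(
            $change->getAttribute( 'rc_namespace' ),
            $change->getAttribute( 'rc_title' )
        );
        if ( !$title ) {
            return false; // Title couldn't be constructed: silent again
        }
        // ...otherwise insert the row into the logging table...
        return true;
    }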
[03:33:10] Also if the Title can't be constructed. [03:33:32] Observation: it doesn't seem the Title construction uses the master. [03:33:37] So what if you had a couple seconds' replag? [03:33:44] Given that the page was just created? [03:33:52] I think that needs a $newMaster. [03:33:57] Oh, jeez .. that'll do it [03:34:00] Actually, that would explain total failure of everything, wouldn't it? [03:34:07] And only for new pages. [03:34:13] Yeah, should definitely use the master for such a query [03:34:24] It wouldn't explain not being marked patrolled at all, though. [03:34:48] Just on rough observation, they all seem to be clumped together at certain periods. I suspect they correlate to periods of high database lag [03:35:00] This sounds like a winner. [03:35:04] Yuppers [03:35:08] But only for one of the bugs. [03:35:24] Now how to fix it? Add a $useMaster param to makeTitleSafe()? [03:35:27] Hmm .. true [03:35:28] Seems inconsistent. [03:36:01] Why am I seeing output like this? http://rafb.net/p/JFTN7W19.html [03:36:02] I'm only assuming Title uses slave by default. Let me check. [03:36:10] The {{{inventor}}} stuff comes from what? [03:36:16] I'm sure it does .. in most cases it shouldn't matter [03:36:25] setuid, that's a template parameter, with no value provided. [03:36:30] I think adding that as a parameter would be acceptable, if a little awkward [03:36:32] !template | setuid [03:36:36] Feh. [03:36:44] amidaniel, for that one factory method? Out of, like, five? [03:36:56] Simetrical, Would it be solved by adding the right extension or template? [03:37:02] Is it the only one used in creating a new page? [03:37:04] I don't want to have to regex that out of the output [03:37:18] That or add a new constructor that always uses the master [03:37:26] setuid, no, you have to specify the parameter when you call the template, it will be replaced with the given input. [03:37:42] amidaniel, wait a sec, hmm. It just checks if Title is an object. Doesn't check if it exists. [03:37:46] So that might be a false lead. [03:38:02] *amidaniel looks at the code [03:38:03] Ok... I'm quite confused now. Everything under [[Special:Allmessages]] is red-linked on my wiki. What's going on here? [03:38:12] j_smith: That's normal. [03:38:18] j_smith, the pages only exist if you've customized the messages. [03:38:23] If a system message doesn't exist, it assumes the default value. [03:38:30] Simetrical, Hrm, uhm... I didn't write this content. The "Goguryeo" article is missing some params then? [03:38:33] Should it be flagged? [03:38:35] Oh, I see. Thanks Amidaniel and Simetrical. [03:38:50] setuid, I dunno, busy now. [03:38:53] ok [03:39:31] {{{inventor}}} ... [03:39:42] setuid, the template it's calling might be improperly used. [03:39:54] cite? What ext. adds that tag? [03:40:01] amidaniel, it's HTML. :P [03:40:08] bletch [03:40:13] I'm including the Cite extension already [03:40:21] amidaniel, I don't see any functions using slave here. Let me do some debug output. [03:40:26] require_once( "$IP/extensions/Cite/Cite.php" ); [03:40:35] I'll try to track it down [03:40:36] <cite> is unrelated to Cite or <ref>. <cite> is an HTML tag. [03:44:25] Good grief, debug_dump_backtrace() takes forever. [03:45:08] *setuid scratches his head [03:45:31] the References: section uses ref/cite, and must call Cite.php at some point, no? [03:48:47] is there any way to receive an email when any/all pages are edited?? [03:50:44] setuid, yes, presumably, but that has nothing to do with <cite>.
<cite> is an HTML tag with no special function; it's passed through to page output like any other plain HTML tag. [03:50:52] eldereko, yes, there's a config option somewhere IIRC. [03:50:56] !configuration | eldereko [03:50:56] eldereko : *All* configuration is done in LocalSettings.php. Do not edit any other file unless you want to get into serious hacking. You may want to *look* at the settings in DefaultSettings.php though, some are not in LocalSettings.php per default. [03:51:15] Simetrical, of course, I'm wondering why the {{{inventor}}}, {{{patent-number}}} and {{{country-code}}} aren't being passed through the proper parser(s) [03:52:51] setuid, are you sure they don't just have no value passed to them in a template? If you call a template without specifying a parameter, that text will be passed through literally if no default is specified, by design. [03:53:13] Simetrical, Where would they have their value passed? [03:53:17] amidaniel, there are no slave queries on localhost involving patrols when I save a page. Still a possible hypothesis in theory, would explain irreproducibility, but I don't see a mechanism presently. [03:53:35] setuid: In the template; i.e., {{infobox|param1=joe|param2=foo}} [03:53:39] Simetrical, Remember, I'm just rendering a local mirror of the latest wiki dumps... I haven't touched any templates or content [03:53:39] setuid, in the calling page. Read the help page on templates, wherever that is, if you aren't familiar with them. [03:53:53] @search template [03:53:53] Results: [templates] [03:53:55] amidaniel, I'm about ready to go to bed, will you file a bug report if you don't figure it out? [03:53:58] !template alias templates [03:53:58] Successfully added alias: template [03:54:02] !template | setuid [03:54:02] setuid : For more information about templates, see . The most common issues with templates copied from Wikipedia can be fixed by installing ParserFunctions and HTML Tidy. [03:54:06] I also have no edit mode, so there's no way to check that [03:54:07] Simetrical: Sure thing [03:55:34] I want to have the sysop emailed whenever a page is edited/created... is that possible? [03:56:16] Ok, here's an example: [03:56:19] *{{citation|last=Lee|first=Wha|year=Unknown Year|title=Forgotten Glory of Koguryo|url= http://www.kimsoft.com/KOREA/kogu.htm|publisher=Kimsoft.com}} [03:56:37] elderek1: There's an extension that does this, I believe. [03:56:50] Alternatively, you could ask him to subscribe to the RC feed [03:57:03] amidaniel: I couldn't find one... do you happen to know what it's called? [03:57:04] That renders as: [03:57:04] {{{inventor}}}, "Forgotten Glory of Koguryo", {{{country-code}}} {{{patent-number}}} [03:57:15] elderek1: Lemme peek [03:57:23] thanks! [03:57:59] elderek1: Try http://www.mediawiki.org/wiki/Extension:Email_notification [03:58:24] @search Email_notification [03:58:24] Results: [enotif] [03:59:19] could anyone help me figure out what's wrong with this template with a table: http://pastebin.com/d4e44019d ? [04:00:21] It's based off the Template:Infobox_software2 from wikipedia [04:00:25] *amidaniel wonders why everyone is pastebinning wikimarkup today .... [04:00:42] are there any plugins relating to Google, e.g. to help indexing the site, tell googlebot to lay off special pages, adwords .... [04:00:50] amidaniel, should I paste it in here? [04:01:04] test34_: No, no!
Just usually people link to pages on wikis [04:01:07] :) [04:01:12] 03(mod) Some way to specify heading level for new sections - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=9426 (10N/A) [04:01:17] test34_: Do you have ParserFunctions installed? [04:02:04] *setuid stares at the wiki markup and gets more confused [04:02:34] amidaniel, that's an extension? no I don't [04:02:56] !parserfunctions | test34_ [04:02:56] test34_ : "Parser functions" are a way to extend the wiki syntax. ParserFunctions is an extension that provides the basic set of parser functions (you have to install it separately!). For help using parser functions, please see . For details about the extension, see . [04:03:13] for Wikimedia, what would be the least expensive ajax call to make, to check if a user was in the user group 'sysop'? eg: http://test.wikipedia.org/w/api.php?action=query&list=allusers&aulimit=1&augroup=sysop&format=xml&aufrom=Splarka (yes, I see you there, MZMcBride) [04:03:35] *amidaniel wonders who Splarka's talking to :) [04:03:39] amidaniel, what syntax did I use that isn't supported? [04:03:51] test34_: {{ #if: ... and others [04:03:59] ok, thanks amidaniel [04:04:02] parserfunctions really should be merged into core [04:04:14] amidaniel: it is for a script I am working on, per MZM's request, per http://en.wikipedia.org/wiki/Wikipedia:Village_pump_%28proposals%29#Public_admin_identity [04:04:29] 75% of template questions in here are parserfunctions related [04:04:32] http://en.wikipedia.org/wiki/User:Splarka/sysopdectector.js [04:05:05] there seems to be a lack of efficient ways to check if a user is sysop, Listusers would be annoying/painful, scraping logs would be unreliable, etc [04:05:34] Splarka: api's pretty quick at it ... [04:05:37] Splarka: What's wrong with checking the API? [04:05:39] what percent of template questions in here are htmlTidy related? [04:05:50] ami/pathos: it still uses an index offset, like Listusers [04:05:56] it would be nice if you could specify an exact name [04:06:10] Splarka: You can put in an exact name and limit to one ... [04:06:20] sure, and I do, and am ^_^ [04:06:44] *amidaniel suddenly remembers that he was working on extending userinfo to use for users other than oneself and never finished ... [04:06:46] but still, it returns exactly one result, unless there are no sysops alphabetically after [04:07:03] so it's not optimal ^_^ [04:07:30] Bletch [04:07:35] pickypickypicky [04:08:00] Looks like I need the Citation extension, but that's broken in trunk [04:08:01] ugh [04:08:04] And all previous versions [04:08:13] well, yah, but it would also be nice, for example, if the groups could be returnable for one user (the public ones anyway) [04:08:17] thanks again amidaniel, that fixed it [04:08:25] http://rafb.net/p/vqKCxf49.html [04:08:38] Splarka: Yes, it will be so once I'm finished extending userinfo [04:08:42] 2 cents %_% [04:08:54] ami: cool *gives cookie* [04:09:11] In any case, using the api listusers is certainly not inefficient .. and using userinfo won't be any quicker :) [04:09:15] anyway, it also would be cool if an evil hack like this wasn't called for, some sort of native classifying of user pages by rights group [04:09:27] greetings all [04:09:35] Splarka: Ewww .... [04:09:43] amidaniel: is that related to http://www.mediawiki.org/wiki/API:Edit_-_User_group_membership ? [04:10:03] MZMcBride: Erm ..
no [04:10:08] *Splarka lets ami feel the pain [04:10:16] *amidaniel shivers [04:10:24] *MZMcBride didn't think so [04:10:49] i was just curious if efforts were being duplicated [04:10:59] Can someone help me debug this template crash with the Citation template? [04:11:09] Efforts on the api being duplicated? Why, I never! [04:11:30] setuid: when you enable cite.php, it causes the issue? [04:12:06] MZMcBride, When I enable Citation, yes. [04:12:11] Cite and Citation are both enabled [04:12:20] With just Cite, no errors (and broken citations) [04:13:01] do you have a link for Citation? [04:13:30] amidaniel: any idea of a time frame? a month? next year? whenever you get around to it? :) [04:13:44] whenever I get around to it :) [04:13:57] Or, if that takes too long, whenever someone else gets around to it :) [04:14:04] :) [04:14:22] the script that splarka created seems to work pretty well [04:16:04] I'm looking for it... it's in the extensions tree, but I can't find it in the upstream svn [04:17:28] http://svn.wikimedia.org/viewvc/mediawiki/trunk/extensions/Citation/ [04:17:36] so you're using an extension from svn that doesn't have a mediawiki page and isn't being actively maintained, and you're surprised there are problems? [04:17:47] MZMcBride: Hehe :) [04:18:34] setuid: what function are you after that Citation provides (or would provide)? [04:19:12] Well... let me expose my wiki and show you what's happening [04:19:40] you don't have to expose anything :) [04:20:03] i'm just not sure why you're even using Citation to begin with [04:21:03] http://65.172.152.98/mediawiki/wikis/en/ep/index.php/Goguryeo [04:21:09] Scroll to the bottom of that article [04:21:30] yeah... [04:21:47] ah.. the {{{inventor}}} nonsense [04:21:48] Something isn't being hooked, and I'm wondering if Citation is the missing bit [04:22:07] *{{citation|last=Lee|first=Wha|year=Unknown Year|title=Forgotten Glory of Koguryo|url= http://www.kimsoft.com/KOREA/kogu.htm|publisher=Kimsoft.com}} [04:22:09] renders as: [04:22:15] {{{inventor}}}, "Forgotten Glory of Koguryo", {{{country-code}}} {{{patent-number}}} [04:22:29] So what's missing? [04:22:44] the citation template [04:23:03] you got {{citation from wikipedia, i presume? [04:23:24] i'm rather tired [04:23:26] This is directly from the db dump [04:23:42] Everything except my skin is pristine, upstream bits [04:23:55] yeah, it's a template: http://en.wikipedia.org/wiki/Template:Citation [04:24:06] copy the template source code to your wiki and save [04:24:13] Right, I read that... [04:24:17] Copy it _where_ in my wiki? [04:24:23] to Template:Citation [04:24:30] !templates [04:24:30] For more information about templates, see . The most common issues with templates copied from Wikipedia can be fixed by installing ParserFunctions and HTML Tidy. [04:24:45] I read that page on Templates, it wasn't helpful at all [04:24:51] And seems completely unrelated to this issue [04:25:20] take the code here: http://en.wikipedia.org/w/index.php?title=Template:Citation&action=edit [04:25:25] I can't go running through 5.8 million articles, pasting in wikitext where it happens to be missing from the upstream dump [04:25:47] Ok, and where does that go? Parser.php? [04:26:01] http://65.172.152.98/mediawiki/wikis/en/ep/index.php/Template:Citation?action=edit [04:26:33] hrm, why isn't this part of the mediawiki standard templates? [04:26:36] actually, it looks like it's there already?
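[Editor's note: a hypothetical PHP helper wrapping the check Splarka describes a little further up, built on the exact list=allusers call quoted there. The caveat from that exchange applies: the API returns the first sysop at or after the requested name, so the result has to be compared against the name you asked about.]

    function isSysop( $wiki, $name ) {
        $url = "http://$wiki/w/api.php?action=query&list=allusers"
             . "&aulimit=1&augroup=sysop&format=xml&aufrom=" . urlencode( $name );
        $xml = simplexml_load_string( file_get_contents( $url ) );
        if ( !isset( $xml->query->allusers->u[0] ) ) {
            return false; // no sysops alphabetically at or after $name
        }
        return (string)$xml->query->allusers->u[0]['name'] === $name;
    }

    // e.g. isSysop( 'test.wikipedia.org', 'Splarka' )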
[04:26:45] mediawiki doesn't come with any templates [04:26:51] it comes with the template namespace [04:27:04] MZMcBride: he imported the wikipedia dump [04:27:12] yeah [04:27:24] The dump includes templates [04:28:27] *setuid reimports templatelinks.sql [04:31:10] http://www.theonion.com/content/news/study_finds_working_at_work [04:31:12] *setuid grins [04:31:53] so, in an exported xml file, templates are all in {} brackets? [04:32:01] setuid: what are you going to do with this plucker format thing once you have it? [04:33:59] TimStarling, My intent, assuming it doesn't exceed mobile storage capacities, is to begin selling that version... and those proceeds go to a.) WMF, b.) Plucker's future development, c.) EFF, and d.) server maintenance/upkeep for re-gen'ing new versions. [04:34:36] I need to talk to Vishal (Vishtal? sp?) about the implications of using the work with/without WMF references, etc. [04:34:54] I'm not selling the content, I'm selling my time/effort/cpu cycles to generate them. [04:35:15] There are 27 total (9 languages, 3 initial projects) versions [04:35:26] setuid: Just make sure you get your GFDL notices and attribution up before you distribute it [04:35:40] Of course [04:36:15] amidaniel, I'm VERY well-versed with copyright/Lanham Act issues. We fought it for 4+ years against Bluefish Wireless for their violation of our code. [04:36:39] We all learned a lot with the help of our FSF-appointed attorney (who happens to teach IP/copyright law). [04:37:04] setuid: Hehe .. alright, just making sure. A lot of people are quite ignorant of it and assume "Hey, they call it the *free* encyclopedia! That means I can do with the content what I want!" [04:37:44] wouldn't it be easier to use the HTML generated by wikimedia, rather than trying to duplicate wikimedia's setup? [04:37:48] That's why I'm making it clear that I'm not trying to profit from the work, I'm trying to provide something in exchange for my time. [04:38:05] TimStarling, No, because I can do it much faster than WMF can [04:38:15] why is that? [04:38:16] And the HTML generated is insufficient for my needs [04:38:24] I don't want to hammer the servers [04:38:54] setuid: You could dl a static HTML dump though [04:39:00] Unless you know of some project where the HTML for every article is dumped in a downloadable compressed archive [04:39:09] yeah, the static HTML dump [04:39:20] http://static.wikipedia.org/ [04:39:24] afaicr, that has been broken for at least 2 years [04:39:28] Let me look again [04:39:29] it's currently broken, but you could probably fund some development in it [04:39:47] TimStarling: Are they all broken or just enwiki? [04:39:51] it's not *very* broken, it just needs a day or two of work [04:40:23] there was an issue in the last dump with articles that took forever, which hung the process up [04:40:30] also we were out of disk space [04:40:46] and in 1.11 there's the image URL issue which I reported in bugzilla yesterday [04:40:48] Ah [04:41:10] Well, images are a no-go anyway...
I'd love to do it, but there's no mobile device right now with .5TB of storage [04:41:14] it certainly would be easier to fix dumpHTML.php than to write your own from scratch [04:41:21] And even if I could compress 99% of the image size out, that's still 50GB [04:41:37] you can filter out the image tags [04:41:44] I am [04:41:54] But I can't filter all of them out, because there are some usable images [04:42:08] I'm going to try to be a bit smarter about it for v2.0, and only pull some images [04:42:29] I may be able to break this into 26 separate, per-alpha-letter documents, and interlink them [04:42:45] Right now, Plucker has no notion of inter-document (separate .pdb file) links, but that isn't impossible to solve [04:43:47] But with my modified skin right now, I can have more granular, consistent control over what is and is not displayed when it is rendered. The static html dumps are interesting, for minor languages (pl, pt_BR, es, it), but for enwiki, it just won't cut it. [04:44:06] why not? [04:44:29] Well for one, it doesn't exist ;) And secondly, it'd be too many files to process in one directory. [04:44:39] what if you grabbed only the first image in every article (if any), and scaled it to a fixed ~200px wide thumbnail only... [04:44:43] I don't think any current OS will allow 5.8M files in one contiguous space or directory. [04:44:49] it's split into 3 directory levels [04:44:59] by the first three letters of the title [04:45:00] Splarka, That's an option, but that breaks down when you have something like... [04:45:04] (if the image tag used a larger version, it could just use browser scaling) [04:45:12] Splarka, http://en.wikipedia.org/wiki/A [04:45:15] Something like that page... [04:45:18] Which image do I take? [04:45:39] Splarka, There is no browser involved in this process [04:45:42] the fact that it doesn't exist, as I said, can be solved more easily than rewriting it from scratch [04:45:52] setuid: Well, popups distinguishes it correctly :) [04:45:53] TimStarling, Ok, where can I help? [04:46:05] amidaniel, popups? [04:46:06] picky picky ^_^ [04:46:25] Splarka, No browser, no tables, no css... we're talking about PDAs here, not desktop machines. [04:46:25] setuid: http://en.wikipedia.org/wiki/WP:POPUP [04:46:43] Was curious what it did to select the preview image .. it selects the correct one on the [[A]] article [04:46:50] ouch [04:46:53] (in other words, not the wiktionary one) [04:47:13] I'm not sure I can code a javascript execution/interpreter into my crawler [04:47:28] neat [04:47:51] setuid: ok, three things spring to mind... [04:48:23] bug 12122, which I reported last night [04:48:32] not to interrupt, but, can someone tell me what [[pl: and [[it: tags are? i assume it's to link to polish and italian versions of something [04:48:44] job control which is robust to articles which hang or crash [04:49:05] and funding for server hardware, say an extra storage server [04:49:34] nothing special, just has to have ~2TB of storage space or so [04:49:35] DAPence, Interwiki links [04:49:37] DAPence: correct, those are "interlanguage" links [04:49:42] TimStarling: How much space do you need altogether for the dumps? [04:49:46] they add to the 'in other languages' sidebox [04:50:02] to use them inline, prefix with another colon [[:pl: [04:50:05] 2TB would leave some room to spare, iirc [04:50:17] Oh, that's not bad ..
was expecting much worse :) [04:50:18] but I don't think you'd get the images on it [04:50:26] Yeah, I've got ~.75TB here dedicated to the dumps I mirror, but I don't mirror ALL of the dumps [04:50:52] thank you [04:51:03] Let me see what that bug is... [04:51:07] Well surely a minimal server with a couple TB of storage can't cost much [04:51:51] Servers aren't cheap ;) [04:52:22] And in 5 weeks, if I can't find a proper job, I have to claim bankruptcy myself. IBM laid us all off in July, and it's been VERY difficult to find work since. [04:52:31] Can't imagine it'd run more than $1k though [04:52:33] There are quite literally *NO* jobs left in the US for skilled IT people. [04:53:15] Well there sure as hell are enough incompetent IT folks who could really use replacing ... [04:53:16] I'm up to ~430 job applications, which resulted in 2 in-person interviews, 0 offers. I have my 3rd in-person interview on Wednesday with Red Hat. It's *SLIM* pickins out here. ;) [04:53:24] amidaniel, I totally agree ;) [04:53:42] I think I'm over-qualified, and that scares employers. Or something. [04:53:51] Yeah, could very well be. [04:53:57] TimStarling, How large are images now? [04:54:04] Last I checked, they were 300GB [04:54:30] I've never really understood that though .. not hiring an overqualified person willing to work at the same pay as an underqualified one ... [04:55:09] I should do some work to clean up ./maintenance/ to be more robust with table prefixes, single-source/multi-wiki installations (like mine), and so on. [04:55:18] hard to say exactly since there are some unused thumbnail images [04:55:29] but currently amane has 2.8TB used [04:55:37] amidaniel, The perceived "danger" is that they'll want to make more very quickly, or they'll get bored and leave, after you've spent cash to bring them onboard. [04:55:47] yow [04:56:24] out of 3TB, we'll have to split it very soon [04:58:01] Looks like a few of the higher-page-number wikis have broken static html dumps [04:58:08] I just tried /de/ and it too is broken [04:58:11] but /da/ is fine [04:58:31] anyway, my idea for bug 12122 is to use a special FileRepo subclass which acts as a proxy for the underlying repository [04:59:07] TimStarling, What was the bug/issue with hanging pages in the dump? [04:59:16] it could do the image snapshot when the file is requested [04:59:42] You know... how about this: [05:00:26] dumpHTML just dumps the raw HTML, but rewrites the image links using a magic token, which, when passed over with another tool (provided with the dump archive), would then fetch the images from Commons or their primary location. [05:00:34] That way, people who don't need images don't get them... [05:00:45] And those that do can run 'fetchImages.php' or something, and grab them [05:01:09] yeah, could do [05:01:48] the issue with the hanging pages was: there was a bug in the parser which caused some pages to hang for a very long time and then run out of memory and die [05:02:00] such bugs come and go [05:02:04] Is the full Wikipedia content now under GFDL? Or do some articles have to be flagged?
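[Editor's note: a small sketch of the "magic token" idea setuid floats just above; the function and marker names here are hypothetical, not anything dumpHTML actually does. The dumped HTML keeps a placeholder carrying the image source, so readers who want no images lose nothing, and a later fetchImages.php-style pass can resolve the placeholders against Commons or the source wiki.]

    // replace each <img> in dumped HTML with a fetchable placeholder
    function tokenizeImages( $html ) {
        return preg_replace_callback(
            '/<img[^>]*\bsrc="([^"]+)"[^>]*>/i',
            function ( $m ) {
                // keep just enough information to fetch the file later
                return '<!-- WIKIMG:' . $m[1] . ' -->';
            },
            $html
        );
    }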
[05:02:10] sometimes an article might cause the parser to crash quickly, sometimes slowly [05:02:30] setuid: The text is all under GFDL, many of the images aren't free though [05:02:40] Right, images have always been a question [05:02:49] I tend to veer away from them for that reason alone [05:03:09] Good idea :) [05:03:13] *amidaniel hates images [05:03:15] You can't programmatically tell when an uploaded image is copyrighted or not, or ripped from someone else, or cropped from a copyrighted image, etc. [05:03:21] in a perfect world, you could assume GFDL/CC-BY/PD if the images were on commons and not local [05:03:28] the HTML dump is broken into jobs, with maybe 2000 articles each [05:03:37] setuid: I wish you would sit down and explain that to these people who keep writing scripts to do just that :) [05:03:49] so if a job crashes halfway through, then there will be 1000 articles missing [05:04:20] and the job control system will continually restart it at the checkpoint (just before the crash), forever [05:04:27] TimStarling, Right... but that should be easily rectified by creating a temporary mysql table that identifies which articles were "last". If it crashes, start with that last row, and pick up again. [05:04:40] Each "thread" (job?) hits the db and requests the next one on the stack [05:04:45] like I said, we have a checkpoint system already [05:04:50] Or has its own row, to track where it left off [05:04:51] Ok [05:04:58] So why do you lose 1,000 articles? [05:05:08] sometimes jobs will die for other reasons, like being killed by an admin [05:05:13] hahaha [05:05:16] it's ok in that case to restart them [05:05:23] Well, we can always tie brion down and tell him to stop that ;) [05:05:49] But seriously, what about a way to gracefully kill it, so it knows where to resume, instead of starting from 0 again? [05:06:03] we have that already [05:06:06] Have it trap the signal and shut down properly (i.e. write the last row to the checkpoint system) [05:06:10] it starts from *before* the crash, at the last checkpoint [05:06:18] Right [05:06:47] so it just needs to be tweaked slightly [05:06:59] Identifying why the parser crashes would be the highest priority, but since it's probably not 100% reproducible, it becomes the pink elephant [05:07:08] preferably in some way that doesn't make it lose articles every time someone presses ctrl-C in the controller [05:07:24] Is it due to someone passing crap HTML through tidy? Some XML tree that tidy consumes all memory trying to parse? [05:08:14] mediawiki is buggy, we know this [05:08:24] Well, it's been bolted onto for many years [05:08:25] it's a lot easier to work around it than to fix it [05:08:34] At least the last 3-4 when I've been involved in beating it up [05:08:59] Sure, but the problem with a quilt made of little 1" x 1" pieces is that one loose thread can take out a whole section [05:09:25] if that's the case then it's a bug [05:09:33] And when you get too many workarounds, identifying the REAL source of the problem (so that you, I, Simetrical, robchurch, etc. can go fix it) is no easy matter. [05:09:39] *nod* [05:09:44] all jobs have to finish before the compression job is queued [05:10:14] so one important workaround would be to have a final job timeout [05:10:33] Well, if only to just bring the load back down to a usable level before compression [05:10:35] a quantity of time beyond which finishing the job is abandoned, and the compression job is run anyway [05:10:54] the load?
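[Editor's note: a sketch of the graceful-shutdown tweak discussed above; every name in it (dumpArticleHtml, saveCheckpoint, $articles, $jobId) is hypothetical, not the real dump controller. The shape is the point: trap the signal, finish the current article, record the checkpoint, and let the requeued job resume exactly where this one stopped instead of losing the rest of its batch.]

    declare( ticks = 1 );
    $shutdown = false;
    pcntl_signal( SIGTERM, function () use ( &$shutdown ) {
        $shutdown = true;               // finish the current article first
    } );
    // $articles and $jobId would come from the job queue (hypothetical)
    foreach ( $articles as $i => $title ) {
        dumpArticleHtml( $title );      // hypothetical per-article worker
        saveCheckpoint( $jobId, $i );   // record last completed index
        if ( $shutdown ) {
            exit( 0 );                  // a restart resumes at $i + 1
        }
    }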
[05:11:27] Sure, if you're dumping 2k articles per-job, the system load is going to go way up, no? If you immediately tail from that into a compression job (high CPU), then your compression will drag-ass because the load remains high. [05:12:39] it's distributed, we use about 5 servers [05:13:09] each server has some fixed number of worker threads, usually 1 since they are dual-purpose with apache [05:13:16] Ok, I don't know enough about the underlying architecture of WMF there. It sounds like you've got most of the issues engineered pretty well. [05:13:17] the dedicated server srv142 has a few more [05:13:54] a crashing job will constantly be requeued, but at most it will only use one thread [05:14:13] less than that if there are still jobs running, because it's always added to the end of the queue [05:15:03] So let's say I get this thing out the door, and sell it for $, and get 200 new people/month buying it up, and buying updates, DVD versions of my work, and so on... and I generate enough money to give WMF a _recurring_ revenue stream from the sales of this, enough to fund a decent storage server at the beginning, and recurring money that will help fund additional disks/servers later on, would that in any way help? [05:15:47] I should think so, yes [05:16:10] I literally have no idea how broad the sales of this thing will be, but I get a lot of people emailing me asking about it... so I suppose there are a lot of interested parties. [05:17:26] so are you thinking of having Wikimedia supply the capital, i.e. the new storage server, in exchange for a revenue share? [05:17:46] Not at all [05:18:09] I'm thinking of giving WMF the money for a server, and then regular, recurring money (checks, donations, whatever) each month. From me, to WMF.. [05:18:20] ok [05:18:22] I ask for nothing back, except that the dumps keep happening, so I can keep re-generating newer versions. [05:18:50] You've provided something useful to me, and potentially to thousands of users... and if this works out, I'd like to provide something back in return. [05:19:47] we'll need to get this moving with the management... I suggest you send me a draft proposal, and then I'll endorse it and send it on up [05:19:56] Will do. [05:20:26] they get lots of crazy proposals, and they don't really have time to evaluate them all [05:20:50] so it might help to have my name on it somewhere [05:21:14] I wonder what other crazy proposals they've seen... [05:22:09] worried about the competition? [05:22:15] Not at all [05:22:32] I mean, what other wacky ways have people suggested to use/abuse the wikipedia data or projects? [05:23:16] I get the impression that it mostly revolves around trademark licenses [05:23:24] *nod* [05:24:10] If I was worried about the competition, I wouldn't openly be sharing screenshots, fixes, talking about it here, etc. I think it's fun, and there are others trying to do it... some successfully, some not. I'm just having a go at it, and it's working out pretty well. [05:34:50] 03tstarling * r27870 10/trunk/phase3/includes/GlobalFunctions.php: LF at file end [05:40:55] *setuid tests generating dawiki from the static dump [05:42:42] 03(NEW) Patrolled edits occassionally unlogged - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12129 normal; normal; MediaWiki: Page editing; (cannon.danielc) [05:55:43] MZMcBride, You still about? [05:56:07] MZMcBride, You think when I add back all of the template links, that the citation template will "Just Work(tm)"? 
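Circling back to Tim's final-job timeout idea from a little further up, it could be as small as this sketch (both helper functions are hypothetical stand-ins for the job-control system):

    <?php
    // Sketch: stop waiting for straggler jobs after a deadline,
    // then queue the compression job regardless.
    $deadline = time() + 6 * 3600;               // assumed cut-off: six hours
    while ( jobsRemaining() > 0 && time() < $deadline ) {
        sleep( 60 );
    }
    queueCompressionJob();                       // runs even if some jobs never finished

    function jobsRemaining() { return 0; /* ask the job queue */ }
    function queueCompressionJob() { /* enqueue the compression step */ }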
[05:57:14] *amidaniel goes to nab another piece of pie [05:59:14] setuid: i know that the Citation extension you most definitely do not need and that you're having an issue with loading normal, everyday templates [05:59:30] rebuilding the template links may help, but i dunno [05:59:37] MZMcBride, Ok, I'll let that finish and see where that leads me [05:59:46] ok :) [06:00:12] I need to come up with a solution to remove tables, without removing the data within them. That's going to be an interesting challenge. [06:00:32] Since I can't use css to do it, and I can't use table/th/tr/td tags at all. [06:10:54] According to the docs of Article::doEdit, a duplicate key error can be thrown if EDIT_NEW is given and the article exists. [06:11:01] Would this be DBQueryError, or some other exception class? [06:11:02] Thanks much. [06:20:54] 03tstarling * r27871 10/trunk/phase3/includes/Parser.php: Fixed bug 12056: transformMsg() was not calling unstripBoth(), breaking the common use case of among other things. [06:23:37] 03(FIXED) Message transformation leaves strip markers - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12056 summary; +comment (10tstarling) [06:24:12] davidmccabe: yes, a DBQueryError [06:25:31] well, for MySQL at least, I don't know what the other DB classes do [06:25:47] hopefully they use a subclass of DBQueryError [06:27:53] <^demon|away> TimStarling: Speaking of DB queries, have a few minutes to look over and possibly discuss bug 12127? [06:30:09] oh, QueryPage::doQuery() [06:30:17] I thought you meant Database::doQuery() for a minute there [06:30:28] TimStarling: thank you. [06:31:12] TimStarling: Article::doEdit. [06:31:28] <^demon> TimStarling: Yeah, I got to thinking about it earlier today, and wrote up a bug report and (potential) basis for a patch. [06:31:43] davidmccabe: ^demon is talking about QueryPage::doQuery() [06:31:48] oh, ok. [06:33:44] 03david * r27872 10/trunk/extensions/LiquidThreads/ (Lqt.i18n.php LqtModel.php LqtPages.php): Create blank talkpages when first thread is posted; obviates the link coloring patch. [06:36:33] ^demon: I guess it would be pretty harmless [06:36:49] but there's no UI there, maybe there would be more reason to patch it in if there was an application as well [06:37:28] and also it should probably work for uncached queries [06:37:34] <^demon> I was thinking bug 6, being able to sort WantedPages by subjects. [06:37:49] <^demon> It would permit that to be fixed. [06:38:23] 03(mod) Extend Special:Wantedpages to allow users to search for keywords - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=6 (10innocentkiller) [06:38:23] 03(mod) Addition of "LIKE" parameter to doQuery() - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12127 (10innocentkiller) [06:39:23] <^demon> Does my patch look like it would work? [06:40:09] well, it would work for cached queries, but not for uncached queries [06:40:26] <^demon> Where would it need to be added to work for uncached ones? [06:41:10] getSQL(), I suppose [06:41:42] but you'd have to be careful to distinguish uncached queries from recache queries [06:42:42] <^demon> What does getSQL() do? It looks like just a dummy select statement. [06:43:19] it's overridden in the subclasses [06:46:02] <^demon> So potentially a lot of places need fixing, or just add that patch in a similar manner to the getSQL() method? 
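Back on setuid's earlier problem of removing tables without losing the data inside them, a quick-and-dirty regex pass might look like the helper below. It's hypothetical, and regexes on HTML are fragile, so a real parser would be safer:

    <?php
    // Sketch: strip table markup but keep the cell contents.
    function stripTables( $html ) {
        // boundaries between adjacent cells become a space
        $html = preg_replace( '!</t[dh]>\s*<t[dh][^>]*>!i', ' ', $html );
        // the end of each row becomes a newline
        $html = preg_replace( '!</tr>!i', "\n", $html );
        // drop whatever table-related tags remain
        return preg_replace( '!</?(table|thead|tbody|tr|th|td)[^>]*>!i', '', $html );
    }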
[06:46:28] well, it would only need fixing in the places where you provide a UI [06:47:23] not all of the query pages could be searchable by qc_value, only the ones where qc_value is some user-readable string [06:47:59] <^demon> I wasn't clear on that, because I wasn't sure how the querycache table was set up. [06:55:29] 03(mod) extra spaces are added to text of link when rendered by wiki - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12104 +comment (10tstarling) [07:57:17] File does not exist: /usr/share/mediawiki1.7/favicon. [07:57:19] ico [07:57:24] where can i get this file [08:26:30] 03tstarling * r27873 10/trunk/backup/ (download-index.html dvd.html): Updated download-index.html to match current version on the server, added dvd.html from server and updated the info for the German Wikipedia. [08:31:27] <_wooz> lo [08:35:16] ,. [08:35:48] morning [08:35:57] hello [08:36:50] hello hello hello this is the morning *whistles* [08:37:38] File does not exist: /usr/share/mediawiki1.7/favicon.ico [08:37:39] *amidaniel shoots Nikerabbit with an Adidasrifle [08:38:21] yang: it's up to you put some icon or on it, or not, it's not mandatory [08:40:29] what could be causing nowiki tags turning into UNIQ-markers with the new parser? [08:40:50] have a quite many of those [08:41:41] *amidaniel would guess the code is to blame [08:41:55] which code? [08:42:10] I don't see reports of such on wikipedia [08:42:50] That is indeed the important question isn't it :) [08:43:12] Have an example of a problem somewhere? [08:43:41] http://translatewiki.net/sandwiki//index.php?title=User_talk:Gangleri&diff=164483&oldid=148952&curid=1045 [08:43:54] *amidaniel looks [08:44:04] Woa, lots of fun [08:45:30] Clearly there's more broken there than just the parser :) [08:46:11] Hi Raymond_ [08:47:05] Nikerabbit: apache will keep showing errors if i dont put the icon there [08:47:18] where can i get that icon [08:49:42] yang: of course it says if file doesn't exists, but that doesn't matter [08:49:53] yang: there is no *THE* icon [08:49:59] it's just icon [08:50:30] yang: Pick whatever icon you want to be your favicon, upload it, and name it favicon.ico [08:52:14] yang: For an explanation of what a favicon is, see http://en.wikipedia.org/wiki/Favicon [08:52:31] 03siebrand * r27875 10/trunk/phase3/ (29 files in 2 dirs): [08:52:31] Localisation updates from Betawiki. [08:52:31] * an, ar, br, ca, chr-latn, cs, el, et, fi, fr, hr, hsb, io, ka, kaa, la, myv, nap, nl, nov, pt, qu, sdc, sk, stq, tet, vo, wo [08:58:15] *amidaniel mosies off to bed [09:03:23] question for TimStarling - I've been tinkering with a new version of StringFunctions that uses regex to match strip markers, but I notice you've changed their format in a recent revision and eliminated strip() altogether. Do I need to match these new markers as well, or are they being used for a different purpose now? [09:06:01] Nikerabbit: a .gif or .jpg file with small size can be renamed to favicon.ico [09:09:05] http://en.wikipedia.org/wiki/Favicon [09:49:43] 03siebrand * r27876 10/trunk/extensions/ (53 files in 50 dirs): [09:49:43] Localisation updates from Betawiki. [09:49:43] * Fixes and additions to 50 extensions for ar, el, et, fr, hr, hsb, io, ka, la, nap, nl, pt, sk, stq, tet, vo [10:13:26] Hi [10:13:40] http://www.rentacoder.com/RentACoder/misc/BidRequests/ShowBidRequest.asp?lngBidRequestId=816040 [10:14:23] I'm looking for someone to implement IPN support in paypal button for mediawiki. Anyone interested can bid at the url above. 
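For flavor, the cached side of the bug 12127 idea might look roughly like the sketch below. This is not ^demon's actual patch; it assumes era-appropriate ext/mysql, and as Tim notes above it only makes sense for pages whose qc_value is a user-readable string:

    <?php
    // Sketch: optional LIKE filter when reading a query page from querycache.
    function fetchCachedRows( $db, $type, $like, $limit ) {
        $sql = "SELECT qc_namespace, qc_title, qc_value FROM querycache" .
               " WHERE qc_type = '" . mysql_real_escape_string( $type, $db ) . "'";
        if ( $like !== null ) {
            // escape LIKE wildcards as well as quotes
            $safe = mysql_real_escape_string( addcslashes( $like, '%_' ), $db );
            $sql .= " AND qc_value LIKE '%$safe%'";
        }
        $sql .= " ORDER BY qc_value DESC LIMIT " . intval( $limit );
        return mysql_query( $sql, $db );
    }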
[10:18:14] the site feels smelly [10:21:51] Nikerabbit: rentacoder interface is a shame, I agree [10:22:30] 03rotem * r27877 10/trunk/phase3/ (RELEASE-NOTES includes/api/ApiQueryInfo.php): Make API check for page restrictions in the old format too. [10:25:25] hi. how implement google analytics to cms mediawiki? [10:51:59] corange: http://www.mediawiki.org/wiki/Extension:Google_Analytics_Integration [10:56:16] Hi!Can anyone explain what I am doing wrong - I am trying to recieve from svn: svn checkout http://svn.wikimedia.org/viewvc/mediawiki/trunk/extensions/SemanticMediaWiki/ [10:56:32] it gives: svn: PROPFIND of '/viewvc/mediawiki/trunk/extensions/SemanticMediaWiki': 301 Moved Permanently (http://svn.wikimedia.org) [10:59:33] PROPFIND request failed [11:04:07] QuestPC: s/viewvc/svnroot/ [11:04:21] QuestPC: viewvc is not subversion repo [11:04:27] it is the viewing tool [11:04:35] for some strange reason ViewVC doesn't give links to tgz (maybe the feature is disabled) [11:04:56] domas: ah, thanks. I'll try the root dir [11:06:25] domas: svn: PROPFIND request failed on '/' [11:06:25] svn: PROPFIND of '/': 405 Method Not Allowed (http://svn.wikimedia.org) [11:10:16] domas: could that happen because I am behind transparent proxy, or it's just I am using wrong command? [11:11:24] tgz link would be enough for me - because I don't plan to submit, but I can't find tgz link in ViewVC (maye it's disabled) [11:11:43] "The Google Analytics tracking code has not been detected on your website's home page." ... code generated by google is not copy to any page? [11:19:28] http://svn.wikimedia.org/svnroot also gives the same propfind request failed [11:20:15] told you [11:20:18] svnroot/ [11:21:47] domas: I've tried both, and svnroot/ with ending slash, too. [11:21:55] ok [11:21:56] let me point you [11:22:07] svn co http://svn.wikimedia.org/svnroot/mediawiki/trunk/ [11:22:08] :) [11:22:16] svn co http://svn.wikimedia.org/svnroot/mediawiki/trunk/phase3/ [11:22:25] thanks. sorry for bothering. [11:24:14] just was in doubt that extensions will be in that dir. [11:25:26] Hi. I want to make the edit summary mandatory unless the user says it's a minor edit. Any ideas how to do that? [11:29:50] 03siebrand * r27878 10/trunk/extensions/OpenID/ (Consumer.php OpenID.i18n.php OpenID.php): [11:29:50] * add i18n file [11:29:50] * trim trailing spaces [11:36:55] 03siebrand * r27879 10/trunk/extensions/OpenID/README: Update installation instructions for Windows [11:40:04] 03siebrand * r27880 10/trunk/extensions/Translate/ (MessageGroups.php Translate.php): [11:40:05] Add support for extension OpenID [11:40:05] Party time. 100 extensions supported! [11:40:17] \o/ [11:40:24] domas: there was no extensions in phase3 subdir. proper path in my case was svn co http://svn.wikimedia.org/svnroot/mediawiki/trunk/extensions/SemanticMediaWiki [11:41:09] domas: but thanks for the hint, anyway. [11:41:10] well [11:41:13] if you need that - yes [11:41:17] I didn't know what you need [11:41:53] domas: no problem. I know that a lot more of the people ask, much less of them answer. [11:42:41] asking is much simpler than answering.. 
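On the ordering question Duesentrieb raises above: corange's paste sets the variables first and includes the extension afterwards, the common LocalSettings.php pattern, though some extensions expect the opposite. The setting names below are the unverified ones from the paste:

    <?php
    # end of LocalSettings.php, before any closing ?>
    $googleAnalytics         = "UA-xxxxxx-x";   // tracker ID from Google
    $googleAnalyticsMonobook = true;
    require_once( "$IP/extensions/googleAnalytics/googleAnalytics.php" );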
[11:42:59] answering is pie [11:43:02] once people know what they ask [11:43:14] meh, I like the word 'pie' [11:43:30] I wonder if dictionaries have the new definition of it in modern slang [11:43:49] please for google analytics: [11:43:50] oh, actually mac dictionary has it [11:44:28] $googleAnalytics = "UA-xxxxxx-x"; $googleAnalyticsMonobook = true;require_once( "$IP/extensions/googleAnalytics/googleAnalytics.php" ); in localsettings.php in root? [11:44:40] mac dictionary is Oxford dictionary [11:44:42] so likely to be good [11:45:49] corange: generally, extensions should be written so that they expect configuration settings to be defined *after* the extension is included. not sure if this is the case here. [11:46:19] though it doesn't provide direct meaning to 'pie' [11:46:22] just a phrase 'easy as pie' [11:46:51] whereas 'pie' = 'piece of cake' nowadays [11:46:51] :) [11:47:03] domas: easy as pie sounds strange. it should be tasty as pie.. pies aren't easy both too cook and to eat... [11:47:25] well, 'piece of cake' = 'pie' = 'very easy' [11:47:25] :) [11:47:29] domas: though there's not much of logic in languages. [11:47:38] its what kids speak nowadays [11:47:39] 03siebrand * r27881 10/trunk/extensions/OpenID/README: Fix typo [11:47:45] and I'm exposed to kids speaking! [11:48:00] domas: ah, PIece of cakE [11:48:24] other words that penetrated - fail, epic, .. :) [11:48:35] haha, I saw one program where fatal error routine was called epicfail() [11:48:37] 03(FIXED) [FEATURE] i18n support for OpenID - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=7033 +comment (10siebrand) [11:48:37] made me giggle [11:48:38] :) [11:48:53] hi [11:49:53] what extension should I take to enable the execution of conditional templates, like wikipedia? [11:50:11] whiles: ParserFunctions [12:03:46] 03(mod) Unescaped quote in YAML output - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12120 (10patrick.sinclair) [12:39:50] 14(INVALID) Tabs removed from textareas when editing - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12125 +comment (10acsulli) [12:58:23] I'm using MW 1.11.0 on http://wiki.freeculture.org/ , and I seem to have a problem where editing some pages causes the server to respond with an empty reply. [12:58:30] http://wiki.freeculture.org/index.php?title=Chapters&action=edit is one such page; not that other pages can be edited fine! [13:00:12] Other pages like http://wiki.freeculture.org/User:Paulproteus/SMW_Test can be edited fine - "note that other pages can be edited fine". [13:02:37] paulproteus: you know a bit about PHP don't you? [13:04:07] TimStarling, Yes, and now I've enabled debugging and the PHP error log. [13:04:40] in the apache error log you'll probably see a child process segfault [13:04:46] Er, ouch. [13:04:47] *paulproteus gulps [13:04:52] I'll look for that. [13:05:33] it's probably either a fatal error or a signal (e.g. segfault) [13:05:33] Don't see it in the main Apache error.log (this is on a Debian stable vserver fwiw). [13:05:50] fatal errors would show up in the PHP error log, and signals show up in the apache error log [13:06:23] you could also check the syslog, sometimes they go there [13:06:29] *paulproteus twiddles his thumbs [13:07:14] sometimes everything goes to syslog [13:07:20] Okay, I just caused the error again. No mention of anything exciting in the PHP error log, nor syslog. 
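A quick way to surface whatever is killing the request, essentially the settings paulproteus toggles below, goes in php.ini or, in a pinch, near the top of LocalSettings.php. Not something to leave on in production:

    <?php
    // make fatal errors visible while hunting the blank page
    error_reporting( E_ALL | E_STRICT );
    ini_set( 'display_errors', 1 );
    ini_set( 'log_errors', 1 );
    ini_set( 'error_log', '/tmp/php-errors.log' );  // hypothetical path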
[13:07:44] that's a bit annoying [13:07:47] "anything exciting" defined as "anything not from spamd or dovecot or postfix more recent than a second before the error" [13:08:24] This wiki has gone through years and years of upgrades, for what that's worth. [13:08:30] Since v1.3 MW or so, I think. [13:08:34] well, so have ours [13:08:54] Oh, heh, good point. [13:09:08] :) [13:09:20] I'm so happy I work alone at home [13:09:22] I can laugh out loud [13:09:24] and nobody cares [13:09:25] ;-) [13:09:37] hmm [13:09:37] *paulproteus laughs out loud some too, but sometimes people turn their heads! [13:09:51] thats where working at home help :) [13:10:18] well, we could rule out a segfault, by triggering one and seeing if it appears in your logs [13:10:31] TimStarling, Sure, if you know how to. Do you? [13:10:34] classic PHP segfault script: [13:10:35] paulproteus: run httpd with -X [13:10:43] attach strace or gdb [13:10:45] have fun :) [13:10:45] Is that "Segfault every 5 minutes" mode? [13:10:46] (-; [13:10:47] foo(); [13:10:49] ?> [13:10:58] paulproteus: no, thats 'single foreground thread' mode [13:11:15] Hey, I guess I could just attach a debugger right now and try to segfault it. [13:11:25] if it segfaults, you'd see [13:11:39] if it is PHP fatal error (what is quite likely) you will not [13:11:47] but PHP fatal errors are logged, if you enabled that. [13:11:47] :) [13:11:56] Okay, so once I install gdb, any hot tips to segfault it? [13:11:57] did you restart httpd after setting it to log? :) [13:12:03] if you attach a debugger, it only attaches to one child thread [13:12:11] TimStarling: thats why httpd -X helps [13:12:11] ;-) [13:12:19] right, just explaining [13:12:19] TimStarling, But surely a child thread segfault would get noticed and logged...? [13:12:29] paulproteus: same as PHP fatal errors [13:12:33] it's difficult to do httpd -X if paulproteus doesn't have a dedicated server with the same setup [13:12:57] paulproteus: do you have a server you can dedicate to debugging? [13:13:06] TimStarling, Well, I can dedicate the production server for a few minutes. [13:13:25] if you have to take the site down, it's probably worthwhile trying other methods [13:13:44] Okay, I'd prefer non-site-down methods first if we can find some. (-: [13:13:45] anyway, when I'm chasing ghosts in our farm [13:13:50] I just attach to single child and wait [13:13:58] once it dies naturally, I attach to other child and wait [13:14:01] *Oh*, I see what domas meant. [13:14:12] Not *attach a gdb and cause a segfault* but *attach a gdb and see if one happens*. [13:14:14] yeah, I've done that one before too [13:14:15] to make it faster, I attach to multiple children [13:14:29] I'll try TimStarling's method first. [13:14:29] paulproteus: well, if you have a page, that always returns empty page, it is much easier [13:14:44] you can attach to a production child and then run ab until it hits the right one [13:15:04] if you know the URL, then you don't have to wait for a "natural" test case [13:15:06] mhm [13:15:09] I'll try the well, that one is just for testing your logging :) [13:15:38] no point attaching with gdb if it's not a segfault [13:15:39] I don't think it's a segfault, that's the thing. (-: [13:16:10] and segfaults _are_ in error logs [13:16:43] on the other hand, if you enable displaying of errors [13:16:50] blank pages would show 'fatal error' instead of being blank pages [13:17:06] Hmm, with TimStarling's foo() script I get a 0-byte return but no log entry. 
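Tim's "classic PHP segfault script" lost its opening lines to this log's formatting (anything in angle brackets was stripped as HTML). The usual version is plain unbounded recursion, something like:

    <?php
    // infinite recursion exhausts the C stack, so PHP dies with a genuine
    // SIGSEGV rather than a catchable fatal error
    function foo() {
        foo();
    }
    foo();
    ?>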
[13:17:24] it can be fatal error :) [13:17:29] Okay, -X enabled. [13:17:38] segfault! [13:17:48] this is a segfault with action=edit? [13:17:51] or with my script? [13:17:55] action=save I think. [13:18:01] ok [13:18:02] Er, let me double-check. [13:18:23] I got a segfault with -X with segfault.php (your script). [13:18:32] well, you need natural cases :) [13:18:33] With POSTing from an action=edit, no segfault. [13:18:55] you mean your test case isn't segfaulting anymore? [13:19:05] TimStarling, My test case never segfaulted and I made a mistake. [13:19:15] ok [13:19:19] Your segfault.php did segfault httpd -X, but saving http://wiki.freeculture.org/index.php?title=Chapters&action=edit never did. [13:19:29] but it did give a blank page? [13:19:32] because segfault.php is meant to segfault [13:19:33] *paulproteus nods [13:19:37] and action=edit is not [13:19:38] :) [13:19:45] Heh, I'm glad all is right with the world. (-; [13:19:55] then you have fatal error you have to have logged :) [13:20:02] (or printed :) [13:20:17] you could actually put a breakpoint on fatal error handler [13:20:21] and traverse zend internals [13:20:26] Freaky-deaky. [13:20:28] but that is probably something you don't want to do [13:20:33] How about I start with display_errors = On? [13:20:34] interesting way to fix broken error logging [13:20:41] paulproteus: that was what I was telling you ages ago [13:20:47] yes, try that [13:20:53] Heh, 'kay. [13:21:08] (uhm, was I?:) [13:21:19] oh I did [13:21:21] five minutes ago [13:21:42] Er, still blank? The heck? [13:21:43] thought it was one of those common cases where I tell something to myself, and believe everyone heard it. [13:21:59] *paulproteus is confused and wonders if he's editing the right php.ini [13:22:09] try phpinfo() [13:22:16] Good call. [13:22:52] http://freeculture.org/phpinfo.php thinks display_errors = On [13:22:56] But I still get a blank page. [13:23:49] suppressed fatals do that, don't they? [13:24:01] with the "@" operator [13:24:04] http://freeculture.org/phpinfo.php - would this say if I suppress fatals? [13:24:24] error_reporting = E_ALL [13:24:25] fwiw [13:24:42] try E_ALL|E_STRICT [13:24:51] apparently that disables the "@" operator [13:24:58] Oh, I see. [13:25:31] if it still doesn't work, we'll have to try something a bit more ugly [13:25:37] *paulproteus nods [13:25:50] Doesn't work still. [13:26:01] FWIW I did make some sort of one-off changes in my Semantic MediaWiki extension. [13:26:11] Let me temporarily disable loading it and other extensions to see if they're the problem. [13:27:01] Still blank page, great! [13:27:05] Okay, let's get ugly. [13:27:27] well, I have a technique for occasions like this, let me see if I can find it... [13:28:39] ah yes, the magic segfault tracker [13:28:54] but it will work for this too, don't worry [13:29:05] Heh, I could use some magic. (-: [13:29:46] http://noc.wikimedia.org/~tstarling/magic_segfault_tracker.php.html [13:30:01] roflcopter, yes. [13:30:11] This looks *great*. [13:30:21] in this case "echo" might be a bit slow [13:30:26] might be easier to send it to a local file [13:30:31] but suit yourself [13:30:50] This will spew junk all over production pages if I require_once() it in LocalSettings.php, right? [13:30:57] yes [13:31:21] *paulproteus changes the echo line: [13:31:21] echo " \n"; [13:31:27] oh, don't do that [13:31:32] it's really really slow [13:31:36] Aww, really? [13:31:37] your site will just crash [13:31:49] Oh, fine. 
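Pulling together the advice around the tracker linked above (gate it on your own IP, write to a file instead of echoing), a cut-down version might look like this. The real magic_segfault_tracker.php differs, and the IP is a placeholder:

    <?php
    // Logs the currently executing function once per tick; the last line
    // written before a crash points at the culprit.
    function wfMagicSegfaultTracker() {
        $bt = debug_backtrace();
        $func = isset( $bt[1]['function'] ) ? $bt[1]['function'] : '(main)';
        // a file, not echo: echoing on every tick is slow enough to kill the site
        file_put_contents( '/tmp/mw-ticks.log', $func . "\n", FILE_APPEND );
    }
    if ( $_SERVER['REMOTE_ADDR'] == '203.0.113.7' ) {  // only for the debugging client
        register_tick_function( 'wfMagicSegfaultTracker' );
        declare( ticks = 1 );
    }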
[13:32:01] How would that be any slower than just echoing the function? [13:32:04] use $_SERVER['REMOTE_ADDR'], to only turn it on if it's you [13:32:16] these two: [13:32:16] register_tick_function('wfMagicSegfaultTracker'); [13:32:16] declare(ticks=1); [13:32:32] *paulproteus nods re: REMOTE_ADDR [13:32:32] put an if ($_SERVER['REMOTE_ADDR'] == ...) around them [13:33:14] This is so hilariously evil. [13:33:59] echo $func and echo "" are both slow enough to crash the site instantly [13:34:31] not to mention sending tens of megabytes of debugging junk for each page [13:35:25] so the general concept is to run it until it crashes, and then see what was the last line [13:35:34] that gives you an idea of what went wrong [13:35:55] oh, there's also the problem of output buffering [13:35:57] It's lots of fun seeing all this extra junk printed. [13:36:03] Output buffering! [13:36:05] you'd have to either switch it off or just use a file [13:36:12] Good thinking. FWIW I still get a 0b output. [13:36:13] *paulproteus disables it [13:36:24] output_buffering = Off [13:36:25] err [13:36:34] Fine, I'll use a file. [13:36:35] mediawiki switches it on automatically [13:36:45] Oh, "thanks". (-; [13:37:41] oh, Tim uses ticks [13:37:58] it's the only thing I've ever used ticks for [13:38:07] don't know what else it would be good for [13:38:24] for profiling! [13:38:29] what is it anyway? [13:38:35] meh [13:38:40] but I didn't get them yet either [13:38:43] 03gri6507 * r27882 10/trunk/extensions/Tooltip/Tooltip.php: v0.4.1 - Better support for parser function output [13:38:46] did read the manual page like 5 times [13:38:48] decided I'm stupid [13:38:49] 03siebrand * r27883 10/trunk/extensions/ContributionScores/ (3 files): [13:38:50] * Fix 'Non-static method ContributionScores::loadMessages() cannot be called statically' [13:38:50] * Using $wgExtensionMessagesFiles now [13:38:50] * Added 4 messages [13:38:50] * update indentation and such [13:38:51] and skipped it [13:39:16] tick functions are executed once per statement [13:39:31] using it for profiling would give you statement counts, perhaps [13:39:33] but not time [13:40:19] A tick is an event that occurs for every N low-level statements executed by the parser within the declare block. The value for N is specified using ticks=N within the declare blocks's directive section. [13:40:41] why does it need its own declare statement? [13:40:56] and what's low-level statement? [13:40:59] declare is only used for ticks, it has no other purpose [13:41:13] yay, blizzard outside [13:41:18] TimStarling: you know what is blizzard? :) [13:41:39] domas: I've heard of them [13:41:49] that's where it snows quite a lot, right? [13:42:02] "Ticks are well suited for debugging, implementing simple multitasking, background I/O and many other tasks." [13:42:08] multitasking?! [13:42:24] you can't use it for any of those things obviously, except debugging [13:42:27] which is what we're using it for [13:42:51] a low-level statement is probably the loop body of the executor, i.e. 
one opcode give or take [13:43:05] TimStarling: yeah, and also where it snows quite horizontally ;-) [13:43:11] with gusty winds, etc [13:43:31] it'd be pretty useless for multitasking or background I/O since it stops ticking whenever it's inside a native function [13:43:39] hmm [13:43:46] this reminds me [13:43:58] that there's new memcached library out there [13:43:59] native one [13:44:02] some say there is also "enddeclare" to go with declare :o [13:44:02] with async stuff [13:44:06] and also, 1.2 memcached [13:44:11] *domas puts that on noticeboard. [13:44:20] should remember some early european morning [13:44:23] and do the 1.2 stuff [13:45:40] TimStarling, So, um, my file gets things written to it when I GET http://wiki.freeculture.org/index.php?title=Chapters&action=edit but not when I POST by clicking "Save page". [13:45:43] And I still get a 0b response. [13:46:10] are you getting any headers with the response? maybe it's cached [13:46:30] Just a sec, I'll check. [13:46:41] *paulproteus tries LiveHTTPHeaders + Fx [13:48:23] or maybe it's filtered by some security module, and never gets to mediawiki [13:51:13] HTTP/1.x 200 OK [13:51:16] seems to be all I get back. [13:51:48] I don't think I have any smart security module. [13:51:50] mmm, always be aware of your assumptions... [13:52:19] I'm checking now! [13:52:24] let's get differential for a sec... [13:52:34] domas: the view from my window is like somebody had sprinkled powdered sugar on the landscape... [13:52:41] Yes, no mod_security or anything funky like that. [13:53:02] domas: just enough to notice, but not nearly enough to ski on. [13:53:27] Mmm, powered sugar. [13:53:34] ;-) [13:54:02] are edits to other pages working? [13:54:18] Yes, but I'll re-re-check right now. [13:54:47] I added "echo omg\n\n\n" to the magisegfaultracker function and other pages show the "omg" but no effect on our zero-byte friend. [13:56:11] *paulproteus removed the echo quickly [13:56:31] http://wiki.freeculture.org/User:Paulproteus/SMW_Test - yes, edits to other pages *do* work still. [13:56:35] 03siebrand * r27884 10/trunk/extensions/ContributionScores/ContributionScores_body.php: Use time() instead of mktime() preventing a 'PHP Strict Standards' notice [13:56:45] ok, I just tried saving the same text to another page, didn't work [13:56:47] paulproteus: are you Comcast user? [13:56:53] domas, No, why? [13:57:00] different text, works [13:57:00] compulsive comcast comment [13:57:10] domas, Heh, okay. (-: [13:57:16] I'm on some weirdo net connection at my grandmother's place in New Delhi, actually. [13:57:38] this is what I call differential debugging, the term made sense to me at the time [13:57:41] TimStarling, Think it hates the é ? [13:57:55] vary your inputs to narrow down the bug [13:58:12] *paulproteus sighs [13:58:18] Binary input search. How bizarre. [13:58:20] well, we can just bisect the page if there's some fragment of text causing trouble [13:58:30] TimStarling, That's true re: bisect [13:58:37] That's way better. I'll try that I guess. [13:59:39] Heh - http://wiki.freeculture.org/index.php?title=Chapters&oldid=14507 . [13:59:41] it's just a different approach, sometimes it's better, sometimes it's worse [13:59:51] it's worth trying it both ways [14:00:08] I'll try bisecting for now. [14:00:16] I'm doing it too [14:00:40] Um, okay! [14:02:16] can you try disable SMW? [14:02:21] just for kicks :) [14:02:30] I did earlier; it didn't help. 
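The bisection both of them are running amounts to a binary search for the shortest failing prefix, assuming the failure really is determined by some prefix of the text. saveSucceeds() is a hypothetical stand-in that would POST the candidate text via action=submit and check for a non-empty reply:

    <?php
    function findShortestFailingPrefix( $text ) {
        $lo = 0;                        // invariant: prefix of length $lo saves fine
        $hi = strlen( $text );          // invariant: prefix of length $hi fails
        while ( $hi - $lo > 1 ) {
            $mid = (int)( ( $lo + $hi ) / 2 );
            if ( saveSucceeds( substr( $text, 0, $mid ) ) ) {
                $lo = $mid;
            } else {
                $hi = $mid;
            }
        }
        return substr( $text, 0, $hi );
    }

    function saveSucceeds( $prefix ) { return true; /* POST and inspect the response */ }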
[14:02:43] :( [14:02:47] would love to blame SMW \o/ [14:02:53] ok, I'm not being productive here [14:02:57] I'm hungry! [14:05:48] it takes quite a while to give this zero-sized reply [14:06:03] successful requests are quicker [14:06:07] *paulproteus nods [14:07:23] TimStarling, http://wiki.freeculture.org/Chapters is the longest initial substring that works. [14:07:36] it seems a bit random [14:07:58] maybe it's size-dependent [14:08:31] Consistently "[[Chapters]] are the heart of the free culture student movement. Chapters " fails for me where "[[Chapters]] are the heart of the free culture student movement." succeeds. [14:09:06] ok, let's go back to the stack then [14:09:14] ? [14:09:27] this isn't telling us anything [14:09:43] well, not anything that's useful without knowing what the faulty component is [14:09:50] Yes, agreed. [14:10:48] I can put the whole first paragraph in, I just did [14:10:55] "Yay" [14:10:59] So what is this, and how can we fix it? (-: [14:12:38] Hey, should I turn the magic backtrace ticks thing back on? [14:12:46] I disabled it a moment back. [14:12:54] "a moment back" == "when we started bisecting" [14:13:00] no, we've ruled that out already, I think [14:13:18] the first approach we were trying was to look at the stack [14:13:25] good afternoon [14:13:31] i.e. the calling stack, or as you go wider, the network stack too [14:13:35] is mediawiki oracle support completely abandoned? [14:14:29] do you have the mediawiki debug log enabled? [14:14:39] paulproteus, not rza [14:14:54] TimStarling, No I think. [14:15:08] actually look at the access log [14:15:13] the apache access log [14:15:28] does it list the failing requests? [14:16:10] Yes, I think so. [14:16:15] Let me verify by making one fail I guess. [14:16:16] BTW: [14:17:24] what file should i edit to add adsense to my wikimedia ? [14:18:01] 122.163.142.44 - - [27/Nov/2007:23:16:58 +0900] "POST /index.php?title=Chapters&action=submit HTTP/1.1" 408 - "-" "-" "-" [14:18:16] That's "request timeout" fwiw. [14:18:28] 408 is request size [14:18:43] mm [14:18:46] interesting [14:19:31] Er, are you sure it's not the HTTP response code domas? [14:19:49] I think "408 -" means "response=408 responsesize=- " [14:19:52] ok, it is [14:20:06] Is, like, the mysqld totally confused or something? [14:20:11] no [14:20:20] 04(REOPENED) [FEATURE] i18n support for OpenID - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=7033 +comment (10siebrand) [14:20:26] looks like it's on the client side of apache [14:20:57] TimStarling, "client side" of Apache? I thought Apache was a web server, and I'm not using it in proxy mode. [14:21:15] TimStarling, BTW, if you want a shell or something, feel free. [14:21:19] I mean in the stack [14:21:22] Or a DB dump or something. [14:21:39] mediawiki doesn't return 408 [14:21:44] excuse me, what file should i edit to add adsense to my wikimedia ? monobook.php I think, right? [14:21:53] whiles: isn't it on FAQ? [14:21:56] and, 'mediawiki' [14:22:07] human -> browser -> network -> apache -> mediawiki -> MySQL [14:22:16] *paulproteus nods [14:22:20] mediawiki :p [14:22:21] Er, freaky-deaky. [14:22:25] the left is the client side, the right is the backend [14:22:36] can you strace the httpd -X getting bad request? [14:22:38] and publish that somewhere? [14:22:41] Boy oh boy. [14:22:44] Sure I can publish anything. [14:22:46] well, or any httpd [14:23:10] how long does it take to get the empty response? [14:23:13] maybe the connection is being cut off by the network or the OS [14:23:13] ca. 
30s [14:23:19] or comcast [14:23:22] TimStarling, It's possible I suppose. [14:23:23] *domas ducks [14:23:32] paulproteus: if you access locally, is it cut? [14:23:37] paulproteus: or if you use https [14:23:39] or SSL tunnel [14:23:40] etc [14:23:48] FWIW there is no SSL on this Apache. [14:24:21] you could use lynx or w3m to do the edit [14:24:33] in a terminal on the server via ssh [14:24:41] inetnum: 202.221.145.0 - 202.221.145.63 [14:24:41] netname: JOIITOS-LAB [14:24:41] descr: DG Incubation, Inc. [14:24:42] i'm sorry domas, i didn't find it on the FAQ.. [14:24:43] heeeeeee [14:25:04] I doubt Joi is filtering anything, but.. :) [14:25:45] that's in japan [14:26:03] yup [14:26:07] I was almost going to ask "I don't suppose you're accessing this via the great firewall of china" [14:26:14] but it seemed like a silly question [14:26:30] well, not only china has content filtering [14:27:18] http://freeculture.org/roflcopter is the strace [14:27:40] sure, Qatar has the whole of the country accessing the web through a netnanny proxy [14:28:18] whiles: http://www.mediawiki.org/wiki/Extension:GoogleAdSense [14:28:19] Hello [14:28:27] but what other country will send RST packets if you send a bad word through a connection traversing the firewall? [14:30:30] paulproteus: you could've told you have Wordpress embedded [21:09:47] Duesentrieb: Ah, that could be it [21:09:53] Well, let's see how long he survives :) [21:10:10] MZMcBride: ts stuff is being done last, it seems :) [21:10:20] ah ok [21:13:37] 03(mod) Split user right "editusercssjs" - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12110 +comment (10ThomasBleher) [21:16:30] 03siebrand * r27911 10/trunk/phase3/languages/messages/ (17 files): [21:16:30] * Fix messages not using "Project:" where MessagesEn.php does [21:16:30] * rebuildLanguage through Betawiki export [21:16:35] js can be executed inside of css? [21:16:47] eh? [21:16:56] IE suports expression() [21:17:04] i hate IE [21:17:13] 14(DUP) Give ability to make images linking to any page - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=12114 +comment (10ThomasBleher) [21:17:14] 03(mod) Allow images that link somewhere other than the image page ( using normal link syntax) - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=539 +comment (10ThomasBleher) [21:17:24] bug 12110 has a link to more info [21:18:35] @trusted [21:18:35] [wikipedia/AmiDaniel, s23.org, wikimedia/Eagle-101, wikimedia/Soroush83, 68-112-142-161.dhcp.stcd.mn.charter.com, wikia/Jack-Phoenix, silentflame/member/pdpc.base.minuteelectron, wikipedia/AzaToth, cpe-69-201-152-135.nyc.res.rr.com, wikimedia/Danny-B., cc1081997-b.harli1.fr.home.nl, wikipedia/duesentrieb, clematis.knams.wikimedia.org, wikipedia/ialex, unaffiliated/timlaqua, wikimedia/Pathoschild, wikipedia/MZMcBride, c-67-171-249-42.hsd1.wa.comcast.net, fuchsia.knams.wikimedia.org, wikipedia/simetrical, 222-153-0-95.jetstream.xtra.co.nz] [21:18:54] @untrust 68-112-142-161.dhcp.stcd.mn.charter.com [21:18:54] Removed 68-112-142-161.dhcp.stcd.mn.charter.com from trusted hostnames list. [21:19:04] ;-) [21:19:34] oops, wrong channel. ;-) my bad. [21:19:47] wb mwbot [21:21:33] lol? [21:22:25] *amidaniel wonders what TimLaqua is doing to poor mwbot :( [21:22:32] lol [21:22:33] !sop [21:22:33] Standard Operating Procedure [21:22:37] indeed [21:22:44] !mediawiki [21:22:44] MediaWiki is a free software wiki package originally written for Wikipedia. It is now used by several other projects of the non-profit Wikimedia Foundation and by many other wikis. 
You can find out more about it at http://www.mediawiki.org [21:22:51] !mediawiki > chuck [21:22:51] You don't have permission to do that. [21:23:08] well okay then lol [21:23:09] denied :) [21:23:18] Sorry, Dave. [21:23:19] hey, it should totatally do file sends. ;-) [21:23:30] !mediawiki | chuck [21:23:30] chuck: MediaWiki is a free software wiki package originally written for Wikipedia. It is now used by several other projects of the non-profit Wikimedia Foundation and by many other wikis. You can find out more about it at http://www.mediawiki.org [21:23:35] Is probably what you were looking for :) [21:23:41] ohh [21:23:47] lol [21:23:57] it needs a !hi or !hello saying "Hello, welcome to #mediawiki" or something like that [21:24:01] chuck: #mwbot is a good place to play w/ him. [21:24:07] ok coolheh [21:24:08] he's pretty friendly. [21:24:16] what's it written in? [21:24:21] !hi [21:24:25] !mwbot [21:24:25] Hi! I'm mwbot, a bot that was quickly whipped up by Daniel Cannon (AmiDaniel) to help out around #mediawiki. Some quick help is at http://www.mediawiki.org/wiki/Mwbot, you can find all my source code at http://amidaniel.com/viewvc/trunk/MWBot/?root=svn [21:24:34] cool [21:24:59] Oh .. I forgot to add those new features over thanksgiving like I said I was going to ... [21:25:29] Poor mwbot is so unloved :( [21:25:31] *amidaniel pets mwbot [21:25:58] amidaniel: so what is it written in? [21:26:27] chuck: java [21:26:33] Whats the best way to add tabs at the bottom to the monobook skin that would work with all browsers (edit, discussion, etc..)? [21:26:36] oh, awesome [21:28:06] test34_: look here: http://www.wikia.com/wiki/User:Splarka/tricks [21:28:19] thanks MZMcBride [21:28:38] *amidaniel is scared of any manual called "tricks" [21:28:59] splarka works for wikia [21:29:14] *amidaniel is scared of splarka too :) [21:29:35] test34_: or http://meta.wikimedia.org/wiki/Help:User_style/bottom_tabs (might be outdated though) [21:29:54] That one at least has a better-sounding title :) [21:31:19] hmm, that meta page hasn't been edited in a year [21:31:32] thanks AlexSm.. I tried the ones on this page but they dont seem to be very compatible unless you use the simple version which arent tabs [21:36:56] 03catrope * r27913 10/trunk/phase3/ (4 files in 3 dirs): [21:36:56] API: [21:36:56] * Adding rvdiffformat parameter to prop=revisions [21:36:56] * Creating formatters for unified (UnifiedDiffFormatter) and array (ArrayDiffFormatter) diffs [21:37:31] 03tlaqua * r27914 10/trunk/extensions/UserMerge/UserMerge.php: Changing extensionCredits description from 'userrights' to 'usermerge' permission. [21:39:05] amidaniel: how exactly would i get my own instance of mwbot running? [21:39:22] 03siebrand * r27915 10/trunk/phase3/languages/messages/ (16 files): [21:39:22] * Fix messages not using "Project:" where MessagesEn.php does [21:39:22] * rebuildLanguage through Betawiki export [21:39:49] chuck: How much is it worth to ya? :) [21:39:55] lol [21:40:02] amidaniel: but how do i run it? [21:40:10] chuck: Do you have a java compiler handy? [21:40:11] amidaniel: all projects on toolserver much be open source!! [21:40:17] amidaniel: i do [21:40:20] flyingparchment: Hehe :) [21:40:23] does javac work? 
[21:40:29] chuck: sure thing [21:40:44] amidaniel: okay [21:40:48] i'll download the source [21:40:48] chuck: So svn co http://amidaniel.com/svnroot/trunk/MWBot [21:40:57] Then compile all that into a perty jar file [21:41:10] With main class = MWBot [21:41:13] Then run her [21:41:34] you don't have a build.xml? [21:41:50] Oh, eclipse probably stuck one in there [21:41:52] okay so how do i compile it? [21:41:57] *amidaniel peeks [21:42:09] oh, eclipse.. i think it can generate them, but it won't by default [21:42:36] amidaniel: ^ [21:42:39] that's one nice thing about netbeans, it will produce a build.xml, even if it's full of netbeans-specifit stuff [21:42:43] Yeah, didn't stick one in there [21:42:51] Hmm .. I don't care much for netbeans [21:42:58] Then again I don't care much for eclipse either :) [21:43:04] i don't like both all that much [21:43:18] netbeans has some nicer stuff (a better JSP editor, better ant integration), but a horrid interface (Swing) [21:43:23] *amidaniel tries to think of what commands chuck needs to give to the compiler .. sec [21:43:41] i like IDEA, but it costs like $600 :) [21:43:45] I'm honestly just not an IDE kinda guy .. [21:43:49] *amidaniel prefers vim :) [21:43:57] i'm not either, except for java [21:45:10] bletch, javac's a pita ... can I send you a jar chuck> [21:45:28] sure [21:45:57] 03siebrand * r27916 10/trunk/phase3/languages/messages/ (5 files): [21:45:57] * Fix messages not using "Project:" where MessagesEn.php does [21:45:57] * rebuildLanguage through Betawiki export [21:46:23] Wow, dcc didn't explode on me for once :D [21:46:26] amidaniel: lol [21:46:34] amidaniel: okay, so *now* how do i run it? [21:46:40] chuck: alright, now run that with java -jar mwbot.jar [21:46:57] And type in whatever it asks you to type in [21:47:21] okay it spit out a bunch of ugly errors [21:47:29] my guess is that my java version is too old lol [21:47:37] chuck: Oh, yeah, it's java 1.6 [21:47:53] Should be compatible down to 1.5, but you'd need to compile it yourself then [21:47:54] ah [21:47:56] okay [21:48:07] 03(mod) Freezes up - 10http://bugzilla.wikimedia.org/show_bug.cgi?id=11858 (10brion) [21:50:57] amidaniel: how am i supposed to add to the trusted list if your hostmask is the only trusted one lolol [21:51:11] chuck: That is a question indeed, isn't it :) [21:51:17] Aye, the beauty of hard-coding :) [21:51:20] lol [21:51:55] amidaniel: care to join ##piratebot and fix that for me please? hehe [21:57:59] isn't there a magic word to let me specify a page's title? [21:58:18] {{PAGENAME}} [21:58:25] or {{FULLPAGENAME}} if you want the namespace [21:58:32] How can I make a numbered list, with some things in-between each # line? [21:58:42] Solifugus: such as? [21:58:46] Skizzerz: Not to GET the page name .. but to set it [21:59:03] DigitallyBorn: {{DISPLAYTITLE: }} [21:59:07] DigitallyBorn: {{Displaytitle:something}}, although I think the something has to be close to the original title [21:59:14] aha .. thanks [21:59:26] Skizzerz: such as so it shows like 1. blah blah
some other stuff
2. blah blah [21:59:36] except on multiple liens [21:59:43] s/liens/lines/ [21:59:49] not sure if that's possible with # without resetting the numbering... [22:00:31] Skizzerz: yep. it resets the numbering so it's always line 1. Is there another way to do it, then? [22:00:40] yes, via their HTML elements [22:00:51] manual... ok.. then.. [22:01:09] I'll whip up a pastey if you want it [22:01:24] sure [22:01:33] put lots of frosting [22:01:42] frosting = explanations? [22:03:58] "whip up" = baking = referance to frosting [22:04:47] "pastry" too :) [22:04:48] ah [22:04:53] *Skizzerz didn't quite get that :P [22:04:53] Is there a way I can relax the displaytitle rules? [22:05:07] sure, hack up the code [22:05:09] without hacking? no [22:05:40] DigitallyBorn: im most cases, a redirect will do what you want. [22:05:43] Skizzerz: it was a joke.. my poor sense of humor. [22:05:45] you might be able to create your own function to that :) [22:06:01] Solifugus: don't worry about it, I just completely missed the joke there [22:06:16] DigitallyBorn: a redirect is not very different from using displaytitle [22:08:32] Solifugus: http://mediawiki.pastey.net/78145 [22:16:59] Skizzerz: thanks.. [22:17:04] np [22:19:17] I've got two questions: Is there any eta on MW 1.12? And How can I tell MW to stop using gzip for delivering pages? [22:24:30] jamasi: hopefully within a month, however my schedule's cramped as i'm moving cross-country in a couple weeks :) [22:24:39] and for the gz, you can disable that in your LocalSettings.php [22:24:47] look up where it checks for gzip stuff [22:24:52] *brion_away wanders home for the evening [22:26:09] brion_away, thanks and have a good move. [22:27:10] hey got a huge problem. i'm supposed to deliver my wiki in 2 days and today this problem occured. all of a sudden, alla pics in my wiki (pics like next page, previous age, pics from the toolbox, even the pic the is always down on the rigth that says powered by mediawiki... all of them, just won't show. any ideas what's wrong? [22:28:03] the file /skins/common/images is still there and tha pics are there. where do i look now for probs? [22:29:30] help anyone? is anyone even there? [22:29:50] people are here [22:29:56] i'm not sure why you're seeing that [22:30:20] anyone know if there's a way to cc/bcc an administrator [22:30:36] neither do i :( do u know which file may contain the name of the pics? just to see if tthat file is missing [22:30:40] when emails are generated to new users - the one where it says, click here to confirm your email address [22:31:07] emy: did your apache config change? [22:31:19] can you visit one of the image urls directly? [22:31:42] e.g. http://YOURHOST/skins-1.5/common/images/poweredby_mediawiki_88x31.png [22:32:02] emy: what web server are you using [22:32:28] emy: do not put the wiki into the document root. especially not if you want to use rewrite rules or aliases to get pretty urls. [22:32:40] are you, per chance, using rewrite rules? [22:33:34] jlerner> didn't.. and no one else from what i know. though i did ask my teacher to do the chmod 777 thing that is in the instructions in mediawiki to allow fie uploads. but i don;t think he got the time to do it [22:34:50] when i try to access it directly it says access forbiden [22:35:05] so you have to allow it [22:35:13] then that's your problem... if you can't access it directly, then your pages can't either [22:35:18] a) filesystem permissions and b= check .htaccess [22:35:35] but everthing was working. why did that change suddenly? 
[22:36:02] emy: because someone changed the server config [22:36:10] maybe my teacher did something :/ [22:36:22] emy: very likely: chmod, wrong. [22:36:39] the .htaccess? where is that exactly:$? [22:36:55] in any web directory [22:37:01] if it exists. [22:37:11] aaa ok ;) [22:37:12] it controlls... access from the web. google it. [22:37:18] it's usually hidden though, so if you don't have hidden files shown, you might not see it :) [22:37:18] so the chmod is wrong? [22:37:29] but my guess is indeed that the permissions on the filesystem level are set wrong [22:38:10] "access forbidden" has one of two reasons: bad file permissions, or bad config in .htacess (or the same in httpd.conf) [22:38:27] I've got another question: Where can I find a manual regarding the proper use/setup of multiple squid caches in a decentralised environment to speed-up MW? [22:39:31] maybe because, i was trying to put a logo at my wiki instead of the default one. what nothing would work, and then i saw at the instructions that i had to enable uploads and that had to be done "chmod 777 or allow the Apache user to write to it," maybe that did? [22:40:04] because i asked the teacher to do it [22:40:31] jamasi: not sure if there exists mw-specific documentation for that. you should be able to find something about squids and load balancing though [22:41:02] jamasi: if you have a *concrete* question about setup, try to contact mark (if he's not busy reconfiguring servers, as he was today) [22:41:24] emy: if he did it wrong, that may be the cause. [22:41:32] emy: if he did it right, it wouldn't. [22:41:33] We're using a squid right now, but somehow there seems to be a problem wrt purging. [22:41:56] jamasi: there are special options for that in mediawiki - did you set them? [22:42:21] update in templates take a quite a long time (looks like a query has to timeout). [22:42:44] are those options documented somewhere? [22:42:56] they are documented in DefaultSettings.php [22:43:02] maybe on mediawiki.org too [22:43:06] well, those are set, afaik [22:43:14] check them :) [22:43:34] anyway... if you edit a templates, and that takes long... is that sounds like a job queue issue [22:43:41] is the template used a lot? [22:43:51] i mean, more than, say, 20000 times? [22:44:32] well, wgUseSquid and wgSquidServers are set. [22:45:01] no, more like 5 times and it's taking > 1 minute [22:45:20] that's odd. [22:45:28] unless omething is very wrong with your database. [22:45:40] sure it's specific to templates? and not an odd coincidence? [22:46:09] it's extremely obvious when editing templates. deleting pages is also quite slow. [22:46:24] moving, too. [22:46:38] soundy like your database is suffering general slowness [22:46:55] have you thought about using multiple slave dbs and replication? [22:47:06] is the db on the same box as the webserver? [22:47:11] but only on updates [22:47:13] i fixed it.. apparently he changed the permissions of the files. and haf "unchekced" the executable to all users [22:47:32] emy: executable on directories means "access"... [22:47:35] currently it's on the same box. [22:47:44] ;) [22:47:44] jamasi: change that first of all. [22:48:09] I'll talk the admins into doing this. [22:48:18] so, while i'm here... can u tell how i change the logo cause i've done everything i;ve read and nothiung has worked :( [22:48:42] emy: set $wgLogo to the full url of whatever you want as a logo [22:48:52] the strange thing is that I only started to observe the lags after the squid was enabled. 
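For reference, the two LocalSettings.php knobs in play in this stretch: $wgLogo for emy, and the squid pair for jamasi. The path and IP are placeholders:

    <?php
    # LocalSettings.php
    $wgLogo = "$wgScriptPath/skins/common/images/mylogo.png"; // full URL of the logo image

    $wgUseSquid     = true;                   // have MediaWiki send purges
    $wgSquidServers = array( '192.0.2.10' );  // the squid(s) sitting in front of Apache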
[22:49:15] jamasi: turn it off and see if it gets better :) if yes, something strange is happening... [22:49:56] the url to the pic i want in /skins/common/images? [22:50:48] yes [22:51:58] in local settings? and that's it? [22:52:02] yes [22:52:14] it should then set the logo [22:52:33] won;t work:/ [22:52:46] do i have to enable something first maybe? [22:52:49] what's the problem you're having? [22:52:52] how do I make a "welcome" page for someone who logs in for the first time? [22:53:04] If you aren't seeing the new logo, try clearing your cache [22:53:16] aaa ok i'll try that ;) [22:53:38] carambola: like, a template to put on their page to welcome them, or an auto-redirect to the welcome page after they log in for the first time? [22:53:53] *Skizzerz isn't sure the second one is possible... [22:53:53] the same:/ [22:54:24] Duesentrieb, without the squid edits are not taking as long as before. [22:55:30] so I think it might have something to do with the purges and squid locking the tables or something. [22:55:53] Skizzerz: i've got a {{welcome}} template ready to go [22:57:05] i think i'd prefer to populate their user page with the template and send the user to their user page for the first login [22:57:41] *Skizzerz has no idea how to do that, but there might be an extension that has that functionality [22:58:17] jamasi: "squid locking the tables"? i don't know that much about squid, but i'm pretty sure they should not use a database at all. [22:58:41] can somebody please check this sql syntax? http://rafb.net/p/kPDzjZ71.html [22:58:46] jamasi: it may indeed have to do with purging though. although i though that's done via udp, so, non-blocking. but i can be mistakin [22:59:01] I can't figure what's wrong with the sql syntax [22:59:04] i don't know the details really, sorry [22:59:16] Looks like I'll have to dig deeper into this. [23:14:00] how to I allow upload of additional extenstions from the wiki upload page? [23:14:33] !upload | pyurt [23:14:33] pyurt : File uploads are an often-used feature of MediaWiki, but are disabled by default in all current release versions. To enable them, first make the upload directory (default images) writable by PHP, then set $wgEnableUploads to true in LocalSettings.php (i.e. "$wgEnableUploads = true;"). See for more info. [23:15:30] thnks [23:20:49] anyone know how to auto-populate a new user's userpage? and then send the new user there after first login? [23:21:21] i just skimmed through all 2751 lines of DefaultSettings.php to no avail [23:22:00] you'd need an extension for that functionality [23:22:26] allow me to find a few you could potentially use [23:22:42] I thought I remembered that en.wikipedia.org had that functionality... such an extension is not listed in their Special:Version [23:23:04] nope [23:23:04] might've been a bot then [23:23:08] it does not have such functionality [23:23:16] but I don't remember en.wp having that... [23:23:26] maybe commons? [23:25:46] who could i go to to get my own mediawiki mod made? [23:26:08] yourself! [23:26:35] domas, i don't have the skill necessary [23:27:16] Messedrocker: perhaps what you need is already written [23:27:30] i hope [23:27:31] Messedrocker: What are you looking for? [23:27:56] what i want is a mod that for each page there will be a column running down the side where people can add comments per article section [23:28:15] one article section would have a comments section, another would have its own... 
[23:28:58] Sounds a little bit like some of what's been implemented in liquidthreads [23:29:21] carambola: http://www.mediawiki.org/wiki/Extension:NewUserMessage might be close to what you're looking for [23:30:32] they'll see the message there regardless of if you have that extension or not, but the extension is needed if you want to save it on their talk page [23:30:48] amidaniel, what in particular [23:31:08] Skizzerz: thx... testing now [23:31:42] Messedrocker: I think david put in some kind of a per-section comment dealio. I dunno .. try it out or talk to him [23:34:43] Skizzerz: beautiful. Now, if only my template was correct. thx [23:34:49] np :) [23:35:05] something about {{SUBST:BASEPAGENAME}} isn't very friendly :) [23:37:31] 03siebrand * r27917 10/trunk/phase3/languages/messages/ (129 files): Correct namespaces in messages [23:40:28] carambola: if you want it to subst on the talk page but NOT on the message page, try {{SUBST:BASEPAGENAME}} [23:41:46] I fixed it... I just removed the SUBST: part. works now. [23:42:14] that works too :) [23:42:52] I also modded the NewUserMessage to subst the template rather than transclude it [23:43:21] it transcluded it o.0 [23:43:26] that's weird [23:43:36] that way, the user can dive right in to see the wiki markup [23:44:20] well, it created a talk page with the text {{$wgNewUserMessageTemplate|username}} [23:44:34] i just modded it to put a subst: after the {{ [23:44:48] hi [23:44:52] hello [23:44:53] hi [23:45:12] this is someone that work on http://www.mediawiki.org/wiki/Bitfields_for_rev_deleted this? [23:45:38] lusum: You're probably looking for AaronSchulz who doesn't appear to be on right now [23:45:49] lasum: it's on mw, you just have to enable it [23:45:55] well, it's on mw 1.11 at least [23:46:00] really? [23:46:02] yes [23:46:10] just add the following to your localsettings.php: [23:46:11] we can enable it on it.wiki? [23:46:26] ok italian wikipedia too? [23:46:29] and how? [23:46:30] lusum: No [23:46:38] why? :( :( [23:46:40] sorry, thought this was for local install :P [23:46:43] It's not stable enough / hasn't been tested thoroughly enough [23:46:45] there are still a few bugs with it [23:47:01] when will be enabled for all wikis? [23:47:10] Once code review is completed and the bugs fixed :) [23:47:11] we need it very much [23:47:12] for example, deleting a page the normal way erases all revision deletion info [23:47:31] we have a lot of copyright problems that need that [23:47:39] so, by undeleting, sysops can superficially view info that bureaucrats have decided to hide from even the sysops [23:47:47] lusum: Do you have any oversights on itwiki? [23:47:53] If not, it would be wise to get some [23:48:03] i'm sysop and checkuser [23:48:18] oversight? [23:48:26] yeah, all wikimedia wikis have Oversight (IIRC), so get some of those [23:48:30] !oversight | lusum [23:48:30] lusum: Oversighting removes revisions from access by normal users and sysops. More information is available here: . [23:48:54] Skizzerz: No, not all wikimedia wikis. It's gladly turned on when requested though. [23:49:14] also, oversight's permissions clash with the bitfield revision deletion permissions [23:49:19] Messedrocker: I think david put in some kind of << david who [23:49:23] no, ni [23:49:30] Messedrocker: mccabe [23:49:34] we don't need oversight [23:49:40] Messedrocker: davidmccabe in other words [23:49:47] we need only to hide versions to normal users [23:50:02] lusum: A normal sysop can do that easily already. 
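A guess at the wiring for the extension Skizzerz links above. $wgNewUserMessageTemplate is the variable that shows up later in this log; the rest is assumed, so check the extension page:

    <?php
    # LocalSettings.php
    require_once( "$IP/extensions/NewUserMessage/NewUserMessage.php" );
    $wgNewUserMessageTemplate = 'Welcome';  // template placed on each new user's talk page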
[23:50:09] yes [23:50:30] for example, 'hiderevision' is used in oversight for the ability to use Special:HideRevision to remove revisions, 'hiderevision' is used in the bitfield for the ability to hide revision info as well, so you would inadvertently give all sysops the ability to use oversight by accident if it was installed [23:50:47] ok [23:51:03] Skizzerz: I presume there are bugs for all of these? [23:51:06] not sure [23:51:17] that's just something I found out while I was toying around with it on my wiki [23:51:20] the problem is restoring a voice, some version already deleted could be restored [23:51:40] yes, it can [23:52:05] and in the case of huge voices, could be cool to hide only the latest versions instead cancel the voice and restore all but the deleted versions [23:52:05] lusum: sure, but rev_deleted doesn't do much in the way of repairing that "probleM" [23:53:03] well, in a way it does, it just lets you specify what info to hide [23:53:12] how? [23:53:48] yes, but we don't need oversight [23:54:04] we need only a simplier method to delete/restore [23:54:08] only that [23:54:22] lusum: Well, that will be coming in rev_deleted, but it may be a while. [23:54:50] yes [23:55:05] we have something like 600 pages to clean up [23:55:14] and 50 every day more [23:55:23] Umm .. why? [23:55:28] we have perfectioned the bot for copyright control [23:56:21] and we have 50 new signalations every day [23:57:08] how do I quote a chat log in mediawiki? [23:57:08] Why do copyright violations need to be selectively deleted from page histories? [23:57:22] italian law [23:57:30] no trace left [23:57:45] Hm .. odd [23:57:52] if I just paste it, it doesn't preserve line jumps between seperate statements, but if I use
<pre> </pre> the lines don't wrap.
[23:58:02] 	how do I get the lines to wrap, but also preserve newlines?
[23:59:00] run a find-and-replace in a text editing utility on the newline character from the log, and then replace it with <br>
[23:59:39] I was going to do that, just seems really ugly [23:59:46] but if that's the only way...
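Skizzerz's find-and-replace as a throwaway script, for anyone pasting logs into a wiki (filenames hypothetical):

    <?php
    // append <br> to every line: the wiki then wraps long lines normally
    // but still renders each chat message on its own line
    $log = file_get_contents( 'chat.log' );
    file_put_contents( 'chat-wikitext.txt', str_replace( "\n", "<br>\n", $log ) );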