[01:38:49] is petan still breaking SQL?
[01:54:47] !log incubator Assigned public IP to instance incubator-web
[01:54:48] Logged the message, Master
[02:05:06] PROBLEM Current Load is now: CRITICAL on bots-cb bots-cb output: CRITICAL - load average: 88.40, 41.74, 16.41
[02:05:46] PROBLEM Total Processes is now: CRITICAL on bots-cb bots-cb output: CHECK_NRPE: Socket timeout after 10 seconds.
[02:10:36] RECOVERY Total Processes is now: OK on bots-cb bots-cb output: PROCS OK: 101 processes
[02:13:28] a load avg of 88
[02:13:37] well... something is going down on that thing
[02:14:56] PROBLEM Current Load is now: WARNING on bots-cb bots-cb output: WARNING - load average: 0.68, 10.74, 13.86
[02:17:57] !log incubator Deleted obsolete instance incubator-squid
[02:17:58] Logged the message, Master
[02:23:55] PROBLEM Current Load is now: CRITICAL on incubator-sql incubator-sql output: Connection refused by host
[02:24:35] PROBLEM Current Users is now: CRITICAL on incubator-sql incubator-sql output: CHECK_NRPE: Error - Could not complete SSL handshake.
[02:25:15] PROBLEM Disk Space is now: CRITICAL on incubator-sql incubator-sql output: CHECK_NRPE: Error - Could not complete SSL handshake.
[02:26:05] PROBLEM Free ram is now: CRITICAL on incubator-sql incubator-sql output: CHECK_NRPE: Error - Could not complete SSL handshake.
[02:27:25] PROBLEM Total Processes is now: CRITICAL on incubator-sql incubator-sql output: CHECK_NRPE: Error - Could not complete SSL handshake.
[02:28:15] PROBLEM dpkg-check is now: CRITICAL on incubator-sql incubator-sql output: CHECK_NRPE: Error - Could not complete SSL handshake.
[02:28:55] RECOVERY Current Load is now: OK on incubator-sql incubator-sql output: OK - load average: 0.52, 0.59, 0.37
[02:29:35] RECOVERY Current Users is now: OK on incubator-sql incubator-sql output: USERS OK - 1 users currently logged in
[02:30:15] RECOVERY Disk Space is now: OK on incubator-sql incubator-sql output: DISK OK
[02:31:05] RECOVERY Free ram is now: OK on incubator-sql incubator-sql output: OK: 84% free memory
[02:31:29] hi TParis
[02:31:32] Hey
[02:31:38] !accountreq
[02:31:38] in case you want to have an account on labs, please contact someone who is in charge of doing that: Ryan.Lane, m.utante or ssmolle.tt
[02:31:44] hmmm
[02:32:06] !account-questions | TParis
[02:32:06] TParis : I need the following info from you: 1. Your preferred wiki user name. This will also be your git username, so if you'd prefer this to be your real name, then provide your real name. 2. Your preferred email address. 3. Your SVN account name, or your preferred shell account name, if you do not have SVN access.
[02:32:25] RECOVERY Total Processes is now: OK on incubator-sql incubator-sql output: PROCS OK: 87 processes
[02:32:29] looks like we have a new guy!
[02:33:15] RECOVERY dpkg-check is now: OK on incubator-sql incubator-sql output: All packages OK
[02:33:26] Well I wanted to throw out some ideas first to see if these ideas flow right with WMF Labs.
[02:33:35] TParis: you can of course PM me your email if you do not want it logged, etc.
[02:33:35] OK
[02:34:23] Well, we may as well get you a Labsconsole account since that will also allow you to push diffs via Git to our new Gerrit code review system, put your tools code into our git repo at https://www.mediawiki.org/wiki/Git/New_repositories etc
[02:35:05] RECOVERY Current Load is now: OK on bots-cb bots-cb output: OK - load average: 0.47, 0.78, 4.25
[02:35:12] But go ahead with your idea, TParis
[02:35:14] So as many of y'all might know, I took over most of X's tools.
The problem is that right after I took them over, my workload jumped from paying clients and I had to table fixing up X's code to work on my account. I've managed to find the time now and they are working, but someone presented this idea to me and I sort of like it. To prevent a similar 'account expired' in the future, and since X's code is released under the GNU (and I have his perm
[02:39:24] TParis: your line got cut off!
[02:40:34] Ohh, sorry. I'll copy/paste from the halfway point.
[02:40:41] To prevent a similar 'account expired' in the future, and since X's code is released under the GNU (and I have his permission anyway), the idea is that perhaps we can set up a "tools" project to throw all of X's code onto that can be maintained by more than one user.
[02:47:49] Oooh
[02:47:53] That seems like a good idea to me!
[02:48:08] TParis: yeah - Labs is supposed to be a community for sure
[02:48:11] where people can help each other out
[02:48:16] so I think this sounds like a good idea
[02:48:52] You might consult with Coren, who is moving a bot over to Labs
[02:53:19] Alright, I'll read more about Labs and send in an account request tomorrow.
[02:53:31] If you like!
[02:53:33] I just got home from a 4 hour drive and I'm pretty tired tonight.
[02:53:36] Oh dear.
[02:53:53] OK, talk to you later! if I am not around, https://labsconsole.wikimedia.org/wiki/Help:Access#Access_FAQ can help you get an account.
[02:54:31] Thanks. Please do consider my suggestion about helping out at the event, I can provide ample references to my experience.
[02:55:25] Thank you! Well, if you would like to help out at the event, just add yourself to https://www.mediawiki.org/wiki/Berlin_Hackathon_2012#Volunteers , TParis
[02:55:52] and I'll hear more from you later, and we'll see!
[02:56:21] But I think you would be more valuable as a technical participant and learner than as an event org :-) (We have people at WMF and WMDE who will be doing event org)
[02:57:10] Sure, either way would be great. Just figured I'd throw it out there since I have experience doing it.
[02:57:19] Understood. :)
[02:57:21] Sleep well!
[02:57:58] Ohh, I'm still a little ways from crashing. I just am too tired to research it all tonight. That'll have to be saved for tomorrow.
[02:59:07] Nod.
[03:00:09] I'll have plenty of time tomorrow ;) I'm on 'pee duty' all week. One of the great incentives to being an NCO. I get to watch folks pee in a cup. How exciting :D
[03:00:36] WOW
[03:03:33] hi multichill
[03:08:45] PROBLEM host: incubator-sql is DOWN address: incubator-sql CRITICAL - Host Unreachable (incubator-sql)
[03:42:25] PROBLEM Free ram is now: WARNING on mobile-enwp mobile-enwp output: Warning: 19% free memory
[05:02:25] RECOVERY Free ram is now: OK on mobile-enwp mobile-enwp output: OK: 21% free memory
[05:20:25] PROBLEM Free ram is now: WARNING on mobile-enwp mobile-enwp output: Warning: 19% free memory
[05:25:25] RECOVERY Free ram is now: OK on mobile-enwp mobile-enwp output: OK: 30% free memory
[06:39:28] PROBLEM Current Load is now: WARNING on bots-sql3 bots-sql3 output: WARNING - load average: 10.03, 10.25, 7.05
[06:40:37] PROBLEM Current Load is now: WARNING on mobile-enwp mobile-enwp output: WARNING - load average: 3.60, 6.48, 5.12
[06:45:57] RECOVERY Current Load is now: OK on mobile-enwp mobile-enwp output: OK - load average: 3.03, 4.29, 4.52
[06:48:18] RECOVERY Disk Space is now: OK on aggregator1 aggregator1 output: DISK OK
[07:34:26] RECOVERY Current Load is now: OK on bots-sql3 bots-sql3 output: OK - load average: 1.16, 2.07, 4.35
[07:46:16] PROBLEM Disk Space is now: WARNING on aggregator1 aggregator1 output: DISK WARNING - free space: / 513 MB (5% inode=94%):
[07:52:42] !log incubator Created wikis for documentation purpose and development testing
[07:52:43] Logged the message, Master
[08:11:17] PROBLEM Disk Space is now: CRITICAL on aggregator1 aggregator1 output: DISK CRITICAL - free space: / 258 MB (2% inode=94%):
[08:13:09] New review: Dzahn; "The part about changing the logo should be removed, since it has been merged in another change (even..." [operations/puppet] (test); V: 0 C: -1; - https://gerrit.wikimedia.org/r/2012
[10:02:27] PROBLEM Free ram is now: WARNING on mobile-feeds mobile-feeds output: Warning: 12% free memory
[10:07:27] PROBLEM Free ram is now: CRITICAL on mobile-feeds mobile-feeds output: Critical: 4% free memory
[10:08:57] Please don't oom, I'm at work...
[10:09:29] heh
[10:12:27] RECOVERY Free ram is now: OK on mobile-feeds mobile-feeds output: OK: 35% free memory
[10:13:12] :)
[10:20:01] I told u
[10:20:04] it's puppet!!
[10:20:20] it eats all ram
[10:32:44] Hydriz: don't pm me
[10:32:47] I don't read pm's
[10:32:52] oh lol
[10:32:54] only on petan
[10:32:58] not on this
[10:33:15] this is running in terminal I don't see it
[10:33:26] as expected :P
[10:33:40] no pings work either here
[10:33:45] so, deployment-sql is broken...
[10:33:48] I know
[10:34:00] I am moving all files to new storage
[10:34:06] 70gb of data
[10:34:11] it was running whole night
[10:34:15] /data/project?
[10:34:17] yes
[10:34:19] it's faster
[10:34:27] and it's bigger
[10:34:36] No wonder why it's growing so fast
[10:34:40] heh
[10:34:52] full clone of simple wiki is 98% of data
[10:35:41] But I wanted to ask you for steward access on the beta cluster so that I can lock accounts
[10:36:00] ok
[10:36:27] but seeing the cluster down, I scratched my head on what's happening
[10:36:40] but seeing that you got hold of it, then good :)
[10:37:27] I just got a moment of acces
[10:37:29] *access
[10:38:08] looks back?
[10:49:06] petan: Puppet shouldn't use that much ram... wtf is it doing.
[11:07:18] Boom, deployment-sql is down again
[11:10:12] is it possible to use labs for testing bots?
for example check the behavior of an anti-vandalism bot?
[11:10:52] hmm
[11:12:25] Just treat Labs as a place of virtual machines
[11:13:46] well, there's a good feature for a second phase
[11:26:54] Alchimista: As long as you comply with the bot rules :D
[11:28:40] Damianz: if possible, that would be quite useful for me, in some bots, I need to test it hard, so I must create some pages, make a lot of edits, and then delete them, if there were a project where I could make those edits, it would be useful to make the tests there
[11:29:40] You could have an instance with a mw install to do that, if needed you could also import edits out of the production db dumps. It doesn't sound like something you'd want to point at the production wikis.
[11:30:00] We have a beta mediawiki setup but that's more for testing extensions rather than bot spam.
[11:39:45] a mw installation would be quite useful for testing bots, for example, I have an anti-vandalism bot to develop, but some features are difficult to perform live tests with the original code, I must make changes to test it
[11:41:06] Even if your bot was hard coded to the production wiki you could use your instance's hosts file to spoof it -- that + db import from production is about as near to production as you'll get.
[11:42:48] yes, what I'm saying is that in an official project, some features are hard to test, if there were a production wiki to test it, it would be great
[11:49:20] We could set one up -- it would be interesting for bots as testing is a bit hmm atm. Wonder if anyone else would be interested in something like deployment prep but designed to possibly get trashed and just have the db restored every few days.
[11:50:55] Hydriz: is labs back?
[11:50:58] if it's broken let me know
[11:51:15] Seems to work for me but I could just be hitting the cache
[11:51:30] there is no cache heh
[11:51:34] squid is fucked
[11:51:41] It's kinda fucked
[11:51:45] it's rather proxy
[11:51:50] Really need re-arching with lvs :)
[11:51:56] heh
[11:52:03] but we have a lot of space for sql
[11:52:08] I hope it's faster
[11:52:11] 330gb?
[11:52:18] It's less hops though to the actual disk.
[11:52:18] ls on /data/project/db took 2 seconds
[11:52:20] :D
[11:52:38] dunno if it's better than before
[11:53:08] Damianz: we didn't have a problem with space
[11:53:36] but now it's even less of a problem
[11:53:47] I guess we can just extend it anytime now
[11:54:08] it's just a quota now
[11:54:13] not really a separate vd
[12:10:27] PROBLEM Free ram is now: WARNING on mobile-enwp mobile-enwp output: Warning: 15% free memory
[12:11:57] petan|wk: Looks like it's back, but just slow
[12:17:23] !ping
[12:17:23] pong
[12:18:10] zzz how do you actually run wm-bot
[12:18:31] ?
[12:18:38] I read the docs
[12:18:41] svn co-ed
[12:18:42] @help
[12:18:42] Type @commands for list of commands. This bot is running http://meta.wikimedia.org/wiki/WM-Bot version wikimedia bot v. 1.1.4 source code licensed under GPL and located in wikimedia svn
[12:18:53] yes
[12:19:00] but the docs say something about make
[12:19:05] yes
[12:19:07] which is wtf how do you do that
[12:19:16] I click make
[12:19:19] in visual studio
[12:19:23] zzz
[12:19:23] build
[12:19:28] in terminal?
[12:19:33] how would you do that
[12:19:33] no idea
[12:19:39] download mono develop
[12:19:40] zzz
[12:19:47] you don't have a gui
[12:19:47] yes
[12:19:51] ok
[12:19:55] then download it
[12:19:58] then compile it
[12:20:00] run it
[12:20:02] easy
[12:20:18] compiling is the issue
[12:20:21] why
[12:20:31] what is the command?
[12:20:57] click make
[12:20:59] in gui
[12:21:15] but the thing is how to do it purely in the terminal
[12:21:24] create a makefile for that
[12:21:27] getting the GUI means having to download the entire software
[12:21:29] then type make
[12:21:35] yes
[12:21:43] apt-get install monodevelop
[12:21:48] it's hard
[12:21:49] software = Visual studio
[12:21:50] :P
[12:21:52] no
[12:21:55] mono develop
[12:22:07] I have linux :P
[12:22:10] so, any makefile template?
[12:22:18] there is a tool to create a makefile in mono
[12:22:22] but I didn't make it
[12:22:33] if u want to compile it in terminal make one
[12:22:56] ah I see monodevelop in my own comp
[12:23:11] it comes with Ubuntu
[12:23:15] :P
[12:23:47] but anyway, can I has stewardship to lock accounts when I come across some on the beta cluster?
[12:24:14] * Damianz locks Hydriz
[12:24:29] You guys see a damned spambot?
[12:24:39] It hops over a huge range of ips
[12:24:46] and spams the same kind of content
[12:24:59] ASCII porn? :D
[12:25:32] think so
[12:25:33] done
[12:25:41] I don't check what kind of vandalism is being made
[12:25:52] thanks :)
[12:26:30] and oh yes
[12:26:40] the interwiki tables of the entire cluster are screwed
[12:27:38] :D I want to be an administrator, so I can block and unblock, delete, restore, etc. I will not vandalize
[12:27:48] https://www.mediawiki.org/wiki/Project:Requests haha
[12:28:15] should we approve it? :P
[12:28:33] but already getting rights on mediawikiwiki is hard
[12:28:35] he promises he won't vandalize when we give him +sysop
[12:28:50] that sounds quite reasonable
[12:29:24] lol
[12:30:45] hmm
[12:31:01] oh, nevermind
[12:40:27] PROBLEM Free ram is now: CRITICAL on mobile-enwp mobile-enwp output: Critical: 5% free memory
[12:42:40] just realised my userid on beta is 69
[12:42:44] wtf
[12:50:27] PROBLEM Free ram is now: WARNING on mobile-enwp mobile-enwp output: Warning: 16% free memory
[13:05:35] !testdb is Test
[13:05:35] Key was added!
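[Editor's note: the terminal-only build being discussed above comes down to a small makefile around mono's C# compiler. This is a sketch, not wm-bot's actual build: the project and source names are placeholders, and it assumes `mcs` from Ubuntu's mono packages is installed -- the "Could not find a C# compiler" error later in this log is what a missing compiler package looks like.]

```make
# Minimal Makefile sketch for compiling a C# bot under mono from the
# terminal (no MonoDevelop GUI). Names are placeholders, not wm-bot's
# real layout. Recipe lines must be indented with tabs.
SOURCES := $(wildcard src/*.cs)

wmbot.exe: $(SOURCES)
	mcs -out:$@ $(SOURCES)

run: wmbot.exe
	mono wmbot.exe

clean:
	rm -f wmbot.exe
```

With this in place, "click make in gui" becomes `make` in the terminal and `make run` to start the bot.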
[13:05:41] !testdb del
[13:05:41] Successfully removed testdb
[13:57:38] !log hugglewa iworld: old HuggleWA files /var/www deleted
[13:57:40] Logged the message, Master
[13:59:24] 03/12/2012 - 13:59:24 - Updating keys for hydriz
[13:59:54] ah labs-logs-bottie reminds me
[14:00:09] 03/12/2012 - 14:00:08 - Updating keys for hydriz
[14:00:12] 03/12/2012 - 14:00:11 - Updating keys for hydriz
[14:00:14] petan|wk: What is the script that you are currently using to enable such ircechos?
[14:00:18] *ircechos
[14:00:19] 03/12/2012 - 14:00:18 - Updating keys for hydriz
[14:00:19] log
[14:00:25] it's in my home
[14:00:36] * Damianz points the fbi at petan|wk's home
[14:00:42] if u want to install it anywhere tell me
[14:00:43] any chance of putting it on labsconsole?
[14:00:49] it's on pastebin
[14:00:53] wait
[14:01:00] it's a bin...
[14:01:35] no
[14:01:46] http://pastebin.mozilla.org/1515592
[14:01:53] copy paste it to /bin/log
[14:02:00] chmod a+x /bin/log
[14:02:12] then you can do log blah blah
[14:02:20] on every instance?
[14:02:28] don't forget to insert project="name of project"
[14:02:30] to the beginning
[14:02:47] sigh, okay will be doing it now
[14:03:23] I would like to puppetize it but Damianz is against that
[14:03:34] he sabotages it by not telling me how to do that
[14:03:43] Only against throwing it on stuff without telling people :P
[14:03:46] :D
[14:03:46] wait
[14:03:54] oh oh
[14:03:54] Might look at puppetizing it when I'm not busy :P
[14:03:55] nevermind
[14:04:02] * petan|wk tells people we will do it
[14:04:24] now he sabotages by pretending to work on something
[14:04:26] nc bots-labs ....
[14:04:30] what's that?
[14:04:41] that's a vm
[14:04:46] ignore it, just paste it
[14:04:51] oh good
[14:05:10] that's where bottie listens to your log
[14:05:36] * Hydriz failed in shell programming
[14:05:44] how do you define $project again?
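[Editor's note: the pastebin copy of the `/bin/log` helper is gone, but its shape is clear from the conversation: set `project=` at the top, prefix the message with `!log`, and pipe it over netcat to the bots-labs VM where labs-logs-bottie relays it to the channel. The sketch below is a speculative reconstruction -- the message format is an assumption, and the relay port was elided in the log, so sending is only attempted when one is configured.]

```shell
#!/bin/bash
# Speculative reconstruction of the /bin/log helper discussed above.
# The real script lived on pastebin (link now dead); the exact message
# format and the relay port are assumptions, not the originals.
project="incubator"                # edit per instance, as petan says
LOG_HOST="${LOG_HOST:-bots-labs}"  # the VM where bottie listens
LOG_PORT="${LOG_PORT:-}"           # real port was elided in the log

compose() {
    printf '!log %s %s: %s\n' "$project" "$(whoami)" "$*"
}

# Only send when a port is configured; otherwise just print the line.
if [ -n "$LOG_PORT" ]; then
    compose "$@" | nc "$LOG_HOST" "$LOG_PORT"
else
    compose "$@"
fi
```

Installed as `/bin/log` and made executable with `chmod a+x /bin/log`, this gives the `log blah blah` usage described above.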
[14:05:49] project=blah
[14:06:03] should be in the script
[14:06:05] !log incubator root: test
[14:06:06] Logged the message, Master
[14:06:13] \o/
[14:06:28] yeah, I am root haha
[14:06:44] you should use sudo
[14:06:48] instead of root
[14:07:07] being logged in as root is not really safe :o
[14:07:16] * Hydriz is just too lazy to type sudo for everything :P
[14:07:22] but yeah
[14:07:31] I know that :P
[14:07:36] if u need to use sudo for everything your instance is poorly configured
[14:07:47] I barely use sudo on my own desktop
[14:08:28] that's just for /var/www
[14:08:41] chown hydriz /var/www ?
[14:08:51] try it
[14:09:12] yeah
[14:10:28] but anyway since you are available
[14:10:40] mh
[14:10:42] hm
[14:10:43] can you check why I can't add incubator.wikimedia into the wm-bot settings
[14:10:50] yes
[14:10:51] recentchanges thingy
[14:11:01] @RC+ blah .
[14:11:01] Unable to insert the string to the list because there is no such wiki site known by a bot, contact some developer with svn access in order to insert it
[14:11:04] ^
[14:11:21] it requires the bot to reboot
[14:11:41] :(
[14:12:20] and until now I can't find the make button in mono develop
[14:12:27] build menu
[14:13:29] Could not find a C# compiler
[14:13:41] s/find/obtain/
[14:13:50] maybe someone in here can help, Mediawiki 1.18.1 moved from a DB/APC based cache to memcached and my template for recent changes on my main page is no longer updating for non logged in/no session users (it's 2 days old now)
[14:15:16] JRWR: is memcache working
[14:15:24] you need to configure it in LS.php
[14:15:41] if there is no cache, mediawiki doesn't work properly if u have cache enabled
[14:15:54] it seems that you enabled cache but it doesn't work
[14:15:58] !log dumps hydriz: enabled logging on all instances by typing log in terminal. This excludes dumps-nfs1.
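[Editor's note: "configure it in LS.php" boils down to a few variables in LocalSettings.php. A sketch for MediaWiki of this era (1.18.x); the memcached address is a placeholder, and whether to keep the file cache depends on how stale anonymous page views are allowed to get.]

```php
// LocalSettings.php cache settings (sketch; placeholder address).
$wgMainCacheType    = CACHE_MEMCACHED;
$wgMemCachedServers = array( '127.0.0.1:11211' );

// The file cache serves stale pages to anonymous/no-session users until
// the cached file is invalidated -- the symptom JRWR describes. Disable
// it, or purge affected pages with ?action=purge after template edits.
$wgUseFileCache = false;

// Stops MediaWiki injecting the visitor's IP into cached page output.
$wgShowIPinHeader = false;
```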
[14:15:59] Logged the message, Master
[14:16:32] petan|wk: it's working and my munin scripts did see an increase in the ram usage of memcached
[14:16:53] petan|wk: I disabled the cache altogether, and it's still out of date
[14:16:59] ok, that template may need to be purged
[14:17:24] I think the problem is that mediawiki doesn't dynamically recreate templates unless it's needed
[14:17:35] there is ?action=purge for this
[14:17:49] that forces it to recreate it
[14:18:02] otherwise it uses the cached version
[14:18:11] the only way to disable it is to turn off the cache
[14:18:22] including browser cache
[14:18:37] or insert a header which enforces no cache
[14:20:04] petan|wk: I do have the file based cache enabled
[14:20:06] that would do it
[14:20:42] yes
[14:20:44] probably
[14:22:40] I'm going to take a hit to performance
[14:22:50] but it should work out, with memcached and nginx doing anon caching
[14:23:52] crap.. it started adding an IP address to the top... how do I disable that
[14:24:37] ?
[14:25:02] $wgShowIPinHeader
[14:25:10] likely
[14:32:19] no offense, but the huggle logo looks like a middle finger, especially if it's scaled quite small
[14:33:52] what is the huggle project, I haven't been able to find any info
[14:34:05] !project Huggle
[14:34:05] https://labsconsole.wikimedia.org/wiki/Nova_Resource:Huggle
[14:34:06] https://meta.wikimedia.org/wiki/Huggle
[14:34:12] @ JRWR
[14:35:02] that does look like a middle finger
[14:35:22] Hydriz: I have designed the logo..... And the middle finger is for the vandals ;)
[14:35:33] LOL
[14:35:44] no, that's a broom
[14:39:34] petan|wk: You there?
[14:40:03] I am trying to globally block an IP (109.194.140.77) in metawiki, but it keeps insisting that it is a username
[14:41:46] yes
[14:41:57] â109.194.140.77 is a username
[14:41:58] true
[14:42:04] try 109.194.140.77
[14:42:10] without the extra symbol
[14:42:28] heh
[14:42:37] Are you actually getting an extra symbol?
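[Editor's note: the stray symbol being chased here is a classic invisible-character problem -- a zero-width or non-breaking character rides along with the copied IP, so the blocking form sees a "username" rather than an address. Stripping everything that is not a digit or a dot recovers the bare IPv4 address. A small sketch; the embedded bytes below merely simulate such an invisible character (a UTF-8 no-break space).]

```shell
# A pasted "IP" carrying an invisible character is rejected as a
# username by the blocking form. Deleting all non-[0-9.] bytes with
# tr recovers the real address. The \302\240 bytes simulate a UTF-8
# no-break space hitching a ride on the clipboard.
pasted=$(printf '\302\240109.194.140.77')
clean=$(printf '%s' "$pasted" | tr -cd '0-9.')
echo "$clean"   # prints the bare 109.194.140.77
```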
[14:42:40] yes
[14:42:53] oh
[14:42:59] then that's an encoding issue
[14:43:04] probably yes
[14:43:05] which I can't see
[14:43:12] I use english encoding
[14:43:15] I don't see any extra symbols
[14:43:20] I see some weird kind of a
[14:43:26] cool
[14:43:29] * Hydriz checks logs
[14:43:43] Oh
[14:43:45] it's there
[14:43:55] and I don't see it on my XChat
[14:43:57] heh
[14:43:59] weird
[14:44:05] hm
[14:46:54] Now to check... 109.194.140.77
[14:47:32] weirdness, looks like I copied with the extra space
[14:56:49] Zzz Quite a few Russian spambots are attacking the beta cluster
[14:57:10] and without an IRC gateway for the RC feeds, it's hard to track which wikis they attack
[15:26:50] Ok.. another template issue, man this thing hates me, it's no longer loading the edited template for the sidebar, it's loading the default only now
[15:28:07] JRWR: that is because of a broken cache
[15:28:26] check your config
[15:28:30] there is surely a problem
[15:28:50] sidebar code is always loaded from cache
[15:29:00] if there is default code it means it could not read the cache
[15:29:38] at some point this is not the best solution, we should probably load Mediawiki:Sidebar instead of the default code, but it's good for debugging where the problem is
[15:30:15] try to check that memcache is properly configured in ls
[15:30:33] if you don't want the template to be cached, you should disable the cache altogether
[15:31:45] hrm, looked like old data in memcached
[15:31:50] after restarting it, it was fixed
[15:35:27] !log incubator hydriz: Updating all files via scap to r113625
[15:35:28] Logged the message, Master
[17:38:27] PROBLEM Free ram is now: WARNING on mobile-enwp mobile-enwp output: Warning: 19% free memory
[17:58:27] RECOVERY Free ram is now: OK on mobile-enwp mobile-enwp output: OK: 26% free memory
[17:59:56] Ryan_Lane, can has public IP and domain name?
[18:00:10] what project, and what do you need it for?
[18:00:34] orgcharts, and so people can access the web app externally
[18:04:05] 03/12/2012 - 18:04:05 - Creating a home directory for maxsem at /export/home/mobile-sms/maxsem
[18:05:05] 03/12/2012 - 18:05:04 - Updating keys for maxsem
[18:24:29] !project orgcharts
[18:24:29] https://labsconsole.wikimedia.org/wiki/Nova_Resource:orgcharts
[18:25:11] That's handy :)
[18:25:20] marktraceur: I upped your quota. you can allocate an IP, associate it with your instance, and add a hostname to the IP address
[18:25:27] PROBLEM Free ram is now: WARNING on deployment-web2 deployment-web2 output: Warning: 19% free memory
[18:25:38] Mmkay, and I do that on labsconsole?
[18:25:55] Oh, "manage addresses"
[18:25:59] I need to learn to read
[18:30:27] RECOVERY Free ram is now: OK on deployment-web2 deployment-web2 output: OK: 20% free memory
[18:31:49] that special page takes a long time to load
[18:32:00] so you know it's normal behavior ;)
[18:32:01] Yes, it sure did
[18:32:13] It's cool, I'm not rushing anywhere
[18:32:24] I have a code fix for it, but it's part of a larger change that's like 80% done :D
[18:32:28] Aw, I have to load it again
[18:32:59] yeah, i have a war on clicks in that same change :)
[18:38:56] Ryan_Lane: I was working on the nginx test, and I wanted to do some tests, what should I use as the backend? I was thinking of setting up a fake install of mediawiki with a dump of content from somewhere
[18:39:16] that's like a good idea
[18:39:19] in my limited tests, nginx is harder to clear the cache on
[18:39:27] PROBLEM Free ram is now: WARNING on mobile-enwp mobile-enwp output: Warning: 18% free memory
[18:39:29] but faster than varnish under high loads
[18:39:45] we clear cache using multicast UDP
[18:39:55] you could do an outside script
[18:40:12] since the cache is disk based, and it's a simple hash with a definable key
[18:40:20] we have a daemon that runs on the varnish boxes that listens for the packet, then clears the cache locally
[18:40:30] outside script?
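[Editor's note: the "outside script" for purging nginx's disk cache follows directly from what's described above. With `proxy_cache_path ... levels=1:2`, the cached file's name is the md5 of the configured cache key, and the directory levels are taken from the tail of that hash, so purging an entry is just deleting the computed file. A sketch; the cache directory and key format are assumptions that must match the actual nginx configuration.]

```shell
# Compute where nginx stores a cached entry on disk, assuming
# proxy_cache_path /var/cache/nginx levels=1:2 and proxy_cache_key "$uri".
# Deleting that file (rm -f "$path") purges the entry.
cache_dir="/var/cache/nginx"
key="/wiki/Main_Page"                      # must equal the proxy_cache_key
h=$(printf '%s' "$key" | md5sum | awk '{print $1}')
l1=$(printf '%s' "$h" | cut -c32)          # level 1: last hex digit
l2=$(printf '%s' "$h" | cut -c30-31)       # level 2: the two before it
path="$cache_dir/$l1/$l2/$h"
echo "$path"
```

A purge daemon like the one described for the varnish boxes would listen for the multicast UDP packet, run this computation for the announced key, and unlink the file.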
[18:40:39] same thing you just said really
[18:41:12] * Ryan_Lane nods
[18:41:26] uses md5 for the cache key (I think)
[18:41:49] in theory you could write to the file itself
[18:42:56] from my barebones basic tests, nginx handles 500k requests a second before slowing, and varnish handled about 300k
[18:43:17] and I think that was a polling issue, both were not using any real CPU
[18:43:52] I could get nginx to respond in under 1ms, but that was the same for varnish as well
[18:44:15] I need to build better testing tools, ab/siege just don't do it
[18:44:33] yeah, they are single-threaded
[18:44:39] You'll find the fun point where ab is too slow :D
[18:44:41] you should also test with a non-static backend
[18:45:01] it makes a big difference
[19:46:19] New review: Rich Smith; "(no comment)" [operations/puppet] (test) C: 1; - https://gerrit.wikimedia.org/r/2157
[19:57:42] Anyone 'round here have a clue as to why http://orgcharts.wmflabs.org:8888/ isn't loading? Testing locally returned the index page. Does DNS take an uber-long time to propagate?
[20:10:07] marktraceur, I get an ip back right away but verrry slow response pinging it
[20:10:21] Hrm.
[20:10:40] That's consistent with my experiments, but not very helpful :)
[20:10:45] heh
[20:11:07] So I guess it's not DNS, could it be the IP assignment? I did both in quick succession
[20:29:27] PROBLEM Free ram is now: WARNING on deployment-web2 deployment-web2 output: Warning: 19% free memory
[20:34:27] RECOVERY Free ram is now: OK on deployment-web2 deployment-web2 output: OK: 21% free memory
[20:42:52] is there a method to just include the number of pages in a Category?
[20:51:34] JRWR: hm. you can export from wikipedia from just a category, I think
[20:52:21] I found it already, {{PAGESINCATEGORY:categoryname}}
[21:24:36] petan: petan|wk: around?
[22:00:28] is there a way to reset the css cache for the web instances?
[22:00:59] after fixing TMH I still get the old css that points to the wrong resources with /w/extension instead of /w/extension-trunk
[22:01:17] this also only happens for chrome while firefox picks up the more recent version already
[22:12:51] ok. touch on the outdated files helped
[22:28:31] PROBLEM Free ram is now: WARNING on mobile-enwp mobile-enwp output: Warning: 17% free memory
[22:46:33] Change on mediawiki a page Wikimedia Labs/Reverse proxy for web services was modified, changed by Ryan lane link https://www.mediawiki.org/w/index.php?diff=509774 edit summary:
[22:46:58] andrewbogott: https://www.mediawiki.org/wiki/Wikimedia_Labs/Reverse_proxy_for_web_services <— in case you get stuck again and need something else to work on :)
[22:47:43] cool -- I'll add it to the list!
[22:48:14] That shouldn't be really hard tbh, just painful integrating it so it's user friendly.
[22:48:16] Change on mediawiki a page Wikimedia Labs/Reverse proxy for web services was modified, changed by Ryan lane link https://www.mediawiki.org/w/index.php?diff=509775 edit summary:
[22:48:31] RECOVERY Free ram is now: OK on mobile-enwp mobile-enwp output: OK: 22% free memory
[22:48:35] Damianz: well, if it's made as an openstack service, it shouldn't be bad
[22:49:12] Is the openstack api extendable? I can't say I looked past the ec2 one that's just.. horrid.
[22:49:25] create proxy, associate proxy, associate proxy, delete proxy :)
[22:49:29] yes
[22:49:40] shinyness.
[22:49:59] if you associate more than one instance to a proxy, it just load balances them :)
[22:50:03] which would be awesome
[22:50:20] Ryan_Lane, If you have a sec, help diagnose why my web app isn't showing up? http://orgcharts.wmflabs.org:8888
[22:50:34] Security rules?
[22:50:36] marktraceur: why use port 8888?
[22:50:41] very likely security group rules
[22:50:50] Ah, will investigate
[22:50:54] did you read the docs on security groups and instances?
[22:50:58] !instances
[22:50:58] https://labsconsole.wikimedia.org/wiki/Help:Instances
[22:50:59] !security
[22:50:59] https://labsconsole.wikimedia.org/wiki/Help:Security_Groups
[22:51:17] Mmmkay, will do
[22:51:35] It's 8888 just because that's the port that was in the example :)
[22:51:39] yeah, I think implementation of this reverse proxy service should be pretty easy
[22:51:45] what software is this that you are using?
[22:52:12] Everyone gets bored at some point and decides to write an integrated webserver into their app :P
[22:52:17] heh
[22:52:59] Change on mediawiki a page Wikimedia Labs/Reverse proxy for web services was modified, changed by Ryan lane link https://www.mediawiki.org/w/index.php?diff=509776 edit summary: Adding some obvious missing and required fields :)
[22:53:22] would obviously be hard to have a proxy without the from and to ports :)
[22:53:39] lol
[22:53:45] I think we should skip ssl and caching in the first implementation
[22:53:58] otherwise we need to figure out how to deal with multiple IPs, uploading certs, etc
[22:54:08] also how to accept cache configuration
[22:54:10] that's too hard
[22:55:01] Something that does domain abc.com -> 10.x.x.x port 80, load balance mode round robin, would do for most stuff anyway.
[22:55:10] I guess the caching could rely on server sent headers
[22:55:22] but then how do you purge?
[22:55:26] Woot, it works, thanks again Ryan_Lane
[22:55:31] yw
[22:56:24] hm. I think a transparent proxy is likely the best option
[22:56:31] PROBLEM Free ram is now: WARNING on mobile-enwp mobile-enwp output: Warning: 19% free memory
[22:56:32] if someone wants to cache, they can do it behind the proxy
[22:57:08] Even load balancing could be overkill for v1.
[22:57:42] no, we'd want that
[22:57:50] it's also fairly easy to configure
[22:58:05] take for instance deployment-prep right now
[22:58:07] Yeah you just specify a couple of servers instead of 1.
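[Editor's note: the minimal v1 service being sketched above -- one domain pointed at a couple of backend instances, transparent, round robin -- maps onto a few lines of haproxy configuration (haproxy comes up again at the end of this log as the likely backend). This is a sketch with placeholder names and addresses, not a config from the actual project.]

```
# haproxy.cfg sketch: transparent reverse proxy with round-robin load
# balancing across two backend instances. Names and IPs are placeholders.
frontend web_in
    bind *:80
    default_backend web_pool

backend web_pool
    balance roundrobin
    server web1 10.4.0.10:80 check
    server web2 10.4.0.11:80 check
```

Because haproxy proxies at the TCP level, the same shape also covers the pass-ssl-through case mentioned later: binding *:443 in `mode tcp` forwards TLS to the backends without terminating it at the proxy.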
[22:58:33] they could benefit from being able to load balance between the squids
[22:58:46] True
[22:59:02] Would be cleaner than hacking lvs in or a proxy as well.
[22:59:43] yep
[22:59:53] though we'd still want LVS for the nginx servers ;)
[23:00:30] nah, just stick one box there and limit everyone to 512kbps :D
[23:01:22] heh
[23:06:54] Change on mediawiki a page Wikimedia Labs/Reverse proxy for web services was modified, changed by Ryan lane link https://www.mediawiki.org/w/index.php?diff=509782 edit summary:
[23:10:16] Change on mediawiki a page Wikimedia Labs/Reverse proxy for web services was modified, changed by Ryan lane link https://www.mediawiki.org/w/index.php?diff=509784 edit summary:
[23:18:02] Change on mediawiki a page Wikimedia Labs/Reverse proxy for web services was modified, changed by Ryan lane link https://www.mediawiki.org/w/index.php?diff=509785 edit summary:
[23:44:18] PROBLEM Current Load is now: CRITICAL on bastion-prod1 bastion-prod1 output: Connection refused by host
[23:44:58] PROBLEM Current Users is now: CRITICAL on bastion-prod1 bastion-prod1 output: Connection refused by host
[23:45:48] PROBLEM Disk Space is now: CRITICAL on bastion-prod1 bastion-prod1 output: Connection refused by host
[23:46:28] PROBLEM Free ram is now: CRITICAL on bastion-prod1 bastion-prod1 output: Connection refused by host
[23:47:48] PROBLEM Total Processes is now: CRITICAL on bastion-prod1 bastion-prod1 output: Connection refused by host
[23:48:38] PROBLEM dpkg-check is now: CRITICAL on bastion-prod1 bastion-prod1 output: Connection refused by host
[23:51:10] andrewbogott: so, there's already a spec for a load balancing service: http://wiki.openstack.org/Atlas-LB
[23:51:43] unfortunately, atlas is java, it has no code released, and its only backend is zeus, which is a proprietary load balancer
[23:52:56] Is that a proposal, or description of an existing service?
[23:53:03] description of a service
[23:53:11] of course, they haven't released the damn code
[23:53:26] and they are apparently working on an haproxy backend for it
[23:53:40] ah, I see.
[23:53:55] I *hate* when organizations do this
[23:54:21] I'll need to give the rackspace people some crap about this at the summit :)
[23:54:58] it may be better for us to just wait for this
[23:58:04] I think I'm a few beats behind. Load balancing was a feature you wanted for the proxy service you were talking about earlier, right?
[23:58:11] Or is there a more general load balancing need?
[23:58:21] Change on mediawiki a page Wikimedia Labs/Reverse proxy for web services was modified, changed by Ryan lane link https://www.mediawiki.org/w/index.php?diff=509895 edit summary:
[23:58:41] well, this load balancing service would act like a reverse proxy, in a way
[23:58:51] since all we needed was a transparent reverse proxy
[23:59:23] a tcp proxy, like haproxy, is actually more relevant, since it would allow us to pass ssl through, rather than needing to terminate
[23:59:43] haproxy is open source?
[23:59:50] yep