[02:38:39] RECOVERY Free ram is now: OK on deployment-web6 i-000001d9 output: OK: 21% free memory
[02:39:03] PROBLEM Current Load is now: WARNING on bots-sql3 i-000000b4 output: WARNING - load average: 5.88, 5.70, 5.27
[02:40:33] RECOVERY Free ram is now: OK on deployment-web2 i-00000125 output: OK: 23% free memory
[02:46:33] PROBLEM Free ram is now: WARNING on deployment-web6 i-000001d9 output: Warning: 18% free memory
[02:48:33] PROBLEM Free ram is now: WARNING on deployment-web2 i-00000125 output: Warning: 19% free memory
[02:55:43] RECOVERY HTTP is now: OK on deployment-web5 i-00000213 output: HTTP OK: HTTP/1.1 200 OK - 453 bytes in 0.015 second response time
[03:03:34] RECOVERY Free ram is now: OK on deployment-web2 i-00000125 output: OK: 20% free memory
[03:25:25] RECOVERY Free ram is now: OK on deployment-web5 i-00000213 output: OK: 95% free memory
[03:25:35] RECOVERY dpkg-check is now: OK on deployment-web5 i-00000213 output: All packages OK
[03:26:15] RECOVERY Current Users is now: OK on deployment-web5 i-00000213 output: USERS OK - 0 users currently logged in
[03:26:15] RECOVERY Total Processes is now: OK on deployment-web5 i-00000213 output: PROCS OK: 127 processes
[03:26:20] RECOVERY Current Load is now: OK on deployment-web5 i-00000213 output: OK - load average: 0.00, 0.02, 0.00
[03:26:20] RECOVERY Disk Space is now: OK on deployment-web5 i-00000213 output: DISK OK
[03:49:05] RECOVERY Current Load is now: OK on bots-sql3 i-000000b4 output: OK - load average: 3.51, 4.40, 4.90
[03:50:35] PROBLEM Free ram is now: WARNING on utils-abogott i-00000131 output: Warning: 17% free memory
[03:51:35] PROBLEM Free ram is now: WARNING on test-oneiric i-00000187 output: Warning: 17% free memory
[03:59:25] PROBLEM Free ram is now: WARNING on orgcharts-dev i-0000018f output: Warning: 14% free memory
[04:04:09] PROBLEM Free ram is now: WARNING on nova-daas-1 i-000000e7 output: Warning: 15% free memory
[04:10:39] PROBLEM Free ram is now: CRITICAL on utils-abogott i-00000131 output: Critical: 4% free memory
[04:11:39] PROBLEM Free ram is now: CRITICAL on test-oneiric i-00000187 output: Critical: 4% free memory
[04:15:57] RECOVERY Free ram is now: OK on utils-abogott i-00000131 output: OK: 96% free memory
[04:16:37] RECOVERY Free ram is now: OK on test-oneiric i-00000187 output: OK: 97% free memory
[04:19:27] PROBLEM Free ram is now: CRITICAL on orgcharts-dev i-0000018f output: Critical: 3% free memory
[04:24:27] RECOVERY Free ram is now: OK on orgcharts-dev i-0000018f output: OK: 95% free memory
[04:29:07] PROBLEM Free ram is now: CRITICAL on nova-daas-1 i-000000e7 output: Critical: 4% free memory
[04:34:07] RECOVERY Free ram is now: OK on nova-daas-1 i-000000e7 output: OK: 93% free memory
[05:02:07] PROBLEM Puppet freshness is now: CRITICAL on nova-production1 i-0000007b output: Puppet has not run in last 20 hours
[05:06:07] PROBLEM Puppet freshness is now: CRITICAL on nova-gsoc1 i-000001de output: Puppet has not run in last 20 hours
[05:46:07] PROBLEM host: deployment-web is DOWN address: i-000000cf check_ping: Invalid hostname/address - i-000000cf
[05:46:27] PROBLEM host: deployment-web3 is DOWN address: i-00000162 check_ping: Invalid hostname/address - i-00000162
[05:47:27] PROBLEM host: deployment-web2 is DOWN address: i-00000125 check_ping: Invalid hostname/address - i-00000125
[05:47:40] Ryan_Lane: hey
[05:47:50] howdy
[05:47:50] I can't create a new instance in deployment :O
[05:47:53] no?
[05:47:56] what happens?
[05:47:58] quota?
[05:48:04] Failed
[05:48:10] no error message for that
[05:48:10] must be quota
[05:48:17] but I deleted 3 instances
[05:48:22] now I try to replace them with larger ones
[05:48:30] heh
[05:48:31] InstanceLimitExceeded: Instance quota exceeded. You cannot run any more instances of this type.
[05:48:31] I created 2
[05:48:44] gimme a sec
[05:48:46] ok
[05:49:05] 20 instance limit
[05:49:17] oh
[05:49:22] must be the storage size
[05:50:17] for some reason it has 80gb of storage
[05:50:25] you said to create instances with big ram
[05:50:31] yeah
[05:50:34] RECOVERY host: deployment-web is UP address: i-00000217 PING OK - Packet loss = 0%, RTA = 166.10 ms
[05:50:35] I don't know why storage is big as well
[05:50:57] because you can't specify them separately, which is annoying
[05:51:03] you're using larges?
[05:51:11] the one w 8 gb
[05:51:13] yes
[05:51:14] should be fine. I wonder which quota you are hitting
[05:51:29] must be RM
[05:51:30] *RAM
[05:51:54] try now
[05:52:04] PROBLEM dpkg-check is now: CRITICAL on deployment-web i-00000217 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[05:52:21] cpus may be an issue too
[05:52:22] sec
[05:52:34] PROBLEM Free ram is now: CRITICAL on deployment-web i-00000217 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[05:52:34] PROBLEM Total Processes is now: CRITICAL on deployment-web i-00000217 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[05:52:52] should definitely work now
[05:53:34] PROBLEM Current Load is now: CRITICAL on deployment-web i-00000217 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[05:55:16] kk
[05:55:35] working?
[05:55:38] ye
[05:56:11] I switched 2 servers so far
[05:56:18] it loads as fast as on prod :)
[05:56:20] the site
[05:56:23] ah. good
[05:56:27] much better
[05:56:32] must be cpu's
[05:56:36] or ram
[05:56:37] it's the memory
[05:56:40] hm...
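The quota guessing above (instances vs. storage vs. RAM vs. CPUs) could be checked directly on the controller rather than guessed at. A minimal sketch, assuming an Essex-era nova-network setup where `nova-manage` and the EC2 tools are available; the project name and the exact subcommand output format are assumptions to verify against the deployed release:

```shell
# List the per-project limits (instances, cores, ram, gigabytes, ...);
# "deployment-prep" is an assumed project name for the deployment project.
nova-manage project quota deployment-prep

# Count what is actually running to see which limit is being hit.
euca-describe-instances | grep -c INSTANCE
```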
[05:56:49] the cpus don't hurt, but the other instances were going into ram death
[05:57:08] !projects | Shujenchang
[05:57:08] Shujenchang: https://labsconsole.wikimedia.org/wiki/Special:Ask/-5B-5BResource-20Type::project-5D-5D/-3F/-3FMember/-3FDescription/mainlabel%3D-2D
[05:57:24] also, having more memory means more cache
[05:58:24] PROBLEM Disk Space is now: CRITICAL on deployment-web i-00000217 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[05:58:24] PROBLEM Current Users is now: CRITICAL on deployment-web i-00000217 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[06:01:52] Thanks for your help, I'll work hard on the development and cooperate with other developers
[06:02:01] sounds good. have fun
[06:03:44] PROBLEM dpkg-check is now: CRITICAL on deployment-web2 i-00000218 output: Connection refused by host
[06:04:24] PROBLEM Current Load is now: CRITICAL on deployment-web2 i-00000218 output: Connection refused by host
[06:04:24] PROBLEM Current Load is now: CRITICAL on deployment-web3 i-00000219 output: Connection refused by host
[06:05:04] PROBLEM Current Users is now: CRITICAL on deployment-web2 i-00000218 output: Connection refused by host
[06:05:04] PROBLEM Current Users is now: CRITICAL on deployment-web3 i-00000219 output: Connection refused by host
[06:05:45] PROBLEM Disk Space is now: CRITICAL on deployment-web2 i-00000218 output: Connection refused by host
[06:05:45] PROBLEM Disk Space is now: CRITICAL on deployment-web3 i-00000219 output: Connection refused by host
[06:06:19] PROBLEM Free ram is now: CRITICAL on deployment-web2 i-00000218 output: Connection refused by host
[06:06:19] PROBLEM Free ram is now: CRITICAL on deployment-web3 i-00000219 output: Connection refused by host
[06:07:34] PROBLEM Total Processes is now: CRITICAL on deployment-web3 i-00000219 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[06:08:14] PROBLEM Total Processes is now: CRITICAL on deployment-web2 i-00000218 output: Connection refused by host
[06:08:19] PROBLEM dpkg-check is now: CRITICAL on deployment-web3 i-00000219 output: CHECK_NRPE: Error - Could not complete SSL handshake.
[06:13:44] PROBLEM HTTP is now: CRITICAL on deployment-web3 i-00000219 output: Connection refused
[06:18:14] RECOVERY Total Processes is now: OK on deployment-web2 i-00000218 output: PROCS OK: 172 processes
[06:18:24] RECOVERY Disk Space is now: OK on deployment-web i-00000217 output: DISK OK
[06:18:24] RECOVERY Current Users is now: OK on deployment-web i-00000217 output: USERS OK - 0 users currently logged in
[06:18:34] RECOVERY Current Load is now: OK on deployment-web i-00000217 output: OK - load average: 0.07, 0.08, 0.03
[06:18:44] RECOVERY dpkg-check is now: OK on deployment-web2 i-00000218 output: All packages OK
[06:19:24] RECOVERY Current Load is now: OK on deployment-web2 i-00000218 output: OK - load average: 0.04, 0.07, 0.14
[06:20:04] RECOVERY Current Users is now: OK on deployment-web2 i-00000218 output: USERS OK - 3 users currently logged in
[06:20:44] RECOVERY Disk Space is now: OK on deployment-web2 i-00000218 output: DISK OK
[06:21:14] RECOVERY Free ram is now: OK on deployment-web2 i-00000218 output: OK: 95% free memory
[06:26:54] PROBLEM HTTP is now: CRITICAL on deployment-web2 i-00000218 output: Connection refused
[06:28:07] Ryan_Lane: automount fail on web4
[06:28:20] can't open /data
[06:29:18] hm
[06:30:33] brb
[06:30:41] I'm betting it isn't shared to it yet
[06:30:44] RECOVERY Disk Space is now: OK on deployment-web3 i-00000219 output: DISK OK
[06:31:05] ok. that's not true
[06:31:14] RECOVERY Free ram is now: OK on deployment-web3 i-00000219 output: OK: 96% free memory
[06:32:34] RECOVERY Total Processes is now: OK on deployment-web3 i-00000219 output: PROCS OK: 140 processes
[06:32:40] yep, it isn't shared yet
[06:32:44] because of DNS
[06:32:54] it has the old cached entry
[06:33:00] that's problematic
[06:33:14] RECOVERY dpkg-check is now: OK on deployment-web3 i-00000219 output: All packages OK
[06:33:44] PROBLEM dpkg-check is now: CRITICAL on deployment-web5 i-00000213 output: DPKG CRITICAL dpkg reports broken packages
[06:33:44] RECOVERY HTTP is now: OK on deployment-web3 i-00000219 output: HTTP OK: HTTP/1.1 200 OK - 453 bytes in 0.005 second response time
[06:34:24] RECOVERY Current Load is now: OK on deployment-web3 i-00000219 output: OK - load average: 0.06, 0.15, 0.25
[06:35:04] RECOVERY Current Users is now: OK on deployment-web3 i-00000219 output: USERS OK - 1 users currently logged in
[06:46:14] PROBLEM dpkg-check is now: CRITICAL on deployment-web3 i-00000219 output: DPKG CRITICAL dpkg reports broken packages
[08:42:07] New patchset: J; "add timedmediahandler manifest" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/5599
[08:42:21] New review: gerrit2; "Lint check passed." [operations/puppet] (test); V: 1 - https://gerrit.wikimedia.org/r/5599
[08:49:13] petan|wk: can you have a look at that puppet patch (https://gerrit.wikimedia.org/r/5599), it should make it easier to have all dependencies and config required for TMH installed on the deployment-web instances
[08:50:20] j^: I just reinstalled most of the web servers, you need to fix this; I guess it's missing there now
[08:50:33] I don't know what software it is, nor where it is
[08:51:25] petan|wk: noticed that, that's why I pushed a puppet patch so in the future adding the timedmediahandler::web class should do it
[08:51:59] or should I do it manually?
[08:52:23] (not much experience with puppet myself)
[08:52:41] New review: Petrb; "I think that apache server should be reloaded after this patch, it should be probably somewhere in t..." [operations/puppet] (test) C: 0; - https://gerrit.wikimedia.org/r/5599
[08:53:01] j^: I would use the puppet but someone needs to merge it
[08:53:20] see /home/petrb/patch1
[08:53:28] this file is started on every new server
[08:53:53] it preconfigures each instance; if you insert "aptitude install blah blah" in that file it will be on every web server
[08:59:34] petan|wk: you can add the lines from /home/j/tmh-deployment-web.sh
[08:59:41] ok
[09:07:22] New patchset: J; "add timedmediahandler manifest" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/5599
[09:07:35] New review: gerrit2; "Lint check passed." [operations/puppet] (test); V: 1 - https://gerrit.wikimedia.org/r/5599
[09:28:14] PROBLEM Puppet freshness is now: CRITICAL on wikidata-dev-2 i-0000020a output: Puppet has not run in last 20 hours
[09:39:00] so anyone around that would be able to look at/merge a puppet patchset?
[09:39:14] 04/23/2012 - 09:39:14 - Creating a home directory for shujenchang at /export/home/bastion/shujenchang
[09:40:13] 04/23/2012 - 09:40:13 - Updating keys for shujenchang
[10:01:44] PROBLEM dpkg-check is now: CRITICAL on deployment-web2 i-00000218 output: DPKG CRITICAL dpkg reports broken packages
[10:03:44] RECOVERY dpkg-check is now: OK on deployment-web5 i-00000213 output: All packages OK
[10:06:02] New review: Dzahn; "see inline comments" [operations/puppet] (test); V: 0 C: -1; - https://gerrit.wikimedia.org/r/5599
[10:14:09] New patchset: J; "add timedmediahandler manifest" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/5599
[10:14:23] New review: gerrit2; "Lint check passed." [operations/puppet] (test); V: 1 - https://gerrit.wikimedia.org/r/5599
[10:17:24] New review: J; "comments addressed in new patchset. space in mime.types is there to make it align with other entries..." [operations/puppet] (test) C: 0; - https://gerrit.wikimedia.org/r/5599
[10:19:35] What should I do after I have an account and have uploaded my SSH public key? Can I have my experiment on the bastion, or must I join or create another project?
[10:28:16] eh, what form of "experiment"?
[10:31:42] development on mediawiki
[10:42:38] you probably need to join a project
[10:42:49] or get one created yourself
[10:47:41] I see, thx~
[12:02:18] Shujenchang: what kind of experiment is it?
[12:02:24] maybe I could add you to some
[12:02:51] is it general development or some certain extension
[12:44:25] petan|wk What projects do you have?
[12:45:34] it would be better if you told me what kind of development you are doing
[12:45:41] are you a core dev?
[12:45:46] or extensions?
[12:46:01] does it need to be viewable by the public? is it going to be deployed to production?
[12:47:19] Well, I'm interested in edit and administration tools, such as ProveIt, Twinkle
[13:05:12] !puppet
[13:05:12] http://docs.puppetlabs.com/learning/
[13:05:19] !puppet del
[13:05:19] Successfully removed puppet
[13:05:43] !puppet is learn: http://docs.puppetlabs.com/learning/ troubleshoot: http://docs.puppetlabs.com/guides/troubleshooting.html
[13:05:43] Key was added!
[13:09:50] Shujenchang: ok, that sounds like development of gadgets
[13:10:05] in fact you don't even need to use labs for this, although you can
[13:10:35] in order to create gadgets you can just use test.wikipedia.org or the deployment site at deployment.wikimedia.beta.wmflabs.org
[13:11:07] you don't need shell access to create gadgets
[13:15:00] oh, and I've just found an extension I'm interested in, the Upload-wizard [[Nova_Resource:Upload-wizard]]; should I contact [[User:Jeremyb]] or contact admins to join it?
[13:17:40] yes, probably
[13:23:44] and who's taking charge of the Mobile project?
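The two routes discussed above for getting TimedMediaHandler's dependencies onto the web instances can be sketched side by side. This is a hedged illustration: the package name is a stand-in for whatever /home/j/tmh-deployment-web.sh actually installs, and the class name comes from the gerrit change under review; everything else is an assumption.

```shell
# Option 1: append a line to the per-instance bootstrap script
# (the /home/petrb/patch1 mechanism) so every new web server gets it.
# "ffmpeg" is illustrative, not the confirmed dependency list.
aptitude -y install ffmpeg

# Option 2: once https://gerrit.wikimedia.org/r/5599 is merged, pull the
# manifests and smoke-test the new class on one instance.
sudo puppet agent --test
sudo puppet apply -e 'include timedmediahandler::web'
```

The bootstrap-script route takes effect only on newly created instances, which is why the already reinstalled web servers needed the manual fix.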
I'm interested in the Mobile project too
[13:24:41] lol
[13:25:13] * Hydriz feels hurt that nobody is interested in the Incubator project, despite mass publicity for it :P
[13:26:14] RECOVERY dpkg-check is now: OK on deployment-web3 i-00000219 output: All packages OK
[13:27:56] What is the Incubator Project about? Wikimedia Incubator?
[13:28:33] yep :)
[13:28:53] but the project itself is currently idle
[13:29:07] it's waiting for "orders" from our "programmer-in-chief"
[13:29:21] and what work is mainly done on it?
[13:30:31] test wikis for future languages of projects
[13:30:48] we are trying to make the test wikis look quite like the real wiki
[13:30:54] which they get in the future
[13:31:11] Lezgian ftw
[13:31:19] by implementing interfaces that look nice :)
[13:31:41] seems interesting~
[13:31:45] but actually we don't need more developers right now haha
[13:31:46] Can I join it?
[13:31:56] we already push out code that is what we wanted
[13:31:58] what a shame...
[13:32:05] haha :)
[13:32:13] and the project is just a testing zone
[13:32:19] our code resides somewhere else though
[13:35:11] When you need more developers, you can email me anytime~ my email: i@blue.cat
[13:35:48] heh
[13:36:07] Shujenchang: try to contact sumana, she is the volunteer coordinator and has a good overview of which projects need help that match your interests
[13:36:39] info is listed on the page you used to apply for the account
[13:37:14] thx~
[13:41:45] Do you mean Sumanah?
[13:47:43] petan|wk mutante is there a list / map of all beta servers?
[13:47:56] !nagios
[13:47:56] http://nagios.wmflabs.org/nagios3
[13:47:58] hashar: ^
[13:48:01] :-)
[13:48:06] everything deployment*
[13:48:35] actually just open the group http://nagios.wmflabs.org/cgi-bin/nlogin/status.cgi?hostgroup=deployment-prep&style=detail
[13:48:49] er
[13:49:00] this will probably need a password :)
[13:49:13] it does :-D
[13:49:25] http://nagios.wmflabs.org/cgi-bin/nagios3/status.cgi?hostgroup=deployment-prep&style=detail
[13:49:27] try this
[13:49:38] there are two ways to open nagios
[13:49:48] anonymous and login
[13:49:56] so that gives me the monitoring, but is there any architecture / design documentation?
[13:49:58] I am using login so my links don't work so well
[13:50:08] you wanted a list of servers :)
[13:50:18] yes there is http://deployment.wikimedia.beta.wmflabs.org/wiki/Help
[13:50:23] it's a bit outdated atm
[13:50:25] \o/
[13:50:37] we have 6 apache servers right now
[13:50:43] that page describes only 1
[13:51:12] is http://deployment.wikimedia.beta.wmflabs.org/ a stable wiki or is that part of the beta cluster and hence could be deleted at any point?
[13:52:17] it is supposed to be stable but in fact that "documentation" isn't really stable, it's just a simple overview I wrote
[13:52:28] it wasn't meant to be official docs
[13:52:55] anyway the deployment wiki is supposed to work most of the time :-)
[13:53:04] ok ok :)
[13:53:06] anyway since it runs on labs, any outage of labs will affect it of course
[13:53:31] feel free to move it somewhere else
[13:54:02] it's quite funny that the apache servers have 8gb ram and memcached has only 2
[13:54:18] you could install memcached on the apache boxes :-P
[13:54:20] Ryan_Lane: can I create 8gb for memcached as well?
[13:54:27] hashar: it's supposed to be shared
[13:54:35] it would need to be one of the apaches
[13:54:47] I think I shouldn't ask Ryan and just do that
[13:54:56] memcached has a hashing mechanism that lets you share the load across several servers
[13:55:16] though whenever a memcached server disappears, you lose its cache :-D
[13:55:19] hm, I don't know how to configure memcached to work in the cloud
[13:55:39] but I think an 8gb server is pretty ok
[13:55:43] let me create one
[14:07:04] PROBLEM host: deployment-cache is DOWN address: i-0000021a check_ping: Invalid hostname/address - i-0000021a
[14:07:35] !log logging something
[14:07:36] logging is not a valid project.
[14:07:40] !log help
[14:07:40] Message missing. Nothing logged.
[14:08:00] log
[14:13:36] !logging
[14:13:36] To log a message, use the following format: !log
[14:13:45] PROBLEM Current Load is now: CRITICAL on deployment-mc i-0000021b output: CHECK_NRPE: Error - Could not complete SSL handshake.
[14:13:54] petan|wk: /usr/local/apache is supposed to be in git@github.com:johnduhart/deploymentprep-conf.git
[14:14:00] which does not seem to exist anymore
[14:14:06] hm...
[14:14:16] can't we host that on WMF Gerrit instead?
[14:14:22] johnduhart left the wikimedia project a few months ago
[14:14:25] PROBLEM Current Users is now: CRITICAL on deployment-mc i-0000021b output: CHECK_NRPE: Error - Could not complete SSL handshake.
[14:14:44] ok
[14:14:45] yes I think we could do that
[14:14:49] I will
[14:14:52] right
[14:15:05] PROBLEM Disk Space is now: CRITICAL on deployment-mc i-0000021b output: CHECK_NRPE: Error - Could not complete SSL handshake.
[14:15:08] just need to figure out if all the stuff in /usr/local/apache really needs to be in the same repo
[14:15:45] PROBLEM Free ram is now: CRITICAL on deployment-mc i-0000021b output: CHECK_NRPE: Error - Could not complete SSL handshake.
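The hashing behaviour hashar describes (keys spread across the pool, and a dead server taking its slice of the cache with it) can be illustrated with a toy client-side selector: hash the key, take it modulo the server count. This is a sketch, not memcached's actual algorithm (real clients use ketama-style consistent hashing); the server list is hypothetical.

```shell
# Toy memcached-style server selection: same key always maps to the same
# server; removing a server from the list remaps (and thus loses) keys.
servers="deployment-mc:11211 deployment-web2:11211 deployment-web3:11211"

pick_server() {
    key=$1; shift              # remaining arguments are the server pool
    # first 8 hex digits of the key's md5, interpreted as an integer
    h=$(printf '%s' "$key" | md5sum | cut -c1-8)
    idx=$(( 0x$h % $# ))       # modulo the number of servers
    shift "$idx"
    echo "$1"
}

pick_server "enwiki:pcache:idhash:12345" $servers
```

A naive modulo like this remaps almost every key when the pool size changes, which is why real clients prefer consistent hashing: it limits the damage of one server disappearing to roughly its own share of the keyspace.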
[14:16:55] PROBLEM Total Processes is now: CRITICAL on deployment-mc i-0000021b output: CHECK_NRPE: Error - Could not complete SSL handshake.
[14:17:35] PROBLEM dpkg-check is now: CRITICAL on deployment-mc i-0000021b output: CHECK_NRPE: Error - Could not complete SSL handshake.
[14:18:03] hashar: you'd better check not to commit some private content
[14:18:11] there are passwords in wmf-config
[14:18:22] aaahhhhh
[14:18:26] I think these files are git-ignored now
[14:18:33] so just don't add some extra files :)
[14:19:23] can you possibly add me to the "depops" group please?
[14:19:33] as a default group, that would be great
[14:19:38] my current one is svn(550)
[14:19:39] :-(
[14:20:05] RECOVERY Disk Space is now: OK on deployment-mc i-0000021b output: DISK OK
[14:20:43] I've got a problem
[14:20:45] RECOVERY Free ram is now: OK on deployment-mc i-0000021b output: OK: 97% free memory
[14:21:24] When I clicked "Manage Your SSH Keys" just now, it said "No Nova credentials found for your account."
[14:21:30] What's up?
[14:21:55] RECOVERY Total Processes is now: OK on deployment-mc i-0000021b output: PROCS OK: 133 processes
[14:22:35] RECOVERY dpkg-check is now: OK on deployment-mc i-0000021b output: All packages OK
[14:23:45] RECOVERY Current Load is now: OK on deployment-mc i-0000021b output: OK - load average: 0.16, 0.12, 0.05
[14:24:15] logout and login again
[14:24:25] RECOVERY Current Users is now: OK on deployment-mc i-0000021b output: USERS OK - 0 users currently logged in
[14:26:05] PROBLEM Current Load is now: CRITICAL on bots-cb i-0000009e output: CRITICAL - load average: 43.00, 32.50, 13.61
[14:26:11] It's ok~ thanks for your help~
[14:26:15] :D
[14:26:37] hi Shujenchang
[14:26:47] hi
[14:27:29] Could I join your Upload-wizard Project?
[14:27:40] hashar: ok, I will do that on dbdump only, ok
[14:27:47] ok :))
[14:27:52] other servers are not useful for anything though
[14:28:01] that is what I noticed
[14:28:25] Shujenchang: i guess so
[14:28:30] have you thought about having the conf files / mediawiki files etc in a shorter directory?
[14:28:37] Shujenchang: i don't think i've seen you before. are you from zhwiki?
[14:28:44] done
[14:28:45] something like /beta instead of /usr/local/apache/common ;-)
[14:28:53] you need to relog for sure XD
[14:29:06] yep, I'm from zhwp
[14:29:07] hashar: that's how it works on prod
[14:29:10] * jeremyb takes a look at labsconsole
[14:29:16] it's in /usr/local/apache/common
[14:29:18] on prod
[14:29:23] Shujenchang: did you get your keys managed?
[14:29:24] this is supposed to be a mirror
[14:29:32] oh
[14:29:42] well in prod conf is in /home/wikipedia/ ;-D
[14:29:47] that's what I said too
[14:29:58] others told me that's only on fenari
[14:30:05] anyway, could you put me in depops by default? I am in 550 right now: $ getent passwd hashar
[14:30:06] hashar:x:1010:550:Hashar:/home/hashar:/bin/bash
[14:30:08] so I don't really know
[14:30:16] jeremyb: you mean SSH Keys? yes
[14:30:19] hashar: I can't do that, ldap admins can
[14:30:22] oh
[14:30:31] Shujenchang: i mean "no Nova credentials"
[14:30:47] petan|wk: so that must still be 550
[14:30:50] hashar: /home/wikipedia is on the apaches too?
[14:30:52] on prod
[14:30:53] no
[14:30:56] ok
[14:31:03] on production /home/wikipedia is only for the fenari host
[14:31:05] PROBLEM Current Load is now: WARNING on bots-cb i-0000009e output: WARNING - load average: 0.47, 12.13, 9.97
[14:31:06] or any other bastion
[14:31:09] right
[14:31:16] hashar petan|wk I'm following along, let me know if I can be useful
[14:31:17] that's like dbdump :)
[14:31:18] hashar: why does it matter what your primary group is?
[14:31:22] then the scap deployment script copies from /home/wikipedia/ to wherever it is needed
[14:31:40] jeremyb: I followed Hydriz's advice, logged out and logged in again, and it's ok~
[14:31:46] jeremyb: having devops as my default group ensures that whenever I create a file it will belong to the devops group :-D
[14:31:46] hashar: it doesn't matter that your group is different, the group for files on nfs will be depops ;)
[14:31:49] Shujenchang: good
[14:31:50] don't worry
[14:31:53] there is a cron job
[14:31:57] sup
[14:32:01] chgrp ftw :D
[14:32:12] chgrp depops -R /export
[14:32:20] I need something like uname but for group :-D
[14:32:38] err: I need something like umask but for group
[14:32:46] hashar: i haven't read everything. but you need a repo for private stuff?
[14:32:57] mutante: yeah I will
[14:33:05] mutante: I need to have a look at all the configuration mess
[14:33:28] we might end up needing several repositories
[14:33:30] meh
[14:33:38] ie: one for apache confs, one for mediawiki configuration, one for passwords
[14:33:39] hashar: there is labs/private
[14:33:39] ok, you're gonna manage them :D
[14:34:04] petan|wk: yeah that is what I am here for. Been asked to assist you :-]
[14:34:08] I am pretty happy I know how to pull in git
[14:34:14] hashar: should be able to put passwords in labs/private and use puppet variables in their place in public config, to read from there
[14:34:19] asked by who?
[14:34:27] petan|wk: so any recurring / boring tasks could probably be assigned to me :-]]]]]]
[14:34:34] someone who doesn't believe I can make it :D
[14:34:42] not at all
[14:34:51] someone believing you need a guinea pig to assist you :-]
[14:34:55] heh
[14:35:06] think about me as being an assistant hehe
[14:35:23] you only had one brain, now we have one and a half :-D
[14:35:38] mutante: does beta use puppet?
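The "umask but for group" hashar is after mostly exists: the setgid bit on a directory makes files created inside it inherit the directory's group, which is what makes the recurring `chgrp depops -R /export` cron job unnecessary for new files. A minimal sketch, using a temp directory and our own primary group so it runs anywhere (on the real NFS export you would use the depops group and need the appropriate permissions):

```shell
# setgid on a directory => new files inherit the directory's group,
# instead of the creator's primary group.
dir=$(mktemp -d)
grp=$(id -gn)              # stand-in for "depops" in this demo
chgrp "$grp" "$dir"
chmod g+s "$dir"           # the setgid bit
touch "$dir/newfile"       # created file picks up the directory's group
stat -c '%G' "$dir/newfile"
```

Note this only covers the group; the group *permission bits* on new files are still governed by the creating process's umask, which is why the two tend to be configured together.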
[14:35:40] actually we had about half before as well
[14:35:53] hashar: sort of
[14:36:03] we do use puppet for the apaches
[14:36:22] we don't want to use it for memcached :) and we can't use it for mysql
[14:36:24] 04/23/2012 - 14:36:24 - Creating a home directory for shujenchang at /export/home/upload-wizard/shujenchang
[14:36:47] hashar: if you are in the project (do you want to be?) you can just check the instance config and see which puppet groups/classes they use
[14:36:51] !log upload-wizard adding Shujenchang (+ granted sysadmin)
[14:36:53] Logged the message, Master
[14:36:53] yes he is
[14:37:11] Shujenchang: any particular plans for it? just curious but i can't stick around very long now
[14:37:11] hashar: i am reading your jenkins doc now
[14:37:24] 04/23/2012 - 14:37:24 - Updating keys for shujenchang
[14:37:43] mutante: I just added some context, nothing fancy :-/
[14:38:29] petan|wk: I can't see the project on https://labsconsole.wikimedia.org/wiki/Special:NovaProject
[14:39:02] there is a filter
[14:39:08] new feature :D
[14:39:10] ohhh
[14:39:19] and of course by default new projects are unchecked :/
[14:39:23] yeh
[14:39:23] done
[14:39:26] jeremyb: not yet, but I'll try to improve it~
[14:39:28] it was created by Ryan :P
[14:39:31] which sounds totally wrong haha
[14:39:32] don't blame us
[14:41:25] the filter is broken (at least the page is)
[14:41:42] yay
[14:41:47] try firefox
[14:41:48] it works for me
[14:41:53] "Toggle" appears for projects that you have sysadmin in, but not in those you don't
[14:42:01] hm...
[14:42:10] chrismcmahon: I am just looking at / discovering the beta cluster today
[14:42:23] it seems I have sysadmin in all projects I am in then
[14:42:27] chrismcmahon: then I will try to fix the obvious stuff ;-D
[14:42:46] hashar: just keep in mind our backup sucks :)
[14:43:00] including bastion?
[14:43:05] before doing permanent fixes XD
[14:43:06] git for the win!!
[14:43:15] the database isn't in git :P
[14:43:25] ok ok
[14:43:32] jeremyb: and how can I log onto the project?
[14:43:34] anyway, I am out for now. Need to fix my bicycle.
[14:43:45] Will think about the local configuration git repo meanwhile
[14:43:54] see you later tonight or tomorrow morning!
[14:43:59] ok
[14:44:20] !docs | Shujenchang
[14:44:20] Shujenchang: View complete documentation at https://labsconsole.wikimedia.org/wiki/Help:Contents
[14:44:37] you can also try to use
[14:44:41] @search login
[14:44:41] Results (found 1): newgrp,
[14:44:47] !access
[14:44:48] https://labsconsole.wikimedia.org/wiki/Access#Accessing_public_and_private_instances
[14:44:55] this bot knows a lot, just ask :)
[14:45:29] jeremyb: It seems there are no Instances on the Upload-wizard project? Is it on the Bastion?
[14:45:41] !bastion
[14:45:42] http://en.wikipedia.org/wiki/Bastion_host; lab's specific bastion host is: bastion.wmflabs.org; see !access
[14:45:52] there is nothing on bastion in fact
[14:46:05] RECOVERY Current Load is now: OK on bots-cb i-0000009e output: OK - load average: 0.41, 0.94, 3.97
[14:47:27] So I wonder which is the server of the Upload-wizard project
[14:48:42] there is no server in the project called upload-wizard
[14:49:38] Shujenchang: there are none atm. you're welcome to start a new instance. i'm not going to be too responsive here until ~wed or thursday but if you're still stuck then I can help more
[14:50:45] jeremyb: OK, I'll start a new instance~
[14:51:02] * jeremyb wonders why Shujenchang is sometimes orange...
[14:53:02] jeremyb: and install MediaWiki and the Upload-wizard extension after starting the new instance?
[14:53:33] jeremyb: I don't know why I'm sometimes orange too...
[14:53:53] you are pressing ctrl + b
[14:54:10] that's why
[14:55:32] jeremyb: How should I type the Instance name and choose the Instance type?
[14:56:03] !gerrit
[14:56:03] https://gerrit.wikimedia.org/
[14:56:11] !gerrit 5000
[14:56:11] https://gerrit.wikimedia.org/
[14:56:16] zzz
[14:57:05] PROBLEM Free ram is now: CRITICAL on deployment-web6 i-000001d9 output: CHECK_NRPE: Socket timeout after 10 seconds.
[15:03:14] PROBLEM Puppet freshness is now: CRITICAL on nova-production1 i-0000007b output: Puppet has not run in last 20 hours
[15:03:57] (you're still orange)
[15:04:11] Shujenchang: have you read the labsconsole docs yet?
[15:06:08] jeremyb: You mean https://labsconsole.wikimedia.org/wiki/Help:Instances ?
[15:06:27] I'm reading it
[15:07:14] PROBLEM Puppet freshness is now: CRITICAL on nova-gsoc1 i-000001de output: Puppet has not run in last 20 hours
[15:09:14] PROBLEM Current Load is now: WARNING on deployment-web5 i-00000213 output: WARNING - load average: 17.13, 13.08, 6.42
[15:09:21] yay, no orange!
[15:09:43] the instance type should be something on the small end unless you have a reason not to be small
[15:11:16] but you'll probably need to have your own DB on it so make sure you have enough ram/disk for that. and of course enough disk for the uploaded files
[15:11:34] PROBLEM Current Load is now: WARNING on deployment-web i-00000217 output: WARNING - load average: 21.80, 16.35, 8.45
[15:12:23] PROBLEM Current Load is now: WARNING on deployment-web3 i-00000219 output: WARNING - load average: 13.71, 13.42, 7.58
[15:13:15] jeremyb: I see~
[15:13:33] PROBLEM Current Load is now: WARNING on deployment-web4 i-00000214 output: WARNING - load average: 10.21, 13.34, 7.98
[15:13:43] PROBLEM Current Load is now: CRITICAL on upload-wizard i-0000021c output: Connection refused by host
[15:14:23] PROBLEM Current Users is now: CRITICAL on upload-wizard i-0000021c output: Connection refused by host
[15:15:03] PROBLEM Disk Space is now: CRITICAL on upload-wizard i-0000021c output: Connection refused by host
[15:15:43] PROBLEM Free ram is now: CRITICAL on upload-wizard i-0000021c output: Connection refused by host
[15:16:53] PROBLEM Total Processes is now: CRITICAL on upload-wizard i-0000021c output: Connection refused by host
[15:17:33] PROBLEM dpkg-check is now: CRITICAL on upload-wizard i-0000021c output: Connection refused by host
[15:19:13] RECOVERY Current Load is now: OK on deployment-web5 i-00000213 output: OK - load average: 0.16, 3.57, 5.00
[15:20:22] Shujenchang: on second thought the uploaded files can probably just go on gluster. maybe someone else (petan|wk) can say more about the current best practices for where to store a DB?
[15:20:36] i think it was in flux
[15:21:23] the best way right now is to install a local sql server :/
[15:21:29] don't use gluster for it
[15:21:37] use instance storage
[15:22:23] RECOVERY Current Load is now: OK on deployment-web3 i-00000219 output: OK - load average: 0.17, 2.18, 4.25
[15:23:33] RECOVERY Current Load is now: OK on deployment-web4 i-00000214 output: OK - load average: 0.10, 2.08, 4.38
[15:26:33] RECOVERY Current Load is now: OK on deployment-web i-00000217 output: OK - load average: 0.13, 1.37, 4.14
[15:39:54] jeremyb: It's night in China, I'll take a shower and go to sleep, see you next time~
[15:43:44] PROBLEM Current Load is now: CRITICAL on wikidata-dev-3 i-0000021d output: CHECK_NRPE: Error - Could not complete SSL handshake.
[15:44:24] PROBLEM Current Users is now: CRITICAL on wikidata-dev-3 i-0000021d output: CHECK_NRPE: Error - Could not complete SSL handshake.
[15:45:04] PROBLEM Disk Space is now: CRITICAL on wikidata-dev-3 i-0000021d output: CHECK_NRPE: Error - Could not complete SSL handshake.
[15:45:44] PROBLEM Free ram is now: CRITICAL on wikidata-dev-3 i-0000021d output: CHECK_NRPE: Error - Could not complete SSL handshake.
[15:46:54] PROBLEM Total Processes is now: CRITICAL on wikidata-dev-3 i-0000021d output: CHECK_NRPE: Error - Could not complete SSL handshake.
[15:47:34] PROBLEM dpkg-check is now: CRITICAL on wikidata-dev-3 i-0000021d output: CHECK_NRPE: Error - Could not complete SSL handshake.
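Petan's advice above ("install a local sql server, don't use gluster, use instance storage") can be sketched as a sequence of steps. This is a hedged illustration: the ephemeral-disk mount point `/mnt` is an assumption about these Ubuntu images, and on Ubuntu the MySQL AppArmor profile would also need updating to allow the new datadir path.

```shell
# Put MySQL's datadir on the instance's ephemeral storage (assumed /mnt)
# instead of the small root filesystem or the slow shared gluster volume.
sudo aptitude -y install mysql-server
sudo service mysql stop
sudo mv /var/lib/mysql /mnt/mysql
sudo ln -s /mnt/mysql /var/lib/mysql   # or set datadir=/mnt/mysql in my.cnf
sudo service mysql start
```

The trade-off is the one implied in the discussion: instance storage is fast and local, but it is ephemeral, so anything on it disappears if the instance does.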
[16:01:45] the instance not working yesterday now magically works :/ [16:02:14] but it got a different certificate hash [16:03:23] (it's the same instance, but it no longer uses the ECDSA fingerprint) [16:08:44] PROBLEM host: grail is DOWN address: i-00000216 check_ping: Invalid hostname/address - i-00000216 [16:34:09] Platonides: huh [16:34:20] I fixed one [16:39:33] PROBLEM host: grail is DOWN address: i-0000021e PING CRITICAL - Packet loss = 100% [17:09:33] PROBLEM host: grail is DOWN address: i-0000021e PING CRITICAL - Packet loss = 100% [17:38:54] Ryan_Lane, why are dhcp leases so short? [17:39:33] PROBLEM host: grail is DOWN address: i-0000021e PING CRITICAL - Packet loss = 100% [17:49:38] Platonides: something to do with being able to make changes quickly [17:50:12] I'm not sure we need it to be so short. it's the default nova uses [17:56:25] <^demon> Ryan_Lane: Improving gerrit account auto-provisioning is planned for Q2 this year :) [17:56:32] <^demon> That includes issue 1124 :D [17:56:39] rt? [17:56:49] and what do you mean by auto-provisioning? [17:56:52] <^demon> Gerrit issue 1124, use ssh keys automagically. [17:57:00] that would be ideal [17:57:14] <^demon> Pulling stuff from LDAP directly rather than copying selected fields to gerrit's database. [17:57:25] yeah [17:57:27] that would rock [17:57:46] pull only. no read/write :) [17:57:54] I'd like to keep it as simple as possible [17:58:18] I also like that OSM now handles keys really well (converting improper formats and such) [18:00:43] <^demon> Oh, we've added linking to r1234 (for referring to legacy revisions) as well as RT tickets. [18:00:47] <^demon> https://gerrit.wikimedia.org/r/#change,5033 just needs approving :) [18:01:33] ok [18:01:39] this is going to bounce gerrit when it goes through [18:02:34] <^demon> People will live :) [18:04:50] ok. done [18:09:34] PROBLEM host: grail is DOWN address: i-0000021e PING CRITICAL - Packet loss = 100% [18:11:35] ^demon: bad config line! 
[18:12:38] <^demon> Ugh :( [18:13:13] ^demon is always confusing because I think she's talking to the person above... or he depending what the mood is [18:14:24] ^demon: you're getting me in trouble [18:14:27] see -dev [18:14:59] * RoanKattouw apologizes for blaming Ryan_Lane [18:15:04] heh [18:15:27] <^demon> Anyone else see the irony in having a single point of failure with our dvcs? ;-) [18:17:21] we always have [18:17:31] at least it's distributed now [18:17:44] so it's technically possible to continue on, even if the server is down [18:20:28] <^demon> Ryan_Lane: Distributed points of failure :) [18:20:42] :D [18:25:23] Ryan_Lane: What are the plans for en.wiki database mirroring for labs? [18:26:03] or is it possible to read straight from a slave? [18:26:07] from labs? [18:26:19] it isn't [18:26:40] we have eventual plans for database mirroring, but it isn't in the short term [18:27:03] so maybe a year from now? [18:27:13] maybe 6 months? [18:27:16] cool [18:27:32] 2nd question: Can I get added to the bots project? [18:27:56] I thought I already did [18:27:59] oh [18:28:03] maybe you did [18:28:36] yep [18:28:37] thanks [18:28:39] heh [18:28:40] yw [18:33:36] does the bots server have a web host? e.g. http://bots.wmflabs.org/~kaldari ? [18:34:06] yes [18:37:34] What's the URL scheme? [18:38:17] is it just one single shared web host? Or do individual accounts have their own space? [18:38:59] mod_userdir, with the same structure you just asked [18:39:34] PROBLEM host: grail is DOWN address: i-0000021e PING CRITICAL - Packet loss = 100% [18:48:01] Tried creating a public_html directory and putting some stuff in it, but no luck. Do I need to create some sort of config file for mod_userdir?
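The `~user` URL scheme described above is Apache's mod_userdir. A hypothetical config fragment is shown below — the actual bots web-server config never appears in the log, so the shared-directory path and the Apache 2.2-style access rules here are assumptions:

```apache
# Hypothetical: serve http://bots.wmflabs.org/~user/ from a shared
# directory rather than from public_html inside each home directory
# (which would explain why kaldari's ~/public_html attempt failed).
UserDir /mnt/public_html
<Directory /mnt/public_html>
    Order allow,deny
    Allow from all
</Directory>
```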
[18:51:03] it's not in your homedir, I think [18:55:42] I'll just set something up under /var/www then [19:05:06] Done: http://bots.wmflabs.org/kaldari/index.html - hope that's kosher [19:05:54] congrats to Suhas HS who will be working on improvements to the OpenStackManager extension as part of GSoC [19:05:54] https://blog.wikimedia.org/2012/04/23/wmf-selects-9-students-for-gsoc/ [19:07:27] Ryan_Lane: ^ [19:07:45] \o/ [19:07:48] yay [19:07:54] kaldari: you should talk to petan [19:07:58] about how you should be doing this [19:08:08] you want to do it in a consistent way [19:08:11] yes [19:08:17] if you don't, then if we move your bot things will break [19:09:07] petan: let's talk [19:09:34] PROBLEM host: grail is DOWN address: i-0000021e PING CRITICAL - Packet loss = 100% [19:10:39] kaldari: hey [19:10:59] do you have bots successfully running on labs? [19:11:10] there's a bunch :) [19:11:47] petan|wk: any advice on where to put them, how to set up the job scheduling, etc.? [19:12:10] just want to make sure we're doing things consistently [19:12:35] kaldari: it's not kosher [19:12:53] use /mnt/public_html [19:12:53] :( [19:12:54] :D [19:12:56] ah [19:13:01] don't use /var/www [19:13:11] just create a folder there [19:13:58] also for bots, use bots-4 [19:14:03] that's a free server now [19:14:11] there are 3 sql servers [19:14:17] my user isn't in the www-data group so I can't create a dir [19:14:19] also mutante created some doc [19:14:31] ssh to bots-nfs [19:14:34] then sudo su [19:14:43] then mkdir /export/public/kaldari [19:14:59] chown kaldari /export/public/kaldari [19:15:08] there is a script for that [19:15:13] just dunno where it was [19:15:17] probably in my home [19:15:55] don't forget to give access to www-data [19:19:09] awesome [19:19:12] that works: http://bots.wmflabs.org/~kaldari/index.html [19:19:26] I'll delete the other one [19:20:16] petan|wk: Thanks for the guidance! [19:20:31] what kind of scheduler are you guys using?
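The mkdir/chown sequence petan walks through above can be sketched offline. The real path is /export/public/&lt;user&gt; on bots-nfs and the real commands need root, so this simulation uses a local scratch directory; only the directory-plus-index.html shape and the world-readable permissions mirror the chat:

```shell
# Simulate the bots-nfs per-user web directory setup locally.
# Real version (needs root on bots-nfs):
#   mkdir /export/public/kaldari
#   chown kaldari /export/public/kaldari
#   (and make it readable by www-data so Apache can serve it)
base="./bots-sim"
mkdir -p "$base/kaldari"
chmod 755 "$base/kaldari"                 # world-readable, like giving www-data access
echo '<html>hello</html>' > "$base/kaldari/index.html"
ls -l "$base/kaldari"
```

Once the real directory exists and is readable, the page shows up under the ~user URL, as it did for kaldari.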
[19:20:44] just cron or something else? [19:29:14] PROBLEM Puppet freshness is now: CRITICAL on wikidata-dev-2 i-0000020a output: Puppet has not run in last 20 hours [19:35:31] kaldari: none yet [19:35:35] cron is fine [19:39:34] PROBLEM host: grail is DOWN address: i-0000021e PING CRITICAL - Packet loss = 100% [20:08:08] New patchset: Dzahn; "let human user do the cron jobs until cron for system users is fixed" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/5666 [20:08:22] New review: gerrit2; "Lint check passed." [operations/puppet] (test); V: 1 - https://gerrit.wikimedia.org/r/5666 [20:09:44] PROBLEM host: grail is DOWN address: i-0000021e PING CRITICAL - Packet loss = 100% [20:21:40] New review: Dzahn; "(no comment)" [operations/puppet] (test); V: 1 C: 2; - https://gerrit.wikimedia.org/r/5666 [20:21:42] Change merged: Dzahn; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/5666 [20:21:51] hi guys [20:22:02] i'm trying to set up a local VM in which to test my puppet changes [20:22:07] i'm working on some udplog stuff [20:22:11] i'm sooooo close! [20:22:39] i've got apt.wikimedia.org set up as a apt source [20:22:40] root@wmvm:/etc/apt/sources.list.d# cat wikimedia.list [20:22:40] ## Wikimedia APT repository [20:22:40] deb http://apt.wikimedia.org/wikimedia lucid-wikimedia main universe [20:22:40] deb-src http://apt.wikimedia.org/wikimedia lucid-wikimedia main universe [20:22:50] Pastebin [20:23:40] http://pastebin.com/QE1Ep9Qw [20:24:08] oh [20:24:10] ergh [20:24:17] I should be using amd OS? [20:24:21] http://apt.wikimedia.org/wikimedia/dists/lucid-wikimedia/universe/ [20:35:20] ottomata: what do you want to install? [20:38:14] Ryan_Lane: I am a bit confused about the domain concept in LDAP. Is it possible for one MW user to log in from different domains? 
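petan's answer — no scheduler yet, "cron is fine" — could look like the crontab entry below. The bot path and the 15-minute interval are invented for illustration; nothing in the log specifies either:

```crontab
# Hypothetical bot schedule: run every 15 minutes, append output to a log.
# m   h  dom mon dow  command
*/15  *   *   *   *   /home/kaldari/bots/mybot.py >> /home/kaldari/bots/mybot.log 2>&1
```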
[20:40:24] PROBLEM host: grail is DOWN address: i-0000021e PING CRITICAL - Packet loss = 100% [20:40:28] welp, for now, trying to install udplog [20:42:05] i'm doing better now with lucid 1md64 [20:42:07] amd64* [20:42:12] just have this now: [20:42:14] W: GPG error: http://apt.wikimedia.org lucid-wikimedia Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 09DBD9F93F6CD44A [20:42:44] re: mutante above [20:43:47] ottomata: use apt-key to add the key [20:45:33] mutante: where do I get the key? [20:46:08] oo, is this it? [20:46:08] http://apt.wikimedia.org/autoinstall/keyring/ [20:47:00] yay got it! [20:47:05] ottomata: yea [20:47:30] woot [20:47:30] Package[udplog]/ensure: ensure changed 'purged' to 'latest' [20:47:32] thanks mutante! [20:47:37] i was distracted, you got it imported? nice [20:47:44] np:) didnt do much [20:47:53] running though..its late [20:48:01] cool, thank you! [20:48:02] laters [20:48:06] cya [20:52:34] ottomata: you can do the apt-key stuff via puppet as well. example in ./manifests/misc/mariadb. apt::key { ... bye [20:54:37] vvv: yes [20:54:48] vvv: it's mostly meant for transitional purposes [20:55:06] though, ideally, two domains would have totally separate user sets [20:55:10] Ryan_Lane: then what happens with preferences, email, etc? [20:55:13] I never finished support for that [20:55:21] ottomata: definition is in generic-definitions.pp define apt::key , uses keyserver.ubuntu.com though, not sure if ours is on it [20:55:26] out for real now [20:55:45] vvv: the support is mainly there for people who have active directory for windows, and some other ldap server for their unix/linux hosts [20:56:49] Ryan_Lane: well, but what I mean, wouldn't two authentication backends with different user data mess the stuff up?
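The NO_PUBKEY error ottomata pasted is resolved by importing the named key before the next `apt-get update`. The sketch below uses the key ID taken from the error message and the keyserver.ubuntu.com host mutante mentions; it needs root and network access, so it is a command transcript rather than something runnable here, and (as mutante notes) the Wikimedia key may not actually be on that keyserver — the alternative is fetching the keyring file from the apt.wikimedia.org URL pasted above:

```shell
# Import the key named in the NO_PUBKEY error, then refresh the indexes.
# Assumes root; key ID is from the GPG error above, keyserver per mutante.
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 09DBD9F93F6CD44A
apt-get update
```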
[20:56:59] some users may exist in both, but in that situation we have to assume that stuff is being synchronized, or that the admins aren't configuring wikis to pull the preferences from LDAP [21:05:08] Ryan_Lane, how should I do to let users login to an app running in a labs instance using their LDAP credentials? [21:10:24] PROBLEM host: grail is DOWN address: i-0000021e PING CRITICAL - Packet loss = 100% [21:10:37] hm [21:10:44] I'm thinking we shouldn't allow it [21:10:56] otherwise you can capture their ldap creds [21:11:08] let's put oauth or openid on the roadmap [21:11:16] openid could be pretty easy, I think [21:11:55] if the final system would be using it, it would need to accept the LDAP keys [21:12:16] sure, I could be an evil guy trying to trick anyone into revealing their credentials [21:12:26] someone in the project could be [21:12:33] but that could be said of anyone in ops [21:12:43] hmm... that's a point [21:12:45] and the community manages the project membership [21:13:05] and anyone in the project could be socially engineered to allow access [21:13:23] infiltrating one project should allow someone to infiltrate the entire system [21:13:28] *shouldn't [21:14:17] one could argue that deployment-prep has the same problem for reused wiki passwords :/ [21:14:30] yes. it does [21:14:34] we have an open bug for that [21:14:36] maybe we could have a dummy ldap user with a public password, for testing [21:15:00] hm. [21:15:23] I don't think I like that... [21:15:34] openid/oauth really solves this problem [21:15:43] it annoys me that it isn't available in mediawiki, still [21:17:21] two major issues keep coming up with labs right now: 1. it's hard to modify puppet 2. we need sane authn/z against labs and the projects [21:20:08] ok, an easier question: [21:20:15] where is nagios pinging from? 
[21:20:22] the nagios server [21:20:29] in the nagios labs project [21:20:40] dumb answer for a dumb question :) [21:20:53] what should I add in the security group to allow the ping? [21:21:45] it should be there by default [21:21:57] even on a custom security group? [21:23:49] did you remove default? [21:24:23] you should always use default. it's preconfigured for ping, ssh, nagios, etc [21:25:28] but, it's icmp -1, -1 on CIDR 0.0.0.0/0 [21:26:09] ok [21:27:04] RECOVERY host: grail is UP address: i-0000021e PING OK - Packet loss = 0%, RTA = 0.56 ms [21:27:05] that doesn't allow nrpe to access it, though [21:27:53] that's tcp 5666, 5666, 10.4.0.0/24 [21:28:04] RECOVERY dpkg-check is now: OK on grail i-0000021e output: All packages OK [21:29:29] Platonides: you know you can use multiple security groups on an instance, right? [21:29:51] and they are all applied together [21:36:30] I know very little about this :) [21:36:43] so in doubt, I chose the fewest options [21:36:56] I guess the total permissions are the union of those of all the groups [21:37:12] the docs say to always leave default checked, unless you have a really good reason to do otherwise [21:37:34] I must have missed it [21:37:55] btw, the oneiric instance that yesterday failed [21:38:00] yeah [21:38:00] today was magically working [21:38:06] is it still oneiric? [21:38:20] or did someone recreate it as lucid? [21:38:28] could be that someone fixed puppet [21:38:29] I changed it to lucid anyway [21:38:33] * Ryan_Lane nods [21:38:34] a good idea [21:38:44] I think it was the same instance [21:38:47] we leave the oneiric images there because we use them [21:38:59] openstack development is currently being done in oneiric [21:39:00] even though it was giving me a different fingerprint [21:39:10] an RSA one instead of the ECDSA I had cached [21:39:18] I don't know why it would change [21:39:22] dunno [21:39:44] could be that puppet reconfigured ssh [21:40:51] what are the diffs outputted by puppet?
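Ryan's point that multiple security groups on an instance "are all applied together", and Platonides' guess that the effective policy is the union of all groups, can be sketched with plain-text rule lists. The rule format below is illustrative; the icmp and tcp/5666 entries echo the ones quoted in the chat, and the ssh rule is assumed from "preconfigured for ping, ssh, nagios":

```shell
# Two attached groups; the effective policy is the de-duplicated union.
printf '%s\n' 'icmp -1 -1 0.0.0.0/0' 'tcp 22 22 0.0.0.0/0' > default.rules
printf '%s\n' 'tcp 5666 5666 10.4.0.0/24' 'tcp 22 22 0.0.0.0/0' > custom.rules
sort -u default.rules custom.rules > effective.rules   # union, duplicates collapsed
cat effective.rules
```

The shared tcp/22 rule appears once in the result, which is why dropping the default group (as Platonides did) silently loses its ping and nrpe rules instead of them coming from anywhere else.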
[21:41:14] changes done, patches which failed to apply... ? [21:41:36] it should never be a failure to apply [21:42:15] I don't believe it applies a patch [21:42:33] it should replace the file, and the diff is what changed as a result [21:42:59] good [21:47:32] New patchset: Ryan Lane; "Remove project groups from sudo, add ops group" [operations/puppet] (test) - https://gerrit.wikimedia.org/r/5676 [21:47:45] New review: gerrit2; "Lint check passed." [operations/puppet] (test); V: 1 - https://gerrit.wikimedia.org/r/5676 [21:48:13] New review: Ryan Lane; "(no comment)" [operations/puppet] (test); V: 0 C: 2; - https://gerrit.wikimedia.org/r/5676 [21:48:16] Change merged: Ryan Lane; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/5676 [21:49:03] New review: Ryan Lane; "(no comment)" [operations/puppet] (test); V: 0 C: 2; - https://gerrit.wikimedia.org/r/4986 [21:49:13] New review: Ryan Lane; "(no comment)" [operations/puppet] (test); V: 1 C: 2; - https://gerrit.wikimedia.org/r/4986 [21:49:16] Change merged: Ryan Lane; [operations/puppet] (test) - https://gerrit.wikimedia.org/r/4986 [21:51:00] hey ryan, [21:51:05] howdy [21:51:27] is there a place/space where i can find the log of a recently run puppet on a machine? [21:51:38] /var/log/messages [21:51:41] thx [21:52:19] or maybe /var/log/daemon.log [21:52:57] i can't read both :) [21:53:02] as root [21:53:08] which i am not [21:53:08] you can sudo to it [21:53:14] on which box? [21:53:16] stat1 [21:54:12] i can't sudo either [21:54:28] what are you looking for? [21:54:40] I don't have any immediate answers for you [21:54:45] ok. thx [22:14:04] 04/23/2012 - 22:14:04 - Creating a home directory for platonides at /export/home/gitorious/platonides [22:15:04] 04/23/2012 - 22:15:04 - Updating keys for platonides