[00:02:53] RECOVERY Free ram is now: OK on swift-be2.pmtpa.wmflabs 10.4.0.112 output: OK: 21% free memory
[00:10:53] PROBLEM Free ram is now: WARNING on swift-be2.pmtpa.wmflabs 10.4.0.112 output: Warning: 17% free memory
[01:09:43] PROBLEM Total processes is now: WARNING on bots-salebot.pmtpa.wmflabs 10.4.0.163 output: PROCS WARNING: 168 processes
[01:14:42] RECOVERY Total processes is now: OK on bots-salebot.pmtpa.wmflabs 10.4.0.163 output: PROCS OK: 91 processes
[01:44:52] PROBLEM dpkg-check is now: CRITICAL on abogott-request-tracker.pmtpa.wmflabs 10.4.1.48 output: DPKG CRITICAL dpkg reports broken packages
[01:54:53] RECOVERY dpkg-check is now: OK on abogott-request-tracker.pmtpa.wmflabs 10.4.1.48 output: All packages OK
[02:02:52] PROBLEM dpkg-check is now: CRITICAL on abogott-request-tracker.pmtpa.wmflabs 10.4.1.48 output: DPKG CRITICAL dpkg reports broken packages
[02:30:52] RECOVERY Free ram is now: OK on swift-be2.pmtpa.wmflabs 10.4.0.112 output: OK: 20% free memory
[02:37:34] RECOVERY Free ram is now: OK on swift-be1.pmtpa.wmflabs 10.4.0.107 output: OK: 20% free memory
[02:39:33] PROBLEM Current Users is now: CRITICAL on testing-jeremyb-tmp.pmtpa.wmflabs 10.4.1.63 output: Connection refused by host
[02:40:13] PROBLEM Disk Space is now: CRITICAL on testing-jeremyb-tmp.pmtpa.wmflabs 10.4.1.63 output: Connection refused by host
[02:40:53] PROBLEM Current Load is now: CRITICAL on testing-jeremyb-tmp.pmtpa.wmflabs 10.4.1.63 output: Connection refused by host
[02:40:53] PROBLEM Free ram is now: CRITICAL on testing-jeremyb-tmp.pmtpa.wmflabs 10.4.1.63 output: Connection refused by host
[02:42:23] PROBLEM Total processes is now: CRITICAL on testing-jeremyb-tmp.pmtpa.wmflabs 10.4.1.63 output: Connection refused by host
[02:44:33] PROBLEM dpkg-check is now: CRITICAL on testing-jeremyb-tmp.pmtpa.wmflabs 10.4.1.63 output: Connection refused by host
[02:47:27] slow cluster is slow ;(
[02:47:41] 10+ mins to provision a node?? ;-(
[02:50:32] PROBLEM Free ram is now: WARNING on swift-be1.pmtpa.wmflabs 10.4.0.107 output: Warning: 18% free memory
[02:54:32] RECOVERY Current Users is now: OK on testing-jeremyb-tmp.pmtpa.wmflabs 10.4.1.63 output: USERS OK - 0 users currently logged in
[02:54:32] RECOVERY dpkg-check is now: OK on testing-jeremyb-tmp.pmtpa.wmflabs 10.4.1.63 output: All packages OK
[02:55:12] RECOVERY Disk Space is now: OK on testing-jeremyb-tmp.pmtpa.wmflabs 10.4.1.63 output: DISK OK
[02:55:52] RECOVERY Current Load is now: OK on testing-jeremyb-tmp.pmtpa.wmflabs 10.4.1.63 output: OK - load average: 0.45, 1.10, 0.94
[02:55:52] RECOVERY Free ram is now: OK on testing-jeremyb-tmp.pmtpa.wmflabs 10.4.1.63 output: OK: 897% free memory
[02:57:03] so somewhere in the neighborhood of 17 mins to provision
[02:57:22] RECOVERY Total processes is now: OK on testing-jeremyb-tmp.pmtpa.wmflabs 10.4.1.63 output: PROCS OK: 84 processes
[03:00:23] PROBLEM Disk Space is now: WARNING on deployment-apache32.pmtpa.wmflabs 10.4.0.166 output: DISK WARNING - free space: / 572 MB (5% inode=74%):
[03:13:53] PROBLEM Free ram is now: WARNING on swift-be2.pmtpa.wmflabs 10.4.0.112 output: Warning: 17% free memory
[03:21:23] PROBLEM Free ram is now: WARNING on dumps-bot2.pmtpa.wmflabs 10.4.0.60 output: Warning: 19% free memory
[04:16:32] PROBLEM Disk Space is now: WARNING on deployment-apache33.pmtpa.wmflabs 10.4.0.187 output: DISK WARNING - free space: / 572 MB (5% inode=74%):
[04:46:58] Ryan_Lane: any end in sight for the labsconsole edits by 127.0.0.1 ? (i'm assuming that that happening is always a bug?)
[05:01:23] jeremyb: yes, a bug
[05:02:55] https://bugzilla.wikimedia.org/show_bug.cgi?id=43603
[05:02:58] added a bug for it
[05:05:43] * jeremyb has subscribed to it :)
[05:16:21] Ryan_Lane: so do you use devstack? or mostly just nova on nova?
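[Editor's note: the "neighborhood of 17 mins" estimate above can be checked directly from the monitoring timestamps — first "Connection refused" alert for testing-jeremyb-tmp at 02:39:33, final RECOVERY at 02:57:22. A minimal sketch of the arithmetic in Python; the timestamps come from this log, the function name is mine:]

```python
from datetime import datetime

def elapsed_minutes(start: str, end: str) -> float:
    """Minutes between two HH:MM:SS log timestamps on the same day."""
    fmt = "%H:%M:%S"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 60

# First alert for the new instance vs. its last RECOVERY message.
print(round(elapsed_minutes("02:39:33", "02:57:22"), 1))  # → 17.8
```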
[05:16:35] devstack for development
[05:16:44] nova on nova for development of openstackmanager
[05:17:25] * jeremyb is considering maybe trying to replicate the 15-20 mins from initial creation to being able to ssh in
[05:17:53] well, part of it is how long puppet takes to run
[05:17:56] (which seems pretty standard on the cluster for me? or is that just me?)
[05:17:57] right
[05:18:04] i knew that ;)
[05:18:11] the other part is that all of the nodes have 60+ instances on them
[05:18:13] i don't know if it's all one run or multiple runs
[05:18:20] right
[05:18:33] in this version of nova the periodic tasks run in the same thread as everything else
[05:18:39] which is going to make things slow
[05:18:49] it's not actually 15-20 mins, though, is it?
[05:18:53] yes
[05:18:54] I thought it was closer to 10
[05:19:30] grizzly will hopefully fix this some
[05:19:40] we still need to upgrade to folsom. heh
[05:19:48] hah
[05:19:55] so, essex atm?
[05:19:58] yeah
[05:21:22] hrmmmm
[05:21:40] Ryan_Lane: so, https://labsconsole.wikimedia.org/w/index.php?title=Nova_Resource:I-00000560&action=history has both edits in the same second? what's up with that?
[05:21:46] i wanted to see when it was created
[05:22:16] well, nova actually does the edits
[05:22:57] oh, i guess i know the answer
[05:23:13] "I" deleted the page and then nova immediately created a new one with the same name
[05:23:17] ?
[05:23:17] ah
[05:23:20] could be
[05:23:37] why'd you delete the page?
[05:23:50] or do you mean OpenStackManager did?
[05:23:52] i deleted the instance
[05:23:53] right
[05:23:57] ah. yeah
[05:24:02] that's likely what happened
[05:24:02] but its deletion is attributed to me
[05:24:06] i'm not even a sysop
[05:24:10] so i can't delete
[05:24:13] yes you can
[05:24:20] oh
[05:24:20] wait
[05:24:23] no, you can't :D
[05:24:47] I may want to not delete the page in OpenStackManager, now that nova handles that
[05:24:58] right
[05:25:22] same with creating the page
[05:25:33] also, it's nice to have some links between the pages too. not just a blank page. so e.g. a project page could have a "former instances" section
[05:26:10] https://bugzilla.wikimedia.org/show_bug.cgi?id=43604
[05:26:14] (and the instance page could link to its project page)
[05:26:31] heh, churning them out i see
[05:26:34] "former instances"?
[05:26:41] well, it's good to track things
[05:26:55] there's tons of bugs in labs, just like there's tons of bugs in mediawiki ;)
[05:27:23] well at least 127.0.0.1 i thought was already very well known :)
[05:27:29] the others are less shocking
[05:28:03] how are we doing for tests in OSM?
[05:28:52] i wonder if we have any code (in any repo) that could be checked regularly for code coverage
[05:33:30] it was well known. I just never put a bug in
[05:33:35] jeremyb: tests? :D
[05:33:36] hahaha
[05:33:52] RECOVERY Free ram is now: OK on swift-be2.pmtpa.wmflabs 10.4.0.112 output: OK: 20% free memory
[05:36:29] Ryan_Lane: where's hashar when i need him?
[05:36:32] ;-)
[05:46:52] PROBLEM Free ram is now: WARNING on swift-be2.pmtpa.wmflabs 10.4.0.112 output: Warning: 16% free memory
[06:28:33] PROBLEM Total processes is now: WARNING on parsoid-spof.pmtpa.wmflabs 10.4.0.33 output: PROCS WARNING: 155 processes
[06:28:53] PROBLEM Total processes is now: WARNING on parsoid-roundtrip4-8core.pmtpa.wmflabs 10.4.0.39 output: PROCS WARNING: 159 processes
[06:33:53] RECOVERY Total processes is now: OK on parsoid-roundtrip4-8core.pmtpa.wmflabs 10.4.0.39 output: PROCS OK: 147 processes
[06:38:32] RECOVERY Total processes is now: OK on parsoid-spof.pmtpa.wmflabs 10.4.0.33 output: PROCS OK: 149 processes
[08:01:18] 01/03/2013 - 08:01:18 - Updating keys for rubin at /export/keys/rubin
[08:01:53] RECOVERY Free ram is now: OK on swift-be2.pmtpa.wmflabs 10.4.0.112 output: OK: 20% free memory
[08:19:53] PROBLEM Free ram is now: WARNING on swift-be2.pmtpa.wmflabs 10.4.0.112 output: Warning: 16% free memory
[10:23:42] PROBLEM Free ram is now: WARNING on dumps-bot3.pmtpa.wmflabs 10.4.0.118 output: Warning: 19% free memory
[10:33:46] @search log
[10:33:46] Results (Found 12): morebots, labs-morebots, credentials, terminology, newgrp, hyperon, hashar, Thehelpfulone, blehlogging, initial-login, beta, search,
[10:33:53] !morebots
[10:33:53] source code http://svn.wikimedia.org/viewvc/mediawiki/trunk/tools/adminlogbot/
[10:34:00] !labs-morebots
[10:34:00] adminbot: http://svn.wikimedia.org/viewvc/mediawiki/trunk/tools/adminlogbot/
[10:34:20] @infobot-detail morebots
[10:34:20] Info for morebots: this key was created at N/A by N/A, this key was displayed 1 time(s), last time at 1/3/2013 10:33:53 AM (00:00:27.0845420 ago)
[10:34:33] @seenrx hyperon
[10:34:34] petan: Last time I saw hyperon they were quitting the network with reason: no reason was given at 10/19/2012 9:04:45 PM (75.13:29:48.2409290 ago) (multiple results were found: hyperon_)
[10:35:04] we need to get some maintainer for logbot
[12:41:24] 01/03/2013 - 12:41:23 - Creating a home directory for skalman at /export/keys/skalman
[12:46:34] 01/03/2013 - 12:46:34 - Updating keys for skalman at /export/keys/skalman
[12:55:34] hi - what should I do to get access to the Webtools project, in order to run a small tool for sv-wikt?
[13:25:22] @labs-project-users webtools
[13:25:23] Following users are in this project (showing all 4 members): Novaadmin, Platonides, Ryan Lane, Odie5533,
[13:25:32] skalman12: ask these users
[13:26:06] @labs-project-info webtools
[13:26:07] The project Webtools has 1 instances and 4 members, description: A project for providing web-based tools to help Wikimedia projects. Tool authors can get access to this project to host their tools. (Note: This is somewhat similar to the original Toolserver)
[13:27:25] petan: I ended up asking Platonides.. we'll see where that gets me :)
[14:01:23] RECOVERY Free ram is now: OK on dumps-bot2.pmtpa.wmflabs 10.4.0.60 output: OK: 26% free memory
[16:08:43] Hi there! Did anything happen to the puppet stuff? It won't run anymore on a puppetmaster::self. Complaining about generic-definitions.pp
[16:10:26] Does anyone else have that problem? I get it on every labs puppetmaster::self that I updated from git today.
[16:21:23] 01/03/2013 - 16:21:23 - Creating a home directory for ejcaputo at /export/keys/ejcaputo
[16:26:21] 01/03/2013 - 16:26:20 - Updating keys for ejcaputo at /export/keys/ejcaputo
[16:28:49] !labs
[16:28:49] https://labsconsole.wikimedia.org/wiki/$1
[16:29:06] @regsearch .*
[16:29:06] Results (Found 155): morebots, bang, labs-home-wm, labs-nagios-wm, labs-morebots, gerrit-wm, wiki, labs, extension, wm-bot, gerrit, revision, monitor, alert, password, unicorn, bz, os-change, instancelist, instance-json, leslie's-reset, damianz's-reset, amend, credentials, queue, sal, info, ask, sudo, access, $realm, keys, $site, pageant, blueprint-dns, bots, stucked, rt, pxe, ghsh, group, pathconflict, terminology, etherpad, epad, pastebin, newgrp, osm-bug, afk, test, manage-projects, rights, new-labsuser, cs, new-ldapuser, quilt, labs-project, openstack-manager, wikitech, load, load-all, wl, domain, docs, ssh, documentation, help, account, start, link, socks-proxy, magic, labsconf, console, ping, hexmode, Ryan, resource, account-questions, hyperon, deployment-prep, security, project-discuss, project-access, putty, :), instanceproject, puppet-variables, demon, linux, git, port-forwarding, pong, whatIwant, broken, damianz, puppet, report, db, nagios-fix, whitespace, instance, hashar, bot, sexytime, Thehelpfulone, bug, pl, projects, meh, blehlogging, origin/test, mac, windows, petan, accountreq, bastion, labsconsole.wiki, nagios.wmflabs.org, nagios, puppetmaster::self, git-puppet, addresses, initial-login, gerritsearch, wikiversity-sandbox, deployment-beta-docs-1, sudo-policies, cookies, svn, forwarding, labsconsole, beta, *, say, sudo-policy, ryanland, puppetmasterself, search, del, gitweb, python, !log, htmllogs, mobile-cache, op_on_duty, botsdocs, mail, labswiki, google, sshkey, requests, home, single-node-mediawiki, cmds,
[16:29:08] mm
[16:29:08] sorry for spam
[16:29:15] nom
[16:32:46] @infobot-snapshot backup
[16:32:46] Snapshot snapshots/#wikimedia-labs/backup was created for current database as of 1/3/2013 4:32:46 PM
[16:32:54] :)
[16:33:16] now we can have multiple db's and switch between them online
[17:33:42] PROBLEM host: wikidata-puppet-dev.pmtpa.wmflabs is DOWN address: 10.4.1.63 CRITICAL - Host Unreachable (10.4.1.63)
[17:38:52] RECOVERY host: wikidata-puppet-dev.pmtpa.wmflabs is UP address: 10.4.1.63 PING OK - Packet loss = 0%, RTA = 0.68 ms
[17:39:22] PROBLEM Total processes is now: CRITICAL on wikidata-puppet-dev.pmtpa.wmflabs 10.4.1.63 output: Connection refused by host
[17:40:53] PROBLEM Current Load is now: CRITICAL on wikidata-puppet-dev.pmtpa.wmflabs 10.4.1.63 output: Connection refused by host
[17:40:53] PROBLEM dpkg-check is now: CRITICAL on wikidata-puppet-dev.pmtpa.wmflabs 10.4.1.63 output: Connection refused by host
[17:41:33] PROBLEM Current Users is now: CRITICAL on wikidata-puppet-dev.pmtpa.wmflabs 10.4.1.63 output: Connection refused by host
[17:42:13] PROBLEM Disk Space is now: CRITICAL on wikidata-puppet-dev.pmtpa.wmflabs 10.4.1.63 output: Connection refused by host
[17:43:03] PROBLEM Free ram is now: CRITICAL on wikidata-puppet-dev.pmtpa.wmflabs 10.4.1.63 output: Connection refused by host
[17:44:48] Silke_WMDE: oh? did you do a git pull?
[17:44:58] yes
[17:45:26] that may be why
[17:45:36] which instance? I can help you debug
[17:45:44] I am creating a brand new labs instance right now to try again and exclude that I messed something up
[17:46:24] ok, well, let me know if you need any help
[17:46:39] ok thx!
[17:59:53] PROBLEM host: wikidata-puppet-client2.pmtpa.wmflabs is DOWN address: 10.4.1.14 CRITICAL - Host Unreachable (10.4.1.14)
[18:03:52] RECOVERY host: wikidata-puppet-client2.pmtpa.wmflabs is UP address: 10.4.1.14 PING OK - Packet loss = 0%, RTA = 0.71 ms
[18:04:22] PROBLEM Total processes is now: CRITICAL on wikidata-puppet-client2.pmtpa.wmflabs 10.4.1.14 output: Connection refused by host
[18:05:12] PROBLEM dpkg-check is now: CRITICAL on wikidata-puppet-client2.pmtpa.wmflabs 10.4.1.14 output: Connection refused by host
[18:05:52] PROBLEM Current Load is now: CRITICAL on wikidata-puppet-client2.pmtpa.wmflabs 10.4.1.14 output: Connection refused by host
[18:06:32] PROBLEM Current Users is now: CRITICAL on wikidata-puppet-client2.pmtpa.wmflabs 10.4.1.14 output: Connection refused by host
[18:07:12] PROBLEM Disk Space is now: CRITICAL on wikidata-puppet-client2.pmtpa.wmflabs 10.4.1.14 output: Connection refused by host
[18:08:02] PROBLEM Free ram is now: CRITICAL on wikidata-puppet-client2.pmtpa.wmflabs 10.4.1.14 output: Connection refused by host
[18:13:03] RECOVERY Free ram is now: OK on wikidata-puppet-client2.pmtpa.wmflabs 10.4.1.14 output: OK: 709% free memory
[18:14:23] RECOVERY Total processes is now: OK on wikidata-puppet-client2.pmtpa.wmflabs 10.4.1.14 output: PROCS OK: 85 processes
[18:15:52] RECOVERY Current Load is now: OK on wikidata-puppet-client2.pmtpa.wmflabs 10.4.1.14 output: OK - load average: 0.13, 0.12, 0.05
[18:15:52] RECOVERY dpkg-check is now: OK on wikidata-puppet-client2.pmtpa.wmflabs 10.4.1.14 output: All packages OK
[18:16:32] RECOVERY Current Users is now: OK on wikidata-puppet-client2.pmtpa.wmflabs 10.4.1.14 output: USERS OK - 0 users currently logged in
[18:17:12] RECOVERY Disk Space is now: OK on wikidata-puppet-client2.pmtpa.wmflabs 10.4.1.14 output: DISK OK
[18:34:18] Ryan_Lane: OK, it occurs on a newly created labs instance/puppetmaster::self, too. puppet says "Failed to apply catalog: Parameter creates failed: creates must be a fully qualified path at /etc/puppet/manifests/generic-definitions.pp:666"
[18:34:46] But only *after* I tried to install Wikidata :P
[18:34:55] which instance is this, so that I can log in?
[18:35:02] heh. I love that it's 666
[18:35:18] yeah. :) It's wikidata-puppet-client2
[18:36:45] are you using git::clone anywhere?
[18:36:52] yes
[18:37:10] it's used in mediawiki.pp and wikidata.pp
[18:37:23] I think install_path isn't being set maybe?
[18:38:05] where is $install_path being set?
[18:38:23] hm... there's a default somewhere, wait...
[18:38:32] there's one in mediawiki.pp
[18:40:09] ah
[18:40:21] I think that's what's missing
[18:40:38] that's what it's complaining about right now, anyway
[18:40:40] Ryan_Lane andrewbogott Is this working syntax or a typo:
[18:40:56] "{$install_path}/extensions/UniversalLanguageSelector"
[18:41:02] that's a typo
[18:41:09] it should be ${install_path}
[18:41:18] hm. is that what's breaking things?
[18:41:29] Might be
[18:41:48] yeah, may be
[18:42:39] So, I'll send that file to review.
[18:42:48] well, you can try it out locally
[18:42:52] to see if it fixes it
[18:42:55] right
[18:42:57] ok
[18:43:59] \o/ got a different error now!
[18:44:02] :)
[18:44:03] :D
[18:44:10] well, that's a step :)
[18:44:48] It's not new, I just didn't understand what to do about it:
[18:45:04] Could not find dependent Service[gmond] for File[/etc/ganglia/conf.d/memcached.pyconf] at /etc/puppet/manifests/memcached.pp:49
[18:45:20] o.O
[18:45:46] Until now, I always commented out 2 lines locally (memcached in mediawiki.pp), then puppet runs
[18:45:59] hm. gmond should be getting installed
[18:46:53] ah
[18:46:55] I know why
[18:47:07] I see you are adding the node info locally
[18:47:14] that means you're overriding what's in ldap
[18:47:19] oh
[18:47:34] how else would I add my instance?
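[Editor's note: the typo diagnosed above — `"{$install_path}"` instead of `"${install_path}"` — is easy to catch mechanically. In a Puppet double-quoted string, `$var` interpolates even without braces, so `"{$install_path}/…"` yields `{<value>}/…` with literal braces, which is why `creates` complains the path is not fully qualified. A hypothetical lint sketch in Python; the regex and function name are mine, not part of any real tool:]

```python
import re

# "{$var}" in a Puppet double-quoted string interpolates $var but keeps the
# braces literal, producing e.g. "{/srv/mediawiki}/ext" -- almost always a typo
# for "${var}". Flag any occurrence of the bad pattern.
BAD_INTERP = re.compile(r'\{\$\w+\}')

def find_bad_interpolations(manifest_text: str) -> list[str]:
    """Return lines of a Puppet manifest that use {$var} instead of ${var}."""
    return [line for line in manifest_text.splitlines()
            if BAD_INTERP.search(line)]

snippet = '"{$install_path}/extensions/UniversalLanguageSelector"'
print(find_bad_interpolations(snippet))  # flags the offending line
```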
[18:48:02] under "Labs sysadmins" in the sidebar
[18:48:08] "Manage Puppet groups"
[18:48:31] under your project, add a group
[18:48:41] call it anything you like (maybe wikidata-roles)
[18:48:52] then, add a class: role::wikidata-client::labs
[18:49:12] after doing so, you can go to "Manage instances", and click "configure" next to the instance
[18:49:19] that'll let you select the role
[18:49:27] then you can remove the node entry
[18:49:41] you can then use that role for any other instance in the project
[18:49:42] I don't see a button to add a class
[18:49:50] did you add a group yet?
[18:49:59] yes
[18:50:02] hm
[18:50:03] one sec
[18:50:32] should I see that group in the list now?
[18:50:35] you don't see one next to "Classes"?
[18:50:41] yeah
[18:50:42] I do
[18:51:29] this is the worst interface in labsconsole :(
[18:51:34] I really need to fix this one day
[18:51:44] * Silke_WMDE nods
[18:51:47] :)
[18:51:48] now there's two wikidata-roles
[18:52:01] and a wikidata one too
[18:52:31] doesn't see wikidata-roles
[18:52:39] hm
[18:52:40] weird
[18:52:46] this is obviously a bug
[18:53:14] I see headers "classes" "variables" with nothing in between
[18:53:46] wow. really? you don't see "wikidata-roles" right above classes and variables?
[18:53:47] the role::labswikidata-dev was created by someone else
[18:53:57] there's no "[Add class]" button?
[18:54:13] no
[18:54:21] yeah, this is definitely broken, then
[18:54:31] do I have to have permissions for that maybe?
[18:54:32] let me add the class for you, and open a bug
[18:54:35] you should
[18:54:56] maybe I'm underprivileged
[18:55:22] nope. you're listed as a sysadmin
[18:55:37] and you were able to create the wikidata-roles group
[18:55:52] ok
[18:55:54] I deleted wikidata-roles
[18:55:59] and added the class to the wikidata group
[18:56:04] since one already existed
[18:56:12] this explains why I never understood what this interface was for.
[18:56:13] you can go to the instance's configuration page and add it now
[18:56:16] yeah
[18:56:22] it does :D
[18:56:44] it works for me, but my privileges are higher
[18:57:56] https://bugzilla.wikimedia.org/show_bug.cgi?id=43613
[18:59:01] ok, thanks
[19:01:15] Silke_WMDE: let me know if that helps
[19:02:32] PROBLEM Free ram is now: WARNING on abogott-request-tracker.pmtpa.wmflabs 10.4.1.48 output: Warning: 16% free memory
[19:02:44] Ryan_Lane: Thanks! Yes, the memcached/gmond error is gone.
[19:02:47] cool
[19:02:49] \o/
[19:17:51] andrewbogott: I just invited you for a tiny review, just a typo, but important for Wikidata, Wikipedia and the world. :)
[19:18:10] Silke_WMDE: Yep, looks fine. I'm going to take a minute to try to figure out why Jenkins hates it
[19:18:27] good question
[19:18:38] andrewbogott: jenkins is failing for all puppet changes right now
[19:18:40] hashar: ^^
[19:18:46] good timing :)
[19:18:47] I noticed :)
[19:19:04] hi
[19:19:09] almost back from vacation :D
[19:19:16] happy new year!
[19:19:17] andrewbogott: what is wrong with Jenkins ?
[19:19:34] hashar: it's breaking for every puppet change
[19:19:36] hashar: check it out: https://gerrit.wikimedia.org/r/#/c/42111/1
[19:19:42] Ryan_Lane: do you have any 30,000-foot doc about git deploy? I have been tasked to deploy it on labs for the 'beta' cluster.
[19:19:51] The pep8 complaint is valid, the other is… cryptic.
[19:19:53] oh man
[19:20:05] hashar: well, it's going to take some work to do so
[19:20:07] maybe someone merged a faulty change
[19:20:32] andrewbogott: 19:13:47 err: Could not parse for environment production: No file(s) found for import of '../private/manifests/passwords.pp' at /var/lib/jenkins/jobs/operations-puppet-validate/workspace/manifests/certs.pp:3
[19:20:50] the passwords include is stripped in the jenkins job
[19:20:58] apparently not ;)
[19:21:05] but only in the manifests/site.pp file iirc
[19:21:09] not in manifests/certs.pp
[19:21:17] it's being included there?
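[Editor's note: the validation failure above happens because the public puppet repo cannot resolve imports from the private repo, so those import lines have to be filtered out of every manifest before validating — not just site.pp. A hypothetical sketch of that filtering step in Python; the file layout, regex, and function name are mine, not the actual Jenkins job:]

```python
import re
from pathlib import Path

# Lines like: import '../private/manifests/passwords.pp'
PRIVATE_IMPORT = re.compile(r"^\s*import\s+['\"].*private/.*['\"]")

def strip_private_imports(manifest_dir: str) -> int:
    """Comment out imports of the private repo in all .pp files under
    manifest_dir, so `puppet parser validate` can run; return files changed."""
    changed = 0
    for pp in Path(manifest_dir).rglob("*.pp"):
        lines = pp.read_text().splitlines(keepends=True)
        out = ["# stripped for CI\n" if PRIVATE_IMPORT.match(l) else l
               for l in lines]
        if out != lines:
            pp.write_text("".join(out))
            changed += 1
    return changed
```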
[19:21:35] why in the world would that be there?
[19:21:39] from certs.pp yeah, according to the above error
[19:22:36] Ryan_Lane: 6782d9a62f7f65e4e186bd0fe49640b199c8ee98
[19:22:50] !g I3358ace92578266ca288663d5a9c83f664af1fa1
[19:22:50] https://gerrit.wikimedia.org/r/#q,I3358ace92578266ca288663d5a9c83f664af1fa1,n,z
[19:23:19] * andrewbogott wonders why that has a different hash for you and for me
[19:23:41] andrewbogott: yours is the commit sha1, mine is the Gerrit change-id field
[19:23:59] oh sure
[19:24:50] should we filter out the passwords.pp include in manifests/certs.pp ?
[19:25:26] no
[19:25:30] I'm fixing this now
[19:25:50] \O/
[19:26:05] hashar: Will gerrit refuse to merge if Jenkins doesn't like a patch?
[19:26:35] andrewbogott: you can remove jenkins as a reviewer
[19:26:45] andrewbogott: indeed. You can't submit a change if anyone voted verified -1
[19:27:00] andrewbogott: but ops should be able to remove the jenkins-bot user from the reviewer list (as Ryan stated)
[19:27:10] hashar: That seems good; just wondering how the earlier broken change got in in that case.
[19:27:18] Someone merged quicklike before Jenkins had a chance to vote?
[19:27:19] should be done carefully though, cause most of the time jenkins votes -1 for a good reason :-]
[19:27:59] andrewbogott: apparently yeah. https://gerrit.wikimedia.org/r/#/c/42058/ has introduced the issue and been marked verified -1 by jenkins
[19:28:06] "You can't submit a change if anyone voted verified -1" <- consensus management!
[19:28:30] andrewbogott: when it was first merged, jenkins said +1
[19:28:41] er, "verified"
[19:28:50] Silke_WMDE: happy new year :-]
[19:28:57] :)
[19:29:20] Ryan_Lane: I will have a look at the wikitech doc for git deploy next monday and poke Reedy about it too (assuming he knows about it).
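[Editor's note: the two identifiers puzzled over above are different things — `6782d9a…` is the commit SHA-1, while `I3358ace…` is the Gerrit Change-Id, which survives amends and rebases because it lives in the commit-message footer rather than being derived from the commit contents. A minimal sketch of pulling one out of a commit message; the Change-Id is the one from this log, the commit subject is made up:]

```python
import re

def change_id(commit_message: str) -> "str | None":
    """Return the Gerrit Change-Id footer from a commit message, if present.
    A Change-Id is the letter I followed by 40 hex digits."""
    m = re.search(r"^Change-Id:\s*(I[0-9a-f]{40})\s*$",
                  commit_message, re.MULTILINE)
    return m.group(1) if m else None

msg = """Fix puppet validation breakage

Change-Id: I3358ace92578266ca288663d5a9c83f664af1fa1
"""
print(change_id(msg))  # → I3358ace92578266ca288663d5a9c83f664af1fa1
```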
[19:29:34] Ryan_Lane: then will ping you for some more explanations :-]
[19:30:15] hashar: I'm going to need to make some changes to have it work
[19:30:24] hashar: but, we can go through it next week :)
[19:30:40] Silke_WMDE: So rather than just remove Jenkins I'm going to wait for Ryan to fix the underlying problem and then rebase and hope that satisfies Jenkins. So, stay tuned...
[19:30:48] I fixed it
[19:30:48] Ryan_Lane: nice. Will have a look at the code first to get familiar with it.
[19:30:51] andrewbogott:
[19:30:52] ok
[19:31:02] * andrewbogott rebases
[19:31:07] Ryan_Lane: the bug is https://bugzilla.wikimedia.org/43339 , it is listed as a requirement for the EQIAD migration
[19:31:48] yeah
[19:32:50] cool, that was fast!
[19:33:41] thank you!
[19:35:23] I'm starving and missing a date. Actually, I might need some LoadBalancer config help tomorrow. I'll sneak around here.
[19:35:38] CU and thanks again!
[19:37:16] Ryan_Lane: out for now. See you later :-]
[19:37:23] * hashar waves
[19:38:10] bye hashar
[20:24:22] PROBLEM Free ram is now: WARNING on dumps-bot2.pmtpa.wmflabs 10.4.0.60 output: Warning: 19% free memory
[20:34:22] RECOVERY Free ram is now: OK on dumps-bot2.pmtpa.wmflabs 10.4.0.60 output: OK: 39% free memory
[20:47:07] * sumanah dives into [[Special:PrefixIndex/Wikimedia_Labs/]] for a few minutes
[21:08:14] Ryan_Lane: are you willing to give membership to the webtools project?
[21:15:28] giftpflanze: sure
[21:15:34] giftpflanze: what's your username?
[21:15:36] gifti?
[21:15:48] yes
[21:18:13] ok
[21:18:14] sec
[21:18:22] you know that nothing is set up in that, right?
[21:18:30] yes
[21:18:36] ok. good :)
[21:19:28] giftpflanze: done
[21:19:34] thank you :)
[21:19:53] yw
[21:20:35] @labs-user gifti
[21:20:35] That user is not a member of any project
[21:20:42] @labs-user Gifti
[21:20:42] Gifti is member of 2 projects: Bastion, Bots,
[21:20:48] :o
[21:20:53] maybe wait a bit
[21:21:12] @labs-resolve blah
[21:21:12] I don't know this instance, sorry, try browsing the list by hand, but I can guarantee there is no such instance matching this name, host or Nova ID unless it was created less than 57 seconds ago
[21:21:20] 57 sec lag :/
[21:22:01] @labs-user Gifti
[21:22:01] Gifti is member of 3 projects: Bastion, Bots, Webtools,
[21:22:15] here we go
[21:22:27] any plan to fix the trailing comma?
[21:22:59] does your bot have continuation?
[21:30:54] giftpflanze: mm, depends
[21:31:10] I consider it a cosmetic issue :D so low priority
[21:31:15] but if u want
[21:31:24] either file a ticket or fix it yourself :P
[21:31:30] fix it myself?
[21:31:37] how so?
[21:31:37] it's open source
[21:31:44] where can i do that?
[21:31:48] Type @commands for list of commands. This bot is running http://meta.wikimedia.org/wiki/WM-Bot version wikimedia bot v. 1.10.4.80 source code licensed under GPL and located at https://github.com/benapetr/wikimedia-bot
[21:31:57] this is plugin labs
[21:32:05] so in plugins/labs
[21:32:16] isn't that in all of the commands?
[21:32:27] or is that just coincidence?
[21:32:32] PROBLEM Free ram is now: CRITICAL on abogott-request-tracker.pmtpa.wmflabs 10.4.1.48 output: Critical: 5% free memory
[21:32:34] it's in most of the commands that are comma separated
[21:32:41] ah
[21:32:59] in fact even space separated messages have a trailing space, you just don't see it
[21:33:00] :D
[21:34:03] @labs-project-users bastion
[21:34:03] Following users are in this project (displaying 18 of 254 total): Andre Engels, Aaron Schulz, Abartov, Abe.music, Addshore, Adminxor, Akhanna, Chughakshay16, Alejrb, Amgine, Amire80, Andrew Bogott, Anomie, Apmon, ArielGlenn, Arky, Asher, Dash1291,
[21:34:11] can you get more?
[21:34:20] more users? how
[21:34:26] freenode limits the message size
[21:34:29] like @more
[21:34:50] I don't think people would like to browse large lists using irc
[21:35:00] but in theory yes it can be done
[21:35:14] I mean it can be implemented but I don't know if it's a good idea
[21:35:27] this bot doesn't post more than 1 message to avoid spamming
[21:35:32] that sounds reasonable
[21:36:19] so i would have to get an account on github and make a pull request?
[21:36:51] tbh I don't know how it works there, I just know that anyone can send pull requests...
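[Editor's note: both issues discussed here — the trailing comma in the bot's lists and the need to cap output at IRC's message-size limit — come down to how the list is serialized. wm-bot itself is written in C#; a sketch of the obvious fix in Python, with an illustrative byte budget (IRC messages are 512 bytes including protocol overhead):]

```python
def format_project_list(names: list, limit: int = 400) -> str:
    """Join names with ', ' (no trailing separator), truncating so the
    result stays within roughly one IRC message worth of bytes."""
    shown = []
    for name in names:
        candidate = ", ".join(shown + [name])
        if len(candidate.encode("utf-8")) > limit:
            # Stop here and say how many entries were cut off.
            return ", ".join(shown) + f" … and {len(names) - len(shown)} more"
        shown.append(name)
    return ", ".join(shown)

print(format_project_list(["Bastion", "Bots", "Webtools"]))  # → Bastion, Bots, Webtools
```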
I guess you need an account for that
[21:37:14] if it was possible to make it even more open I would be happy to do that
[21:37:34] PROBLEM Free ram is now: WARNING on abogott-request-tracker.pmtpa.wmflabs 10.4.1.48 output: Warning: 16% free memory
[21:37:38] or if you create an account I think I can give you some rights so that you can just push and merge without asking
[21:42:27] balls
[21:42:37] as if gerrit uses case-sensitive usernames by default
[21:44:04] my src directory is so full of shit :\ i haven't used it in a long time
[21:46:24] make clean
[21:46:38] haven't a makefile for it ^^
[22:26:01] andrewbogott: http://www.openstack.org/blog/2013/01/save-the-date-openstack-summit-spring-2013/
[22:26:11] portland, april 15-18
[22:26:23] feb 15th is the last date for talks
[22:26:33] hm… ok.
[22:26:59] last call for sessions will be later
[22:29:43] amsterdam hackathon is in May, thankfully :)
[23:37:32] RECOVERY Free ram is now: OK on abogott-request-tracker.pmtpa.wmflabs 10.4.1.48 output: OK: 27% free memory
[23:45:32] PROBLEM Free ram is now: WARNING on abogott-request-tracker.pmtpa.wmflabs 10.4.1.48 output: Warning: 13% free memory
[23:52:38] Ryan_Lane: i have a script that checks weblink on dewiki if they're working or not and reports them on talk pages. therefore it runs like 100 threads in parallel to not exhaust system resources. when i port that from toolserver to labs, should i use a separate instance for that?
[23:52:45] *weblinks
[23:52:58] you can, yeah
[23:53:04] is this not a bot?
[23:53:09] it is
[23:53:13] * Ryan_Lane nods
[23:53:19] is this going into bots, or webtools?
[23:53:25] bots
[23:53:28] ah. ok
[23:53:44] yeah. if it's going to fully utilize an instance, then it should be in its own instance
[23:54:47] ok
[23:56:47] I can think of lots of things that would fully utilize an instance; you would not approve of them :P
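[Editor's note: the weblink-checker workload described at the end — many slow HTTP probes in parallel — maps naturally onto a thread pool, since the threads spend nearly all their time blocked on network I/O rather than CPU. A minimal, illustrative sketch; the function names and worker count are mine, not giftpflanze's actual script:]

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.error import URLError
from urllib.request import Request, urlopen

def check(url: str, timeout: int = 10):
    """Probe one external link with a HEAD request; return (url, alive?)."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return url, resp.status < 400
    except (URLError, OSError, ValueError):
        # DNS failure, timeout, malformed URL, HTTP error, ...
        return url, False

def check_all(urls, workers: int = 100):
    """Probe many links concurrently, roughly as the dewiki weblink bot
    described above does with its ~100 threads."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(check, urls))
```

Dead links from `check_all` would then be grouped per article and reported on the corresponding talk pages.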