[01:08:25] @replag
[01:08:29] !replag
[01:08:34] !lag
[01:08:40] grr
[01:08:58] we need that feature implemented
[03:42:55] gah, he left
[03:43:02] grrrrrr @ amgine
[06:00:03] are you having problems with your credit card / personal loan (KTA)?? We can help get it written off, contact Dita 02190409949
[06:55:39] huh
[06:55:47] idk if i ever saw a spambot here before
[08:44:05] Hi. I'm migrating from toolserver to Tools. On toolserver, there was a shared folder with dumps -- how can I access dumps here?
[08:46:10] https://www.mediawiki.org/wiki/Wikimedia_Labs/Tool_Labs/Needed_Toolserver_features#Filesystem_.2F_shared_storage lists this as done, but without a hint,
[08:47:53] and https://wikitech.wikimedia.org/wiki/Help:Shared_storage#Public_datasets gives the exact file path
[09:09:46] gry, thanks a lot
[11:20:49] !log deployment-prep Cleaned out Parsoid submodule: sudo su - mwdeploy then cd /home/wikipedia/common/php-master/extensions/Parsoid && git reset --hard origin/master && cd .. && git submodule update --init Parsoid
[11:20:53] Logged the message, Master
[11:20:54] aude: ^^^
[11:21:11] aude: for some reason, git messes up some repositories from time to time :(
[11:21:27] ok :/
[12:22:21] which mvc framework for php do you recommend?
[12:59:54] AzaToth: django
[13:04:38] hashar: what did you get to eat? Did you bring the rest of us back anything? Where's my coffee, it's the one with LOTS of sugar and cream?
[13:04:54] :D
[13:04:58] nothing sorry
[13:05:12] I see how you are. ;)
[13:05:13] I would recommend stopping eating extra sugar entirely!
[13:05:34] will eventually cause you some kind of diabetes :(
[13:05:50] Why's that? Are you saying I'm sweet enough already? :p
[13:12:49] Ha... no comment I see. xD
[15:13:22] Hi guys, does anybody know how much disk space I have on a bastion? I'd like to make a custom dump of some data and import it to my machine to make an analysis.
[15:13:25] Coren, maybe you have some ideas on http://lists.wikimedia.org/pipermail/toolserver-l/2013-November/006389.html ?
[15:13:40] Coren: (how to redirect old toolserver accounts to TL)
[15:14:07] valhallasw: I don't see any way of doing it uniformly -- the tool name often changes also between ts and tl.
[15:14:58] elgranscott: Not very much; you could probably stuff 1G in there for a while, but you shouldn't keep it.
[15:15:29] Coren: Sure -- it doesn't need to be uniform, but we should really not let urls die just because the tool name changes :-)
[15:15:50] Coren: could we have a vhost with a large-ish redirect config?
[15:16:55] valhallasw: prod does...
[15:16:59] Well yeah, that'd work, but the maintenance job is nontrivial.
[15:17:35] Nosy would have to maintain a mapping of ts dir -> some uri on tl where the same tool lives; and tool maintainers would have to tell her when they move and such.
[15:18:05] maintenance should be pretty trivial because it should be a stable mapping. unless you're e.g. switching back and forth between tools and your own separate labs project
[15:18:23] Coren: the reason I want to do that is because I need a complete dump of the revision table. I downloaded the stub-meta-history.xml but I don't have all the data I need, like rev_len. Any idea?
[15:18:24] Coren: well, we could also do that on TL, I think? any dead account would just redirect to the same url on
[15:18:29] nosy can just redirect all expired accounts to the same path on a tool labs host and then we take it from there
[15:18:52] that would also solve the issue of what to do with toolserver.org
[15:19:16] i think valhallasw and I are on the same page :)
[15:19:28] elgranscott: ... that table is beyond gigantic. Why in hell would you need a complete dump of it?
[15:22:50] I really don't like the idea of keeping those redirects alive indefinitely. Random incoming links to a dead service should not be blindly redirected to some other service that may or may not be doing the same thing or have the same maintainers.
[15:23:17] Coren: I know it is gigantic, but I need it to find some correlations between changes (protection and unprotection). And rev_len is a data point I need.
[15:24:38] elgranscott: Why not do that analysis in the labs rather than shuffle the dataset around?
[15:25:22] I mean, the labs already has the data available, and in a database. :-)
[15:26:27] Coren: well, then add a nag screen in between: "This is an old toolserver url. The contents *might* be available here, but no guarantees!"
[15:27:13] Coren: Yes I know, but I use a piece of software on Windows to perform the analysis...
[15:27:36] elgranscott: what do you mean rev_len is not in stub-meta-history, it's the "bytes" parameter to the "text" element
[15:28:09] Coren: and there is also a Linux version, but it's software not installed on the labs
[15:28:25] elgranscott: That, sir, is trivially arranged. :-)
[15:29:33] valhallasw: That's not much of a gain over having toolserver.org simply provide a landing page with an explanation and a link to the list of tl tools like nosy suggested.
[15:29:46] Coren: what do you mean?
[15:30:38] elgranscott: If you need software installed, we can install software. The alternative is to transmit the data directly to your box and not store it.
[15:33:24] Coren: what? of course it does, because it provides a direct url where possible.
[15:34:15] valhallasw: The thing is, I remain unconvinced that this is desirable.
[15:35:43] valhallasw: I don't want a link to an old tool redirected to a new tool which might have different privacy rules, or different functionality. Have a bot change the incoming links where appropriate, kill the rest.
[15:36:03] Coren: but what if it's exactly the same tool? even the same maintainer?
[15:36:30] we have links e.g. in mailing list archives
[15:36:34] how do we fix all that?
[15:37:24] (also, some toolserver users will have set redirects up themselves already. then it turns to 403 once they leave TS...?)
[15:37:28] Coren: I'm really at a loss for words. Not only are we killing the toolserver, a reasonably well working system, we are also making it as painful as possible for not only the developers, but also the *users*?
[15:37:50] * Coren clearly doesn't get what you are driving at.
[15:38:06] Perhaps if you gave me an example of a link it'd be useful to redirect?
[15:38:16] Any random tool that has moved to TL?
[15:39:04] valhallasw: That's just a general assertion, not an actual example.
[15:39:32] https://toolserver.org/~quentinv57/sulinfo/coren -> http://tools.wmflabs.org/sulinfo/sulinfo.php?username=coren
[15:39:50] that's not an example of same maintainer but i can find one
[15:40:03] catscan also now has a different maintainer, but it's the same basic point
[15:40:17] pywikibot nightlies are another one
[15:40:45] Why would anyone want to use the ts link to nightlies when they could simply be using the new link?
[15:40:51] Nostalgia?
[15:40:59] https://toolserver.org/~magnus/flickr2commons.php is an example of same tool
[15:41:04] because they find the link in a mailing archive, on some outdated website or otherwise?
[15:41:29] errr
[15:41:30] Coren: The problem is the software has third-party licenses and I don't know if it would be legal to install it on the labs. But how can you transmit the data to my box? What do you mean?
[15:41:36] example of same maintainer*
[15:41:55] * jeremyb runs away
[15:41:55] Do we even have a minimal idea of how many of those broken links /are/ in fact followed?
[15:42:51] Coren: that's only an argument against putting too much effort into it, not an argument against putting *any* reasonable effort into it.
[15:43:21] http://www.w3.org/Provider/Style/URI.html
[15:43:27] Setting up a system to maintain because perhaps someone might follow a link in an old email and *might* be confused by a landing page every other month doesn't seem like a reasonable investment of /any/ time to me.
[15:43:53] And setting up a bot to change *every* link on *every* wiki does sound reasonable?
[15:44:20] elgranscott: Whatever method you use to extract the data can simply transmit to you over ssh rather than to a file which you then copy.
[15:44:36] When we could just have the same effect server-side, and also retain working links from other sources as a bonus?
[15:45:06] valhallasw: In practice, changing links to tools should be a few /template/ changes.
[15:46:38] I could be convinced to set up a rewritemap for a static one-time mapping, but I am *not* going to have those rewritemaps track changes. And it would be by request only, making certain only tools with active maintainers get a map.
[15:47:30] It would not need to track changes -- once it has been mapped to a TL project, it's that project's maintainers' problem.
[15:47:37] or rather responsibility.
[15:48:19] Obviously. I'm having a hard time understanding why you don't see that the incoming link is also the maintainer's responsibility. :-)
[15:48:29] But meh.
[15:48:37] Coren: er, because the TS will die?
[15:49:01] Coren: so how would I get links to toolserver.org/~valhallasw to redirect to whatever page on tool labs?
[15:49:07] valhallasw: Last I checked, most incoming links to the TS had already rotted.
[15:50:06] ....so?
[15:50:25] and you haven't noticed the enormous amount of frustration those dead links result in?
[15:52:02] The correct thing to do then is to salvage any redirects in place on the ts when it shuts down. I'll have a server listen on *.toolserver.org and keep serving the same redirects.
[15:52:43] So all a maintainer needs to do is to point the ts side at the "right" spot on the tl they want it redirected to, and it'll keep working once ts is retired.
[15:53:08] Coren: I'd like to dump the revision table... Any suggestion?
[15:53:44] Coren: and this will be just .htaccess?
[15:54:16] Coren: I thought I would have the same data in the dumps, but I downloaded and transformed the data in stub-meta-history.xml and I don't have all the data I need...
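The scheme Coren sketches at [15:52:02] -- a server answering on *.toolserver.org that keeps replaying a static, one-time mapping -- could look roughly like the following. This is a minimal sketch only: the real service would presumably be an Apache rewritemap rather than a standalone app, only the sulinfo pair is taken from the log at [15:39:32], and the flickr2commons target is hypothetical.

```python
# Sketch of a one-time redirect map served on *.toolserver.org, per the
# discussion above. Entries are illustrative, not a real inventory.
from wsgiref.simple_server import make_server

REDIRECTS = {
    # old toolserver path prefix -> new Tool Labs URL prefix
    "/~quentinv57/sulinfo": "http://tools.wmflabs.org/sulinfo",
    "/~magnus/flickr2commons.php": "http://tools.wmflabs.org/flickr2commons",  # hypothetical target
}

def app(environ, start_response):
    path = environ.get("PATH_INFO", "")
    for old, new in REDIRECTS.items():
        if path.startswith(old):
            # a permanent redirect keeps old inbound links working
            # after the toolserver itself is retired
            start_response("301 Moved Permanently",
                           [("Location", new + path[len(old):])])
            return [b""]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"No Tool Labs mapping for this toolserver.org URL.\n"]

if __name__ == "__main__":
    make_server("", 8080, app).serve_forever()
```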
[15:54:18] valhallasw: Or whatever other process nosy provides in addition to it.
[15:54:27] Sure.
[15:54:56] This way, no links get any more broken than they already are. :-)
[15:56:17] elgranscott: local$ ssh you@tools-login.wmflabs.org mysqldump the_table_you_want_to_dump >localfile
[15:57:01] elgranscott: what does the revision table give you that stub-meta-history doesn't?
[15:57:14] elgranscott: But also: elgranscott: what do you mean rev_len is not in stub-meta-history, it's the "bytes" parameter to the "text" element
[15:57:32] Yeah, that. :-)
[16:02:21] Coren: but when you transform the XML to SQL with mwdumper that data disappears...
[16:02:57] elgranscott: Then, clearly, this is a bug in mwdumper that needs fixin'
[16:03:11] (And it doesn't sound like an overly complicated one at that)
[16:05:20] Coren: So, how can I report that bug?
[16:05:34] elgranscott: if you know Python I can send you a script that processes said dump and pushes it into a database, should be easy to adapt it for your needs
[16:05:46] (and yes, it does contain rev_len)
[16:06:27] elgranscott: I... don't know who maintains mwdumper; but I expect there should be a bugzilla product for it.
[16:06:36] Nettrom: Yes, I know Python. If you can send me the script it would be very useful!
[16:06:49] Nettrom: m311man@gmail.com
[16:07:49] elgranscott: give me five minutes to write a comprehensible email and you'll have it
[16:08:29] Nettrom: Great! Lots of thanks!!!
[16:14:56] Coren: Would I be allowed to perform the ssh command you told me about?
[16:15:03] hi yalls
[16:15:15] salt grain checks seem to be timing out on a self-hosted puppet master
[16:15:28] File "/usr/lib/pymodules/python2.7/salt/crypt.py", line 457, in __authenticate
[16:15:28] time.sleep(self.opts['acceptance_wait_time'])
[16:15:30] any ideas?
[16:16:41] elgranscott: you should have an email shortly
[16:19:05] elgranscott: and pardon my Java-ish Python, I've just recently figured out that PEP8 is something to follow
[16:22:18] elgranscott: Yes, you should, but you'll have to provide the options to mysqldump. :-)
[16:23:10] elgranscott: But that's moot now that you'll have a tool that'll just work with the dump you already have.
[16:23:50] Coren: which options?
[16:25:19] (PS1) Addshore: Make all DataValues?.* report to #wikidata [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/96041
[16:25:32] aude: ^^ :D
[16:25:49] we were missing one, this will also allow us to make any more DV repos without bothering to update it again
[16:26:05] DataValueImplementations is not needed, it seemed
[16:26:12] doesn't match the subdirectory structure
[16:26:35] but your change looks good
[16:26:44] DataValueImplementations is a repo though :P
[16:26:49] i know
[16:26:51] * aude confused
[16:26:58] we should probably watch it until it vanishes or gets renamed or something
[16:27:01] silly indeed ;p
[16:27:57] ok, we can watch
[16:28:05] if it's not needed, i'll request it be removed
[16:28:18] all depends on how we now split stuff [=
[16:28:22] (CR) Aude: [C: 1] Make all DataValues?.* report to #wikidata [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/96041 (owner: Addshore)
[16:28:49] yep
[16:31:32] (CR) Jeroen De Dauw: [C: 1] Make all DataValues?.* report to #wikidata [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/96041 (owner: Addshore)
[16:47:47] Coren: how should I run the SSH command correctly to perform the mysqldump?
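Nettrom's emailed script isn't in the log, but a minimal sketch of the same idea -- streaming stub-meta-history.xml and pulling rev_len out of the "bytes" attribute of each <text> element, as valhallasw and Coren point out above -- might look like this. The export namespace version varies between dumps, so NS is an assumption to check against the file's own xmlns.

```python
# Sketch (not Nettrom's actual script): stream a stub-meta-history dump
# and recover rev_len from the "bytes" attribute of <text>.
import xml.etree.cElementTree as etree

# assumption: adjust to the xmlns declared at the top of your dump
NS = "{http://www.mediawiki.org/xml/export-0.8/}"

def revisions(path):
    for event, elem in etree.iterparse(path):
        if elem.tag != NS + "revision":
            continue
        rev_id = elem.findtext(NS + "id")
        text = elem.find(NS + "text")
        # rev_len is the "bytes" attribute -- present in the stub dump
        # even though mwdumper drops it when converting XML to SQL
        rev_len = text.get("bytes") if text is not None else None
        yield rev_id, rev_len
        elem.clear()  # keep memory flat on a multi-GB dump

if __name__ == "__main__":
    for rev_id, rev_len in revisions("stub-meta-history.xml"):
        print("%s\t%s" % (rev_id, rev_len))
```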
[16:54:43] Coren: I happened to see http://ganglia.wmflabs.org/latest/graph.php?r=week&z=xlarge&c=tools&h=tools-login&v=1&m=s1&jr=&js= (supposed to be replag for s1.labsdb) the other day. Interesting daily spikes.
[16:55:30] anomie: Hm. Looks like someone is running a bot via cron. Tsk, tsk.
[16:57:14] Coren: I don't know how to run ssh with mysqldump to do what you told me...
[16:57:24] I'm also wondering why http://ganglia.wmflabs.org/latest/?r=hour&cs=&ce=&m=load_one&s=by+name&c=tools&h=tools-login&host_regex=&max_graphs=0&tab=m&vn=&sh=1&z=small&hc=4#mg_Replication_Lags_div has graphs for s1, s2, s4, and s5, but not s3, s6, or s7.
[16:58:12] elgranscott: It's moot anyways, you're going to be able to use the dumps.
[16:59:38] anomie: Good question. I'll look into it this afternoon.
[17:00:04] Coren: Do you know if mwdumper is up to date with the latest version of the database?
[17:00:31] elgranscott: No I don't, I've never used dumps.
[17:04:19] elgranscott: Be a little patient, Nettrom is in the process of sending you all you need. :-)
[17:05:28] Coren: yes, Nettrom has sent me his script ;)
[17:14:04] (CR) Legoktm: [C: 2] Make all DataValues?.* report to #wikidata [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/96041 (owner: Addshore)
[17:14:12] cheers legoktm [=
[17:18:31] (CR) Legoktm: "Deployed" [labs/tools/grrrit] - https://gerrit.wikimedia.org/r/96041 (owner: Addshore)
[17:25:16] hello. is there a way we can have a group in gerrit that is automatically populated by the list of people who have access to a specific tool on tool labs?
[17:26:55] Coren: any chance you know? ^
[17:27:50] specifically, people in the 'lolrrit-wm' tool should automatically get +2 on labs/tools/grrrit
[17:46:52] ugh: gerrit is mostly ^d's domain
[17:47:31] manybubbles: yes, but tool labs is mainly Coren's :P
[17:47:47] we've pinged all the appropriate people
[17:47:50] then
[17:47:56] <^d> Gerrit don't know about labs groups and so forth.
[17:47:57] I know for a fact ^d is getting coffee right now
[17:48:02] <^d> Unless it's an ldap group, maybe.
[17:48:10] * ^d walks away and works on that
[17:48:42] ottomata: we're adding our redundancy back now. you can watch the cluster stay in yellow state while it does the replication
[17:50:56] ugh: No. Gerrit barely speaks to LDAP at all, and is completely incapable of using groups. :-(
[17:51:12] ^d: Huh, what?
[17:51:26] Ah. I meant /local/ groups. They're in LDAP, but in a different spot.
[17:51:33] :/
[17:52:10] We had 'unification of groups' on the TODO list, headed by Ryan. I'm pretty sure that's been delayed until next year at best.
[18:00:08] ah
[18:00:10] alright
[18:04:41] <^d> Coren: Yeah, we can use some ldap groups (and we do :))
[18:04:50] <^d> I forget what the search prefix is offhand.
[18:05:20] <^d> groupBase = ou=groups,<%= ldap_base_dn %>
[18:05:22] <^d> :)
[18:05:35] Yeah, that'll work for global groups, not local ones.
[18:05:50] Hence the need for unification beforehand.
[18:05:54] <^d> *nod*
[18:06:02] what do you mean by local groups?
[18:30:40] paravoid: Service groups are local to each project, under a different LDAP OU
[19:36:47] howdy
[19:55:58] mhoover: howdy
[19:56:10] our docs aren't *amazingly* up to date
[19:56:53] mhoover: sign up for an account on wikitech. note that your username will be your git username as well, so if you want your real name, use that
[19:57:33] ok
[19:59:21] mhoover: welcome!
[19:59:29] ok, signed up
[19:59:32] confirmed
[20:00:14] what's your user name?
[20:00:22] mhoover
[20:00:23] and shell account name
[20:00:27] same
[20:00:29] ok
[20:01:41] mhoover: you're going to need to upload your public ssh key to wikitech (via preferences - openstack tab)
[20:01:48] and you'll also need to enable two-factor auth
[20:01:54] also via preferences
[20:02:17] got it
[20:02:21] setting that up
[20:04:39] thanks andrew…hi :)
[20:08:35] ok, let me add you to the ops group
[20:10:42] ok, done
[20:10:49] mhoover: was an NDA in your contract?
[20:11:29] confidentiality statements, but not a separate nda
[20:11:56] well, the confidentiality stuff is an NDA :)
[20:11:57] ok, good
[20:12:50] mhoover: for what it's worth, though, there's barely anything that you're actually not supposed to disclose. Data about our users, root passwords, things like that :)
[20:13:25] of course, yes
[20:15:40] I'll also need to get you set up in production
[20:19:36] sweet, thx man
[20:19:50] i'm just checking out preferences and such
[20:19:56] reading through
[20:21:28] * Ryan_Lane nods
[20:22:09] * Coren waves at mhoover
[20:22:20] * Coren catches up on backscroll.
[20:22:26] mho over waves back
[20:22:36] mho+over
[20:22:38] :)
[20:23:21] I think ima call you mho now. :-)
[20:24:41] hehe
[20:40:25] Hi. I added the shared pywikipediabot folder to my .bash_profile: export PYTHONPATH=/shared/pywikipedia/rewrite:/shared/pywikipedia/rewrite/externals/httplib2:/shared/pywikipedia/rewrite/scripts
[20:40:57] but when I open a python console and type "import wikipedia", I get an ImportError
[20:41:25] I logged off and on after I created .bash_profile
[20:41:25] have you sourced the file?
[20:41:31] giftpflanze, yes
[20:42:15] alkamid: import sys; print sys.path
[20:42:24] does that include the directories you list in PYTHONPATH?
[20:42:28] also
[20:42:37] if it's rewrite, you need to import pywikibot
[20:43:01] ['', '/shared/pywikipedia/rewrite', '/shared/pywikipedia/rewrite/externals/httplib2', '/shared/pywikipedia/rewrite/scripts', ...
[20:43:12] oh, okay
[20:43:20] I used another branch before... I guess
[20:45:12] valhallasw, are there any other differences that I should know about?
[20:45:40] I mean basic differences like this one, the rest I'll find out
[20:45:50] alkamid: there are some small differences, most notably Page('nl', '...') doesn't work anymore -- use Page(Site('nl', 'wikipedia'), '...') instead
[20:46:04] and some functions are deprecated, but you'll get deprecationwarnings for those
[20:47:40] thanks valhallasw
[20:48:10] j #wikimedia-operations
[21:07:40] How do I sort out "ImportError: No module named xmlreader"?
[21:12:43] alkamid: manually? i guess something like "pip install xmlreader"
[21:12:55] puppetized? unsure
[21:13:01] er.. sorry
[21:13:06] I just need to include another path
[21:13:21] rewrite/pywikibot
[21:14:18] alkamid: from pywikibot import xmlreader
[21:14:40] valhallasw, thanks
[21:18:50] I copied a few scripts to my tool folder and did "take scripts/" after "become mytool". Now, when I change the code locally and want to scp it, I get "permission denied"
[21:19:46] I know revision control is the way to go, but I'll set it up later
[21:19:54] alkamid: chmod 664 on the script
[21:20:15] err
[21:20:32] yes, 664 should be right (+w for group)
[21:20:44] maybe you also need g+w on the directory...
[21:20:47] thanks again valhallasw (:
[21:20:55] chmod 664 did the trick
[21:21:48] mutante, alkamid, don't use pip
[21:21:56] (kneejerk!)
[21:25:47] andrewbogott: what's the right thing to suggest for python modules?
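Pulling together the rewrite-branch differences from alkamid's exchange above -- import pywikibot rather than wikipedia, construct pages via a Site object, and get xmlreader from the pywikibot package -- a small sketch, assuming the shared rewrite checkout is on PYTHONPATH as in alkamid's .bash_profile. The page title and dump filename are placeholders.

```python
# Sketch of the compat -> core (rewrite) changes discussed above.
import pywikibot                  # replaces "import wikipedia"
from pywikibot import xmlreader   # replaces the top-level xmlreader module

# Page('nl', '...') no longer works; go through a Site object:
site = pywikibot.Site("nl", "wikipedia")
page = pywikibot.Page(site, "Voorbeeld")   # placeholder title
print(page.title())

# xmlreader still parses XML dumps, just from the new location:
for entry in xmlreader.XmlDump("stub-meta-history.xml").parse():
    print(entry.title)   # dump filename above is a placeholder
    break
```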
[21:26:02] (there is a provider)
[21:26:12] mutante: debian packages, mostly.
[21:26:45] andrewbogott: ok, cool, just confirming it's not -> puppet-pip
[21:26:54] https://github.com/rcrowley/puppet-pip and stuff
[21:27:34] With pip it's hard to know exactly what we're getting… better to use packages that have been vetted by debian or ubuntu folks. Or build our own.
[21:28:07] As the python world gets more and more reliant on pip and venv we might have to figure out a new policy some day… but .deb works pretty well so far as long as folks don't demand bleeding-edge versions.
[21:28:09] andrewbogott: yes, i knew this for production, just wasn't sure about the ongoing labs discussion about those
[21:28:12] ok
[21:29:33] andrewbogott: and as long as the package is actually available
[21:30:08] We should have a go-to .deb-packaging person for python packages ;-)
[21:30:23] Yeah… atm I think that's me, although I don't love it.
[21:30:45] oops, now I have a directory "mydir" that I can't cd into, neither as "alkamid" nor as "alkamidbot" (tool)
[21:30:56] I scp'd it earlier
[21:31:01] alkamid: who is it owned by? ls -ld mydir
[21:31:01] from my local machine
[21:31:44] drw-rw-r-- 2 local-alkamidbot local-alkamidbot 81 Nov 18 08:28 mydir/
[21:32:18] alkamid: as alkamidbot, chmod +x mydir
[21:32:37] if it doesn't have eXecute permissions, you cannot cd into the dir
[21:33:30] uh, my linux is so rusty, I should stop using unity and fall back to the console (;
[21:36:28] alkamid: scp -p to preserve permissions while copying
[21:37:53] another missing module: query
[21:39:29] I used it to send direct queries to the api
[21:39:45] alkamid: from pywikibot.compat import query, but that's deprecated -- use pywikibot.data.api.Request instead
[22:31:11] import datetime
[22:31:11] ImportError: /usr/lib/python2.7/lib-dynload/datetime.so: failed to map segment from shared object: Cannot allocate memory
[22:31:33] (when a script was run with qsub)
[22:33:30] add more memory with -mem 500M
[22:33:42] i normally have my python scripts at 350ish
[22:38:45] thanks legoktm
[23:06:34] mhoover: as an op you should use bastion-restricted.wmflabs.org instead of bastion
[23:06:47] ahhh
[23:06:53] excellent thx
[23:07:24] The existing tampa cluster is running essex. We're hoping to install the very latest (h something?) in ashburn
[23:07:31] …and upgrade as part of the migration.
[23:07:52] Ah, Havana. That's the newest, right?
[23:08:00] andrewbogott: cool. yes, it is
[23:08:35] andrewbogott: pretty sweet, even has docker support built in
[23:08:48] I'm pretty sure you can regard the systems in ashburn as bare metal at this point. We should confirm that no one is using any of them for 'miscellaneous' purposes, then Ryan or I will show you how to make a clean install.
[23:09:57] I'll send an email to the ops list now -- you're subscribed already, right?
[23:10:01] right on sounds cool
[23:10:29] yes, i believe I'm on that as of this morning
[23:10:52] wait, I'm wrong, puppet says tampa is folsom
[23:13:04] not on ops list, btw
[23:13:25] sounds better :) essex is pretty old
[23:13:55] none of the openstack slated servers in eqiad are in use
[23:13:59] and we're using folsom in pmtpa
[23:14:05] we'd like to use havana in eqiad
[23:15:01] Ryan_Lane: what does it mean that there's a virt1008 in puppet but also https://rt.wikimedia.org/Ticket/Display.html?id=4259 ?
[23:15:14] and do you want to try region to region live migration if possible?
[23:15:20] hm. dunno
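For the deprecated compat query module alkamid hits above, valhallasw's suggested replacement is pywikibot.data.api.Request. A hedged sketch of what a direct API query looks like with it -- the specific parameters are an example, not from the log:

```python
# Sketch: a direct API query via pywikibot.data.api.Request instead of
# the deprecated pywikibot.compat.query. Parameters are illustrative.
import pywikibot
from pywikibot.data import api

site = pywikibot.Site("nl", "wikipedia")
req = api.Request(site=site, action="query",
                  prop="info", titles="Voorbeeld")  # placeholder title
data = req.submit()   # decoded JSON response as a dict
print(data["query"]["pages"])
```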
[23:15:25] mhoover: nope :)
[23:15:36] hehee
[23:15:40] is that even possible?
[23:15:44] is this possible nowadays
[23:15:46] oh heh
[23:16:16] as far as I know regions are totally separate openstack installs
[23:16:18] depends on the setup. not sure if they got it done for the havana release, might be in the next
[23:16:49] we probably want to somehow unite keystone across regions
[23:17:04] that's not a simple thing, though
[23:17:21] it's doable with galera
[23:17:52] Ryan_Lane: is virt100x hardware the same as virtx hardware?
[23:17:56] yep
[23:18:12] And we only have virt1000-virt1007?
[23:18:14] That's not enough...
[23:18:18] yeah, it's not enough
[23:18:22] :(
[23:18:25] some hardware was stolen from us
[23:18:28] by analytics
[23:18:30] live migration seems to be available if you are using shared storage. you can do the same thing with ifs, so not sure how comprehensive it is
[23:18:31] it's time they gave it back
[23:18:51] mhoover: well, live migration is doable with or without shared storage
[23:18:57] but not across regions
[23:19:07] since openstack doesn't manage both regions
[23:20:02] so it can't update the databases and such
[23:20:14] right
[23:20:16] you could do cells
[23:20:36] but cells are usually missing a lot of features and they make the networking more of a pain
[23:20:39] Ryan_Lane: How many boxes did they swipe? Just 1008?
[23:20:48] andrewbogott: we stole 1008 for databases
[23:20:58] they have like 10, I think
[23:20:59] yes, I'm sure the networking infra ass pain would negate any benefits
[23:21:01] and they use like 3?
[23:21:22] yeah. cells are mostly a way to scale in a single region
[23:21:30] Ah, good. Because if pmtpa is any guide we need 12 boxes just to safely run what we have now...
[23:21:34] for people with way too many hypervisors
[23:21:49] well, I'm hoping we'll wipe out a number of instances with this move :)
[23:22:03] but yeah, we need at minimum 8
[23:22:08] and ideally 12
[23:22:20] and the network node needs to have bonded ports
[23:55:05] Ryan_Lane: where do i find my non-scratch 2fa token?
[23:55:26] you're using google authenticator, right?
[23:55:33] can't seem to login with the ones i have
[23:55:56] google authenticator will show you your token
[23:56:03] not yet, just trying to get back in to wikimedia
[23:56:20] tools site
[23:56:21] you can't enable two-factor without using some OATH client
[23:56:24] standard login
[23:56:45] not required for a standard login, then?
[23:56:55] not until you enable two-factor
[23:57:23] the login page showing a token field is something unavoidable since MW doesn't have support for challenges
[23:58:04] ahh ok
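As a footnote to the two-factor exchange at the end: the "OATH client" Ryan_Lane mentions (e.g. Google Authenticator) just computes standard RFC 6238 TOTP codes from a shared secret. A sketch of that computation, with a made-up example secret:

```python
# What an OATH/TOTP client like Google Authenticator computes (RFC 6238).
# Sketch only -- SECRET is a made-up example, not a real enrollment key.
import base64, hashlib, hmac, struct, time

SECRET = "JBSWY3DPEHPK3PXP"  # base32 secret normally taken from the enrollment QR code

def totp(secret, digits=6, period=30):
    key = base64.b32decode(secret.upper())
    # counter = number of whole 30-second periods since the Unix epoch
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = bytearray(hmac.new(key, counter, hashlib.sha1).digest())
    # dynamic truncation (RFC 4226): 4 bytes at an offset taken from
    # the low nibble of the final byte
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", bytes(digest[offset:offset + 4]))[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp(SECRET))  # matches what the phone app shows for the same secret
```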