[00:08:00] hexmode: You can't take things word-for-word from wikitechy, we're not there to being that close a copy [00:08:20] /usr/local/apache/common/bin/updateinterwiki [00:08:27] johnduhart: I see that and found that script [00:08:34] but even after running [00:08:40] still no fix :( [00:08:56] What are you tring to fix? [00:09:08] labs [00:09:14] but looking at the script [00:09:20] it might not be in there [00:10:38] I don't get it, what's wrong? are you trying to add a prefix for labs.wm wiki? [00:10:44] andrewbogott: it would also be interesting to store ssh host keys in LDAP, when creating the entry [00:10:55] I don't think that would be terribly easy, though [00:10:59] johnduhart: the bugzilla: prefix isn't working on labs [00:11:12] ah [00:11:15] the interwiki table *was* wrong [00:11:16] since the host key is created after boot, and nova doesn't have access to it [00:11:19] and I fixed it [00:11:24] hexmode: How? [00:11:25] but still, nothing [00:11:27] Putting extra stuff in ldap sort of presumes that any given dns driver will support storing arbitrary extra info. [00:11:36] We could store it in the nova db though. [00:11:48] johnduhart: using sql.php and "UPDATE..." [00:11:50] well, the ssh host key in LDAP is just another kind of DNS record [00:11:58] http://www.ietf.org/rfc/rfc4255.txt [00:12:05] an SSHFP record [00:12:27] hexmode: Well that won't do any good, the table's contents are dropped when the update script is ra [00:12:28] n [00:12:29] so it would work with any kind of DNS server [00:12:35] heh [00:12:45] johnduhart: so what do I need to do? [00:12:49] * hexmode is lost [00:13:06] Not sure, let me look at the table quickly [00:13:27] johnduhart: the wikifarm is using cdb files [00:13:30] hm. I wonder what attribute would store that [00:13:36] at least that is how it is set up [00:13:53] hexmode: That's a cache, it's still stored in the database [00:14:01] right [00:14:32] johnduhart: looking at the script [00:14:39] Ryan_Lane: OK, that seems possible then, in a later version maybe. [00:14:43] yeah [00:14:44] it looks like it copies the interwiki table from enwiki [00:14:50] definitely not this iteration :) [00:14:50] and then updates sql [00:14:56] then the cdb [00:14:57] hexmode: Not percisely [00:15:05] I found that issue, hold on [00:15:08] I'm trying to think of ways to handle our ssh host key issue [00:15:30] right now when an instance is deleted and recreated, everyone's trust list gets screwed up [00:19:33] This is why I didn't want to create a million wikis [00:19:40] Because it takes forever to update [00:19:54] hexmode: make sure you use the protocol-relative cdbs! [00:20:04] or, well, the version of the script that creates them [00:20:19] Ryan_Lane: since I'm not doing it... [00:20:20] Ryan_Lane: We're http: only here anyway [00:20:39] johnduhart: def could use the ability to update only one [00:20:42] it's better to make it look like production [00:20:45] * hexmode sighs [00:20:55] it's like one flag in the script to do it [00:21:12] I removed the flag when I adopted it for here [00:21:18] o.O [00:21:32] * hexmode has been having a ton of fun with labs and deploy, but feels like his bugs are lonely [00:21:56] we run protocol-relative in production. if we don't in labs, we may miss a bug [00:22:02] also, we should likely run https [00:22:03] johnduhart: yeah protocol-relative is relatively un-tested. 
It'd be better to have it [00:22:18] we *can* run https, with the right certificate hack [00:22:25] johnduhart: but this is not something that blocks us [00:22:50] Put it on the queue, there's more important stuff to do than that right now [00:23:14] I'd like to fix some of the issues that have popped up with MW and then get a round TUIT later [00:23:18] hexmode: relatively untested? it's being tested live! :D [00:23:42] Ryan_Lane: relative as in it is < 1year old [00:23:46] heh [00:23:52] I know. I'm messing with you [00:24:59] what the fuck [00:25:05] ? [00:25:09] why the fuck is /mnt/export mounted on /tmp [00:25:18] that's a good question [00:25:36] * johnduhart bangs head on desk [00:26:38] ugh [00:26:49] Need to stop apache so I can unmount this. [00:35:22] PROBLEM HTTP is now: CRITICAL on deployment-web deployment-web output: Connection refused [00:36:18] thanks SO http://stackoverflow.com/questions/40317/force-unmount-of-nfs-mounted-directory [00:38:42] !log deployment-prep Unmounted /mnt/export from /tmp on -web [00:38:43] Logged the message, Master [00:40:52] did maxsem set up abusefilter? [00:40:58] * hexmode goes to look [00:41:40] oh, still down? [00:41:59] Yup, hold on [00:42:05] k [00:42:40] OrenBochman: where do we stand with search? [00:45:43] * hexmode resorts to email [00:48:52] lol snowolf [00:50:22] RECOVERY HTTP is now: OK on deployment-web deployment-web output: HTTP OK: HTTP/1.1 302 Found - 565 bytes in 0.004 second response time [00:52:07] petan: .... [01:08:20] hexmode: Bugzilla links should be good now [01:08:33] Needed to restart memcached for it to take effect [01:10:40] johnduhart: ugh [01:10:51] tyvm, though [01:14:26] andrewbogott: I know you're going to kill me, but can you switch to using novaadmin2? [01:14:43] pdns is configured to use novaadmin with the private password [01:14:44] in puppet [01:14:49] switch from nova-dev2? Or switch the ldap url? [01:14:51] same password [01:15:03] different nova admin user [01:15:08] oh, sure. [01:15:08] hexmode: we have a couple of issues [01:15:09] oh. I need to do some other changes too [01:15:12] one sec [01:15:20] johnduhart: ?? [01:15:25] ? [01:15:36] johnduhart: oh, I meant OrenBochman [01:15:40] OrenBochman: ?? [01:15:48] andrewbogott: ok. done making the changes [01:15:51] hexmode: I'm not sure how big we should make the new instances [01:15:58] uid=novaadmin2,ou=people,dc=wikimedia,dc=org <-? [01:16:02] yep [01:16:06] ok [01:16:08] same privileges, different password [01:16:16] OrenBochman: how big do you think we could start with? [01:16:20] err [01:16:22] sorry [01:16:28] also we need to automate pushing all the global config to the indexer [01:16:29] OrenBochman: and how big will Ryan_Lane let you go ;) [01:16:29] andrewbogott: it has the new password you wanted [01:16:41] the novaadmin user has the original password :) [01:16:55] how big do you need it, and why does it need to be huge? [01:17:06] OrenBochman: global config... can you just get what you need via nfs? [01:17:09] Ryan_Lane: "...cannot be added due to insufficient access rights" [01:17:15] OrenBochman: or is this OAI [01:17:16] ? [01:17:17] hm [01:18:14] andrewbogott: it'll work now [01:19:18] sec [01:19:25] k [01:19:41] andrewbogott: ok. dns searches are working [01:19:47] dig @10.4.0.61 i-0000002d.pmtpa.wmflabs [01:19:52] I'm still getting insufficient access [01:19:57] crap. really? [01:20:00] OrenBochman: should I see if Brion is around or do you need someone else? 
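A minimal sketch of the interwiki fix hexmode and johnduhart worked through above, assuming a standard maintenance-script setup (the wiki name, prefix and target URL here are placeholders; the cache step mirrors the memcached restart that finally made the bugzilla: prefix take):

    # open an SQL prompt against the affected wiki (wiki name is a placeholder)
    php maintenance/sql.php --wiki=labswiki
    #   UPDATE interwiki SET iw_url = 'https://bugzilla.wikimedia.org/$1' WHERE iw_prefix = 'bugzilla';

    # the parsed interwiki map is cached (memcached, plus CDB files on this farm),
    # so the edit only shows up once the cache is dropped or rebuilt:
    sudo service memcached restart
    /usr/local/apache/common/bin/updateinterwiki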
[01:20:28] RAWR [01:20:37] OrenBochman should talk to me about wiki configuration stuff [01:20:51] nfs may be ok [01:21:12] andrewbogott: ok. I got it for sure this time :) [01:21:27] works! [01:21:31] * Ryan_Lane remembers that you can't just copy and paste somethign and expect it to work [01:21:33] also I've been told by petan that OAI is installed and working on the tarkge machines [01:22:38] OrenBochman: have you tested it? also what is tarkge? [01:23:35] johnduhart: where is the updateinterwiki script run from? [01:23:40] which machine? [01:24:09] -web or -test, but peter wants maint stuff run on test [01:24:16] hexmode: I don't know how to test it [01:24:20] Doesn't matter that much, both work [01:24:25] hexmode: or how to install it [01:24:59] OrenBochman: k, so what else can you do? or is this more in petan's realm now? [01:25:13] I only know from the docs that the search side will work as long as the OAI section of the globalconfig is good [01:25:20] k [01:25:39] I *think* Brion knows about that bit and will check with him [01:25:43] see if he can help [01:26:39] that would be great - especialy if the docs were updated - they are in a disgracefull state [01:26:53] I have a couple of issues [01:27:11] k, so I'll get that from Brion... next issue? [01:27:22] the docs state that you have to make the global config by hand for this type of setup [01:27:59] I can try but I've only done one installation of search so far [01:28:00] what about oai? [01:28:24] OrenBochman: there he is, ask him! :) [01:28:30] get him! [01:28:52] we'd like to know how to test if it is working [01:29:14] Ryan_Lane: I'm going to be installing Gluster tomorrow at school :O [01:29:27] johnduhart: it's incredibly easy [01:29:41] Ryan_Lane: I know, I was shocked when I read the doc [01:29:52] brion: we'd like to know how to test if it is working [01:29:55] also I'd like to know how to install it [01:30:08] andrewbogott: http://mailman.powerdns.com/pipermail/pdns-users/2011-March/007547.html [01:30:10] \o/ [01:30:15] damn it [01:30:16] heh [01:30:26] brion: also I'd like to know how to install it [01:30:54] brion: the extention docs are out of sync [01:31:00] dang. [01:31:13] also, SSHFP records aren't supported by the LDAP backend [01:31:14] also they are rather vague [01:31:14] OrenBochman, pop over to say https://en.wikipedia.org/wiki/Special:OAIRepository [01:31:19] it'll give you an HTTP auth prompt [01:31:21] win [01:31:31] since i don't think anybody cares these days i'll just give you the testing credentials ;) [01:31:35] user 'testing' pass 'mctest' [01:31:59] see http://www.openarchives.org/OAI/openarchivesprotocol.html for general protocol documentation [01:32:15] to install locally.... in theory: [01:32:24] the special page asks for a password [01:32:26] make sure you've got OAI extension dir in place [01:32:30] ^ see the user/pass above [01:32:55] and do the usual require "$IP/extensions/OAI/OAIRepo.php"; [01:32:58] oh well. we'll stick with the LDAP backend for now [01:33:00] you'll only need the repository half [01:33:15] later we can write a file or database based backend [01:33:19] run maintenance/updates.php to make sure it installs its tables... which i think should work [01:34:05] much, much later :) [01:34:15] as pages get edited/created/deleted, it'll internally record things into its table [01:34:24] ok [01:34:27] and those records can get read out through the Special:OAIRepository iface [01:34:43] does it store the changes or just which pages are dirty ? 
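Pulling brion's install recipe together as a short sketch (paths assume a stock MediaWiki checkout; the maintenance script is update.php in a current tree, which is presumably what was meant by updates.php):

    cd /path/to/mediawiki             # the wiki root, $IP; placeholder path
    ls extensions/OAI/OAIRepo.php     # make sure the extension dir is in place

    # in LocalSettings.php, enable the repository half:
    #   require( "$IP/extensions/OAI/OAIRepo.php" );

    # let the extension create its tables
    php maintenance/update.php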
[01:35:17] iirc it records the page id (?), possibly a rev id, and a created/edited/deleted state flag [01:35:39] then the interface slurps out current page content [01:35:44] at request time [01:36:01] it's meant to give you current versions of stuff, rather than to show you every individual change [01:36:21] (eg, potentially multiple changes since your last query will be "rolled up" into one, and you just download the entire page text as of the last change) [01:36:44] or if it's deleted, you get a marker indicating the page was deleted [01:36:46] ok [01:37:17] the docs you pointed me too is interesting but too much information right now [01:37:18] it's relatively straightforward, but doesn't always map to what folks want :) for search index updates it's good enough... as long as you're working with source [01:37:35] all you probably need is the 'ListRecords' verb [01:37:46] ok [01:39:05] metadataPrefix too [01:41:21] ok great I can test it! [01:41:39] yay :D [01:41:44] brion:thanks [01:41:59] np [01:42:48] Ryan_Lane: any progress about console access to the production search servers ? [01:43:19] OrenBochman: I asked. consensus was that we didn't want to give out new shell access [01:43:36] we'll likely take the current config and add it to the git repo [01:43:45] even if it isn't actually used by puppet [01:43:52] and walk you through things [01:44:15] ok [01:45:30] Ryan_Lane: there are scripts running things there which are undocumented [01:45:36] yes [01:45:37] I know [01:46:28] so why not zip them up to somewhere [01:47:04] and also provide a directory layout of how things are orgenised there [01:47:22] and how are nfs mappings are set up [01:47:26] we need to sanitize that stuff, likely [01:48:18] I don't realy care about that stuff so much since I'm going for a completely different architecture [01:48:35] well, for what's needed in deployment-prep we need it [01:49:49] I agree - but perhaps peterb should be the dude on point - I'm just guessing what should be happening - he can go back and forth [01:50:05] and check what is going on [01:51:11] I sent rainman a bunch of questions about to over view of the production but he has not gotten back to me so far... [01:51:25] Ryan_Lane: So, what is the default backend for pdns if not ldap? [01:53:04] I help out as much as I can. [02:01:36] I was hoping that the search-test would allow testing - but the unit test don't work on that machine either [02:03:34] johnduhart: around? I thought I had enough info from you to update interwiki, but apparently not [02:04:04] I'm going to have far less time from Monday - I was hoping to run test against the code I wrote in my vacation. [02:05:19] andrewbogott: hm. the pdns backend. I dunno what it looks like, though [02:05:26] in production we use the bind backend [02:07:40] OrenBochman: Is there anything I can do to help you get your code tested? Chase down rainman or something? [02:08:50] I'm trying to figure out what's wrong - most of the test run on my windows machine both before and after my chenges [02:09:56] I'm not very good with subversion and I did not want to mess things up with untested code. [02:22:56] hexmode: what's up? [02:24:11] Ryan_Lane: Could I get added to the testlabs project? [02:24:20] johnduhart: I was trying to update the mw prefix and couldn't ... thought I had done everything -- even restarted memcached -- but no [02:24:26] there's really nothing going on in there, why do you need access? 
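A sketch of how the repository could be queried once it is up, using the verbs brion mentions and the testing credentials above (the metadataPrefix value is a guess; ListMetadataFormats, a standard OAI-PMH verb, reports what the extension actually supports):

    BASE='https://en.wikipedia.org/wiki/Special:OAIRepository'

    # which metadata formats does the repository offer?
    curl -u testing:mctest "$BASE?verb=ListMetadataFormats"

    # pull recent changes; 'mediawiki' is assumed here, substitute a prefix
    # reported by the call above
    curl -u testing:mctest "$BASE?verb=ListRecords&metadataPrefix=mediawiki"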
[02:24:56] nothing was finished being configured [02:24:57] Ryan_Lane: After this I wanted to work on that, or is it ops only [02:25:06] ahhhhh. ok [02:25:20] hexmode: Update the mw prefix? [02:25:23] I was thinking we could likely just treat deployment-prep as if it was that [02:25:37] johnduhart: interwiki prefix for mw [02:25:43] or is there a reason not to do so? [02:26:06] Ryan_Lane: deployment-prep is not puppetized. At all ;) [02:26:17] it should be ;) [02:26:25] that's the next logical step [02:26:25] Think of it as a first pass to getting testlabs working [02:26:37] ah. ok [02:26:49] I need to move the NFS home directories out of that project first [02:27:26] and I was thinking of moving them to gluster. [02:27:41] Ryan_Lane: Also question, is the whole "developers can create wikis to test stuff yadda yadda" seperate from testlabs? [02:27:42] so, we may want to wait until gluster is done. [02:27:47] Ryan_Lane: sure [02:27:58] no. the idea is that would be testlabs [02:28:14] there shouldn't be a need to have root on the instances to do so [02:28:59] to create new wikis, that is [02:31:32] Alright, so testlabs is to allow developers to create instances of MW to test and play with with things, while having conditions similar to production. Is there a focus anywhere of creating a mirror of production, like testwiki or prototype on a large scale? [02:32:03] well, that's also testlabs [02:32:10] there's no reason it can't be both :) [02:33:27] ok. internets are likely going away for me in a minute [02:34:22] the difficult part for testlabs will be the proxy configuration [02:34:38] or the apache configuration, I guess [02:34:49] the proxy config will likely be fairly straightforward [02:35:05] ugh desktop crashed [02:35:09] :D [02:36:00] The thing I was concerned with about both coexisting is the infastructure for the WMF mirror config and the developer wiki creation will be vastly different [02:36:18] why would it be different? [02:36:57] the only major difference would be that the apaches would have multiple installs [02:37:04] of mediawiki [02:37:48] we could have a deployment host [02:37:56] Ryan_Lane: One is several hundred wikis sharing the same codebase and using one configuration chain, the other is several different versions of MW, different installs, each needing a seperate config [02:38:23] yeah. I don't think that's a problem [02:38:54] each differently installed wiki can have an apache file dropped in place [02:39:05] pointing at the install location for the wiki [04:59:46] looks like hexmode decided to release the flood :) [05:00:07] ryan's internet sucks [05:00:11] 12 05:00:07 < jeremyb> ryan's internet sucks [05:00:16] jeremyb: wikitech-l isn't exactly a flood [05:00:19] welcome :) [05:00:35] I'm on a mifi, and the battery died [05:00:37] hexmode: well not slashdot :P [05:00:54] jeremyb: oo! [05:01:06] I could post this on reddit! [05:01:11] meh [05:06:50] bye again [05:24:32] huh? [05:25:28] wondering the same [05:28:00] hexmode: Wait did we ever fix the mw interwiki? [05:34:47] this deployment cluster looks really great guys [05:34:49] good job [05:34:55] Thanks [05:35:19] Config for most of it is here https://github.com/johnduhart/deploymentprep-conf [05:35:58] in github? 
:D [05:36:27] Yeah, there's a git repo with all the config in it and I push it when I can :) [05:36:33] heh [05:36:51] we could have made a gerrit repo for it too [05:37:15] I guess I need to make a gerrit manager [05:37:21] that'll let people create their own repos [05:37:30] I guess so [05:37:39] * johnduhart works on Math extension [05:37:42] heh [05:37:49] have fun with that one [05:37:51] it's a PITA [05:38:08] I'm sure we have the requirements puppetized [05:38:36] I'm pretty sure I have them all installed something probably just has the wrong permissions [06:05:01] Ryan_Lane: chown: changing ownership of `upload': Remote I/O error [06:05:03] Ideas? [06:05:11] where at? [06:05:20] /mnt/upload on -web [06:05:27] It's an NFS mount share [06:05:30] gimme a sec [06:05:36] sure, thanks [06:06:04] That's the math problem, the rename() returns false because it can't change owner, even though the file made it across [06:06:13] ah [06:06:49] whats the specific file? [06:06:56] and which user is trying to write it? [06:06:58] wow [06:07:03] What? [06:07:07] the ownership is *really* fucked up [06:07:11] yup [06:07:12] did someone use NFS4? [06:07:19] uh [06:07:24] what's the nfs server? [06:07:24] maybe... [06:07:32] deployment-nfs-memc [06:07:45] nfs *and* memcache? :D [06:07:53] This was a "there's no one else around and I want to get this done, it can get fixed later" [06:08:01] heh [06:08:04] Ryan_Lane: heh not much choice :D [06:08:07] my* [06:08:11] thats' funny [06:08:29] normally I'd say, sure stick memcache most places [06:08:30] The permissions are fine on -nfs-memc [06:08:56] ,fsid=0 [06:09:09] not needed for 3 [06:09:12] AGAIN I DONT KNOW WHAT IM DOING [06:09:16] :) [06:09:20] :D [06:09:21] I know, I'm showing you [06:09:29] thanks [06:09:41] I don't think insecure is needed either, but let me make sure [06:10:11] yeah, that can be removed too [06:10:24] usually you'd want this: rw,no_root_squash,no_subtree_check [06:10:37] I think sync is the default [06:11:28] ah. maybe sync is *not* default [06:12:37] seems sync is a sane option there [06:14:16] no_subtree_check isn't needed here [06:14:27] this is when a file is renamed when a client has it open [06:14:33] no worries about that here [06:14:41] ah [06:14:42] so, I'm gonna remove fsid=0 [06:14:49] cool [06:14:50] that's an NFS4 option [06:15:10] lemme see about insecure [06:16:01] secure/insecure are which ports are used [06:16:11] secure says ports below 1024 will be used [06:16:19] no need for insecure here either [06:16:32] I'll leave sync [06:16:40] it helps ensure there isn't data loss [06:16:48] it's going to hurt performance, though [06:17:05] ugh [06:17:07] right [06:17:16] exportfs: /etc/exports [2]: Neither 'subtree_check' or 'no_subtree_check' specified for export "10.4.0.0/24:/mnt/export/". [06:17:16] Assuming default behaviour ('no_subtree_check'). [06:17:16] NOTE: this default has changed since nfs-utils version 1.0.x [06:17:23] adding no_subtree_check back on [06:17:54] ok. now for the client [06:18:01] ok [06:18:08] how many clients are mounting it? [06:18:16] just the one? [06:18:22] just -web [06:18:25] ok [06:18:53] how is it being mounted? [06:19:10] I ran mount [06:19:16] ah. by hand [06:19:22] * johnduhart nods [06:19:25] deployment-nfs-memc:/ on /mnt/upload type nfs4 (rw,clientaddr=10.4.0.37,addr=10.4.0.58) [06:19:27] too cool for fstab [06:19:31] heh [06:19:41] so, notice it's running nfs4 [06:19:46] security works different with nfs4 [06:20:06] oh? 
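What the cleaned-up export on deployment-nfs-memc would look like after the changes Ryan talks through, a sketch only: drop the NFSv4-only fsid=0, drop insecure, keep sync, and spell out no_subtree_check (which ends up back on once exportfs warns about it below):

    # /etc/exports on deployment-nfs-memc
    /mnt/export  10.4.0.0/24(rw,no_root_squash,no_subtree_check,sync)

    # re-export without restarting the NFS server
    sudo exportfs -ra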
[06:20:11] eventhough it's using auth=sys, the domain doesn't match [06:20:27] auth=sys means the server hands control over to the client [06:20:35] nfs stands for no fucking security [06:20:41] lmao [06:20:56] if the server trusts the client's IP, the client is the one who had full control [06:21:19] but NFS4 also does server mapping of users and groups by domain [06:21:22] nfs3 does not [06:23:09] so. let's remount this [06:23:18] mount it as nfs rather than as nfs4 [06:23:20] Sure [06:23:22] let's also make fstab mounts [06:23:26] heh [06:23:34] Should I stop apache for a sec? [06:23:37] there's a way to make this work for nfs4, but I can't find the docs [06:23:46] you'll have to, if it's accessing files [06:23:56] Yup, stopped [06:23:58] otherwise you won't be able to unmount [06:24:41] so, [06:24:46] you set the domain for NFS4 via /etc/idmapd.conf [06:25:26] it needs to be set on client and server, and the idmapd service must be running (for NFS4 [06:25:27] ) [06:25:35] it can be ignored for nfs3 [06:25:40] nfs4 is slightly faster [06:25:41] yay less work [06:26:05] it seems odd that you are mounting / [06:26:21] fs=0 was doing that [06:26:25] ah. right [06:26:34] nfs4 has a virtual filesystem [06:27:36] oh, were you going to unmount? [06:27:54] I thought you were? [06:27:58] heh [06:28:00] I will [06:28:00] I can do it [06:28:02] oh [06:28:03] ok [06:28:05] you do it [06:28:22] PROBLEM HTTP is now: CRITICAL on deployment-web deployment-web output: Connection refused [06:28:26] /usr/local/apache is fine [06:28:32] that's kind of a weird mount [06:28:35] what's it for? [06:28:43] ptan did that one :p [06:28:49] All the config and site is in there [06:30:09] root@i-000000cf:/etc# mount -a [06:30:10] mount.nfs: mount point /mnt/upload does not exist [06:30:14] how was that there before? [06:30:26] Heh [06:30:32] That's why I had the fsid=0 thing [06:30:41] I couldn't mount otherwise [06:30:48] ah. crap [06:31:19] heh [06:31:23] mountpoint is busy [06:31:26] for /usr/local/apache [06:31:30] oh well [06:31:33] it has an fstab mount [06:31:43] so, check out the fstab for how I did it [06:31:59] I actually don't like using the fstab [06:32:01] I see [06:32:03] oh? [06:32:04] I prefer the automounter [06:32:08] but, it's a lot more work [06:32:20] it's possible to use the automounter + LDAP, as well [06:32:27] that's how the homedirs are mounted [06:32:50] ok. things should work now [06:33:07] helpful quick overview of nfs? :) [06:33:23] Yep:) [06:33:37] Don't see /mnt/upload yet though [06:34:15] no? [06:34:22] it's there [06:34:37] and has proper ownership too [06:34:42] oh yup [06:34:45] let' [06:34:51] let's see if Math works [06:35:40] Okay so now I get no output [06:35:45] thanks Math extension [06:36:12] :D [06:36:54] pfffff [06:36:59] I never turned apache back on [06:37:00] oops [06:37:04] :D [06:37:42] It works :D http://test.wikimedia.deployment.wmflabs.org/wiki/Math [06:37:47] nice [06:38:22] RECOVERY HTTP is now: OK on deployment-web deployment-web output: HTTP OK: HTTP/1.1 302 Found - 565 bytes in 0.007 second response time [06:40:47] heh. the sitenotice is funny :) [06:41:48] Ryan_Lane: This one was better http://meta.wikimedia.deployment.wmflabs.org/w/index.php?title=Special:NoticeTemplate/view&template=PersonalAppeal [06:41:50] ;) [06:42:21] :D [06:47:04] !log deployment-prep modified export options on deployment-nfs-memc; removed nfs4 specific options, and removed other options not necessary for our environment. 
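Roughly what the client side of that remount looks like on deployment-web, assuming the export path above (the exact fstab options are an assumption; the steps mirror the conversation):

    sudo service apache2 stop        # nothing may hold files open on the mount
    sudo umount /mnt/upload
    sudo mkdir -p /mnt/upload        # mount -a complained the mountpoint was missing

    # /etc/fstab: plain NFSv3 against the real export path instead of the v4 virtual root
    #   deployment-nfs-memc:/mnt/export  /mnt/upload  nfs  rw,vers=3  0  0

    sudo mount -a
    sudo service apache2 start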
[06:47:05] Logged the message, Master [06:47:39] !log deployment-prep remounted /mnt/upload on deployment-web as nfs rather than nfs4 [06:47:40] Logged the message, Master [06:48:04] !log deployment-prep added nfs mounts to the fstab for deployment-web [06:48:05] Logged the message, Master [06:48:18] I should follow the practices I like others to follow too :D [06:50:00] !log openstack changed my mind on shared nature of nova-ldap1. It's for use by nova-dev1 and nova-dev2. If another is needed, please create one similar to nova-ldap1. [06:50:01] Logged the message, Master [06:51:57] heh. I added the recent changes feed for the Nova Resources namespace to my feeds [06:52:08] a good way of catching up with what's going on [06:52:44] not a bad idea [06:57:50] huh. google+ hangouts have screen sharing now? [06:58:02] that's pretty cool [06:58:05] yeah [06:58:29] google+ still is a barren wasteland when it comes to status updates, but they are adding some pretty great stuff in hangouts [06:58:41] apparently it'll let you record the hangout too [06:59:07] this may actually be a good way to record and let people participate in hackathons [06:59:12] <^demon|zzz> I used a hangout for the first time today. [06:59:24] <^demon|zzz> The constant video snapping from speaker to speaker was horribly obnoxious. [06:59:35] why are we not using this at work, rather than skype? [06:59:39] sleep chatting huh? [07:00:01] heh [07:00:10] <^demon|zzz> Can't sleep. [07:00:20] <^demon|zzz> Damn roommate is playing COD super loud at 2am. [07:00:31] <^demon|zzz> So I've got a house rumbling with explosions right now [07:00:48] haha cod [07:01:05] hm. I don't see how to record... [07:02:49] <^demon|zzz> So yeah, we used a hangout today for the CR triage. [07:02:52] <^demon|zzz> I wasn't a huge fan. [07:03:08] no? [07:03:14] skype was still better? [07:03:21] <^demon|zzz> I don't use skype either ;-) [07:03:29] <^demon|zzz> I prefer x2003 from my SIP. [07:03:31] oh. you usually just call in? [07:04:31] <^demon|zzz> Although using G+ was useful because we were able to bring Markus in today without him having to make an int'l call. [07:04:35] <^demon|zzz> I suppose that was one upside :) [07:05:01] heh [07:05:43] I kind of like being able to see people [07:06:08] visual queues make it easier to not interrupt people [07:06:22] <^demon|zzz> Video also enforces the pants rule ;-) [07:06:39] indeed [07:08:16] Ryan_Lane, one more thing about LiWa3 .. that bot is going to pull an awful lot of data from the 'pedia's (2 parsed revids per diff, 1 parsed revid for a new page) .. I can do some caching, but that will be pretty futile for most of the data. I hope that is not a problem with bandwidth [07:08:45] it won't be [07:08:50] this is in the datacenter [07:09:12] OK [07:09:17] the amount of traffic it'll generate is a drop in the bucket :) [07:09:20] :-D [07:09:43] I guess it will generate traffic similar to cluebot, but I guess more [07:09:54] if we were hosted somewhere where we paid by the GB I'd worry some :) [07:10:47] I think that is on of the limits with Versageeks box .. bandwidth (not the total size of the data) [07:12:53] ah [07:13:05] we don't even have to worry about transit here :) [07:13:54] <^demon|zzz> Mmk, I don't hear gunfire or grenades exploding downstairs anymore. Time for attempt #2 at sleep. [07:13:58] <^demon|zzz> Night folks. [07:14:06] night [07:16:36] I thought just to mention it .. watching 772 wikis at 407 edits per minute (in total), of which >80% gets parsed - 340 per minute) .. 
that is probably ~600 parsed wikipages per minute .. [07:24:50] yeah. that's a decent amount [07:24:59] should be fun to see how it works in the environment [07:25:07] may be kind of slow till we add the new hardwar [07:25:11] *hardware [07:30:06] 600 per minute is 10 per second .. average pagesize will be .. 250k ?? That is 2.5 Mb/sec .. not th�t much ... [07:33:49] well, by slow, I mean maybe the processing [07:34:04] right now there are 60 instances on three pieces of hardware [07:34:30] beefy hardware, mind you [07:35:01] we plan on having 16 hardware nodes (even beefier than the ones we have now) total [07:35:10] 8 in eqiad and 8 in pmtpa [07:36:05] We'll see quick enough when it runs [07:36:10] yep [07:38:27] please ping me here when you have all the accounts set up, I'll have a look around then and then ping someone of the bots department [07:38:41] I thought I made the account for you [07:38:48] did you do your initial log in? [07:39:20] !inital-login is https://labsconsole.wikimedia.org/wiki/Access#Initial_log_in [07:39:20] Key was added! [07:39:30] !initial-login | Beetstra [07:39:42] wm-bot: -_- [07:39:48] bah [07:39:52] I mispelled it :) [07:39:58] !inital-login del [07:39:59] Successfully removed inital-login [07:40:11] !initial-login is https://labsconsole.wikimedia.org/wiki/Access#Initial_log_in [07:40:11] Key was added! [07:42:08] !account-questions is I need the following info from you: 1. Your preferred wiki user name. This will also be your git username, so if you'd prefer this to be your real name, then provide your real name. 2. Your SVN account name, or your preferred shell account name, if you do not have SVN access. 3. Your preferred email address. [07:42:08] Key was added! [07:42:13] !account-questions [07:42:13] I need the following info from you: 1. Your preferred wiki user name. This will also be your git username, so if you'd prefer this to be your real name, then provide your real name. 2. Your SVN account name, or your preferred shell account name, if you do not have SVN access. 3. Your preferred email address. [07:42:18] \o/ [07:42:32] I have to ask that so often :D [07:46:25] you've got that data :-) [07:46:31] yeah [07:46:36] did you do the initial login? [07:46:42] !initial-login | Beetstra [07:46:42] Beetstra: https://labsconsole.wikimedia.org/wiki/Access#Initial_log_in [07:48:10] 01/12/2012 - 07:48:09 - Creating a home directory for beetstra at /export/home/bastion/beetstra [07:48:29] Yep [07:48:34] :) [07:48:36] Uploading public SSH keys [07:48:57] yep. the bot lets me know when you've completed things [07:49:09] 01/12/2012 - 07:49:08 - Updating keys for beetstra [07:49:56] !project Bots [07:49:57] https://labsconsole.wikimedia.org/wiki/Nova_Resource:Bots [07:50:11] you'll need to talk to the members of that project for access [07:50:22] to that specific project, anyway [07:50:25] they'll add you in [07:50:45] I always defer to project owners, that way you guys have a chance to talk about how things get set up [07:51:14] notice that there is documentation on that page, you should update the documentation with your setup, when it's done [07:51:24] also, as you are doing things, you should be logging actions [07:51:26] !log [07:51:35] !log help [07:51:36] Message missing. Nothing logged. 
[07:51:38] bah [07:51:42] !logging [07:51:42] To log a message, use the following format: !log [07:51:48] there we go [07:52:01] that'll show up in the server admin log, on the project page [07:52:10] and will also show up on the combined server admin log [07:52:13] !SAL [07:52:25] !SAL is https://labsconsole.wikimedia.org/wiki/Server_Admin_Log [07:52:25] Key was added! [07:53:12] !sal alias SAL [07:53:12] Successfully created [07:56:33] OK, I'll talk to them about it, cheers for this! [07:56:39] yw [08:01:02] In which timezone are you, Ryan_Lane? [08:01:10] PST [08:02:01] that is .. 11 hours difference .. OK. [08:02:40] ah. that makes things easy enough, schedule-wise [08:02:58] at least, if with PST you mean UTC-8 - Pacific Standard Time [08:03:06] yep [08:03:18] though, hopefully, you won't need me too often [08:03:33] the idea behind this is that the community can do basically everything [08:03:49] so you'll likely be interacting more with each other than me [08:03:51] I am used to that .. when I was arranging my visa we had 3 days overlap in workweek with my future work .. here I have weekend on Thursday/Friday, in NL I had weekend on Saturday/Sunday .. [08:05:09] Just for account-thingies, makes it easier to know - I'd like some other 'collegues' in the anti-spam field to have access to the db's at some point [08:05:42] yeah. that's fine [08:05:58] we don't have much for restrictions on accounts [08:06:10] just the basic "don't be a dick" rules [08:06:30] like hacking, hosting wares/pirated stuff, etc [08:07:09] using it as a proxy, because I can't access megavideo and porn sites in this country :-p ?? [08:07:18] also not ok :) [08:07:23] darn [08:07:24] heh [08:07:30] :-D [08:08:46] Makes looking for porn-spam easier, though .. you click a link, it is blocked, and you know it is spam [08:09:07] hahaha [09:36:35] Beetstra: You want access to Bots? [09:38:07] 01/12/2012 - 09:38:07 - Creating a home directory for beetstra at /export/home/bots/beetstra [09:38:21] methecooldude, yes please [09:38:49] Done [09:39:07] 01/12/2012 - 09:39:07 - Updating keys for beetstra [09:39:52] !log bots Given access to Beetstra [09:39:53] Logged the message, Master [09:40:07] Urm, I wonder [09:40:12] And MySQL? [09:40:52] Beetstra: I'm not sure how to give you access to that [09:41:11] !log bots Testing [[User:Rich Smith|userpage links]] [09:41:12] Logged the message, Master [09:42:28] How do I 'upload' my bots? (don't have svn) [09:42:39] Beetstra: Use an SCP client? [09:43:17] sFTP? [09:43:28] (what is SCP) [09:44:26] Ah, OK, sFTP would be an SCP [09:45:35] the bots need the MySQL to run, they are working together here and there [09:47:40] What are the data for host name and port for sFTP and shell access? [11:02:58] Beetstra: https://labsconsole.wikimedia.org/wiki/Access [11:12:20] Beetstra: it's now 'git' instead of svn [11:13:05] Beetstra: you can also find a section about setting that up on the labsconsole wiki [13:38:57] johnduhart / mutante - I can't get in (Putty from windows) - "No supported authentication methods left to try!" [13:39:07] Probably keys not correct? [13:40:15] For me it worked with plain putty, just added the key entered the host name... htat's it [13:40:48] Just to be sure .. added the key where? [13:41:25] On labs using the wiki and on putty in SSH -> Auth [13:42:48] Beetstra: Are you using a new key pair created with puttygen or did you use anoter program to generate the pair? 
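For the record, the usual cause of that PuTTY error with an uploaded PuTTYgen key is using the file written by "Save public key" (SSH2/RFC 4716 format) instead of the one-line OpenSSH text shown in the PuTTYgen window, which is where the conversation below ends up. A quick way to check and convert such a file (filename is a placeholder):

    # an RFC 4716 export starts with a BEGIN line instead of "ssh-rsa ..."
    head -1 putty-key.pub

    # convert it to the one-line OpenSSH form that labsconsole expects
    ssh-keygen -i -f putty-key.pub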
[13:43:19] puttygen, long long time ago created [13:43:55] mhm, strange [13:44:58] using putty generated keys on GNU ssh (and other way round) can cause trouble [13:45:15] but that's no issue [13:53:19] also a fresh key is rejected [13:54:57] Beetstra: mhm [13:55:48] Did you use the "public key for OpenSSH" from the program or the one give as file? [14:00:40] Beetstra, what host are you using? [14:12:00] A windows host [14:12:11] Public key, saved [14:13:12] ah, that's the problem, I think [14:13:27] copy the one out of the puppy gen interface and it'll work [14:19:32] PROBLEM Free ram is now: CRITICAL on deployment-sql deployment-sql output: Critical: 4% free memory [14:20:29] hmm .. still not [14:21:40] weird [14:29:32] RECOVERY Free ram is now: OK on deployment-sql deployment-sql output: OK: 24% free memory [14:39:05] OK, I go to https://gerrit.wikimedia.org/r/#settings,ssh-keys .. put in the key that I generate .. save the private key .. try to login .. [14:48:44] Grrr .. [14:55:07] 01/12/2012 - 14:55:07 - Updating keys for beetstra [14:55:09] 01/12/2012 - 14:55:09 - Updating keys for beetstra [14:58:08] 01/12/2012 - 14:58:07 - Updating keys for beetstra [14:58:10] 01/12/2012 - 14:58:09 - Updating keys for beetstra [15:12:31] no way .. [15:23:30] johnduhart: petan: hexmode: I saw some recent Labs emails about Roan? He's now in the air and will be unavailable for at least the next 30 hours [15:35:07] 01/12/2012 - 15:35:06 - Updating keys for hashar [15:35:09] 01/12/2012 - 15:35:09 - Updating keys for hashar [15:35:16] ... [15:35:23] those bots are worth than facebook [15:36:17] mutante: do you know anything about the labs auth system? [15:38:36] * hexmode looks around for Orem [15:38:43] will email him again [15:41:22] hashar: not that much, what do you want to know? [15:41:31] johnduhart [15:41:58] mutante: well I can't connect on my VM. But I will poke Ryan about it later tonight [15:42:11] why you told Ryan that I mounted it to /usr/loca/apache? you chose that path... + we use that mount on 3 hosts not on 1 + running maintenance scripts on -web is a bad idea, actually [15:42:17] johnduhart: ^ [15:42:47] mutante: don't want to make you loose your time :b [15:43:39] and yes I should probably not forget umount /tmp, that's true, did it break anything? [15:44:20] hashar: can"t connect = network wise? (or just asks for password, and key auth fails) [15:45:01] mutante .. if that is the case, I seem to have something like the last one (except, it does not even ask for a pw) [15:45:16] * Beetstra may need a walkthrough here .. I must be doing something wrong [15:45:58] mutante: I can connect on the bastion with my ssh key. [15:46:08] and have it forwarded to my VM but it get rejected [15:46:23] Ryan did confirm that the key was indeed correct in the VM authorized_key file :b [15:46:29] anyway now I get timeouts huhu [15:46:53] petan|work: Yeah I know [15:47:01] The settings on the mount were wonky [15:47:48] hm... which settings? on nfs server? [15:48:00] For the mount on -web [15:48:03] btw reason why we have memcached on same server is that nfs is a temporary server [15:48:24] so it's going to be tuned off and I don't want to install another instance then [15:48:48] which mount on -web there are supposed to be 3 mounts [15:48:52] sure [15:48:59] ./mnt /usr/local and /backup [15:49:24] this should be on dbdump btw I want to purge -test [15:49:32] we don't need it, or do we? 
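A few quick checks for the key-rejected-behind-the-bastion case hashar is debugging, sketched with placeholder hostnames (the permissions are what mutante is hinting at; sshd silently ignores an authorized_keys file it considers too open):

    # on the instance
    chmod 700 ~ ~/.ssh
    chmod 600 ~/.ssh/authorized_keys

    # from the workstation: forward the agent through the bastion and let
    # the client explain which keys it offered
    ssh -A bastion.wmflabs.org
    ssh -v my-instance.pmtpa.wmflabs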
[15:49:38] purge / delete [15:49:41] don't need it [15:49:41] whatever [15:50:19] ok, I hope we copied all confs we need, btw dbdump should be used for maintenance because of load [15:50:25] uh huh [15:50:30] Beetstra: so, you got a ssh key. and you load into an agent? (which OS do you use to connect, btw) [15:50:48] if you start it on -web the phpcli itself will generate extra cpu load [15:50:52] and eat a lot of ram [15:50:54] hashar: using -A to forward? [15:51:00] mutante: yeah [15:51:01] that's why I wanted to use another instance for it [15:51:13] I am using Windows XP / Putty to connect, uploaded the public key, connect, type username and then it says that it has no authentication methods left to try [15:51:23] hashar: it might be the permissions on the authorized_key file.. [15:51:23] Somewhere a setting wrong, I expect [15:51:37] hashar: did puppet put it there or a human? [15:51:40] johnduhart: I thought you knew that, but reading logs, wasn't sure [15:51:42] mutante: I even generated a new key on the bastion and added it theought the labsconsole interface [15:51:53] mutante: I am almost sure it was put by puppet [15:52:07] Beetstra: if you use putty, do you also have pagent.exe? [15:52:16] Beetstra: eh.. or pageant.exe? [15:52:22] yes [15:52:58] Beetstra: i would suggest you use that, it becomes a tray icon, and load your key there.. in putty itself you dont have to care about the key settings then [15:53:19] johnduhart: re lot of wikis, it's not so hard to make update iwlinks only for a limited list of wikis also why is it problem that it run for a long time? when you need to debug it, update only test wiki, once it's working run update for all, on background... [15:53:27] petan|work: Do test on testwikis http://labs.wikimedia.deployment.wmflabs.org/wiki/InterwikI_test [15:53:40] ok ok [15:54:02] Beetstra: you are trying to connect to the bastion host, right? (not directly to your vm) [15:54:28] bastion.wmflabs.org port 22 [15:54:32] johnduhart: I was testing a interwiki I created for labs wiki, so I couldn't use test wiki [15:54:52] but I could choose a better page, I was like, people probably wouldn't care on that wiki [15:54:53] petan|work: How did you create the interwiki? [15:54:54] Thanks, that does it! [15:55:04] johnduhart: I did it in sql, because I was testing it [15:55:10] I know it's gonna be removed [15:55:22] that's actually good because it doesn't work [15:55:37] Also you're not committing stuff, I see a dirty working copy [15:55:43] huh [15:55:45] really? [15:56:06] oh I did it just now... [15:56:13] was fixing the apache mins ago [15:56:52] how you fixed a math? [15:57:09] Permissions were wonky for /mnt/upload [15:57:15] ah [15:57:22] that caused parser error huh? [15:58:02] it should debug better then... I would expect something like "can't write to /mnt/upload instead of unable to parse error" [15:58:05] :o [15:58:22] heh, cool [15:58:26] Beetstra: :) yw [15:58:55] Now the next part .. MySQL, someone? [15:59:03] Beetstra: then find one more setting in putty, the one that says to Allow agent forwarding [15:59:07] Beetstra: what you need? [15:59:23] Beetstra: so that you can connect from bastion host to your instance.. 
and it forwards the key [15:59:25] Preferably a web client to set up tables [15:59:38] Beetstra: please note all mysql instances are temporary, because Ryan is going to set up a mysql server later [15:59:49] Yes, I know [15:59:57] but if you want I can help you set up one for now [16:00:15] Beetstra: after making the config change in putty, and setting up a session, click save (session), so next time you can just click in putty and you're in (and not repeat changing the settings) [16:00:36] mutante: yep, thanks! [16:00:55] petan|work, that would be great, then I can try and see if things run smoothly [16:01:09] ok, how large db should be? [16:01:14] For the final tables Ryan was going to install more harddrive space.. [16:01:21] note: MariaDB :) [16:01:28] MariaDB? [16:01:31] eh .. first 'empty', the bots will fill most of it [16:01:32] Beetstra: or mysql? [16:01:43] Beetstra: bots? [16:01:48] MySQL [16:01:53] yes, 2 of them [16:01:54] ok, what project you work on? :) [16:02:05] You know LiWa3, COIBot and XLinkBot ?? [16:02:05] because we already have bots project [16:02:14] yes, these should be in bots project [16:02:20] we already have 3 sql servers there [16:02:22] ah, no, I am not in a bots project for that yet .. [16:02:23] :D [16:02:32] I think .. [16:02:34] ok, but you want to run them on bots project [16:02:41] yes [16:02:43] ok [16:02:50] I will add you to project then ok? [16:02:57] OK, great! [16:03:25] you are there [16:03:33] we have 2 application servers, bots-1 and bots-2 [16:03:38] you want to use bots-2 [16:03:46] because it's empty :) [16:04:13] Yes, that is the plan I think .. this may be heavy [16:04:37] (the current very old box can't pull it anymore - a 4 processor Sun Sparc with 4 Gb internal memory) [16:04:38] there are 3 sql databases, bots-sql1 small and fast, bots-sql2 huge, bots-sql3 maria db [16:04:57] aha, maybe it's worth of optimizing the code? :) [16:05:01] The current dataset is getting close to 100 Gb ....... [16:05:09] Maybe people should work less on Wikipedia .. [16:05:18] right I will need to talk to Ryan in that case [16:05:24] you need to wait :o [16:05:50] I am parsing >80% of the edits on 772 wikis ... [16:05:57] I will move bots-2 to bigger server and create you instance with 4 gb of ram, but have to get approvement from him, because I think we are out of resources a bit [16:06:04] bots-sql2 * [16:06:32] I was going to start with an empty table, the 100 Gb is 6-7 years of parsing .. [16:06:46] Will migrate the old data later, if this works properly [16:06:47] Beetstra: you know we don't have copy of wiki db yet? [16:06:48] petan|work: MariaDB = what we"re supposed to use instead of mysql [16:06:58] mutante: that's on bots-sql3 [16:06:59] :) [16:07:01] That does not work anyway, this has to be in real time [16:07:07] when I installed bots sql maria wasn't in puppet [16:07:25] bots sql3 is a first maria we ever installed on labs [16:07:27] see #wikipedia-en-spam .. [16:07:44] petan|work: ah:) just wanted to point out to Beetstra its actually that when talking about mysql, but when using it it's all the same.. and compatible [16:07:55] yes [16:08:10] Beetstra: so there are some puppet classes for that you can apply to your instance.. 
in the db section [16:08:18] later on we move all db's to new server so it probably doesn't really matter tbh [16:08:18] if you want local db [16:08:32] Ryan doesn't want local db's at all [16:08:51] he said sql should be on separate vm's because of utilization [16:08:57] so that's what we do [16:08:59] * Beetstra is slightly lost .. [16:09:11] I am now on bastion .. where do I have to be for the bots? [16:09:20] ssh bots-2 [16:09:25] :o [16:09:30] that's what you want [16:09:38] it's a 2gb box [16:09:43] so probably not suitable [16:09:51] Beetstra: btw thank you for working on Wikimedia technology! :-) (I'm volunteer development coordinator) [16:10:04] but we don't have anything better right now, I need to ask for more :) [16:10:24] bots-2 denies permission .. [16:10:29] aha [16:10:32] Beetstra: so you want to change the source code of bots? then the next step would be setting up git and gerrit [16:10:38] did you create a key? [16:10:43] private key [16:11:20] I am constantly rewriting the bots .. I indeed may want to set up git and gerrit indeed, but for now ftp is fine [16:12:00] ftp is something we don't have [16:12:01] :o [16:12:08] you need to use scp [16:12:20] !logging [16:12:21] To log a message, use the following format: !log [16:12:24] Beetstra: ^ [16:12:45] sftp works .. [16:12:45] everytime you change someting do !log bots installed blah on bots-3 becaue I need it for my bot [16:12:47] ok? [16:12:57] !sal [16:12:58] https://labsconsole.wikimedia.org/wiki/Server_Admin_Log see it and you will know all you need [16:13:03] it get logged to sal [16:13:08] so other people know you changed it [16:13:35] yes sftp work :) [16:14:12] that is fine .. for now I have to get access to bots-2 up .. [16:14:25] ok problem is in ssh key [16:14:43] simplest way is to create a new private key on bastion [16:14:50] then upload public to labs [16:15:16] another way better for experienced users is forwarding from your local machine [16:15:40] but that's sometimes complicated especially on other OS, I know only how to do that on unix [16:16:47] johnduhart: I want to change 404 page ok? [16:17:02] or not? [16:17:35] or if you wanted to do that... [16:22:07] 01/12/2012 - 16:22:07 - Updating keys for beetstra [16:22:09] 01/12/2012 - 16:22:09 - Updating keys for beetstra [16:23:32] my 'project' to log would be 'bots-2'? [16:23:42] back [16:23:53] Beetstra: you can use WinSCP to copy files via ssh (using a GUI).. but i would suggest to start with git/gerrit right away.. [16:24:12] no it's bots [16:24:31] Beetstra: that looks and feels like an FTP client ..but its safe [16:24:34] bots-2 is ionstance name [16:24:48] petan|work: What 4040 page? [16:24:52] 404 [16:24:57] !log bots connected to bots-2 for COIBot, LiWa3, XLinkBot [16:24:58] Logged the message, Master [16:25:10] changes should be logged [16:25:17] not everything :) but ok [16:25:33] like when you install a new software, reboot server, restart something [16:26:13] example !log bots installed phpcli on bots-2 to run cluebot there [16:27:22] johnduhart: not found, you know? [16:27:27] http 404 [16:27:34] that page [16:27:58] so that people who try to connect to en_wikipedia/wiki see we moved it to en.wikipedia [16:30:57] I meant to log that I was using bots-2 .. i will not log every file I upload .. [16:31:08] sure :) [16:31:09] np [16:31:21] Beetstra: out of curiosity, what city are you in/near? 
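For command-line uploads, a sketch of copying files to bots-2 through the bastion (directory names are placeholders; WinSCP does the same thing graphically, as mutante suggests):

    # copy to the bastion, then on to the instance (agent forwarding covers the second hop)
    scp -r coibot/ bastion.wmflabs.org:
    ssh -A bastion.wmflabs.org 'scp -r coibot/ bots-2:'

    # or in one hop, tunnelling through the bastion (needs OpenSSH 5.4 or newer)
    scp -o 'ProxyCommand ssh -W %h:%p bastion.wmflabs.org' -r coibot/ bots-2: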
[16:31:32] I am in Riyadh [16:31:49] !wiki Riyadh [16:31:49] http://en.wikipedia.org/wiki/Riyadh [16:31:52] petan|work: Sure I guess [16:32:02] ok [16:32:10] Yes, th�t Riyadh [16:32:25] Beetstra: got it. I've never been there [16:32:46] Not many people have [16:32:52] It is not that easy to get here [16:34:01] Pff .. it is not even easy to be here [16:34:14] * Beetstra is practically locked up in a hotel [16:34:46] Beetstra: ! my goodness, hope you can leave eventually. :) [16:35:37] Yes, I can leave now, but it is not too save. I will move in a week or two, or will be driven around by a driver from my company (back and to work) [16:36:01] I have been out this afternoon, to a shopping mall, but I can't just walk out and look around [16:44:58] OK, short experiment .. [16:45:08] beetstra@bots-2:~/coibot$ perl coibot.pl [16:45:08] Can't locate POE.pm in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.10.1 /usr/local/share/perl/5.10.1 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.10 /usr/share/perl/5.10 /usr/local/lib/site_perl .) at coibot.pl line 10. [16:45:09] BEGIN failed--compilation aborted at coibot.pl line 10. [16:45:56] (the good thing is, that is just the second module I am loading) [16:50:32] Beetstra: in case you're ever near one of these: https://www.mediawiki.org/wiki/MediaWiki_developer_meetings feel free to drop in [16:51:10] OK, will try [16:51:30] Who do I contact if I would need perl modules updated? [16:53:16] or even, perl modules added .. [16:57:36] Beetstra: could you be more specific? you mean perl modules added to your labs instance? [16:57:49] Beetstra: is that something you don't have access to do yourself? [16:58:10] Beetstra: you may want to join the Labs mailing list https://lists.wikimedia.org/mailman/listinfo/labs-l [16:58:26] yes, but then everything is on my account .. which is not necesarily a bad thing [16:58:55] Beetstra: so you mean you'd like these modules to be available to all Labs users? [17:01:07] Well, that does sound logical in one way .. [17:01:13] heh, google plus isn't blocked at school [17:08:17] !log bots installing perl module POE (+needed modules) for beetstra [17:08:18] Logged the message, Master [17:10:09] this is going to take time .. [17:23:59] hexmode: ping [17:24:41] petan: ping [17:26:06] pong [17:27:48] petan: how are you ? [17:28:06] can you busy ? [17:28:19] petan: are you busy ? [17:29:23] I'd like to make some progress with the search-deployment [17:30:02] I was thinking that the fastest way to do it is to close search test [17:30:11] I was thinking that the fastest way to do it is to clone search-test [17:30:21] using a larger instance [17:36:21] ra [17:38:54] eh [17:39:17] can you tell me more :o [17:39:41] OK, tried to install the first POE module: don't have sufficient rights .. not sure how to install locally [17:47:28] OrenBo: hm? [17:48:18] OrenBo: hey [17:48:24] just got here [17:48:24] hope I didn't keep you too long [17:49:32] petan: hey [17:49:41] johnduhart: hey [17:50:27] OrenBo: in case you missed it: http://thread.gmane.org/gmane.science.linguistics.wikipedia.technical/58176 [17:51:37] hi hexmode [17:52:09] * hexmode goes to look at the etherpad one more time [17:52:59] hrm... no ryan yet [17:53:32] ok [17:53:37] we need to wait [17:53:46] petan: math is working? [17:53:49] yes [17:53:52] john fixed it [17:53:54] I think [17:54:00] re [17:54:17] I've created the indexer instance [17:54:24] OrenBo++ [17:54:45] OrenBo: what are we waiting on now? 
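One way around the "insufficient rights" problem without waiting for a system-wide install is a per-user module tree; a sketch assuming cpanminus is available on the instance (if it is not, installing it per-user, or asking for the modules to be installed system-wide, comes first):

    # install POE and its dependencies into ~/perl5 instead of the system paths
    cpanm --local-lib ~/perl5 POE

    # point perl at the per-user tree (add to ~/.profile to make it stick)
    export PERL5LIB=~/perl5/lib/perl5:$PERL5LIB

    perl coibot.pl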
[17:55:02] ok [17:55:18] sorry, just trying to get an idea of what is left and if I need to find someone to help [17:55:33] well it would be best to backup search test and put it in the instance [17:55:59] OrenBo: are you sure we need medium for this? [17:56:05] once search is working localy [17:56:23] petan: why ? [17:56:29] because it's big [17:56:48] let me put it this way [17:56:59] it is too small in terms of memory [17:57:14] we need 50% of the index in memory [17:57:19] or it will be slow [17:57:37] it doesn't work [17:57:40] can you ssh there? [17:57:53] I just started it - it might not be ready [17:58:01] why is it nfs server? :) [17:58:04] and I need to setup security rules [17:58:34] setting up shares to other machines [17:58:35] you created it incorrectly so it probably won't be started [17:58:43] we already have nfs server [17:58:49] but ok [17:59:16] what's an nfs server amongst friends [17:59:39] * hexmode puts that on bash [17:59:49] or, rather bugzilla quotes [18:01:54] OrenBo: ok so let me know if you needed some assistance with this [18:02:00] just to have it logged: [18:02:13] !log bots installation of POE on own account .. failed [18:02:14] Logged the message, Master [18:02:18] OrenBo: how many space in sql you will need for this [18:02:26] time for dinner [18:02:49] also If you are up to reimplementing it on a larger machine we can start of smaller - remember the production indexer has 600GB storage and 48 gigs of ram [18:03:05] ok but we are not on prod now [18:03:19] * hexmode is armed with info for fixing abuse filter and aiming to fix it now [18:03:32] hexmode: do that! [18:03:33] :) [18:04:11] still It should have memory about 35% of the database size [18:04:29] why... [18:04:30] and hard disk about 75% [18:04:38] 75% of what [18:04:46] that's just for doing searches [18:05:02] hm [18:05:06] to index you need more storage [18:05:09] right [18:05:29] I don't know - ask notpeter I can't chek production [18:05:34] I will create sql account for you ko? [18:05:36] ok [18:05:43] that a know out [18:05:47] that a knok out [18:05:53] ok [18:06:13] what's up? [18:06:43] petan: can you back up search-test and put it on the new machine [18:06:52] notpeter: wassup [18:07:14] why is the indexer in production got such a large HD ? [18:07:16] yes [18:07:18] backup what? [18:07:22] how much is index [18:07:41] clone search-test instance [18:07:50] you said it can be backed up [18:08:00] OrenBo: I assume b/c it indexes everything but most of it is long-tail stuff and doesn't need to be referenced much [18:08:50] yeah [18:08:57] it's like 600 gigs of indexes [18:09:07] but that's for all of the major and many of the intermediate sized wikis [18:09:07] assumtions are great - here is another two - it has dounle the indexes and all of the dumps [18:09:24] OrenBo: what all you need to copy from it [18:09:53] huh [18:10:07] don't know - you and jermy set it up and now it works [18:10:16] I didn't set up much [18:10:22] you need to get software from it? [18:10:31] afaik you installed sw [18:10:36] petan: could you look at the etherpad and help me figure out which wiki's to get abusefilter stuff for [18:10:38] ? [18:10:40] yes I did [18:13:46] ok OrenBo tell me what I should get to new instance then and I will do that [18:15:10] hexmode: you are sure we can create such a big instance now? [18:15:19] Ryan has no problem with that? 
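If cloning search-test really just means carrying its working setup across, that can be an rsync over ssh from the new instance; a sketch only, since nobody in the channel is sure where the search tree actually lives, so the path below is a placeholder:

    rsync -avz search-test:/path/to/lucene-search/ /path/to/lucene-search/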
[18:15:21] :o [18:15:45] petan: I'm not sure of anything and Ryan would need to really say, but I assume we can [18:16:01] ok [18:16:03] let's wait for him [18:16:30] petan: I'd rather just go ahead for now [18:16:41] hm [18:17:10] we can always ask forgivness, but this is pretty important stuff so I'd rather do it now [18:17:46] OrenBo: what services are going to run there [18:25:01] hey [18:25:09] Ryan_Lane: Oren needs to create medium instance [18:25:32] that's fine [18:26:44] we need a temporary media wiki installation. [18:27:00] we need jave + apache [18:27:06] we need jave + ant [18:27:17] right [18:27:29] we need rsync [18:28:03] deployment-wmsearch [18:28:06] will be name? [18:28:06] we need to import a 30 pages to the db [18:28:14] I deleted indexer and there is dns issue [18:28:21] when I create same name [18:28:25] ok [18:28:54] once it works we will need to make the searcher instance - similar setup but less storage/memory [18:29:08] we will know the index size [18:29:10] so it's a temporary one? [18:29:14] nope [18:29:16] hm... [18:29:22] ok just tell me what to do [18:29:26] I will leave rest on you [18:29:39] search needs one indexer and many searachers [18:29:55] one would be ok [18:29:57] ;) [18:30:04] i agree [18:31:09] can you get search from subversion to same path as in search [18:31:19] ? [18:31:31] yes [18:31:36] I got to go to buy some food [18:31:39] but I don't know which path it is [18:31:57] i don't have ssh on this machine (no key) [18:32:09] sec [18:35:23] you guys tell me there is a way to see all you did on the machine - where is the stuff logged ? [18:35:37] !saql [18:35:39] !sal [18:35:39] https://labsconsole.wikimedia.org/wiki/Server_Admin_Log see it and you will know all you need [18:37:24] it just has comments people put in [18:38:04] !log deployment-prep installed updates on new instances and rebooting it [18:38:05] Logged the message, Master [18:40:05] it says nothing about installation of ant or of the lsdemon [18:40:13] hm probably no one logged it [18:40:21] did you? [18:41:56] don't know how [18:42:13] is deployment-wmsearch the new instance [18:42:19] sure [18:42:22] it's up [18:43:42] I'll be back in a while - if you setup mw + ant + java I'll install search [18:45:09] wait a moment what is mw for? [18:45:17] which version you want? [18:45:21] notpeter: mediawiki [18:45:34] I know [18:45:36] but what for? [18:45:40] 1.19? [18:45:41] same as producion [18:45:47] we already have that [18:45:51] ok [18:45:53] there is whole cluster of wikis [18:46:10] we have 300+ wikis there [18:46:11] search doesn't have a mediawiki install [18:46:18] I think so [18:46:20] just the conf files [18:46:23] i agree [18:46:24] np [18:46:44] it is also hand configured [18:46:46] !log deployment-prep mounted conf files [18:46:47] Logged the message, Master [18:46:54] I don't know how to do that [18:47:11] OrenBo: it's in /usr/local/apache [18:47:30] I can however modify a working setup [18:48:09] if you can provide a share to an existing mw on another machine we might be ok as well [18:48:21] yes that's what I have done [18:48:23] be carefull [18:48:33] btw do you need apache there? [18:49:11] !log deployment-prep installed all requested sw on seaarch [18:49:12] Logged the message, Master [18:49:30] OrenBo: there is a folder /usr/local/apache check what is there [18:49:35] in common/wmf-config is config [18:49:44] don't know ask notpeter : [18:49:52] do not change anything in this folder! 
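The requested software amounts to something like the following on the new instance (package names are assumptions for the Ubuntu images labs uses; the !log above records that it was installed, not how):

    sudo apt-get update
    sudo apt-get install openjdk-6-jdk ant apache2 rsync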
[18:50:27] also needed is a share to the Internatinalization folder [18:50:39] please put the paths in the etherpad [18:50:41] yes that is there too [18:50:44] ok [18:50:50] ttl [18:51:28] done [18:54:25] so, can you guys live without squid for a bit? [18:54:44] I *really* need to get that new node in [18:54:49] ah [18:54:51] yes [18:54:54] ok [18:54:56] hexmode: ? [18:55:02] or another ops guy can handle it, maybe [18:55:09] petan: ? [18:55:13] no squid? [18:55:26] sure, no squid is fine for now [18:56:25] petan: could you look at the image slowness problem? [18:56:30] I did [18:56:33] problem is IO [18:56:48] unless we get gluster or upload squid it's gonna be bad [18:57:21] vm's have terrible io [18:57:37] small files are loading lot of time [18:57:45] k, so if OrenBo gets us search and I get the abusefilter then that just leaves image slowness/squid [18:57:56] hrm... [18:58:02] hm probably I think it's gonna be better with squid [18:58:09] * gluster [18:58:11] right, both [18:58:16] both would make it better [18:58:30] text squid would make less resource expensive loading of pages [18:58:33] it would be better with squid, yeah [18:58:59] petan: I have a meeting right now, so if you can write a summary at the top of the page, that would help for when I get back [18:59:04] k [19:00:40] I'm here now [19:00:46] petan: I don't know what is involved in gluster... could you outline what is needed there? [19:00:49] on the pad? [19:00:56] johnduhart: hey! [19:01:08] * johnduhart is working on setting up gluster on some servers he just got [19:02:48] servers are loud [19:03:04] and long [19:03:18] long? [19:03:19] :P [19:03:22] heh [19:03:48] Compared to a tower PC yes :p [19:04:10] wait is it for rack or tower? [19:04:20] rack servers looks completely differently :D [19:04:35] no wonder it's long ^^ [19:05:19] It's not a rack server but a tower server [19:05:24] From 2003 :p [19:05:29] 2 of them [19:06:18] ah [19:06:29] yay [19:06:29] :D [19:06:31] heh [19:06:42] that's a piece of old box [19:06:55] Yeah, better than nothing though :) [19:07:07] 3 10K SCSI drives in each, in RAID [19:07:29] I have raid on my desktop [19:07:36] :D [19:07:52] Dual Socket board, we're looking at getting another CPU (it's about $20) [19:08:17] test [19:08:20] :o [19:08:23] hee andre! [19:08:26] (diederik) [19:08:34] http://support.gateway.com/s/Servers/COMPO/Cases/WME866319/WME866319nv.shtml [19:08:49] Andre_Engels: howdy [19:08:50] ping Ryan [19:09:01] !Ryan [19:09:01] man of the all answers ever [19:09:02] Ah, not recognized from the nick :-) [19:09:04] Hello [19:09:16] @search question [19:09:16] Results (found 1): ask, [19:09:20] hmm [19:09:25] @search 1. [19:09:25] Results (found 1): account-questions, [19:09:33] !account-questions | Andre_Engels [19:09:33] Andre_Engels: I need the following info from you: 1. Your preferred wiki user name. This will also be your git username, so if you'd prefer this to be your real name, then provide your real name. 2. Your SVN account name, or your preferred shell account name, if you do not have SVN access. 3. Your preferred email address. [19:09:56] fancy bot! [19:10:00] indeed :) [19:10:04] we can thank petan for that [19:10:10] thanks petan! [19:10:11] 1. andreengels 2. a_engels 3. andreengels@gmail.com [19:10:12] heh [19:10:21] yw [19:10:44] Or make 1 Andre_Engels [19:10:53] Then it's equal to the one on the projects. [19:10:57] hm... you know this is logged channel? 
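One way to confirm the IO bottleneck being blamed for the image slowness before choosing between squid and gluster; this assumes the sysstat package can be installed on the web instance.

    sudo apt-get install -y sysstat
    iostat -dxk 5     # watch await and %util on the virtual disk while loading an image-heavy page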
[19:11:01] Easier to remember :-0 [19:11:04] posting emails is probably not a best idea :o [19:11:10] it'll be Andre Engels, then [19:11:17] since mediawiki changes _ to " " [19:11:25] Yes, I know [19:11:30] ok [19:11:45] all of my email addresses are posted on the web [19:11:45] <^demon> Ryan_Lane: I just recovered 4.7G from / on formey :) [19:11:52] ^demon: how? [19:11:59] ~magic~ [19:12:02] \o/ [19:12:06] I'm not that knowledgeable about the Mediawiki code, but I have plenty of experience as a user [19:12:12] <^demon> I had an unpacked copy of phase3 clone in my homedir. [19:12:37] Andre_Engels: you sure you want an underscore in your shell account username? :) [19:12:56] I've always felt shell account usernames with special characters to be a little weird [19:12:56] better than : [19:13:04] Well, it _is_ my svn account name [19:13:04] that isn't an allowed character ;) [19:13:07] oh [19:13:08] :P [19:13:16] you sure? [19:13:19] (and that one came as a copy from my sourceforge name) [19:13:24] I'll check... [19:13:39] ah. I was mispelling it [19:13:41] it sure is [19:13:41] ok [19:13:45] easy enough [19:14:37] !initial-login | Andre_Engels [19:14:37] Andre_Engels: https://labsconsole.wikimedia.org/wiki/Access#Initial_log_in [19:16:33] gah. firefox is eating 8GB of ram on my box [19:16:39] one of my stupid extensions must be leaking memory [19:17:09] 01/12/2012 - 19:17:09 - Creating a home directory for a_engels at /export/home/bastion/a_engels [19:17:10] that's one of the downsides of 64Bit Apps :P [19:18:09] 01/12/2012 - 19:18:09 - Updating keys for a_engels [19:19:10] 01/12/2012 - 19:19:10 - Updating keys for a_engels [19:19:20] petrbena 24049 2.1 9.6 1159532 602724 ? Sl Jan08 124:16 /usr/data/priv/petrbena/Bordel/firefox/firefox [19:19:22] petrbena 24109 0.7 0.5 153680 31840 ? Sl Jan08 45:31 /usr/data/priv/petrbena/Bordel/firefox/plugin-container [19:19:25] Ryan_Lane: :P [19:19:36] o.O [19:19:42] heh [19:19:58] petrbena@Desktop-ws:~/Bordel/devel/wmib$ uptime [19:20:00] 20:19:50 up 57 days, 17 min, 39 users, load average: 0.01, 0.07, 0.08 [19:20:02] :D [19:20:03] heh [19:20:04] my desktop [19:20:13] I think it's foxyproxy [19:20:20] nah [19:20:31] for me. [19:20:33] aha [19:20:36] possible [19:21:40] hi ryan: how can I create a new group mobile_stats? I can't seem to find the link [19:21:50] Working on the steps, but what exactly am I to do with gerrit? [19:21:52] group? [19:21:57] !groups [19:22:01] !security [19:22:01] https://labsconsole.wikimedia.org/wiki/Security_Groups [19:22:10] oh. security group? [19:22:16] !git | Andre_Engels [19:22:16] Andre_Engels: for more information about git on labs see https://labsconsole.wikimedia.org/wiki/Git [19:22:37] so, a brief intro to labs…. [19:22:44] labs is divided into projects [19:22:53] where each project works autonomously, mostly [19:23:02] I need to register/login there... Which login am I to use? [19:23:04] a project has members, sysadmins, and netadmins [19:23:18] Andre_Engels: uses your wiki username and password [19:23:35] netadmins can create and manage instances in a project [19:24:23] netadmins can manage IP addresses, firewall rules, and dns entries [19:24:37] access to projects is managed by the project sysadmins [19:24:47] OrenBo: around? [19:25:15] I'll have to go off-line for a short bit... Be back in a few minutes. [19:26:11] group == project (sorry) [19:27:28] you can't create a new project [19:27:40] what new project do you need? 
[19:27:43] I can create it for you [19:28:00] okay, mobile_stats [19:28:45] who should own it? you? [19:29:16] yes [19:29:20] ok [19:29:43] Andre is a member, Erik Zachte will become a member [19:29:44] oh, cool. seems I fixed my add project bug [19:29:59] drdee: I added the new project: mobile-stats [19:30:10] other projects are named with a dash, so I thought I'd be consistent :) [19:30:12] 01/12/2012 - 19:30:11 - Creating a project directory for mobile-stats [19:30:12] 01/12/2012 - 19:30:12 - Creating a home directory for diederik at /export/home/mobile-stats/diederik [19:30:12] 01/12/2012 - 19:30:12 - Creating a home directory for laner at /export/home/mobile-stats/laner [19:30:22] sure [19:30:24] Heheh, Apparently the servers are causing the lights to dim [19:30:34] drdee: you can add the other users to your projects [19:30:45] !manage-projects is https://labsconsole.wikimedia.org/wiki/Special:NovaProject [19:30:45] Key was added! [19:30:45] johnduhart: sounds like a very stable setup [19:30:52] !manage-projects | drdee [19:30:52] drdee: https://labsconsole.wikimedia.org/wiki/Special:NovaProject [19:31:00] that interface will let you do so [19:31:13] 01/12/2012 - 19:31:12 - Updating keys for diederik [19:31:13] 01/12/2012 - 19:31:12 - Updating keys for laner [19:31:42] awesome, many thx as always [19:31:47] yw [19:31:51] (btw, did you see analytics day announcement?) [19:33:15] yeah [19:33:23] !rights is https://labsconsole.wikimedia.org/wiki/Access#Rights [19:33:23] Key was added! [19:34:42] ryan: did you put it in your calendar :) ? [19:34:47] heh [19:34:54] I didn't, but I should [19:34:56] is it on a weekday? [19:35:01] I'd like to see the speakers, for sure [19:35:04] back [19:35:27] it's on a friday [19:35:41] hm [19:35:47] seems we are officially using all avaiable memory [19:36:09] but i think it's gonna be really awesome, they are all core committers to their projectsw [19:36:13] yeah [19:36:36] that's my doppelganger [19:36:53] hm. we still have like 20GB free [19:37:07] * Ryan_Lane gets to work on moving virt1 into the cluster [19:37:13] consider me not here [19:49:12] 01/12/2012 - 19:49:12 - Creating a home directory for a_engels at /export/home/mobile-stats/a_engels [19:50:13] 01/12/2012 - 19:50:13 - Updating keys for a_engels [19:55:12] !log bots 2nd attempt installing perl modules - installing POE [19:55:13] Logged the message, Master [20:04:45] !log bots updated CPAN, POE installed [20:04:46] Logged the message, Master [20:05:41] git push origin HEAD:refs/for/master [20:05:44] maplebed: ^^ [20:10:07] OrenBo: where is search in svn [20:10:11] notpeter: ^ [20:11:29] two places. [20:11:42] ok :) [20:11:51] where? [20:12:05] (was getting right paths) [20:12:06] trunk/lucene-search-2 [20:12:07] and [20:12:10] ah [20:12:41] trunk/extensions/MWSearch [20:12:59] the former is the source for the jar that runs on the boxes [20:13:06] the latter is obvious the extention that queries it [20:16:35] right [20:17:11] I'm getting "Permission denied (publickey)." trying to log into the virtual server [20:18:47] !log bots Installed perl modules: POE::Component::IRC::State, WWW::Mechanize, XML::Simple, DBI [20:18:48] Logged the message, Master [20:20:41] OrenBo: . [20:20:50] installed some [20:30:43] Ryan_Lane: you realized why nfs and mem is using one vm [20:31:28] whys that? 
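A sketch of fetching the two search components notpeter lists above; the svnroot URL matches the one quoted later in this log, while the ant invocation (default target) is an assumption about the lucene-search-2 build.

    svn checkout https://svn.wikimedia.org/svnroot/mediawiki/trunk/lucene-search-2
    svn checkout https://svn.wikimedia.org/svnroot/mediawiki/trunk/extensions/MWSearch
    cd lucene-search-2 && ant     # builds the indexer/searcher jar; exact target name is an assumption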
[20:31:34] because we have no free ram :D [20:31:36] on labs [20:31:38] heh [20:31:49] from a performance perspective, they are at odds [20:31:55] I didn't want to create 2 instances while I needed only 1 gb for memc and 20gb for nfs [20:32:06] I would create either 2 m1 instances or 1 [20:32:14] * Ryan_Lane nods [20:32:16] while performance would be nearly same [20:32:44] memc doesn't need 2gb or ram, neither nfs [20:33:08] it's running only small php scripts [20:33:14] and web server has own cache [20:33:17] for fs [20:38:39] heh [20:38:51] Remote desktop in remote desktop in remote desktop is fun [20:45:57] don't use that [20:46:03] install cygwin [20:46:15] that's how I control win [20:52:23] brb [20:52:27] :o [20:52:31] in 2 hour [21:02:13] petan: abusefilter! [21:02:20] oh :( [21:03:01] johnduhart: abusefilter! [21:03:05] http://en.wikipedia.deployment.wmflabs.org/wiki/Special:AbuseFilter [21:03:08] w00! [21:10:06] OrenBo: where are you at on the search? Need help? progressing nicely? [21:41:15] petan: ? [21:41:53] johnduhart: can you give me admin on enwikibooks? [21:52:15] hexmode: you are steward :D [21:52:24] you can give yourself all you need [21:52:30] brb [21:57:52] hrm [22:01:05] petan: mw interwiki prefix isn't working here? http://labs.wikimedia.deployment.wmflabs.org/wiki/Main_Page [22:07:26] Because there isn't a mediawiki instance [22:07:32] mediawiki wiki [22:08:51] johnduhart: *sigh* [22:08:59] http://zh.wikipedia.deployment.wmflabs.org/wiki/Special:%E9%98%B2%E6%BB%A5%E7%94%A8%E8%BF%87%E6%BB%A4%E5%99%A8 [22:09:04] Tada! [22:10:45] http://zh.wikipedia.deployment.wmflabs.org/wiki/Special:AbuseFilter, also [22:10:59] * hexmode looks around for something else [22:15:52] petan: johnduhart: are you working on squid? Or should I? [22:16:07] I think we're still waiting on Ryan [22:16:27] Who's in the middle of other things atm [22:16:48] you guys mentioned we can live without squid right now [22:17:16] I guess we can [22:17:33] johnduhart: anything left here besides search? http://etherpad.wikimedia.org/DeploymentPrep [22:17:51] The thing is we'd have to take the site down to install squid, and petan wasn't happy about that [22:17:56] Ryan_Lane: yeah, there is just some very painful slowness right now as a result [22:18:03] But imo it's not a big deal, we can have an hour outage [22:18:10] johnduhart: taking it down isn't a big deal [22:18:31] johnduhart: you also wanted to do an sql update? [22:18:49] guess that was petan [22:19:03] Right before launch we'kll svnup and update [22:19:31] Is LQT working? [22:19:55] http://en.wikipedia.deployment.wmflabs.org/wiki/Help:Displaying_a_formula just imported \o/ [22:20:14] johnduhart: robla asked me about that so I need to check [22:20:35] is there a wiki in prod that has lqt? someplace I could use to check [22:21:35] let me look [22:22:37] hexmode: I can turn it on for testwiki, want that? [22:23:09] johnduhart: hrm.... lets find some real-world use [22:23:30] I know there is some on mw.o, but I need to find another place [22:23:45] or, heh, I could just import mw.o pages [22:23:52] That would work :) [22:25:16] johnduhart: so, yeah, testwiki then... [22:26:17] Done [22:27:07] :) [22:28:22] johnduhart: could you do the squid, or do you need Ryan to do it? [22:28:34] needs me to do it [22:28:48] Ryan_Lane: will you have time tomorrow? [22:28:53] for the production-like environment [22:29:06] Ryan_Lane: There's no one from ops that can look at it besides you? 
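A minimal sketch of the shared memcached/NFS instance petan describes, capping memcached at the ~1 GB he mentions so the rest of the instance's RAM stays free for the NFS server's page cache; the listen address and run-as user are assumptions.

    memcached -d -m 1024 -p 11211 -u nobody -l 0.0.0.0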
[22:29:30] if I build mobile2 as virt0, move virt1 services to virt0, then add virt1 to the compute cluster today, then yes, I can do it tomorrow [22:30:35] Ryan_Lane: \o/ [22:30:36] I thought we were throwing the old mobile servers into a fire... [22:30:51] not if they are still under warranty [22:30:57] then we can announce it to the world on friday [22:30:59] :) [22:31:17] hexmode: that's a huge if, man [22:31:28] this is a very ugly page at the bottom: https://en.wikipedia.org/wiki/Help:Calculation [22:31:46] Ryan_Lane: hmmm [22:31:48] hmmm [22:32:46] Ryan_Lane: But they're tainted! They ran the ruby gateway! [22:33:39] heh [22:33:56] well, it'll *still* be running ruby ;) [22:36:32] nnnoooooooooooo [22:36:43] :p [22:38:18] OrenBo: ping? [22:40:12] hexmode: We can't have it because Ryan hasn't set it up yet [22:40:40] johnduhart: I know, I was just making a silly title for the explanation [22:40:46] ah, okay [22:41:12] I don't think the image slowness is related to squid, it's some sort of backend issue [22:42:04] johnduhart: if you think it is something else, do you have time to investigate? Or would squid solve it anyways? [22:42:17] I'm taking another look right now [22:43:49] k... I'm going start putting together an outline of what we need to document and look over making some test cases [22:43:53] documentation stuff [22:44:06] great [22:46:41] oh... doh [22:46:47] got distracted [22:46:49] lqt [22:46:58] * hexmode goes to export from mw.o [22:52:14] Ryan_Lane: if you create me 2 gb instance for sql, I will rid you of slowness johnduhart [22:52:48] It's not a db query issue [22:53:00] actually it is a bit, mysql is slow [22:53:06] I've been doing some tests today [22:53:14] it's waiting for disk, most of time [22:53:46] I increased buffer to inno engine from 16 to 200mb and it's still on 100% [22:53:58] It's not a slow query [22:54:02] http://test.wikimedia.deployment.wmflabs.org/wiki/File:Empire_State_Building_Top.jpg#mw-debug-pane-debuglog [22:54:08] Look at the query log [22:54:18] yes, if one user is using the site, then not [22:54:34] if 10 users would be then we would have problem [22:54:52] That's not the problem I'm trying to solve petan. [22:54:59] I know [22:55:10] but unless we get gluster there is no improvement to images [22:55:21] or we need a squid that would help [22:55:29] it's io problem [22:55:30] The problem I'm trying to solve is why does it take 20 seconds to load the file info page of a file in a remotedb repo. [22:55:49] because the file is localy imported? [22:55:49] How can you keep coming to these conclusions if you don't know what the problem really is? [22:56:13] Yes we have an IO issue, but are we sure that's really the problem? [22:56:36] regarding images probably not, but images isn't only thing which isn't fast enough [22:56:41] did you try simple wiki? [22:56:55] that is full db clone and it takes 20 seconds to load SPecial:Statistics, or it used to [22:57:10] johnduhart: steward on testwiki? [22:57:12] No I'm not focusing on that right now, one thing at a time. 
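A sketch of checking whether the InnoDB buffer-pool bump petan mentions (16 MB to 200 MB) is still too small: Innodb_buffer_pool_reads counts reads that missed the pool and went to disk, so if it keeps climbing under load the database is still IO-bound.

    mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
    mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';"
    # to raise it further, set innodb_buffer_pool_size under [mysqld] in /etc/mysql/my.cnf
    # (stock Ubuntu path, an assumption for the labs image) and restart mysqld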
[22:57:33] hexmode: You're steward everywhere [22:57:40] That's what a steward is [22:57:56] hexmode: use Special:UserRights [22:58:05] nope [22:58:07] that's how you can give yourself any rights [22:58:08] ah [22:58:15] I know, test wiki is not in wiki set [22:58:16] doesn't let me change things on testwiki [22:58:26] hexmode: use it on meta and type hexmode@testwiki [22:58:32] then it will let you [22:58:33] ah [22:59:41] johnduhart: if we get a debug log to produce time for each line, maybe we could trace what is slow [23:00:09] petan: enable profiling? [23:00:12] that's what I don't like on web applications you can't debug them so well like binaries [23:00:34] 439 Files Included [23:00:44] https://www.mediawiki.org/wiki/Debugging#Profiling [23:01:26] Don't hook up the profiler just yet [23:01:53] however slowness isn't a blocker it's expected [23:01:58] vm's are slower than prod [23:02:08] I mean not a blocker for announcement, maybe a blocker for release [23:02:37] we can still tell people they can test gadgets etc... [23:04:04] k [23:04:51] reason why I wanted to move sql to better config is that we can do it now, later it doesn't make sense [23:05:11] I would need to shut down site for a long time [23:05:20] petan: how important is that [23:05:35] I don't really need if it's needed but if more people will start using site we may discover that it become unusable [23:05:35] looks like the only other thing is search [23:05:39] because of slow IO [23:05:49] I know, maybe we don't even need to do that [23:05:51] sql [23:06:08] but I believe if we had 1gb more in ram performance would be 10 times better [23:06:24] I was hoping OrenBo would get search done today... was hoping he would [23:06:37] I did stuff he needed hm... [23:06:47] see ep [23:06:50] does he know? [23:06:54] I updated it there [23:06:57] I guess he read it [23:09:52] petan: do you have time to work on the custom error handler pages? [23:11:17] yes [23:11:23] error page? [23:11:27] sure [23:11:32] 404 handler [23:11:40] how urgent it is? [23:11:51] https://en.wikipedia.org/404 [23:11:57] like that ^ [23:12:16] petan: not too much... but if you're looking for something? [23:12:50] It looked like the slow IO problem wasn't going anywhere, but maybe I misunderstood [23:15:39] k [23:15:52] actually I was going to sleep but this I can do now [23:16:04] don't let me keep you up ;) [23:16:19] I can't fire you, after all [23:34:59] johnduhart: around [23:35:08] I can't use php as Error page, why? [23:35:10] only html [23:35:14] ? [23:35:28] when I use php, apache can't display it [23:35:34] that's weird [23:35:41] http://labs.wikimedia.deployment.wmflabs.org/w/gghshs.gsdfgh [23:36:04] I wanted to show a page name in that [23:36:14] but it allows me to use only html doc [23:36:23] it behaves pretty weird [23:36:33] I needed to reload it like 5 times for it to take at least htm [23:36:42] PHP isn't on outside of w [23:36:51] ah [23:36:53] that's why [23:37:07] why it isn't? [23:37:18] Security [23:37:31] what isn't on mean [23:37:33] Aren't these 404s handled by squid? [23:37:49] petan: It's not enabled outside of live [23:37:51] like it download a page as raw text even source code? or fail [23:38:16] Not sure [23:38:17] if you just disable php apache is giving you source code [23:38:21] that is even worse [23:38:24] Why? 
[23:38:26] insecure like hell [23:38:28] No [23:38:41] what if leaked file was password file or whatever [23:38:54] Well don't put passwords were apache can get them [23:38:58] it's only really insecure if you have the configuration files in the root [23:39:08] ok and how is that secure then [23:39:16] and they definitely should not be [23:39:29] I mean what is benefit of giving source code rather than executing file [23:39:40] Security [23:39:46] if php is disabled, it can't run it [23:39:47] I see nothing secure on that [23:40:14] http://labs.wikimedia.deployment.wmflabs.org/w/extension/Math/Math.php?someUndiscovereedParameterExploitHere=... [23:40:24] actually what I was most concerned about on all servers was actually that server wasn't leaking source code [23:40:42] who cares if it leaks code? [23:40:51] it's all open source anyway [23:41:02] ding ding ding you win Ryan_Lane [23:41:03] here yes [23:41:17] in general it probalyisn't secure... [23:41:19] the only thing that would be concerning would be the configuration [23:41:40] well, you could protect it by running it through a proxy, and doing detection before delivery [23:41:47] ok [23:41:49] back to 404 [23:42:10] how can I make it execute a php file... or is that page in html / cgi on prod? [23:42:10] Ryan_Lane: Aren't 404s handled by squid? [23:42:23] umm [23:42:27] some, maybe [23:42:35] http://en.wikipedia.org/w/gsdgh [23:42:37] this one? [23:42:39] others likely by apache [23:43:02] I don't see anything in the apache config for handling 404s like that [23:43:20] like what [23:43:36] Like what you just linked. [23:43:52] http://labs.wikimedia.deployment.wmflabs.org/w/gsdgh [23:43:58] this is pretty same and handled by apache [23:44:12] Date: Thu, 12 Jan 2012 23:43:33 GMT [23:44:12] Server: Apache [23:44:18] looks like Apache is serving it [23:44:25] petan: Except A. It doesn't work right and B: It deviates from the production configuration [23:44:36] how do you know that? [23:44:42] Ryan_Lane: I don't think squid would override the Server header [23:45:07] Ryan_Lane: So if Apache was sending back a plain jane 404 squid intercepts and returns the nice one [23:45:17] could be. [23:45:26] I haven't done much squid configuration [23:45:31] petan: Do you see anything here for that? http://noc.wikimedia.org/conf/ [23:45:32] ok I need to sleep if you know how to do that in proper way do that [23:45:55] I need to use grep, which I can't in browser [23:46:02] otherwise I can't find anything :o [23:46:22] I mean grep conf/* [23:47:09] I am tired to wget noc :o [23:47:11] oh wait [23:47:13] What's this [23:47:17] hm? [23:47:18] ErrorDocument 404 /w/404.php [23:47:26] that's what I tried now [23:47:30] and see it doesn't work [23:47:35] or where is it? [23:47:38] ah [23:47:44] that's not in our conf? [23:47:47] but looks similar heh [23:47:50] hold on [23:47:56] there is 404 in live [23:47:58] on prod [23:48:12] johnduhart: actually it is handled by apache [23:48:17] they have it in w [23:48:25] no really??!?! [23:48:33] yes really [23:48:33] :P [23:49:18] wasn't it you who told me it's deviating prod config and squid does it? :P [23:49:22] I don't see it in 1.18wmf1 though, so where is it https://svn.wikimedia.org/svnroot/mediawiki/branches/wmf/1.18wmf1/ [23:49:36] probably secret... :O [23:49:53] petan: Yes and now I just saw that line in httpd.conf on production [23:49:53] * petan would give a beer to Ryan for leaking it [23:50:09] nah it's gotta be somewhere [23:50:16] oh [23:50:18] no time, bb [23:50:20] eh? what did I leak? 
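The production-style 404 handler the discussion converges on is Apache's ErrorDocument pointing at /w/404.php; a sketch of wiring that up on the labs apaches, where the vhost file path is an assumption.

    echo 'ErrorDocument 404 /w/404.php' | sudo tee -a /etc/apache2/sites-available/wikimedia.conf
    sudo apache2ctl graceful
    # this only helps where mod_php actually executes /w/404.php; if PHP is disabled for
    # that path, Apache serves the raw source, which is the behaviour petan ran into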
[23:50:22] Probably in DocRoot for the server [23:50:41] you want to leak it :D although you didn't yet [23:50:43] DocumentRoot "/usr/local/apache/common/docroot/default" [23:50:53] oh? the 404 handler? [23:50:55] Ryan_Lane: What's in here :> [23:51:11] come on, it's open source no deal with leaking source, you said it [23:51:14] I dunno [23:51:21] Found it https://svn.wikimedia.org/svnroot/mediawiki/trunk/tools/web-scripts/404.php [23:51:21] :P [23:51:26] heh [23:51:44] it's the normal index.html [23:52:04] this one: http://www.wikimedia.org/ [23:52:39] oh. 404 was written by a volunteer I think [23:52:43] so it doesn't surprise me that it's in svn [23:53:02] The thing is where does it belong [23:53:14] now do 500 [23:53:17] Is it just copied into live and then scap'd out [23:53:19] * Ryan_Lane has no clue [23:53:39] the devs know all of this [23:53:51] YAY virt0 is booting! [23:56:33] so, some people are going to be talking to you guys about deployment-prep on the list [23:56:40] they want to try to deploy timedmediahandler [23:56:47] and need a production-like environment for doing so [23:57:16] :x [23:57:20] I'm assuming you'll want them to wait until after 1.19 is deployed? :) [23:57:37] Yes please, one hurdle at a time [23:57:41] of course [23:58:00] :) [23:58:00] welcome to my life, where everyone wants everything at the same time [23:58:03] haha [23:58:20] * hoo knows that... :/ [23:58:34] well, it's better than being bored ;) [23:58:53] mhm, I'm unsure [23:58:57] So I turned on profiling for testwiki [23:59:20] 0.00% 21.299031 1 - MediaWiki::performAction [23:59:21] 0.00% 1.149124 2 - LocalisationCache::getItem-load [23:59:34] hoo: :D [23:59:37] The regision of code responsible seems to be profiled [23:59:39] ...yay [23:59:41] you haven't yet stood awake most of the night to fix notebooks for friends who only appear then their PC is broken (I got a couple of them) [23:59:51] seems to not be* [23:59:54] johnduhart: are you doing it via the udp profiler?
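A sketch of how the per-request profiling shown above is typically switched on for one wiki: drop a StartProfiler.php next to the entry point. The file path, the array-style profiler config, and the ProfilerSimpleText/ProfilerSimpleUDP class names (the latter being what the "udp profiler" question refers to) are assumptions about the 1.18-era API, not necessarily the exact setup johnduhart used.

    # write a minimal StartProfiler.php enabling the text profiler; swap in
    # ProfilerSimpleUDP to ship samples to a collector instead of the debug output
    {
      echo '<?php'
      echo "\$wgProfiler['class'] = 'ProfilerSimpleText';"
    } | sudo tee /usr/local/apache/common/live/StartProfiler.php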