[00:02:27] Oh, bah [00:02:32] I can't amend an abandoned change [00:03:06] There, amended to take care of the whitespace [00:03:25] hm [00:03:41] And -1ed it just in case someone might try to merge it [00:03:49] it seems to me that we should have some abstract class [00:04:01] that abstract class should have a factory method [00:04:14] the factory method would give back the proper subclassed hook, based on the repo [00:04:19] Right [00:04:31] That makes sense [00:04:39] Note to self: python != shell script [00:04:43] right :) [00:04:48] it's a proper language ;) [00:04:51] With OOP [00:04:54] yes [00:05:36] The hardest part is figuring out which subclass to use based on the repo name [00:05:40] That would have to be configurable somehow [00:05:44] yes [00:05:52] the config class can have an array for it [00:06:18] I'd like to have some sort of rudimentary wildcard matching there [00:06:23] yes [00:06:31] So that I can say that mediawiki/* is a PHP+JS project [00:06:39] yeah, that would be nice [00:06:57] Ideally, projects could be members of multiple classes (and have multiple hooks executed) too [00:07:54] and the code for "get the current state of the repo so I can run a lint script on it" should also be factored out [00:08:09] This is kind of frustrating, if this were written in PHP I'd probably already have done it [00:08:24] {"mediawiki/*": ["php","js"] } [00:08:29] Yeah [00:08:46] "operations/puppet/*": ["puppet"] [00:08:58] then do a regular expression using the key to match against the repo [00:09:06] Some other stuff might be C [00:09:22] the factory method can return an array of class instances [00:09:40] the hook can loop over the instances, calling the lint check [00:10:05] err [00:10:13] the hook, not necessarily the lint check :) [00:10:18] Yeah [00:10:39] Krinkle and I talked about running JSLint (in Node.js) on JS code yesterday, that's what got me started [00:10:47] Obviously we'd want php -l too [00:10:53] I swear all I do is software development
anymore [00:10:59] I'm thinking we might want to have Python lint for puppet too [00:11:04] how have I been tricked into this? [00:11:21] Ryan_Lane: You wanted to run ops as a software dev project. Those are your words. Don't you see the trap? ;) [00:11:26] hahaha [00:11:28] RoanKattouw: php -l should probably be in per-commit, reliable enough. JSLint not yet. [00:11:29] indeed [00:11:38] pre-commit* [00:11:45] Krinkle: This is all post-commit [00:11:56] yeah. gerrit doesn't have pre-commit [00:11:59] I know, but doesn't git support pre-commit? [00:12:11] It's post-commit but pre-merge [00:12:15] right [00:12:23] Anything that fails lint gets an automatic -1 code review [00:12:26] pre-commit would be locally [00:12:34] git's nature [00:12:36] Well, or on the server [00:12:44] thinking of that. I need to bring up a freaking gerrit test instnace [00:12:46] *instance [00:12:48] Gerrit does actually reject certain commits [00:12:49] pre-push-pull [00:12:50] Ryan_Lane: YES PLEASE [00:12:51] so that we can actually test this stuff [00:13:09] Also, so I have a place where I can work on my other gerrit dev things in my Copious Spare Time [00:13:15] :D [00:13:20] As opposed to spending another weekend setting up gerrit and failing [00:13:33] would be cool if we could place a hook in between developer-repo "git push" and the server receiving it [00:13:36] I was thinking this exact same thing when going through manifests/gerrit.pp [00:13:43] well, this would be the already built version :) [00:13:44] "I don't need to figure out how to install it, *it's right here* " [00:13:49] hm, right [00:13:56] so like svn pre-commit, except that it's pre-push-pull since it's git.
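The repo-to-lint-type matching discussed above ([00:05:52] through [00:08:58]) could look roughly like this in Python. This is a minimal sketch, not the actual hook code; the config dict and the function name are invented for illustration:

```python
import fnmatch

# Hypothetical config mapping repo wildcard patterns to lint types,
# along the lines of the snippets pasted in the chat above.
LINT_CONFIG = {
    "mediawiki/*": ["php", "js"],
    "operations/puppet/*": ["puppet"],
}

def lint_types_for(repo):
    """Collect every lint type whose wildcard key matches the repo name.

    A repo may match several patterns, so a project can be a member of
    multiple classes and have multiple hooks executed.
    """
    types = []
    for pattern, names in LINT_CONFIG.items():
        if fnmatch.fnmatch(repo, pattern):
            types.extend(names)
    return types
```

`fnmatch` gives the "rudimentary wildcard matching" mentioned above with shell-style globs; compiling the keys into real regular expressions would work the same way.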
[00:13:57] we can work on getting the dev version later too [00:14:00] But it was the setup I was wrangling, not the compilation [00:14:07] ah [00:14:16] that's cool, then [00:14:22] Oh and for the sake of all that is holy, do not put any more .war files in the puppet repo kthx ;) [00:14:31] no plans on that :) [00:14:34] Krinkle: Right, yeah [00:14:39] Good :) [00:14:47] I've been kicking myself ever since doing it [00:15:14] ok. before I work on anything else, I'm going to work on adding a virt1000 box [00:15:19] for ldap and mysql replication [00:15:23] So yeah let's decouple the path to the .war file so it's easy to write a class that installs a dev version of gerrit [00:15:38] (and by the "us" in "let's" I mean you :D ) [00:15:41] :D [00:15:43] Yeah, have fun with that [00:15:46] I'm gonna go get some sleep [00:16:38] good night [00:16:59] Ryan_Lane: Any ideas why bots-apache1 hangs after authenticating my key? [00:17:18] it does? [00:17:22] hm [00:17:36] works for me.... [00:17:48] richs? [00:17:52] Ryan_Lane: Yea [00:18:11] it hangs? that makes no sense... [00:18:16] I've logged in before... but I had to try it about 3 times [00:18:22] try it now [00:18:32] Ah, it's caught up now [00:18:35] hmm [00:18:42] one of the hosts is overloaded right now [00:18:53] I need to continue moving instances off of it onto another hose [00:18:53] It normally gets stuck on Authenticating with public key "Rich@labs" from agent [00:18:55] *host [00:25:19] PROBLEM host: nova-production1 is DOWN address: nova-production1 CRITICAL - Host Unreachable (nova-production1) [00:27:19] PROBLEM host: master is DOWN address: master CRITICAL - Host Unreachable (master) [00:36:41] bah.
fucking live migration lost an instance [00:36:44] *my* instance :( [00:36:55] * Ryan_Lane reboots the instance [00:37:09] hahaha [00:37:22] now it won't reboot anywhere :( [00:55:39] PROBLEM host: nova-production1 is DOWN address: nova-production1 CRITICAL - Host Unreachable (nova-production1) [00:57:19] PROBLEM host: master is DOWN address: master CRITICAL - Host Unreachable (master) [01:19:09] RECOVERY host: master is UP address: master PING OK - Packet loss = 0%, RTA = 0.71 ms [01:25:39] PROBLEM host: nova-production1 is DOWN address: nova-production1 CRITICAL - Host Unreachable (nova-production1) [01:47:59] RECOVERY host: nova-production1 is UP address: nova-production1 PING OK - Packet loss = 0%, RTA = 0.79 ms [01:50:04] \o/ [01:50:07] fixed that [01:50:11] fucking noba [01:50:13] *nova [01:50:26] apparently I shouldn't try to do more than one migration at a time. [02:59:53] RECOVERY Current Load is now: OK on nova-dev2 nova-dev2 output: OK - load average: 2.31, 0.81, 0.33 [02:59:53] PROBLEM host: labs-cp2 is DOWN address: labs-cp2 CRITICAL - Host Unreachable (labs-cp2) [03:00:43] RECOVERY Current Users is now: OK on nova-dev2 nova-dev2 output: USERS OK - 0 users currently logged in [03:01:23] RECOVERY Disk Space is now: OK on nova-dev2 nova-dev2 output: DISK OK [03:02:23] RECOVERY Free ram is now: OK on nova-dev2 nova-dev2 output: OK: 88% free memory [03:03:23] RECOVERY Total Processes is now: OK on nova-dev2 nova-dev2 output: PROCS OK: 71 processes [03:05:53] PROBLEM Disk Space is now: CRITICAL on labs-lvs1 labs-lvs1 output: Connection refused or timed out [03:06:13] PROBLEM Current Load is now: CRITICAL on labs-lvs1 labs-lvs1 output: Connection refused or timed out [03:06:33] PROBLEM SSH is now: CRITICAL on labs-lvs1 labs-lvs1 output: No route to host [03:06:53] PROBLEM Current Users is now: CRITICAL on labs-lvs1 labs-lvs1 output: Connection refused or timed out [03:07:03] PROBLEM Total Processes is now: CRITICAL on labs-lvs1 labs-lvs1 output: Connection 
refused or timed out [03:10:53] RECOVERY Disk Space is now: OK on labs-lvs1 labs-lvs1 output: DISK OK [03:11:03] RECOVERY Current Load is now: OK on labs-lvs1 labs-lvs1 output: OK - load average: 0.07, 0.03, 0.01 [03:11:33] RECOVERY SSH is now: OK on labs-lvs1 labs-lvs1 output: SSH OK - OpenSSH_5.3p1 Debian-3ubuntu7 (protocol 2.0) [03:11:53] RECOVERY Current Users is now: OK on labs-lvs1 labs-lvs1 output: USERS OK - 0 users currently logged in [03:11:53] RECOVERY Total Processes is now: OK on labs-lvs1 labs-lvs1 output: PROCS OK: 79 processes [03:14:43] PROBLEM host: pad1 is DOWN address: pad1 CRITICAL - Host Unreachable (pad1) [03:17:53] RECOVERY host: pad1 is UP address: pad1 PING OK - Packet loss = 0%, RTA = 0.83 ms [03:20:33] RECOVERY host: labs-cp2 is UP address: labs-cp2 PING OK - Packet loss = 0%, RTA = 5.10 ms [03:21:53] PROBLEM host: labs-mw2 is DOWN address: labs-mw2 CRITICAL - Host Unreachable (labs-mw2) [03:52:33] PROBLEM host: labs-mw2 is DOWN address: labs-mw2 CRITICAL - Host Unreachable (labs-mw2) [04:22:33] PROBLEM host: labs-mw2 is DOWN address: labs-mw2 CRITICAL - Host Unreachable (labs-mw2) [04:52:33] PROBLEM host: labs-mw2 is DOWN address: labs-mw2 CRITICAL - Host Unreachable (labs-mw2) [05:22:33] PROBLEM host: labs-mw2 is DOWN address: labs-mw2 CRITICAL - Host Unreachable (labs-mw2) [05:52:33] PROBLEM host: labs-mw2 is DOWN address: labs-mw2 CRITICAL - Host Unreachable (labs-mw2) [06:22:33] PROBLEM host: labs-mw2 is DOWN address: labs-mw2 CRITICAL - Host Unreachable (labs-mw2) [06:52:33] PROBLEM host: labs-mw2 is DOWN address: labs-mw2 CRITICAL - Host Unreachable (labs-mw2) [07:10:18] morning [07:10:22] :) [07:22:33] PROBLEM host: labs-mw2 is DOWN address: labs-mw2 CRITICAL - Host Unreachable (labs-mw2) [07:24:03] RECOVERY host: labs-mw2 is UP address: labs-mw2 PING OK - Packet loss = 0%, RTA = 1.01 ms [07:24:13] morning [07:24:14] heh [07:24:27] well, it's 11:30 pm here, so night :) [07:24:45] stupid live migration crashed a few instances 
[07:34:53] * jeremyb has missed a lot in here. crashing for the night, see you maybe tomorrow [07:35:48] jeremyb: night [14:04:09] New review: Dzahn; "(no comment)" [operations/puppet] (test); V: 1 C: 2; - https://gerrit.wikimedia.org/r/1712 [14:32:54] ^demon: So as I told Ryan yesterday, I've been poking at some reorg so we can have PHP&JS lint hooks for mediawiki in gerrit. See also yesterday's chat log in this channel, and https://gerrit.wikimedia.org/r/#change,1794 [14:33:11] I can't take this any further because I lack Python skills [14:35:29] <^demon> Looks good other than run_project_hook being a stub :) [14:36:13] Yeah [14:36:15] Well [14:36:17] What Ryan said was [14:36:30] There should be an abstract class that that PuppetHooks class inherits from [14:36:57] And there should be a config where you indicate which hook class(es) are to be run for which repos [14:37:17] Like {'operations/puppet/*': ['puppet', 'python'], 'mediawiki/*': ['php', 'js'] } [14:38:43] <^demon> RoanKattouw: I suppose we're going to have to learn python one of these days :) [14:39:04] Oh, you don't speak Python either? [14:39:07] A kindred spirit! [14:39:28] I'm enjoying learning Python [14:39:37] Ryan was like "welcome to a better language" and I said "well we'll see about that, my editor inserts whitespace that breaks things, and according to Google there are no static functions" [14:39:49] Oh, you're learning Python too? Nice! [14:39:57] JS as well, right?
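The abstract-class-plus-factory design described above could be fleshed out roughly as follows. This is a hypothetical sketch only (all class, dict, and function names are invented), not the change under review at gerrit.wikimedia.org/r/#change,1794:

```python
import fnmatch

class LintHook(object):
    """Hypothetical abstract base class; PuppetHooks etc. would inherit."""
    def check(self, workdir):
        raise NotImplementedError

class PhpLintHook(LintHook):
    def check(self, workdir):
        # Placeholder: a real hook would run `php -l` on the checkout.
        return ["php", "-l", workdir]

class JsLintHook(LintHook):
    def check(self, workdir):
        # Placeholder: a real hook would run JSLint under Node.js.
        return ["jslint", workdir]

# Which hook classes run for which repos, keyed by wildcard pattern.
HOOK_CLASSES = {"php": PhpLintHook, "js": JsLintHook}
HOOK_CONFIG = {"mediawiki/*": ["php", "js"]}

def hooks_for(repo):
    """Factory method: one hook instance per lint type configured
    for a repo whose name matches the wildcard key."""
    return [HOOK_CLASSES[name]()
            for pattern, names in HOOK_CONFIG.items()
            if fnmatch.fnmatch(repo, pattern)
            for name in names]
```

The gerrit-side hook would then loop over `hooks_for(repo)`, call `check()` on each instance, and -1 the change when any lint run fails.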
[14:41:44] Yep [14:42:04] I like Python better so far [14:42:49] I'm not surprised [14:42:56] JS has C-like syntax [14:43:57] I don't know Python's syntax inside out, but it seems to be more intuitive to "mortals" than the C family [14:43:59] the variable stuff is weird in JS to me, with scope stuff [14:44:07] Oh, yeah [14:44:15] I mess up with that all the time, even just now [14:44:19] JS scope is particularly weird [14:44:31] C at least has sane scoping rules [14:44:34] the other day I was trying to make some JS stuff work and I said, "Leonard, I like Python better than JavaScript," and he said, "you're right" [14:44:39] hehe [14:44:53] <^demon> sumanah: Replied to you on wikitech-l. [14:45:01] but it is cool how when you're learning JS & jQuery you can make magic things happen in the browser quickly [14:45:22] thank you Chad! [14:45:35] <^demon> You're welcome. Hope I cleared it up. [14:45:45] I think so -- we'll see [15:13:50] Damianz: reviewed your change in gerrit, and it would have merged, but there is a path conflict. if you know how to, please rebase the change [15:17:25] !rebase [15:17:58] !pathconflict [15:18:10] !pathconflict is https://labsconsole.wikimedia.org/wiki/Git#Fixing_a_path_conflict [15:18:10] Key was added! [16:33:42] <^demon> Production and test are out of sync again. Surprise surprise. [16:35:07] ^demon: you know how to copy production config? [16:35:25] I tried to copy files from noc but it doesn't work much :| [16:35:34] actually it doesn't do anything [16:35:40] <^demon> You need secret magic. [16:35:47] huh [16:36:39] <^demon> The InitialiseSettings stuff uses SiteConfig magic. The current config really isn't replicatable outside the cluster. [16:36:51] hm...
[16:36:53] right [16:37:31] I found dozens of bugs so far but it's not so easy to confirm it's a bug since config is not synced [16:44:18] heh, I even had to do this for testing: class DummyConf { function siteFromDB() { return array( 'wikipedia', 'en' ); } } $wgConf = new DummyConf; [16:46:35] ^demon: I have a wgConf setup locally, it's not super hard [16:47:01] <^demon> Not super hard, but certainly not intuitive. [16:47:09] <^demon> And wmf config won't work out-of-the-box without it. [16:47:13] True [16:47:17] because of the dblist stuff [16:50:02] I set that all up for the SiteMatrix extension [16:50:10] Then that caused major problems trying to use CentralAuth [16:51:48] <^demon> I've yet to come up with a clean model I really really like for multi-site config. [16:52:12] <^demon> And if we're going to change it I want it Done Right. [19:32:43] heh ^demon I created my own global config structure and it seems to be even easier than what is now at noc [19:33:19] there is a global array that defines which extensions you want to have, and each extension has a global default preconfig [19:33:31] so deployment is just about setting the extension's name to true in local config [20:10:46] hmm [20:10:47] You are viewing this page on deployment.wmflabs.org, which might be a proxy or phishing site. This site can intercept your password; you are strongly advised to log in from en.wikipedia.org. [20:10:53] on http://deployment.wmflabs.org/en_wikipedia/w/index.php?title=Special:UserLogin&returnto=Special%3ASpecialPages [20:14:16] hm... I don't see that message [20:14:22] definitely create a different password there [20:14:26] oh, seriously :D [20:14:28] it's there [20:42:59] hexmode: simple wiki is imported, I am upgrading to new schema now [20:43:15] there was a problem in updater, I filed a bug [20:43:15] \o/ [20:43:30] I had to run populate_sha1 because update.php crashed [20:43:57] bug #?
[20:44:07] sounds like it should block 1.19 [20:44:22] !bug 33558 [20:44:22] https://bugzilla.wikimedia.org/show_bug.cgi?id=33558 [20:44:43] actually update.php works just not on big dbs [20:44:45] :o [20:45:11] simple wiki has 2 000 000+ revisions [20:45:25] it is still calculating sha1 [20:45:39] mmmm can I have some salt with your sha1 [20:48:08] btw hexmode there are some pages displaying weird... [20:48:15] in 1.19 [20:48:34] http://deployment.wmflabs.org/en_wikipedia/wiki/Special:RecentChanges [20:55:34] which ones? [20:56:44] petan: which pages are displaying weird? [20:57:16] RC [20:58:32] petan: did you run rebuildrecentchanges? [20:58:46] I ran rebuildall many times [20:58:59] actually it's running almost non stop [20:59:07] one run takes almost a day [20:59:39] still running [20:59:54] petrb@deployment-dbdump:~$ ps -ef | grep maint [20:59:56] petrb 12741 6175 33 12:36 pts/1 02:49:16 php www/en_wikipedia/w/maintenance/rebuildall.php [21:03:55] also I can't set up AFTv5 [21:04:02] it's there but doesn't show up [21:04:08] I copied config from noc [21:04:31] I wouldn't worry about that too much yet [21:04:37] ok [21:05:00] when it's at a more working state, i'm sure you can find a developer or 2 who can poke at it [21:05:09] right [21:05:16] and what is a more working state? :P [21:05:23] should I import more stuff? [21:06:04] How much have you imported now? [21:07:56] http://deployment.wmflabs.org/en_wikipedia/wiki/Special:Statistics [21:08:00] still have 40gb free [21:08:19] only a few content pages, probably should import more...
[21:08:29] I created full db of simple wiki [21:08:37] but it's now being updated to 1.19 [21:08:46] it runs slowly heh [21:10:09] you guys are eating a ton of space ;) [21:10:16] 1.2T 377G 709G 35% /var/lib/nova/instances [21:10:18] heh :) [21:10:34] I wanted to create a machine for memcached too [21:10:35] I really need to move virt1 into the cluster [21:10:38] and squid [21:10:45] memcache is no big deal [21:10:52] now it's running on apache [21:10:54] there's a puppet class for it [21:10:59] yes [21:11:00] I used it [21:11:14] I'm less worried about memory [21:11:22] but I don't know if squid could run there as well [21:11:26] on one machine [21:11:33] don't put it on one machine [21:11:33] people were complaining that labs are slow [21:11:46] now memcached and apache are both on the same [21:11:48] that's because I had moved instances off of virt4 [21:11:54] ah [21:11:59] and virt2 and virt3 were overloaded [21:12:03] right [21:12:16] though we're quickly approaching the point where we need to add more hosts for more instances [21:12:28] I'd say we can add about 30 more instances, realistically [21:12:33] ah... [21:12:45] another 30 if I bring virt1 into the cluster [21:12:49] right [21:12:59] then we start adding the cisco gear [21:13:00] I think we could kill 30 if we get an sql :D [21:13:09] and another cluster in eqiad [21:13:11] because that's the most expensive thing [21:13:14] eats most space [21:13:24] also cpu [21:13:25] yeah [21:13:31] deployment-sql is permanently on 100% [21:13:39] 2 days [21:14:07] right [21:14:12] because it is importing constantly [21:14:17] yeah... [21:14:23] and vms are terrible for IO [21:14:29] probably yes [21:14:37] Can we not have some dedicated servers set up in replication for mariadb?
[21:15:31] that's the idea, yeah [21:15:54] I'm the only one doing infrastructure work right now for labs [21:16:07] so I have to prioritize tasks :) [21:16:33] * Damianz inserts task that reads along the lines of borrow some monkeys to help [21:16:39] :D [21:16:39] :P [21:16:53] hehe I can only help remotely :P [21:17:03] sara will be here about 10 hours a week in a couple weeks [21:17:11] ok [21:17:19] andrewbogott is doing openstack development [21:17:28] (sorry for the gratuitous ping) [21:17:33] Hmmm why do I recognize that name [21:17:55] Oooh I know, I saw the launchpad entry for openstack stuff :D [21:19:35] Why did node.js have to change some stuff around with sys that broke the irc stuff :( Bah, more work than I wanted to do tonight. [21:20:55] * Damianz goes back to the hell that is trying to use javascript for anything a tiny bit complex [21:21:50] Reedy: you think I should import some more stuff to that or install more stuff? or it's good for now [21:22:11] How much have you got in [21:22:13] ? [21:22:24] A mix of some random pages will give a reasonable sample size [21:22:39] http://deployment.wmflabs.org/en_wikipedia/wiki/Special:Random [21:22:56] I will import more content pages [21:23:01] that's a good idea [21:23:12] Berth Milton, Sr. (1926 – 2005), Swedish pornographer and businessman who founded Private [21:23:17] oh [21:23:18] lol [21:23:27] that was randomly picked [21:23:44] should I delete it? [21:24:00] nah, i just found it was amusing to land on [21:24:13] wikipedia is overloaded with porn actually [21:24:22] mmm [21:24:27] commons most [21:28:25] Damianz: you are using node? [21:28:34] Damianz: did you use the packaged version we have? [21:29:42] It's running on one of my other servers atm so nope :P I had it working but the new version broke stuff, basically it's just a bot that sits in one channel and relays some stuff to another channel based on the message contents.
[21:30:16] Keep debating about just using twisted but gah getting a server+client+talking to each other in twisted is like pulling teeth. [21:32:40] ah [21:32:50] use a queue! :D [21:33:01] this is what they are made for [21:33:17] i am here [21:33:34] we really need to get a proxy set up [21:33:46] to use rather than all of these floating IPs [21:34:22] whenever we get one set up I'll start taking some of these IPs back [21:34:31] which project? [21:34:33] pageviews? [21:34:35] sure [21:34:35] yes [21:34:42] We could just have a nice haproxy box that proxies to internal IPs based on domain :P [21:34:50] yeah, that's what I want [21:35:07] well [21:35:13] I'd probably use nginx, or varnish [21:35:18] Mmmm nginx [21:36:05] I find varnish a bit of a PITA to configure for multiple domains with multiple directors based on url. [21:43:13] !terminology is https://labsconsole.wikimedia.org/wiki/Terminology [21:43:13] Key was added! [21:43:20] !terms alias terminology [21:43:20] Successfully created [21:43:35] Damianz: yeah. I'll more likely use nginx [21:58:02] ryan: can you give me a heads-up when pageviews.wmflabs.org is available so i can inform eloquince? [22:05:21] drdee: ah. right. crap [22:05:22] sorry [22:06:22] drdee: you can allocate an IP, then associate it with your instance, then create your DNS entry, now [22:21:41] seeing the behavior of labs makes me scared :-/ [22:22:28] how come special:recentchanges does not render wikitext properly? [22:22:57] that's a behaviour of mediawiki [22:23:44] here comes the interesting part hexmode: we got simple wiki [22:23:46] full db [22:23:50] ryan: okidoki [22:24:22] although it seems to be a bit misconfigured though [22:24:46] ryan: i am not a cloudadmin person :( [22:25:23] drdee: for addresses you just need to be a netadmin [22:25:37] drdee: are you getting failures in the "Manage addresses" dialog? [22:26:17] drdee: you manage dns addresses via manage addresses.
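The name-based proxy idea floated at [21:34:42]–[21:35:13] above (one public IP, routing by domain instead of a floating IP per project) might look roughly like this as an nginx vhost. Purely a sketch; the internal IP and the use of pageviews.wmflabs.org as the example name are assumptions:

```nginx
# Hypothetical: one public floating IP on the proxy box, with nginx
# routing each *.wmflabs.org name to a project's internal instance.
server {
    listen 80;
    server_name pageviews.wmflabs.org;

    location / {
        proxy_pass http://10.4.0.12;      # invented internal instance IP
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```

One such `server` block per public hostname would let the IPs mentioned above be reclaimed, since only the proxy box needs a floating IP.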
manage domains is for adding entire new dns zones [22:28:06] i see [22:28:56] it says Failed to allocate new public IP address. [22:29:17] really? [22:29:19] in pageviews? [22:29:50] the zone is called statsgrokse [22:30:32] o.O [22:30:45] how the hell did I add something to the pageviews project, then? :) [22:31:19] drdee: ok. try now, upped the quota for statsgrokse [22:31:50] works! [22:32:01] andrewbogott: the feature they are asking for in the list is painful [22:32:54] they are asking for anycast support [22:33:08] yes, behavior of new mediawiki. that's what makes me scared [22:33:32] It seems sort of... outside the scope of what we're doing now. [22:33:42] ryan: i added a hostname, but the instance name and the instance id are empty [22:33:58] Although... it wouldn't be super hard to implement would it? [22:34:24] which would mean adding multiple DNS entries to the same zone in different servers, then having anycast direct different IP ranges to different DNS servers [22:34:28] I'm not clear on whether it's hard to know where a lookup is coming from [22:34:34] anycast can handle it [22:34:49] but, you'd have to add the same record to two spots with different IPs [22:35:02] I guess really it isn't that hard... [22:35:06] hm... I must not understand. lemme read again [22:35:09] the rest is handled by the network [22:35:40] and this would be part of the floating api [22:36:05] so, when a floating IP is associated with an instance, you'd create a record in another location [22:36:14] I guess this is actually pretty easy [22:36:27] hell, we might even want to do this. let's ignore this for now [22:36:36] Ok, so it's not that we would actually mess with the lookup, we'd just put the info in place for certain backends to handle? [22:36:41] yes [22:36:53] then we'd have multiple DNS servers running [22:37:02] and different IP ranges would be sent to different DNS servers [22:37:33] which would then return different records [22:37:48] So...
couldn't a driver just do that, without any changes in the driver api? [22:37:52] yes [22:38:02] Let me respond to this [22:38:05] OK. [22:38:45] So we could add an api for configuring it at any point, it doesn't sound like it affects our current design. [22:38:54] Anyway, I will let you respond. [22:39:01] * Ryan_Lane nods [22:41:54] ryan: how long does it take for the dns to update? [22:45:14] drdee: you need to associate it with an instance [22:45:20] I did [22:45:22] otherwise it isn't pointing anywhere [22:45:26] dns is immediate [22:45:29] or should be [22:45:29] I discovered that : [22:45:40] you can try pageviews.wmflabs.org [22:46:50] drdee: it resolves, right now [22:46:57] is port 80 open? [22:47:34] it's working for me.... [22:47:39] is it not working for you? [22:47:41] yes, because when I enter the ip address it does work [22:47:49] the DNS address works for me [22:47:50] oh [22:48:03] you tried it before you added the address [22:48:09] it's in your negative cache [22:48:12] are you on a mac? [22:48:18] no, windows [22:48:36] i also tried in a different browser without a negative cache [22:48:44] the OS has a DNS cache [22:48:57] okay [22:49:14] http://www.techiecorner.com/35/how-to-flush-dns-cache-in-linux-windows-mac/ [22:49:15] every conversation with you turns into a learning experience :) [22:49:28] back in a little bit [22:49:29] heh [22:49:34] thanks for the link! [22:49:55] there's a billion moving parts in ops. takes a long time to get a good broad understanding [22:50:19] excellent! it is working [22:50:23] and then you think you understand stuff, technology change and you have to learn again from scratch :-d [22:50:41] oh, I only come to realize how little of nothing i know