[00:39:40] AzaToth: We should probably point out that legal is hashing out the actual (i.e.: non-draft) TOS. :-) [00:41:41] ok [00:42:07] Ryan_Lane: the domain choice appears between Password and Retype password on wikitech's Create account form. It's the same way on the old form. I'll add back the id="mw-user-domain-section" so wikitech's css hides it, but do you think that is the right place to show it? [03:04:37] Coren, you still awake? [03:04:50] If not, I'll use a loud alarm. [03:04:58] Cyberpower678: Que pasa? [03:05:33] Coren, when a script is making a file, it sets the mod to 660. How can I fix that? [03:05:42] I need 775. [03:06:02] This must be fixed in the script. [03:06:45] Well, I doubt you need executable, but you the created permissions are a combination of your umask, and the open() parameters. You can also always chmod the file after the fact. [03:06:52] What file, what conditions? [03:08:05] They're data files for the articleinfo script. They get created with 660 making inaccessible to apache, as you said, and therefore, when it gets called, it won't load. [03:08:24] What's umask? [03:09:35] Coren, ^ [03:11:03] You almost certainly want 664 not 775 then. Umask is a setting of processes that give what permissions are turned off by default. [03:11:10] * Cyberpower678 wakes Coren. BEEP! BEEP! BEEP! BEEP! [03:11:12] What language is your script in? [03:11:36] PHP. Like all of X!'s tools. :p [03:13:19] Coren, ^ [03:13:41] Cyberpower678: http://php.net/manual/en/function.umask.php [03:23:19] Thanks Coren that fixed the bug. [03:23:28] Tool is working perfectly now. [08:12:08] Coren|Sleep or someone: could you be a dear and install pyexiv2 on tools-login? [08:12:22] Or just install scons [08:12:30] And then I can install it locally [08:16:52] hi [08:17:03] so what exactly you need to install and where [08:20:37] Theopolisme ^ [08:21:57] !log tools petrb: installed pyexiv2 on -login [08:22:15] Theopolisme what do you need it for on -login? 
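The umask mechanics Coren walks Cyberpower678 through above can be sketched as follows. The conversation is about PHP (`umask()` on php.net), but the behavior is identical from Python's `os.umask`; the file path here is illustrative, not the actual tool's data file.

```python
import os
import stat
import tempfile

# A process's umask masks OFF permission bits at file creation time.
# open() requests mode 0666 for a data file; the default umask 0022
# yields 0644, but a umask of 0006 yields 0660 -- unreadable by
# "other" users such as the apache account, which is the bug above.
path = os.path.join(tempfile.mkdtemp(), "data.txt")

old = os.umask(0o006)          # simulate the problematic umask
with open(path, "w") as f:
    f.write("demo")
os.umask(old)                  # restore the previous umask

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))               # 0o660 with the umask above

# The after-the-fact alternative Coren mentions: chmod to 0664 so the
# web server can read the file without making it world-writable.
os.chmod(path, 0o664)
```

Per Coren's advice, 664 (group/world readable, not executable) is what a data file served through apache actually needs, not 775.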
:o [08:22:29] your jobs are supposed to run on exec nodes and they don't have it [08:22:41] @notify Theopolisme [08:22:41] This user is now online in #wikimedia-labs. I'll let you know when they show some activity (talk, etc.) [08:37:56] !log deployment-prep applying role::labsnfs::client on searchidx01, the lucene search indexer needs access to the dblist files [08:39:47] petan: more bot is dead again it seems :-D [08:39:54] :/ [08:39:59] but now it lives on grid... [08:40:41] [ add / remove maintainers] Mattflaschen Andrew Bogot [08:40:51] aint there... :\ [08:41:08] you need to wait for andrewbogott_afk to fix it [08:41:28] andrewbogott_afk can you add me to morebots project when you back [09:01:40] @labs-project-instances openstack [09:01:40] Following instances are in this project: nova-osm-keystone, abogott-request-tracker, nova-precise2, damian-keytest, openstack-role-dev4, openstack-role-dev5, openstack-role-dev6, [09:04:06] !log openstack petrb: install keystone-client on keystone [10:20:00] Warning: There is 1 user waiting for shell: Goosy (waiting 0 minutes) [10:33:29] Warning: There is 1 user waiting for shell: Goosy (waiting 13 minutes) [10:46:58] Warning: There is 1 user waiting for shell: Goosy (waiting 27 minutes) [11:00:27] Warning: There is 1 user waiting for shell: Goosy (waiting 40 minutes) [11:13:55] Warning: There is 1 user waiting for shell: Goosy (waiting 54 minutes) [11:27:16] Warning: There is 1 user waiting for shell: Goosy (waiting 67 minutes) [11:40:45] Warning: There is 1 user waiting for shell: Goosy (waiting 80 minutes) [11:54:09] Warning: There is 1 user waiting for shell: Goosy (waiting 94 minutes) [12:07:39] Warning: There is 1 user waiting for shell: Goosy (waiting 107 minutes) [12:14:08] !rq Goosy | addshore [12:14:08] addshore: https://wikitech.wikimedia.org/wiki/Shell_Request/Goosy?action=edit https://wikitech.wikimedia.org/wiki/User_talk:Goosy?action=edit&section=new&preload=Template:ShellGranted
https://wikitech.wikimedia.org/wiki/Special:UserRights/Goosy [12:14:36] :P [12:16:47] Damianz can u pls insert me to cluebot project :> [12:17:00] petan: why? [12:17:13] because I want to master cluebot XD [12:17:32] it's only running a php interface that's opensource [12:17:37] ah [12:17:44] I thought it is running actual cluebot [12:18:57] Nope - not yet [12:35:01] petan: also found https://flume.apache.org/ [12:35:04] (re: logging) [12:35:26] much newer [12:35:28] of course [12:36:14] hmmm puppet just hangs [12:39:31] petan: there's also Kafka, which IIRC our analytics team is also starting to use [12:39:55] hmm [12:40:15] YuviPanda I am still looking for something that is better than what I could eventually write myself :P [12:40:33] but yes this looks good [12:40:42] I need something what is relatively simple to use, but powerfull [12:40:46] yeah [12:40:55] I don't want anything what tool creators would find hard to implement [12:40:57] i'm partial to rsyslog :D [12:41:04] petan: we should just go with syslog [12:41:08] everything can write to syslog [12:41:12] as in [12:41:12] but that is local [12:41:14] every platform [12:41:19] and hard to make available online [12:41:19] petan: err, that's the point of rsyslog [12:41:24] aha [12:41:32] it's not local [12:41:40] petan: and it's also pretty simple [12:41:43] like super simple :P [12:41:49] which is good for our purposes I think [12:41:56] Kafka, Scribe etc are all a fair bit more complex [12:42:03] hmm [12:42:04] ok [12:42:04] and used for different purposes [12:42:07] and would be overengineering [12:42:09] we can test it [12:42:18] I create new instance for it [12:42:26] plus rsyslog already has debian packages :D [12:42:35] maybe I will first try it on bots [12:42:37] mmm [12:42:40] http://www.canonical.com/sites/default/files/active/Whitepaper-CentralisedLogging-v1.pdf (PDF warning) [12:42:43] that is testing [12:43:11] Who needs an alarm clock when you got IRC /YAAAAAAAAWWWN. 
[12:44:12] YuviPanda you have access to bots? [12:44:14] YuviPanda: Do you know rsyslog config? I set up an instance on tool logger, and using Python's logging module looked awful. [12:44:29] YuviPanda do you want to have root there so that you can set it up :> [12:44:29] scfc_de: no, i'm just reading through about it [12:44:30] I am lazy [12:44:34] petan: IIRC no. [12:44:38] hold on [12:44:40] sure but i'll go offline in about 1 hour [12:44:43] exam on saturday [12:44:44] (Grr) [12:44:45] eh [12:44:46] ok [12:45:02] you have access there :D [12:45:09] mmmm root! [12:45:18] sec [12:45:18] bots.wmflabs.org? [12:45:27] you need to ssh through bastion [12:45:37] bots-login [12:45:37] I think I've proxying setup [12:45:41] YuviPanda: It works very nice in that it logs something, just the "format" needs fixing. [12:45:47] @seen apergos [12:45:47] scfc_de: Last time I saw apergos they were talking in the channel, they are still in the channel #mediawiki at 6/6/2013 12:44:23 PM (00:01:24.7548880 ago) [12:45:53] @labs-info bots-syslog [12:45:53] I don't know this instance, sorry, try browsing the list by hand, but I can guarantee there is no such instance matching this name, host or Nova ID unless it was created less than 42 seconds ago [12:45:55] meh [12:46:05] wikitech is fucked [12:48:22] YuviPanda you have root now [12:48:29] it's bots-syslog [12:48:35] make log of everything you have done [12:49:03] wonderful [12:49:03] yes [12:49:07] i just logged in to check [12:49:21] I do have root, yes [12:49:32] what's wrong with wikitech petan? [12:49:36] i'm reading through docs and stuff now [12:49:38] Krenair it was lagged [12:49:44] Krenair loading 2 minutes 1 page [12:49:46] now it's ok [12:59:59] petan: can you tell me how to run queries on the database from python on tool labs? [13:02:18] Moo! [13:03:20] Coren, quacl [13:03:26] *Quack [13:03:37] lbenedix using python-oursql library I think [13:03:43] but I am no way expert on python [13:03:58] Okay... 
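The rsyslog approach petan and YuviPanda settle on above, and the "awful" default format scfc_de mentions, can be sketched with Python's standard `logging` module. The address, port, and tool name below are assumptions for illustration; UDP to port 514 is syslog's traditional transport, and rsyslog's own config decides where the lines land.

```python
import logging
import logging.handlers

# Hypothetical tool name; each tool would use its own logger/tag.
log = logging.getLogger("mytool")
log.setLevel(logging.INFO)

# Send records to a (local) rsyslog daemon over UDP. With no listener
# the datagrams are simply dropped, so this sketch is safe to run.
handler = logging.handlers.SysLogHandler(address=("localhost", 514))

# Without an explicit Formatter the syslog line is just the bare
# message -- likely the bad format mentioned above. Prefixing the
# "tag[pid]:" convention lets rsyslog split logs per tool:
handler.setFormatter(
    logging.Formatter("mytool[%(process)d]: %(levelname)s %(message)s"))
log.addHandler(handler)

log.info("job finished, %d pages processed", 42)
```

This is why rsyslog is attractive for the use case discussed: every platform and language can already speak syslog, and centralizing is a config change on the rsyslog side rather than in each tool.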
I'll try [13:04:00] thx [13:04:37] Cyberpower678: http://www.alienarena.cz/img/scrns/quack.jpg [13:07:42] petan, is it a first person breader? [13:07:55] yes [13:08:10] Sweet where can I buy the game? ;p [13:08:30] on every farm [13:17:26] lbenedix: the normal MySQLdb works, too. oursql had some problems, for example it didn't give up the evil Global Interpreter Lock, creating problems with multitasking. idk if that has been fixed. [13:22:23] JohannesK_WMDE: Is that relevant if you don't use threads? [13:23:46] scfc_de: this particular problem isn't. i had the impression that MySQLdb was generally more stable, though. i didn't find anything that oursql did better. [13:25:39] not for the things i did anyway [13:30:31] while reading http://pythonhosted.org/oursql/ again: the streaming stuff might be beneficial for importing whole wiki graphs, which i do in catgraph. but i doubt it is normally useful. and i'm pretty sure mysqldb does "real parameterization" too [13:36:01] JohannesK_WMDE: I'm not very knowledgeable about that topic, but on dbreps we're changing from MySQLdb to oursql. I think legoktm said it was more "pythonic". MySQLdb's "%s" syntax for queries certainly looks odd. What I don't like about oursql is its scarce documentation. [14:21:29] scfc_de: i never understood what "pythonic" means exactly, i'm generally not a religious person ;) coming from a c background, the %s syntax looks familiar to me. oursql looks newer and somehow more... i dunno, slick. i guess both bindings work. [14:39:28] Coren, you wouldn't mind if I created a script that would use a couple gigs of memory would you? [14:39:50] Cyberpower678 define "couple" [14:39:59] 2 or 3. [14:40:12] Cyberpower678 we won't mind but you should expect that it may take a while for it to be scheduled for run [14:40:28] I would only run it once a day. 
[14:40:43] because jobs with big requests for ram have usually 1) lower scheduling priority 2) need to wait for instance with so much free ram available [14:41:26] petan, that's ok. I can't make it any smaller than it's going to be unfortunately. [14:42:10] 1.5 GBs and the test run is not even half way through/ [14:42:12] :/ [14:42:30] np [14:42:41] * Cyberpower678 needs to think of another solution. [14:42:42] "ram is cheap" --Ryan [14:43:07] If this keeps up, it's going to crash my computer. [14:43:11] 2GB now. [14:43:17] hmm [14:43:36] Definetely need to think of another. [14:43:38] solution. [14:46:55] Fatal error: Out of memory (allocated 1873018880) (tried to allocate 16777216 bytes) in C:\Users\Maximilian Doerr\Documents\NetBeansProjects\Peachy\Includes\Wiki.php on line 727 [14:46:57] :/ [14:47:19] Cyberpower678: What does the script do? [14:47:43] scfc_de, buffers all pages with links gathers, the redlinks and removes them. [14:48:14] Come again? [14:48:22] I think I just found a solution to my problem which should drastically decrease memory usage. [14:48:47] Cyberpower678 are you sure you can't store the temporary you don't need to access often to temp instead of ram? [14:49:12] It's so simple, I can't believe I didn't think of it first. [14:49:21] * Cyberpower678 goes to work. [14:49:21] delete() ? :o [14:49:47] petan, ? [14:50:01] aye if you allocate memory inside of loop iteration, freeing it on its end is a good idea :P [14:50:25] btw Cyberpower678 you don't know c++ I suppose [14:51:07] petan, I'm learning C++ next semester for my engineering degree. [14:51:29] hmm [14:51:45] then you will discover delete and all your memory problems will disappear :P [14:51:54] petan, cool. [14:52:01] php-cli should be forbidden [14:52:10] petan, hmm? [14:52:23] running long time jobs in a language which doesn't allow memory handling is evil [14:52:47] petan, you think everything is evil. :p [14:52:52] no [14:52:55] c isn't evil :> [14:53:07] c++ is legit as well... 
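The memory fix Cyberpower678 lands on above follows a standard pattern: instead of buffering every page and its links in one multi-gigabyte structure, process one batch at a time and let each batch be garbage-collected before fetching the next. A minimal sketch, where `fetch_batch` is a hypothetical stand-in for the wiki API or database call:

```python
def fetch_batch(offset, size):
    """Hypothetical data source: (title, is_redlink) pairs, 30 total."""
    if offset >= 30:
        return []
    return [(f"Page_{offset + i}", (offset + i) % 3 == 0)
            for i in range(size)]

def redlinks(batch_size=10):
    """Stream redlink titles, holding at most one batch in memory."""
    offset = 0
    while True:
        batch = fetch_batch(offset, batch_size)
        if not batch:
            return
        for title, is_red in batch:
            if is_red:
                yield title        # stream results as they are found
        offset += batch_size       # previous batch is now unreferenced

found = list(redlinks())
print(len(found))   # 10 with the toy data above
```

Peak memory is then bounded by `batch_size` rather than by the total number of pages, which is the difference between 2 GB and a few megabytes for a job like the one described.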
[14:53:11] It is if it's a grade. [14:53:29] C isn't evil, but it is cruel and unforgiving. [14:54:12] Coren doesn't get my joke. C is evil if it's a grade. [14:54:18] I don't really think it's cruel :) it's just... syntactically close to how cpu works [14:54:27] cpu never works with objects [14:54:37] Cyberpower678 neither I do [14:54:49] Report card grade? [14:54:55] Who wants a C. :p [14:55:19] I need more english knowledge to understand this kind of joke [14:56:03] Simpsons should be enough :-). [14:56:44] scfc_de, D'oh [14:57:58] Cyberpower678: "buffers all pages with links gathers, the redlinks and removes them" -- i don't understand the purpose of this. what are you trying to do? just out of curiosity [14:58:20] JohannesK|phone, redlink handler. [14:59:34] Coren did you disable limit on jobs? [14:59:42] I just submitted like 30 of them and it didn't complain [15:00:06] but only 16 running, good :) [15:00:33] Cyberpower678: you want to find alternative targets for the red links? [15:00:49] JohannesK_WMDE, hmmm? [15:02:43] Cyberpower678: i'm just trying to understand what you want to do. [15:03:11] JohannesK_WMDE, mainly delink the nonexistent links, handle categories correctly, and image links as well. [15:03:36] So basically, carefully handle the redlink without breaking the articles. [15:03:37] delink nonexistent links? [15:03:40] they are purposefully red [15:03:51] petan, yes that's what i thought. [15:04:05] petan, why would pruposefully add redlinks? [15:04:08] to articles? [15:04:17] because that means "someone go and create this" [15:04:27] Cyberpower678: people click on it and create the page. [15:04:28] it's a requested article if it's redlink [15:04:57] Otherwise, MediaWiki would display redlinks just as normal text :-). [15:05:04] probably [15:05:37] petan, that's the beauty of the control interface, that's going to be developed. It tells the bot which redlinks to ignore. 
Not all redlinks are meant to be created, especially, if it won't satisfy the criteria of creation. [15:05:41] it's red because it is wrong, not because someone added a link to nonexisting page, but because the page doesn't exist and should exist [15:06:18] Cyberpower678 did you get successful BRFA for this task? [15:06:26] I think you should before wasting time working on a bot [15:06:29] Cyberpower678, haven't filed one yet. [15:06:39] because it very likely isn't going to be approved :/ [15:07:04] petan, fine then let's see. [15:07:25] I always first create BRFA, then waste my time... :P [15:11:23] JohannesK_WMDE, scfc_de: reasons why oursql is better than mysqldb: http://pythonhosted.org/oursql/ [15:12:07] logotkm: um, i actually posted that link before. at least to the same content [15:12:32] oh sorry, didnt read all the scrollback yet. just saw the ping [15:13:49] the first "reason" is: "real parameterization" - like i said i'm pretty sure mysqldb has that too. it just looks different - %s instead of ?. but it's not string interpolation, it's parameterization. [15:15:06] JohannesK_WMDE: mysqldb does the escaping and insertion locally [15:15:18] instead of sending query and data to the server seperately [15:15:38] valhallasw: [citation needed] [15:16:23] http://stackoverflow.com/questions/2424531/does-the-mysqldb-module-support-prepared-statements [15:20:50] hm, ok. that is stupid. [15:22:06] it is still safe though, because parameters are escaped by the library. [15:23:32] mainly i had a problem with this bug in oursql: https://bugs.launchpad.net/oursql/+bug/582124 [15:23:51] makes multithreading kind of useless. [15:24:07] but then, multithreading in python basically isn't. [15:27:45] JohannesK_WMDE: Hmmm, that says it was fixed?! 
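valhallasw's distinction above is between true server-side prepared statements (query and parameters sent separately, as oursql does) and MySQLdb's client-side escaping behind the same API. Either way, placeholders keep user input out of the SQL string. Since neither MySQL driver is assumed available here, the sketch uses the stdlib `sqlite3`, which shares oursql's `?` placeholder style:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page (title TEXT)")

# The value travels as data, not as SQL text, so quote characters in
# user input cannot terminate the statement:
hostile = "Foo'); DROP TABLE page;--"
conn.execute("INSERT INTO page VALUES (?)", (hostile,))

# The hostile title was stored verbatim; nothing extra was executed.
rows = conn.execute("SELECT title FROM page WHERE title = ?",
                    (hostile,)).fetchall()
print(rows)
```

MySQLdb's `%s` style looks like string interpolation but is parameterization at the API level too: `cursor.execute("... WHERE title = %s", (value,))` escapes client-side instead of using a server-side prepared statement, which is exactly the point made in the Stack Overflow link.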
[15:29:37] that say it was commited not fixed [15:29:49] likely no release since the commit [15:29:59] + ubuntu is using very obsolete versions of everything [15:30:09] so even when it's released it will take months for it to get to ubuntu repo [15:30:23] which will then take months for it to get to wikimedia repo [15:30:37] JohannesK_WMDE: well, you can do async I/O, but it's still painful for the gains you get [15:30:39] so I would expect this getting fixed on wikimedia labs in a year or two :) [15:30:39] or to the toolserver. yes. [15:31:43] JohannesK_WMDE is it possible to install alternative python library together with oursql [15:31:51] or these 2 conflict [15:32:53] JohannesK_WMDE: you can always install the most recent version in a virtualenv [15:34:26] valhallasw: do you mean multithreading is painful in general, or in python specifically? it does give benefits even in python if you have to wait for multiple slow things like network/disk. but python doesn't really run several bytecode threads in parallel because of the GIL. the interpreter code isn't thread safe. [15:34:38] JohannesK_WMDE: in general, and in python specifically ;-) [15:35:12] JohannesK_WMDE: the advice from the python community generally is 'don't use threads but something like twisted' [15:35:27] basically pre-emptive multitasking [15:35:35] MaxSem: I have now set up a small demonstration rados instance on maps-ceph1 and configured maps-tiles4 to use it. [15:35:40] eh, sorry, cooperative multitasking [15:35:41] i dunno. multithreading is what it is :) i haven't noticed any special painfulness with python threading though. it's just less useful. [15:35:44] weeee [15:35:44] * valhallasw 's brain is fried [15:36:04] I have also updated the puppet scripts on that instance to reflect the use for multiple tileservers [15:36:19] petan: oursql isn't in Ubuntu or Debian. [15:36:26] uhhhh cooperative multitasking, right. 
they suggest that because that's basically the best thing python threading can do, valhallasw :) [15:36:47] JohannesK_WMDE: and because cooperative multitasking is fine for IO bound tasks :-) [15:36:56] yes. [15:37:59] maybe they also think: if you're trying to do cpu-bound stuff, you shouldn't be using an interpreted language like python anyway, but a real programming language. ;) [15:38:30] they might be right :p [15:39:17] Unfortunately git diff on the puppet dir gives me too much noise, as it is based off of a current snapshot of puppet and that seems to interfer with pulling in your changes from gerrit [15:40:08] awesome! I'm currently working on annoying the ops to make this project move forward [15:40:20] Ah great :-) [15:41:47] We should probably define some form of tests, or minimal requirements, to see what needs to be shown for it be considered [15:42:02] scfc_de ah [15:42:12] hmm is that pip version fixed then? [15:44:03] apmon: an openstreetmaps tiles server thing on labs? [15:44:41] apmon: might be interesting for the limes map: http://tools.wmflabs.org/render-tests/erdbeer/limes/ [15:48:16] Well, there is are two things here. [15:48:53] petan: Don't know. [15:48:55] One is to move a tileserver into production, for use in wikipedia, wikipedia-mobile and presumably any other wikipedia project [15:49:28] and then there is an idea to get an openstreetmap db into labs, for people to play with the data and use it directly in their tools [15:51:34] that is cool. i just talked about integrating the limes map into wikipedia articles. we can't really do that currently because the load would likely be too high on the tileserver as well as on toolserver/toollabs - or so i was told. a tile server on labs for wikipedia projects would be a good thing. [15:53:44] limes map? [15:56:31] yes, an openstreetmap-based app that shows roman military installations along the limes. it is time-based, so you can kind of see the roman empire's rise and fall with animation. 
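The GIL point JohannesK_WMDE and valhallasw trade above can be demonstrated directly: CPython runs one bytecode thread at a time, but blocking calls (network, disk, `sleep`) release the GIL, so IO-bound work still overlaps. Here `time.sleep` stands in for a slow network fetch:

```python
import threading
import time

def slow_fetch(i, results):
    time.sleep(0.2)            # "network wait" -- the GIL is released here
    results[i] = i * i

results = {}
threads = [threading.Thread(target=slow_fetch, args=(i, results))
           for i in range(5)]

start = time.monotonic()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

print(sorted(results.values()))   # [0, 1, 4, 9, 16]
# Five 0.2s waits overlap into roughly 0.2s total, not 1.0s:
print(elapsed < 0.5)
```

Replace the `sleep` with a CPU-bound loop and the speedup disappears, which is why the advice for compute-heavy work is multiple processes (or another language) rather than threads.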
here's the link again: http://tools.wmflabs.org/render-tests/erdbeer/limes/ [15:56:39] MaxSem ^ [15:58:24] https://en.wikipedia.org/wiki/Limes [16:03:49] can somebody please help me with cron. I have already read the help pages about cron [16:07:20] JohannesK_WMDE: I guess it depends on how you integrate it. [16:07:46] OSM maps are already integrated into many wikipedias off of the toolserver tileserver [16:07:56] and the load that generates is perhaps surprisingly low [16:09:47] Pyfisch: What isn't working? [16:10:18] scfc_de: the target cron file (Pyfhon script) is not executed. [16:11:17] Is your installed cron file viewable somewhere? [16:11:17] apmon: afaik the tileserver on the toolserver is one of the more resource-consuming things. maybe not cpu, but ram and disk. [16:13:04] JohannesK_WMDE: It has its own server, so doesn't affect the rest of the toolserver [16:13:15] Oh, I guess, it does use the main toolserver disk system [16:13:54] which does consume quite a bit of disk space on the disk array for tiles [16:15:48] i don't know which server the main thing is running on. there is something map-related running on ortelius though, which is a normal web server [16:16:18] apmon: If it's using NFS, maybe also that. When the dumps project is bunzip2ing enwiki, it can be quite annoying :-). [16:16:26] Pyfisch: Problem solved? [16:22:39] scfc_de: do you mean the cron table or the target script? [16:23:14] Warning: There is 1 user waiting for shell: Dbeeson (waiting 0 minutes) [16:24:00] Pyfisch: The cron tab to start ("crontab -l > /tmp/pyfisch", for example) [16:24:41] I use only "crontab -e" [16:25:17] Whatever. Can you save it to disk somewhere or put it on pastebin.com? 
[16:25:58] * * * * * python /data/project/userdata/FischBot/wikiversitybot.py > /data/project/userdata/FischBot/log/wikiversitylog.txt [16:26:07] it is only this line and some comments before [16:27:06] "cannot access /data/project/userdata/FischBot/wikiversitybot.py: No such file or directory". Is your tool really called "userdata"? :-) [16:31:41] scfc_de: Yes userdata is a directory to create projects without extra service account. This is used not only by my bot. /data/project/userdata/FischBot/ should exist. On which host you are trying it? [16:32:22] Pyfisch: Ah! You're talking abouts Bots? [16:32:34] scfc_de: yes. [16:34:48] Add " 2>> /data/project/userdata/FischBot/log/stderr.txt" to the end of the cron tab and wait a minute. Then look at what /data/project/userdata/FischBot/log/stderr.txt says. [16:34:59] cron tab *line*. [16:35:20] I. e. "* * * * * python /data/project/userdata/FischBot/wikiversitybot.py > /data/project/userdata/FischBot/log/wikiversitylog.txt 2>> /data/project/userdata/FischBot/log/stderr.txt" [16:36:43] Warning: There is 1 user waiting for shell: Dbeeson (waiting 13 minutes) [16:40:00] aude: I worked on your patch a bit and am submitting it right now. It seems to work better but now I'm getting a timeout in one of the wikidata components. [16:40:18] aude: Probably we should merge this patch (or something like it) and then you can handle the timeout as a separate issue. [16:41:33] "* * * * *" is rarely a good thing; anything that needs to be fired up that often should be made continuous and sleeping for the intervals (to avoid the startup cost being paid over and over) [16:42:41] Coren: it is only for testing [16:42:57] Pyfisch: So you saw stderr.txt and the error message? [16:43:06] yes python error [16:43:38] we will have to wait also my bot is not running it is probably pywikipedia error [16:45:08] It said "pkg_resources.DistributionNotFound: httplib2". Is that a required package for pywikipedia? 
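Coren's advice above ("anything fired up that often should be made continuous and sleeping") avoids paying interpreter startup and import costs every minute. A minimal sketch of that loop shape, where `run_once` is a hypothetical stand-in for one pass of the bot's work; the demo interval and run cap exist only so the sketch terminates:

```python
import time

def run_once():
    """Hypothetical stand-in for one pass of the bot's work."""
    return "ok"

def main_loop(interval, max_runs=None):
    """Run continuously, sleeping in-process between passes instead of
    being re-spawned by a '* * * * *' crontab every minute."""
    runs = 0
    while max_runs is None or runs < max_runs:
        run_once()
        runs += 1
        time.sleep(interval)
    return runs

runs = main_loop(interval=0.01, max_runs=3)   # demo values
print(runs)   # 3
```

In production `max_runs` would be `None` and `interval` something like 60, with cron reduced to an `@reboot` (or a grid job) that starts the process once.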
[16:47:14] Pyfisch: You probably need to ask a Bots root to install python-httplib2. [16:48:17] scfc_de: I am using the standard pywikipedia files which are updated daily. They always ran without problems *grr* [16:49:28] That's why I like proper releases :-). [16:50:05] andrewbogott: fine with me [16:50:12] Warning: There is 1 user waiting for shell: Dbeeson (waiting 27 minutes) [16:50:25] i've been having trouble with puppet all day [16:50:40] it freezing etc. nothing to do with my patch or the issues [16:52:53] oh [16:54:23] scfc_de: maybe it is not the fault of pwb, it can be the fault of labs. pwb always used that module like legoktm said me in #pywikibot. Was python on labs reinstalled? [16:54:35] Pyfisch: actually yeah, it was [16:54:40] that might be it [16:54:47] petan: ping [16:55:18] andrewbogott: puppet gets stuck on info: Loading facts in /var/lib/puppet/lib/facter/default_gateway.rb [16:55:33] * aude be patient and see if it ever finishes [17:03:46] Warning: There are 2 users waiting for shell, displaying last 2: Dbeeson (waiting 40 minutes) HanneleOlenius (waiting 8 minutes) [17:17:15] Warning: There are 2 users waiting for shell, displaying last 2: Dbeeson (waiting 54 minutes) HanneleOlenius (waiting 21 minutes) [17:30:45] Warning: There are 2 users waiting for shell, displaying last 2: Dbeeson (waiting 67 minutes) HanneleOlenius (waiting 35 minutes) [17:36:07] aude, still stuck? I'm running a test on a fresh instance and so far it's doing what I'd expect. [17:44:14] Warning: There are 2 users waiting for shell, displaying last 2: Dbeeson (waiting 81 minutes) HanneleOlenius (waiting 48 minutes) [18:01:57] hey. My project admin said he added us to the administrators group but I don't have sudo on the machine [18:02:16] ptarjan is not allowed to run sudo on hhvm-mw. This incident will be reported. [18:02:17] ptarjan@hhvm-mw:~/dev/hiphop-php$ [18:02:19] what should I do? [18:03:42] ptarjan, what project? 
[18:04:27] andrewbogott: performance [18:04:28] ? [18:05:56] ptarjan, the sudo policy in that project lists a few users: khorn, preilly, Asher, Demon. It doesn't have any wildcards. [18:06:07] So you need your project admin to add you, or to add 'All' to the policy [18:07:14] andrewbogott: awesome, thanks. Emailed [18:07:39] ptarjan, if you're an admin you can do it via https://wikitech.wikimedia.org/wiki/Special:NovaSudoer [18:07:44] ptarjan, of course, you /are/ a project admin, so you could change the sudo policy yourself :) [18:07:57] No Nova credentials found for your account. [18:08:05] relogin [18:09:46] yay! [18:09:47] thanks guys [18:14:54] <^demon> ptarjan: Whoops, I totally overlooked that page. [18:15:13] <^demon> I hadn't done it in awhile, and for some reason thought giving you project admin was enough. [18:15:45] ^demon: np [18:15:48] ^demon: are you Chad btw? [18:15:54] <^demon> Yep [18:15:59] great ;) [18:16:40] <^demon> So like I said on-list, we've got http://hhvm.wmflabs.org/ allocated now. [18:16:53] <^demon> So pretty easy to put that in front of hhvm on whatever port we want to use for testing. [18:18:35] ^demon: that's great [18:18:42] ^demon: you don't think we'll need any other configs? [18:18:44] ^demon: rewrite rules? [18:19:03] ^demon: virtual hosts? [18:19:11] <^demon> Just some simple ones for our usual /wiki/$foo -> /w/index.php?title=$foo [18:19:20] <^demon> We can add as many hostnames to that public IP as we want. [18:19:34] <^demon> https://wikitech.wikimedia.org/wiki/Special:NovaAddress - this is where you can add them [18:19:36] ok, then I'll have to do that one rewrite rule in the config.hdf [18:19:59] ^demon: does that hit apache? [18:20:07] when I use the single node mediawiki set up in labs I can't run any maintenance scripts [18:20:34] <^demon> ptarjan: Yes [18:20:34] <^demon> manybubbles: Grr, what error? 
[18:20:53] PHP Notice: Undefined index: SERVER_NAME in /srv/mediawiki/LocalSettings.php on line 30 [18:20:57] ^demon: should we tear down apache and hit hhvm directly? that'll be the best for perf [18:21:14] <^demon> Can do [18:23:27] <^demon> manybubbles: Working around it. [18:24:44] <^demon> // SERVER_NAME isn't set from command line [18:24:45] <^demon> // $wgServer = "//" . $_SERVER["SERVER_NAME"]; [18:24:45] <^demon> $wgServer = "//solr-mw.instance-proxy.wmflabs.org"; [18:24:50] <^demon> That's what I did. [18:25:38] k. is this something we should fix in the puppet config so it just works? [18:25:46] so far everything else has just worked really well [18:25:52] <^demon> Probably. Trying to remember who did this. [18:27:14] <^demon> I think andrewbogott :) [18:27:55] manybubbles, I see that notice frequently but it's a notice not an error… do you have something it's sactually breaking? [18:28:47] I'm sure my stuff is broken on my own but I expected the first notice to be interesting. I'll just ignore it then. [18:29:01] <^demon> It broke his command line script. And without an isset() guard on it or something it'll result in $wgServer being set wrong. [18:29:29] I'm pretty sure that it's a red herring, but… will look after the meeting [18:29:30] <^demon> Considering you already know labs_mediawiki_hostname, you could just set that to a string at install time. [18:30:25] petan: are you here? [18:30:26] <^demon> Anyway, I'm out to lunch and then a dr's appt. Later folks. [18:32:16] no [18:32:25] Pyfisch what is up [18:32:28] :o [18:34:06] petan: thats bad, that you are not here. otherwise I would have asked you if you have changed something on bots python httplib2 is now missing, this module is required for the pywikibot [18:34:22] on bots? [18:35:43] petan: yes bots--login [18:35:49] * bots-login [18:36:21] done [18:36:25] ptarjan, I've been watching hiphop@lists.wikimedia.org and noticed you mentioned private keys propagating... 
You didn't upload your SSH private keys did you? [18:36:43] Krenair: no, my public key [18:36:50] took about 5 mins to let me log in with it [18:36:55] ok :) [18:37:20] but thanks for checking [18:40:24] Coren, petan: Could you 'for JOBID in 234932 234935 234908 234853 234891 234925 234898 234902 234903 234904 234905; do qacct -j "$JOBID" | mail -s "$JOBID" scfc; done' on tools-master, please? [18:40:35] yes [18:40:56] in meeting [18:43:44] scfc_de done [18:46:14] petan: thanks for fixing it [18:46:28] yw [18:51:37] petan: Thanks! [18:53:39] * Jasper_Deng pokes petan [18:53:50] ? [19:02:22] New review: Tim Landscheidt; "In the process of resolving the merge conflict, you deleted the actual typo fix :-). I'll upload a ..." [labs/toollabs] (master) - https://gerrit.wikimedia.org/r/65643 [19:08:14] New patchset: Tim Landscheidt; "Fix typo in jsub." [labs/toollabs] (master) - https://gerrit.wikimedia.org/r/67288 [19:30:15] ok, now… manybubbles, are you still running into puppet trouble? [19:30:44] not really - if I ignore the NOTICE: error then everything works fine for me [19:32:23] cool :) [19:32:35] * andrewbogott looks at suppressing that warning [19:33:16] Coren: https://wikitech.wikimedia.org/wiki/New_Project_Request/Wikisource [19:33:45] Coren: I commented on the talk page, but I'm not sure if that user will see it. [19:34:34] Ryan_Lane: I think he did since he has requested access to tools since [19:34:43] ah. cool [19:35:32] my rss feed application wasn't updating feeds, so I just saw it. heh [19:56:33] Coren / petan do you know if anything changed to /data/project/pywikipedia recently? Is it updated nightly? Or? [19:56:51] valhallasw I didnt even know it existed [19:57:12] valhallasw: I don't know who maintains that. [19:57:20] :D [19:57:24] It's also not an actual project... [19:57:29] oh? [19:57:33] who created it then... [19:57:48] it maybe was project and someone removed it from ldap? 
there is no such folder, valhallasw [19:58:41] valhallasw: Bots? [19:58:53] aha [19:59:01] in that case it is likely outdated [19:59:02] ugh, that's why [19:59:04] * valhallasw feels silly [19:59:23] of course, the mail clearly states 'bots-bnr1' [19:59:23] you shouldnt use it at all [19:59:24] >_< [19:59:30] it's not me :P [19:59:48] let me ssh to bots then [20:26:50] manybubbles: https://gerrit.wikimedia.org/r/#/c/67311/ [20:33:42] andrewbogott: Thanks! [21:06:14] hii [21:18:49] Reedy: can I haz http://meta.wikimedia.org/wiki/Steward_requests/Permissions#Ebraminio.40test.wikidata please? :P It would be great for developing this gadget http://www.wikidata.org/wiki/MediaWiki:Gadget-Merge.js [21:19:19] I was about to say I'm not a steward ;) [21:20:40] Okay :) [21:21:05] But as it's a testwiki.. [21:23:48] ebraminio: done [21:25:49] Reedy: Thanks a lot! [21:30:09] Reedy: Can I copy http://test.wikipedia.org/wiki/MediaWiki:Robots.txt on that wiki? [21:30:32] I am asking if is okay [21:31:53] valhallasw: iirc russblau set up a cronjob to update it nightly [21:35:29] Reedy: Done with a bit being *BOLD* :P [21:41:16] Warning: There is 1 user waiting for shell: Plavi (waiting 0 minutes) [21:52:29] Did anything eventful happen ~3.5 hours ago? [21:54:37] Warning: There is 1 user waiting for shell: Plavi (waiting 13 minutes) [21:55:29] legoktm: ok, thanks [22:08:07] Warning: There is 1 user waiting for shell: Plavi (waiting 27 minutes) [22:21:31] Warning: There is 1 user waiting for shell: Plavi (waiting 40 minutes) [22:34:56] Warning: There is 1 user waiting for shell: Plavi (waiting 53 minutes) [22:48:26] Warning: There is 1 user waiting for shell: Plavi (waiting 67 minutes) [22:53:14] !rq Plavi [22:53:14] https://wikitech.wikimedia.org/wiki/Shell_Request/Plavi?action=edit https://wikitech.wikimedia.org/wiki/User_talk:Plavi?action=edit&section=new&preload=Template:ShellGranted https://wikitech.wikimedia.org/wiki/Special:UserRights/Plavi