[03:43:18] YuviPanda, we are not using mediawiki-utilities anymore. [03:43:38] halfak: ok! so we should get that out of requirements.txt [03:43:56] Isn't it out of there already? [03:44:14] Oh! We still need it in ORES [03:44:19] I got it out of revscoring. [03:44:44] halfak: oh, ok. so we are using it in ores. is that 'on the way out' or is it gonna be in there? [03:44:58] On the way out [03:45:14] I can basically swap it for mwapi and mwreverts :) [03:45:51] Gotta run away again. Have a good night. [03:46:16] halfak: you too! [03:46:31] * YuviPanda goes to spam more people on craigslist [14:41:08] Workers 01 and 04 are offline [14:41:42] Looks like they are just holding onto their "active" tasks [14:43:00] Looks like the log-level is INFO on worker-01 [14:43:25] Looks like the last thing that happened was a batch request for user info. [14:43:29] Weird. [14:43:37] I wonder if that request is blocking somehow. [15:37:12] halfak: ho [15:37:14] Hi [15:37:23] halfak: is it still offline? [15:37:28] Yes [15:37:33] I've been looking into it. [15:37:36] Flower is down too. [15:37:51] I think it got killed in the re-provisioning. [15:37:56] of web-01 [15:38:13] Also, Hi & Good morning :) [15:39:00] halfak: hello [15:39:07] halfak: did you strace? [15:40:59] negative. [15:41:14] I dunno what you mean by strace other than it probably stands for Stack Trace. [15:41:50] Aaah [15:41:51] So [15:42:01] ps aux [15:42:05] Lists the processes [15:42:11] Our celery parent process is stuck [15:42:17] We wanna know what it is doing [15:42:19] With [15:42:31] sudo strace -f -p [15:42:49] The kernel will tell us what system calls it is making / waiting for [15:42:52] Live [15:42:54] Very useful tool [15:44:14] I can do it in like 30mins [15:44:16] if you want [15:46:37] I'll have a look to see what I can learn. [15:48:00] I get almost no output from "sudo strace -f -p " [15:48:13] Do you get at least one? [15:48:16] A read call? 
[15:48:18] "Process 22121 attached [15:48:18] read(54," [15:48:21] Ah [15:48:25] So it is stuck somewhere [15:48:31] So now do [15:48:42] sudo lsof -p [15:48:56] That lists the open file / network handles [15:49:05] I suspect 54 will be to the redis host [15:49:21] And that the server keepalive wasn't good enough [15:49:44] What is 54? [15:49:47] A node? [15:49:49] Pid? [15:52:13] halfak: a file handle [15:52:16] Number [15:52:21] In unix everything is a file [15:52:26] Including network sockets [15:52:37] So 54 is a socket handle number [15:52:51] halfak: the output of lsof will have that [15:53:00] The file handle number [15:53:14] And the type of file and the destination if it is a network [15:53:21] Connection [15:53:36] I'm looking for the right header [15:53:45] In the lsof output [15:53:50] Ok [15:53:52] DEVICE? [15:53:56] I usually just pipe it to grep [15:54:02] For 53 [15:54:13] Since there aren't too many other 53s around [15:54:17] Hmm not sure. [15:54:24] Look for a value 53u [15:54:29] Err [15:54:31] 54u [15:54:55] So many 54 [15:54:58] There [15:55:07] I've got 54r [15:55:12] Which is just "pipe" [15:58:42] halfak: huh [15:58:49] halfak: I'll be at a laptop really soon [15:58:56] Not sure what that is then. I'll investigate [15:59:19] kk thanks dude. I'll not touch. [17:48:58] halfak|Lunch: it was the redis one again [17:49:04] I saw it was stuck on '99' recvfrom [17:49:15] and lsof said that was a connection to redis [17:49:19] I restarted redis and it was all good [17:57:33] halfak|Lunch: wait a second [17:57:39] halfak|Lunch: recvfrom is UDP [17:57:43] not tcp [17:58:52] why did I see a recvfrom on a TCP connection [17:58:55] I'm very confused now [18:03:42] halfak|Lunch: the redis restart fixed things, need to wait for them to fuck up again [18:03:54] halfak|Lunch: also the sysctl parameters are already set ( I just checked with sudo sysctl -a) [18:29:21] o/ YuviPanda [18:29:30] Seems like others should be having this problem.
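The fd-to-target mapping that `lsof -p <pid>` provides above can also be read straight from `/proc`. A minimal Python sketch, assuming a Linux host; the stuck celery worker's pid would replace our own:

```python
import os

def describe_fds(pid):
    """Map each open file descriptor of `pid` to what it points at, by
    reading the /proc/<pid>/fd symlinks -- roughly what the NAME column
    of `lsof -p <pid>` shows."""
    fd_dir = "/proc/%d/fd" % pid
    fds = {}
    for entry in os.listdir(fd_dir):
        try:
            # Sockets read as "socket:[<inode>]", pipes as "pipe:[<inode>]",
            # regular files as their path.
            fds[int(entry)] = os.readlink(os.path.join(fd_dir, entry))
        except OSError:
            pass  # fd was closed between listdir() and readlink()
    return fds

# Inspect ourselves; for the stuck worker you would pass celery's pid
# (the one strace showed blocked on read(54, ...)).
for fd, target in sorted(describe_fds(os.getpid()).items()):
    print(fd, target)
```

Here fd 54 would resolve to either a `pipe:[...]` or a `socket:[...]` entry; for sockets, the inode can then be matched against `/proc/net/tcp` (or lsof's output) to find the remote end, e.g. the redis host.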
[18:29:44] indeed [18:29:54] I'll be submitting a PR to extend our logging for ORES a tiny bit around the scoring steps. [18:30:03] Can we selectively turn up logging? [18:30:16] E.g. set ORES to DEBUG and revscoring to WARNING [18:30:59] halfak: https://github.com/celery/kombu/pull/490 might be it? [18:31:23] BROKER_TRANSPORT_OPTIONS = {'socket_timeout': 3600} [18:31:25] apparently [18:31:31] the recfrom matches what we see [18:31:33] *recvfrom [18:32:37] also https://github.com/celery/celery/issues/2464 [18:32:39] oooh [18:32:42] it's reported in the version of celery we are using [18:33:18] I'm down for more timeout settings. [18:33:30] We can set this much lower than 1 hour [18:33:33] halfak: yup [18:33:37] 30s [18:33:38] Like 30 seconds. [18:33:38] ? [18:33:40] hah [18:33:42] :) [18:33:53] I suppose this will go in ores-wikimedia-config [18:33:56] yeah. [18:34:08] But I think I'll set a default value on the dev config too. [18:34:26] ores-localdev.yaml, that is. [18:34:47] halfak: sure. [18:35:01] halfak: oh, is ores-localdev.yaml also used in ores-wikimedia-config? [18:36:41] Nope [18:37:17] ah [18:37:19] 'too' [18:37:20] I missed that [18:37:21] It's more there to (1) allow you to run a minimal ores right from the repo and (2) instruct a user on how to construct their own config. [18:37:29] I really should figure out how to get new glasses >_> [18:37:37] Do you need glasses? [18:37:40] yeah [18:37:49] I wear 'em all the time. Broke mine on the flight back here [18:38:03] Boo. That's a bummer. [18:38:22] yeah [18:38:31] and I've to figure out how the insurance system works in the US [18:38:41] which I haven't before [18:39:02] I tried interacting with flex-plan.com and of course I just wanted to pull out my eyeballs and hair and brains with a fishhook instead [18:39:35] YuviPanda, biggest protip I have is this [18:39:46] Ask the staff at the clinic to help you figure out your insurance. [18:39:59] they specialize in this.
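The 30-second setting being discussed amounts to one transport option. A hedged sketch of what it might look like; the broker URL and surrounding keys are illustrative, not the actual ores-wikimedia-config layout (the real deployment keeps its configuration in YAML, not a celeryconfig.py):

```python
# Illustrative celery settings -- NOT the real ores-wikimedia-config keys.
BROKER_URL = "redis://localhost:6379/0"  # assumed broker location

# socket_timeout bounds how long a blocking read on the redis socket may
# hang. Without it, a silently dropped connection leaves the worker stuck
# in recvfrom() forever -- the exact symptom strace showed earlier.
BROKER_TRANSPORT_OPTIONS = {
    "socket_timeout": 30,  # seconds; far below the 3600 in the kombu PR
}
```

For the record, `recvfrom(2)` is also legal on a connected TCP socket -- the source-address argument is simply ignored -- so seeing it in strace on the redis connection does not indicate UDP.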
They want your money and love when it comes from insurance. [18:40:15] ah :) [18:40:17] ok! [18:40:41] 99% of the stories of how American insurance works that I've read before I got here are of the 'you will not believe this bill this person got' variety :P [18:40:46] so I'm clearly biased [18:41:13] This is yet another administrative thing we wouldn't waste money on if we had a single-payer system. [18:42:05] When I switched away from my wonderland of grad health insurance, I asked the clinic I wanted to go to, "Can you help me make sure my insurance will pay you?" And I got an emphatic "yes, of course we will." [18:42:17] nice [18:42:36] I'm also a bit... idk, nervous [18:42:48] going to hospitals / dealing with insurance always seem to be 'adult' things I shouldn't be doing [18:43:36] lol @ YuviPanda not being an adult [18:56:05] halfak: let me know when you do the config changes? [18:56:11] halfak: I think I'll go shower and head to the office now [18:56:26] Sure. I'll have a couple PRs for you in about an hour [18:56:58] halfak: ok! [18:57:04] halfak: will be good to put this to rest :D [18:57:12] Indeed. [18:58:02] halfak: btw, putting https://github.com/wiki-ai/ores-wikimedia-config/blob/debianization/requirements.txt out here - is a WIP for now [18:58:23] halfak: that's all the (non-transitive) ores and revscoring dependencies [18:58:50] Gotcha. I'll look into getting mediawiki-utilities out of ORES too. [18:59:04] halfak: yeah, that'll be nice :) [18:59:09] * halfak finishes up his logging patch. [18:59:20] halfak: I'll try to eliminate everything except sklearn from that today [18:59:20] err [18:59:25] eliminate as in 'build and upload packages for' [18:59:34] gotcha [19:00:29] I should probably change our sklearn req. to 0.16.1 per https://phabricator.wikimedia.org/T108451 [19:00:33] Want a PR for that? [19:01:08] Oh!
I suppose you can just hardcode that in the wikimedia-config reqs [19:01:25] halfak: I haven't fully followed up the issues with that package yet [19:01:29] I should do so soon [19:01:41] kk [19:28:42] halfak: I'll admit that http://www.technologyreview.com/view/541276/deep-learning-machine-teaches-itself-chess-in-72-hours-plays-at-international-master/ has made me finally want to 'actually figure out wtf is going on with machine learning' [19:30:12] RNNs seem to have a lot of promise these days. We're using old, basic techniques compared to that. [19:30:31] I listed "Simple, supervised machine learning" as a workshop that I'd like to give at the allstaff. [19:30:40] indeed [19:30:46] I don't know the very basics [19:31:05] but I also know that there's only so much I can do - am currently digging into learning basics of networks, for example [19:31:54] Yeah. [19:32:12] This is something I run into a lot at the WMF. [19:32:24] There's been a rise in the number of highly active editors in enwiki recently. [19:32:48] "Halfak, how come you don't know what's going on there?" Man, I'm trying to get ORES stable. I can't study all the things at all times. [19:33:08] heh yeah [19:33:29] I supposed there's my hidden world of data analysis that doesn't appear in this channel all that much. [19:33:43] halfak: you're also a SPOF on that now, and I hope that the new research hire fixes that / takes some load off you [19:33:46] halfak: yeah indeed [19:33:54] E.g. right now, I'm spending a lot of time figuring out efficient ways to measure labor hours for Wikipedia editors. [19:33:57] someday I'll have as much energy as you do [19:34:51] Meh. I've got a healthy (for some definitions of healthy) feedback loop of (1) work hard, (2) get credit for hard work (3) repeat. [19:35:03] That tends to keep energy levels high [19:35:50] heh, with labs (2) doesn't happen too much :) [19:35:52] It's good to have people who appreciate the hard work and don't get upset with me when it's too much. 
[19:36:15] halfak: +1 [19:36:16] You've got a small, but adamant following who appreciates your hard work :) [19:36:34] :) I think the beers matter more as a unit of appreciation than for the alcohol [19:37:00] because usually I get https://lists.wikimedia.org/pipermail/xtools/2015-September/000264.html [19:37:12] > The "no webservice" errors are the fault of Labs. It is much more explosive than Toolserver was. [19:37:31] lol @ that. [19:37:33] * YuviPanda has been close to walking away from the clusterfuck a few times now [19:37:36] Was this person on toolserver? [19:37:40] no idea [19:37:47] but I know that his tool is running via CGI [19:38:12] anyway [19:38:13] Seriously. I have deep appreciation for toolserver admins, but that service was a pile of junk compared to labs. [19:38:42] I'm guessing that NFS was the issue here? [19:39:20] halfak: no idea what the issue was since it wasn't reported anywhere and is just a snide remark [19:39:59] Oh. So the issue was between the computer chair and keyboard. [19:40:18] possibly. but the attitude is present enough that it's frustrating [19:40:22] Didn't Toolserver™ run on some weird software that wasn't Linux? [19:41:34] yes it was Solaris and lots of proprietary software as well [19:46:26] YuviPanda, we use redis for our long-term cache too. [19:46:35] We probably want to set a timeout for those interactions as well. [19:46:39] * halfak does that [19:46:48] halfak: I wonder if we should have two redises [19:47:28] halfak: a smaller redis with no persistence for the job queue and a larger one with persistence for cache [19:47:48] YuviPanda, why not switch to RabbitMQ or something like that? [19:47:59] One of the celery defaults [19:48:27] halfak: because they're a massive nightmare to maintain and it'll make putting this in production more problematic since we don't have any rabbitmq there [19:48:37] Gotcha. [19:48:39] Makes sense.
[19:48:42] Just checking :) [19:48:50] halfak: we already use redis for the mediawiki job queue, so we have in house experience doing this thing with redis [19:48:57] halfak: yeah :) [19:48:57] +1 for Redis, we're slowly moving towards that for Fundraising and it's night n day [19:49:10] halfak: it's also far simpler than rabbitmq and that's a huge +, IMO [19:49:25] We've been using ActiveMQ, another minimally supported freak queue, and it's horrible to set up locally [19:49:38] Gotcha. [19:49:40] Redis is flexible enuf to do almost anything we'd need [19:50:02] yeah that too [19:50:04] YuviPanda, how come quarry doesn't suffer from these socket_timeout issues [19:50:10] it's very enterprisey [19:50:13] wtf [19:50:16] irccloud is losing messages [19:50:59] btw, I'm still trying to finish up the sklearn 0.16.1 packaging... Stared at a gfortran last night without making any progress. [19:51:19] awight: what version of scipy / numpy is it being compiled against? and is it the same as the python-sklearn package? [19:51:36] awight: I'm just curious if we're using the same version matrix as python-sklearn and if not why not? [19:52:07] YuviPanda: we are, but it's python-sklearn from the near future. [19:52:20] aaaah [19:52:23] that makes sense [19:52:24] from stretch? [19:52:26] or sid? [19:52:36] The real work is to rewrite the deb rules to use pybuild [19:52:53] YuviPanda: sid [19:53:20] awight: ok. [19:53:31] I guess it's probably ok since it's a stable version of numpy / scipy [19:57:29] * halfak benefits from his investment in a flexible configuration strategy [19:57:30] :) [19:58:14] I really should get off my ass and go to the office [19:58:18] BUT THIS BED IS SO COMFY [19:59:14] bring bed [19:59:31] OK YuviPanda. Both PRs are in. [19:59:40] I tested them both locally to make sure the system still works. [19:59:57] If you merge, I'll go to staging and start up a precached [20:00:19] This one too https://github.com/wiki-ai/ores/pull/92 [20:00:45] Woops. 
looks like the commit comment there wasn't very good [20:00:46] halfak: remember the submodule bump :) [20:00:48] Sorry about that [20:00:51] Oh yeah! [20:00:56] Wait... We don't care. [20:01:02] 'cause it's just the dev config. [20:01:35] apparently I wrote: [20:01:37] This is for the growing number of things that are 'deployed' to labs only - ORES, wikilabels, Quarry, Phragile, Wikimetrics... Testing is labs, production is labs, life is labs, love is labs, pain is labs, labs is moloch... [20:01:41] at some point in the last few months [20:01:48] (in https://phabricator.wikimedia.org/T107167#1488657) [20:02:11] * halfak googles moloch [20:02:44] "Moloch, [...], is the name of an ancient Ammonite god." O.O [20:03:08] halfak: it's a prominent character in Ginsberg's Howl [20:03:14] which is a very intense and interesting poem [20:03:50] I saw the best YuviPanda of my generation destroyed by madness, starving hysterical naked... [20:03:50] http://slatestarcodex.com/2014/07/30/meditations-on-moloch/ is a nice article about moloch as 'the system' and how it has properties distinct from the sum of its parts [20:05:45] * halfak runs 'fab stage' [20:06:51] * halfak waits for restart [20:08:01] * halfak figures out how to install Flower on -01 again and does that. [20:09:12] hello [20:09:25] so halfak you wanted to talk about how to handle deleted content [20:09:27] o/ White_Cat [20:09:34] I think we need to consider our options [20:09:34] White_Cat, new thoughts? [20:09:39] well a few [20:09:44] I think they have been laid out. [20:09:48] The options, that is.
[20:10:03] one other option is to have a community decision on a new user group [20:10:15] who would have access to select deleted content and no more [20:10:25] not admin level access [20:10:37] community may be more accepting of such a thing maybe [20:11:34] some logic like: if (usergroup(userid) && isinCampaign(revid)) [20:11:54] some logic like: if ((usergroup(userid) && isinCampaign(revid)) || isadmin(userid)) [20:12:01] something like that [20:12:12] How would MediaWiki know what campaign a revid is in? [20:12:20] it wouldn't [20:12:32] ores would determine this [20:12:38] How would it allow a user to know what content to show? [20:12:42] since we would operate with a copy of the deleted content [20:12:56] Oh! So ORES would have rights to access deleted content and it would use user_groups to see if a user has the right? [20:13:18] Well, we can limit a campaign by user group if we want to. [20:14:04] YuviPanda, ores-staging looks good with the new code. [20:14:21] halfak: ok let's deploy? [20:14:24] halfak yes [20:14:24] Do you think we should deploy it now, or are you going to head to the office? [20:14:30] halfak: let's deploy now [20:14:33] kk [20:14:36] with that it isn't JUST admins. [20:14:38] Will start with workers [20:14:46] Admins tend not to be the ideal choice of free labour [20:14:54] White_Cat, file a bug for limiting a campaign to a set of user_groups. [20:15:03] sure [20:15:13] I got a new cat today [20:15:17] the poor thing is terrified [20:15:23] Is it White? [20:15:24] so I am kind of distracted by that too [20:15:27] it's not, actually [20:15:41] it was abandoned [20:15:56] was choking on its own collar because it was tossed so young [20:16:09] Boo. [20:16:21] All my pets have been adoptions. [20:16:31] It had been an ongoing event for the past 3 weeks [20:16:40] among other things the cat was pregnant [20:16:44] It was really hard with the ferrets because you can only really train them when they are young and they were trained bad things.
[20:17:01] bad things like bank robbery? :p [20:17:26] Biting == play time [20:17:53] Workers deployed. [20:18:11] halfak: yay that was quick [20:18:18] Yup :) [20:18:29] Now that we're not running pip <_< [20:19:11] halfak: yup [20:20:21] I also got a 3d printer [20:20:27] I do not intend to 3d print cats though [20:21:17] Cat toys maybe [20:21:25] Best cat toy == the box it came in. [20:21:37] YuviPanda, {{done}} [20:22:10] That was our fastest deploy by far [20:22:13] \o/ [20:22:21] Most of the time was spent waiting for uwsgi to restart [20:22:38] Nooo! We lost our flower history! [20:22:51] We were at ~8.5million when it died [20:23:05] we need to get graphite going at some point [20:23:07] err [20:23:08] statsd [20:23:13] select count(*) from pagelinks where pl_title = "WikiProject_Directory/Description/WikiProject_Cannabis"; [20:23:13] Yeah. [20:23:20] is there a reason this query should take more than a few seconds to do? [20:23:24] Yes. [20:23:31] Add a where condition on namespace [20:23:38] pl_namespace, okay [20:23:39] The index is (namespace, title) [20:23:42] Yeah [20:23:54] There we go. [20:23:56] :D [20:25:24] the cat is hiding under the couch [20:25:41] I am very tempted to hide under the couch myself [20:26:06] Print a taller couch [20:26:29] halfak: ok, I'll go now [20:26:38] halfak: and yay for less painful deploys \o/ [20:26:56] this sadly will take a while: select count(*) from pagelinks where pl_namespace = 4 and pl_title like "WikiProject_Directory/Description/%"; [20:27:18] ^ Samoyed it shouldn't [20:27:28] There are 2,600 of those description pages [20:27:50] Oh. Yeah that might take a while [20:27:59] Then again, it depends on how many of them got linked to. [20:29:09] All of them have at least one link. [20:29:48] Meh.. 1 * 2600 isn't that many [20:30:10] halfak: btw, did I tell you I wrote a small script that can look at a tsv / csv and print out a mysql schema? 
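The leftmost-prefix rule behind the query fix above can be demonstrated with the stdlib's sqlite3 -- a toy stand-in, since the real pagelinks table is MySQL, but the planner logic is the same:

```python
import sqlite3

# Toy stand-in for the MediaWiki pagelinks table (schema simplified).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pagelinks (pl_namespace INTEGER, pl_title TEXT)")
# The composite index mentioned above: (namespace, title).
conn.execute("CREATE INDEX pl_ns_title ON pagelinks (pl_namespace, pl_title)")

def plan(sql):
    """Return sqlite's query plan as one string."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql)
    return " ".join(row[-1] for row in rows)

# pl_title alone is not a leftmost prefix of the index, so this scans...
print(plan("SELECT count(*) FROM pagelinks WHERE pl_title = 'x'"))
# ...but constraining pl_namespace too lets the index be searched.
print(plan("SELECT count(*) FROM pagelinks"
           " WHERE pl_namespace = 4 AND pl_title = 'x'"))
```

The first plan reports a SCAN, the second a SEARCH on the composite index, which is exactly why adding the `pl_namespace` condition made the count finish in seconds.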
[20:30:18] halfak: should make importing datasets into quarry nice and easy [20:30:29] I saw it in the scrollback. That's cool :) [20:30:33] halfak: now the question is how to specify indexes [20:30:38] since that'll most definitely be needed [20:30:44] What about ELschema style? [20:30:52] yeah I was thinking of that [20:30:58] wait does ELSchema let you specify indexes? [20:31:00] It would be great if we had a description of fields anyway. [20:31:15] It doesn't yet, but that would be a trivial modification to the scheme page [20:31:33] I wouldn't want to specify it on the field since I want to do multi-field indexes. [20:31:59] halfak: ya so I was thinking of a yaml file [20:32:05] halfak: that'll have tuples of indexing [20:32:11] e.g. "indexes": [[rev_id], [rev_page, rev_timestamp], [rev_user, rev_timestamp]] [20:32:14] so index would be a (title, type, fields_list) [20:32:21] type being unique, not null, etc [20:32:29] in the future, maybe spatial [20:32:59] so you can have a YAML file which has: 1. link to dataset, 2. link to dataset license, 3. indexes [20:33:00] Good to keep it simple for now and assess demand. Unless you think it'll be trivial [20:33:03] and feed that into this script [20:33:18] it'll be consistent and then I can do it and forget about it which I really like :D [20:33:28] then I can give multiple people rights to do this and keep the yaml files in a git repo [20:35:18] YuviPanda, do me a favor and include field definitions in the yaml. [20:35:30] yes [20:35:31] It's always good to have space for metadata. [20:35:53] so it'll be: name, url, field definitions (which will be verified / inferred), indexes, license [20:36:14] You might even ask for field types and throw an error if a field values are improperly specified. [20:36:25] But then you wouldn't benefit from the cool script you wrote. [20:39:33] halfak: yeah, but then you can also run a part of it to generate it for you :) [20:39:48] GOod point! 
[20:40:01] halfak: so you give it a url and it generates a 'skeleton' (URL, types, encoding, etc) and then you just fill in the rest [20:40:02] You might even validate your yaml against your dataset :) [20:40:16] You said 'str', but I found a NULL. [20:42:11] halfak: indeed [20:42:17] halfak: not sure how to deal with NULLs [20:42:19] I ignore NULLs [20:42:20] now [20:42:27] "NULL" [20:42:30] Heh. [20:42:40] ah :) [20:42:59] Limitation of the underlying technology. [21:55:07] YuviPanda: I have some strange news to share. [21:55:26] go on awight [21:55:39] The sklearn debuild actually works fine. Must have been some subtle system config thing on my laptop... let's try this on your build server now! [21:55:47] hah! [21:55:51] wheee [21:55:57] ores-misc-01 [21:56:03] am on phone tho [21:56:06] k [21:56:08] on way to office [21:56:54] Looks like I don't have a login, although all I get is Connection closed by UNKNOWN [21:57:10] Drop by when you get to the office! [21:59:35] Well that was a fun little power outage [22:00:06] halfak: add awight to project? [22:00:19] Will do. Which project? [22:00:54] halfak: I'm currently writing tests and tests, you will see one of them here :D [22:01:05] :) [22:01:05] o/ [22:01:15] Getting close to where I can actually start working on models with you. [22:01:29] YuviPanda, which project to add awight to? [22:01:44] I wrote more than 50 tests just for pywikibase, and more than 20 for wb-vandalism [22:01:53] halfak: the labs project giving me access to ores-misc-01; no rush! [22:01:58] Oh! [22:02:09] so many "projects" [22:02:27] https://github.com/Ladsgroup/wb-vandalism/blob/master/tests/test_revision.py [22:02:33] this is one of them [22:02:34] awight, what's your name on wikitech? [22:03:03] looks like it is Awight [22:03:05] So I did that [22:03:19] https://github.com/wikimedia/pywikibot-wikibase/tree/master/tests [22:03:26] awight, {{done}} [22:03:28] thanks! [22:03:53] Still no access. hrm... 
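A minimal sketch of the kind of script being described: column types guessed from sample CSV values, the literal string "NULL" skipped as discussed, and indexes supplied as tuples of column names (the YAML 'indexes' idea). All helper names here are hypothetical, not the actual Quarry tooling:

```python
import csv
import io

def infer_type(values):
    """Guess a MySQL column type from sample values, ignoring 'NULL'."""
    values = [v for v in values if v != "NULL"]
    if not values:
        return "TEXT"  # all NULL: nothing to infer from
    try:
        for v in values:
            int(v)
        return "BIGINT"
    except ValueError:
        pass
    try:
        for v in values:
            float(v)
        return "DOUBLE"
    except ValueError:
        return "TEXT"

def csv_to_schema(table, fileobj, indexes=()):
    """Emit CREATE TABLE (plus CREATE INDEX statements) for a csv."""
    rows = list(csv.reader(fileobj))
    header, data = rows[0], rows[1:]
    cols = ["  %s %s" % (name, infer_type([r[i] for r in data]))
            for i, name in enumerate(header)]
    stmt = "CREATE TABLE %s (\n%s\n);" % (table, ",\n".join(cols))
    for n, idx in enumerate(indexes):
        stmt += "\nCREATE INDEX %s_idx%d ON %s (%s);" % (
            table, n, table, ", ".join(idx))
    return stmt

sample = io.StringIO("rev_id,rev_timestamp,comment\n1,2015.5,hello\n2,NULL,NULL\n")
print(csv_to_schema("revs", sample,
                    indexes=[("rev_id",), ("rev_timestamp",)]))
```

A companion YAML file per dataset would then carry the rest of the metadata discussed above (source URL, license, field definitions, indexes) and feed this function.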
ssh ores-misc-01.eqiad.wmflabs, right? [22:04:09] It should be. You might need to wait for puppet [22:04:12] k [22:04:23] * halfak digs around [22:06:38] How do you tell puppet to run again? [22:07:06] I can wait 5 min, no worries [22:07:14] kk [22:07:39] awight: halfak don't need to wait for puppet usually [22:07:43] On labs [22:07:45] Hmm [22:07:57] I'm logged in. [22:09:07] YuviPanda: debuild: command not found [22:09:08] Coool! [22:09:13] Are you sure this is the right server? [22:09:15] awight: ya nothing installed yet [22:09:17] Just aptly [22:09:25] Only the Debian serving bits are there [22:09:56] k [22:10:18] Cool, I have sudo [22:10:26] Though--shouldn't we puppetize this? [22:10:31] You do! [22:10:34] awight: the building? [22:10:42] no, but provisioning the build server [22:10:59] awight: wikitech:aptly is puppet docs for the serving server [22:11:12] awight: hmm it is just a few package installs and we already have one for prod... [22:11:22] So for prod we will build packages on it [22:11:26] ok lemme not bother u with this minutae [22:11:34] awight: I'll try get it working on this host at some point [22:11:39] awight: ya just install away :) [22:13:26] YuviPanda: halfak do you know why this one failed? https://travis-ci.org/Ladsgroup/wb-vandalism/jobs/80331932 [22:14:13] I need to remove some unneeded dependencies but I'm not sure if it helps [22:14:31] Travis has been failing forever no [22:14:41] Blas (http://www.netlib.org/blas/) libraries not found. [22:14:59] Regretfully, you need BLAS for sklearn [22:15:08] So I think we should just disable travis for now [22:15:22] Unless YuviPanda/awight have a better suggestion [22:15:55] Nah, I almost have a fix ready, but shifted priorities [22:18:04] okay :) [22:26:25] halfak: https://github.com/orgs/wikimedia/people you are in this list, can you go to travis-ci.org and enable travis tests for wikimedia/pywikibot-wikibase? 
It will pass since it doesn't require any dependencies [22:27:40] I think YuviPanda should be in that list too [22:28:31] Amir1, {{done}} [22:28:33] I think [22:28:39] Thanks :) [22:29:09] I will check by adding some more tests [22:29:17] Looks like you might need to sign up a travis account [22:29:24] And enable the connection. [22:29:44] You'll need to get a token to enable github [22:30:53] I did that for my projects but for a repo in wikimedia organization, I don't have access [22:31:18] YuviPanda: 2057378 Sep 14 22:30 python3-sklearn_0.16.1-3_all.deb [22:31:40] This is lame. I'll just add you to the org. [22:31:45] There's gotta be a team for this [22:31:58] awight: woooooooooooooooo [22:32:12] * awight wipes thermometer-watching sweat from brow [22:32:33] awight: woooooooooooooo [22:32:47] :D [22:33:08] awight: we don't need to package revscoring BTW [22:33:10] I can build some more of those puppies while I'm logged in... [22:33:12] Is a submodule now [22:33:14] ah, okay [22:33:15] awight: yesss [22:33:24] awight: also halfak added 3 new dependencies [22:33:28] All trivial and pure python [22:33:29] O_O [22:33:30] Amir1, invite sent [22:33:32] ok then [22:33:40] Sorry awight [22:33:41] awight: simple pybuild should do [22:33:54] It sounds weird, but those 3 dependencies are better than the one they replaced. [22:34:05] oh thanks :) [22:34:36] It made my day [22:35:11] OK. Your repo is in the pywikibot team, so it should just work :) [22:35:12] https://github.com/orgs/wikimedia/teams/pywikibot/repositories [22:35:34] Let me know if you have any trouble [22:35:43] YuviPanda: When you get to the office, pls show me where to put the built packages [22:36:05] I wish you could have sub-orgs. [22:36:13] I want to put wiki-ai in wikimedia. [22:36:19] But keep our cool prefix [22:36:45] awight: wikitech.wikimedia.org/wiki/Aptly [22:36:58] thanks! [22:37:04] awight: I think we should also move or mirror the repos in gerrit. 
[22:37:09] I can do all the repo creations [22:37:30] no problem [22:52:25] I'm off to run some evening errands. I should swing back by in a couple of hours. [23:02:10] YuviPanda: I've published the debs I was involved in to the jessie-ores repo [23:02:33] Wooooo [23:03:04] awight: can you make a list with links so I can import to gerrit? [23:03:05] I can dig up the new dependencies and package, if that's a good next step [23:03:13] YuviPanda: sure! [23:05:34] YuviPanda: https://phabricator.wikimedia.org/T107493#1639673 [23:06:25] Awesome [23:06:57] It is gonna be another hour before I'm in the office but at least I'm at a laptop now [23:07:00] In train [23:08:21] Upside-down hours [23:08:31] heh [23:08:40] 5 to 9! [23:08:46] awight: are you using gbp? [23:08:51] eh? [23:09:03] git buildpackage [23:09:14] thought for a minute you had £ to dump [23:09:21] no, I'm using debuild [23:09:21] haha [23:09:22] ok [23:10:23] awight: I'm going to move them to git-buildpackage format [23:10:26] which is fairly trivial [23:10:33] it just uses standard branch names [23:10:34] nothing else [23:10:40] so it's just a bunch of git munging [23:12:37] excellent. I'd like to learn how [23:15:31] so you have upstream/ branches [23:15:45] and master is just on top of them + your local patches + debian dir [23:15:50] and then you do gbp buildpackage and it 'does its thing' [23:19:01] It builds the quilt patches n stuff? [23:22:12] yeah [23:24:58] awight: see https://github.com/wikimedia/operations-debs-python-stopit [23:25:17] awight: upstream tracks the upstream tag and master just has extra patches