[07:49:01] I gathered some pages in https://meta.wikimedia.org/wiki/Category:Geolocation
[12:34:48] oh my god
[12:34:51] I just had an epiphany
[12:35:19] the reason Minnesotans are so nice is that you literally DO have to sit in a small room with other people for 6 months and not leave the house
[13:36:11] Good morning science people!
[13:36:56] halfak: figuring out that rc_id and rc_this_oldid are two different things is like Dr. Nick figuring out inflammable means flammable
[13:37:16] harej, lol
[13:37:26] Also "rc_this_oldid" WTF
[13:37:40] rc_this_oldid == rev_id
[13:38:02] I'd like to learn the etymology of that name.
[13:38:06] :P
[13:38:14] If I didn't know that by now, I'd resign.
[13:39:47] My script failed to catch an edit that was almost immediately reverted. I investigated.
[13:40:13] I loaded the reverts library into a terminal and found that not only did it work, it performed exactly how I expected it to. Reverted edits return a tuple; non-reverted edits return None.
[13:40:24] \o/
[13:40:32] So that meant I was feeding the script bad information.
[13:40:35] WOOO! This is the goal of every API dev.
[13:40:50] BAD INFORMATION. No. Having intuitive APIs :)
[13:41:02] I'm going to frame that quote.
[13:41:15] "not only did it work, it performed exactly how I expected it to."
[14:25:53] hey halfak!
[14:26:01] so this morning I came up with a plausible hypothesis for why Minnesotans are so nice
[14:26:03] Hey Ironholds.
[14:26:06] :)
[14:26:10] social darwinism!
[14:26:18] Yes
[14:26:30] when you're trapped in a small house with people for 6 months, the people who can't deal with other people don't get to pass their memes on
[14:26:40] If we weren't nice to each other, we'd be kicked out of the shared housing in the winter.
[14:26:51] at night, the ice weasels come
[14:27:09] ice weasels? How is there a weasel type that I don't know about?
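The behaviour harej describes above — reverted edits yield a tuple, non-reverted edits yield None — matches a checksum-based identity-revert detector. A minimal self-contained sketch of that idea (this is illustrative Python, not the actual reverts library's API; the function name and return shape here are assumptions):

```python
import hashlib


def detect_reverts(revision_texts, radius=15):
    """Identity-revert detection sketch: a revision whose text exactly
    matches an earlier revision's text "reverts" everything in between.
    Returns a list of (reverting, reverted_ids, reverted_to) index tuples;
    revisions that revert nothing produce no entry (cf. the None case)."""
    last_seen = {}  # content checksum -> index of last revision with that text
    reverts = []
    for i, text in enumerate(revision_texts):
        digest = hashlib.sha1(text.encode("utf-8")).hexdigest()
        if digest in last_seen and i - last_seen[digest] > 1:
            target = last_seen[digest]
            intermediates = list(range(target + 1, i))
            if len(intermediates) <= radius:  # only look back a bounded window
                reverts.append((i, intermediates, target))
        last_seen[digest] = i
    return reverts
```

For a history like `["a", "b", "a"]`, revision 2 restores revision 0's text and so reverts revision 1; a history with no repeated content yields an empty list.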
[14:27:39] https://s-media-cache-ak0.pinimg.com/236x/bc/09/b1/bc09b102180ec4e67ad715cd8511417b.jpg
[14:28:09] * halfak misses his weasels
[14:33:04] _o/
[14:37:08] Sleeping ferret: http://imgur.com/eWCecO5
[15:05:10] halfak, Matt Groening quote
[15:05:14] "Love is a snowmobile racing across the tundra and then suddenly it flips over, pinning you underneath. At night, the ice weasels come."
[15:05:34] although if we're exchanging pictures of adorable animals, let me grab mine of The Anti-Ferrets (tm)
[15:05:44] halfak, https://scontent-lga.xx.fbcdn.net/hphotos-xpf1/t31.0-8/11212173_10153314066436255_2708734515553746338_o.jpg
[15:05:53] those dogs collectively weigh 300lbs
[15:06:48] * guillom invites Ironholds & halfak over at #wikimedia-kawaii
[15:06:55] hahah
[15:07:40] (It exists. And we post links several times a day.)
[15:43:40] Ironholds, beard lickings!
[15:43:53] beard lickings?
[15:43:54] That photo looks like it is 0.5 seconds before being knocked over backwards
[15:43:59] haha
[16:16:00] quiddity, you're missing one hell of a coworking session
[16:17:48] ;_; I'm in an actually useful/informative meeting!!
[16:22:33] quiddity, oi!
[16:23:18] Ironholds, What?! I want to come, and will be there soon (as soon as meeting is over)
[16:23:32] the implication that our meetings are not useful ;p
[16:24:55] Awww
[16:25:12] dagnabbit! /me throws whiffleballs at Ironholds
[16:26:24] ;p
[18:13:18] Hey folks. Just setting up for the research showcase. The link to the stream should be coming out soon.
[18:13:39] I'll be presenting, so leila/lzia will be filling in on Q&A duty.
[18:27:00] o/ GLCiampaglia
[18:27:38] hola @halfak
[18:31:35] We're running a couple of minutes late.
[18:31:43] It should start soon :)
[18:31:49] Good! I'll be in the office in a few :D
[18:32:40] we're running 5 min late. sorry everyone
[18:33:45] I think we need to keep the notice about public logging.
[18:34:08] Channel is publicly logged @ http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-research
[18:34:09] Just about to get started.
[18:34:17] (I can't seem to be able to edit the topic.)
[18:34:25] Thanks guillom
[18:34:35] {{done}}
[18:34:41] Thanks :)
[18:34:48] guillom, you should be able to through chanserv
[18:35:52] Here we go.
[18:35:59] audio issues
[18:36:02] Didn't know I had ops here. Will do next time
[18:36:02] :)
[18:37:21] Alrighty. Starting now!
[18:37:53] Usual slight lag on the steam.
[18:38:13] stream, too!
[18:40:23] https://en.wikipedia.org/wiki/Paramecium
[18:42:05] "The decline graph" is much nicer than the other name I've heard for this graph (i.e. the "Oh shit graph")
[18:43:05] dang, missed the start, somehow got the time wrong, hope I didn't miss too much
[18:43:17] Nettrom: No, we started a few minutes late
[18:43:34] you're good Nettrom. And Aaron is reviewing. :-)
[18:43:51] halfak presented his biological analogy and is now talking about tech-assisted patrolling etc.
[18:44:01] cool, thanks for looping me in :)
[18:44:05] Probably nothing you haven't heard before :)
[18:46:16] can someone watching the stream tell us if the sound is good?
[18:46:29] sound is great here
[18:46:36] thanks Nettrom.
[18:47:13] very clear, might joke about halfak being clearer than I've ever heard him :P
[18:47:32] :D
[18:47:33] :D
[18:53:10] if you have questions for Aaron, I'd be happy to collect them.
[18:53:29] or feel free to ask them yourself if you are in the Hangout session.
[18:53:37] the ores URL gives me 404 not found
[18:53:38] link for the stream please?
[18:53:39] https://ores.wmflabs.org/scores/enwiki?models=reverted&revids=638307884|542215410
[18:53:54] ToAruShiroiNeko: it's in the topic :)
[18:54:38] besides the papers by the STiki author that Aaron mentioned, there were also quite a few that all used the 2010 PAN vandalism corpus for training https://meta.wikimedia.org/w/index.php?title=Special%3ASearch&profile=default&search=PAN+corpus+prefix%3AResearch%3ANewsletter%2F20&fulltext=Search
[18:56:18] Yeah, reading the topic helps :)
[18:56:34] I'm getting the same error spagewmf. I'll let halfak update us on this when he comes back here.
[18:57:09] server seems to have issues at the moment
[18:57:33] I have a hunch that it is a config issue I am not going to mess with
[18:58:44] darn, gotta run again, great presentation!
[19:01:14] https://ores-test.wmflabs.org/scores/enwiki?models=reverted&revids=638307884|542215410 is the working link
[19:01:35] (Main server is temporarily down; this is the secondary, test server.)
[19:02:03] https://meta.wikimedia.org/wiki/Wiki_labels screenshot looks different from slide screenshot, latter has more green bars
[19:02:26] spagewmf: That depends on the number of diffs you have in your set
[19:02:58] That number is configurable for each labelling project.
[19:03:20] Ask halfak about how to optimize that number based on how long people want to contribute :)
[19:05:58] Gender gap
[19:06:28] halfak: we should sit down before you leave so I can help puppetize this
[19:06:38] And now the second presentation :)
[19:06:47] One instance being down shouldn't change anything
[19:06:51] I think it's great to devise a machine classifier that many tools can make use of. But don't we lose some of the variability between classifiers? Aren't some tools' classifiers better able to identify some types of "bad" edits, while others are better at identifying others?
[19:06:57] yuvipanda you have a halfak sockpuppet?
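For anyone reconstructing the query above: the ORES scores endpoint takes a wiki, one or more models, and pipe-separated revision IDs. A small helper that builds URLs of exactly the shape pasted in this log (a sketch; only the URL layout is taken from the log, the function itself is illustrative):

```python
def ores_scores_url(wiki, models, rev_ids, base="https://ores.wmflabs.org"):
    """Build an ORES scores URL of the form seen in the log:
    /scores/<wiki>?models=<m1|m2>&revids=<id1|id2>."""
    return "{0}/scores/{1}?models={2}&revids={3}".format(
        base,
        wiki,
        "|".join(models),                       # e.g. "reverted"
        "|".join(str(r) for r in rev_ids),      # pipe-separated revision IDs
    )
```

Passing `base="https://ores-test.wmflabs.org"` reproduces the fallback test-server link mentioned in the log.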
[19:07:29] Paul__ we utilise a number of different classifiers
[19:07:39] o/
[19:07:45] Hey folks :)
[19:08:05] we hope to take feedback from tools such as ClueBot and return the processed feedback to them
[19:08:25] +1
[19:08:29] ClueBot's current algorithm relies on reverts
[19:08:37] not all reverts are done in good faith
[19:08:42] not all reverts are correct
[19:08:48] OMG PARC OMG PARC OMG PARC :)
[19:09:07] PARC?
[19:09:09] is "FX PARC" the legendary Xerox PARC?
[19:09:17] Palo Alto Research Center?
[19:09:19] nooo. FXPAL
[19:09:37] It says so on the slide
[19:09:55] ah!
[19:10:21] aaron's slides are at https://www.mediawiki.org/wiki/Analytics/Research_and_Data/Showcase#May_2015 btw
[19:10:23] I think Xerox -> Fuji Xerox, so yes
[19:10:25] Paul__ the idea here is to have a system that we the users can influence
[19:10:38] handcoding achieves just that
[19:10:43] yuvipanda: stop fanboying
[19:10:50] HaeB: these slides are not, yet.
[19:10:55] :p
[19:10:58] Noo.
[19:11:07] I wanted to respond to Paul__
[19:11:12] We want lots of classifiers!
[19:11:29] Want to build one that is *good* at SPAM? Cool! Let's get it in the service!
[19:11:32] halfak that's what you get for presenting. Too distracted by the act of presenting :p
[19:11:40] leila: https://commons.wikimedia.org/wiki/File:The_people%27s_classifier_--_Research_Showcase_(May,_2015).pdf ?
[19:11:53] It should be on the wiki, HaeB
[19:11:55] halfak: slide 31: "This software and it’s users". Not a contraction of "it is" or "it has" #grammarnazismustdie
[19:12:02] all the world is a dataset and all the men and women are mere features.
[19:12:05] https://www.mediawiki.org/wiki/Analytics/Research_and_Data/Showcase#May_2015
[19:12:13] spagewmf, I feel shame
[19:12:24] ToAruShiroiNeko, lolwut
[19:12:37] This is why I do not write literature.
[19:12:43] :)
[19:13:08] @halfak Nice presentation and great work. Awesome to finally see all these ML tools sharing data. Labeling would be a great mobile app addition.
[19:13:30] People are bored to tears when commuting to work on public transport
[19:13:36] they could hand code while doing that
[19:13:41] computermacgyver, +1
[19:13:59] Seems like design for quality control tools and labeling generally would be good for mobile.
[19:14:16] if it exists, a classifier can probably help with it
[19:14:36] Absolutely. Easy, short tasks, no problem if interrupted
[19:14:53] a typical hand coding task should take 0.5 to 1.5 minutes to complete
[19:15:04] if it is taking longer than that you are doing something wrong
[19:15:08] ToAruShiroiNeko: Great work as well (saw you listed as a collaborator)
[19:15:10] And it's OK to drop in the middle.
[19:15:18] indeed
[19:15:41] computermacgyver thanks. But it is an awesome team. It is a great pleasure to share the credit.
[19:16:19] If this is available as an external tool with OAuth that can just run on mobile
[19:16:26] Is it?
[19:16:40] We don't have a mobile focus yet
[19:16:46] yuvipanda, yup.
[19:16:51] All OAuth'd up.
[19:16:51] Link?
[19:16:53] halfak: I'm probably missing the point, but it seems A Good Thing if all the work that WMF product managers and analysts do in hand-coding for specific projects were public and shared, so yay
[19:17:07] spagewmf, good idea!
[19:17:24] yuvipanda, https://labels.wmflabs.org/gadget/
[19:17:31] Regretfully, it is hard-coded to enwiki right now.
[19:17:38] spagewmf absolutely. I would have been happy to do coding through such an interface as an academic too.
[19:17:44] It should let you switch wiki/language
[19:17:59] Yeah, if halfak had made Wikilabels 2 months earlier, I wouldn't have created my own script to handcode VE diffs :p
[19:18:01] I wonder why github is considered more "social cool happening" than Wikimedia wikis
[19:18:21] (Although it made me learn JS, so it's not a loss.)
[19:18:32] The scale of code on github is a lot more than on wiki
[19:18:45] the best part is we can harness the energy of fellow wikipedians.
It is possible to, say, have each revision be reviewed by 3 different users to minimise the bias halfak talked about
[19:18:59] but that's 3x the workload, which can be excessive for some tasks
[19:19:23] ToAruShiroiNeko: You mean 3 different users from our pool of straight while american males? :)
[19:19:28] white*
[19:19:37] guillom we can discriminate that
[19:19:44] halfak: hmm doesn't work on mobile at all
[19:19:47] (Even login)
[19:19:48] revisions could be locked to timezones, for example.
[19:19:54] ToAruShiroiNeko +1 to multiple coders to minimize bias (not everything has to be double/triple coded, just enough overlap to evaluate the agreement)
[19:19:57] yuvipanda, boo. pull requests welcome :)
[19:20:07] ToAruShiroiNeko: oh, timezones. Interesting, I wouldn't have thought of that one.
[19:20:09] halfak: yeah I'll look at it when not on phone
[19:20:29] guillom in a pool of 20k revisions I do not expect bias to stick too much
[19:20:49] if we did a 3x run on it, that's 3x20k = 60k handcodings
[19:21:03] even the most dedicated volunteer wouldn't be able to influence the bias that much
[19:21:21] it is also important to view the issue beyond damaging/not damaging
[19:21:45] the task could simply be about determining if an article belongs to a certain wikiproject
[19:22:04] ToAruShiroiNeko: sure, makes sense. I was half joking. My point is that even the best tool won't be able to completely balance systemic bias in the pool of reviewers.
[19:22:05] hand coders would be determining if the article is relevant to, say, british history
[19:22:08] likewise vs github, why don't wiki edit summaries get the love and social pressure that github commit summaries do? Do wiki editors follow "rock star article editors"? etc. etc.
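computermacgyver's point above — double-coding just enough items to evaluate the agreement — is usually quantified with an inter-rater statistic such as Cohen's kappa. A self-contained sketch for two coders (the statistic is standard; the function itself is illustrative):

```python
from collections import Counter


def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders labelling the same items:
    observed agreement corrected for agreement expected by chance.
    1.0 = perfect agreement, 0.0 = chance level, negative = worse
    than chance. Undefined when both coders always give one label."""
    assert labels_a and len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    # chance agreement: probability both coders independently pick the same label
    expected = sum((counts_a[k] / n) * (counts_b[k] / n)
                   for k in set(counts_a) | set(counts_b))
    return (observed - expected) / (1 - expected)
```

On a 20k-revision pool, an overlapping double-coded subset scored this way tells you whether the single-coded remainder can be trusted.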
[19:22:11] ToAruShiroiNeko halfak Beyond damaging/not damaging seems a great tool to validate/expand Wikidata for hard-to-automate data points
[19:22:18] guillom you are absolutely right though
[19:22:27] +1 computermacgyver
[19:22:38] halfak: I think I'll be of better use helping you with puppetizing / solidifying the infrastructure tho :)
[19:22:39] we will definitely have bias but it will become evident in an organised manner.
[19:22:45] agreed
[19:22:59] One of our collaborators (User:Ladsgroup) is digging into wikidata support.
[19:23:07] right now we do have bias, just in an unorganised and unstructured manner
[19:23:12] halfak great
[19:23:17] We found him because he built a NN classifier for predicting missing properties for Wikidata
[19:23:23] yuvipanda, +1
[19:23:29] I still want to look into mobile
[19:23:35] I can't even get the page to load :(
[19:23:38] computermacgyver we do have an interest in supplying wikidata with data
[19:23:59] halfak: have time later today?
[19:24:02] ToAruShiroiNeko great. seems like a nice use case
[19:24:17] yuvipanda, no :(
[19:24:26] Haha
[19:24:28] :D
[19:24:37] yuvipanda, tomorrow!
[19:24:38] :)
[19:24:39] The biggest challenge for us is that we want to do so much and we only have so much time :/
[19:24:41] heh, Wikidata statements with green rating bars
[19:24:42] :)
[19:24:57] Yeah. +1 ToAruShiroiNeko
[19:25:03] spagewmf, ultimate subjective alg. predicts Truthiness
[19:25:04] ToAruShiroiNeko Completely understand
[19:25:24] We also want to work on processing images at some point
[19:25:35] extracting information off of images
[19:25:41] this observation about the treatment of newbies on github sounds somehow familiar ;)
[19:26:01] Since we're all oauth'd up, it should be easy to turn labels into edit tags and categories too.
[19:26:14] Nothing on the roadmap, but I want to talk about it.
[19:26:37] Revert them all just in case!
[19:27:00] then you can identify what's in the images and apply classifiers.
That task could be to study x-rays for automatic diagnosis of diseases for some researcher, beyond the scope of what we are doing.
[19:27:18] Our project intends to enable researchers in ways we cannot even imagine
[19:27:46] ^ this
[19:27:47] What are some applications of this beyond "vandal fighting"?
[19:27:49] This is "hearing"
[19:28:13] harej really, think of any backlogged task, that's where our project can help
[19:28:18] if nothing else, to prioritise
[19:28:30] harej, auto-image tagging, adding missing properties to Wikidata, detecting article quality classes, building a good-faith newcomer classifier.
[19:28:47] identifying systematic bias
[19:29:02] also validating wikidata properties (beyond adding missing ones)
[19:29:09] identifying people violating their arbcom restrictions with new accounts
[19:29:20] image tags on wikicommons, etc.
[19:29:32] auto-categorise bad images on commons
[19:29:35] Google's define valence leaves out https://en.wikipedia.org/wiki/Valence_%28psychology%29
[19:29:36] say the Eiffel tower at night
[19:30:13] an obscure piece of case law makes all images of the Eiffel tower at night copyrighted, while morning shots are fine
[19:30:37] if you train a classifier on that, you will have this specific problem prioritised for you
[19:30:48] images that appear more likely to be at night versus morning
[19:31:27] ToAruShiroiNeko really? crazy about that case law. had no idea.
[19:31:46] computermacgyver the company that did the lighting claimed copyright and the courts granted it
[19:32:01] lol, that was a slick move for them
[19:32:11] even though the copyright on the tower and its design has expired, the lighting's hasn't
[19:32:26] The Eiffel tower of course is not a major issue, but there are other such problems that are frankly boring to deal with on a day-by-day basis
[19:32:39] various statues, for instance
[19:32:40] absolutely
[19:33:07] in some countries freedom of panorama exists to a certain degree, in others it doesn't
[19:33:21] In Japan building exteriors are copyrighted
[19:33:28] while in the US they aren't.
[19:33:40] Yeah, that one I've heard
[19:33:49] (PS hope you're not in Japan....it's crazy late)
[19:33:53] :-)
[19:33:59] I am in Brussels. :)
[19:34:38] sore no hou ga yokatta (that's for the best)
[19:34:38] I just explained how we can help lawyers with our classifier.
[19:34:48] absolutely
[19:34:57] great examples
[19:35:13] It isn't exactly where you would expect a machine learning classifier to help, but it really can.
[19:36:03] or it can just as easily help save lives with automatic early diagnosis of diseases
[19:36:25] yeah, both are great examples (you've won me over already :-) )
[19:36:31] :D
[19:36:58] I just wanted to give a better idea of the kind of impact our project can have
[19:37:05] it is of such a great scale.
[19:37:19] Agree ^_^
[19:37:20] ^ assuming we get to keep doing it and recruit an army to help us
[19:37:26] well yes
[19:37:32] this project is doomed without people
[19:37:42] General story of my life and research :)
[19:37:45] If you have questions for Lauran and Jenn, please send them along.
[19:38:04] halfak but that's the point of the handcoder. It is to interest people in our project
[19:38:14] halfak: re: current presenter, do tools like Huggle and Wiki labels show details about the user who made the edit?
those people could be teenagers that end up in the field of AI just because of us :)
[19:38:37] spagewmf, not now, but we have feature requests for it that I'm unsure about.
[19:38:54] spagewmf you can click on the diff to see the user but it's not immediately present.
[19:39:04] Handcoding as a sneaky tool for recruiting students. I like it :)
[19:39:10] There are pros and cons to showing usernames.
[19:39:36] I am building a version of the github streaks visualization for wiki edits
[19:39:41] Consider the case where the username is of Pakistani origin (echoing halfak's presentation)
[19:39:55] halfak do you think we can anonymise the username?
[19:39:57] ToAruShiroiNeko: yes, I'm thinking a "This user makes a lot of sucky edits that get reverted" overview :-) Sure to be very popular
[19:40:01] yuvipanda: You're awesome.
[19:40:17] spagewmf yes but that's also bias
[19:40:40] Is IP, is User, is Admin, etc.
[19:41:02] ToAruShiroiNeko: exactly. But maybe we want the social aspect a la github of encouraging rock star editors. Or not
[19:41:42] I think editors that consistently make bad edits ought to be blocked.
[19:41:53] guillom: I feel like edit activity visualizations are a bit of a minefield
[19:42:08] bad faith & bad edits
[19:43:03] yuvipanda: Yeah. In previous years, there has been some backlash against those. People worried about wikistalking & privacy implications, even if you're only compiling & visualizing public data.
[19:44:03] Several edit counters became opt-in for that reason.
[19:44:18] guillom: yeah. So I don't want to end up having to deal with that, as much.
[19:44:32] maybe we can graft github project coolness onto WikiProjects
[19:44:49] spagewmf: harej is working on that
[19:45:05] guillom they used to be opt-in due to german laws
[19:45:26] spagewmf: https://en.wikipedia.org/wiki/User:Harej/sandbox#Sample_WikiProject_description_page_using_potentially_inaccurate_information
[19:45:26] who are the rockstar editors on the https://en.wikipedia.org/wiki/WP:WikiProject_Breakfast
[19:45:41] spagewmf: I will have a report for that too. All part of my new automated directory that will be ready by the end of the month.
[19:45:41] spagewmf: see all the cool stuff harej's doing at Wikipedia:WikiProject_X
[19:46:02] ToAruShiroiNeko: I didn't know about the relation to German law. Interesting.
[19:46:05] spagewmf the way I would try to solve that is to obscure the usernames and make people judge whether the edits they make are good or bad
[19:46:15] Or more verbosely at https://en.wikipedia.org/wiki/Wikipedia:WikiProject_X/Newsletter/Past_issues
[19:46:17] guillom german privacy law, to be more specific
[19:46:25] it was quite an issue back in the day
[19:46:27] re moiz question, obligatory advertising for this list of "let's make a github-like wikipedia" proposals https://en.wikipedia.org/wiki/User:HaeB/Timeline_of_distributed_Wikipedia_proposals ;)
[19:47:14] ToAruShiroiNeko: Yeah, I'm slightly familiar with German privacy laws as they relate to web analytics etc. (I looked into it a bit when I investigated Piwik) but I didn't know they had an influence on Wikimedia metrics and tools.
[19:47:58] guillom the argument (from memory, so take it with a grain of salt) was that some guy was hounded by his boss for editing wikipedia during work hours
[19:48:02] guillom: ToAruShiroiNeko I guess that also related to toolserver being in Germany
[19:48:05] But toollabs isn't
[19:48:08] and the tool was making that obvious
[19:48:22] yuvipanda yes but tools were transferred from toolserver
[19:48:31] people simply didn't want to take the risk I suppose
[19:48:35] Yeah
[19:48:45] unfortunate if you ask me
[19:48:47] Either way it feels like a complicated situation
[19:48:52] Yeah I agree
[19:48:58] ToAruShiroiNeko: I guess it does make it more obvious than the contributions page. I'm not totally convinced, though (personally).
[19:49:13] guillom no one was, aside from the german judge
[19:49:20] I'm tempted to just build things and see if I get flak
[19:49:32] the information is already on the contributions page
[19:49:39] yup
[19:49:44] it's trivial to push it into excel and generate the same graph
[19:49:56] Yup
[19:50:03] that was the argument I made back then too
[19:50:12] When you outlaw analytics only outlaws will have analytics
[19:50:14] :P
[19:50:19] :)
[19:50:25] yuvipanda I think that was for guns, but yeah, same logic
[19:50:57] ToAruShiroiNeko, guillom: not sure there was a rigorous legal evaluation re european laws, but see e.g. https://meta.wikimedia.org/wiki/Requests_for_comment/X!%27s_Edit_Counter
[19:51:25] Thanks HaeB
[19:52:09] yeah
[19:52:18] seems like there is considerable controversy
[19:52:23] harej: that's very cool. FWIW I pictured "Har-edj" as a diligent editor adding "Beautiful villlage" to Indian articles. O_o
[19:52:49] My name is James Hare, which might be the opposite of an Indian name.
[19:53:14] ...If Indian names have opposites. I don't know.
[19:53:35] harej that's easy
[19:53:47] Naidni names
[19:53:55] Well played
[19:54:11] it was a pain to type it, I am glad you appreciate it.
[19:56:50] What I want to see as part of revision tagging: article tagging!
[19:56:58] In fact, I proposed an extension that does just that :P
[19:57:09] article tagging for what specifically?
[19:57:19] perhaps a new era of categorization?
[19:57:30] Wikidata could potentially be a back-end to this.
[19:57:58] Wikidata can be the ultimate output. I would not want to use it as a database though
[19:58:11] Rewrite WikiGrok with WikiLabels!
[19:58:38] There are methods for testing data with different classifiers to determine which one fits the data best
[19:58:46] ToAruShiroiNeko: I currently store each article–WikiProject pairing in a bespoke database. The database has 11 million rows.
[19:59:12] For some tasks Naive Bayes would get the job done. For others you may go for an ANN or SVM.
[19:59:52] You may even decide to go crazy with logistic regression.
[20:00:17] or you could utilise natural language processing via text mining
[20:01:04] it is better to gather information first, run a few experiments to determine the classifier, and then pick it as the model
[20:01:24] you would use this model to predict cases
[20:02:01] Looks like I categorised those geolocation pages just in time for a wiki-research-l post :)
[20:02:08] sorry, analytics@
[20:03:14] I am somewhat sad that #countervandalism people weren't that interested in the handcoder
[20:03:23] we do have an adequate number of volunteers
[20:12:39] What's the best way to make a list of revisions which created or significantly expanded pages linked to a certain Wikidata item?
[20:13:26] So far, without using dumps, it seems the easiest is to get a list of sitelinks with pywikibot, get the full list of revisions for each, and do my calculations on diff size etc.
[21:29:10] Nemo_bis: depends on what you mean by "significantly expanded".
If you can identify potential candidate revisions by diff size, you can pull rev size from the API, so you won't need the text of all revisions
[21:32:18] Well of course :)
[23:02:54] hey, DATA PEOPLE
[23:02:59] we're hiring another me - https://boards.greenhouse.io/wikimedia/jobs/66494
[23:03:09] work for search & discovery! Be paid to dig into terabytes of awesome data!
[23:03:14] have me review your code!
[23:03:28] (This is not as bad as it sounds I promise)
[23:09:20] This job sounds cool. Here is my code. https://github.com/harej/wikiproject_reports
[23:10:00] the link breaks
[23:10:10] Wrong URL!
[23:10:10] but if you'd like to do it and think you could, apply!
[23:10:21] Sorry I was trying to recite it from memory
[23:10:40] https://github.com/harej/wikiproject_scripts
[23:11:09] I enjoy it a lot, I'm just not experienced at it.
[23:14:47] Because of inexperience, I do in a day what might take Aaron's cardboard cutout two hours.
[23:33:12] harej, I mean, his cardboard cutout is better than me too!
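halfak's suggestion to Nemo_bis above — filter on revision sizes from the API before fetching any text — can be sketched like this. The MediaWiki API's `prop=revisions` with `rvprop=ids|size` returns exactly (rev_id, byte size) pairs; the growth threshold below is an arbitrary example value, not anything from the log:

```python
def significant_expansions(revisions, min_growth=2000):
    """Given (rev_id, byte_size) pairs in chronological order, return
    the rev_ids that created the page (the first revision) or grew it
    by at least min_growth bytes. Only these candidates then need
    their full text fetched for closer inspection."""
    candidates = []
    previous_size = None
    for rev_id, size in revisions:
        if previous_size is None or size - previous_size >= min_growth:
            candidates.append(rev_id)
        previous_size = size
    return candidates
```

Run over each sitelinked page's revision list, this cuts the expensive per-revision text/diff work down to the few revisions that could plausibly count as "significantly expanded".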