[05:50:06] TimStarling: I wanted to take html5depurate for a spin, but I gave up after about a dozen apt-get invocations.
[05:50:51] In the Total Perspective Vortex in the Hitchhiker's Guide to the Galaxy, the whole of creation is extrapolated from a fairy cake. Java is the same way. You somehow need an entire ecosystem of tools or it just scoffs at you for having the wrong computer.
[05:52:01] yeah, I didn't get around to writing install docs just yet
[05:52:50] although I've got some notes about packages you might need
[05:53:10] just for runtime though, no notes for building
[05:53:35] but you shouldn't need to install all the libraries separately, maven should do that
[05:56:53] if you install openjdk-7-jdk and maven2 you shouldn't be too far away
[05:57:12] I got that far, but I was squeamish about letting it download things
[05:57:33] so I tried to stack the deck by installing the libraries I thought it would need
[05:57:38] yeah, it downloads a *lot* of things
[05:57:50] $ du -sh ~/.m2/
[05:57:51] 44M	/home/tstarling/.m2/
[05:58:24] I have it set up in a chroot, just for building java things
[06:00:43] running it is another matter, I have uncommitted scripts for running it
[06:01:09] the long-term plan is to put it all into a single .deb package with init scripts and everything
[06:02:44] https://github.com/wikimedia/operations-debs-kafka/tree/debian/debian should be useful
[06:03:02] "This package is created using a custom build system based on Makefiles instead of the standard sbt/gradle build system used by upstream. The reason for this is to satisfy debian policy that no internet connection should be required during package building as well as security concerns about the downloaded JAR files by sbt"
[06:03:16] I think they were just hoping ottomata would give up :P
[06:03:21] if you run "mvn dependency:build-classpath" then that will give you a classpath you can use for testing
[06:03:35] mine is /home/tstarling/.m2/repository/nu/validator/htmlparser/1.4.1/htmlparser-1.4.1.jar:/home/tstarling/.m2/repository/commons-daemon/commons-daemon/1.0.15/commons-daemon-1.0.15.jar:/home/tstarling/.m2/repository/commons-cli/commons-cli/1.3.1/commons-cli-1.3.1.jar:/home/tstarling/.m2/repository/org/glassfish/grizzly/grizzly-framework/2.3.22/grizzly-framework-2.3.22.jar:/home/tstarling/.m2/repository/org/glassfish/grizzly/grizzly-http-server/2.3.22/grizzly-http-server-2.3.22.jar:/home/tstarling/.m2/repository/org/glassfish/grizzly/grizzly-http/2.3.22/grizzly-http-2.3.22.jar:/home/tstarling/.m2/repository/org/glassfish/grizzly/grizzly-http-server-multipart/2.3.22/grizzly-http-server-multipart-2.3.22.jar
[06:04:14] * ori tries
[06:04:32] then you do java -cp $classpath org.wikimedia.html5depurate.DepurateDaemon
[06:07:44] that should listen on localhost:4339
[06:07:52] then to test, something like:
[06:08:04] curl -w\\n -vvv 'http://localhost:4339/depurate' -F text=hello
[06:12:18] hello
[06:12:19] nice
[06:13:41] i had to add the html5depurate jar file that was generated by 'mvn install' to $classpath, but then it worked
[06:15:18] it starts up quickly, too
[06:15:20] ah yes
[06:16:03] I have target/classes appended to my classpath which allows it to load the *.class files without installing a .jar
[06:29:04] careful with http://localhost:4339/depurate?text=%3Cscript%3Ealert(%22XSS%22)%3C%2Fscript%3E
[06:31:21] might be better to set the content-type to text/plain and add 'X-Content-Type-Options: nosniff'
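
Pulled together, the build-and-test flow from the exchange above looks roughly like this. A sketch only, assuming a Debian/Ubuntu host and the html5depurate source tree as the current directory; the mdep.outputFile option (standard maven-dependency-plugin, not mentioned in the log) is used to avoid scraping mvn's output:

    # Install the JDK and Maven; Maven downloads the remaining dependencies itself.
    sudo apt-get install openjdk-7-jdk maven2
    # Build: compiles to target/classes and packages the html5depurate jar.
    mvn install
    # Write the dependency classpath to a file instead of scraping the mvn log.
    mvn dependency:build-classpath -Dmdep.outputFile=cp.txt
    # Append target/classes so the daemon runs straight from the build tree.
    CLASSPATH="$(cat cp.txt):target/classes"
    java -cp "$CLASSPATH" org.wikimedia.html5depurate.DepurateDaemon &
    # The daemon listens on localhost:4339; POST some text at it to test.
    curl -w'\n' 'http://localhost:4339/depurate' -F text=hello
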
[07:01:10] bd808: took a shot at phabricating authmanager tasks, feel free to close them liberally if you don't like how they are organized
[07:01:19] everything is under the T89459 tree
[13:30:09] TimStarling: I see you filed T110269 about that
[16:28:13] MediaWiki-Core-Team, MediaWiki-API, Database, MW-1.26-release, and 3 others: allpages filterlanglinks DBQueryError - https://phabricator.wikimedia.org/T78276#1575936 (Anomie) Open→Resolved The part of this bug that isn't T97797 should be fixed now, although if the DB people have anything to a...
[16:46:18] Nikerabbit: https://gerrit.wikimedia.org/r/#/c/234002/
[16:48:55] ori: by when do you need/want it to be reviewed and merged?
[16:50:48] Nikerabbit: so, Freud's model of the psyche is made up of three parts: the id is the set of uncoordinated instinctual trends; the super-ego plays the critical and moralizing role; and the ego is the organized, realistic part that mediates between the desires of the id and the super-ego.
[16:50:57] I will answer on behalf of each part
[16:51:04] id: now!
[16:51:30] super-ego: there is no definite deadline, really.
[16:51:36] ego: soonish?
[16:52:13] :)
[16:52:37] ori: thanks, that helps ;)
[16:52:50] ori: I'll ask in tomorrow's daily if I or someone else can take it
[16:52:57] much appreciated
[19:31:51] bd808: when you have a chance, could you review https://gerrit.wikimedia.org/r/#/c/234036/?
[19:42:46] Can someone take a look at https://gerrit.wikimedia.org/r/#/c/233964/ ?
[19:43:28] I'd like to get it deployed quickly
[19:47:19] Krenair: does it just need to specify the table? notification_timestamp is from echo_notification, not echo_event
[19:47:30] it's merged now
[19:48:23] bd808, possibly, I was just trying to get it to select the same fields as it was before we managed to break it, because those were known to work
[19:49:06] yeah. I bet it worked on line 83 (right join) and just not line 99 where it looks like copy-pasta
[20:45:21] !bash ori: by when do you need/want it to be reviewed and merged? Nikerabbit: so, Freud's model of the psyche is made up of three parts: he id is the set of uncoordinated instinctual trends; the super-ego plays the critical and moralizing role; and the ego is the organized, realistic part that mediates between the desires of the id and the super-ego. I will answer on...
[20:45:23] ...behalf of each part id: now! super-ego: there is no definite deadline, really. ego: soonish? :) ori: thanks that helps ;)
[20:45:24] Dangit
[20:45:33] heh
[20:45:34] bd808: Feature request: another way to add quips than through IRC
[20:45:52] bd808: Feature request 2: a way to remove quips that were mangled due to #1
[20:46:03] (specifically https://tools.wmflabs.org/bash/quip/AU9rwgVU1oXzWjit5Vry )
[20:46:08] both good ideas
[20:47:10] * bd808 goes to figure out the right elasticsearch delete command
[20:47:31] * RoanKattouw has saved the quip at https://office.wikimedia.org/wiki/Bash#August_2015 for posterity so he can close his editor window
[20:47:37] up to now I've been fixing things by dumping and reloading the db
[20:47:57] s/db/index/
[20:48:13] using https://www.npmjs.com/package/elasticdump
[20:50:20] elasticsearch-head can delete items
[20:50:37] a bit too powerful for serious use though
[20:52:29] legoktm: any comments on https://gerrit.wikimedia.org/r/#/c/223790/ ?
[20:53:11] Krinkle: want to look through my random doc changes at https://gerrit.wikimedia.org/r/#/q/owner:%22Aaron+Schulz%22+status:open,n,z ?
[20:54:02] RoanKattouw: fixed it for you -- https://tools.wmflabs.org/bash/quip/AU9rwgVU1oXzWjit5Vry
[21:05:41] Thanks!
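
For the record, the delete bd808 went off to figure out comes down to a single REST call. A sketch, assuming the tool's index is named bash with a quip document type on a local Elasticsearch (all inferred from the quip URL, not stated in the log):

    # Delete one mangled quip by document id (Elasticsearch 1.x index/type/id API).
    curl -XDELETE 'http://localhost:9200/bash/quip/AU9rwgVU1oXzWjit5Vry'
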
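
And on the earlier Echo query fix: bd808's suggestion amounts to selecting notification_timestamp via a join on its own table rather than expecting it on echo_event. A sketch of the shape with illustrative column lists, not the actual patch:

    -- notification_timestamp lives in echo_notification, so join that table in
    -- instead of selecting the column from echo_event alone.
    SELECT event_id, event_type, notification_timestamp
    FROM echo_notification
    JOIN echo_event ON notification_event = event_id
    ORDER BY notification_timestamp DESC;
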
[21:06:38] AaronSchulz: for wikimedia wikis, this just switches it to use $wgSessionCacheType instead of memcached right?
[21:07:00] yes
[21:07:14] AaronSchulz: does it need to be deployed to both branches at the same time?
[21:08:18] yes
[22:01:27] tgr: thanks for all the new phab tickets. I'm going to try and make a dashboard to organize them a bit
[22:03:30] bd808: sorry for the drive-by bug report, but I have to run
[22:03:48] remember the fix you guys did to fix the "required": "" problem?
[22:03:52] check this link out: https://meta.wikimedia.org/w/api.php?action=jsonschema&title=CentralNoticeBannerHistory&revid=13332080
[22:04:15] because that schema has nested required properties, the top level ones are fine because of your fix but the nested ones are not, because the fix was only for the top level
[22:04:41] milimetric: doh. Thanks. I'll get it in phab
[22:04:49] I just had to mention it in case I get hit by a bus; feel free to cc me if it's not clear
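
To make the bug report concrete, a sketch of the schema shape involved, with made-up property names rather than the real CentralNoticeBannerHistory fields. A property can itself be an object whose sub-properties carry their own "required" flags, and the earlier fix only walks the top-level properties:

    {
        "properties": {
            "banner": { "type": "string", "required": "" },
            "log": {
                "type": "object",
                "properties": {
                    "status": { "type": "string", "required": "" }
                }
            }
        }
    }

The outer "required": "" gets normalized by the fix; the one nested under "log" does not, which is why the nested properties still break.
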