[01:18:48] (PS1) Madhuvishy: Add script to update refinery source jars from jenkins [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290617 [01:19:39] (CR) Madhuvishy: [C: 2 V: 2] "Self merging - not to the master branch" [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290617 (owner: Madhuvishy) [02:25:08] (PS1) Madhuvishy: Change jar update script permissions to executable [analytics/refinery] - https://gerrit.wikimedia.org/r/290621 [02:25:34] (Abandoned) Madhuvishy: Change jar update script permissions to executable [analytics/refinery] - https://gerrit.wikimedia.org/r/290621 (owner: Madhuvishy) [02:27:15] (PS1) Madhuvishy: Change jar update script permissions to executable [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290622 [02:27:42] (CR) Madhuvishy: [C: 2 V: 2] "Self merging - this is not branch master" [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290622 (owner: Madhuvishy) [02:36:25] (PS1) Madhuvishy: Modify update jars script to make the commit and git review [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290623 [02:36:59] (CR) Madhuvishy: [C: 2 V: 2] "Self merging - this is not the master branch" [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290623 (owner: Madhuvishy) [02:44:24] (PS1) Madhuvishy: Change git review to git push [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290624 [02:44:51] (CR) Madhuvishy: [C: 2 V: 2] "Self merging - this is not master" [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290624 (owner: Madhuvishy) [05:08:55] (PS1) Madhuvishy: Add git hook to append change ids to commit msgs [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290629 [05:12:57] (CR) Madhuvishy: [C: 2 V: 2] "Self merging, not to master" [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290629 (owner: Madhuvishy) [05:23:03] (PS1) Maven-release-user: Add refinery-source jars for v0.0.26 to artifacts [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290630 [06:26:02] (PS1) Madhuvishy: Add script for jenkins to commit latest source jars to artifacts [analytics/refinery] - https://gerrit.wikimedia.org/r/290639 [06:27:12] (PS2) Madhuvishy: Add script for jenkins to commit latest source jars to artifacts [analytics/refinery] - https://gerrit.wikimedia.org/r/290639 (https://phabricator.wikimedia.org/T130123) [06:27:48] Analytics-Kanban, Patch-For-Review: Get jenkins to update refinery with deploy of new jars {hawk} - https://phabricator.wikimedia.org/T130123#2325424 (madhuvishy) [06:33:45] Analytics-Kanban, Patch-For-Review: Get jenkins to update refinery with deploy of new jars {hawk} - https://phabricator.wikimedia.org/T130123#2325437 (madhuvishy) a:madhuvishy [08:29:53] elukey: o/ [08:30:44] hello! [08:31:00] quick talk around aqs compaction? [08:31:21] maybe in 15 mins? 
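The "git hook to append change ids to commit msgs" patch above refers to Gerrit's stock commit-msg hook, which appends a Change-Id: footer to every commit so Gerrit can track amended patch sets. Installing it into a clone is usually a one-liner like the following (the curl URL is the standard Gerrit location; the script in the patch may fetch or ship the hook differently):

    # fetch Gerrit's commit-msg hook into the local clone and make it executable
    cd refinery
    curl -Lo .git/hooks/commit-msg https://gerrit.wikimedia.org/r/tools/hooks/commit-msg
    chmod +x .git/hooks/commit-msg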
[08:31:41] sure, ping me when ready [08:37:12] joal: in the meantime https://gerrit.wikimedia.org/r/#/c/290643/2 [08:37:43] the idea would be to set dynamic execution enabled by default, and to fail only in case the standalone options are set [08:38:24] so by default in spark-defaults.conf you get spark.dynamicAllocation.enabled and spark.shuffle.service.enabled set to true [08:40:04] even if we might set it to false by default and then explicitly enable it in our roles [08:40:16] I am sure that Andrew will ask me to do it [08:40:29] now that I think more about it [08:40:30] ahahhaah [08:42:11] mmm but then we'd need the role for spark [08:42:15] only for that option [08:44:25] no probably not, but extra config [08:44:51] https://puppet-compiler.wmflabs.org/2900/analytics1032.eqiad.wmnet/ looks good [08:47:12] posted my comments, will ask to Andrew this afternoon what is the best way forward [08:47:38] going to grab a coffee [09:02:36] joal: going to finish debugging a memcached metric issue, will be back in 30 mins [09:02:42] ok [09:03:12] what do you think about the spark settings? [09:05:03] elukey: Sounds good to me except for the timeout setting for caching executors (not present, is it? [09:06:48] elukey: about the puppet way of doing things, I'm so bad in puppet I don't even think of having an opinion : [09:07:48] ahhh you mean spark.dynamicAllocation.cachedExecutorIdleTimeout [09:07:57] correct sir [09:08:18] got fooled by spark.dynamicAllocation.executorIdleTimeout that is 60s [09:08:22] adding it thanks! [09:08:25] we said the other timeout was good (standard idleEecutor), but we don't want infinity for that one [09:08:38] np elukey, thanks for doing it ! [09:09:00] elukey: I think we want 1h for the spark.dynamicAllocation.cachedExecutorIdleTimeout [09:15:31] joal: https://gerrit.wikimedia.org/r/#/c/290643/3/templates/spark/spark-defaults.conf.erb [09:16:39] oh yes elukey, another (less important) thing that would be great [09:17:00] elukey: Can we silent spark log in shells about losing executors? [09:17:37] elukey: In dynamic allocation mode, losing executors is normal behavior, so we shouldn't log error messages [09:17:51] this requires a "mmmmmmmm" [09:18:02] elukey: at least one, for sure ;) [09:18:50] joal: have you already an idea about the logging setting by any chance? [09:19:09] elukey: not really, looking as well [09:20:43] To specify a different configuration directory other than the default “SPARK_HOME/conf”, you can set SPARK_CONF_DIR. Spark will use the the configuration files (spark-defaults.conf, spark-env.sh, log4j.properties, etc) from this directory. [09:21:02] so I guess that it is just a matter of adding log4j.properties ? [09:21:26] elukey: https://issues.apache.org/jira/browse/SPARK-4134 [09:21:36] file { "${config_directory}/log4j.properties": [09:21:36] source => 'puppet:///modules/cdh/spark/log4j.properties', [09:21:36] } [09:22:01] elukey: Let's wait for an upgrade to 1.6 ;) [09:23:59] joal: we already have a custom log4j file, if you give me the config to shut of those errors I can add it easily [09:24:07] shut off [09:24:31] elukey: makes sense, but I think it's not really a good idea: in some cases (when not in dynamic allocation), those errors are real errors [09:24:51] ahhh okok [09:25:04] so we need a cdh upgrade :D [09:25:05] elukey: I should have thought of that before asking you ... sorry :S [09:25:25] nono good to know, explored a bit the cdh module :) [09:25:27] elukey: indeed, but I think CDH doesn't have a new release, do they ? 
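For reference, the dynamic-allocation settings being discussed would end up in spark-defaults.conf looking roughly like this (the property names and the 1h cached-executor timeout are the ones named in the conversation; the exact rendering of the puppet template may differ):

    # Rendered by puppet into $SPARK_CONF_DIR/spark-defaults.conf
    spark.dynamicAllocation.enabled                    true
    spark.shuffle.service.enabled                      true
    # plain idle executors still time out after the default 60s
    # (spark.dynamicAllocation.executorIdleTimeout), but executors holding
    # cached data are kept around for an hour
    spark.dynamicAllocation.cachedExecutorIdleTimeout  1h
    # a log4j.properties dropped into the same $SPARK_CONF_DIR controls
    # shell/driver logging, which is the file discussed just above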
[09:25:37] not that I know.. [09:25:45] right, so we'll wait a bit :) [09:25:50] no bother thought [09:26:32] 5.7.0 [09:26:37] http://www.cloudera.com/downloads/cdh/5-7-0.html [09:27:54] Wow, elukey, this 5.7 release gives HoS (Hive on Spark) ... I think I want to test that :) [09:28:04] ahhahaahah [09:28:21] + spark rebased on 1.6 --> Good call [09:28:36] Let's discuss that with Andrew when he comes online :) [09:29:27] elukey: cassandra real quick ? [09:29:28] yep! going to finish my work with memcached, then I'll be back! [09:29:31] :P [09:29:34] ok, after then [10:42:55] joal: ganglia and memcached are failing me, sorry :P [10:43:11] ok elukey, doing other things :) [10:43:37] we can chat now or after lunch, as you prefer [10:43:46] elukey: your call ;) [10:43:56] would be better now for me, but ok after if you prtefer [10:46:06] joal: sure let's do it now [10:46:18] ok, batcave ! [11:05:32] joal: I needed to read again leveled compaction, now it makes more sense. thanks for the patience :) [11:06:16] I've read http://www.scylladb.com/kb/compaction/ that pictures it very nicely [11:06:22] how things move from L0 upwards [11:08:52] and non-overlapping ranges among levels [11:10:39] what my brain wanted to know (probably) was how many levels would have "jumped" cassandra for an avg random read [11:11:24] anyhowww, will try to make my ideas clearer [11:12:21] but for the moment it looks much more consistent than DTCS [11:12:31] especially because we are not writing stuff all the times [11:15:50] on aqs1004-a, SSTables in each level: [1, 10, 100, 179, 0, 0, 0, 0, 0] [11:16:30] so with this data, cassandra would need to read MAXIMUM from 4 sstables [11:16:39] (worst case) [11:16:53] given the fact that we have non overlapping ranges of keys in each level [11:17:04] auto-answered [11:17:28] * elukey lunch! [12:44:38] joal: for some reason there was a cassandra process running on aqs1006 that wasn't a cassandra instance [12:45:00] so we were getting [12:45:01] May 25 06:27:56 aqs1006 java[27322]: java.lang.IllegalStateException: Cannot determine instance name of process running as PID 26720 (Hint: missing -Dcassandra.instance-id=?) [12:45:20] of course cassandra-metrics-collector didn't even try to check the other processes [12:45:28] and wasn't pushing metrics :/ [12:45:34] killed the process [12:45:39] May 25 12:43:29 aqs1006 java[21157]: 2016-05-25 12:43:29,126 [DefaultQuartzScheduler_Worker-1] INFO o.w.c.metrics.service.Discover - Found instance aqs1006-a [12:45:42] May 25 12:43:29 aqs1006 java[21157]: 2016-05-25 12:43:29,503 [DefaultQuartzScheduler_Worker-1] INFO o.w.c.metrics.service.Discover - Found instance aqs1006-b [12:46:08] Collection of aqs1006-b complete; Samples written to graphite-in.eqiad.wmnet:2003 [12:46:14] Collection of aqs1006-a complete; Samples written to graphite-in.eqiad.wmnet:2003 [12:46:24] so theoretically metrics should show up in a bit [13:00:16] ah nice, here they are: https://grafana.wikimedia.org/dashboard/db/aqs-cassandra-compaction [13:00:20] joal --^ [13:15:55] Ahhhh :) Thanks elukey ! [13:17:13] I've double checked: aqs1006 have data, so it seems it was a metric only issue [13:17:17] elukey: --^ [13:22:27] yep yep [13:28:14] GOOD MORNNINGGG [13:28:29] Mr ottomata good morning to you ! [13:29:11] elukey: Read the scylladb compaction thing: indeed, very well explained ! 
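The "SSTables in each level" figures quoted for aqs1004-a are the per-table statistics Cassandra exposes for LeveledCompactionStrategy; they were most likely pulled with something like the following (keyspace and table names are placeholders, and on the multi-instance AQS hosts the nodetool wrapper for the right instance, a or b, is what actually gets run):

    # Per-table stats; with LCS the output contains a line such as
    #   SSTables in each level: [1, 10, 100, 179, 0, 0, 0, 0, 0]
    # i.e. a worst-case read touches roughly one SSTable per non-empty level,
    # since key ranges do not overlap within a level above L0.
    nodetool cfstats <keyspace>.<table>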
[13:32:50] joal: am looking over schema changes one last time [13:32:51] q [13:32:55] https://gerrit.wikimedia.org/r/#/c/288210/7/jsonschema/mediawiki/user_blocks_change/1.yaml [13:33:11] is 'user_id_blocks_changed' a name in the db or mw code, or did we make it up? [13:33:29] ottomata: made up [13:33:59] k, it just sounds really awkward, would something like 'user_id_affected' make more sense? or [13:34:06] maybe it would make sense to call this field just user_id [13:34:08] and the other field [13:34:35] user_id_initiator [13:34:37] or [13:34:37] hm [13:34:37] that's bad [13:34:39] but something like that [13:35:30] ottomata: user_id and user_text as is are used everywhere else as user info for the user making the action, so it makes sense to keep it this way I think [13:35:41] ok [13:35:51] About the naming, I agree it's weird ... But it's the most explicit we have ... [13:36:03] well, this is a user_blocks_change event [13:36:08] I am happy to change it, but don't really know to waht :) [13:36:11] user_id_affected makes sense to me [13:36:24] the user_id that the blocks_change event affects [13:36:36] would work for me [13:36:52] hmm effected? [13:36:53] haha [13:36:57] affected vs effected [13:36:57] huhu [13:36:57] hmmm [13:37:12] time for googling my native language! [13:37:29] huhu: affected seems right ;) [13:37:36] Analytics, Reading-Web-Backlog, Wikipedia-iOS-App-Product-Backlog, Mobile-Content-Service: As an end-user I shouldn't see non-articles in the list of trending articles - https://phabricator.wikimedia.org/T124082#2326276 (Mholloway) [13:37:43] i think affected [13:37:51] ja [13:38:18] k, will update that [13:38:47] Marko had a comment about getting old status as well as new one not feasible... [13:38:56] ottomata: --^ [13:39:10] Any idea who I should contact to get another opinion as well? [13:39:50] o [13:39:51] h [13:39:52] right [13:39:53] hm [13:40:03] ottomata: last question on the affected stuff: Do you want me to change the comment as well, or is it ok as is ? [13:40:32] no comment is fine [13:40:35] the description you mean? [13:40:43] yes [13:40:45] sorru [13:40:54] ja, t hat is fine [13:40:57] k cool [13:41:03] pushing the change ! [13:41:40] joal maybe we can look, have you looked in MW code to find stuff resonsible for blocks changes? is there a hook for it? [13:42:49] ottomata: I NEVER have looked in MW code (I'm too afraid of what I could find) [13:42:52] :P [13:43:08] haha, ok I'll do some quick snorkeling [13:43:13] see what i can see [13:43:49] ottomata: If you find me some places to look at, let's do it together, but I won't be able to help finding the place [13:44:36] joal: found these two hooks [13:44:37] https://www.mediawiki.org/wiki/Manual:Hooks/BlockIp [13:44:43] https://www.mediawiki.org/wiki/Manual:Hooks/BlockIpComplete [13:44:45] that look promising [13:45:02] there is also this [13:45:02] https://www.mediawiki.org/wiki/Manual:Hooks/SpecialBlockModifyFormFields [13:45:03] but i dunno [13:46:04] ottomata: thanks for the review, adding the last changes :) [13:46:05] this is the Block object [13:46:06] https://github.com/wikimedia/mediawiki/blob/master/includes/Block.php [13:46:10] yup! [13:47:27] ahhh joal! [13:47:33] the Blocks object calls the affected user target [13:47:37] so [13:47:37] let's call our fields [13:47:40] user_id_target [13:47:41] ok? 
[13:47:56] makes sense ottomata :) [13:47:59] Will change again [13:48:00] coo [13:48:55] joal: my mental todo list says you had some questions regarding https://gerrit.wikimedia.org/r/#/c/289594/ ? [13:49:06] joal: this is the block form that actually creates the block [13:49:06] https://github.com/wikimedia/mediawiki/blob/master/includes/specials/SpecialBlock.php [13:49:36] indeed mobrovac, good memory ! [13:50:09] mobrovac: I don't know the scap various stages and their order, so it's difficult for me to have a nopinion on the change :) [13:51:46] basically, there used to be 4 stages, the last of which (promote) restarted the service [13:51:56] euh, s/4/3/ [13:52:17] there are now 4 and promote is not the latest stage, restart_service is [13:52:26] so we want to check the service after the restart, not before [13:52:37] mobrovac: indeed :) [13:52:41] Makes sense [13:52:59] mobrovac: any link on doc on those stages? [13:53:09] euh, dunno frankly [13:53:25] mobrovac: I'll +1 the patch, letting elukey merge it :) [13:53:27] thcipriani sent a mail to wikitech-l and the kind on it [13:53:40] (CR) Joal: [C: 1] Scap: execute checks after the restart_service stage [analytics/aqs/deploy] - https://gerrit.wikimedia.org/r/289594 (https://phabricator.wikimedia.org/T135609) (owner: Mobrovac) [13:54:06] heh joal, elukey said he'll let you inspect and handle it [13:54:21] huhuhu, didn't recall that, I'll merge it then :) [13:54:22] it's not a hot potato! [13:54:42] (CR) Joal: [C: 2 V: 2] Scap: execute checks after the restart_service stage [analytics/aqs/deploy] - https://gerrit.wikimedia.org/r/289594 (https://phabricator.wikimedia.org/T135609) (owner: Mobrovac) [13:55:07] elukey: aqs deploy now or tomorrow morning? [13:55:28] joal: even now is fine for me [13:55:42] elukey: ok let's go [13:55:48] I am DRYng my code review after Andrew's review :P [13:56:00] elukey: you do it or I do it ? [13:58:33] joal: if you have time please go ahead [13:58:40] k [14:00:47] (PS1) Joal: Correct deepStrictEqual assertion function use [analytics/aqs] - https://gerrit.wikimedia.org/r/290688 [14:00:56] elukey: if you don't mind before I deploy --^ [14:05:00] ok joal i think mobrovac is right [14:05:09] you can't know old block settings in the hook at least [14:05:24] there is a moment during form submission when the previous block settings are loaded from the db [14:05:40] we maybe could make another hook there [14:05:46] buut, i dunno [14:05:51] it would be pretty awkward [14:06:20] the problem with putting a hook there is that then that ceases to be an atomic operation [14:06:47] as people could do all kinds of stuff in the hook to abort the process [14:06:56] so i think that'd be a no-go [14:07:26] joal: looks sane to me even if I have no context :) Don't feel bad to auto-merge! [14:07:35] elukey: ok will do [14:07:44] ottomata: https://phabricator.wikimedia.org/T134056#2326297 [14:07:46] \o/ [14:07:54] ah already doing [14:07:56] super [14:07:57] didn't see it [14:08:03] (CR) Joal: "Self-merging bug before deploying." [analytics/aqs] - https://gerrit.wikimedia.org/r/290688 (owner: Joal) [14:08:19] I understand mobrovac [14:08:28] elukey: aye! if you want to you can! i just pinged chris in ops, not doing anything yet [14:08:58] nono don't worry, I think that you need to do the usual dd right? [14:09:09] mobrovac, ottomata: So we'd only have new_blocks instead of old + new (or we could even keep old + new in schema for coherency purpose, and make old not required)? 
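The scap patch reviewed here (289594) simply moves the post-deploy service check so that it runs after the new final stage, restart_service, instead of after promote. Assuming the usual scap3 checks.yaml layout, the change amounts to something like this (the check name and NRPE command are made up for illustration):

    # checks.yaml in analytics/aqs/deploy: run the endpoint check only once
    # the service has actually been restarted
    checks:
      endpoints:
        type: nrpe
        command: check_endpoints_aqs
        stage: restart_service   # previously: promote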
[14:09:42] what would be the point of old_ when we could never fill it? [14:10:01] mobrovac: coherency from a schema perspective [14:10:17] We have other places where we try to follow the old/new pattern (when feasible) [14:11:27] we run the risk of having wrong data this way, so i wouldn't be in favour of that [14:11:36] correctness trumps coherence in my books [14:11:54] yeah, i think i agree with mobrovac, we'd never be able to have old ever [14:12:01] unless we hacked up mw core code, which i doubt we'd get merged :) [14:12:44] mobrovac: just so i understand, you are saying a hook in the middle is bad, because something could fail and then what? [14:12:48] but, that's true already, no? [14:13:20] mobrovac, ottomata: And also, what about revision_visibility_change, same concern ? [14:13:28] there is a hook SpecialBlockModifyFormFields [14:13:46] that gets called after the old blocks are read from the db and merged with the incoming form fields [14:13:52] but before the block is saved [14:14:03] that allows for someone to hook into it and alter the block object before save [14:14:38] (CR) Joal: [C: 2 V: 2] Correct deepStrictEqual assertion function use [analytics/aqs] - https://gerrit.wikimedia.org/r/290688 (owner: Joal) [14:14:47] hmmm [14:14:50] you know [14:14:51] actually [14:14:54] it wouldn't be so bad to add a hook here [14:15:03] here == ? [14:15:07] https://github.com/wikimedia/mediawiki/blob/master/includes/specials/SpecialBlock.php#L256 [14:15:15] s/==/=/ :P [14:15:21] if i am readin gcorrectly [14:15:24] at $block = Block::newFromTarget( $this->target ); [14:15:39] if there was previously a block for target (IP or user) [14:15:44] then $block is what was loaded from db [14:16:02] the code after that is merging what is coming in from the form with what was in the db [14:16:05] and will use that for an update [14:16:12] so, if we added there a [14:16:21] Hooks::run( 'SpecialBlockPreviousFromTarget', array( $this, &$a ) ); [14:16:22] or something [14:16:24] well [14:16:28] Hooks::run( 'SpecialBlockPreviousFromTarget', array( $this, &$block ) ); [14:16:32] something like that [14:16:35] ahhhhh [14:16:35] NO [14:16:38] !log Deploying aqs into aqs_deploy [14:16:41] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log, Master [14:16:42] that would just call a hook with the old data [14:17:15] we'd have to add a hook at the bottom of this maybeAlterFormDefaults [14:17:37] we'd need something with both new and old data, and we'd need that at the point where we are 100% sure the transaction's been completed [14:17:44] that passes both $block (old data) and uhhh, $fields? [14:17:56] yeah, that would be the right way to do it [14:18:10] yeah we'd have to save the block in the object for a bit [14:18:19] and pass it to BlockIpComplete [14:18:27] yes [14:18:33] whihc, hm, could be done i guess [14:18:37] that sounds doable [14:18:40] (PS1) Joal: Update aqs to 2d20711 [analytics/aqs/deploy] - https://gerrit.wikimedia.org/r/290693 [14:18:40] hehe [14:19:51] mobrovac: ja who do you think we should bug about this? anomie? [14:20:10] afaik AaronS is the hooks guy [14:20:10] (CR) Joal: [C: 2 V: 2] "Self-merging for deploy." [analytics/aqs/deploy] - https://gerrit.wikimedia.org/r/290693 (owner: Joal) [14:20:16] ah ok [14:20:17] k [14:21:19] i guess aaron is SF time, so pretty early now [14:21:26] all right during the next puppet run the spark configs will be picked up [14:21:35] coool! [14:21:42] awesome elukey, I'll make my pleasure to test that ! 
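Once puppet has run on the workers, a quick sanity check before testing a job is just to read back the rendered defaults and then start a shell in YARN mode (the /etc/spark/conf path is the usual CDH location and may differ if SPARK_CONF_DIR points elsewhere):

    # the dynamic-allocation properties from the template should show up here
    grep -E 'dynamicAllocation|shuffle\.service' /etc/spark/conf/spark-defaults.conf
    # then watch executors get added and released while a job is idle/active
    spark-shell --master yarn-client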
[14:21:48] :) [14:23:06] mobrovac: also one thing that have changed a bit in the schemas is https://gerrit.wikimedia.org/r/#/c/288210/8/jsonschema/mediawiki/page_move/1.yaml [14:23:41] mobrovac: Added old and new objects to follow the pattern in other places, and added redirect_title to that [14:24:37] !log deploying aqs from tin [14:24:40] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log, Master [14:24:51] elukey: be ready in case I have broken something ;0 [14:25:00] sure [14:27:05] mobrovac, ottomata : I guess we're gonna wait for Aaron opinion, but that'd be awesome if we could have both old and new status :) [14:28:17] * joal has not broken anything !!!! YAY [14:29:56] joal: i will try to look it code up for visibility and page move too [14:30:03] then will post on ticket and solicit for aaron's opinion [14:30:29] ottomata: Thanks a mil for that ! [14:30:42] ottomata: I wouldn't have been able to do the audit you did ... [14:30:56] ottomata: At some point I might ask for some help and explanation :) [14:31:00] sho! [14:31:08] joal: my process was: [14:31:13] google: mediawiki hooks [14:31:25] found this page (hadn't seen it before) https://www.mediawiki.org/wiki/Manual:Hooks [14:31:31] search page for 'block' [14:31:39] then search mw core code for results [14:31:40] :) [14:31:53] then start reading code [14:32:03] (but i guess i have a little bit of previous knowledge about what mw hooks are :) ) [14:32:28] ottomata: That's this last bit on which I'll ask, but later :) [14:32:36] k [14:42:03] ottomata: the disk has been swapped, if you want I can create the partition+fs [14:42:24] hi, do you use git fat w/ archiva.wikimedia.org for deploys? [14:42:47] elukey: go right ahead thank you! [14:42:57] dcausse: yes [14:42:58] we do [14:43:15] ottomata: do you have a way to debug git fat pull when it refuses to find my artifact? [14:43:18] https://github.com/wikimedia/analytics-refinery/tree/master/artifacts [14:43:40] dcausse: i have special powers, so yes :/ i can log into the archiva box and check the shas it has for the artifacts [14:43:41] I checked sha1sum and they are identical but my jar is not resolved :/ [14:43:41] but [14:43:50] dcausse: they are identical in archiav? [14:43:52] or on your local? [14:43:58] I dl from archiva [14:44:00] oh! [14:44:06] then that should match [14:44:12] so the dl from archiva sha matches what git fat says? [14:44:18] yes :/ [14:44:25] ok looking, which jar? [14:44:38] swift-repository-plugin-2.3.3.jar [14:44:56] #$# git-fat 61b7500a331cdf2b9e56779455eb2fa299c19177 14925 [14:45:15] Analytics-Kanban, RESTBase, Services, RESTBase-API, User-mobrovac: Enable rate limiting on pageview api - https://phabricator.wikimedia.org/T135240#2326487 (mobrovac) The reduced limits were deployed earlier today and show warnings when the rate is surpassed globally. [This dashboard](https:/... [14:45:28] hm, dcausse that jar exists in the git fat store with 61b7500a331cdf2b9e56779455eb2fa299c19177 [14:45:31] that look scorrect [14:45:46] ls -l git-fat/61b7500a331cdf2b9e56779455eb2fa299c19177 [14:45:46] lrwxrwxrwx 1 archiva archiva 122 May 25 13:41 git-fat/61b7500a331cdf2b9e56779455eb2fa299c19177 -> ../repositories/releases/org/wikimedia/elasticsearch/swift/swift-repository-plugin/2.3.3/swift-repository-plugin-2.3.3.jar [14:45:59] file exists with that size too [14:45:59] so maybe some sort of caching in the .git folder maybe? [14:46:13] maybe? what is the error you are getting? 
[14:46:35] I see no error [14:46:46] oh, its just not dling the actual file? [14:46:48] git fat pull [14:46:48] ? [14:46:50] does nothing? [14:46:55] yes [14:47:12] dcausse: do you have this working for other jars? [14:47:38] yes [14:48:05] well not all jars are in archiva yet, maybe it bails out when it encounter too many errors? [14:48:18] hm, maybe? [14:48:21] but some are dled? [14:48:24] yes [14:48:30] that is strange [14:48:41] ok I'll continue to upload, thanks for checking there [14:48:45] ok [14:48:49] yeah lemme know if you figure anything out [14:48:55] sure, thanks! [15:30:21] Analytics-Kanban, EventBus, Patch-For-Review: Propose evolution of Mediawiki EventBus schemas to match needed data for Analytics need - https://phabricator.wikimedia.org/T134502#2326641 (Ottomata) 3 schemas being altered in https://gerrit.wikimedia.org/r/#/c/288210 require knowledge of the previous s... [15:30:34] joal: ^ https://phabricator.wikimedia.org/T134502#2326641 [15:31:43] Wow ottomata ! Thanks mate :) [15:32:17] mobrovac: look that comment over and see if it makes sense to you [15:32:21] i'll try to ping aaron later [16:00:59] a-team standup wooo [16:06:18] Analytics-Kanban: Get jenkins to automate releases {hawk} - https://phabricator.wikimedia.org/T130122#2326856 (Nuria) [16:06:20] Analytics-Kanban: Figure out if the Changelog file can be updated in the release process by Jenkins {hawk} - https://phabricator.wikimedia.org/T132181#2326854 (Nuria) Open>declined [16:06:36] Analytics-Kanban: Figure out if the Changelog file can be updated in the release process by Jenkins {hawk} - https://phabricator.wikimedia.org/T132181#2190783 (Nuria) This was not needed as spec-ed out here. [16:08:27] Analytics-Dashiki, Analytics-Kanban, Patch-For-Review: Visualize unique devices data {bear} - https://phabricator.wikimedia.org/T122533#2326896 (Nuria) Open>Resolved [16:08:29] Analytics-Kanban, Patch-For-Review: Make Unique Devices dataset public {mole} - https://phabricator.wikimedia.org/T126767#2326897 (Nuria) [16:08:31] Analytics-Dashiki, Analytics-Kanban, Patch-For-Review: Add support to 'displayName' param in config - https://phabricator.wikimedia.org/T134924#2326900 (Nuria) Open>Resolved [16:08:48] Analytics-Kanban, Patch-For-Review, RESTBase-API: Start date of unique devices counts endpoint is always off by one day - https://phabricator.wikimedia.org/T134840#2326918 (Nuria) Open>Resolved [16:18:19] Analytics: Provide a ua-parser service using the One True Wikimedia UA-Parser™ - https://phabricator.wikimedia.org/T1336#2327013 (Krinkle) [16:20:54] Analytics: Browser report on analytics.wikimedia.org has broken icons - https://phabricator.wikimedia.org/T136217#2327028 (Krinkle) [16:28:18] (PS1) Krinkle: Use HTTPS for wikimedia.org urls [analytics/analytics.wikimedia.org] - https://gerrit.wikimedia.org/r/290711 [16:31:14] Analytics, Wikipedia-iOS-App-Product-Backlog, Mobile-Content-Service: As an end-user I shouldn't see non-articles in the list of trending articles - https://phabricator.wikimedia.org/T124082#2327089 (Jhernandez) [16:31:58] (PS3) Madhuvishy: Add script for jenkins to commit latest source jars to artifacts [analytics/refinery] - https://gerrit.wikimedia.org/r/290639 (https://phabricator.wikimedia.org/T130123) [16:35:42] ottomata: when you get a chance can you add maven-release-user@wikimedia.org to it's own gerrit group - Analytics-ci? 
and give it push rights here https://gerrit.wikimedia.org/r/#/admin/projects/analytics/refinery,access [16:36:26] (PS1) Krinkle: index.html: Minor HTML clean up [analytics/analytics.wikimedia.org] - https://gerrit.wikimedia.org/r/290712 [16:37:15] (CR) Madhuvishy: "Not sure if we should consolidate the download-refinery-source-jars script and this one, or we can just leave it be even though some code " [analytics/refinery] - https://gerrit.wikimedia.org/r/290639 (https://phabricator.wikimedia.org/T130123) (owner: Madhuvishy) [16:40:48] hehe, madhuvishy waht about to comment that [16:42:03] ottomata: yeah I mean we can remove the other script - this one assumes we will always be in the right workspace because jenkins workspace is going to be refinery root. it will also run the git push. not sure if the other one that displays the command is still useful [16:43:45] (CR) Ottomata: "Cool." [analytics/refinery] - https://gerrit.wikimedia.org/r/290639 (https://phabricator.wikimedia.org/T130123) (owner: Madhuvishy) [16:44:00] madhuvishy: ja i think we should have one script that is smart enough for both use cases, perhaps with some options [16:44:07] we can rewrite in python if you think that would be easier [16:44:30] ottomata: sure - python would work - jenkins doesn't care i think [16:44:38] aye, up to you [16:44:51] i like parsing options better in python, but prefer running shell commands in bash [16:44:56] so usually I balance those two :) [16:45:08] ottomata: hmmm [16:45:10] if i have to parse a bunch of options, i will use python and then deal with python's annoying subprocess or whatever [16:45:21] what is the git fat init thing? i dont really know [16:45:22] if I don't, then I'll just make the cli parsing dumb so i can easily run shell commands [16:45:40] https://wikitech.wikimedia.org/wiki/Archiva#Setting_up_git-fat_for_your_project [16:45:47] https://github.com/wikimedia/analytics-refinery#setting-up-the-refinery-repository [16:46:14] madhuvishy: when a repo is cloned, the git fat files will be those little text files [16:46:21] in order for git-fat to resolve and dl them, it needs set up [16:46:39] ottomata: ah hmmm [16:46:48] since this script is just adding new files [16:46:52] probably just a git-fat init [16:46:53] is enough [16:47:44] okay [16:47:52] this was the commit it made yesterday [16:47:54] https://gerrit.wikimedia.org/r/#/c/290630/ [16:48:58] ottomata: and logs https://integration.wikimedia.org/ci/job/analytics-refinery-update-jars/9/consoleText [16:50:07] hmm, yeah, it shouldn't be a binary file [16:50:10] that is adding the actual file [16:50:12] that's why you need git fat [16:50:20] okay, i'll add git fat init - I feel like this script has more shell stuff than option parsing [16:50:23] k [16:50:31] i'll make the option parsing smarter in bash [16:50:34] a little bit [16:50:34] k sounds good [16:50:38] cool [16:50:42] thanks [16:50:50] madhuvishy: did that actually get merged to the git repo? [16:50:53] no no [16:50:59] ok no [16:50:59] cool [16:51:00] great [16:51:06] don't want binary files like that merged in [16:51:06] :) [16:51:08] also using a different branch to test [16:51:31] and there are no push rights to refinery for anyone [16:51:54] ok cool [16:51:54] right [16:55:03] joal, will you be working on edit data now? [16:55:08] mforns: YES ! [16:55:13] can I join? [16:55:20] Please ! [16:55:23] batcave ? 
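For anyone following along, the git-fat pieces mentioned in the last two exchanges (dcausse's jar that would not resolve, and the "little text files" in a fresh refinery clone) boil down to roughly the following; the swift plugin jar is the one from the earlier conversation and the archiva URL follows the usual releases-repository layout, so treat it as an approximation:

    # one-time setup in a fresh clone: wire up the git-fat filters, then
    # replace the placeholder text files with the real jars
    cd refinery
    git fat init
    git fat pull

    # an unresolved placeholder is just a one-line stub of the form
    #   #$# git-fat 61b7500a331cdf2b9e56779455eb2fa299c19177 14925
    # and that sha1 should match the artifact archiva serves:
    curl -sL https://archiva.wikimedia.org/repository/releases/org/wikimedia/elasticsearch/swift/swift-repository-plugin/2.3.3/swift-repository-plugin-2.3.3.jar | sha1sum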
[16:55:33] sorry, I dropped too fast :) [16:55:55] joal, sure [16:55:58] omw [16:59:43] ottomata: i can make a task for the gerrit repo rights thing if you are doing other stuff now [17:07:48] oh i am eating lunch but can do real quick [17:07:56] what is user? [17:08:40] madhuvishy: ^ [17:09:27] maven-release-user@wikimedia.org [17:10:32] ottomata: also the archival creds private repo thing work? [17:10:38] Archiva [17:11:39] madhuvishy: prviate repo thing? [17:12:23] ottomata: yeah you mentioned on the task to leave it open because you were having some trouble adding to puppet? [17:12:51] OH [17:12:52] yeah [17:12:58] i haven't got around to that [17:13:04] i think you should put it in paused and leave it open and assigned to me [17:13:07] it might be a bit... [17:13:20] Cool np [17:13:22] madhuvishy: what gerrit group is maven-release-user in? [17:13:23] not sure how to find out... [17:13:35] ottomata: its in Analytics-devs [17:13:38] oh [17:13:43] But may be we should make one? [17:14:00] yeah, probably, and move it out of analytics-devs, because dcausse mentioned that he might be interested in using your jenkins release stuff too [17:14:02] Like Analytics-ci and then give it push [17:14:10] Yeah [17:14:11] Cool [17:14:23] Will also need to add to refinery-source [17:14:29] aye [17:14:33] For push and push annotated tags [17:14:45] ok, madhuvishy this sounds like a task :) [17:14:45] i [17:14:55] you can make and assign to me if you like [17:14:59] i gotta run to make this apt. son [17:15:01] ottomata: okay I'll make one :) [17:15:07] Yeah np [17:22:26] ottomata: I posted what I've done on analytics1047 on https://phabricator.wikimedia.org/T134056#2327282 [17:22:52] I didn't manage to figure out what is the disk to partition, probably I am a bit tired [17:23:08] will restart tomorrow, but if you see something trival to fix please let me know :) [17:23:47] elukey: !!! /boot! :) [17:23:47] ha [17:23:48] uhhh [17:23:53] i don't see anything [17:23:55] yes horrible [17:25:31] so the disk should be configured now [17:25:39] but it still doesn't have any partition [17:25:51] and since we don't have UUID the mount order is a mess :D [17:26:02] HMMm well i mean, we can chnage them to uuid [17:26:14] elukey: i gotta run to this apt. sorry i forgot we were going to look at it together after standup [17:26:22] if you don't get it by the time i'm on tomorrow then let [17:26:27] 's see what we can find then [17:26:55] yepp! [17:27:20] going offline a-team, byeee o/ [17:27:29] elukey, bye! [17:27:32] bye elukey ! [17:30:42] Analytics-Kanban: Create separate Analytics-CI gerrit group and add maven-release-user - https://phabricator.wikimedia.org/T136221#2327290 (madhuvishy) [17:31:09] Analytics-Kanban: Create separate Analytics-CI gerrit group and add maven-release-user - https://phabricator.wikimedia.org/T136221#2327303 (madhuvishy) p:Triage>Normal [17:54:18] Analytics, Pageviews-API, RESTBase-API: AQS: query multiple articles at the same time - https://phabricator.wikimedia.org/T118508#2327342 (mobrovac) [17:56:53] kaldari: let me know if you have more questions about throttling cc mobrovac [17:57:08] will do, thanks for the info [17:57:26] nuria_: when does the new limit go into effect? 
[17:57:41] kaldari: the relevant ticket is https://phabricator.wikimedia.org/T135240 [17:58:06] kaldari: likely early next week [17:58:34] kaldari: we would want to enable it soon but note that in parallel we are working on scaling so limit will be higher when that weork is done [17:58:37] *work [17:58:42] kaldari: so it is not final [17:58:50] good to know [17:58:55] kaldari: makes sense? [17:58:58] yes [17:59:22] kaldari: bottom line is that cassandra needs a lot of work to be efficient in retrieving data for pageview api dataset [17:59:37] kaldari: new hardware will help but there are several other things to address [18:00:01] mobrovac: this dashboard is .. ahem.. a bit hard to read: https://logstash.wikimedia.org/#/dashboard/temp/AVToXWJes_MKeI4jrSeM [18:00:22] it's very pretty though :) [18:00:30] mobrovac: every item there means that is a request taht would have been returned with a 503 due to throttling? [18:00:49] kaldari: ya, kibana is pretty until you want to use it for something [18:01:29] nuria_: with a 429, but yeah, each line means a req that wouldn't reach AQS [18:01:41] why 429? [18:01:47] retry later [18:01:58] mobrovac:429? [18:02:26] yes [18:02:36] mobrovac: all right, noted [18:02:46] https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#4xx_Client_Error [18:04:08] mobrovac: ya, i have used always 503 but i guess that is a more specific one [18:04:39] yeah, it's nicer because we pretend it's the client's fault :P [18:04:53] mobrovac: from what i can tell from the dashboard we woudl have limited a spike at 13 hours today but that's it. Am i reading that correctlty? [18:05:40] yes [18:06:09] mobrovac, nuria: just to make sure I have the terminology correct: The client limit is a limit on a particular IP address client, while the client/endpoint limit is a limit on a particular client from a specific URL? [18:06:17] which makes me think 10's even too much [18:06:44] kaldari: both, a client can make up to 10 req per second per endpoint [18:07:22] so you can have 20 reqs/s if you use 2 pageview endpoints [18:07:36] up to 40, as there are 4 of them [18:07:58] so a user using an interface such as http://tools.wmflabs.org/massviews/ (which is making asynchronous client-side ajax requests to the API) would be limited to 10 requests per second, correct? [18:08:43] if the ajax is sending the reqs directly to the rb api, then yes [18:08:59] and by "2 pageview endpoints" that would mean, for example, having 2 instances of the tool running in 2 different tabs in the browser? [18:09:12] no, that counts as the same client [18:09:30] by endpoints i mean the different REST URLs one can use [18:09:58] ah, got it [18:10:07] kaldari: i.e. 10 per sec for each of the URLs listed here - https://wikimedia.org/api/rest_v1/?doc [18:10:32] that makes sense [18:11:31] I was confused for a minute since "pageview" is also the name of one of our web interfaces :) [18:11:39] hehe [18:11:50] context is a bitch [18:12:47] I'll see about tightening up our throttling for the meantime [18:13:15] we're also currently limiting to interface to only process page sets with 500 or fewer pages. 
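To make the limits concrete for tool authors: a client hitting one of the pageview endpoints (per-article below) should stay at or under 10 requests per second per endpoint and treat a 429 as "slow down and retry". A minimal sketch, with the article list, dates and output paths invented for the example:

    endpoint="https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article"
    # articles.txt holds one already URL-encoded title per line
    while read -r article; do
        url="${endpoint}/en.wikipedia/all-access/user/${article}/daily/20160401/20160430"
        status=$(curl -s -o "out/${article}.json" -w '%{http_code}' "$url")
        if [ "$status" = "429" ]; then
            # rate limited: back off, then retry this title once
            sleep 1
            curl -s -o "out/${article}.json" "$url"
        fi
        sleep 0.1   # keeps the client at roughly 10 requests/second
    done < articles.txt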
[18:13:51] sorry if we've still been spamming the API :) [18:19:48] mobrovac, nuria: Looks like we're currently inserting a 100 millisecond delay between each request, so we may be fine actually [18:20:34] cool [18:23:15] mobrovac: sorry i disconnected [18:24:35] kaldari: ya, that is no issue [18:25:14] if you go lower than 10 per second though, let me know :) [18:25:28] mobrovac: actuallly [18:25:41] mobrovac: with the limits how we have them [18:26:59] mobrovac: and looking at your teammate's comments here: https://phabricator.wikimedia.org/T135240#2324102 [18:27:57] mobrovac: seemed that the number of aqs nodes played a part, doesn't look like they do from wht you are saying [18:27:59] *what [18:28:29] mobrovac: the load we can sustain now in cassandra is 30 req/sec in the article endpoint [18:28:35] mobrovac: not 10 [18:58:58] Quarry: Excel does not recognize Quarry CSV output as UTF-8 - https://phabricator.wikimedia.org/T76126#2327562 (valhallasw) No, the 'UTF-16' seems to actually be UTF-8... [19:03:42] (CR) Nuria: [C: 2 V: 2] "Thank you!" [analytics/analytics.wikimedia.org] - https://gerrit.wikimedia.org/r/290712 (owner: Krinkle) [19:04:30] (CR) Nuria: [C: 2 V: 2] "Indeed, thanks for the catch." [analytics/analytics.wikimedia.org] - https://gerrit.wikimedia.org/r/290711 (owner: Krinkle) [19:05:55] Quarry: Excel does not recognize Quarry CSV output as UTF-8 - https://phabricator.wikimedia.org/T76126#790571 (Dzahn) Btw, separate from the encoding... what Excel considers to be a "CSV" actually depends on language settings in Windows. For example if you use a German Windows, the delimiter character by def... [19:47:15] yoo milimetric, yt? [19:48:01] hey ottomata [19:48:02] what's up [19:48:12] oook want to try to query the pageviews thing [19:48:23] from the tutorial [19:48:48] do I need a filter? [19:48:52] a simple count is good [19:49:11] k, uh, logging in one sec [19:49:56] https://www.irccloud.com/pastebin/GX3txuCr/ [19:50:06] ottomata: try that ^ [19:50:26] should work if there's data with timestamps around 2015-10-14 [19:50:41] ok that field doesn't work, [19:50:42] just got [19:50:46] url, user, latencyMs [19:50:55] no view_count? [19:50:57] working with this [19:50:58] http://druid.io/docs/latest/tutorials/tutorial-batch.html [19:51:03] {"time": "2015-09-01T00:00:00Z", "url": "/foo/bar", "user": "alice", "latencyMs": 32} [19:51:41] uh... oh, ok, just replace "fieldName": "latencyMs" and "metric": "latencyMs" [19:51:58] (it's kind of nonsensical but that's ok) [19:53:16] dimension? [19:53:30] url [19:53:46] or user [19:54:47] what do I post to? [19:54:51] trying to find in docs [19:54:55] i guess broker, but i don't know url [19:56:16] oh right, sorry [19:56:21] /druid/v2/?pretty [19:56:22] ? [19:56:36] overlord yes [19:56:49] wait sorry [19:57:05] broker? 
[19:57:16] "error" : "Instantiation of [simple type, class io.druid.query.topn.TopNQuery] value failed: Must have an AggregatorFactory or PostAggregator for metric[latencyMs], gave[[LongSumAggregatorFactory{fieldName='latencyMs', name='latencySum'}]] and [[]]" [19:57:38] milimetric: druid101:/home/otto/tutorial/latency_sum_query.json [19:57:41] in labs [19:57:52] tried [19:57:52] ok, sorry, my internet was acting up it's back now [19:57:52] url -L -X 'POST' -H 'Content-Type:application/json' -d @latency_sum_query.json localhost:8082/druid/v2/?pretty [19:57:55] curl -L -X 'POST' -H 'Content-Type:application/json' -d @latency_sum_query.json localhost:8082/druid/v2/?pretty [19:58:05] I'm gonna log in and I'll figure it out and let you know [19:58:10] k [20:00:54] sorry, I had the fields backwards, check /home/milimetric/latency_sum_query.json [20:01:03] ottomata: works now ^ [20:01:38] heh, adding up latencies like a boss [20:01:45] OK! [20:01:47] so a query works [20:01:49] sounds good to me! [20:01:55] ship it! [20:02:00] milimetric, mforns_gym : I managed to get the stuff to work :) [20:02:09] yay [20:02:32] We now have joal.simplewiki_20160501_denorm full of almost real data (only user registration date is fake) [20:02:36] thanks milimetric ok diving back into puppet cleanup [20:02:46] This data can be accessed from hive and/or spark [20:03:07] sweet [20:03:47] milimetric: Was thinking of that: Let me know when you want to start working on the queries for pageview prep for druid, I have some tricks /experience to share :) [20:04:05] And now is time to STOP WORKING ! Yay :) [20:04:14] HMM cool! [20:04:22] milimetric: ssh -N druid101.eqiad.wmflabs -L 9090:druid101.eqiad.wmflabs:9090 [20:04:24] a-team, except if any of you need me now, I'm gone ! [20:04:26] http://localhost:9090 [20:05:43] ottomata: coooool ! [20:05:51] anyway, GONE ! [20:05:56] Have a good evening a-team [20:06:10] laters! [20:09:18] (PS4) Madhuvishy: Add script for jenkins to commit latest source jars to artifacts [analytics/refinery] - https://gerrit.wikimedia.org/r/290639 (https://phabricator.wikimedia.org/T130123) [20:10:41] nuria_: I forgot about 1:1 - still on? [20:11:15] madhuvishy: yes, we have time. jump in [20:27:15] Analytics-Kanban, RESTBase, Services, RESTBase-API, User-mobrovac: Enable rate limiting on pageview api - https://phabricator.wikimedia.org/T135240#2327958 (kaldari) @MusikAnimal: I was initially worried about this affecting http://tools.wmflabs.org/massviews/, but since we're already inserti... [20:33:05] ottomata: I updated https://gerrit.wikimedia.org/r/#/c/290639/4 [20:34:40] cool ok madhuvishy will look shortly [20:34:46] thank you :) [20:38:23] milimetric, back are you here still? [20:38:28] yes mforns [20:38:37] but I didn't get to work on my queries :( [20:39:15] milimetric, if you want, you can update me and give me a starting point, and I can continue today and tomorrow morning and then we meet to continue [20:39:19] ottomata: you know tunneling and I don't get along, that never works for me :) [20:39:30] but I am imagining how cool it is, it's great! [20:39:31] :) [20:39:41] mforns: yeah, we can work together, batcave? 
[20:39:47] milimetric, sure omw [20:40:15] ha :) [21:19:02] (PS1) Madhuvishy: Update script with better option parsing [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290798 [21:22:51] (CR) Madhuvishy: [C: 2 V: 2] "Merging for testing - not to master" [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290798 (owner: Madhuvishy) [21:27:10] (PS1) Madhuvishy: Fix mode if block [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290802 [21:27:40] (CR) Madhuvishy: [C: 2 V: 2] "Merging for testing - not to master" [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290802 (owner: Madhuvishy) [21:28:23] (PS1) Maven-release-user: Add refinery-source jars for v0.0.27 to artifacts [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290803 [21:29:12] ottomata: does this commit look more correct? https://gerrit.wikimedia.org/r/#/c/290803/ [21:30:29] (PS5) Madhuvishy: Add script for jenkins to commit latest source jars to artifacts [analytics/refinery] - https://gerrit.wikimedia.org/r/290639 (https://phabricator.wikimedia.org/T130123) [21:33:54] hm madhuvishy not quite [21:34:00] those are just symlinks, ja? [21:34:13] just changing the symlink target [21:34:44] ottomata: yeah looks like it [21:36:02] ottomata: hmmm right the commit joal showed me has some git fat stuff - looks like it isn't working as expected [21:39:05] aye [21:47:37] (PS1) Maven-release-user: Add refinery-source jars for v0.0.27 to artifacts [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290810 [21:48:07] (Abandoned) Madhuvishy: Add refinery-source jars for v0.0.27 to artifacts [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290810 (owner: Maven-release-user) [21:48:23] (Abandoned) Madhuvishy: Add refinery-source jars for v0.0.27 to artifacts [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290803 (owner: Maven-release-user) [21:48:39] hmmm ottomata not sure why [21:49:53] actually, trying something [21:50:56] (PS1) Maven-release-user: Add refinery-source jars for v0.0.27 to artifacts [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290811 [21:51:09] nope [21:51:27] (Abandoned) Madhuvishy: Add refinery-source jars for v0.0.27 to artifacts [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290811 (owner: Maven-release-user) [21:55:41] (CR) Ottomata: Add script for jenkins to commit latest source jars to artifacts (5 comments) [analytics/refinery] - https://gerrit.wikimedia.org/r/290639 (https://phabricator.wikimedia.org/T130123) (owner: Madhuvishy) [21:56:49] ottomata: but its not workinggg lol [21:59:11] madhuvishy: it doesn't work when you run it manually [21:59:12] ? [21:59:15] not through jenkins? [21:59:28] ottomata: hmmm i'll check - I dont have git fat i think :P [22:00:03] it's late for you though - i'll poke around and we can talk tomorrow [22:02:48] madhuvishy: still working for a few more mins [22:02:58] you should try, you don't have git fat?! [22:03:04] ottomata: no I don't! [22:03:11] ha, i guess you never pull or push refinery jars, eh? 
[22:03:14] its pretty easy to install [22:03:23] just a single script too if you want to do it manually [22:03:29] yeah looks like i have never done it [22:03:34] ah [22:03:36] looking [22:03:57] (PS6) Madhuvishy: Add script for jenkins to commit latest source jars to artifacts [analytics/refinery] - https://gerrit.wikimedia.org/r/290639 (https://phabricator.wikimedia.org/T130123) [22:05:58] ottomata: okay installed, trying [22:07:33] (PS1) Madhuvishy: Add refinery-source jars for v0.0.27 to artifacts [analytics/refinery] - https://gerrit.wikimedia.org/r/290816 [22:07:49] ottomata: same - https://gerrit.wikimedia.org/r/#/c/290816/ [22:08:18] (CR) Madhuvishy: [C: -2] "Don't merge - test patch" [analytics/refinery] - https://gerrit.wikimedia.org/r/290816 (owner: Madhuvishy) [22:08:59] interesting! [22:09:05] madhuvishy: instaed of just running it, maybe add a --dry-run flag [22:09:09] to make it just echo git commands [22:09:14] so you can see what it is doing [22:12:36] ottomata: ya i was going dry run [22:12:59] i pushed to gerrit on my own to show you though [22:13:38] ottomata: i can see logs here - https://integration.wikimedia.org/ci/job/analytics-refinery-update-jars/19/consoleText [22:13:44] not giving me anything though [22:15:31] madhuvishy: what is the dry run git add command [22:15:32] ? [22:15:43] does it actually add the jars, not just the symlinks [22:15:55] did you initialize your local clone to use git fat via git-fat init? [22:16:10] ya this script will do that no? [22:17:51] trying the dry run [22:18:22] ottomata: uhhh [22:18:27] it just printed [22:18:30] https://www.irccloud.com/pastebin/XdAwEIVN/ [22:19:29] oh interesting [22:19:31] running this [22:19:35] didnt add the jars [22:20:27] madhuvishy: smells like bogus shell completion [22:20:28] ottomata: [22:20:31] https://www.irccloud.com/pastebin/GWopb9Nb/ [22:20:44] oh [22:20:45] ottomata: the jars are named wrong [22:20:46] :/ [22:20:49] looks like your wget didn't do the right thing [22:20:49] artifacts/org/wikimedia/analytics/refinery/refinery-camus-.jar [22:20:50] he [22:20:57] ya [22:21:00] no wonder [22:25:17] ottomata: what is -0? [22:25:33] heheh [22:25:34] madhuvishy: [22:25:36] man wget [22:25:42] -O file [22:25:42] --output-document=file [22:26:25] oh [22:26:28] it's ) [22:26:29] O [22:26:30] not [22:26:31] 0 [22:26:33] oh ha ja [22:26:33] lol [22:27:16] oook, madhuvishy i am out for theeve [22:27:19] good luck! :) [22:27:40] ottomata: okay :) it does say - [22:27:43] https://www.irccloud.com/pastebin/owtKYxOT/ [22:27:51] but git status shows otherwise [22:27:52] anyway [22:27:54] byeeee :) [22:28:01] have a nice evening! [22:28:06] you too laters! 
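The broken run above, with jars ending up named refinery-camus-.jar, came down at least in part to the script passing a zero (-0) where wget expects a capital -O for the output file. A corrected download loop in the update-jars script would look roughly like this; the artifact list, version variable and archiva base URL are illustrative rather than copied from the real script:

    version="0.0.30"
    base="https://archiva.wikimedia.org/repository/releases/org/wikimedia/analytics/refinery"
    for artifact in refinery-core refinery-camus refinery-hive refinery-job refinery-tools; do
        jar="${artifact}-${version}.jar"
        # -O (capital o, not zero) names the local output file explicitly
        wget -O "artifacts/org/wikimedia/analytics/refinery/${jar}" \
            "${base}/${artifact}/${version}/${jar}"
    done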
[23:06:45] (Abandoned) Madhuvishy: Add refinery-source jars for v0.0.27 to artifacts [analytics/refinery] - https://gerrit.wikimedia.org/r/290816 (owner: Madhuvishy) [23:15:22] (PS7) Madhuvishy: Add script for jenkins to commit latest source jars to artifacts [analytics/refinery] - https://gerrit.wikimedia.org/r/290639 (https://phabricator.wikimedia.org/T130123) [23:18:45] (PS1) Madhuvishy: Add dry run mode to update jars script [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290826 [23:19:23] (CR) Madhuvishy: [C: 2 V: 2] "Merging for testing - not master" [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290826 (owner: Madhuvishy) [23:34:06] (PS1) Madhuvishy: Fix typo [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290831 [23:34:28] (CR) Madhuvishy: [C: 2 V: 2] Fix typo [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290831 (owner: Madhuvishy) [23:37:45] (PS1) Maven-release-user: Add refinery-source jars for v0.0.30 to artifacts [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290835 [23:39:24] (Abandoned) Madhuvishy: Add refinery-source jars for v0.0.30 to artifacts [analytics/refinery] (jenkins-test) - https://gerrit.wikimedia.org/r/290835 (owner: Maven-release-user) [23:40:37] (PS8) Madhuvishy: Add script for jenkins to commit latest source jars to artifacts [analytics/refinery] - https://gerrit.wikimedia.org/r/290639 (https://phabricator.wikimedia.org/T130123) [23:50:58] legoktm (or other bot maintainers) "3:19 PM ⇐ wikibugs quit (tools.wiki@wikimedia/bot/pywikibugs) Excess Flood" - does it need to be restarted/rejoined manually?