[10:16:13] ottomata: where'y'at?
[10:16:28] doing the amsterdam thing
[10:16:36] drdee: ?
[10:16:48] that's me!
[10:17:07] so confused.. i just mean where is andrew physically
[10:17:20] i went and joined the ops cave
[10:17:24] its location is secret
[10:17:27] just as I suspected
[10:17:35] but I do not plan to stay here forever
[10:17:43] just long enough to get faidon to approve the hadoop puppet stuff
[10:17:44] :)
[10:17:48] k, well we have power on the balcony, but I may go downstairs
[13:25:24] ottomata, did you check https://github.com/viirya/puppet-hive ?
[13:25:30] nope!
[13:26:28] might be useful :)
[13:26:33] reading
[13:26:51] ha, this would never make it through ops :p
[13:26:59] also, mine sets up mysql users and dbs
[13:27:11] i proclaim that mine is better!
[13:27:38] we'll see though, i'm just installing the package right now
[13:27:41] that's all the hadoop nodes need
[13:40:20] that reminds me, ottomata -- the kafka importer script probably needs to exec `hive -e 'MSCK REPAIR TABLE $table_name;'` for both the webrequest and all.1000 jobs
[13:40:54] that will scan the hdfs directory for the table looking for new partitions
[13:41:05] (namely, the partition that was just created by the import)
[13:42:01] it'd be better to make the script add the new partition directly, since scans take a long time on big directories (like the mobile data, since it imports frequently)
[13:42:07] but that'd be more complicated :)
[13:42:27] https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-Recoverpartitions
[13:43:16] oh, interesting, right now we just do this manually whenever
[13:43:18] ?
[14:59:29] ottomata1: yeah, i did it manually when i was doing my hive stuff
[19:41:47] New patchset: JGonera; "(Story 743) Monthly charts" [analytics/limn-mobile-data] (master) - https://gerrit.wikimedia.org/r/65326
[19:42:33] Change merged: JGonera; [analytics/limn-mobile-data] (master) - https://gerrit.wikimedia.org/r/65326
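
[Editor's note: a minimal sketch of the two partition-registration approaches discussed above (13:40-13:42). The table name, partition columns, and HDFS path are placeholders for illustration, not the actual importer's configuration.]

# Option 1: rescan the table's HDFS directory for any partitions not yet in
# the metastore. Simple, but slow on tables with many partitions (e.g. the
# frequently imported mobile data).
hive -e "MSCK REPAIR TABLE webrequest;"

# Option 2: register only the partition the import just created. Faster,
# but the importer script must know the partition values and location.
hive -e "ALTER TABLE webrequest ADD IF NOT EXISTS PARTITION (year=2013, month=5, day=23, hour=14) LOCATION '/wmf/data/webrequest/2013/05/23/14';"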