[14:11:36] godog, herron: while looking for something completely unrelated I've found something odd regarding the ES logging on the logstash hosts
[14:11:52] volans: hey, what did you find?
[14:12:08] it seems that we rotate and compress the production-logstash-eqiad.log but there are only ~2 lines written there, while the current logging is going to
[14:12:11] production-logstash-eqiad_jvm_gc.pid65614.log.1.current
[14:12:19] and we somehow rotate those, but don't compress them
[14:12:28] see /var/log/elasticsearch on logstash1010 for example
[14:13:06] ok having a look
[14:14:30] thanks, let me know if you need me to open a task
[14:15:02] volans: thanks!
[14:15:04] so the current jvm garbage collection log is going into the jvm_gc log, and afaict the other log is just low noise
[14:17:23] but why is it logging to a .1?
[14:22:02] as I understand it java rotates through the gc logs based on the values for NumberOfGCLogFiles and GCLogFileSize settings
[14:22:16] ES is running with -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=20M
[14:22:30] so writes to .0 until that’s 20M, then .1, and so on
[14:26:22] but that conflicts with logrotate settings
[14:26:31] /var/log/elasticsearch/*.log {
[14:26:52] wait, .0 is the first?
[14:27:21] ahh ok
[14:27:27] yeah, not sure it would ever hit since the files are named .N or .N.current
[14:27:37] weird naming
[14:27:51] and depends on the PID so we have now 2 .current
[14:27:54] but only one is real
[14:27:58] blergh
[14:28:47] yeah, per our -Xloggc:/var/log/elasticsearch/production-logstash-eqiad_jvm_gc.%p.log
[14:28:59] %p being process number
[14:29:23] which I think is default for our es puppetization? I don’t think the logging cluster customizes that at least
[15:05:49] ack
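
For reference, a minimal sketch of the GC logging setup discussed above and the file names it produces. The -Xloggc pattern, NumberOfGCLogFiles and GCLogFileSize values are quoted from the conversation; -XX:+UseGCLogFileRotation is an assumption (on JDK 8 the two sizing flags only take effect when rotation is enabled), and pid 65614 is taken from the example file name mentioned at 14:12:11.

    # JVM GC logging flags (sketch; the -Xloggc pattern and the two sizing values
    # are quoted above, UseGCLogFileRotation is assumed)
    -Xloggc:/var/log/elasticsearch/production-logstash-eqiad_jvm_gc.%p.log
    -XX:+UseGCLogFileRotation
    -XX:NumberOfGCLogFiles=10
    -XX:GCLogFileSize=20M

    # %p expands to "pid<process id>", so for pid 65614 the JVM cycles through
    # segments .0 .. .9 of up to 20M each; the segment currently being written
    # carries an extra ".current" suffix and is renamed when rotation moves on:
    production-logstash-eqiad_jvm_gc.pid65614.log.0
    production-logstash-eqiad_jvm_gc.pid65614.log.1.current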
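
The conflict with logrotate noted at 14:26:22 comes down to the glob. Only the opening line of the stanza is quoted in the conversation, so the body below is a hypothetical illustration, not the actual config.

    # logrotate stanza (hypothetical body; only the opening glob line is quoted above)
    /var/log/elasticsearch/*.log {
        daily
        rotate 7
        compress
        missingok
    }

    # The *.log glob matches the plain application log:
    #   /var/log/elasticsearch/production-logstash-eqiad.log
    # but never the JVM-rotated GC segments, whose names end in ".log.N" or
    # ".log.N.current", e.g.:
    #   /var/log/elasticsearch/production-logstash-eqiad_jvm_gc.pid65614.log.1.current
    # so those files are only ever rotated by the JVM itself and never compressed.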