[02:47:15] hi! was there an update to the models used for ptwiki? I'm seeing a lot of red lines in my watchlist and recent changes lately
[02:47:20] even from bot edits
[02:48:56] consider these 3 edits, for example:
[02:48:57] https://ores.wmflabs.org/scores/ptwiki/?models=damaging|reverted&revids=45224556|45224507|45223695
[08:44:48] wiki-ai/revscoring#616 (feature_profiling - 569c796 : halfak): The build was fixed. https://travis-ci.org/wiki-ai/revscoring/builds/120251577
[12:21:04] hey halfak
[12:21:07] :-)
[12:23:29] o/ Helder
[12:23:33] What's up?
[12:30:31] o/ Helder
[12:31:27] halfak, do you know if the models used for ptwiki changed this week?
[12:31:44] They shouldn't have changed in any substantial way.
[12:31:48] Why? What's up?
[12:31:57] possibly a regression with bot edits: https://ores.wmflabs.org/scores/ptwiki/?models=damaging|reverted&revids=45224556|45224507|45223695
[12:32:23] suddenly my watchlist is full of "red" and "orange" lines
[12:32:25] Eek. That looks bad.
[12:32:30] OK. I'm checking it out now.
[12:33:24] yeah, I suppose so
[12:51:48] Man... I don't know what the heck is going on.
[12:52:23] So, the model itself works when I use it directly with revscoring.
[12:52:39] My local version of ORES runs the model just fine.
[12:53:11] The *deployed* model actually works when it is only scoring one edit, but it breaks when it is scoring many edits.
[12:53:24] Scoring many edits with the HEAD of ores-wikimedia-config works fine.
[12:53:32] I think I might just try a deployment.
[12:53:41] hmm..
[12:53:43] I'll increment the version IDs first so that we can invalidate the cache.
[12:53:52] weird
[12:54:12] yeah... and other models seem to work fine on the remote server.
[12:54:47] no similar reports from other wikis?
[12:55:49] Staging works! https://ores-staging.wmflabs.org/scores/ptwiki/?models=damaging|reverted&revids=45224556|45224507|45223695
[12:55:50] WTF!!!
[12:55:54] WHY
[12:57:53] Helder, is ORES *still* misbehaving for these models?
[12:58:18] the one from ores-staging?
[12:58:19] I meant to ask if *new* edits still look bad.
[12:58:31] Na. Regular ORES, via ScoredRevisions.
[13:00:12] This is almost all red for me: https://pt.wikipedia.org/wiki/Special:Contribs/Aleth_Bot
[13:01:43] How is the RC feed?
[13:04:04] * Helder is having issues with his connection
[13:04:57] RC looks "as red as it should be"
[13:05:29] Helder, I wonder if this was some sort of fluke.
[13:05:40] I'll file a bug to investigate.
[13:05:54] oops... spoke too soon
[13:05:54] But I think I'm not going to rush out a new deployment for cache invalidation.
[13:06:07] Uh oh
[13:06:13] by increasing the number of rows to 100 in RC, I see the bottom of the page is all red
[13:06:24] Hmm... I wonder if it is a time period
[13:06:28] (like 90%)
[13:07:37] and they are marked as 100% damaging / 100% reverted
[13:07:49] (at least the 15 rows I checked)
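
The score lookups quoted above go through the ORES web API directly. Below is a minimal sketch of the same query in Python, assuming the `requests` library is available; the endpoint, model names, and revision IDs are copied from the URLs in this log, and the helper name is illustrative.

```python
import requests

# ORES scores endpoint for ptwiki, as used in the links above.
ORES_URL = "https://ores.wmflabs.org/scores/ptwiki/"

def get_scores(rev_ids, models=("damaging", "reverted")):
    """Fetch damaging/reverted scores for the given ptwiki revision IDs."""
    params = {
        "models": "|".join(models),
        "revids": "|".join(str(r) for r in rev_ids),
    }
    response = requests.get(ORES_URL, params=params)
    response.raise_for_status()
    return response.json()

# The three bot edits reported at 02:48.
print(get_scores([45224556, 45224507, 45223695]))
```

The per-model probabilities in the response are what tools like ScoredRevisions turn into the red/orange highlighting mentioned above.
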
[13:08:19] e.g. https://pt.wikipedia.org/w/index.php?diff=45234235
[13:08:54] https://ores.wmflabs.org/scores/ptwiki/?models=damaging|reverted&revids=45234235
[13:10:02] (the user is a sysop)
[13:10:39] Looks like it is a 15-minute time period
[13:11:43] Helder, try again: https://ores.wmflabs.org/scores/ptwiki/?models=damaging|reverted&revids=45234235
[13:11:48] I just manually invalidated the cache.
[13:12:51] looks good on the history: https://pt.wikipedia.org/w/index.php?title=Categoria:Asas_voadoras&action=history
[13:14:31] however, many of his edits still have (too) high scores
[13:14:45] and they span a period of ~30 minutes
[13:15:43] 12:27 - 12:47
[13:22:09] OK. I'm going to leave this be for now. I'm suddenly very tired (jet lag), so I'm going to take a break from the keyboard.
[13:22:18] Thanks for reporting this, Helder.
[13:22:32] no problem
[13:23:02] have a good rest
[17:10:40] halfak: I know why it's happening
[17:10:51] I'm thinking of a robust way to fix it
[17:11:12] it's because we deleted the git clone from puppet and moved it to the fabfile
[17:12:25] so basically, when "initialize server" in the fabfile hasn't been run, puppet fails
[17:24:12] halfak: around?
[17:24:37] ^ sort of. Working with people at the hackathon.
[17:24:43] But I can see what you are saying and I agree.
[17:24:53] I had to do some manual folder creation before puppet would run.
[17:25:09] cool, I'm trying to fix that
[17:25:21] if we make that folder first, the git clone would fail
[17:26:18] this issue happens on every initialization of an ORES server
[18:32:12] halfak: my PR apparently failed Travis. No idea why.
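
On the provisioning problem described at the end of the log: below is a minimal sketch of the kind of "initialize server" task being discussed, assuming a Fabric 1.x style fabfile. The path and repository URL are illustrative, not the actual ores-wikimedia-config values; the sketch only captures the ordering constraint, i.e. the checkout has to exist before puppet expects it, and `git clone` fails when the target directory already exists and is non-empty.

```python
from fabric.api import run, sudo, task
from fabric.contrib.files import exists

CONFIG_DIR = "/srv/ores/config"  # hypothetical deployment target
REPO = "https://github.com/wiki-ai/ores-wikimedia-config.git"  # illustrative URL

@task
def initialize_server():
    """Ensure the config checkout exists before puppet runs."""
    if exists(CONFIG_DIR + "/.git"):
        # Already cloned: just update, so re-running the task stays safe.
        run("cd {} && git pull".format(CONFIG_DIR))
    else:
        # Create only the parent directory and let git create CONFIG_DIR itself,
        # since cloning into an existing non-empty directory fails.
        sudo("mkdir -p $(dirname {})".format(CONFIG_DIR))
        run("git clone {} {}".format(REPO, CONFIG_DIR))
```

Making the task idempotent along these lines is one way to avoid the failure on every new server initialization that was reported above.
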