[19:36:14] halfak: question if you are there?
[19:36:38] Hey! In a meeting. Can respond in 24 minutes.
[19:51:17] nuria, done early. What's up?
[19:51:39] halfak: remember this slide https://docs.google.com/presentation/d/1-gLgBn6F37IYOprCclNVwLlb9EIY2etDOEE8SCkskpM/edit#slide=id.g44de3cade0_0_342?
[19:51:59] halfak: how did we compute the reduction of hours?
[19:57:21] nuria, it's based on a theoretical model using the new RecentChanges filter.
[19:57:54] Without ORES, one would need to review 100% of revisions to ensure that you catch vandalism.
[19:58:19] With a model (in most cases) you can filter that down to 10% of revisions and be sure you're catching practically all the vandalism.
[19:58:27] This is not an actual measurement of time saved.
[19:58:42] It is based on English Wikipedia's scale and does not include other wikis.
[20:18:07] halfak: ok, I see
[20:18:22] halfak: and for Commons now we could apply a similar reasoning
[20:18:46] halfak: right? like every single upload now is moderated by hand
[20:19:20] Right. Assuming the model fits. E.g. if people are patrolling uploads, then in theory, they are missing stuff unless they patrol them all.
[20:19:32] And if we could narrow down what needs patrolling, that would be huge.
[20:20:07] halfak: I imagine that vandalism and porn uploads are patrolled by hand, ok, yeah, this is enough
[20:20:13] halfak: thank you
[20:20:25] No problem. Happy to help!
[20:50:54] nuria, we're looking at that NSFW classifier with interest too. We think it might fit into our k8s plans for ORES 2.0
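The "reduction of hours" reasoning in the exchange above can be sketched as a back-of-the-envelope calculation. The chat confirms only the idea (without ORES you review 100% of revisions; with a model you review roughly 10% while still catching practically all vandalism); the function name, revision count, and per-review time below are illustrative assumptions, not figures from the slide.

```python
# Sketch of the theoretical time-savings model discussed above.
# All concrete numbers here are hypothetical placeholders; the chat
# confirms only the 100% -> ~10% review-rate reduction.

def review_hours(revisions, seconds_per_review, flagged_fraction=1.0):
    """Total patrol hours if only `flagged_fraction` of revisions is reviewed."""
    return revisions * flagged_fraction * seconds_per_review / 3600

# Illustrative inputs (assumptions): 100k revisions/day, 5 s per review.
baseline = review_hours(100_000, 5)           # review everything (no ORES)
with_model = review_hours(100_000, 5, 0.10)   # review only the flagged ~10%

print(round(baseline - with_model, 1))  # hours saved per day -> 125.0
```

Note this is exactly the caveat halfak gives: a theoretical estimate at one wiki's scale, not a measurement of actual time saved.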