[06:18:30] Scoring-platform-team (Current), Research, drafttopic-modeling: Add wikidata to articletopic pipeline - https://phabricator.wikimedia.org/T254289 (Dibyaaaaax) **Run-5** | Classifier | Fasttext | Parameters | `loss=ova` `epoch=25` `dim=50` `lr=0.1` `pretrainedVectors=word2vec/wikidata-20200501-lea...
[10:47:37] Scoring-platform-team (Current), Research, drafttopic-modeling: Add wikidata to articletopic pipeline - https://phabricator.wikimedia.org/T254289 (Dibyaaaaax) **Run-6** | Classifier | Fasttext | Parameters | `loss=ova` `epoch=25` `dim=50` `lr=0.1` `pretrainedVectors=word2vec/wikidata-20200501-lea...
[13:51:10] Scoring-platform-team, drafttopic-modeling: Clean up History and Society.Society in the topic taxonomy. - https://phabricator.wikimedia.org/T246912 (Isaac) See https://github.com/wikimedia/wikitax/pull/6 for implementation of these changes
[14:01:04] Scoring-platform-team (Current), Research, drafttopic-modeling: Add wikidata to articletopic pipeline - https://phabricator.wikimedia.org/T254289 (Dibyaaaaax) **Run-7** | Classifier | GradientBoosting | Parameters | `n_estimators=150` `max_depth=5` `max_features="log2"` `learning_rate=0.1` | Num...
[15:35:00] hi all. Do you have any pointer to find the list of features used by ORES to compute articles' quality?
[19:41:31] hey there, ORES people
[19:42:05] i am wondering what would be a good test to see if ORES also works over https
[19:42:07] Aaron doesn't seem to be connected
[19:42:13] kevinbazira: ^
[19:42:17] Could you help?
[19:44:18] so.. the goal is: have TLS between the caching layer (ATS) and the ORES backends. and what i did is: add envoy-proxy on each ores* backend, create a certificate for it, and configure it to listen on 443 and talk "upstream" (local_port) to 8081
[19:44:27] that's where uwsgi is listening
[19:48:18] what i'm testing is... is talking to the existing "http://ores.discovery.wmnet:8081" now equivalent to talking to "https://ores.discovery.wmnet"? it's not yet, so i will keep debugging that
[22:00:50] need some help with `revscoring extract`. I'm extracting features for about 1M revisions, but the script terminated partway through because of some error. Does anyone know how to resume the extraction from the point where it was terminated?
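
The fasttext runs above (Run-5/Run-6) can be reproduced roughly as follows. This is a minimal sketch assuming the fasttext Python bindings; the training file and pretrained-vector path are placeholders, since the real vector path is truncated in the log.

```python
import fasttext

# Hyperparameters from Run-5/Run-6 above; file paths are placeholders,
# not the pipeline's actual inputs.
model = fasttext.train_supervised(
    input="wikidata_articletopic.train",       # hypothetical training file
    loss="ova",                                 # one-vs-all, for multilabel topics
    epoch=25,
    dim=50,                                     # must match the pretrained vector dimension
    lr=0.1,
    pretrainedVectors="word2vec/wikidata-pretrained.50d.vec",  # placeholder path
)

# With ova loss, ask for every label above a threshold rather than the single best one.
labels, probs = model.predict("Q42 P31 Q5 P106 Q36180", k=-1, threshold=0.5)
print(labels, probs)
```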
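
Run-7 swaps in a gradient-boosted model. Below is a minimal sketch of those parameters with scikit-learn on synthetic multilabel data; the one-vs-rest wrapper is an assumption for the multilabel topic setting, not something stated in the truncated task comment.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

# Synthetic stand-in data; the real pipeline trains on wikidata-derived features.
X, y = make_multilabel_classification(n_samples=2000, n_features=50, n_classes=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Parameters from Run-7 above; OneVsRestClassifier is assumed here to handle
# the multilabel topic targets.
clf = OneVsRestClassifier(
    GradientBoostingClassifier(
        n_estimators=150,
        max_depth=5,
        max_features="log2",
        learning_rate=0.1,
    )
)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```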
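
On the 15:35 question about which features the article quality models use: to the best of my knowledge they are defined per-wiki in the articlequality repository's feature_lists modules. The sketch below assumes that package layout and the `wp10` list name, both of which may differ between versions.

```python
# Assumed module layout: articlequality/feature_lists/enwiki.py defines the
# feature list used by the enwiki article quality model (historically `wp10`).
from articlequality.feature_lists import enwiki

for feature in enwiki.wp10:
    print(feature)
```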
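
For the 19:48 HTTP/HTTPS equivalence check, one small test is to fetch the same path from both endpoints and compare status and body. A sketch, assuming the `/v3/scores/` path and a CA-bundle location; the `verify=` argument must point at whatever CA signed the envoy certificate.

```python
import requests

# Same request against the existing uwsgi port and the new envoy TLS listener.
HTTP_URL = "http://ores.discovery.wmnet:8081/v3/scores/"
HTTPS_URL = "https://ores.discovery.wmnet/v3/scores/"

plain = requests.get(HTTP_URL, timeout=10)
# The CA bundle path here is a guess; substitute the internal CA if needed.
tls = requests.get(HTTPS_URL, timeout=10, verify="/etc/ssl/certs/ca-certificates.crt")

assert plain.status_code == tls.status_code, (plain.status_code, tls.status_code)
assert plain.json() == tls.json(), "responses differ between HTTP and HTTPS"
print("HTTP and HTTPS responses match")
```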
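
For the 22:00 question about resuming `revscoring extract`: to the best of my knowledge the utility has no built-in resume, so one workaround is to filter the input down to the revisions not yet present in the partial output and rerun on the remainder. A sketch, assuming both files are newline-delimited JSON with a `rev_id` field and placeholder file names:

```python
import json

# Collect the rev_ids already written to the partial output.
done = set()
with open("features.partial.jsonl") as f:
    for line in f:
        try:
            done.add(json.loads(line)["rev_id"])
        except (ValueError, KeyError):
            pass  # skip a truncated last line, if any

# Write out only the revisions that still need extraction.
with open("revisions.jsonl") as src, open("revisions.remaining.jsonl", "w") as dst:
    for line in src:
        if json.loads(line)["rev_id"] not in done:
            dst.write(line)

# Then rerun the extract on revisions.remaining.jsonl and append its output
# to features.partial.jsonl; together the two runs cover the full 1M revisions.
```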