[00:12:58] AaronS: asyncio is awesome! wikibugs uses asyncio_redis and an asyncio IRC library
[00:22:24] Datasets-General-or-Unknown, MediaWiki-Core-Team: Improve Wikimedia dumping infrastructure - https://phabricator.wikimedia.org/T88728#1019822 (Legoktm)
[00:22:50] Datasets-General-or-Unknown, MediaWiki-Core-Team: Improve Wikimedia dumping infrastructure - https://phabricator.wikimedia.org/T88728#1018933 (Legoktm) (Every time I see "Hack" I always think of the programming language)
[00:41:41] awk is so awesome
[00:42:00] is there some sort of servlet container for running awk web scripts? ;)
[00:43:05] http://crashcourse.housegordon.org/webawk/
[00:43:58] I wrote an awk script for aggregating xenon logs in a few different ways
[00:44:46] oh, cool.
[00:45:15] total with recursive functions properly handled: http://paste.tstarling.com/p/lnNWoy.html
[00:45:18] what does it aggregate by?
[00:45:55] that one is basically the percentage of wall clock time in which a given function is present anywhere on the stack
[00:46:07] right, so same as the flame graphs
[00:46:16] no, not really
[00:46:24] oh, i see
[00:46:26] anywhere in the stack
[00:46:40] by level 1 function, this is fairly similar to a flame graph: http://paste.tstarling.com/p/ORoOWN.html
[00:46:56] yeah, if a function has multiple callers, the aggregate % of wall clock time is not indicated
[00:47:12] you could continue on up, this is level 2: http://paste.tstarling.com/p/ERMfxC.html
[00:47:30] it is basically an aggregate of the given vertical position in the flame graph
[00:48:10] it'd be awesome (and easy) to automate the generation of these reports by adding a couple of lines to the cron job script
[00:48:19] and it can go from the top down: http://paste.tstarling.com/p/XcUUoI.html
[00:48:19] could you do that?
[00:48:29] that is pretty damn neat
[00:48:57] probably
[00:49:07] btw, what do you mean, recursive functions properly handled? do you think abbreviating away recursive calls is the wrong thing to do?
[00:49:14] btw you should see what I wrote in #mediawiki_security, the logs on fluorine should really be in /a
[00:49:25] yeah i am already rsyncing the current contents to /a
[00:49:36] well, the old profiler double-counted recursive functions
[00:49:45] it was basically time spent multiplied by recursion depth
[00:49:59] the StartProfiler xenon code collapses recursive calls
[00:50:37] well, I did run it once without the recursion filtering, and I got 89% PPFrame_Hash::expand
[00:50:54] so whatever it is doing is not the same as what I am doing
[00:50:56] https://github.com/wikimedia/operations-mediawiki-config/blob/master/wmf-config/StartProfiler.php#L90
[00:52:17] otherwise the message size could get huge
[00:52:46] and i was too lazy to implement a representation utilizing some lookup table
[00:53:03] yeah, if a function calls itself directly then that will filter it out
[00:53:25] but PPFrame_Hash::expand() usually doesn't, it is re-entered by functions it calls
[00:53:32] ah, right
[00:53:40] a -> b -> a -> b
[00:58:06] legoktm: what's the plan for https://phabricator.wikimedia.org/T72576 (global user page) rollout? Just ride the train now that it's on testwiki, or are we holding off until more testing/etc?
[01:00:21] <^d> bd808: The biggest drawback (lack of FLOSS) from yesterday seems better in the invite we got for next week.
[01:00:27] * ^d was reading a resume
[01:00:38] TimStarling: is now a good time to replace /srv/xenon with a symlink, or are you in the middle of something? It should only take a moment.
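A minimal sketch of the aggregation TimStarling describes above: the share of samples in which a given function appears anywhere on the stack, counting each function at most once per sample so that recursion (a -> b -> a -> b) is not double-counted. It assumes, hypothetically, that the xenon samples arrive in the collapsed flame-graph format "func1;func2;func3 42" (a semicolon-joined stack followed by a sample count); Tim's actual awk script is in the pastes above and the real log format may differ.

```python
#!/usr/bin/env python3
# Sketch only: "stack count" input format is an assumption, not documented.
import sys
from collections import Counter

presence = Counter()  # samples in which each function appears at least once
total = 0             # total samples seen

for line in sys.stdin:
    stack, _, count = line.rstrip().rpartition(' ')
    if not stack or not count.isdigit():
        continue
    n = int(count)
    total += n
    # set() is the recursion handling: a;b;a;b credits a and b once each
    for func in set(stack.split(';')):
        presence[func] += n

for func, n in presence.most_common(25):
    print(f'{100.0 * n / total:6.2f}%  {func}')
```

Run as something like `zcat xenon-*.log.gz | python3 stack_presence.py` (file names hypothetical); aggregating by only the first one or two stack frames instead of the whole set gives the "level 1" and "level 2" reports linked above.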
[01:00:52] go ahead
[01:01:09] I copied the files I wanted to work on into /a/tmp
[01:02:29] greg-g: more testing (people have started to report bugs, yay!), and I'd like to get the cache invalidation (T76410) and batch lookup (T88644) bugs fixed before we do a full rollout... they're not true blockers but more like "it would be really nice to have these"
[01:05:31] TimStarling: done
[01:06:54] legoktm: cool, so "not next week" is all I'm remembering right now :P
[01:07:43] greg-g: yes, not next week. Sometime before February ends though
[01:08:10] * greg-g nods
[02:12:12] robla: https://www.mediawiki.org/wiki/User:Deskana_(WMF)/Power_user_tools_development_team
[02:24:18] http://jklmnn.de/imagejs/
[03:43:44] ori: hey, is there anything I can do to help investigate the outage you had? I don't really know many details of what happened
[03:58:13] swtaarrs: I don't think so, but thanks for the offer! The details are here: . It was not directly related to HHVM.
[03:58:27] cool
[03:58:29] * swtaarrs reads
[03:59:44] In fact one surprising and welcome discovery is that we're not completely dead in the water without memcached. The site was actually up and serving pages for 15-20 minutes without memcached.
[04:00:03] huh
[04:00:20] I think we'd be beyond dead in the water without memcache
[04:00:41] but we don't have anything like varnish
[04:08:45] swtaarrs: since you're here, I'm curious about your LLVM backend work and what it means for HHVM -- is the idea to drop the custom backend and rely on LLVM eventually?
[04:09:03] ori: depends on how broadly you define "backend"
[04:09:09] it's replacing a relatively small part of the jit
[04:09:33] and we're unlikely to completely replace that small part of the jit with LLVM due to code generation speed
[04:09:35] what's the benefit, then? reduce duplication and benefit from ongoing work to optimize LLVM?
[04:09:46] yeah
[04:09:56] there are lots of optimizations implemented in llvm that we could do, but haven't yet
[04:10:08] we're hoping to leverage those and any future work
[04:10:59] ori: I don't know how familiar you are with the internals of the jit, but we basically go hhbc (bytecode) -> hhir -> lots of optimizations on hhir -> vasm -> some optimizations on vasm -> machine code
[04:11:12] the llvm backend only really replaces the vasm -> machine code part of the pipeline
[04:11:59] vasm is a low level IR that looks very similar to x86
[04:12:52] * ori wonders what LLVM bytecode looks like
[04:13:12] http://llvm.org/docs/LangRef.html
[04:13:13] I like it
[04:13:47] unfortunately the open-source hhvm build can't use llvm until we get our changes upstreamed, which will be a few more months at least
[04:14:07] or we could publish our internal fork of it but we'd much rather get things upstream
[04:15:32] a few months isn't long, in the grand scheme of things. out of curiosity, what's an example optimization that LLVM implements and HHVM doesn't? (I imagine there are many and most are too obscure to explain concisely)
[04:16:02] its load-store elimination is much better than ours, which is probably the biggest one
[04:16:30] * ori reads http://blog.llvm.org/2009/12/introduction-to-load-elimination-in-gvn.html
[04:16:32] instruction selection is a big one, too. it makes better use of newer instructions when the processor supports them
[04:17:08] also anything having to do with loops, since we don't have any loop-based optimizations
[04:17:14] like loop-invariant code motion
[04:18:25] once we get things working more I'm planning on writing a blog post with some real-life examples of how it does better
[04:18:30] assuming I can find some good ones :)
[04:21:21] meanwhile, i have had to put hhvm away and focus on front-end optimization of our js-based visual editor, which is fun but very different! :) i've been working on a profiling tool that uses chrome's poorly-documented but very cool remote debugging protocol: https://github.com/wikimedia/operations-puppet/blob/production/files/ve/vbench
[04:21:37] it has some wmf-specific things in it right now but i hope to clean it up so it's more reusable
[04:22:03] nice
[04:22:36] I should learn js some day
[04:24:52] ori: is that the editor that people kept getting angry about and reverting?
[04:26:07] heh. yes
[04:49:24] yep
[06:46:04] Phabricator, MediaWiki-Core-Team, MediaWiki-General-or-Unknown, Project-Creators: Allow to search tasks about MediaWiki core and core only (create MediaWiki umbrella project?) - https://phabricator.wikimedia.org/T76942#1020205 (Nemo_bis)
[07:52:16] <_joe_> < swtaarrs> I should learn js some day
[07:52:18] <_joe_> DON'T
[07:52:24] heh
[07:52:29] it can't be worse than PHP, can it?
[07:52:30] >_>
[07:52:34] <_joe_> it is.
[07:52:38] ouch
[07:52:51] <_joe_> I saw sgolemon at FOSDEM, I wondered if you would be there
[07:53:02] <_joe_> but no one else from your team was, apparently
[07:53:15] nope
[07:53:22] she's the main conference-goer on our team
[07:53:28] at least for open source stuff
[07:53:30] <_joe_> you should come, it's nice and fun
[07:53:50] <_joe_> it's like the punk-rock version of techie conferences
[07:53:54] hah
[07:54:00] oooh it was in brussels
[07:54:04] <_joe_> yes
[07:54:06] any idea where it will be next year?
[07:54:12] <_joe_> in brussels
[07:54:16] <_joe_> it's always there
[07:54:19] oh cool
[07:54:35] yeah maybe I'll try to go next year
[07:54:44] and get a free trip to europe
[07:55:11] <_joe_> eheh ok
[07:55:30] <_joe_> you're working on a FLOSS project after all, right? ;)
[07:56:15] yup
[07:58:10] although it is controlled by an evil corporation
[07:58:21] I'm sure RMS wouldn't approve
[08:12:20] <_joe_> RMS was actually there
[08:12:29] <_joe_> at FOSDEM
[08:12:35] <_joe_> I could've asked him
[08:13:59] <_joe_> and no, RMS doesn't think corporations are evil per se, he just wants you to make all your code public. I'd love that too :P
[08:14:30] <_joe_> oh well, his position about FB has to do with other things, not hhvm for sure.
[09:16:20] MediaWiki-Page-editing, MediaWiki-Core-Team: Use parentRevId field for section change merging instead of timestamps - https://phabricator.wikimedia.org/T88734#1020348 (Schnark) Sounds similar to what I suggested in T34037#366435.
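For readers who haven't met it, the loop-invariant code motion swtaarrs mentions above moves work whose result cannot change between iterations out of the loop. A toy Python illustration of the before/after shapes, with made-up function names; LLVM of course performs this transformation on its IR automatically, not on source code:

```python
import math

def before_licm(values, x):
    out = []
    for v in values:
        # math.sqrt(x) is recomputed on every iteration even though
        # x never changes inside the loop
        out.append(v * math.sqrt(x))
    return out

def after_licm(values, x):
    # the invariant computation is hoisted above the loop: same result,
    # one sqrt instead of len(values) of them
    s = math.sqrt(x)
    out = []
    for v in values:
        out.append(v * s)
    return out
```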
[09:54:41] MediaWiki-Configuration, MediaWiki-Core-Team: Support something similar to a resource template for localBasePath and remoteExtPath duplication in extension.json - https://phabricator.wikimedia.org/T88786#1020370 (Legoktm)
[09:55:07] MediaWiki-Configuration, MediaWiki-Core-Team: Support something similar to a resource template for localBasePath and remoteExtPath duplication in extension.json - https://phabricator.wikimedia.org/T88786#1020373 (Legoktm) a: Legoktm
[13:01:08] Phabricator, MediaWiki-Core-Team, MediaWiki-General-or-Unknown, Project-Creators: Allow to search tasks about MediaWiki core and core only (create MediaWiki umbrella project?) - https://phabricator.wikimedia.org/T76942#1020540 (Aklapper)
[14:14:54] Wikidata, MediaWiki-Core-Team, wikidata-query-service: Write example queries in different query languages - https://phabricator.wikimedia.org/T86786#1020594 (JanZerebecki) This also has suggestions: https://www.mediawiki.org/wiki/Talk:Wikibase/Indexing
[14:41:20] MediaWiki-Core-Team, wikidata-query-service: Investigate ArangoDB for Wikidata Query - https://phabricator.wikimedia.org/T88549#1020611 (Neunhoef) This is a report about an actual experiment. I downloaded 20150126.json.gz to an AWS r3.2xlarge instance (61GB RAM, 80 GB SSD, 8 vCPUs) and then used ArangoDB V 2...
[14:53:32] MediaWiki-Core-Team, wikidata-query-service: Investigate ArangoDB for Wikidata Query - https://phabricator.wikimedia.org/T88549#1020623 (Neunhoef) Further experiments: I dropped each of these indexes and recreated it. This isolates the rebuilding of the indexes from the loading (and scanning) of the collecti...
[14:53:51] hashar: I filed https://phabricator.wikimedia.org/T88798 for that issue about Jenkins using old php-luasandbox.
[15:04:06] MediaWiki-Core-Team, wikidata-query-service: Investigate ArangoDB for Wikidata Query - https://phabricator.wikimedia.org/T88549#1020635 (Neunhoef) Disclaimer: Sorry, I forgot to introduce myself: My name is Max and I also work for ArangoDB. Analysis The 3M documents need around 11 GB of main memory. If you...
[15:36:18] <_joe_> looks like arangodb people really really want us onboard
[15:51:00] MediaWiki-Core-Team, wikidata-query-service: Investigate ArangoDB for Wikidata Query - https://phabricator.wikimedia.org/T88549#1020678 (Manybubbles) >>! In T88549#1020635, @Neunhoef wrote: > Disclaimer: Sorry, I forgot to introduce myself: My name is Max and I also work for ArangoDB. Thanks! > The 3M docum...
[15:52:27] https://gerrit.wikimedia.org/r/#/c/187624/ has two +1s, including one from Chris. Anyone feel up to +2ing?
[16:01:02] MediaWiki-Core-Team, wikidata-query-service: Investigate ArangoDB for Wikidata Query - https://phabricator.wikimedia.org/T88549#1020682 (Manybubbles) >>! In T88549#1018178, @Fceller wrote: > Hi, I'm the CTO of ArangoDB, so my comments are most certainly biased. I still would like to tell you about our opinion...
[16:29:00] anomie: {{done}}
[16:29:09] bd808: Thanks
[16:53:25] <_joe_> ok, link for the hangouts session is https://plus.google.com/hangouts/_/wikimedia.org/opssessions
[17:03:54] you misspelled obsessions
[17:11:44] "what you will actually do" :)
[17:11:48] * greg-g is reading the slides
[17:32:23] Any takers to merge https://gerrit.wikimedia.org/r/189031?
[17:52:39] anomie: done :)
[17:52:53] legoktm: Thanks!
[18:14:03] legoktm: I'm having a brain block: do you have deploy rights?
[18:14:12] greg-g: yes...
[18:14:32] cool
[18:15:43] legoktm: I'm just lining up the globaluserpage thingy, should we just plan on Wed the 18th?
[18:15:44] Everybody on mw-core-team should have deploy rights.
[18:16:02] * bd808 wonders if SMalyshev has applied yet
[18:16:06] bd808: agreed, in the "if they don't, they should" sense
[18:17:12] greg-g: yeah, that sounds good
[18:17:57] legoktm: cool, I'll put it in the train window, but you can just do it after the train is done in the same window
[18:50:00] _joe_: heh, I thought he was very against any non-GPL licenses
[18:51:11] <_joe_> mh no. He is against non-free licences
[18:52:11] <_joe_> swtaarrs: while we're here - the Translation Cache in HHVM gets evicted in some way, right? What is the algorithm you use?
[18:53:07] _joe_: it gets evicted when the process dies :)
[18:53:22] there's currently some support for moving code around in the TC but it's not hooked up to anything
[18:53:55] <_joe_> oh ok
[18:54:09] <_joe_> because I do see that it doesn't grow indefinitely
[18:54:17] <_joe_> even with subsequent deploys
[18:54:24] <_joe_> which should change the code
[18:54:40] hmm
[18:54:49] <_joe_> well, not that hhvm stays up for more than 2 days on average :P
[18:54:53] _joe_: do you update files in place or deploy a whole new tree?
[18:54:56] haha, yeah
[18:55:22] <_joe_> sorry gtg
[19:04:55] Librarization, MediaWiki-Core-Team, MediaWiki-General-or-Unknown: [Regression] MediaWiki should detect absent or outdated vendor - https://phabricator.wikimedia.org/T74777#1020999 (Legoktm) Open>Resolved
[19:05:07] swtaarrs: we do both in-place updates and new trees. We have a bit of in-place almost every day (M-F) and then add a new MW version each Wednesday
[19:05:25] ok
[19:06:07] MediaWiki-Core-Team: Revert HHVM tag hiding hack - https://phabricator.wikimedia.org/T1205#1021011 (Legoktm)
[19:06:07] basically on Wednesdays we remove version N-1 and add version N+1 with version N still active
[19:07:03] MediaWiki-Core-Team, MediaWiki-extensions-CentralAuth, MediaWiki-Authentication-and-authorization: UserLoadFromSession considered evil - https://phabricator.wikimedia.org/T43201#1021014 (Legoktm)
[19:10:25] afk for a bit. I have to find a bday present for my Mom
[19:11:36] MediaWiki-Core-Team, wikidata-query-service: Figure out if Neo4j is a possible alternative to Titan - https://phabricator.wikimedia.org/T88571#1014978 (Manybubbles) Short answer is that Neo4j is indeed a contender and a pretty good one at that.
[19:12:02] MediaWiki-Core-Team, wikidata-query-service: Figure out if Neo4j is a possible alternative to Titan - https://phabricator.wikimedia.org/T88571#1021033 (Manybubbles) >>! In T88571#1017609, @daniel wrote: > Neo4j river plugin for Elastic: https://github.com/sksamuel/elasticsearch-river-neo4j Rivers are essenti...
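A toy model (not the real deployment tooling) of the Wednesday rotation bd808 describes above: with version N live, the train removes N-1 and stages N+1 while N stays active.

```python
def wednesday_train(deployed):
    """deployed is the set of installed version numbers, e.g. {14, 15},
    where the highest number is the live version N.
    Returns the post-train set {N, N+1}."""
    n = max(deployed)        # the currently live version N
    deployed.discard(n - 1)  # remove version N-1
    deployed.add(n + 1)      # add version N+1; N remains active
    return deployed

# Example: versions 14 and 15 deployed; after the train, 15 and 16 are.
print(sorted(wednesday_train({14, 15})))  # -> [15, 16]
```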
[19:14:27] MediaWiki-Core-Team: Port Wikidata-Gremlin to test against Neo4j - https://phabricator.wikimedia.org/T88821#1021064 (Manybubbles) NEW a: Manybubbles
[19:14:43] MediaWiki-Core-Team, wikidata-query-service: Port Wikidata-Gremlin to test against Neo4j - https://phabricator.wikimedia.org/T88821#1021073 (Manybubbles) p: Triage>High
[19:50:06] MediaWiki-Core-Team, wikidata-query-service: Port Wikidata-Gremlin to test against Neo4j - https://phabricator.wikimedia.org/T88821#1021257 (Manybubbles) a: Manybubbles>Smalyshev
[19:56:46] MediaWiki-extensions-OAuth, MediaWiki-Core-Team: Add way for OAuth apps to only authenticate (no other valid rights) - https://phabricator.wikimedia.org/T88757#1021272 (Anomie) a: Anomie
[20:51:35] <^d> manybubbles: I think I've hit a weird phantom shard bug in ES in some testing.
[20:51:45] oh poopsies
[20:52:14] <^d> So I have an index `A` that I created. I shot one of the nodes in its setup, so the cluster went red and some of `A`'s shards couldn't get allocated.
[20:52:50] <^d> Here's where it gets fun
[20:53:15] <^d> I see 10 unallocated shards (fair enough, I shot the node). Drop the index and the cluster goes back to green.
[20:53:22] <^d> Yay, no more shards missing!
[20:53:43] <^d> I recreate `A`... and guess what! The unallocated shards show back up!
[20:56:06] <^d> Lunchtime!
[21:09:14] ^d: oh boy! uh - that is phantom shit.
[21:09:19] bug time!
[21:09:34] I bet you can work around it by force allocating them. they'll be empty but oh well
[21:10:05] <^d> Let's see...
[21:20:27] <^d> ElasticsearchIllegalArgumentException[[allocate] failed to find [A][3] on the list of unassigned shards]
[21:23:38] <^d> or
[21:25:07] <^d> ElasticsearchIllegalArgumentException[[allocate] allocation of [enwiki_general_first][6] on node [deployment-elastic05][wSPDvi8cRDWOvfgXcnvm6w][deployment-elastic05][inet[/10.68.17.179:9300]]{master=true} is not allowed, reason: [NO(shard cannot be allocated on same node [wSPDvi8cRDWOvfgXcnvm6w] it already exists on)][YES(node passes include/exclude/require filters)][YES(primary is already active)][YES(below shard recovery limi
[21:25:07] <^d> t of [2])][YES(allocation disabling is ignored)][YES(allocation disabling is ignored)][YES(no allocation awareness enabled)][YES(total shard limit disabled: [-1] <= 0)][YES(target node version [1.3.7] is same or newer than source node version [1.3.7])][YES(enough disk for shard on node, free: [56.4gb])][YES(shard not primary or relocation disabled)]]
[21:25:38] <^d> If you drop the index, the shards disappear and you can't route them. If you create the index the shards reappear and so you can't allocate 2 of the same shard, that's crazy talk.
[21:25:52] <^d> (surprise, this is enwiki on beta)
[21:26:48] <^d> I wonder if I need to do a cold restart.
[21:35:00] MediaWiki-Core-Team: Convert JobRunner.php to PSR-3 logging and add levels - https://phabricator.wikimedia.org/T87521#1021541 (bd808) Changes deployed but validation testing blocked by {T88732}
[21:36:31] anyone see Ori around?
[21:42:00] <^d> manybubbles: Even a cold restart didn't do it. They really are stuck somewhere.
[21:42:09] kill ze shards!
[21:42:19] <^d> howwwwww???
[21:42:44] hmmm - file a bug?
[21:43:51] nuke from orbit?
[21:44:00] would work
[21:44:05] binary edit using butterfly wings and gamma rays?
[21:44:07] <^d> nuke what? all the instances?
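For reference, the force-allocation workaround manybubbles suggested (whose error output ^d pasted above) goes through Elasticsearch's cluster reroute API. A hedged sketch against the 1.x API the beta cluster is running; the host, index name, shard number, and node name below are placeholders, not the real values:

```python
import json
from urllib.request import Request, urlopen

# Force-allocate an unassigned shard (Elasticsearch 1.x reroute syntax).
# allow_primary accepts an *empty* primary, so any data on that shard is lost.
command = {
    'commands': [{
        'allocate': {
            'index': 'A',                    # placeholder index name
            'shard': 3,                      # placeholder shard number
            'node': 'deployment-elastic05',  # placeholder node name
            'allow_primary': True,
        }
    }]
}
req = Request('http://localhost:9200/_cluster/reroute',
              data=json.dumps(command).encode('utf-8'),
              headers={'Content-Type': 'application/json'})
print(urlopen(req).read().decode('utf-8'))
```

As the pasted exceptions show, this particular bug defeated the workaround: the phantom shards were neither on the unassigned list nor allocatable to a node that already held them.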
[21:44:14] <^d> That'd kill it
[21:44:20] foreach server ssh rm -rf /var/lib/elasticsearch
[21:44:26] that's nuke from orbit
[21:44:40] you could just file a bug though
[21:44:50] and get a response on Monday, probably
[21:45:02] cold stop, rm in /var/lib/elasticsearch, cluster start
[21:45:05] <^d> I'll file a bug for the phantom shard issue, but I need to unbreak beta today :)
[21:45:13] or go complain in #elasticsearch
[21:45:28] they sometimes have folks who can unbreak in there
[21:45:45] legoktm: Are you still using the sul-test server I set up in labs?
[21:46:36] <^d> I'll hold off nuking /var/lib/elasticsearch until we get a response
[21:46:42] <^d> Might need me to go spelunking
[21:47:28] SMalyshev: I added you to the https://wikitech.wikimedia.org/wiki/Nova_Resource:Mediawiki-core-team labs project. You can make VMs there now and access the existing VMs if needed
[21:47:36] legoktm: I added you too ^
[21:47:44] bd808: oh, cool, thanks
[21:48:10] it can be handy when you want to demo something to folks
[21:48:46] is there a tutorial on how to use that system?
[21:48:58] bd808: I am not
[21:49:13] SMalyshev: https://wikitech.wikimedia.org/wiki/Help:FAQ is a good place to start probably
[21:49:28] aha, thanks
[21:49:37] There is a lot of documentation in the Help namespace
[21:50:00] and just shout if you get stuck. I learned a lot about labs by helping with beta
[21:51:05] it's a lot like using AWS really. pick a VM size, initialize an instance, mess around
[21:51:56] the tricky bits are writing your own puppet manifests to set up new things repeatably
[21:51:59] MediaWiki-Core-Team, wikidata-query-service: Investigate ArangoDB for Wikidata Query - https://phabricator.wikimedia.org/T88549#1021608 (Neunhoef) > OK. Maybe its just a function of not using a super nice machine for testing. We really do want the system to scale down to work with less ram and cheap, big sp...
[21:52:29] and learning how to control that kind of custom puppet config in the wikitech interface
[21:53:39] legoktm: k. I'll add archiving that wiki and nuking the instance to my list of random things to do
[21:54:13] hmmm... maybe I'll just shut the instance down actually. might be easier
[22:01:42] MediaWiki-Core-Team, wikidata-query-service: Investigate ArangoDB for Wikidata Query - https://phabricator.wikimedia.org/T88549#1021682 (Neunhoef) > Assuming we're OK with just planning for large server deployments: Does the memory requirement scale linearly with the size of the data? How does that play with...
[22:03:42] AaronS: on https://docs.google.com/a/wikimedia.org/spreadsheets/d/1MXikljoSUVP77w7JKf9EXN40OB-ZkMqT8Y5b2NYVKbU/edit#gid=0 you made a row about generators. Did you mean continuations? Because I thought you meant continuations.
[22:16:43] ^d: replied [22:17:13] 3MediaWiki-extensions-SecurePoll, MediaWiki-Core-Team: Set up mini wikifarm in Labs which has SecurePoll on it - https://phabricator.wikimedia.org/T88725#1021737 (10bd808) a:3bd808 [22:22:30] <^d> manybubbles: I had to nuke everything and start over. It was fubar'd in my attempts to cold restart it a few times [22:22:33] <^d> I'll rebuild. [22:22:52] 3MediaWiki-Core-Team, MediaWiki-extensions-CentralAuth, SUL-Finalization: Expose users_to_rename table publicly - https://phabricator.wikimedia.org/T76774#1021755 (10Legoktm) p:5Triage>3High [22:29:31] 3MediaWiki-Core-Team, wikidata-query-service: Investigate ArangoDB for Wikidata Query - https://phabricator.wikimedia.org/T88549#1021777 (10Neunhoef) @Smalyshev: Please also note: in my tests today the 4 indexes for 3M documents needed 672 MB of data, which is a reasonable amount of 226 bytes per document. That... [22:32:35] 3MediaWiki-Core-Team, wikidata-query-service: Investigate ArangoDB for Wikidata Query - https://phabricator.wikimedia.org/T88549#1021791 (10Manybubbles) >>! In T88549#1021733, @Neunhoef wrote: > I cannot really answer your question, in particular since it will depend on whether you have only "thousands of hash i... [22:39:05] 3MediaWiki-Core-Team, wikidata-query-service: Investigate ArangoDB for Wikidata Query - https://phabricator.wikimedia.org/T88549#1021807 (10Smalyshev) @Neunhoef In current data model, each edge carries a primary value, a boolean flag and a small set (usually well under 10, in most cases 1-3 or none) secondary va... [22:43:19] is there a hard max file size for commons uploads? [22:44:27] ah found it: https://commons.wikimedia.org/wiki/Commons:Maximum_file_size [22:54:52] <^d> Yeah, 100MB for general uploads, 1GB for chunked, can do larger from command line by request