[08:15:32] Lydia_WMDE: I'm wondering if we should create some wikidata-data-users-announce list or something like that
[08:15:43] Very low traffic, just the important stuff that breaks your tools
[08:16:43] Adrian_WMDE: I'd prefer not to create more channels but make the ones we have better known and more useful. Tool breakages should be in the weekly summary at least
[08:17:03] if people read the weekly summary they should be aware of anything important going on
[08:17:09] I don't think weekly is a good interval for something like this
[08:17:21] At least for me it wouldn't work
[08:17:40] how so?
[08:17:46] I read weekly stuff whenever I find time, not when looking for important or urgent things
[08:17:59] Also, notification should be more than a week upfront
[08:19:54] We already have a tech list, don't we?
[09:02:00] Lydia_WMDE: Do you have 15 minutes before sprint start today for terms UI?
[09:24:49] Adrian_WMDE: I think there is wikitech-announce or something
[09:25:06] it's more general, but I think breaking changes of Wikidata stuff would be appropriate there
[09:26:09] or maybe https://lists.wikimedia.org/pipermail/mediawiki-api-announce/ is what I was thinking of
[13:24:10] hello
[13:26:16] Hello reiga
[13:27:11] I am looking for some guidelines/advice about historical data in Wikidata
[13:28:17] Historical data, like?
[13:29:23] like genealogy of kings and others
[13:29:43] Ah, and what kind of advice are you looking for? :)
[13:30:12] have a look at this entry: Q316908
[13:30:28] especially his relatives
[13:30:43] yup, I see
[13:31:11] I am wondering if that's the right way to do it
[13:31:54] It's not widely used, but it looks correct to me.
[13:32:07] would have thought recording these as Father/Mother + some kind of P1480 (sourcing circumstances) would be better
[13:32:13] Yeah.
[13:33:04] Do we have a related WikiProject? Otherwise, start a discussion in the project chat
[13:34:18] actually I don't know what you just said ^^ WikiProject? project chat?
[13:34:40] I am kind of new here :)
[13:37:09] Oh sorry, I thought you were pretty informed already. :)
[13:38:16] You can ask on https://www.wikidata.org/wiki/Wikidata:Project_chat for more opinions.
[13:38:29] hi everyone
[13:38:33] I've a question
[13:38:36] Hi Zeroth, please ask
[13:38:42] first:
[13:39:05] a property was recently approved, it's the ID of Uruguayan authors in a database
[13:39:10] autores.uy
[13:39:27] I want to know the best way to import the ID to a bunch of Wikidata entities
[13:39:40] is there any bot I can ask for this?
[13:39:42] Is there an easy overview of all the IDs?
[13:39:50] Like an XML file or something?
[13:40:12] the database can be exported as XML
[13:40:17] yes
[13:40:34] And there is no mapping yet between both IDs, I assume
[13:40:40] but the database has 7200 authors
[13:40:52] and Wikidata probably has 1000 or 2000 Uruguayan authors
[13:40:57] no, there is no mapping
[13:41:07] Sounds like a good candidate for https://tools.wmflabs.org/mix-n-match/
[13:41:38] Mix'n'match would be the tool to add the ID to the existing entities in Wikidata
[13:41:40] yes?
[13:42:00] Yep, it tries to match items first and users can approve that.
[13:42:09] perfect
[13:42:13] and... the second question is
[13:42:23] for the items that don't exist in Wikidata
[13:42:35] what should I use to create them?
[13:42:41] Mix'n'match :P
[13:42:53] Mix'n'match also creates items??
[13:43:18] There is a game mode: when I click on the button that indicates there is no item yet, it creates the item
[13:43:46] ohh..
[13:43:49] nice
[13:43:50] It's easy to process 7200 IDs with a bunch of people :)
[13:43:51] :)
[13:44:02] I recommend contacting Magnus on his talk page. https://www.wikidata.org/wiki/User_talk:Magnus_Manske
[13:44:08] I think I can get 5 people for the task
[13:44:34] excellent sjoerddebruin
[13:44:45] I'll check that software.
[13:44:51] sjoerddebruin: thanks for your help. will go and ask in the project chat
[13:45:01] reiga: np, hope you get responses
[13:45:35] yeah, I hope so too, as I have more :)
[13:46:20] sjoerddebruin, do you know how I can create a new set of items in Mix'n'match?
[13:46:35] I mean, a new catalog
[13:46:39] Magnus created a tool for that, Zeroth, but I can't find it anymore.
[13:46:52] oh
[13:46:54] ok
[13:47:45] documenting things is not one of his highest priorities -_-
[13:48:17] hehe
[13:48:19] I see
[13:48:29] I'll drop him a message
[13:48:31] thanks again
[13:49:51] Good luck!
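A rough sketch of the import preparation discussed above: turning the autores.uy XML export into a tab-separated file that a Mix'n'match catalog could be built from. The element and field names in the XML are assumptions, since the real export schema isn't shown here, and the exact column layout Mix'n'match expects for catalog imports should be confirmed with Magnus before relying on this.

```python
# Sketch: convert a hypothetical autores.uy XML export into a TSV for a
# Mix'n'match catalog import. The <author> element and its id/name/bio
# fields are assumptions about the export schema, not the real format.
import csv
import xml.etree.ElementTree as ET

def xml_to_tsv(xml_path: str, tsv_path: str) -> None:
    tree = ET.parse(xml_path)
    with open(tsv_path, "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out, delimiter="\t")
        for author in tree.getroot().iter("author"):   # hypothetical element name
            writer.writerow([
                author.get("id"),                      # external ID, becomes the property value
                author.findtext("name", default=""),   # entry name, matched against item labels
                author.findtext("bio", default=""),    # description, helps manual confirmation
            ])

xml_to_tsv("autores-uy-export.xml", "autores-uy-catalog.tsv")
```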
[14:08:59] hey all!
[14:09:08] ooh
[14:09:22] Hey
[14:10:21] if I want to get all translations (labels) between two languages for entities that have labels, what kind of entity query would I use?
[14:10:38] been using http://wiki.apertium.org/wiki/Wikidata to get countries, but I'd like something more general
[14:10:49] also, I'm guessing I should use a dump if that exists?
[14:16:26] Unhammer: at what kind of scale?
[14:17:26] was thinking ~tens of thousands of label pairs
[14:17:55] I'd recommend using a dump then, I think you'd get timeouts with the query service
[14:18:28] sounds likely :)
[14:18:32] See https://www.wikidata.org/wiki/Wikidata:Database_download :)
[14:18:50] spectei: hi, if you have questions please shoot
[14:19:17] sjoerddebruin, I just have the same questions as Unhammer :D
[14:19:45] What a coincidence. ;)
[14:20:53] hehe :)
[14:21:00] thanks sjoerddebruin
[14:21:11] No problem. :)
[14:21:40] Just waiting for Lydia_WMDE to add me to the loan list... ;)
[16:18:50] Unhammer: the query service might also work; it depends how complex the query is to run (I did a fairly simple query the other day that returned over 200,000 rows without timing out, but I get timeouts on more complicated queries that would only return a few results)
[16:40:54] aha
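A minimal sketch of the dump-based approach recommended above: each line of the latest-all.json dump is one entity's JSON document, so label pairs can be streamed out without holding the whole dump in memory. The language codes and output format here are arbitrary choices for illustration.

```python
# Sketch: stream the Wikidata JSON dump and emit label pairs for two
# languages. The dump is one giant JSON array with one entity per line;
# lines end with "," which we strip before parsing.
import gzip
import json

def label_pairs(dump_path, lang_a="en", lang_b="nn"):
    with gzip.open(dump_path, "rt", encoding="utf-8") as dump:
        for line in dump:
            line = line.rstrip().rstrip(",")
            if not line or line in ("[", "]"):
                continue  # skip the array brackets wrapping the dump
            entity = json.loads(line)
            labels = entity.get("labels", {})
            if lang_a in labels and lang_b in labels:
                yield labels[lang_a]["value"], labels[lang_b]["value"]

for a, b in label_pairs("latest-all.json.gz"):
    print(a + "\t" + b)
```

For smaller slices, the equivalent query-service approach would be a single SPARQL pattern over rdfs:label with two language filters, subject to the timeout behaviour described in the exchange above.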
[17:44:09] sjoerddebruin: ? :D
[17:44:20] For my helpdesk work here. :)
[17:44:28] ah :D
[17:46:32] Will I see you at Wikimania? :)
[17:54:20] * sjoerddebruin takes a sip out of his Wikidata mug
[18:45:30] aude: around?
[18:46:17] SMalyshev: yes?
[18:46:32] aude: hi! Wanted to ask you about https://gerrit.wikimedia.org/r/#/c/273780/
[18:46:58] ok
[18:47:04] not sure what is going on there (maybe because I don't know what Parsoid does)
[18:47:16] I mean, I know what Parsoid is, but not the build part
[18:47:40] I'll need to check with the Parsoid people, but it looks like they maintain a build
[18:47:43] https://github.com/wikimedia/mediawiki-services-parsoid-deploy
[18:47:48] since we probably can't run npm in production
[18:48:11] yeah, we have the deploy repo too. But I'm not sure how it relates to the GUI
[18:48:12] it's probably something Jenkins could generate
[18:48:34] SMalyshev: right now, the JavaScript is unminified, etc.
[18:48:56] the build would take care of stuff like that
[18:49:04] so the current model of deployment is like this: we have the deploy repo, which has Java code. We have the gui repo, which has the GUI and is included in the deploy repo
[18:49:17] but the branch in the deploy repo is production, not master
[18:49:30] and I cherry-pick the changes right now when deploying.
[18:49:36] oh, there is a deploy branch?
[18:49:48] in the gui repo, yes. It's called "production"
[18:49:52] ok
[18:50:13] https://github.com/wikimedia/wikidata-query-gui/tree/production
[18:50:24] this branch leaves out some files which production doesn't need
[18:50:41] then we could probably use that with the build results
[18:51:01] I'm not 100% happy with this setup, but it's the best I could think of so far
[18:51:04] instead of needing to cherry-pick
[18:51:12] it's OK, it's a start
[18:51:34] but I'm not sure how that is going to work with minified/unminified files, etc.
[18:51:39] and with tests
[18:51:42] what the build puts into the dist directory is what we would want
[18:51:55] but it could be put elsewhere or done differently
[18:52:11] what kind of tests?
[18:52:34] aude: CI tests. So I don't need those in the deployed version
[18:52:44] ok
[18:53:15] so the question is how I maintain the deployment version and the development version without going crazy
[18:54:09] I suppose the production branch could be automatically maintained with the results of the "dist" build
[18:54:37] we do this with Wikidata too, in a way
[18:55:09] aude: so how would that look? Would some process on CI commit into the production branch?
[18:55:17] yeah
[18:55:23] where is that hook implemented?
[18:55:34] didn't figure that out yet
[18:55:43] aha :)
[18:56:31] for Wikidata, we have https://github.com/wikimedia/operations-puppet/tree/1341508cec1bf9c1a69adf53b45a08a0dd21e738/modules/wikidatabuilder
[18:56:42] which is also not ideal but can work
[18:57:04] this looks like something rather complex
[18:57:17] I'd like to avoid that if possible
[18:57:21] not sure it needs to be
[18:57:45] it has cron, a bunch of SSH keys, etc... a lot of moving parts
[18:58:23] I'd like to make something simple
[18:58:30] yeah, me too :)
[18:59:33] maybe instead of making the gui production branch a submodule of deploy, just do some kind of export+build there. But I'm not sure how to run that export
[18:59:59] I can imagine just some bash script that wraps around grunt
[19:00:13] yeah, but how will that script be run?
[19:00:29] I imagine I could just run it manually, maybe...
[19:00:43] manually, or on Jenkins maybe
[19:00:43] the thing is, I also don't need the grunt stuff in production
[19:00:50] yeah
[19:00:50] I only need the result
[19:01:17] ok, have to go to another meeting...
[19:01:26] ok
[19:01:35] this helps... I had no idea there was a production branch
[19:01:42] shall poke some more
[19:01:48] but I'd like to discuss it more, maybe later today or tomorrow
[19:02:01] ok
[22:33:14] aude: https://gerrit.wikimedia.org/r/#/c/274305/ :)
[23:14:19] Krinkle: thanks!
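What a build-publishing script along the lines discussed above might look like: run the grunt build, then commit just the dist output to the gui repo's production branch instead of cherry-picking. The log suggests a bash wrapper around grunt; this sketch performs the same steps from Python. The "build" task name, the assumption that dist/ is untracked (so it survives the branch switch), and the commit message are all guesses, not the setup that was actually adopted.

```python
# Sketch of the build-and-publish idea from the discussion: build on
# master, then copy dist/ into the "production" branch and commit it.
import shutil
import subprocess
from pathlib import Path

REPO = Path("wikidata-query-gui")  # local checkout of the GUI repo

def run(*cmd: str) -> None:
    subprocess.run(cmd, cwd=REPO, check=True)

def publish() -> None:
    run("git", "checkout", "master")
    run("npm", "install")                 # build machine or CI only; npm never runs in production
    run("grunt", "build")                 # assumed task name; output lands in dist/
    run("git", "checkout", "production")  # the deploy branch mentioned above
    for item in (REPO / "dist").iterdir():
        target = REPO / item.name         # copy build results into the branch root
        if item.is_dir():
            shutil.copytree(item, target, dirs_exist_ok=True)
        else:
            shutil.copy2(item, target)
    run("git", "add", "-A")
    run("git", "commit", "-m", "Update build output")

if __name__ == "__main__":
    publish()
```

Run manually or from a Jenkins job, something like this would keep the production branch automatically maintained with the dist results, as aude suggests.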