[03:14:48] anyone familiar with the French Infobox_Entreprise based on Scribunto & Lua?
[03:33:17] anyone familiar with Scribunto & Lua?
[04:02:59] rick_: what do you want to know about it?
[04:52:09] jackmcbarn: want to know if it's a good idea to follow the French use of Scribunto & Lua in their Infobox_Entreprise for the English Wikipedia
[04:53:26] ...or if other options should be evaluated?
[04:55:02] would like to convert https://en.wikipedia.org/wiki/Template:Infobox_company so it draws info from Wikidata
[11:19:34] DanielK_WMDE: https://phabricator.wikimedia.org/T157013 Do you think that's worth a shot?
[11:19:40] I have a hard time profiling such things
[11:20:04] but I think that might gain us a fair bit of performance for dumping and potentially other things
[11:30:38] hoo: yes, I think so. Talk to SMalyshev, he's the expert on this, and did a similar optimization for our RDF serialization.
[11:31:52] I also bumped our dumpers from 4 to 5 shards
[11:32:15] dumping is getting slower all the time (well, data grows by 0.5-1% per week, so that's expected)
[12:02:31] Is there a hangout for the lightning talks?
[12:02:42] Lydia_WMDE? Auregann_WMDE?
[12:03:02] Thiemo_WMDE: yes
[12:09:32] Auregann_WMDE: could you have a look at https://www.wikidata.org/wiki/Wikidata:Project_chat#Constraint_violation_warning ? I know some work had been done on integrating constraint violations better, but I don't know what exactly was planned
[12:11:20] Wasn't it just a student project that was left unfinished?
[12:11:53] nikki: thanks, I'll have a look
[12:54:57] nikki sjoerddebruin It's not left unfinished, we currently have a student working on the topic :)
[12:56:26] :D
[12:58:19] nikki: the student is currently working on building out the special page that does the checks
[12:58:31] last I heard, I'll get to see some results next week
[12:58:47] and then she'll work on making the issues show up in items right next to the problematic statement
[12:59:12] Suggester is also getting attention now, I see.
[12:59:17] yes
[12:59:29] we're getting our act together a bit :D
[13:51:36] How do I create a profile for myself?
[15:15:05] Aleksey_WMDE: https://quarry.wmflabs.org/
[15:30:42] hoo: Hi, I was wondering if I could have you look at updated Lua code for statement tracking: https://github.com/hall1467/mediawiki-extensions-Wikibase/blob/master/client/includes/DataAccess/Scribunto/mw.wikibase.entity.lua
[15:31:40] This should resolve the issue with pairs that you had identified last week
[15:32:07] ah, cool
[15:32:10] did you test it?
[15:35:37] My collaborator Kevin did. Seems to work as it should now
[15:37:47] The tests pass with the changes made, so that looks good (although they don't specifically test pairs())
[15:38:26] It would be nice if you could add some regression tests for that
[15:38:32] hall1467: ^
[15:40:30] Okay, good. He created a test for pairs, so I think that will work as well. We can do the regression tests for you
[15:41:14] ./client/tests/phpunit/includes/DataAccess/Scribunto/LuaWikibaseLibraryTests.lua
[15:41:18] is the place for that :)
[15:44:00] Sounds good. Thanks for pointing me there. To clarify, the regression tests should just test pairs, correct?
[15:56:19] hoo: Any insight into what the tests might look like would be greatly appreciated. Thanks :)
[15:56:44] hall1467: Well, take one of the existing example entities (or more than one)
[15:56:49] and use pairs on them
[15:57:04] make sure the results are what they used to be
[16:05:20] Okay, I'll work on doing that. Thanks for clarifying
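A rough sketch of the kind of regression check hoo describes above: run pairs() over one of the example entities and compare the result against the keys it used to yield. The entity ID, the expected key list, and the function shape below are placeholders, not the actual harness in LuaWikibaseLibraryTests.lua.

    -- Sketch of a pairs() regression check for the Scribunto entity binding.
    -- 'Q32487' and the expected key list are placeholders; a real test would use
    -- one of the example entities set up by Scribunto_LuaWikibaseLibraryTest.php.
    local function testEntityPairs()
        local entity = mw.wikibase.getEntity( 'Q32487' )

        -- Collect every top-level key that pairs() exposes on the entity table.
        local seenKeys = {}
        for key in pairs( entity ) do
            seenKeys[#seenKeys + 1] = key
        end
        table.sort( seenKeys )

        -- Snapshot of the keys pairs() yielded before the change (placeholder values).
        local expectedKeys = { 'claims', 'descriptions', 'id', 'labels', 'schemaVersion', 'sitelinks' }

        if #seenKeys ~= #expectedKeys then
            error( 'pairs() returned ' .. #seenKeys .. ' keys, expected ' .. #expectedKeys )
        end
        for i, expected in ipairs( expectedKeys ) do
            if seenKeys[i] ~= expected then
                error( 'unexpected key from pairs(): ' .. tostring( seenKeys[i] ) )
            end
        end

        return true
    end

The point is simply to freeze the old pairs() behaviour so a later change to mw.wikibase.entity.lua cannot silently alter it.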
[19:59:33] hoo: I'm having a bit of trouble running the tests in the file you mentioned earlier today. What does the command look like? I need phpunit, right? Thanks
[20:00:36] Specifically this file: https://github.com/hall1467/mediawiki-extensions-Wikibase/blob/master/client/tests/phpunit/includes/DataAccess/Scribunto/LuaWikibaseLibraryTests.lua
[20:08:33] hall1467: Yes
[20:08:40] They are being run in client/tests/phpunit/includes/DataAccess/Scribunto/Scribunto_LuaWikibaseLibraryTest.php
[20:08:57] so running that with MediaWiki's phpunit runner will give you what you need
[20:44:53] hoo: Thanks! Working to get the tests running
[21:15:37] https://www.wikidata.org/w/index.php?title=Q937857&action=history
[21:15:45] Should this be semi-protected?
[21:29:12] hi
[21:29:17] anybody here?
[21:29:56] Hi doctaxon
[21:29:58] does anybody know how I can get the language list of a Wikidata item using SQL?
[21:30:13] from item Q999999
[21:30:47] The list of defined labels?
[21:31:06] Or sitelinks to different Wikipedias?
[21:31:08] the list of links to the wikis
[21:31:12] yes
[21:31:32] With SQL or SPARQL?
[21:31:38] SQL
[21:31:42] not SPARQL
[21:32:24] Then I don't know, sorry :S
[21:32:31] I searched the tables up and down and couldn't find anything
[21:35:14] abian: do you know how to get it using SPARQL?
[21:41:04] doctaxon: For example, you can simply write DESCRIBE wd:Q999999 to get all the data, including all the sitelinks
[21:41:29] But I think SPARQL isn't the easiest way of retrieving info from a single item
[21:42:10] You can use, for example, https://www.wikidata.org/wiki/Special:EntityData/Q999999.json for the same purpose
[21:42:26] Or https://www.wikidata.org/wiki/Special:EntityData/Q999999.rdf
[21:51:43] hoo: for the phpunit runner to work, do I need to have a MediaWiki instance installed on my machine, or is there an easier way?
[21:52:27] hall1467: Some of them are integration tests
[21:52:39] so you will need a MediaWiki+Wikibase installation, I'm afraid
[21:53:07] hoo: how can I get a list of the Wikipedias linked to a Wikidata item by SQL query?
[21:53:22] for example Q999999
[21:54:03] I couldn't find such a table in the wikidatawiki database
[21:54:24] doctaxon: SELECT ips_site_id, ips_site_page FROM wb_items_per_site WHERE ips_item_id = '1234';
[21:54:35] wb_items_per_site is the table in question :)
[21:56:49] Thanks hoo, I'll work to do the installations
[22:00:39] hoo, that's it: select page_title, ips_site_id, ips_site_page from page, wb_items_per_site where ips_item_id = page_id and page_title = 'Q999999' and page_namespace = 0;
[22:01:02] cool thanks
[22:01:44] You're welcome :)
[22:01:48] hoo, but it's better to get the label name too, where can I find that?
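The label question ties back to the use case at the top of the log (an infobox drawing from Wikidata): inside a Scribunto module, labels and sitelinks come straight off the entity table, with no SQL at all. A minimal sketch, assuming the standard mw.wikibase client bindings and the entity serialization they expose to Lua; Q999999 is just the item from the discussion.

    -- Minimal sketch: read a label and the sitelinks from Lua instead of SQL.
    -- Assumes the Scribunto Wikibase client library (mw.wikibase) is available.
    local p = {}

    function p.sitelinksWithLabel()
        local entity = mw.wikibase.getEntity( 'Q999999' )
        if not entity then
            return 'no such item'
        end

        -- entity.labels is keyed by language code, e.g. entity.labels.en.value
        local label = entity.labels and entity.labels.en and entity.labels.en.value
            or '(no English label)'

        -- entity.sitelinks is keyed by global site id (enwiki, dewiki, ...)
        local lines = { 'Label: ' .. label }
        for siteId, sitelink in pairs( entity.sitelinks or {} ) do
            lines[#lines + 1] = siteId .. ': ' .. sitelink.title
        end

        return table.concat( lines, '\n' )
    end

    return p

If SQL is really required, labels lived in the wb_terms table at the time (one row per language with term_type = 'label'), but going through mw.wikibase or the Special:EntityData URLs mentioned above avoids depending on internal table layout.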
[22:39:49] I have a question about notability. Let's say that there is a really obscure band which releases a really obscure album, but they do publish it to Bandcamp. Can this band and album be added to Wikidata?
[23:19:56] rodarmor: I would generally expect something like that to not be considered notable, unless it's notable enough on a particular Wikipedia (e.g. bands which sing in smaller languages tend to be quite notable on the Wikipedias for those languages)
[23:21:13] nikki: Right, that sounds reasonable to me. However, from https://www.wikidata.org/wiki/Wikidata:Notability see #2
[23:21:56] It seems that this is a "clearly identifiable conceptual or material entity" and "can be described using serious and publicly available references"
[23:22:27] And note that the guidelines say it only needs to meet one of them.
[23:23:43] I think you would have trouble arguing that a band's own Bandcamp page counts as a serious reference; if anything self-published counted, anyone and anything with a website/Facebook page/etc. would be notable
[23:25:07] could someone explain to this person why it is fine to add ISNI identifiers even if the ISNI portal blocks resolutions coming from Wikidata? https://www.wikidata.org/wiki/User_talk:MovieFex#ISNI_identifiers
[23:25:58] so it's not just me having problems with the ISNI links...
[23:26:30] nikki: Sure, that's true. But I think the data would meet MusicBrainz guidelines for inclusion, which would qualify as a serious reference.
[23:27:10] nikki: And just to be clear, I'm not trying to be difficult, I'm just having a hard time understanding the policy as written.
[23:27:16] nikki: yeah, it's really annoying (and dumb, as it is quite easy to circumvent)
[23:29:26] pintoch: and bizarre, since I'm using *more* of their resources by having to redo the request
[23:29:37] exactly
[23:39:55] rodarmor: understood :) I think it's hard for people to understand what is/isn't notable because people don't actually agree on what's notable, e.g. some people don't accept databases where anyone can add themselves as a serious reference, while others do
[23:41:08] so the criteria don't get applied very consistently :/
[23:44:17] nikki: Yeah, it's really tricky. Thinking at a higher level, what are the Wikidata inclusion guidelines trying to accomplish? Are they about avoiding wasting computing resources? Or avoiding wasting editor resources?
[23:46:57] Personally, I think I lean more toward inclusion, at least for Wikidata. If something can be objectively described and refers to a concrete entity, I'm not sure notability is important. However, I don't make the rules, so all that is mostly moot :)
[23:47:31] I think they're trying to find a balance between having lots of data and the work needed to maintain it (e.g. keep it free of vandalism, check that it's correct, update it as things change)
[23:48:44] Yeah, I can understand that.
[23:48:48] Trade-offs are hard!
[23:49:08] anyway I need to go to bed now :)
[23:49:10] night
[23:49:13] 'night
[23:49:19] Thanks for the answers!