[00:05:36] is it my impression or... $ grep -i page_coun MysqlUpdater.php
[00:05:36] [ 'dropField', 'page', 'page_counter', 'patch-drop-page_counter.sql' ],
[00:06:16] $ find | grep patch-drop-page_counter
[00:06:16] ./maintenance/mssql/archives/patch-drop-page_counter.sql
[00:06:16] ./maintenance/sqlite/archives/patch-drop-page_counter.sql
[00:06:16] ./maintenance/archives/patch-drop-page_counter.sql
[00:06:35] is it missing for MySQL, or does it use the one in archives/?
[00:09:13] hmm, nevermind
[01:21:47] You can use just `find` like that? I'm skeptical.
[01:21:54] `find .`, maybe.
[06:01:41] Hi. I'm having trouble getting the Math extension to work on my wiki. Unfortunately, the manual isn't helpful here. This is my error: "Failed to parse (Missing texvc executable)"
[06:01:59] Can someone help me?
[06:01:59] Hi TridenRake, just ask! There is no need to ask if you can ask.
[06:07:51] Okay... Now I tried using Mathoid. I'm getting a database error. Anyone?
[06:08:27] Notice: Uncommitted DB writes (transaction from DatabaseBase::query (WikiPage::pageData))
[06:30:06] Okay, I ran update.php now and that fixed the error. But equations are still not showing up... :/
[06:38:51] TridenRake: Try directly contacting the extension maintainer?
[06:39:30] Niharika: A Phabricator report? Okay, I will do that. :)
[06:39:54] TridenRake: Sounds good!
[07:45:29] Hey there, please check this page: https://de.wikipedia.org/wiki/Strumaresektion It seems that the wiki syntax is not parsed correctly.
[07:46:56] Okay, the syntax was a bit strange; fixed it :)
[13:26:17] Hi all! I am Abhinand and I have been selected for GSoC '16. I am working on [ https://phabricator.wikimedia.org/T128827 ] as my project.
[13:28:25] As I am in my community bonding period, I have to request a project in Phabricator. As we haven't yet decided on a name for the project, will I be able to request it under a temporary name and change it afterwards?
[13:35:53] abhinand_: https://www.mediawiki.org/wiki/Phabricator/Creating_and_renaming_projects
[13:40:41] OH-: Thanks :)
[13:46:06] abhinand_: Please check first whether there's an existing project name that applies and, if so, whether you can use it.
[13:46:59] abhinand_: Ah, that's a new extension, so your requested name should be something like extensions-Page-Notifications.
[13:50:10] Niharika: I have discussed this with my mentor and decided to give it a temporary name like 'UntitledNotificationExtension' and change it afterwards.
[13:51:04] abhinand_: I am not sure that's a possibility. I don't see a problem with picking a name now vs. later. You should ask andre about it when he's online.
[13:51:13] (Not likely on a weekend.)
[13:52:40] I have asked him but didn't get a reply. I will check with him once again. Thanks ;)
[14:08:21] The Project Formerly Known As Prince
[14:19:56] o/ saper.
[15:01:53] hello, I've installed MediaWiki using "git clone" over HTTPS:
[15:02:48] git clone https://gerrit.wikimedia.org/r/p/mediawiki/core.git
[15:03:04] but how do I check that everything is okay?
[15:58:16] hello. I recently found out that redirect pages can contain more links than just the target to redirect to. There is no way to tell from the page and pagelinks tables which link represents the actual redirect, is there?
[16:00:05] adrian_1908: redirects are stored in the aptly-named "redirect" table
[16:00:58] Skizzerz: yes, but according to MediaWiki this table only started to record entries in 2007. If important redirects for popular topics are missing from it, then that's not very useful.
[16:01:37] (I mean for Wikipedia)
[16:01:47] sorry, I know wiki != Wikipedia
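A minimal sketch of the API route suggested in the replies below, assuming Python with the requests library and the public en.wikipedia.org endpoint: asking action=query to resolve redirects happens server-side against the live site, so it does not depend on how complete your copy of the dump tables is.

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

# Ask the API to resolve redirects for a batch of titles. The "redirects"
# array in the response maps each redirecting title to its actual target.
params = {
    "action": "query",
    "titles": "U.S.|UK",
    "redirects": 1,
    "format": "json",
}
data = requests.get(API, params=params).json()
for r in data["query"].get("redirects", []):
    print(r["from"], "->", r["to"])
```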
[16:03:34] there is no way to tell whether a link is a redirect or not using solely the pagelinks table, no. You could join page to text where page_is_redirect=1 and then parse the #REDIRECT link yourself, or you could use the API, as I'm sure that offers some way to enumerate redirects
[16:04:43] that said, 2007 was a long time ago; it seems unlikely that there are very many redirects that are not stored in the redirect table. Keep in mind that if any of those redirect pages were edited in the past 9 years, the act of editing them will have inserted the record
[16:05:37] it is very likely that the first link in the pagelinks table for a redirect page is the redirect, but that isn't a guarantee
[16:07:15] Yeah, I had similar thoughts. "Very likely" isn't good enough, though. Imagine missing something like U.S. -> United_States_of_America because nobody had a need to touch it.
[16:08:30] you've never actually described what you're trying to accomplish
[16:09:38] Create a map of article interlinks in the main namespace, but merging redirects into it so they behave like direct links.
[16:10:31] I'm not working with a database directly, but rather with the SQL dumps. I'm wondering if using the XML dump instead would give me that information.
[16:13:55] I reckon a page title can be derived from a link on a page by a specific pattern, via some PHP function inside MediaWiki, right?
[16:17:57] Anyway, gotta be going now, bye!
[18:23:53] Hello. Does anyone know how I can query this wikitext https://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&format=jsonfm&titles=Main%20Page without these weird \n and \u00xx escapes?
[19:01:31] ElGatoSaez: \n is a newline
[19:01:35] none of that is weird
[19:01:43] I don't like that
[19:01:54] a JSON parser will transform it into the correct characters
[19:02:24] ok
[19:02:29] (e.g. the parser will turn the string "\n" into an actual newline character)
[19:02:42] since you need that anyway just to extract the data, you'll be fine
[19:02:50] ok
[19:02:58] and the weird \u00xx stuff?
[19:03:06] how can I turn them into á and é?
[19:03:11] the parser will do that too
[19:03:58] ok
[21:00:05] /msg NickServ identify bhyrava
[21:01:59] Ouch. :P
[21:19:21] So, there's a "most linked-to pages" report at Special:MostLinkedPages, but no equivalent for the _least_ linked-to. There's an "orphans" concept, but if two pages link to each other and otherwise nothing links to them, they don't count as orphans. How can I find these islands?
[21:32:09] they could be queried from SQL
[21:33:16] also note it's probably a long list
[21:42:44] Platonides: That's kinda the thing: our instance is very small (531 content pages), but it still feels like we have a lot of very poorly connected stuff, including some actual islands. We've been very good at chasing down orphans (since there's a report for that, and you can bribe people with alcohol), but now we're at the second level of orphan-chasing and the difficulty just went way up.
[21:43:07] What would that SQL query look like?
[22:04:20] myself: SELECT page_namespace, page_title, COUNT(*) AS count FROM page JOIN pagelinks ON (pl_namespace=page_namespace AND page_title=pl_title) WHERE page_namespace=0 GROUP BY page_id ORDER BY count ASC LIMIT 10;
[22:04:35] remove "page_namespace=0" if you are interested in all namespaces
[22:04:46] tweak the limit to your preference
[22:11:07] awesome, thank you!
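One caveat worth noting for anyone reusing that query: the inner join means pages with zero incoming links never appear in the output at all (those are the classic orphans, which the wiki's orphans report already covers). A rough API-based sketch that ranks pages by inbound links and also surfaces the zero-link cases, assuming Python with the requests library; the wiki URL is a placeholder:

```python
import requests

API = "https://wiki.example.org/w/api.php"  # placeholder; point at your wiki

def all_pages(session):
    # Enumerate every main-namespace page via list=allpages,
    # following API continuation tokens.
    params = {"action": "query", "list": "allpages", "apnamespace": 0,
              "aplimit": "max", "format": "json"}
    while True:
        data = session.get(API, params=params).json()
        yield from (p["title"] for p in data["query"]["allpages"])
        if "continue" not in data:
            break
        params.update(data["continue"])

def inbound_count(session, title):
    # Count pages linking to `title` (list=backlinks). One batch
    # (up to 500) is plenty for ranking the *least* linked-to pages.
    params = {"action": "query", "list": "backlinks", "bltitle": title,
              "bllimit": "max", "format": "json"}
    data = session.get(API, params=params).json()
    return len(data["query"].get("backlinks", []))

s = requests.Session()
ranked = sorted((inbound_count(s, t), t) for t in all_pages(s))
for n, title in ranked[:20]:
    print(n, title)
```

Mutually-linked pairs that nothing else points at show up here with a count of 1, which is exactly the second-level orphan case described above. This is one request per page, so it is workable for a 531-page wiki but far too slow for a large one; there the SQL route is the right choice.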
[22:12:03] you are welcome
[22:12:45] I'm one of the last people who should ever be let near SQL, but I'll pass that along to our more-savvy folks and bring them more wine; that should do the trick.
[22:13:47] Now I just need a way to send a glass to folks on the internet who are super helpful...
[22:15:49] :)
[23:23:24] Is there a way to let normal users upload JavaScript?
[23:23:32] but in a secure way
[23:23:47] (i.e. like sites.google.com does, by putting it in a separate iframe)
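For the sandboxed-iframe idea the question alludes to, the usual pattern is to host user scripts on a separate domain and embed them with the HTML5 sandbox attribute. A hypothetical sketch of generating such an embed; the helper name and URLs are invented for illustration, and this is not a built-in MediaWiki feature:

```python
from html import escape

def embed_user_script(script_page_url: str) -> str:
    # `allow-scripts` without `allow-same-origin` gives the frame a unique
    # opaque origin: the user's script runs, but it cannot read the
    # embedding wiki's cookies or DOM. For the isolation to hold, the user
    # content must actually be served from a separate domain.
    return (f'<iframe sandbox="allow-scripts" '
            f'src="{escape(script_page_url, quote=True)}" '
            f'width="400" height="300"></iframe>')

print(embed_user_script("https://usercontent.example.org/gadget.html"))
```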