[01:11:22] with existent databases for each wiki
[01:11:32] is it possible to break it into a master/slave system?
[01:16:22] biberao: ?
[01:16:31] if you have the proper passwords...
[01:17:08] no i mean
[01:17:14] i have like 5 wikis
[01:17:33] is it possible to have those 5 wikis converted to a master/slave setup?
[01:18:21] or should it have been done from scratch?
[01:18:55] what do you mean by master/slave?
[01:19:04] what's a "slave wiki"?
[01:19:24] i meant database
[01:19:28] master and slave database
[01:19:52] and how does it relate to those 5 wikis?
[01:20:04] you can have a slave db for each of those 5 wikis
[01:20:15] 5 master dbs + 5 slave dbs
[01:20:19] (or more)
[01:20:37] you just set up mysql replication
[01:20:43] can only be like
[01:20:46] and then tell mediawiki about the additional $wgDBservers
[01:20:47] 1 master 5 slaves
[01:20:52] or 2 servers 5 slaves
[01:20:54] and so on?
[01:21:00] you have 5 wikis
[01:21:09] yes
[01:21:10] each wiki has its own db
[01:21:12] yes
[01:21:22] sure, you can have 1 master server, holding the 5 dbs
[01:21:31] not one server per db
[01:21:48] but you can't have a wiki only being a slave
[01:22:01] since each wiki would have content different to the others
[01:22:13] yes
[01:22:14] and the slave contains exactly* the same as the master
[01:22:47] I'm not sure if you are expressing things wrong, perhaps trying to use the wrong tool...
[01:22:56] i'm expressing things wrong
[01:23:19] well, maybe I'm being a bit obtuse, too
[01:24:22] thanks
[01:24:55] maybe you can try phrasing it differently?
[01:25:02] what is your high-level goal?
[01:25:33] i was just trying to understand
[01:25:39] a good replica system for 5 wikis
[01:26:07] ok
[01:26:16] you actually replicate db servers
[01:26:21] so you have the 5 dbs in one master
[01:26:29] and replicate to another
[01:26:55] standard mysql (or your db flavor) replication
[01:26:56] so what is a slave then?
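The $wgDBservers configuration mentioned above might look like the LocalSettings.php fragment below. This is a sketch, not a tested setup: the host names are placeholders, and it assumes MySQL replication between the two servers is already running. With `load => 0` on the first (master) entry, MediaWiki sends all writes there but directs reads to the replica.

```php
# LocalSettings.php -- hypothetical hosts; repeat per wiki, since each wiki
# has its own db (the same pair of servers can hold all 5 databases).
$wgDBservers = [
    [
        'host'     => 'db-master.example.org',   // writes always go here
        'dbname'   => $wgDBname,
        'user'     => $wgDBuser,
        'password' => $wgDBpassword,
        'type'     => 'mysql',
        'load'     => 0,                         // 0 = don't use for reads
    ],
    [
        'host'     => 'db-replica.example.org',  // replication slave
        'dbname'   => $wgDBname,
        'user'     => $wgDBuser,
        'password' => $wgDBpassword,
        'type'     => 'mysql',
        'load'     => 1,                         // takes the read traffic
    ],
];
```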
[01:27:13] additionally, mediawiki can ask certain queries to the slave
[01:27:31] so that results in less work for the master
[01:27:56] the slave is a db server that keeps a copy of the data in the master
[01:28:10] ok cool, thanks
[01:28:34] https://en.wikipedia.org/wiki/Replication_(computing)#DATABASE
[01:29:06] ok thanks
[01:29:49] you're welcome
[01:30:17] going to bed
[01:30:36] g'night
[12:26:24] Does anybody have any idea how to edit images on upload? I need to add a watermark with ImageMagick
[12:26:56] and limit dimensions
[16:25:10] hi
[16:25:22] i can see a wiki here has 120k pages
[16:25:42] how come, if the cargo tables have only 300 pages
[16:25:45] and nothing else added?
[17:23:54] why the hell does the db array work for some wikis and not others
[17:26:09] :|
[17:36:45] Platonides: here?
[18:09:12] Hello
[18:10:33] Is there anyone online
[18:11:32] hi
[18:11:40] anuhello: do you need something?
[18:13:32] lol
[18:15:14] if i choose the mediawiki install default "my_wiki" for multiple wikis and don't specify a different prefix
[18:15:19] will it work?
[18:15:30] no
[18:16:01] well, no if they're all using the same db server, yes if all of them are on unique db servers
[18:16:15] it's the same db server
[18:16:18] different databases
[18:17:47] weird
[18:17:48] :|
[20:02:58] why is the "mediawiki api javascript example" search so useless
[20:03:10] i'd like to use javascript outside of a wiki to make an api query
[20:05:19] sveta: Search for a CORS tutorial instead?
[20:28:18] is there an api to read json formatted data from a wiki page?
[20:29:25] example https://test.wikipedia.org/wiki/User:Gryllida/foo.json
[20:34:03] https://test.wikipedia.org/w/api.php?action=parse&page=User:Gryllida/foo.json seems a bit useless
[20:35:05] :|
[20:35:25] https://duckduckgo.com/lite/?q=mediawiki%20json%20page%20content%20api absolutely useless too
[20:35:32] what's next, "i'm not a computer, i'm a human being!"
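The ImageMagick half of the watermark question above could be sketched as two shell commands; the file names, the 1024px limit, and the 30% dissolve are illustrative, and hooking this into MediaWiki's upload pipeline (e.g. from an upload hook) is a separate step not shown here.

```shell
# Shrink only images larger than 1024px in either dimension
# (the trailing '>' tells ImageMagick to never enlarge)
convert upload.jpg -resize '1024x1024>' resized.jpg

# Blend watermark.png into the bottom-right corner at 30% opacity
composite -dissolve 30 -gravity southeast watermark.png resized.jpg watermarked.jpg
```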
[20:37:33] sveta: https://test.wikipedia.org/w/index.php?title=User:Gryllida/foo.json&action=raw
[20:37:36] sveta: action=query&prop=revisions&rvprop=content will return the page text (as a string, so you'll have to parse the JSON again in your code)
[20:38:05] i don't think there is anything that will return the parsed JSON directly in the response
[20:38:07] that's super excellent, thank you, legoktm and MatmaRex
[20:38:23] MatmaRex, i presume you saw what legoktm said, it's raw parsed json
[20:58:11] hey
[20:58:24] any solution for having a mediawiki behind cloudflare display real ips?
[21:03:40] biberao: there are some notes at https://www.mediawiki.org/wiki/Manual:CloudFlare
[21:04:05] it's some years old, but a quick skim suggests the advice should still apply
[21:06:04] cloudflare also seems to have published a guide at https://support.cloudflare.com/hc/en-us/articles/200170806-How-do-I-restore-original-visitor-IP-with-MediaWiki-
[21:06:12] i tried the latter
[21:06:21] I don't particularly recommend it though, as it suggests core modifications which are fragile and will revert on upgrade
[21:06:22] the last one i mean
[21:06:35] after all i didn't
[21:06:40] i just did the nginx fix
[21:07:40] also
[21:07:49] why on some wikis
[21:07:57] does php hit 70% cpu when others don't :|
[21:15:35] biberao: the easiest way would be to do in LocalSettings.php: if ( isset( $_SERVER['HTTP_CF_CONNECTING_IP'] ) && request_comes_from_cloudflare() ) { $_SERVER['REMOTE_ADDR'] = $_SERVER['HTTP_CF_CONNECTING_IP']; }
[21:16:01] the implementation of request_comes_from_cloudflare() is left as an exercise to the reader / cloudflare support :P
[21:18:33] they seem to provide a client certificate: https://support.cloudflare.com/hc/en-us/articles/204899617-Authenticated-Origin-Pulls
[21:19:32] another option would be to filter on https://www.cloudflare.com/ips/
[21:22:39] what is this request_comes_from_cloudflare() ?
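The action=query&prop=revisions&rvprop=content approach suggested above can be sketched in Python; the page title and wiki URL are just the examples from the discussion, and the response shape assumes `formatversion=2`. As noted in the log, the API returns the page text as a string, so the JSON has to be parsed a second time in your own code.

```python
import json
import urllib.parse
import urllib.request

API = "https://test.wikipedia.org/w/api.php"

def content_url(title: str) -> str:
    """Build an action=query URL returning the latest wikitext of a page as JSON."""
    params = {
        "action": "query",
        "prop": "revisions",
        "rvprop": "content",
        "rvslots": "main",
        "titles": title,
        "format": "json",
        "formatversion": "2",
    }
    return API + "?" + urllib.parse.urlencode(params)

def fetch_json_page(title: str) -> dict:
    """Fetch a .json page and parse its content (a string in the API response) again."""
    with urllib.request.urlopen(content_url(title)) as resp:
        data = json.load(resp)
    text = data["query"]["pages"][0]["revisions"][0]["slots"]["main"]["content"]
    return json.loads(text)
```

For example, `fetch_json_page("User:Gryllida/foo.json")` would return the page's JSON as a dict. From browser JavaScript outside the wiki, the same URL should work cross-origin for anonymous requests if `origin=*` is added to the query string.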
[21:24:33] pls
[21:24:50] I am documenting another wait
[21:24:54] ah
[21:24:54] thanks
[21:24:55] :D
[21:25:02] *another way, please wait
[21:26:02] np thanks
[21:35:29] biberao: see https://www.mediawiki.org/wiki/Manual:CloudFlare#Configure_CloudFlare_IP_addresses_directly_on_MediaWiki
[21:35:38] it's completely untested, but it should work fine
[21:35:46] much easier than the other solutions listed there
[21:36:06] so it requires squid?
[21:41:05] what about using xff?
[21:41:45] biberao: 1) no, the variable names are misnomers. squid isn't required. 2) that *is* using XFF
[21:41:55] ah
[21:41:59] misnomers?
[21:42:57] mis·no·mer NOUN a wrong or inaccurate name or designation.
[21:43:17] ah
[21:43:54] thx
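The request_comes_from_cloudflare() check discussed above amounts to testing whether the connecting address falls inside one of Cloudflare's published ranges (the "filter on https://www.cloudflare.com/ips/" option). The logic, sketched in Python with two example ranges; the live list at that URL is authoritative and changes over time, so it should not be hard-coded in a real deployment:

```python
import ipaddress

# Example ranges only -- refresh from www.cloudflare.com/ips, don't hard-code.
CLOUDFLARE_RANGES = [
    ipaddress.ip_network(n)
    for n in ("173.245.48.0/20", "103.21.244.0/22", "2400:cb00::/32")
]

def request_comes_from_cloudflare(remote_addr: str) -> bool:
    """True if remote_addr (the actual TCP peer) is in a known Cloudflare range.

    Only when this holds is it safe to trust the CF-Connecting-IP header
    and overwrite REMOTE_ADDR with it, as in the LocalSettings.php snippet
    from the log.
    """
    try:
        ip = ipaddress.ip_address(remote_addr)
    except ValueError:
        return False  # not an IP address at all
    return any(ip in net for net in CLOUDFLARE_RANGES)
```

The guard matters: without it, any client could spoof its apparent IP simply by sending a CF-Connecting-IP header directly to the origin server.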