[06:00:05] 10DBA, 10MediaWiki-Database, 10WikimediaEditorTasks, 10Patch-For-Review, 10Reading-Infrastructure-Team-Backlog (Kanban): Choose DB/Cluster for WikimediaEditorTasks tables - https://phabricator.wikimedia.org/T218302 (10Marostegui) >>! In T218302#5060387, @Mholloway wrote: > The tables are created and the... [06:18:11] 10DBA, 10MediaWiki-Cache, 10Patch-For-Review, 10Performance-Team (Radar), 10User-Marostegui: Replace parsercache keys to something more meaningful on db-XXXX.php - https://phabricator.wikimedia.org/T210725 (10Marostegui) I have merged and deployed the above change. The first time I have browsed enwiki th... [07:57:35] 10DBA, 10MediaWiki-Cache, 10Patch-For-Review, 10Performance-Team (Radar), 10User-Marostegui: Replace parsercache keys to something more meaningful on db-XXXX.php - https://phabricator.wikimedia.org/T210725 (10aaron) They don't seem related nor are actually thrown exceptions (just warnings caught by MWExc... [08:17:13] 10DBA, 10MediaWiki-Cache, 10Patch-For-Review, 10Performance-Team (Radar), 10User-Marostegui: Replace parsercache keys to something more meaningful on db-XXXX.php - https://phabricator.wikimedia.org/T210725 (10Marostegui) Thanks @aaron! I am going to keep browsing codfw during today, but so far it doesn't... [08:27:51] 10DBA, 10Goal, 10Patch-For-Review: Implement database binary backups into the production infrastructure - https://phabricator.wikimedia.org/T206203 (10jcrespo) ` *************************** 1. row *************************** id: 970 name: snapshot.s1.2019-03-26--20-00-01 status: finished... [08:31:17] dumps, on the other hand, seem not to have executed [08:35:05] PermissionError: [Errno 13] Permission denied: '/var/log/mariadb-backups/backups.log' [08:35:13] heh [08:42:12] https://mysqlserverteam.com/the-bind-address-option-now-supports-multiple-addresses/ [08:42:15] nice [08:43:54] ok to go with https://gerrit.wikimedia.org/r/499163 ? [08:44:08] yep!
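The PermissionError above is a Python traceback from the backup tooling failing to open its log file. A hypothetical sketch of a more defensive pattern (the function name and fallback behaviour are assumptions for illustration, not the actual script's code):

```python
import sys


def open_log(path):
    """Try to open a log file for appending; fall back to stderr.

    If the process lacks permission on `path` (or the path is
    otherwise unusable), logging degrades to stderr instead of
    aborting the whole backup run with a traceback.
    """
    try:
        return open(path, "a")
    except OSError as exc:  # PermissionError is a subclass of OSError
        print(f"cannot open {path}: {exc}; logging to stderr", file=sys.stderr)
        return sys.stderr


if __name__ == "__main__":
    # On a host with wrong ownership on the log directory this
    # prints a warning and keeps going rather than crashing.
    log = open_log("/var/log/mariadb-backups/backups.log")
    print("dump starting", file=log)
```

With something like this, a bad owner or mode on /var/log/mariadb-backups/backups.log would cost only the log file, not the dump itself.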
[08:44:42] if there is any alias they can change it later [08:46:38] who can change it? [08:47:04] whoever uses the aliases [08:48:20] I think I am not getting your question [08:49:48] I have no questions [08:50:26] ok then [08:52:26] ah [08:52:35] After reading it 10 times I get the meaning [08:52:48] my bad [08:56:57] thanks for restarting sanitariums yesterday [08:57:08] are you using snapshot_test_T210292 screen on cumin1001? [08:57:13] nop [08:57:22] I am going to kill it [08:57:26] that is your screen actually [08:57:30] oh [08:57:47] wait [08:57:47] that red PS1 isn't mine [08:58:02] I am mixing it up with 134684.T210713 [08:58:03] T210713: Drop change_tag.ct_tag column in production - https://phabricator.wikimedia.org/T210713 [08:58:29] 134684.T210713 -> that one is mine and it is in use [08:58:35] yeah, no issue [08:58:46] so I will kill 258587.snapshot_test_T210292 only [08:58:53] go for it! [08:59:39] it was confusing because I accidentally attached to your session from within my session [08:59:51] screen-ception [09:00:42] hahaha [09:06:17] 10DBA, 10Operations, 10ops-codfw, 10Patch-For-Review: rack/setup/deploy codfw dedicated backup recovery/provisioning hosts - https://phabricator.wikimedia.org/T218336 (10ops-monitoring-bot) Script wmf-auto-reimage was launched by jynus on cumin1001.eqiad.wmnet for hosts: ` ['dbprov2001.codfw.wmnet'] ` The... [09:08:14] I don't know if I mentioned but dumps are running now [09:08:29] great [09:26:29] 10DBA, 10Wikidata, 10Wikidata wb_terms Trailblazing, 10User-Addshore: Send the schema to DBAs and follow up with their review - https://phabricator.wikimedia.org/T219142 (10alaa_wmde) a:03Addshore [09:28:24] 10DBA, 10Operations, 10ops-codfw, 10Patch-For-Review: rack/setup/deploy codfw dedicated backup recovery/provisioning hosts - https://phabricator.wikimedia.org/T218336 (10ops-monitoring-bot) Script wmf-auto-reimage was launched by jynus on cumin1001.eqiad.wmnet for hosts: ` ['dbprov2001.codfw.wmnet'] ` The...
[09:46:07] so the hard disks (spinning disks) are sdb, the solid state ones are sda [09:46:27] I think that is only because of the order in which they were set up on the raid [09:46:47] I am doing a partition manually just for testing anyway [09:47:04] ah cool [10:12:08] o/ marostegui jynus lots of context added to https://gerrit.wikimedia.org/r/#/c/mediawiki/extensions/Wikibase/+/499142/5/repo/sql/AddNormalizedTermsTablesDDL.sql :) [10:12:28] addshore: oh nice, thank you - will read it! [10:18:47] 10DBA, 10Wikidata, 10Wikidata wb_terms Trailblazing, 10User-Addshore: Send the schema to DBAs and follow up with their review - https://phabricator.wikimedia.org/T219142 (10Marostegui) [10:19:20] 10DBA, 10Wikidata, 10Wikidata wb_terms Trailblazing, 10User-Addshore: Send the schema to DBAs and follow up with their review - https://phabricator.wikimedia.org/T219142 (10Marostegui) [10:20:51] 10DBA, 10Wikidata, 10Wikidata wb_terms Trailblazing, 10User-Addshore: Send the schema to DBAs and follow up with their review - https://phabricator.wikimedia.org/T219142 (10Addshore) [10:21:22] 10DBA, 10Operations, 10ops-codfw, 10Patch-For-Review: rack/setup/deploy codfw dedicated backup recovery/provisioning hosts - https://phabricator.wikimedia.org/T218336 (10ops-monitoring-bot) Script wmf-auto-reimage was launched by jynus on cumin1001.eqiad.wmnet for hosts: ` ['dbprov2001.codfw.wmnet'] ` The... [10:22:11] 10DBA, 10Wikidata, 10Wikidata wb_terms Trailblazing, 10User-Addshore: Send the schema to DBAs and follow up with their review - https://phabricator.wikimedia.org/T219142 (10Marostegui) We can probably follow up on the patchset rather than on phab, as there are comments there already [10:22:45] looks like the main discussion is going to be varchar vs varbinary [10:23:08] wb_terms uses varbinary already, so is there any reason for varchar? [10:23:22] well, my response to that would be, what was the reason for using varbinary?
:D [10:23:29] so that is a broader discussion [10:23:43] in theory *char should be used [10:24:00] which for WMF gets automatically translated into *binary [10:24:09] but it should be char for anyone else [10:24:12] addshore: that I don't know! I am just saying that the original table we want to normalize uses varbinary :) [10:24:21] but then there were 2 issues [10:24:36] people seeing *binary and thinking they should be using that [10:24:43] (incorrectly) [10:24:55] which leads to inconsistencies [10:25:00] https://phabricator.wikimedia.org/rSVN80547 [10:25:01] why are they automatically translated for wmf use? [10:25:03] and the other issue was [10:25:09] addshore: mysql does that [10:25:16] because of the charset used [10:25:26] aah yes, gotcha [10:25:41] the other issue is that with the recent usage of utf8mb4 [10:26:01] indexes sometimes fail unless a long prefix index is being used [10:26:10] so we no longer support real utf8 [10:26:33] and while it is not 100% clear, I think the latest suggestion would be not to support utf8, and everything is binary [10:26:43] but what core says doesn't necessarily apply to extensions [10:26:46] I kind of feel like this all ties into https://phabricator.wikimedia.org/T199833 [10:26:47] so it is a bit of a mess [10:27:05] the thing is, for WMF, in practical terms, we don't care as they are effectively the same [10:27:13] So I guess after https://phabricator.wikimedia.org/rSVN80547 it has all been varbinary? [10:27:20] the issue is for mw as a product, so for us it doesn't bother us much [10:28:14] the thing is, for external users [10:28:16] yeah, my main point on the patchset was for consistency really (with the "parent" table) [10:28:25] they still use utf8 [10:28:35] even if everyone agrees that it is a bad idea [10:28:45] but nobody set out to create a migration path [10:29:03] but again, for us we don't care as varchar == varbinary [10:29:18] we shouldn't be using utf8?
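The "indexes sometimes fail unless a long prefix index is being used" remark comes from InnoDB's historical 767-byte index key prefix limit (for the older row formats, before innodb_large_prefix became the default): MySQL reserves the worst-case byte width per character, so an index that fits under the 3-byte utf8 charset no longer fits under 4-byte utf8mb4. A back-of-envelope check in Python of that well-known limit:

```python
# InnoDB's classic per-column index prefix limit, in bytes
# (COMPACT/REDUNDANT row formats without innodb_large_prefix).
PREFIX_LIMIT = 767

# MySQL sizes index prefixes for the worst case: the maximum
# bytes-per-character of the column's character set.
utf8mb3_chars = PREFIX_LIMIT // 3  # mysql's legacy 3-byte "utf8"
utf8mb4_chars = PREFIX_LIMIT // 4  # real UTF-8, up to 4 bytes/char

print(utf8mb3_chars)  # 255 -> a VARCHAR(255) index fits under utf8
print(utf8mb4_chars)  # 191 -> the familiar VARCHAR(191) ceiling under utf8mb4
```

This is where the common VARCHAR(191) workaround for utf8mb4 schemas comes from: converting a fully indexed VARCHAR(255) utf8 column to utf8mb4 pushes the index past 767 bytes unless the prefix is shortened or large prefixes are enabled.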
[10:29:31] depending on what you mean by utf8 [10:29:42] there is utf8 and UTF-8 [10:29:55] the first is a 3-byte format by mysql [10:30:02] the second is an international standard [10:30:12] we definitely should not be using utf8 [10:30:31] if anything, we should be using mysql's utf8mb4 [10:30:45] but that has issues, mostly lagging behind the real standard [10:31:22] so binary is used as a compromise, allowing for things that are not currently supported on mysql's implementation [10:32:04] this is simpler than it looks, it is just the practical ramifications that are complicated [10:32:08] so, if we are storing UTF-8 then should we not be using utf8 collation or something better in mysql? [10:32:24] something better == binary [10:32:54] collation is not handled by the database [10:32:59] it is handled by php [10:33:30] in some cases because we (wikimedia) are the source of collation, and that does not exist on mysql yet [10:33:48] So I have not been able to find why this was pushed: https://phabricator.wikimedia.org/rSVN80547 I mean, what triggered it [10:33:56] mediawiki has better collation support, in general, than mysql [10:34:00] apart from: varchars cause problems ("Invalid mix of collations" errors) on MySQL databases with certain configs, most notably the default MySQL config [10:34:32] I don't have the context, but I think if you have any question, it is better to ask concrete questions than very general ones [10:34:42] ^ addshore [10:34:50] ack [10:35:08] the general question of "why are we using X" is very long :-) [10:35:23] @jynus @addshore sorry I'm jumping in the middle [10:35:23] I think the concrete thing can be seen on this patch https://gerrit.wikimedia.org/r/c/mediawiki/extensions/Wikibase/+/499142 [10:35:54] alaa_wmde: we were discussing the whole varchar vs varbinary thing [10:35:58] so the short question is [10:36:04] So the way I see this patch is: if we have varbinary on the foundation for whatever reason, and the wb_terms also have
varbinary, given that we don't care much, I would still go for varbinary for consistency with the rest [10:36:15] if I understand correctly, the conclusion would be to use VARCHAR with binary if we want to support UTF8 on mysql level, right @jynus ? [10:36:20] for wmf- we don't care, they all get converted transparently to varbinaries [10:37:02] and mediawiki does the collation [10:37:26] well, it has to be coded [10:37:34] but it is coded in other places [10:37:54] for wikibase as a product [10:37:59] yup, okay, I'm not actually sure if that is done for the text in wb_terms currently, at least, nowhere i can see it [10:38:12] it is an issue, and I will be glad to help you understand the issue [10:38:47] for example, can wikibase be installed without mediawiki? [10:38:58] not currently [10:39:20] mediawiki only supports 2 charsets [10:39:36] utf8 (the 3-byte one) - semideprecated, and binary [10:39:55] using the utf8 collation, the old standard, would be a bad idea with *char [10:40:14] because you would not be able to store many languages' characters, emojis, etc.
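The distinction drawn above (mysql's 3-byte utf8 vs the real UTF-8 standard) is easy to demonstrate from Python, whose encoder implements the full standard: any character whose encoding needs 4 bytes, such as emoji, simply does not fit in a mysql utf8 column, while a binary column stores the raw bytes and leaves collation to the application, as the conversation notes. A small illustrative check:

```python
# Basic Multilingual Plane code points need at most 3 bytes in
# UTF-8, which is all mysql's legacy "utf8" (utf8mb3) can store.
assert len("é".encode("utf-8")) == 2
assert len("€".encode("utf-8")) == 3

# Supplementary-plane characters (emoji, rare CJK) need 4 bytes:
# fine for utf8mb4 or a binary column, rejected by utf8mb3.
assert len("😀".encode("utf-8")) == 4

# With a binary charset/collation the database compares raw bytes,
# so ordering is by byte value ('Z' sorts before 'a') and the
# application (MediaWiki here) must apply any human-friendly collation.
assert sorted([b"a", b"Z"]) == [b"Z", b"a"]
print("all checks pass")
```

This is why "binary is used as a compromise": byte storage accepts anything the standard allows, at the cost of pushing collation up into PHP.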
[10:40:31] utf8 (stress on the mysql usage of that config) doesn't support that [10:40:47] you want utf8mb4, which is not currently supported by mediawiki [10:41:03] or binary, which is the only well supported option, and the one we use [10:41:36] you can support utf8mb4, but you will need to fix a lot of stuff on wikibase and mediawiki in general- and as far as I know, the plan is not to support it [10:43:08] for example, trying to use utf8mb4 makes almost all extensions fail unit testing: https://phabricator.wikimedia.org/T193222 [10:43:58] I wrote here, not my decision, but the feedback I got from mw developers: https://phabricator.wikimedia.org/T193222#4247329 [10:45:29] if that continues to be true, varchar or varbinary doesn't matter, they will all be varbinary, although you will have to handle collation on the app side [10:46:39] and since 2018, binary is the only collation supported: https://phabricator.wikimedia.org/T196092 [10:46:59] ...for new installs [10:54:01] jynus: very nice explanation :) [10:54:22] I think I confused them more than I helped [10:54:27] haha [10:54:47] but I think in reality char or binary doesn't matter, although they should be aware they will be using binary charset and collation [10:55:02] (for the raw database access) [10:55:20] I will keep supporting my last comment about consistency :) [10:56:09] * alaa_wmde haha no it was very helpful and informative @jynus thanks! [10:56:11] did you check the tables vs the definition? [10:56:52] because it may be defined as varchar in the code, but after the sql runs, there is no way to know (all will be binary) [10:57:33] @marostegui we gotta go with varbinary in new schema for wikidata at least as it is our main priority ..
if other/new wikibase instances might need more work on code level for collations then we will probably tackle that separately later [10:58:34] jynus: I did, and it is varbinary [10:58:45] then +1 to maintain consistency [10:58:53] alaa_wmde: but your patchset uses varchar, that is why I commented :) [11:10:20] Install got stuck at "11:06:37 | dbprov2001.codfw.wmnet | Polling until a Puppet sign request appears", never happened to me before [11:10:42] I can see the sign request there [11:10:57] can you log in and check if it has network or it failed or something? [11:15:01] I cannot because someone removed the documentation from the wiki [11:15:06] @marostegui yeap 👍 I meant I will go and update it too ;) [11:15:16] because "the script should be good enough" [11:15:43] jynus: sudo /usr/local/sbin/install-console $server_fqdn should be it? [11:15:48] alaa_wmde: cheers for that [11:15:53] so I guess I have to open a ticket for volans [11:16:08] jynus: what's up? [11:16:29] do you know where https://wikitech.wikimedia.org/wiki/Server_Lifecycle went? [11:16:35] jynus: it is still here: https://wikitech.wikimedia.org/wiki/Server_Lifecycle#Manual_installation [11:16:45] marostegui: thanks [11:18:52] find / -name install-console only gives one result, in your home, marostegui :-) [11:19:22] oh, it is from puppet [11:19:27] uh? [11:20:50] 10DBA, 10MediaWiki-Database, 10WikimediaEditorTasks, 10Patch-For-Review, 10Reading-Infrastructure-Team-Backlog (Kanban): Choose DB/Cluster for WikimediaEditorTasks tables - https://phabricator.wikimedia.org/T218302 (10Mholloway) >>! In T218302#5060982, @Marostegui wrote: >>>! In T218302#5060387, @Mhollow... [11:22:29] all sorts of stuff went wrong [11:26:54] 10DBA, 10MediaWiki-Database, 10WikimediaEditorTasks, 10Patch-For-Review, 10Reading-Infrastructure-Team-Backlog (Kanban): Choose DB/Cluster for WikimediaEditorTasks tables - https://phabricator.wikimedia.org/T218302 (10Marostegui) >>! In T218302#5061444, @Mholloway wrote: >>>!
In T218302#5060982, @Maroste... [11:31:12] jynus: the issue there is that when the puppet CSR was created the host didn't have network, hence it didn't send the CSR to the puppetmaster [11:31:15] error while receiving frame on eno4 (retry: 0): Network is down [11:31:36] strange [11:31:43] it did send it [11:31:46] I saw it [11:31:56] also shouldn't the host be called dbprov2001? [11:32:09] lol [11:32:12] it's dprov2001 [11:35:27] It is all manuel's fault :-D https://gerrit.wikimedia.org/r/498119 [11:35:37] I have no idea then how that got installed [11:35:57] oh, dns is right, reverse is wrong [11:37:28] if we had the delta check the linter would have caught that :( [11:38:01] no worries, I won't touch those hosts anymore [11:38:17] marostegui: it was a joke, don't get angry :-D [11:39:26] they clearly don't like me [11:39:29] sorry 0:-) [11:39:58] i actually caught an error on that dns patch and asked papaul to amend it, but that one slipped through [11:40:30] I make a lot of mistakes, that is why I ask you for review, to catch them! [11:41:19] 10DBA, 10MediaWiki-Database, 10WikimediaEditorTasks, 10Patch-For-Review, 10Reading-Infrastructure-Team-Backlog (Kanban): Choose DB/Cluster for WikimediaEditorTasks tables - https://phabricator.wikimedia.org/T218302 (10Mholloway) Great, thanks @Marostegui! I've created the table on testwikidatawiki and w... [11:42:58] 10DBA, 10MediaWiki-Database, 10WikimediaEditorTasks, 10Patch-For-Review, 10Reading-Infrastructure-Team-Backlog (Kanban): Choose DB/Cluster for WikimediaEditorTasks tables - https://phabricator.wikimedia.org/T218302 (10Marostegui) Is there anything else pending on this task? I saw that {T218087} got closed [11:44:03] jynus: marostegui: I'm going to create two new wikis in ten minutes or so. I hope it's fine for you [11:44:12] Amir1: tasks?
[11:44:33] Amir1: we'd need to sanitize labs and all that jazz before the views are created [11:44:35] https://phabricator.wikimedia.org/T212597 and https://phabricator.wikimedia.org/T218155 [11:44:37] 10DBA, 10MediaWiki-Database, 10WikimediaEditorTasks, 10Patch-For-Review, 10Reading-Infrastructure-Team-Backlog (Kanban): Choose DB/Cluster for WikimediaEditorTasks tables - https://phabricator.wikimedia.org/T218302 (10jcrespo) I would like to review the table structure. [11:44:55] 10DBA, 10MediaWiki-Database, 10WikimediaEditorTasks, 10Patch-For-Review, 10Reading-Infrastructure-Team-Backlog (Kanban): Choose DB/Cluster for WikimediaEditorTasks tables - https://phabricator.wikimedia.org/T218302 (10Mholloway) 05Open→03Resolved a:03Mholloway I think it's safe to call it resolved.... [11:45:03] haha, I can postpone it [11:45:07] anything you say [11:45:20] 10DBA, 10MediaWiki-Database, 10WikimediaEditorTasks, 10Patch-For-Review, 10Reading-Infrastructure-Team-Backlog (Kanban): Choose DB/Cluster for WikimediaEditorTasks tables - https://phabricator.wikimedia.org/T218302 (10Mholloway) 05Resolved→03Open >>! In T218302#5061558, @jcrespo wrote: > I would like... [11:45:33] 10DBA, 10MediaWiki-Database, 10WikimediaEditorTasks, 10Patch-For-Review, 10Reading-Infrastructure-Team-Backlog (Kanban): Choose DB/Cluster for WikimediaEditorTasks tables - https://phabricator.wikimedia.org/T218302 (10Marostegui) >>! In T218302#5061558, @jcrespo wrote: > I would like to review the table... [11:46:03] Amir1: We don't have the tasks for the storage creation, similar to: T207095 or at least we are not tagged, can you see them somewhere? 
[11:46:03] T207095: Prepare and check storage layer for vnwikimedia - https://phabricator.wikimedia.org/T207095 [11:46:23] Amir1: nevermind [11:46:25] I am blind :) [11:46:34] 10DBA, 10MediaWiki-Database, 10WikimediaEditorTasks, 10Patch-For-Review, 10Reading-Infrastructure-Team-Backlog (Kanban): Choose DB/Cluster for WikimediaEditorTasks tables - https://phabricator.wikimedia.org/T218302 (10jcrespo) a:05Mholloway→03jcrespo [11:46:42] no, what's wrong? I can't find it either [11:47:23] Amir1: Yeah, we have two with similar names [11:47:50] So T212597 has one -> T212625 [11:47:51] T212625: Prepare and check storage layer for hywwiki - https://phabricator.wikimedia.org/T212625 [11:47:51] T212597: Create Wikipedia Western Armenian - https://phabricator.wikimedia.org/T212597 [11:47:51] Right? [11:48:09] But for T218155 I cannot find it [11:48:09] T218155: Create Wikisource Hindi - https://phabricator.wikimedia.org/T218155 [11:49:23] let me make that [11:51:48] Amir1: Once you have created both, can you comment on the "Storage" tasks and assign them to me so they get blocked on me sanitizing everything before asking cloud to create the views? [11:52:51] 10DBA, 10Cloud-Services: Prepare and check storage layer for hi.wikisource - https://phabricator.wikimedia.org/T219374 (10Ladsgroup) [11:53:10] sure [11:53:38] 10DBA, 10Cloud-Services: Prepare and check storage layer for hi.wikisource - https://phabricator.wikimedia.org/T219374 (10Ladsgroup) a:03Marostegui [11:53:52] Amir1: are those created already? 
[11:54:24] marostegui: so we have three wikis to create now: nap.wikisource, hi.wikisource and hyw.wikipedia [11:55:00] npwikisource and hywiki already have tickets that were first assigned to DBAs and are now unassigned (waiting for creation) [11:55:01] Amir1: I think so yep [11:55:36] jynus: I had the patch ready to fix the typo [11:55:39] hiwikisource is waiting for you (I just made the storage ticket) [11:56:09] Amir1: I am confused sorry, assign the storage ones to me once you've actually run the creation [11:56:35] As we need the tables to be there so we can sanitize them and apply filters :) [11:56:54] sure [11:57:17] Or just comment on them once you have created them, either way works :) [11:57:52] Don't worry [11:57:55] Thanks [11:58:11] Thanks! I will grab lunch and then take a look [13:58:32] 10DBA, 10Cloud-Services: Prepare and check storage layer for hi.wikisource - https://phabricator.wikimedia.org/T219374 (10Marostegui) a:05Marostegui→03None Removing myself as the wiki isn't created yet. Assign to me once you've created the tables, so I can sanitize it before sending it to the WMCS Team [13:58:41] Amir1: ^
I comme [14:03:02] I add it [14:03:11] * marostegui confused [14:03:12] (Sorry, phone) [14:03:15] ok :) [14:03:42] So, my confusion is: hywwiki looks created (I can see the tables on s3 master) but there is no update to that ticket, if it is created, we need to sanitize it [14:03:48] 10DBA, 10Data-Services, 10Operations, 10cloud-services-team: Prepare and check storage layer for hywwiki - https://phabricator.wikimedia.org/T212625 (10MarcoAurelio) @Marostegui Although the wiki config had to be reverted the addWiki script was run so //I guess// the tables are now in place? [14:05:10] ^^ here's your update :P [14:06:02] 10DBA, 10Data-Services, 10Operations, 10cloud-services-team: Prepare and check storage layer for hywwiki - https://phabricator.wikimedia.org/T212625 (10Marostegui) So those tables will not be dropped + created again as part of any other process and will remain there until the wiki config is in place? [14:08:57] marostegui: https://phabricator.wikimedia.org/T212625#5061983 -- I don't know how they want to do it [14:09:47] hauskatze: Let's see what Amir1 says [14:10:20] hauskatze: I can do the sanitization and everything but if the tables will be dropped, there is no point in asking the cloud team to run the views [14:10:30] sure [14:10:45] I am going to sanitize it anyways, just in case [14:12:27] db is not on toolforge yet though [14:12:49] It is, just not the view [14:12:54] I doubt the table will be dropped [14:13:10] The issues are basically mw app/mw-config issues [14:13:34] Ah, I see [14:13:46] * hauskatze don't :P - makes some tea [14:17:14] 10DBA, 10Data-Services, 10Operations, 10cloud-services-team: Prepare and check storage layer for hywwiki - https://phabricator.wikimedia.org/T212625 (10Ladsgroup) >>! In T212625#5061983, @Marostegui wrote: > So those tables will not be dropped + created again as part of any other process and will remain th...
[14:17:58] marostegui: is it good enough ^ [14:18:05] yep [14:18:12] I am answering that [14:21:17] 10DBA, 10Data-Services, 10Operations, 10cloud-services-team: Prepare and check storage layer for hywwiki - https://phabricator.wikimedia.org/T212625 (10Marostegui) a:03Marostegui I have sanitized `hywwiki` on db1124:3313 and triggers are now in place and filtered tables deleted. ` mysql.py -h db1124:3313... [14:21:37] Amir1: sounds good ^ [14:22:30] thanks. sure [14:22:59] the other ones you mentioned earlier, those are still not created, right? [14:46:27] 10DBA, 10Data-Services: Prepare and check storage layer for hi.wikisource - https://phabricator.wikimedia.org/T219374 (10JJMC89) [14:46:48] 10DBA: Purchase and setup remaining hosts for database backups - https://phabricator.wikimedia.org/T213406 (10Cmjohnson) [14:53:23] 10DBA, 10Operations, 10ops-eqiad: rack/setup/deploy eqiad dedicated backup recovery/provisioning hosts - https://phabricator.wikimedia.org/T219399 (10Marostegui) p:05Triage→03Normal [14:54:29] 10DBA, 10Operations, 10ops-eqiad: rack/setup/deploy eqiad dedicated backup recovery/provisioning hosts - https://phabricator.wikimedia.org/T219399 (10RobH) [14:56:18] 10DBA, 10Operations, 10ops-eqiad: rack/setup/deploy eqiad dedicated backup recovery/provisioning hosts - https://phabricator.wikimedia.org/T219399 (10jcrespo) [15:00:02] 10DBA, 10Operations, 10ops-eqiad: rack/setup/deploy eqiad dedicated backup recovery/provisioning hosts - https://phabricator.wikimedia.org/T219399 (10Cmjohnson) [16:26:59] root@dbprov2001:~# df -hT /srv [16:27:00] Filesystem Type Size Used Avail Use% Mounted on [16:27:00] /dev/mapper/tank-data xfs 11T 12G 11T 1% /srv [16:27:02] <3 [17:09:32] 10DBA, 10MediaWiki-Cache, 10Patch-For-Review, 10Performance-Team (Radar), 10User-Marostegui: Replace parsercache keys to something more meaningful on db-XXXX.php - https://phabricator.wikimedia.org/T210725 (10aaron) FYI, I was playing around with my PHP interpreter last night using: `
lang=php marostegui: I deployed because I didn't know you were around [17:10:29] https://gerrit.wikimedia.org/r/499528 [17:13:31] 10DBA, 10MediaWiki-Cache, 10Patch-For-Review, 10Performance-Team (Radar), 10User-Marostegui: Replace parsercache keys to something more meaningful on db-XXXX.php - https://phabricator.wikimedia.org/T210725 (10jcrespo) Personally, if there is space and it works, > Just wait it out for the purgeParserCache.... [18:17:29] jynus: I wasn't around, you did well! [18:20:03] 10DBA, 10Operations, 10ops-codfw, 10Patch-For-Review: rack/setup/deploy codfw dedicated backup recovery/provisioning hosts - https://phabricator.wikimedia.org/T218336 (10jcrespo) I've set up dbprov2001 and sent snapshots for codfw there. We may have to think of a way to coordinate dbprov2001 and dbprov2002. I... [18:20:30] 10DBA, 10Operations, 10ops-codfw, 10Patch-For-Review: rack/setup/deploy codfw dedicated backup recovery/provisioning hosts - https://phabricator.wikimedia.org/T218336 (10jcrespo) [18:28:16] btw. Two days ago I discovered we have a querycache table, but we also have querycachetwo, literally querycachetwo as part of core [18:46:13] Amir1: You must be new around here [18:46:45] I'm just not very attentive :P [18:47:03] This week is actually the tenth anniversary of me becoming an admin on fawiki [19:08:53] 10DBA, 10Operations, 10ops-codfw, 10Patch-For-Review: rack/setup/deploy codfw dedicated backup recovery/provisioning hosts - https://phabricator.wikimedia.org/T218336 (10Marostegui) I would prefer option #1 because it scales better for the future if we need more hosts and it looks cleaner in general, a cen...
[22:05:07] 10DBA, 10Operations, 10ops-codfw, 10procurement: rack/setup/install (5) dedicated dump slaves - https://phabricator.wikimedia.org/T219463 (10RobH) p:05Triage→03Normal [22:05:11] 10DBA, 10Operations, 10ops-codfw, 10procurement: rack/setup/install (5) dedicated dump slaves - https://phabricator.wikimedia.org/T219463 (10RobH) [22:05:50] 10DBA, 10Operations, 10ops-codfw, 10procurement: rack/setup/install (5) dedicated dump slaves - https://phabricator.wikimedia.org/T219463 (10RobH) Any member of #dba team can provide feedback (@jcrespo or @Marostegui) and please then assign to @papaul for followup. [22:06:08] 10DBA, 10Operations, 10ops-codfw, 10procurement: rack/setup/install (1) testing host for codfw backups - https://phabricator.wikimedia.org/T219461 (10RobH)