[02:34:26] One issue I frequently have with VisualEditor is that when I paste something, the screen will then jump to another part of the page. Is this documented anywhere?
[12:04:01] https://openstack-browser.toolforge.org/project/deployment-prep
[12:04:02] > Unknown project 'deployment-prep'. Are you just guessing?
[12:04:03] o_O?
[12:04:23] that tool has really poor error handling, as you can tell
[12:05:00] keystoneauth1.exceptions.http.BadRequest: Additional properties are not allowed ('enabled' was unexpected) (HTTP 400) (Request-ID: req-e201fff2-2382-4a6f-a406-ac450f9b9599)
[12:05:12] so I think that broke after last week's openstack upgrade
[12:06:37] thanks, reported at T396011
[12:07:06] in the meantime I got the current deployment server host name from horizon instead ^^
[12:09:23] try now?
[12:11:59] looks better now, thanks!
[13:51:55] is network to cloud broken for everyone or just me? I can't reach my VMs from the eqiad bastion
[13:52:18] which VM?
[13:53:15] taavi: rn-hcptchprxy-urldownloader-01.appservers.eqiad1.wikimedia.cloud for example
[13:53:54] works for me
[13:54:10] lemme try with ipv4...
[13:54:18] Raine: what error are you getting?
[13:55:51] no error, but I can't ping (maybe that's WAI?) and ssh hangs on debug1: `Connecting to rn-hcptchprxy-pki-01.appservers.eqiad1.wikimedia.cloud [172.16.18.224] port 22.`
[13:56:49] can you paste the full `ssh -v` log?
[13:57:24] that debug log smells like your ssh config might be bypassing the bastion
[13:57:54] that was ssh-ing from the bastion
[13:58:10] which bastion are you using?
[13:58:12] Rebooting my laptop now, just in case
[14:00:21] Nvm, reboot fixed it
[14:00:22] Sorry
[15:57:04] Bro
[15:57:05] Paws is down
[15:59:12] Don't have enough space (re @pv_MoDeM: Bro Paws is down)
[15:59:15] if you have an error message you should say what it is. don't use unnecessarily gendered language. don't use gratuitous newlines in messages. (your message could have been all one line)
[16:00:09] what's your username?
[16:04:26] DucknotBest (re @jeremy_b: what's your username?)
[16:04:56] Why did this happen? (re @jeremy_b: what's your username?)
[16:05:28] again you haven't said what the error message was
[16:05:41] and what were you doing when you ran out of space?
[16:06:22] When I try to upload a file, it gives an error saying there is not enough memory. (re @jeremy_b: again you haven't said what the error message was)
[16:06:45] your account was created in the last hour. how do you even know what PAWS normally behaves like?
[16:07:03] a script? what sort of file and how big? (re @pv_MoDeM: When I try to upload a file, it gives an error saying there is not enough memory.)
[16:08:06] 5.4 kb (re @jeremy_b: a script? what sort of file and how big?)
[16:09:19] memory != space. I'm still uncertain if you've given the exact wording of the error message or not. (re @pv_MoDeM: When I try to upload a file, it gives an error saying there is not enough memory.)
[16:09:49] can you say the steps you followed to get this far?
[16:10:03] it was all in the last hour so you should remember them all
[16:10:17] Wait, I'll send a screenshot. (re @jeremy_b: can you say the steps you followed to get this far?)
[16:16:09] https://tools-static.wmflabs.org/bridgebot/69d072af/file_70982.jpg
[16:26:11] idk why it shows as /etc/hosts for you but I agree /home/paws is full when I look. and I got an error message that seemed related, but I didn't manage to save it or get it to happen again.
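(Editor's note: as a rough illustration of the kind of check mentioned above -- confirming that the PAWS NFS share is full and seeing what is using the space -- here is a minimal sketch. Only the `/home/paws` path comes from the conversation; the specific commands, flags, and the assumption that whoever runs them has shell and sudo access on the NFS server are illustrative, not something actually run here.)

```console
# Hedged sketch, not commands from the conversation above.
# Check whether the volume backing PAWS home directories is full:
df -h /home/paws

# Find the largest per-user directories so a cleanup can be targeted;
# the depth, sort order, and "top 20" cut-off are arbitrary choices.
sudo du -sh /home/paws/* 2>/dev/null | sort -rh | head -20
```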
[16:26:28] I was scared those would linkify :(
[16:28:55] T396051
[16:29:11] Looks like the NFS server is full. Individual users don't have immediate quotas. Rather, things are shrunk daily to 5g per user
[16:29:54] What is this (re @jeremy_b: T396051)
[16:30:14] see bot after (re @pv_MoDeM: What is this)
[16:32:14] Can you delete files in PAWS? (re @jeremy_b: see bot after)
[18:27:58] I deleted a few files so far, there should be some space until someone can do a more thorough cleanup (during working hours)
[22:42:24] Have we ever considered putting beta.wmflabs.org on the public suffix list? We've already put wmflabs.org itself there, but if we do the same for beta.wmflabs.org, then that would make it a much more realistic testing environment (and obsolete hacks like T355281 which put test2wiki in beta under wmcloud.org, but that still limits testing to the case where you look for that intentionally, instead of cross-wiki stuff just generally behaving the same around login and API calls)
[22:42:25] T355281: Set up some beta cluster wikis with different registrable domain - https://phabricator.wikimedia.org/T355281
[22:42:33] https://github.com/publicsuffix/list/
[22:44:20] There's certainly precedent both for companies putting dev envs there (dev.adobeaemcloud.com) and for a domain and its subdomain both being on the PSL (longest match wins), e.g. ".uk" and ".co.uk", and a plethora of .gov and .jp setups.
[22:44:53] pmiazga: bd808: ^
[22:49:40] Krinkle: if it would help with testing things by creating better default browser realm separation, then I don't see a reason not to. I don't actually know what expected behaviors it might also break off the top of my head. Is the new SUL layer working such that complete domain separation is expected and possible?
[22:50:37] There might be an argument to migrate to newer hostnames under wmcloud.org as part of the same initiative if it was undertaken.
[22:58:20] bd808: fewer restrictions is always easier than more restrictions, so SUL will "work" either way. Our code dutifully sets cookies for *.wikipedia.beta.wmflabs.org and commons.wikimedia.beta.wmflabs.org, despite the fact that, if we wanted to, a *.beta.wmflabs.org cookie would totally work and no autologin/edgelogin anything would be needed.
[22:58:54] but it's quite useful to be able to test stuff like that, and given the browser isn't enforcing this, regressions can slip in that we'd not catch until prod
[23:00:02] theoretically such a change can only be equal or "worse", not "better", in that it literally adds a restriction, so out of infinite possibilities, more things will be broken, but that's a feature and therefore "better".
[23:00:28] transitioning domain names isn't trivial, but also totally do-able. I'd be happy to help do that first.
[23:00:37] The tough nut to crack is renaming databases/wiki IDs.
[23:00:42] renaming domain names is fairly straightforward.
[23:02:53] the unknown part of that to me (outside my direct control/expertise) is: DNS/TLS/Varnish.
[23:03:21] I imagine WMCS does not support automatically syncing beta.wmflabs and beta.wmcloud to both have the same entries, and maintaining two seems tedious
[23:03:46] although it should be fine not to gain new entries in the future, so maybe that can just be left frozen.
[23:04:18] if I understand https://wikitech.wikimedia.org/wiki/Nova_Resource:Redirects correctly, WMCS is supposed to redirect wmflabs to wmcloud automatically if the former doesn't exist and the latter does?
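(Editor's note: to make the public suffix list proposal discussed earlier concrete, here is a hedged sketch of what it would involve and what it would enforce. The PSL is a plain-text list of domains, one per line, with `//` comments, maintained via pull requests to the GitHub repo linked above; the entry text, contact comment, and cookie name below are purely illustrative, not an actual submission or the exact CentralAuth behaviour.)

```text
// Illustrative PRIVATE-section addition, next to the existing wmflabs.org entry
// (longest match wins, per the ".uk" / ".co.uk" precedent mentioned above):
// Wikimedia beta cluster
// Submitted by <maintainer contact>
beta.wmflabs.org

// Effect (sketch): browsers would then treat beta.wmflabs.org as a
// registrable-domain boundary, so from en.wikipedia.beta.wmflabs.org
//   Set-Cookie: sessioncookie=...; Domain=.beta.wmflabs.org        -> rejected
//   Set-Cookie: sessioncookie=...; Domain=.wikipedia.beta.wmflabs.org -> accepted
// which is the same separation production wikis get between *.wikipedia.org
// and other project domains, and what autologin/edgelogin testing relies on.
```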
[23:04:31] (though idk if that applies to beta. maybe beta has some special treatment, idk)
[23:04:33] hm.. that could come in handy yeah
[23:04:55] the 3-level deep subdomains might not be supported there, but might be doable indeed.
[23:05:01] TLS is also non-trivial for those redirects
[23:05:43] but we can certainly do a one-time export and dump them all there
[23:05:56] even if not automatic
[23:06:17] bd808: is there a task for beta to wmcloud?
[23:17:30] Found T289318
[23:17:30] T289318: Move *.beta.wmflabs.org to *.beta.wmcloud.org or similar? - https://phabricator.wikimedia.org/T289318
[23:32:36] OK. I've summarised a bunch of this at T289318
[23:32:36] T289318: Move *.beta.wmflabs.org to *.beta.wmcloud.org - https://phabricator.wikimedia.org/T289318
[23:36:43] Krinkle: I haven't read your on-task summary yet, but I don't think we would need to do any db renaming as long as we are just replacing wmflabs.org with wmcloud.org in the domains.
[23:37:05] yep, exactly. I was just mentioning dbs as an example of something that *is* hard, unlike domain renames.
[23:37:22] sorry for the confusion :)
[23:37:27] keeping some redirector service from the old hosts to the new feels relatively simple to set up
[23:38:41] the tls certs are all driven by hiera data I'm pretty sure
[23:40:22] I think this would need a custom redirector, especially because we will want TLS there too with the old names, but that is a single small vm and some apache config.
[23:52:26] I see.
[23:53:02] * bd808 made some comments on the phab task
[23:53:18] Another option might be to let Varnish or ATS or MW-Apache do it. We already have infra there to handle redirects.
[23:53:45] then we just keep all the certs in one place indeed.
[23:53:50] yeah, there is the prod redirect thing that we could try to leverage
[23:54:27] acme-chief makes the TLS certs a trivial problem. you set up some hiera data and then puppet can pull the certs from it
[23:54:28] https://gerrit.wikimedia.org/g/operations/puppet/+/d9c6ef7e899e19c4e443d4a0cf4a0795447e851f/modules/mediawiki/files/apache/beta/sites/beta_specific.conf
[23:55:39] cool. that is basically what I was talking about needing to build, so yeah just some config then
[23:56:59] there must be some matching varnish config too to pass requests to that apache
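(Editor's note: as a rough sketch of the "single small vm and some apache config" redirector discussed above, the following is one way it could look. The certificate name, file paths, and wildcard coverage are assumptions; in practice the paths would come from the acme-chief/hiera data and the linked beta_specific.conf would be the starting point.)

```apache
# Hypothetical redirector vhost: answer for the old *.beta.wmflabs.org names
# over TLS and send a permanent redirect to the same host under wmcloud.org.
<VirtualHost *:443>
    ServerName beta.wmflabs.org
    ServerAlias *.beta.wmflabs.org

    SSLEngine on
    # Assumed acme-chief managed certificate; the real certificate name and
    # filesystem layout would be defined in hiera and deployed by puppet.
    SSLCertificateFile    /etc/acmecerts/beta/live/ec-prime256v1.chained.crt
    SSLCertificateKeyFile /etc/acmecerts/beta/live/ec-prime256v1.key

    RewriteEngine on
    # Capture everything before .beta.wmflabs.org in the Host header and reuse
    # it under .beta.wmcloud.org, keeping the path (mod_rewrite carries the
    # query string over by default on redirects).
    RewriteCond %{HTTP_HOST} ^(.+)\.beta\.wmflabs\.org$ [NC]
    RewriteRule ^/(.*)$ https://%1.beta.wmcloud.org/$1 [R=301,L]

    # Note: deeper names such as en.wikipedia.beta.wmflabs.org need their own
    # wildcard certificates, per the 3-level subdomain caveat raised above.
</VirtualHost>
```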