[15:30:13] someone working on netbox?
[15:30:27] volans|off, chaomodus?
[15:30:56] paravoid: are you having issues accessing it?
[15:31:06] so
[15:31:13] short answer is yes, but
[15:31:23] I've been having these connection interrupted issues for a long time
[15:31:36] yes, I remember ;)
[15:31:43] now I got a couple of these again, but this time I also got
[15:31:48] Static Media Failure
[15:31:49] The following static media file failed to load: css/base.css
[15:31:49] Check the following:
[15:31:49] manage.py collectstatic was run during the most recent upgrade. This installs the most recent iteration of each static file into the static root path.
[15:31:56] The HTTP service (e.g. nginx or Apache) is configured to serve files from the STATIC_ROOT path. Refer to the installation documentation for further guidance.
[15:31:59] STATIC_ROOT: /srv/deployment/netbox/deploy-cache/revs/03cc2ddd9f99b70dfc0efb9eeb69d846d8047225/src/netbox/static
[15:32:02] STATIC_URL: /static/
[15:32:04] The file css/base.css exists in the static root directory and is readable by the HTTP process.
[15:32:07] Click here to attempt loading NetBox again.
[15:32:09] i.e. got redirected to this page: https://netbox.wikimedia.org/media-failure/?filename=css/base.css
[15:32:18] this after a quick refresh after an interrupted connection
[15:51:28] * volans|off not at all
[16:04:46] interesting
[16:04:58] not working on it, no, but I have a suspicion
[16:06:19] chaomodus: double-check the scap logs to see if collectstatic ran correctly in the last deploy
[16:06:32] it did on netbox1001 at least
[16:06:37] but I suspect netbox2001 is the culprit
[16:06:55] scap deploy-log --verbose
[16:09:14] hrmph
[16:09:21] you can pass a specific file
[16:09:22] from the log dir
[16:12:47] which log dir?
[16:13:45] n/m
[16:13:52] volans|off: go off :P
[16:16:08] paravoid: the error page didn't happen to say which instance was serving it, did it?
[16:18:26] no
[16:18:33] okay very cool
[16:45:26] chaomodus: what was your suspicion?
[16:45:42] I suspected that netbox2001 is broken in some way
[16:45:52] it does not appear to be the case *now*
[16:46:01] and also I don't readily see how traffic gets directed to it
[16:47:12] hm
[16:47:55] however, when API stuff emanates from nb2001 it seems to have similar problems accessing the 1001 frontend as you do, like connection interrupts
[16:55:10] so the fact that a connection was interrupted and then immediately after I got the "media failure" page
[16:55:22] suggests to me that maybe this is a periodic reload/restart or something like that
[16:55:33] without having ssh'ed or anything, purely from the perspective of a user
[17:03:25] yes
[17:03:40] there isn't an auto-restart to my knowledge, although the uwsgi workers only live for so long
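
For context on the collectstatic check discussed from 16:06 onward, a minimal sketch of how the deploy could be verified from a shell, assuming SSH access to both backends. The hostnames (netbox1001, netbox2001), the STATIC_ROOT path, the file css/base.css, and the scap deploy-log command are the ones quoted in the log above; everything else is illustrative:

    # On the deployment host, review the last deploy and confirm the
    # collectstatic step ran (as suggested at 16:06:55).
    scap deploy-log --verbose

    # On each backend, confirm the file the error page complained about is
    # present and identical, to rule out a stale or missing copy on one host.
    for host in netbox1001 netbox2001; do
        ssh "$host" sha1sum \
            /srv/deployment/netbox/deploy-cache/revs/03cc2ddd9f99b70dfc0efb9eeb69d846d8047225/src/netbox/static/css/base.css
    done

    # And check that the web server actually serves it at STATIC_URL.
    curl -sI https://netbox.wikimedia.org/static/css/base.css | head -n 1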
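
On the closing remark about uwsgi workers only living "for so long": uwsgi is commonly configured to recycle workers via options such as --max-requests or --harakiri, and a client whose request lands on a worker at the moment it is recycled can see exactly the kind of interrupted connection described above. The invocation below only illustrates that mechanism; the module name and the numeric values are assumptions, not the actual production configuration:

    # Illustrative values only: recycle each worker after 5000 requests and
    # abort requests running longer than 30 seconds.
    uwsgi --module netbox.wsgi \
          --max-requests 5000 \
          --harakiri 30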