[05:59:25] hey legoktm. quick question: are we going to have the URL shortener soonish?
[05:59:26] :)
[13:22:02] o/
[14:11:56] \o_
[20:37:30] yuvipanda, do you know if there is a database table somewhere mapping wikidata entries to pages that import from them?
[20:41:08] hey kjschiroo. maybe, but I don't know. ask in #wikidata maybe?
[20:45:24] yuvipanda, thanks!
[20:45:42] yw! sorry I can't provide a more useful answer
[20:54:37] yuvipanda: do you think we can convince legoktm to push the URL shortener? :)
[20:54:57] life will be much easier with a URL shortener.
[21:33:56] Soooo... by switching our article quality models from RandomForest to GradientBoosting, we have (1) achieved minor but still significant increases in accuracy (and other robust evaluations) and (2) dropped memory usage sufficiently which means that we can start up more workers!
[21:34:10] s/sufficiently/substantially
[21:34:13] /
[21:34:45] \o/
[21:35:02] This is a great Friday afternoon activity :)
[21:44:40] yuvipanda. I'm getting a 504 Gateway Time-out in paws. Any easy fix?
[21:44:55] uh oh. looking
[21:45:00] thx
[21:45:11] J-Mo which account?
[21:45:17] jtmorgan
[21:53:36] J-Mo check now?
[21:54:04] blam, I'm in yuvipanda!
[21:54:16] was the problem the result of my own actions?
[21:54:40] J-Mo I'm investigating that now.
[21:54:46] J-Mo were you doing something heavy when it happened?
[21:54:59] I was probably doing something stupid when it happened.
[21:55:25] so there are two things that could've happened - the jupyter process was hung waiting for something
[21:55:26] but not too heavy. working with a pandas dataframe with maybe 15k rows and 7 columns.
[21:55:29] or there is a problem in the infrastructure
[21:55:35] but I have data now
[21:55:40] on how much RAM / CPU you were using
[21:55:42] pulling that up so I can see
[21:58:42] J-Mo ok, I see a CPU spike from you which might be the reason
[21:58:58] I don't fully understand the unit yet, but it has your CPU usage at 1612.48
[21:59:18] J-Mo can you do the pandas thing again and we'll see if this happens agian?
[21:59:19] *again?
[21:59:38] * J-Mo saving...
[22:00:28] just ran the plotting operation again, yuvipanda
[22:01:14] that's all I was doing (I think)
[22:02:23] J-Mo hmm, looks ok now
[22:02:40] dunno, then.
[22:03:17] btw, yuvipanda. do notebooks running in the background use significant memory? should I be halting those processes if I don't plan to use a notebook for a while?
[22:04:06] J-Mo they do, but in general you shouldn't worry about it. I think in the near future (~1-2 months) I'll probably institute a 'if your notebook is idle for >12-24h, it will be killed' policy, but not yet.
[22:04:21] so yah, generally - save your work, and you should be ok.
[22:04:31] J-Mo oh, yeah, if you are actively working, then notebooks in the background will use memory
[22:04:50] k. I'll shut 'em down if I don't plan to touch them during a given work session.
[22:04:58] so if you aren't using them you should shut them down for now
[22:05:02] thanks again, yuvipanda! I hope you're enjoying your Friday.
[22:05:13] J-Mo sorry about the interruption in service!
[22:07:08] not a problem. grateful as always that the platform exists
[22:12:42] J-Mo in the future, if this happens again, and I don't respond on IRC, feel free to call me. I want to get to the bottom of this
[22:12:57] got it. will do, yuvipanda
[22:13:26] J-Mo thanks!
[22:19:11] J-Mo aha! I found it
[22:19:19] J-Mo you were using 1023 MB of RAM
[22:19:23] the limit is 1024 MB
[22:19:27] so you hit the RAM limit
[22:19:47] and so it took a while to recover
[22:19:49] * yuvipanda ponders solutions
[22:19:59] I guess I could increase the RAM limit to 2G
[22:20:13] but at least I know what happened!
[22:20:22] (you are currently only using 400 MiB so you're good)
[22:20:56] whew! I'll try to keep it within bounds whenever possible :)
[22:21:41] J-Mo np. I can also increase it if necessary
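
The 1023 MB vs. 1024 MB diagnosis above is a container memory limit being hit. A rough way to check this from inside a notebook is sketched below; it assumes a Linux host with cgroup v1 (an assumption about the PAWS setup of the time, not something stated in the log), and the 1 GiB figure comes from the conversation, not from this code.

```python
# Rough sketch: compare the notebook kernel's memory use with the container's
# cgroup limit, as in the 1023 MB vs. 1024 MB situation above.
# Assumes Linux with cgroup v1; the path below may differ on other setups.
import resource

def kernel_peak_rss_mib():
    # On Linux, ru_maxrss is the peak resident set size in kilobytes.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

def container_limit_mib(path="/sys/fs/cgroup/memory/memory.limit_in_bytes"):
    with open(path) as f:
        return int(f.read()) / (1024 * 1024)

print(f"kernel peak RSS: {kernel_peak_rss_mib():.0f} MiB")
print(f"container limit: {container_limit_mib():.0f} MiB")
```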
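The model change announced at 21:33:56 amounts to swapping one scikit-learn estimator for another. Below is a minimal sketch of that kind of comparison, using synthetic data in place of the real article quality features; it is not the actual ORES/revscoring training code.

```python
# Hedged sketch of the RandomForest -> GradientBoosting swap discussed above.
# make_classification stands in for the real, hand-labelled revision features.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

for name, model in [
    ("RandomForest", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("GradientBoosting", GradientBoostingClassifier(n_estimators=100, random_state=0)),
]:
    accuracy = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {accuracy:.3f}")
```

One likely reason for the memory drop: a fitted random forest keeps many fully grown trees, while gradient boosting defaults to shallow trees, so the in-memory (and serialized) model tends to be much smaller, which is what makes room for more scoring workers per host.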