[01:22:41] do we know whether https://en.wikipedia.org/wiki/Intellipedia ever updated to a more recent version of MediaWiki than what's shown in screenshots?
[01:22:50] [14:05:28] hm, how do I tell jenkins to run a specific job, like mediawiki-quibble-vendor-sqlite-php70-docker, that's usually only run on the gate pipeline? <-- "check sqlite" would have worked
[02:48:39] legoktm: if you can tell the version from a heavily redacted print view, there are some pretty recent ones in FOIA requests
[02:48:54] e.g. https://documents.theblackvault.com/documents/intellipedia/intellipedia-bilderberg.pdf
[02:52:24] tgr: hmm, I don't think so
[02:52:30] Maybe we should FOIA their Special:Version
[03:11:23] legoktm, if it shows they're using an outdated version I imagine they'd claim it exempt
[03:28:26] maybe!
[11:11:26] hashar: hey. is there something like "check sqlite" for postgres? "check postgres" seems to do nothing.
[11:14:49] I don't think we test postgres on our CI, do we?
[11:15:13] Oh, we do
[11:15:14] mediawiki-quibble-vendor-postgres-php70-docker SUCCESS in 6m 51s
[11:15:22] I guess no one has set up a command to do it
[11:16:29] Should be doable though
[11:28:03] # Let whitelisted users the ability to reenqueue a change in the test
[11:28:03] # pipeline by simply commenting 'recheck' on a change.
[11:28:03] - event: comment-added
[11:28:03]   branch: (?!^refs/meta/config)
[11:28:04]   comment: (?im)^Patch Set \d+:( -?Code\-Review(\+|-)?(1|2)?)?(\n\n\(\d+ comment(s)?\))?\n\n\s*recheck\.?\s*$
[11:28:06]   email: *email_whitelist
[11:28:21] duesen: ^ something like that needs adding to the postgres item in https://github.com/wikimedia/integration-config/blob/357cf219f01024c9a1d1610fa67ea8bdd34ee899/zuul/layout.yaml I guess
[11:36:46] Though
[11:36:47] comment: (?im)^Patch Set \d+:\n\n\s*check (php5?|zend|sqlite|postgres)\.?\s*$
[11:45:31] Reedy: I'm not confident enough to mess with that :)
[11:58:26] heh
[11:58:30] I'd suggest filing a task :)
[14:17:21] meh. this is the downside of using a docker-based development environment. i have no easy way to set up an sqlite-based mediawiki instance.
[14:17:52] duesen: unless there was another image set up that way
[14:18:00] well, the default mediawiki image uses sqlite - but it's not git-based, so there is no easy way to pull in my patch :/
[14:19:04] KateChapman: I'm not aware of an sqlite-based dev image.
[14:19:25] so, i'm kind of stuck. I can't debug the problem that is blocking the xml dump patch
[14:19:52] my next statement was going to be: if there isn't an sqlite-based dev image, then what I said is probably a rabbit hole
[14:20:14] how often do you need to test with sqlite?
[14:20:26] very rarely.
[14:20:34] but now it's a blocker
[14:21:25] I can either create a complete local setup, or I can wait for someone else to fix my problem
[14:21:37] is it worth putting something in SOS regarding an image or way to test it? See if anyone has a solution?
[14:22:05] perhaps a long shot, but also not that much effort to ask
[14:22:25] well, the solution is obvious: set up a mediawiki manually. this is still the standard procedure.
[14:22:26] install MW the normal way without docker?
[14:22:52] yea. that means also installing apache, and mysql, and memcached, and all that crap :)
[14:22:56] obvious, but sounding like it takes time
[14:22:57] I'm using docker to avoid that :Ü
[14:23:04] But yea, that's what I'll have to do.
[14:23:54] it takes time, but not terribly much. the main issue is remembering how to correctly set up apache for a local mediawiki.
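An aside on the proposed Zuul trigger from the CI discussion above: before touching integration-config, the comment regex can be sanity-checked locally against sample Gerrit comment text. A minimal PHP sketch, with invented sample comments (the pattern itself is the one pasted at 11:36:47):

```php
<?php
// Proposed "check <backend>" trigger from the discussion above.
// Delimiters added for PCRE; the (?im) flags travel with the pattern.
$pattern = '/(?im)^Patch Set \d+:\n\n\s*check (php5?|zend|sqlite|postgres)\.?\s*$/';

// Invented sample Gerrit comments, not real ones.
$samples = [
	"Patch Set 3:\n\ncheck postgres",  // should trigger the job
	"Patch Set 3:\n\nrecheck",         // handled by the other trigger, not this one
];

foreach ( $samples as $comment ) {
	$hit = preg_match( $pattern, $comment ) === 1;
	echo ( $hit ? 'match:    ' : 'no match: ' ) . json_encode( $comment ) . "\n";
}
```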
[14:24:04] i should be able to do this in an hour or two.
[14:24:19] still. having to do this just to test a single patch is very very annoying
[14:27:10] and i'll have to fiddle with xdebug again and all that...
[14:27:11] duesen: Sorry, I am just joining in and was trying to follow the thread here. Is the only issue sqlite missing locally for Docker-based testing, or are you talking about CI testing?
[14:27:11] *sigh*
[14:28:07] hknust: the issue is that the mediawiki-docker-dev image doesn't include sqlite, and has no option to set up a wiki using sqlite.
[14:28:16] i just filed an issue for that
[14:28:17] https://github.com/addshore/mediawiki-docker-dev/issues/87
[14:28:37] but then, mediawiki-docker-dev is not official. it's just a side project by addshore.
[14:28:43] i like it a lot though
[14:28:58] o/
[14:29:06] hey addshore ;)
[14:29:12] long time no chat duesen :D
[14:29:23] how are you? and WHERE are you?
[14:29:44] also, feel like adding support for sqlite to mediawiki-docker-dev?
[14:29:56] (also, maybe merge my pull requests?)
[14:30:04] I am good, and I am currently in the UK, will be in Berlin in 2 weeks
[14:30:12] which PRs? do they need a rebase? ;)
[14:30:33] just one, really: https://github.com/addshore/mediawiki-docker-dev/pull/75
[14:30:43] but that's not what I'm stuck on today.
[14:30:53] today, it's https://github.com/addshore/mediawiki-docker-dev/issues/87
[14:30:57] yup, that needs a rebase ;)
[14:31:09] i'll be in berlin at the same time as you.
[14:31:12] adding sqlite could be easy enough
[14:31:23] duesen: oooooooooooh, will you be in the office at all? or not?
[14:32:14] i'll be at the office, sure!
[14:32:30] hm, i have no idea how to do a rebase for a pull request on github :/
[14:33:27] github workflows confuse me
[14:40:26] i think i got it
[14:41:28] anomie: are you around already? i'm poking at the sqlite issue, but I don't really understand the problem. my attempt to blindly copy the solution from the ticket you pointed to failed.
[14:41:29] https://gerrit.wikimedia.org/r/c/mediawiki/core/+/494366
[14:41:39] * anomie looks
[14:43:35] anomie: thanks. from what I understand, error code 17 always comes from stale prepared statements. there is another code (516) for schema changes conflicting with a rollback.
[14:44:01] i don't see where we would be using prepared statements, though
[14:47:09] duesen: I think you're wrong about thinking that error code 17 is about prepared statements. The error is basically "the database on disk was changed by some other process", at which point it discards all changes in the local session. Which includes discarding prepared statements (which we don't have) and temporary tables.
[14:49:34] Unless maybe we have prepared statements in the sense that any statement might be done as "prepare, execute, discard".
[14:51:24] Code 516, if I'm reading https://www.sqlite.org/rescode.html#abort_rollback correctly, is about a write that's blocked on a transaction getting aborted because the transaction it was blocked on was rolled back.
[14:55:03] anomie: so, what do you think could be triggering code 17 here? even if we had "prepare, execute, discard", we should never hit that, right?...
[14:55:46] duesen: The basic issue is that sqlite doesn't have a server that manages different processes' accesses to the database, each process just opens the DB file directly. As far as I can tell, when one process detects that the schema was changed by a different process, it raises code 17.
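For illustration, here is a minimal standalone sketch of the failure mode described above: two handles open on the same SQLite file, and a schema change made through one invalidates statements the other has already prepared. The file path and table are invented, and whether SQLITE_SCHEMA (error 17) actually surfaces depends on the SQLite build and driver, since sqlite3_prepare_v2-based drivers usually re-prepare transparently:

```php
<?php
// Two independent connections to the same on-disk database, standing in
// for MediaWiki's main and "second" connection in the failing test.
$file = '/tmp/schema-demo.sqlite'; // invented path, demo only
@unlink( $file );

$a = new PDO( "sqlite:$file" );
$b = new PDO( "sqlite:$file" );
$a->setAttribute( PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION );
$b->setAttribute( PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION );

$a->exec( 'CREATE TABLE t ( x INTEGER )' );
$stmt = $a->prepare( 'SELECT x FROM t' ); // compiled against the old schema

// Schema change through the *other* connection, like the DROP/CREATE that
// the test harness does for unittest_searchindex.
$b->exec( 'DROP TABLE t' );
$b->exec( 'CREATE TABLE t ( x INTEGER, y INTEGER )' );

try {
	// On older drivers this is where SQLITE_SCHEMA (17) would surface;
	// drivers built on sqlite3_prepare_v2 typically re-prepare and succeed.
	$stmt->execute();
	echo "executed (driver re-prepared transparently)\n";
} catch ( PDOException $e ) {
	echo 'error: ' . $e->getMessage() . "\n";
}
```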
[14:57:10] I suppose that also means "different connection", not just "different process".
[14:57:26] maybe the trick is to simply not use the second connection with sqlite.
[14:57:35] Yeah, if the same process opens the DB twice it's the same deal.
[14:57:50] does sqlite even have a "streaming" mode? I'll check that.
[14:58:18] That'd be the best thing to do, assuming whatever you're trying to do works with that. Absent that, avoid doing schema changes on the second connection.
[14:59:59] i'm not doing any schema changes
[15:00:32] unless temp tables count
[15:00:55] oh, and there's the one non-temp table. but that already exists, so no schema change would be done
[15:01:12] though maybe CREATE IF NOT EXISTS triggers the flag even if the table does exist?...
[15:05:47] When I look at the DBQuery log after running your test, I see it dropping and re-creating unittest_searchindex.
[15:05:52] duesen: ^
[15:43:29] bpirkle: https://phabricator.wikimedia.org/T207977#4997897 was there a reason you added a link to the deprecation policy here? We are trying to figure out why we have it marked as blocked on the board
[16:03:11] @KateChapman that task came up in the WikiTeq meeting and adding the link seemed like a good thing to do at the time
[17:24:30] ~ 146 requests per second for a bot, is that much?
[17:30:46] Krinkle: Depends what it's requesting :P
[17:31:27] mediawiki raw logs in logstash show an api-feature-usage entry for a certain bot requesting a certain API endpoint, with 2.1 million entries in the last 4 hours
[18:06:37] Krinkle: ugh. Like Reedy said, "maybe" it's ok, but in the past o.ri found a couple of those kinds of tight-loop bots and got them to do something that seemed more sane. If you want to open an investigation ticket about it and throw it at me I can make time to look
[18:54:38] bd808 https://phabricator.wikimedia.org/T217697
[19:41:04] bpirkle: does the last PS of https://gerrit.wikimedia.org/r/#/c/mediawiki/core/+/454346/ look OK (minor changes)?
[19:41:52] AaronSchulz: looks good to me, thanks!
[19:42:00] probably rebase again and merge then ;P
[19:43:04] yep
[19:43:45] Awesome, been looking forward to getting this one done for a while.
[20:11:47] bpirkle: \o/
[20:12:30] anomie: i failed to find the code that handles the special case for the searchindex table for sqlite in unit tests...
[20:13:03] i'm trying to work around the issue now by skipping the streaming test for sqlite. it doesn't do streaming anyway.
[20:13:48] or rather, sqlite doesn't do buffering, but DatabaseSqlite implements that on top, regardless of the bufferResults setting
[20:15:56] duesen: CloneDatabase::cloneTableStructure() calls ->dropTable() and ->duplicateTableStructure(). Most of the time the ->dropTable() call will do nothing because the table doesn't exist, but not for unittest_searchindex. Then DatabaseSqlite::duplicateTableStructure() has the creation logic, see lines 1007-1014 in https://gerrit.wikimedia.org/r/c/mediawiki/core/+/494366/3/includes/libs/rdbms/database/DatabaseSqlite.php#1007
[21:22:53] anomie: oh, right, it's in DatabaseSqlite, not in MediaWikiTestCase. Thanks!
[21:23:06] my workaround isn't working, btw: https://gerrit.wikimedia.org/r/c/mediawiki/core/+/443608
[21:24:10] the failure mode is a bit less obscure, but not less confusing. I guess i'll have to create an sqlite-based local setup after all.
[23:24:30] anomie: mind if I backport https://gerrit.wikimedia.org/r/#/c/mediawiki/core/+/494616/ ?
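For reference, the skip-on-sqlite workaround mentioned at 20:13:03 would typically look something like the sketch below in a MediaWikiTestCase subclass. The class and method names here are invented; this is not the actual code in the linked change:

```php
<?php
// Rough sketch of the skip-on-sqlite pattern; names are invented and this
// is not the code in change 443608. Requires MediaWiki's test harness.
class TextPassDumperStreamingTest extends MediaWikiTestCase {
	public function testDumpWithStreamingResults() {
		if ( $this->db->getType() === 'sqlite' ) {
			// DatabaseSqlite emulates buffering on top of the driver
			// regardless of bufferResults, so there is no real
			// unbuffered/streaming mode to exercise here.
			$this->markTestSkipped( 'SQLite does not support streaming results' );
		}
		// ... the actual streaming assertions would go here ...
	}
}
```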
[23:24:39] (to wmf, that is)
[23:37:53] bpirkle: Is "connection timeout" still curl-only, or does Guzzle make it work for other transports as well?
[23:38:25] bpirkle: in the context of https://phabricator.wikimedia.org/T137926, I'm looking at mentions of "curl" in core/includes, and noticed a match for wgHTTPConnectTimeout
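On the connection-timeout question: Guzzle does expose a separate connect_timeout request option, but per Guzzle's own documentation it must be supported by the underlying handler, and in practice the built-in cURL handler is the one that honors it, so the curl-only caveat does not simply go away. A rough sketch of how $wgHTTPConnectTimeout / $wgHTTPTimeout might map onto Guzzle options (URL and values invented):

```php
<?php
require 'vendor/autoload.php';

use GuzzleHttp\Client;

$client = new Client();

// Invented URL and timeout values, for illustration only.
$response = $client->request( 'GET', 'https://example.org/api', [
	'connect_timeout' => 5,  // seconds to wait for the TCP connection
	                         // (honored by the cURL handler; other
	                         // handlers may ignore it)
	'timeout'         => 25, // overall request timeout in seconds
] );

echo $response->getStatusCode(), "\n";
```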