[07:03:11] Hello, I have a problem with api.php. I can't log in. The "GET ...api.php?action=login..." returns JSON with a login token. The subsequent "POST ...api.php?action=login" with the lgtoken returns the result "NeedToken". The wiki is newly created, so I think I've forgotten something. Answers in German or English, please. [07:15:32] Jens21: Parameters must be sent in the POST body, not as query string parameters. Be sure you're sending them there. Also be sure you're sending the session cookies you received in the previous request. [07:19:24] Thank you for your answer. The parameters are in the POST body, formatted as JSON. The cookies are also set in the header. The cookie starts with "UserName=...; mw_installer_session=..." Is that correct? [07:24:49] I think there's a missing _session cookie [07:34:12] -session=... is there as the last entry of "cookie:" in the header. [07:35:40] Could it be the user? Do I need a special user to log in via the API? [07:38:58] No, every user should be able to log in. If you get the NeedToken error, it seems something isn't being sent correctly [07:39:15] Ah, you said you sent the data formatted as JSON? That may be the problem [07:40:19] The content you send should be sent as normal parameters in the POST body, param=value&param2=value, etc. [07:46:22] I tried that in the POST body: lgname="..."&lgpassword="---"&lgtoken="..." No difference in the result. [07:52:15] I get an answer formatted as JSON. The last character is an escaped backslash. I tried both single and double backslash. Is the login token only valid for a short time period? [08:27:40] @Vulpix Maybe there is more than one problem. Thank you for the help with the format of the POST body. I'll try some other things tomorrow. Again: thanks. [08:28:33] ah, don't put double quotes around the parameter values [08:28:46] it works as if it were a query string [08:29:15] I tried that already. :-) Next thing is a voodoo doll. [08:29:26] heh [08:30:09] You can try with the ApiSandbox and inspect the traffic from your browser console [08:30:19] https://www.mediawiki.org/wiki/Special:ApiSandbox [08:30:36] (this special page should also be available on your installation) [08:31:54] Oh, he left... [08:54:41] Hello, good day. When accessing the MediaWiki 1.34.1 installer, I see the following error message in the header: "Warning: putenv() has been disabled for security reasons in /storage/ssd2/815/12958815/public_html/includes/Setup.php on line 148" [10:35:19] Hello, good day. When accessing the MediaWiki 1.34.1 installer, I see the following error message in the header: "Warning: putenv() has been disabled for security reasons in /storage/ssd2/815/12958815/public_html/includes/Setup.php on line 148". [10:37:30] a8kqmaima52nua: in the php.ini file, search for putenv and remove it if you find it inside "disable_functions=" [10:38:50] Gryllida: I am using a hosting service, and I don't have access to the php.ini file. [10:43:49] ask your host to fix it [10:46:27] Is there no one who has an answer to this question: Is there some trick to installing Wikibase so it doesn't screw up the search function? [10:47:01] Site search isn't working anymore ("No match found" for all keywords) [10:47:03] hi, Goooodman, please wait a couple of hours, do not quit [10:47:24] Quit what? [10:47:50] Okay, I understand. [10:48:35] Goooodman, please provide clear steps and environment information (version etc.) that allow someone else to reproduce. Or what "screw up" means. [10:49:10] Okay Andre.
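A minimal sketch of the login exchange discussed above (07:03 to 08:30), using curl. The wiki URL, username, and password are placeholders, not values from the chat; on current MediaWiki the recommended route is action=query&meta=tokens&type=login or action=clientlogin, but the shape of the exchange is the same. The two requests must share a cookie jar, and the second POST body is plain form-encoded parameters, not JSON and not quoted:

    # Step 1: ask for a login token; note the session cookie this sets.
    curl -s -c cookies.txt \
         -d 'action=login&lgname=BotUser&format=json' \
         'https://example.org/w/api.php'
    # -> {"login":{"result":"NeedToken","token":"..."}}

    # Step 2: repeat the POST with the same cookies plus the token,
    # form-encoded (--data-urlencode also handles tokens containing + or \).
    curl -s -b cookies.txt -c cookies.txt \
         --data-urlencode 'action=login' \
         --data-urlencode 'lgname=BotUser' \
         --data-urlencode 'lgpassword=BotPassword' \
         --data-urlencode 'lgtoken=TOKEN_FROM_STEP_1' \
         --data-urlencode 'format=json' \
         'https://example.org/w/api.php'
    # -> {"login":{"result":"Success", ...}} once everything is sent correctly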
[10:50:50] I was getting a Lua error of the form ''attempt to index wikibase, a nil value'', so I installed the Wikibase extension (branch REL1_33 from git) and that error was resolved, but search doesn't work anymore. [10:51:44] Once Wikibase is disabled in LocalSettings.php, search works and the Lua error returns [10:52:17] I have already rebuilt the search index; no luck [10:53:15] what does "search doesn't work anymore" mean? [10:53:21] Which MediaWiki version is this about? [10:53:48] Version 1.33 [10:53:59] Wikibase version 1.33 [10:55:09] Search doesn't work means: when a keyword is entered, the wheel spins and it returns "No match found" [10:55:45] What's shown in the network tab of the browser's developer tools? [10:55:48] However, if you are sure that the page exists and hit enter, it takes you there. [10:55:51] Any URL request that times out? [10:56:07] So, what is actually not working is SEARCH SUGGESTIONS [10:56:48] Nope, I'm talking about the actual search [10:57:30] Yes, and that does not answer my question. [10:57:46] My question was: What's shown in the network tab of the browser's developer tools? Any URL request that times out? [10:57:55] That has nothing to do with the URL that is open in your browser. [10:58:38] Oh! [10:59:08] I don't really know how to use the browser console in this case, please guide me. [11:01:02] . [11:01:53] press F12 and click "network", refresh the page and see if anything doesn't have an entry in the status column [11:03:07] Okay, let me quickly do it now. [11:06:15] See https://www.mediawiki.org/wiki/Help:Locating_broken_scripts ; basically, though, that is about "console" and not "network" [11:46:07] Hi Andre, it's Goooodman. Are you there? [11:46:27] !ask | Gooodman [11:46:27] Gooodman: Please feel free to ask your question: if anybody who knows the answer is around, they will surely reply. Don't ask for help or for attention before actually asking your question, that's just a waste of time – both yours and everybody else's. :) [11:46:28] I'm sorry, the power went off on my end. [11:47:02] Okay [11:51:43] Gooodman: See https://www.mediawiki.org/wiki/Help:Locating_broken_scripts ; basically, though, that is about "console" and not "network" [11:51:52] After installation of Wikibase version 1.33 on MediaWiki 1.33, keyword suggestion on site search is NOT working anymore (no error message, just 'No match found' on all keywords). [11:53:37] . [11:55:07] On the network tab of the developer console, it's all green (status 200) [13:50:55] I have QuestyCaptcha set up. My questions are displaying on my account registration form. Yet when I give an incorrect answer it still allows the account creation to go through. Any suggestions? [14:01:42] hello, I'm noticing a very strange problem using SphinxSearch on my 1.34 MW: when I add a new template to a page and update the Sphinx index, the page completely disappears from search results [14:01:59] if I remove the template and reindex, the page doesn't reappear [14:02:16] this is really bad, I'm running out of pages to test lol [14:21:40] ok it doesn't have anything to do with templates [14:21:57] just updating the page causes it to disappear from search [14:44:28] hm, never used SphinxSearch [14:45:04] lavamind: can you try !debug and see what the logs say? [14:45:10] !debug | lavamind [14:45:10] lavamind: For information on debugging (including viewing errors), see https://mediawiki.org/wiki/Manual:How_to_debug .
A list of related configuration variables is at https://mediawiki.org/wiki/Manual:Configuration_settings#Debug.2Flogging . Also see https://mediawiki.org/wiki/Manual:Errors_and_symptoms [14:46:08] saper: I think it could be an issue with Sphinx itself; the search works for pages that are in the index [14:53:05] I don't know why an incremental reindex is needed from cron as suggested on the extension page [15:05:59] saper: I think I'm figuring it out [15:06:29] I notice the edits made since updating this wiki look different in the database [15:06:51] new edits are missing the rev_text_id entry in the revision table [15:07:11] which I totally don't understand since the history page works fine for both old and new pages [15:08:05] ohhhh "(deprecated in 1.31)" [15:08:08] https://www.mediawiki.org/wiki/Manual:Revision_table#rev_text_id [15:08:41] wow, so the whole database structure changed [15:14:04] wow, so the referencing system for revisions is completely new [15:30:55] My MW 1.34.0 site has just been destroyed again. This time it seems they've dicked with PHP. [15:32:31] CentOS 7.4, Apache 2.4.34-8.el7.1, [15:34:09] rh-php72-php 7.2.24-1.el7 [15:34:49] I had Apache set up for complete security and an A+ rating on Qualys. Anyone have any idea how they've done it? [15:36:05] https://unofficial-tesla-tech.com [15:40:12] Hey all, following up on a really frustrating problem that I've asked about here before, as well as on the SMW IRC and the mailing list, to no avail, regarding deadlocks causing the SMW rebuildData.php script to error out occasionally. Really I'd like to initially just determine if it's a data problem, a MediaWiki configuration problem, or a MySQL [15:40:13] (AWS Aurora) problem, so I can dig in from there. Can anyone tell from this error backtrace and MySQL "show engine innodb status" deadlock info whether this is an MW, SMW, or MySQL issue? https://gist.github.com/justinclloyd/91d54bbe88fa738ead1d26fdabed9099 [15:40:37] quantum: Qualys just checks that your SSL certificate is set up correctly; it's 100% meaningless from a site security perspective [15:40:53] granted, Qualys *does* also provide an actual security scanning service, but it is extremely expensive [15:41:38] are you sure your site was compromised? If so, how did you first discover it? [15:41:56] Skizzerz: Its tests aren't the only ones I've passed. Just a single service. I've done many other things to make Apache secure. [15:41:59] tracing back to where a compromise came from is often difficult if the attacker is good at covering their traces [15:42:08] lavamind: I was going to ask you which version you are on... [15:42:26] Have a look at it. Blank white. [15:42:33] that doesn't mean your site was compromised [15:42:36] !blank | quantum [15:42:36] quantum: A blank page or HTTP 500 error usually indicates a fatal PHP error. For information on debugging (including viewing errors), see . [15:42:37] it just means you have a PHP error [15:43:56] Not even any source code. Yes, I tried that. When I put PHP diag code in LocalSettings, the page is still blank. So PHP is messed up. Oh sure, I can find that problem and fix it, but HOW did they do this is the question? [15:45:02] My site was compromised. I had info up that would be a problem for several parties. [15:46:01] you haven't demonstrated any actual evidence of a compromise [15:46:09] I wouldn't be so fast to jump to conclusions [15:46:27] check your php.ini and enable error logging, then check the error logs [15:46:36] I do enterprise infosec for my day job.
I know from a compromise. [15:46:54] quantum: https://www.mediawiki.org/wiki/Manual:Errors_and_symptoms#You_see_a_Blank_Page - a blank page does not imply that anything was compromised. [15:47:12] then you'll know that you should immediately take that server offline, right? [15:47:54] Like I say, I can fix the problem. I guess I now have my answer about any known compromises. [15:48:12] chances are extremely high nothing was compromised and it's just a syntax error or something in a file [15:48:23] They didn't get into the system further than the apache user. [15:48:27] ... or a PHP upgrade [15:48:37] quantum: what does the error log say? [15:48:38] Ok, there was supposedly a fix for the rebuildData.php issue almost two years ago that doesn't seem to be working, at least in my case. https://github.com/SemanticMediaWiki/SemanticMediaWiki/pull/3487 [15:48:38] I hadn't touched the site. [15:49:04] or what, outside of experiencing a blank page, leads you to believe that a compromise occurred? [15:49:18] Alright... [15:49:48] You're right Skizzerz, I'm imagining all this. [15:49:50] did you see site data floating around in hacker forums? do you have reports of spam originating from your server? etc. [15:49:55] yes, I strongly believe that you are [15:50:14] heh [15:50:41] I also strongly believe that you're very inexperienced or completely awful at your infosec day job, but that's another matter [15:55:17] lavamind: congratulations on fighting gerrit \o/ [15:58:00] justinl: definitely an SMW issue, but beyond that I can't help much (not experienced with SMW) [15:59:45] Yeah, narrowing it down right now is the best approach, and based on what I found in the rebuildData.php pull request I linked above, and to which I've replied (if anyone sees it), hopefully that will be a start down the path to actually fixing it. So frustrating, especially now that I have the automated job running to clean up outdated properties [15:59:45] that breaks frequently due to these deadlocks. [16:00:54] saper: mouahaha [16:00:57] saper: https://www.mediawiki.org/wiki/Topic:Vkmeh7v01ai6p3ka [16:01:07] fixed it, 'twas a Sphinx config issue [16:01:43] editing pages caused them to switch to the new slot/content revision schema, thus dropping from the index [16:01:55] lavamind: maybe this should be filed on Phabricator to get noticed [16:01:59] had to adjust the sql_query for Sphinx [16:02:39] saper: ugh, do I need to create a new account there too? :p [16:03:02] no, you can log in with the mediawiki.org one OR the gerrit one [16:03:11] isn't that fantastic [16:03:27] haha it's amazing [16:03:32] https://phabricator.wikimedia.org/project/view/467/ not much in there yet [16:04:10] I think the bus factor for that particular extension is a solid 1 :/ [16:04:32] fear not, we have some where it is 0 [16:04:47] haha well yeah obviously [16:05:15] but honestly, I wouldn't be using that extension if the default search was in any way, shape or form workable [16:05:47] it's pretty much been the loudest complaint we've had from the start [16:09:05] Dumb question: what are the main issues with default search, other than the TitleKey case issue I ran into recently regarding searching the Template namespace? Is there a good reason to switch to something else? [16:10:04] justinl: the text search is lacking; any slight variation of the word, such as a plural form, will hide the result [16:10:29] Ok, so basically it's really strict.
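For the blank-page thread above (15:30 to 15:49): a blank page usually means a fatal PHP error that is logged rather than displayed, so the quickest checks are on the shell. A rough sketch, assuming a CentOS/Apache layout like the one quantum describes; the log and file paths are guesses to adjust, not values from the chat:

    # What is PHP actually configured to do with errors?
    php -i | grep -E 'display_errors|error_log|disable_functions'

    # Read the most recent fatal errors (path varies per distro/vhost).
    tail -n 50 /var/log/httpd/error_log

    # A stray syntax error after an edit or a PHP upgrade is the usual culprit.
    php -l /var/www/html/LocalSettings.php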
[16:12:45] lavamind: I'd rather be using PostgreSQL and some text index on the tables [16:13:03] I do want to look into Elasticsearch one of these days, mainly for varnish/apache logs, but it seems like CirrusSearch is fairly popular so ES may help in multiple ways. [16:16:34] anyway, SphinxSearch is an adequate solution... when it works [16:16:42] saper: as requested: https://phabricator.wikimedia.org/T250403 [16:17:50] I always wonder how Lucene made such a big career under the Elastic brand ... :) [16:24:53] So I gather this https://www.mediawiki.org/wiki/Extension:MobileFrontend is what makes the WMF wikis' mobile editions. My wikis could really use this [16:26:22] Do I gather correctly that after installing Extension:MobileFrontend the mobile-friendly site will have the same URL as the web version? (unlike with WMF wikis, where there is en.m.wikipedia.org) [16:36:06] jukebohi: Yes, you're right [16:36:28] Ok. Thanks Vulpix. I'm gonna back up, snapshot, and install it on the wikis [16:37:02] This may cause issues if you have some frontend caching layer like Varnish or Squid. Otherwise it should be fine [16:51:06] @jukebohi We just implemented it in our wikis and there's definitely a bit of work that may need to be done. Read the docs for the extensions, make sure you install the MinervaNeue skin as well, and we did need to update our Varnish config to handle X-WAP headers as per those docs. [16:51:45] justinl: okay.. I'm not using Varnish, just tiny wikis with low traffic [16:51:51] We're also seeing an occasional bug with it where sometimes users, after logging out and being anonymous again, get served the mobile skin despite not having that set as their skin. [16:51:55] ok [16:52:44] saper: should one amend a commit or push a new commit for a gerrit change request? [17:02:50] always amend for gerrit [17:03:51] pushing a second commit will create a second distinct change on gerrit (which would be dependent on the first one being merged before the new one can be) [17:04:06] lavamind: ^ [17:09:26] lavamind: amend and force push [17:21:49] saper: ack! [17:25:58] as Skizzerz said [17:26:57] saper: obviously I don't have the permissions to force push [17:27:08] I submitted the patch using git review -R [17:27:29] do I repeat that command once the commit is amended? [17:28:10] not obvious what the right command is from looking at git help review [17:28:12] "git pusa -f gerrit HEAD:refs/for/master" should do [17:28:20] s,pusa,push, [17:28:55] I remember getting voting rights for the OpenStack board because I had contributed something to git-review :/ [17:29:51] saper: thanks, it worked [17:29:52] (last time I used it, the remote was called "gerrit") [17:30:12] yep, got mail, great. hope CI agrees. [17:30:22] I usually just re-run git review after amending [17:30:34] same difference probably [17:31:00] not sure why CI is failing though: 15:53:42 rsync: failed to set times on "/cache/.": Operation not permitted (1) [17:31:05] after working with gerrit I started liking this patchset model...
unlike pull requests where people stack commits endlessly [17:31:47] saper: GH and GL allow one to collapse commits when accepting a PR [17:32:14] not sure about GH, but I know it exists on GL [17:34:41] lavamind: rsync is fine, it failed because of the whitespace I commented on; search for Generic.WhiteSpace.DisallowSpaceIndent.SpacesUsed [17:34:44] in the full log [17:37:54] alright [17:43:28] I'm running runJobs.php on one of our wikis that got a whole bunch of jobs queued (over 150k at its highest) recently due to some template updates, and I just saw a slew of warnings `PHP Warning: count(): Parameter must be an array or an object that implements Countable in /var/www/sites/gw2w-fr/languages/Language.php on line 620` [17:43:28] (https://github.com/wikimedia/mediawiki/blob/REL1_34/languages/Language.php#L620) [17:44:04] Is this a wiki/data issue to be investigated or is it okay to ignore? [17:59:17] justinl: looks like there might be a corrupt localisation file. [17:59:26] namespaceGenderAliases is supposed to be an array [17:59:29] if it isn't, that's a bug [18:03:38] Are you referring to the `$IP/cache/*.cdb` files? [18:58:02] justinl: no, the Messages*.php files [18:58:17] Volker_E: changes are live - https://en.wikipedia.org/sdfsd [18:59:14] Krinkle: oh nice, thanks! [19:01:57] Krinkle, looks like all of those files on my live servers match those that were originally downloaded from the MW github repo, so they're probably not corrupt. [19:02:39] Not a huge deal right now; if I see this cropping up more, I'll put together what info I can and follow up on it. Thanks for the direction! :) [19:02:42] justinl: would be nice to figure out which language the bug is in. [19:02:57] maybe disable extensions and try with a different site/user language if possible [19:03:06] It's our French language wiki, so probably that one if any. [19:03:10] or dump the value and language code with var_dump or something and see what it is. [19:03:15] thanks :) [19:03:40] This is a live wiki used by a lot of people at all hours, so I'd have to try to repro the issue on my dev wikis and test there. [19:03:53] I can try the dumping though [19:49:22] justinl: I would hope it's not influenced by any content or user accounts, so a dev wiki with similar settings and extensions should get you there [19:56:18] It's possible it could be content-related, given how much our editors have had to do recently to fix a number of templates that had issues, along with various fixes and updates needed due to our major version upgrade a few weeks ago. [19:57:16] Our dev wiki database dates back to the day of the upgrade, March 25, while a lot of content, templates, etc. have changed since then, so I also need to backport my databases from live to dev to get them synced up again. [19:57:44] I am, however, running the runJobs.php script against our dev French wiki to see if I can repro the error. [20:02:24] Hi guys, sorry for the stupid question, but where can I see all namespaces available on the wiki? [20:04:38] jimbeambam: Special:AllPages (the dropdown), or the API [20:07:45] Vulpix thank you. And how does this work over the API? Do you need to code some kind of script?
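(The query behind the ApiSandbox link that follows, as a plain request; a sketch assuming a wiki at https://example.org/w/api.php, no scripting needed:)

    # List all namespaces and their aliases as JSON.
    curl -s 'https://example.org/w/api.php?action=query&meta=siteinfo&siprop=namespaces%7Cnamespacealiases&format=json'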
[20:08:12] you can use the API help and the API sandbox [20:08:35] https://www.mediawiki.org/wiki/Special:ApiSandbox#action=query&format=json&meta=siteinfo&siprop=namespaces%7Cnamespacealiases [20:09:20] (useful in general for trying out queries) [20:09:38] thank you [21:00:16] I have a question about setting up a job runner service similar to the one described on https://www.mediawiki.org/wiki/Manual:Job_queue. Since I have 5 wikis that would need such a service, my current thought would be to stand up a beefy-enough dedicated web server built the same as my current wiki web servers (though just PHP and Apache, no [21:00:16] Varnish) with 5 instances of the shell script running via systemd, each one running a different wiki's runJobs.php script, since each script needs its wiki's LocalSettings.php. Is there anything critical in that general design that I'm overlooking? [21:01:08] well, you don't really need that to be a webserver [21:01:33] also, I'm not sure how you have your wikis set up [21:01:56] True, I guess it just needs PHP, but it also needs the MediaWiki code and the wikis' LocalSettings.php files. [21:02:15] yes [21:02:23] I mean [21:02:30] you could have several copies of the mediawiki code [21:02:36] or you could share most of that [21:02:43] and only have a few conditionals in LocalSettings.php [21:02:55] runJobs.php has a --wiki parameter that you can use [21:02:58] My setup is a bit complicated, but basically I have 4 web servers behind a load balancer, all running Varnish, Apache, and PHP, with a /var/www/sites directory that has a subdirectory for each wiki, e.g. /var/www/sites/gw2w-en, that is a complete copy of the MW code. [21:03:35] so all web servers may serve any wiki [21:04:15] I have it that way, all web servers serve all wikis; I have 7 Apache vhosts, one per wiki, on each server. [21:04:27] it was not clear to me [21:04:41] yes, that would work [21:04:42] Yeah, there's a lot to tell. It's a kinda complex setup due to our needs. [21:04:52] you could as well run the jobs for each wiki serially [21:05:24] Not always. Sometimes we get tens of thousands of jobs created for a wiki that can take hours to days to complete. [21:05:32] e.g. while true; do for wiki in /var/www/sites/*; do (cd $wiki; php maintenance/runJobs.php ); done; sleep 1; done [21:05:36] I have the same code tree for each wiki, but LocalSettings includes a different file based on the hostname. runJobs.php has a --server http://example.org/ option for this purpose [21:06:43] (my LocalSettings is basically include Config/*php and Config/hostname/*php) [21:06:50] I've thought about that, Shaun, and would like to do it, but it would take a major redesign of my LocalSettings.php and Apache vhost configs, all of which are currently managed by Salt and which I recently did a major revamp of to improve how Salt handles multiple mediawiki sites and apache vhosts. [21:07:19] it shouldn't be that different [21:07:41] but it's up to you [21:08:15] justinl: you could use --maxjobs or --maxtime to ensure a single wiki doesn't block the jobs of the others [21:08:25] Plato's loop would have the same end result, but most of the maintenance scripts accept --help for useful tips [21:09:19] another option would be to spin up new machines for running jobs, then destroy them [21:09:35] Yeah, I've been looking at the args again, and the --wiki arg did get me thinking again about moving to the "wiki family" model of a single main LocalSettings.php that loads the correct config based on the hostname requested.
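Putting the suggestions from 21:05 to 21:08 together, a rough sketch of the serial loop with per-wiki limits; the paths, numbers, and the hostname mapping are illustrative assumptions, not justinl's actual layout:

    #!/bin/bash
    # Run jobs for each wiki in turn; --maxjobs/--maxtime keep one busy wiki
    # from starving the others (runJobs.php --help lists the full options).
    while true; do
        for wiki in /var/www/sites/*; do
            (
                cd "$wiki" || exit
                php maintenance/runJobs.php \
                    --maxjobs 1000 \
                    --maxtime 300 \
                    --server "https://$(basename "$wiki").example.org"
            )
        done
        sleep 1
    done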
[21:10:12] I'm not yet versed in automating EC2 instance creation; I still have a good amount of work to do to get there. [21:10:32] it depends on your model [21:10:51] another option would be to launch jobs on the servers when they are idle [21:11:17] Hey guys, another question. If you have, for example, a template with some {{DUMMY}}, is it possible to check if it was edited before publishing the page? To check that it's not still {{DUMMY}}? [21:11:23] Yeah, it's a model I've built up over the past almost 8 years of building and maintaining these wikis, so it's really ingrained and I have to be careful when making any large architectural or config changes to consider nuances of our environment. [21:12:02] sure [21:12:02] Currently we set $wgJobRunRate to 0.1 on our two largest wikis and 0.5 on the three smaller ones (it doesn't really get used on 2 tiny wikis) [21:12:37] as you have shell access, I would recommend simply running a job script [21:12:43] But when you get 150k jobs in the queue, that can still take days to clear when relying on the job run rate setting. :P [21:13:02] Plus SMW complicates matters for us even more. [21:13:11] !hss [21:13:11] https://upload.wikimedia.org/wikipedia/mediawiki/6/69/Hesaidsemanticga2.jpg [21:13:48] Yeah... but we're heavily reliant on it. Almost 10 million properties set in our largest wiki. [21:13:50] justinl: iirc legoktm made a systemd unit file for running the job queue on a regular basis, you could possibly adapt that [21:14:33] it starts up, runs jobs until it has run X of them, then shuts down to recycle the process and free up memory to mitigate any memory leaks [21:15:17] bonus is that jobs are run pretty much instantaneously as they come in, so long as you're not producing faster than it can process [21:15:18] I was looking at the one on the job queue MW docs page. I have something similar I use to run multiple instances of carbon-cache on my Graphite cache servers; I'd just adapt that to run multiple mwjobrunner.sh scripts or whatever, with the @wikiname being passed as the arg to the script to tell it which directory to use to run the runJobs script [21:15:44] that systemd template was mine :) [21:16:03] ah [21:16:07] We do get a lot of jobs sometimes. I'm still clearing out last night's 150k, down to about 60k right now. [21:16:07] sorry for the miscredit then :) [21:16:34] I copied Nikerabbit's systemd template lol [21:17:36] I just knew someone made a template, just kinda assumed legoktm since iirc you maintain the apt repo for debian (or used to), and that would be a part of it? [21:17:39] Separately, I have another script that I now run nightly, which may not be enough, to run the SMW rebuildData.php to delete outdated properties, so I'd probably leverage a job runner system to handle that work as well. [21:17:40] *shrug* [21:18:20] https://salsa.debian.org/mediawiki-team/mediawiki/-/blob/master/debian/mediawiki.mediawiki-jobrunner.service [21:18:29] That's the one I use [21:19:02] anyway, keep the maximum number of jobs you run for any given runJobs.php execution reasonably low.
The more jobs run by a single process, the slower each subsequent job will eventually get, because MediaWiki likes caching lots of things in memory, and so running tens or hundreds of thousands of jobs will quickly exhaust your RAM (and make PHP slower because it can't use CPU caches as efficiently) [21:19:07] Ah, I was talking about https://www.mediawiki.org/wiki/Manual:Job_queue#Simple_service_to_run_jobs [21:22:21] I already have a basic script that I'm fleshing out that just takes a $wiki "name" and runs two runJobs.php commands like those on the job queue page you linked. [21:23:06] That's the one I was intending to invoke from the systemd service file, templatized using %i to pass the wiki "name" into the script, since that is used to find the right wiki directory under /var/www/sites. [21:23:40] It also sets --maxjobs and --procs based on various criteria. [21:30:01] So I will look into merging my wikis into a single LocalSettings.php again (I'm doing a major overhaul/redesign of my wiki architecture this year, probably splitting off Varnish onto its own servers, among many other changes). I'm also even considering moving from Apache to Nginx, but only if I find any real compelling reasons to do so, [21:30:02] performance- and/or management-wise. I figured this made it a good time to switch to the job runner idea. I'd been considering moving PHP to a dedicated set of servers as well, but I'm not sure now if that's the best idea. [21:41:29] Appreciate the discussion, everyone! Logging off from work for the day, have a great evening! [21:41:44] justinl: Just let us know where to send the invoice ;) [21:42:20] Seriously, I couldn't probably use some Professional Services. I'm sure you'd probably find something in my setup or config to make you choke. [21:42:21] ;) [21:42:27] s/couldn't/could/
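A sketch of the per-wiki wrapper justinl describes at 21:22 to 21:23, meant to be started from a systemd template unit (for example mw-jobrunner@gw2w-en.service passing %i as the argument, with Restart=always so the process is recycled between passes). The job types and limits are illustrative placeholders, not tuned values:

    #!/bin/bash
    # Usage: mwjobrunner.sh <wikiname>, e.g. mwjobrunner.sh gw2w-en
    WIKI="$1"
    RUNJOBS="/var/www/sites/$WIKI/maintenance/runJobs.php"

    # Two bounded passes in the spirit of Manual:Job_queue: one for the
    # expensive link-refresh jobs, one for everything else. Small batches
    # keep each PHP process short-lived so its in-memory caches stay small.
    php "$RUNJOBS" --type refreshLinks --maxjobs 500
    php "$RUNJOBS" --maxjobs 2000 --procs 2

    # Pause briefly so an empty queue doesn't turn the restart into a busy loop.
    sleep 10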