
Hi

I run a site which relies heavily on External Data Sources and I'm having a recurring problem where the sym_cache table gets very large and locks the server up when it performs a re-index.

As observed by my host:

The SQL server instance showed that the site was performing a very large re-index, which caused your table spaces to lock; this is normal during a cache refresh. When the table spaces lock, the database goes into exclusive access mode, which causes pending requests to wait, thus increasing server load.

As I understand it, the cache table is optimised every time there is a cache "miss" (or when a request for a cached object finds stale data), with the optimise() function being called via clean(). I assume it is the External Data Sources that are triggering this, unless something else within Symphony does?

I have taken the following steps to try and use the sym_cache table more efficiently:

  1. Where possible I am bypassing the DS cache altogether using public $dsParamCACHE = '0' - I assume this works
  2. I have installed CacheLite to cache whole pages instead where appropriate.
  3. Where I am using the cache I have the timeout set to one day with public $dsParamCACHE = '1440' - not sure if this is optimal given the scenario?
  4. I avoid using the cache where the DS URL varies each time it is requested (for example with search queries).
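
For reference, steps 1 and 3 above boil down to a single property on each data source class. A minimal sketch, assuming a typical Symphony 2.2-era generated data source (the class name and other properties here are illustrative, not taken from the thread):

```php
<?php
// Illustrative Symphony 2.2-style data source class.
// $dsParamCACHE is the cache lifetime in minutes:
// '0' bypasses the DS cache entirely (step 1),
// '1440' keeps cached results for one day (step 3).
class datasourcearticles extends DataSource
{
    public $dsParamROOTELEMENT = 'articles';
    public $dsParamCACHE = '1440';

    // ... grab() and the other methods Symphony generates ...
}
```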

Despite these steps the issue still persists. Does anyone have any advice on how to manage the problem? Ideally it would be great to keep the size of the cache table down to a minimum.

Also, is it possible to determine which DS is causing the table to optimise? Any help would be much appreciated.

Jean-Luc

Note - I am using Symphony 2.2.5.

To immediately solve your issue, I would comment out the optimise function.

OPTIMIZE TABLE should be used after deleting a large part of a table, or after making many changes to a table with variable-length rows (tables that have VARCHAR, VARBINARY, BLOB, or TEXT columns). Deleted rows are maintained in a linked list and subsequent INSERT operations reuse old row positions. You can use OPTIMIZE TABLE to reclaim the unused space and to defragment the data file. After extensive changes to a table, this statement may also improve performance of statements that use the table, sometimes significantly.

From the manual, this is the reason we run OPTIMISE: to try and keep things snappy. We could potentially fire the optimise() script only if the clean() function affected a significant number of rows, as the manual hints that this is when the best performance benefits are found.

However, in your situation it's unlikely to help if you are using a large number of Dynamic DS's, as it may be removing only a couple of rows every time there is a cache miss.

I'd comment out optimise() in your scenario and instead add the optimise statement to your weekly database maintenance/backup script so that it runs less often (and when you say so).
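
If you'd rather keep some automatic optimisation, the "significant number of rows" idea above could look roughly like this. This is a sketch only: the threshold and the ROW_COUNT() lookup are assumptions, not existing Symphony code, so check your Database wrapper's API before using it.

```php
public function clean(){
    $this->Database->query("DELETE FROM `sym_cache` WHERE UNIX_TIMESTAMP() > `expiry`");

    // Hypothetical: only defragment when the DELETE actually removed a
    // large batch of rows. MySQL's ROW_COUNT() reports the number of rows
    // affected by the previous statement on this connection.
    $removed = (int)$this->Database->fetchVar('removed', 0, "SELECT ROW_COUNT() AS `removed`");
    if($removed > 1000){
        $this->__optimise();
    }
}
```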

Thanks for the reply @brendo, so you would literally just comment it out like so:

public function clean(){
    $this->Database->query("DELETE FROM `sym_cache` WHERE UNIX_TIMESTAMP() > `expiry`");
    //$this->__optimise();
}

Running the OPTIMIZE TABLE on a cron should be straightforward.
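
For example, a weekly crontab entry along these lines (the credentials, database name, and schedule are placeholders; a ~/.my.cnf file is safer than putting the password on the command line):

```shell
# Hypothetical: run OPTIMIZE TABLE on sym_cache every Sunday at 03:00.
0 3 * * 0 mysql --user=dbuser --password='secret' dbname -e "OPTIMIZE TABLE \`sym_cache\`;"
```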

Cheers!

JL
