[Solved] Problems when a large sym_cache table is optimised - looking for advice.
This is an open discussion with 2 replies, filed under Troubleshooting.
To immediately solve your issue, I would comment out the optimise() function.
After deleting a large part of a table, or making many changes to a table with variable-length rows (tables that have VARCHAR, VARBINARY, BLOB, or TEXT columns). Deleted rows are maintained in a linked list and subsequent INSERT operations reuse old row positions. You can use OPTIMIZE TABLE to reclaim the unused space and to defragment the data file. After extensive changes to a table, this statement may also improve performance of statements that use the table, sometimes significantly.
From the manual, this is the reason we run OPTIMISE: to try and keep things snappy. We could potentially fire the optimise() script only if the clean() function affected a significant number of rows, as the manual hints that this is when the best performance benefits are found.
However, in your situation it's unlikely to help if you are using a large number of dynamic DS's, as it may be removing only a couple of rows every time there is a cache miss.
I'd comment out optimise() in your scenario and instead add the OPTIMIZE statement to your weekly database maintenance/backup script, so that it runs less often (and when you say so).
Thanks for the reply @brendo, so you would literally just comment it out like so:
public function clean() {
    $this->Database->query("DELETE FROM `sym_cache` WHERE UNIX_TIMESTAMP() > `expiry`");
    //$this->__optimise();
}
Running OPTIMIZE TABLE on a cron should be straightforward.
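For the cron approach above, a minimal sketch of a weekly crontab entry, assuming the mysql command-line client is available, credentials live in a MySQL option file, and the database is named "symphony" (the database name and option-file path are placeholders; adjust for your setup):

```shell
# Hypothetical crontab entry: optimise sym_cache once a week (Sunday, 04:00)
# instead of on every cache miss.
# m  h  dom mon dow  command
0 4 * * 0  mysql --defaults-extra-file="$HOME/.my.cnf" symphony -e 'OPTIMIZE TABLE `sym_cache`;'
```

Keeping the credentials in an option file avoids putting the password in the crontab itself.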
Cheers!
JL
Hi
I run a site which relies heavily on External Data Sources and I'm having a recurring problem where the
sym_cache
table gets very large and locks the server up when it performs a re-index. As observed by my host:
As I understand it, the cache table is optimised every time there is a cache "miss" (or when a request for a cached object finds stale data), with the optimise() function being called via clean(). I assume that it is the External DS's which are triggering this, unless anything else within Symphony does?

I have taken the following steps to try and use the sym_cache table more efficiently:

- public $dsParamCACHE = '0' - I assume this works
- public $dsParamCACHE = '1440' - not sure if this is optimal given the scenario?

Despite these steps the issue still persists. Does anyone have any advice on how to manage the problem? Ideally it would be great to keep the size of the cache table down to a minimum.
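One way to gauge how bad the build-up is would be to count how many cached rows have already expired versus the total. A hedged sketch, assuming the mysql command-line client, credentials in an option file, a database named "symphony" (both placeholders), and the `expiry` column used by clean():

```shell
# Hypothetical check: total vs. already-expired rows in sym_cache.
mysql --defaults-extra-file="$HOME/.my.cnf" symphony -e '
  SELECT COUNT(*)                         AS total_rows,
         SUM(UNIX_TIMESTAMP() > `expiry`) AS expired_rows
  FROM `sym_cache`;'
```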
Also, is it possible to determine which DS is causing the table to optimise? Any help would be much appreciated.

Jean-Luc
Note - I am using Symphony 2.2.5.