* Menu:

* innodb-buffer-pool-optimization::    InnoDB Buffer Pool Optimization
* myisam-key-cache::                   The MyISAM Key Cache
* query-cache::                        The MySQL Query Cache
* statement-caching::                  Caching of Prepared Statements and Stored Programs
MySQL uses several strategies that cache information in memory buffers to increase performance.
File: manual.info.tmp, Node: innodb-buffer-pool-optimization, Next: myisam-key-cache, Prev: buffering-caching, Up: buffering-caching
*note 'InnoDB': innodb-storage-engine. maintains a storage area called the buffer pool for caching data and indexes in memory. Knowing how the 'InnoDB' buffer pool works, and taking advantage of it to keep frequently accessed data in memory, is an important aspect of MySQL tuning.
For an explanation of the inner workings of the 'InnoDB' buffer pool, an overview of its LRU replacement algorithm, and general configuration information, see *note innodb-buffer-pool::.
For additional 'InnoDB' buffer pool configuration and tuning information, see these sections:
*note innodb-performance-read_ahead::
*note innodb-buffer-pool-flushing::
*note innodb-performance-midpoint_insertion::
*note innodb-multiple-buffer-pools::
*note innodb-preload-buffer-pool::
*note innodb-buffer-pool-resize::
File: manual.info.tmp, Node: myisam-key-cache, Next: query-cache, Prev: innodb-buffer-pool-optimization, Up: buffering-caching
* Menu:

* shared-key-cache::             Shared Key Cache Access
* multiple-key-caches::          Multiple Key Caches
* midpoint-insertion::           Midpoint Insertion Strategy
* index-preloading::             Index Preloading
* key-cache-block-size::         Key Cache Block Size
* key-cache-restructuring::      Restructuring a Key Cache
To minimize disk I/O, the 'MyISAM' storage engine exploits a strategy that is used by many database management systems. It employs a cache mechanism to keep the most frequently accessed table blocks in memory:
For index blocks, a special structure called the key cache (or key buffer) is maintained. The structure contains a number of block buffers where the most-used index blocks are placed.
For data blocks, MySQL uses no special cache. Instead it relies on the native operating system file system cache.
This section first describes the basic operation of the 'MyISAM' key cache. Then it discusses features that improve key cache performance and that enable you to better control cache operation:
Multiple sessions can access the cache concurrently.
You can set up multiple key caches and assign table indexes to specific caches.
To control the size of the key cache, use the 'key_buffer_size' system variable. If this variable is set equal to zero, no key cache is used. The key cache also is not used if the 'key_buffer_size' value is too small to allocate the minimal number of block buffers (8).
When the key cache is not operational, index files are accessed using only the native file system buffering provided by the operating system. (In other words, table index blocks are accessed using the same strategy as that employed for table data blocks.)
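For example, a 64MB key cache can be configured at runtime and then verified as follows (the size is an illustrative assumption; choose a value suited to your indexes and available memory):

mysql> SET GLOBAL key_buffer_size = 64*1024*1024;  -- illustrative size
mysql> SHOW VARIABLES LIKE 'key_buffer_size';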
An index block is a contiguous unit of access to the 'MyISAM' index files. Usually the size of an index block is equal to the size of nodes of the index B-tree. (Indexes are represented on disk using a B-tree data structure. Nodes at the bottom of the tree are leaf nodes. Nodes above the leaf nodes are nonleaf nodes.)
All block buffers in a key cache structure are the same size. This size can be equal to, greater than, or less than the size of a table index block. Usually one of these two values is a multiple of the other.
When data from any table index block must be accessed, the server first checks whether it is available in some block buffer of the key cache. If it is, the server accesses data in the key cache rather than on disk. That is, it reads from the cache or writes into it rather than reading from or writing to disk. Otherwise, the server chooses a cache block buffer containing a different table index block (or blocks) and replaces the data there with a copy of the required table index block. As soon as the new index block is in the cache, the index data can be accessed.
If it happens that a block selected for replacement has been modified, the block is considered 'dirty.' In this case, prior to being replaced, its contents are flushed to the table index from which it came.
Usually the server follows an LRU (Least Recently Used) strategy: When choosing a block for replacement, it selects the least recently used index block. To make this choice easier, the key cache module maintains all used blocks in a special list (LRU chain) ordered by time of use. When a block is accessed, it is the most recently used and is placed at the end of the list. When blocks need to be replaced, blocks at the beginning of the list are the least recently used and become the first candidates for eviction.
The 'InnoDB' storage engine also uses an LRU algorithm to manage its buffer pool. See *note innodb-buffer-pool::.
File: manual.info.tmp, Node: shared-key-cache, Next: multiple-key-caches, Prev: myisam-key-cache, Up: myisam-key-cache
8.10.2.1 Shared Key Cache Access
................................
Threads can access key cache buffers simultaneously, subject to the following conditions:
A buffer that is not being updated can be accessed by multiple sessions.
A buffer that is being updated causes sessions that need to use it to wait until the update is complete.
Multiple sessions can initiate requests that result in cache block replacements, as long as they do not interfere with each other (that is, as long as they need different index blocks, and thus cause different cache blocks to be replaced).
Shared access to the key cache enables the server to improve throughput significantly.
File: manual.info.tmp, Node: multiple-key-caches, Next: midpoint-insertion, Prev: shared-key-cache, Up: myisam-key-cache
8.10.2.2 Multiple Key Caches
............................
Shared access to the key cache improves performance but does not eliminate contention among sessions entirely. They still compete for control structures that manage access to the key cache buffers. To reduce key cache access contention further, MySQL also provides multiple key caches. This feature enables you to assign different table indexes to different key caches.
Where there are multiple key caches, the server must know which cache to use when processing queries for a given 'MyISAM' table. By default, all 'MyISAM' table indexes are cached in the default key cache. To assign table indexes to a specific key cache, use the *note 'CACHE INDEX': cache-index. statement (see *note cache-index::). For example, the following statement assigns indexes from the tables 't1', 't2', and 't3' to the key cache named 'hot_cache':
mysql> CACHE INDEX t1, t2, t3 IN hot_cache;
+---------+--------------------+----------+----------+
| Table   | Op                 | Msg_type | Msg_text |
+---------+--------------------+----------+----------+
| test.t1 | assign_to_keycache | status   | OK       |
| test.t2 | assign_to_keycache | status   | OK       |
| test.t3 | assign_to_keycache | status   | OK       |
+---------+--------------------+----------+----------+
The key cache referred to in a *note 'CACHE INDEX': cache-index. statement can be created by setting its size with a *note 'SET GLOBAL': set-variable. parameter setting statement or by using server startup options. For example:
mysql> SET GLOBAL keycache1.key_buffer_size=128*1024;
To destroy a key cache, set its size to zero:
mysql> SET GLOBAL keycache1.key_buffer_size=0;
You cannot destroy the default key cache. Any attempt to do this is ignored:
mysql> SET GLOBAL key_buffer_size = 0;
mysql> SHOW VARIABLES LIKE 'key_buffer_size';
+-----------------+---------+
| Variable_name   | Value   |
+-----------------+---------+
| key_buffer_size | 8384512 |
+-----------------+---------+
Key cache variables are structured system variables that have a name and components. For 'keycache1.key_buffer_size', 'keycache1' is the cache variable name and 'key_buffer_size' is the cache component. See *note structured-system-variables::, for a description of the syntax used for referring to structured key cache system variables.
By default, table indexes are assigned to the main (default) key cache created at the server startup. When a key cache is destroyed, all indexes assigned to it are reassigned to the default key cache.
For a busy server, you can use a strategy that involves three key caches (a sample configuration sketch follows this list):
A 'hot' key cache that takes up 20% of the space allocated for all key caches. Use this for tables that are heavily used for searches but that are not updated.
A 'cold' key cache that takes up 20% of the space allocated for all key caches. Use this cache for medium-sized, intensively modified tables, such as temporary tables.
A 'warm' key cache that takes up 60% of the key cache space. Employ this as the default key cache, to be used by default for all other tables.
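As a sketch of the strategy above, an option file might divide the key cache space as follows (the cache names and the 1GB total are illustrative assumptions, not recommendations):

[mysqld]
# warm (default) cache: 60% of the total key cache space
key_buffer_size = 600M
# hot cache for read-mostly tables: 20%
hot_cache.key_buffer_size = 200M
# cold cache for intensively modified tables: 20%
cold_cache.key_buffer_size = 200M

Tables would then be assigned to 'hot_cache' and 'cold_cache' with *note 'CACHE INDEX': cache-index. statements; all remaining tables use the default (warm) cache.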
One reason the use of three key caches is beneficial is that access to one key cache structure does not block access to the others. Statements that access tables assigned to one cache do not compete with statements that access tables assigned to another cache. Performance gains occur for other reasons as well:
The hot cache is used only for retrieval queries, so its contents are never modified. Consequently, whenever an index block needs to be pulled in from disk, the contents of the cache block chosen for replacement need not be flushed first.
For an index assigned to the hot cache, if there are no queries requiring an index scan, there is a high probability that the index blocks corresponding to nonleaf nodes of the index B-tree remain in the cache.
An update operation most frequently executed for temporary tables is performed much faster when the updated node is in the cache and need not be read from disk first. If the size of the indexes of the temporary tables is comparable with the size of the cold key cache, the probability is very high that the updated node is in the cache.
The *note 'CACHE INDEX': cache-index. statement sets up an association between a table and a key cache, but the association is lost each time the server restarts. If you want the association to take effect each time the server starts, one way to accomplish this is to use an option file: Include variable settings that configure your key caches, and an 'init_file' system variable that names a file containing *note 'CACHE INDEX': cache-index. statements to be executed. For example:
key_buffer_size = 4G
hot_cache.key_buffer_size = 2G
cold_cache.key_buffer_size = 2G
init_file=/PATH/TO/DATA-DIRECTORY/mysqld_init.sql
The statements in 'mysqld_init.sql' are executed each time the server starts. The file should contain one SQL statement per line. The following example assigns several tables each to 'hot_cache' and 'cold_cache':
CACHE INDEX db1.t1, db1.t2, db2.t3 IN hot_cache
CACHE INDEX db1.t4, db2.t5, db2.t6 IN cold_cache
File: manual.info.tmp, Node: midpoint-insertion, Next: index-preloading, Prev: multiple-key-caches, Up: myisam-key-cache
8.10.2.3 Midpoint Insertion Strategy
....................................
By default, the key cache management system uses a simple LRU strategy for choosing key cache blocks to be evicted, but it also supports a more sophisticated method called the midpoint insertion strategy.
When using the midpoint insertion strategy, the LRU chain is divided into two parts: a hot sublist and a warm sublist. The division point between two parts is not fixed, but the key cache management system takes care that the warm part is not 'too short,' always containing at least 'key_cache_division_limit' percent of the key cache blocks. 'key_cache_division_limit' is a component of structured key cache variables, so its value is a parameter that can be set per cache.
When an index block is read from a table into the key cache, it is placed at the end of the warm sublist. After a certain number of hits (accesses of the block), it is promoted to the hot sublist. At present, the number of hits required to promote a block (3) is the same for all index blocks.
A block promoted into the hot sublist is placed at the end of the list. The block then circulates within this sublist. If the block stays at the beginning of the sublist for a long enough time, it is demoted to the warm sublist. This time is determined by the value of the 'key_cache_age_threshold' component of the key cache.
The threshold value prescribes that, for a key cache containing N blocks, the block at the beginning of the hot sublist not accessed within the last 'N * key_cache_age_threshold / 100' hits is to be moved to the beginning of the warm sublist. It then becomes the first candidate for eviction, because blocks for replacement always are taken from the beginning of the warm sublist.
The midpoint insertion strategy enables you to keep more-valued blocks always in the cache. If you prefer to use the plain LRU strategy, leave the 'key_cache_division_limit' value set to its default of 100.
The midpoint insertion strategy helps to improve performance when execution of a query that requires an index scan effectively pushes out of the cache all the index blocks corresponding to valuable high-level B-tree nodes. To avoid this, you must use a midpoint insertion strategy with the 'key_cache_division_limit' set to much less than 100. Then valuable frequently hit nodes are preserved in the hot sublist during an index scan operation as well.
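For example, to enable the midpoint insertion strategy for a particular cache, set its division limit below 100 (the cache name 'hot_cache' and the values are illustrative assumptions):

mysql> SET GLOBAL hot_cache.key_cache_division_limit = 60;   -- illustrative value
mysql> SET GLOBAL hot_cache.key_cache_age_threshold = 500;   -- illustrative value

With these settings, the warm sublist always contains at least 60 percent of the cache blocks, and a block at the beginning of the hot sublist is demoted only after going unaccessed for roughly N * 500 / 100 hits.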
File: manual.info.tmp, Node: index-preloading, Next: key-cache-block-size, Prev: midpoint-insertion, Up: myisam-key-cache
8.10.2.4 Index Preloading
.........................
If there are enough blocks in a key cache to hold blocks of an entire index, or at least the blocks corresponding to its nonleaf nodes, it makes sense to preload the key cache with index blocks before starting to use it. Preloading enables you to put the table index blocks into a key cache buffer in the most efficient way: by reading the index blocks from disk sequentially.
Without preloading, the blocks are still placed into the key cache as needed by queries. Although the blocks stay in the cache, because there are enough buffers for all of them, they are fetched from disk in random order, and not sequentially.
To preload an index into a cache, use the *note 'LOAD INDEX INTO CACHE': load-index. statement. For example, the following statement preloads nodes (index blocks) of indexes of the tables 't1' and 't2':
mysql> LOAD INDEX INTO CACHE t1, t2 IGNORE LEAVES;
+---------+--------------+----------+----------+
| Table   | Op           | Msg_type | Msg_text |
+---------+--------------+----------+----------+
| test.t1 | preload_keys | status   | OK       |
| test.t2 | preload_keys | status   | OK       |
+---------+--------------+----------+----------+
The 'IGNORE LEAVES' modifier causes only blocks for the nonleaf nodes of the index to be preloaded. Thus, the statement shown preloads all index blocks from 't1', but only blocks for the nonleaf nodes from 't2'.
If an index has been assigned to a key cache using a *note 'CACHE INDEX': cache-index. statement, preloading places index blocks into that cache. Otherwise, the index is loaded into the default key cache.
File: manual.info.tmp, Node: key-cache-block-size, Next: key-cache-restructuring, Prev: index-preloading, Up: myisam-key-cache
8.10.2.5 Key Cache Block Size
.............................
It is possible to specify the size of the block buffers for an individual key cache using the 'key_cache_block_size' variable. This permits tuning of the performance of I/O operations for index files.
The best performance for I/O operations is achieved when the size of read buffers is equal to the size of the native operating system I/O buffers. But setting the size of key nodes equal to the size of the I/O buffer does not always ensure the best overall performance. When reading big leaf nodes, the server pulls in a lot of unnecessary data, effectively preventing the reading of other leaf nodes.
To control the size of blocks in the '.MYI' index file of 'MyISAM' tables, use the '--myisam-block-size' option at server startup.
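For example, an option file might align the 'MyISAM' index block size with the key cache block size (the 4096-byte value is an illustrative assumption; the best choice depends on your operating system I/O buffer size):

[mysqld]
# size of blocks in .MYI index files (illustrative)
myisam-block-size = 4096
# size of block buffers in the default key cache (illustrative)
key_cache_block_size = 4096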
File: manual.info.tmp, Node: key-cache-restructuring, Prev: key-cache-block-size, Up: myisam-key-cache
8.10.2.6 Restructuring a Key Cache
..................................
A key cache can be restructured at any time by updating its parameter values. For example:
mysql> SET GLOBAL cold_cache.key_buffer_size=4*1024*1024;
If you assign to either the 'key_buffer_size' or 'key_cache_block_size' key cache component a value that differs from the component's current value, the server destroys the cache's old structure and creates a new one based on the new values. If the cache contains any dirty blocks, the server saves them to disk before destroying and re-creating the cache. Restructuring does not occur if you change other key cache parameters.
When restructuring a key cache, the server first flushes the contents of any dirty buffers to disk. After that, the cache contents become unavailable. However, restructuring does not block queries that need to use indexes assigned to the cache. Instead, the server directly accesses the table indexes using native file system caching. File system caching is not as efficient as using a key cache, so although queries execute, a slowdown can be anticipated. After the cache has been restructured, it becomes available again for caching indexes assigned to it, and the use of file system caching for the indexes ceases.
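Changing the block size of a cache triggers the same flush-and-rebuild sequence; for example (cache name and value illustrative):

mysql> SET GLOBAL cold_cache.key_cache_block_size = 2048;  -- illustrative value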
File: manual.info.tmp, Node: query-cache, Next: statement-caching, Prev: myisam-key-cache, Up: buffering-caching
* Menu:

* query-cache-operation::                  How the Query Cache Operates
* query-cache-in-select::                  Query Cache SELECT Options
* query-cache-configuration::              Query Cache Configuration
* query-cache-status-and-maintenance::     Query Cache Status and Maintenance
Note:
The query cache is deprecated as of MySQL 5.7.20, and is removed in MySQL 8.0.
The query cache stores the text of a *note 'SELECT': select. statement together with the corresponding result that was sent to the client. If an identical statement is received later, the server retrieves the results from the query cache rather than parsing and executing the statement again. The query cache is shared among sessions, so a result set generated by one client can be sent in response to the same query issued by another client.
The query cache can be useful in an environment where you have tables that do not change very often and for which the server receives many identical queries. This is a typical situation for many Web servers that generate many dynamic pages based on database content.
The query cache does not return stale data. When tables are modified, any relevant entries in the query cache are flushed.
Note:
The query cache does not work in an environment where you have multiple *note 'mysqld': mysqld. servers updating the same 'MyISAM' tables.
The query cache is used for prepared statements under the conditions described in *note query-cache-operation::.
Note:
The query cache is not supported for partitioned tables, and is automatically disabled for queries involving partitioned tables. The query cache cannot be enabled for such queries.
Some performance data for the query cache follows. These results were generated by running the MySQL benchmark suite on a Linux Alpha 2x500MHz system with 2GB RAM and a 64MB query cache.
If all the queries you are performing are simple (such as selecting a row from a table with one row), but still differ so that the queries cannot be cached, the overhead for having the query cache active is 13%. This could be regarded as the worst case scenario. In real life, queries tend to be much more complicated, so the overhead normally is significantly lower.
Searches for a single row in a single-row table are 238% faster with the query cache than without it. This can be regarded as close to the minimum speedup to be expected for a query that is cached.
To disable the query cache at server startup, set the 'query_cache_size' system variable to 0. With the query cache code disabled, there is no noticeable overhead.
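For example, an option file entry such as the following disables the query cache entirely at startup:

[mysqld]
query_cache_type = 0
query_cache_size = 0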
The query cache offers the potential for substantial performance improvement, but do not assume that it does so under all circumstances. With some query cache configurations or server workloads, you might actually see a performance decrease:
Be cautious about sizing the query cache excessively large, which increases the overhead required to maintain the cache, possibly beyond the benefit of enabling it. Sizes in tens of megabytes are usually beneficial. Sizes in the hundreds of megabytes might not be.
Server workload has a significant effect on query cache efficiency. A query mix consisting almost entirely of a fixed set of *note 'SELECT': select. statements is much more likely to benefit from enabling the cache than a mix in which frequent *note 'INSERT': insert. statements cause continual invalidation of results in the cache. In some cases, a workaround is to use the 'SQL_NO_CACHE' option to prevent results from even entering the cache for *note 'SELECT': select. statements that use frequently modified tables. (See *note query-cache-in-select::.)
To verify that enabling the query cache is beneficial, test the operation of your MySQL server with the cache enabled and disabled. Then retest periodically because query cache efficiency may change as server workload changes.
File: manual.info.tmp, Node: query-cache-operation, Next: query-cache-in-select, Prev: query-cache, Up: query-cache
8.10.3.1 How the Query Cache Operates
.....................................
Note:
The query cache is deprecated as of MySQL 5.7.20, and is removed in MySQL 8.0.
This section describes how the query cache works when it is operational. *note query-cache-configuration::, describes how to control whether it is operational.
Incoming queries are compared to those in the query cache before parsing, so the following two queries are regarded as different by the query cache:
SELECT * FROM TBL_NAME
Select * from TBL_NAME
Queries must be exactly the same (byte for byte) to be seen as identical. In addition, query strings that are identical may be treated as different for other reasons. Queries that use different databases, different protocol versions, or different default character sets are considered different queries and are cached separately.
The cache is not used for queries of the following types:
Queries that are a subquery of an outer query
Queries executed within the body of a stored function, trigger, or event
Before a query result is fetched from the query cache, MySQL checks whether the user has *note 'SELECT': select. privilege for all databases and tables involved. If this is not the case, the cached result is not used.
If a query result is returned from query cache, the server increments the 'Qcache_hits' status variable, not 'Com_select'. See *note query-cache-status-and-maintenance::.
If a table changes, all cached queries that use the table become invalid and are removed from the cache. This includes queries that use 'MERGE' tables that map to the changed table. A table can be changed by many types of statements, such as *note 'INSERT': insert, *note 'UPDATE': update, *note 'DELETE': delete, *note 'TRUNCATE TABLE': truncate-table, *note 'ALTER TABLE': alter-table, *note 'DROP TABLE': drop-table, or *note 'DROP DATABASE': drop-database.
The query cache also works within transactions when using 'InnoDB' tables.
The result from a *note 'SELECT': select. query on a view is cached.
The query cache works for 'SELECT SQL_CALC_FOUND_ROWS ...' queries and stores a value that is returned by a following 'SELECT FOUND_ROWS()' query. 'FOUND_ROWS()' returns the correct value even if the preceding query was fetched from the cache because the number of found rows is also stored in the cache. The 'SELECT FOUND_ROWS()' query itself cannot be cached.
Prepared statements that are issued using the binary protocol using 'mysql_stmt_prepare()' (https://dev.mysql.com/doc/c-api/5.7/en/mysql-stmt-prepare.html) and 'mysql_stmt_execute()' (https://dev.mysql.com/doc/c-api/5.7/en/mysql-stmt-execute.html) (see C API Prepared Statement Interface (https://dev.mysql.com/doc/c-api/5.7/en/c-api-prepared-statement-interface.html)), are subject to limitations on caching. Comparison with statements in the query cache is based on the text of the statement after expansion of '?' parameter markers. The statement is compared only with other cached statements that were executed using the binary protocol. That is, for query cache purposes, prepared statements issued using the binary protocol are distinct from prepared statements issued using the text protocol (see *note sql-prepared-statements::).
A query cannot be cached if it uses any of the following functions:
'AES_DECRYPT()'
'AES_ENCRYPT()'
'BENCHMARK()'
'CONNECTION_ID()'
'CONVERT_TZ()'
'CURDATE()'
'CURRENT_DATE()'
'CURRENT_TIME()'
'CURRENT_TIMESTAMP()'
'CURRENT_USER()'
'CURTIME()'
'DATABASE()'
'ENCRYPT()' with one parameter
'FOUND_ROWS()'
'GET_LOCK()'
'IS_FREE_LOCK()'
'IS_USED_LOCK()'
'LAST_INSERT_ID()'
'LOAD_FILE()'
'MASTER_POS_WAIT()'
'NOW()'
'PASSWORD()'
'RAND()'
'RANDOM_BYTES()'
'RELEASE_ALL_LOCKS()'
'RELEASE_LOCK()'
'SLEEP()'
'SYSDATE()'
'UNIX_TIMESTAMP()' with no parameters
'USER()'
'UUID()'
'UUID_SHORT()'
A query also is not cached under these conditions:
It refers to loadable functions or stored functions.
It refers to user variables or local stored program variables.
It refers to tables in the 'mysql', 'INFORMATION_SCHEMA', or 'performance_schema' database.
It refers to any partitioned tables.
It is of any of the following forms:
SELECT ... LOCK IN SHARE MODE
SELECT ... FOR UPDATE
SELECT ... INTO OUTFILE ...
SELECT ... INTO DUMPFILE ...
SELECT * FROM ... WHERE autoincrement_col IS NULL
The last form is not cached because it is used as the ODBC workaround for obtaining the last insert ID value. See the Connector/ODBC section of *note connectors-apis::.
Statements within transactions that use 'SERIALIZABLE' isolation level also cannot be cached because they use 'LOCK IN SHARE MODE' locking.
It uses 'TEMPORARY' tables.
It does not use any tables.
It generates warnings.
The user has a column-level privilege for any of the involved tables.
File: manual.info.tmp, Node: query-cache-in-select, Next: query-cache-configuration, Prev: query-cache-operation, Up: query-cache
8.10.3.2 Query Cache SELECT Options
...................................
Note:
The query cache is deprecated as of MySQL 5.7.20, and is removed in MySQL 8.0.
Two query cache-related options may be specified in *note 'SELECT': select. statements:
'SQL_CACHE'
The query result is cached if it is cacheable and the value of the 'query_cache_type' system variable is 'ON' or 'DEMAND'.
'SQL_NO_CACHE'
The server does not use the query cache. It neither checks the query cache to see whether the result is already cached, nor does it cache the query result.
Examples:
SELECT SQL_CACHE id, name FROM customer;
SELECT SQL_NO_CACHE id, name FROM customer;
File: manual.info.tmp, Node: query-cache-configuration, Next: query-cache-status-and-maintenance, Prev: query-cache-in-select, Up: query-cache
8.10.3.3 Query Cache Configuration
..................................
Note:
The query cache is deprecated as of MySQL 5.7.20, and is removed in MySQL 8.0.
The 'have_query_cache' server system variable indicates whether the query cache is available:
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| have_query_cache | YES   |
+------------------+-------+
When using a standard MySQL binary, this value is always 'YES', even if query caching is disabled.
Several other system variables control query cache operation. These can be set in an option file or on the command line when starting *note 'mysqld': mysqld. The query cache system variables all have names that begin with 'query_cache_'. They are described briefly in *note server-system-variables::, with additional configuration information given here.
To set the size of the query cache, set the 'query_cache_size' system variable. Setting it to 0 disables the query cache, as does setting 'query_cache_type=0'. By default, the query cache is disabled. This is achieved using a default size of 1M, with a default for 'query_cache_type' of 0.
To reduce overhead significantly, start the server with 'query_cache_type=0' if you do not intend to use the query cache.
Note:
When using the Windows Configuration Wizard to install or configure MySQL, the default value for 'query_cache_size' is configured automatically for you based on the different configuration types available. When using the Windows Configuration Wizard, the query cache may be enabled (that is, set to a nonzero value) due to the selected configuration. The query cache is also controlled by the setting of the 'query_cache_type' variable. Check the values of these variables as set in your 'my.ini' file after configuration has taken place.
When you set 'query_cache_size' to a nonzero value, keep in mind that the query cache needs a minimum size of about 40KB to allocate its structures. (The exact size depends on system architecture.) If you set the value too small, you'll get a warning, as in this example:
mysql> SET GLOBAL query_cache_size = 40000;
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> SHOW WARNINGS\G
*************************** 1. row ***************************
Level: Warning
Code: 1282
Message: Query cache failed to set size 39936;
new query cache size is 0
mysql> SET GLOBAL query_cache_size = 41984;
Query OK, 0 rows affected (0.00 sec)
mysql> SHOW VARIABLES LIKE 'query_cache_size';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| query_cache_size | 41984 |
+------------------+-------+
For the query cache to actually be able to hold any query results, its size must be set larger:
mysql> SET GLOBAL query_cache_size = 1000000;
Query OK, 0 rows affected (0.04 sec)
mysql> SHOW VARIABLES LIKE 'query_cache_size';
+------------------+--------+
| Variable_name    | Value  |
+------------------+--------+
| query_cache_size | 999424 |
+------------------+--------+
1 row in set (0.00 sec)
The 'query_cache_size' value is aligned to the nearest 1024 byte block. The value reported may therefore be different from the value that you assign.
If the query cache size is greater than 0, the 'query_cache_type' variable influences how it works. This variable can be set to the following values:
A value of '0' or 'OFF' prevents caching or retrieval of cached results.
A value of '1' or 'ON' enables caching except of those statements that begin with 'SELECT SQL_NO_CACHE'.
A value of '2' or 'DEMAND' causes caching of only those statements that begin with 'SELECT SQL_CACHE'.
If 'query_cache_size' is 0, you should also set the 'query_cache_type' variable to 0. In this case, the server does not acquire the query cache mutex at all, which means that the query cache cannot be enabled at runtime and there is reduced overhead in query execution.
Setting the 'GLOBAL' 'query_cache_type' value determines query cache behavior for all clients that connect after the change is made. Individual clients can control cache behavior for their own connection by setting the 'SESSION' 'query_cache_type' value. For example, a client can disable use of the query cache for its own queries like this:
mysql> SET SESSION query_cache_type = OFF;
If you set 'query_cache_type' at server startup (rather than at runtime with a *note 'SET': set-variable. statement), only the numeric values are permitted.
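For example, to enable caching of cacheable statements at startup with a modestly sized cache (the 64M value is an illustrative assumption), an option file could contain:

[mysqld]
query_cache_type = 1
# tens of megabytes are usually beneficial (illustrative size)
query_cache_size = 64M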
To control the maximum size of individual query results that can be cached, set the 'query_cache_limit' system variable. The default value is 1MB.
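For example, to allow individual results of up to 2MB to be cached (the value is illustrative):

mysql> SET GLOBAL query_cache_limit = 2*1024*1024;  -- illustrative limit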
Be careful not to set the size of the cache too large. Due to the need for threads to lock the cache during updates, you may see lock contention issues with a very large cache.
Note:
You can set the maximum size that can be specified for the query cache at runtime with the *note 'SET': set-variable. statement by using the '--maximum-query_cache_size=32M' option on the command line or in the configuration file.
When a query is to be cached, its result (the data sent to the client) is stored in the query cache during result retrieval. Therefore the data usually is not handled in one big chunk. The query cache allocates blocks for storing this data on demand, so when one block is filled, a new block is allocated. Because memory allocation is a costly (time-consuming) operation, the query cache allocates blocks with a minimum size given by the 'query_cache_min_res_unit' system variable. When a query is executed, the last result block is trimmed to the actual data size so that unused memory is freed. Depending on the types of queries your server executes, you might find it helpful to tune the value of 'query_cache_min_res_unit', as shown in the sketch after this list:
The default value of 'query_cache_min_res_unit' is 4KB. This should be adequate for most cases.
If you have a lot of queries with small results, the default block size may lead to memory fragmentation, as indicated by a large number of free blocks. Fragmentation can force the query cache to prune (delete) queries from the cache due to lack of memory. In this case, decrease the value of 'query_cache_min_res_unit'. The number of free blocks and queries removed due to pruning are given by the values of the 'Qcache_free_blocks' and 'Qcache_lowmem_prunes' status variables.
If most of your queries have large results (check the 'Qcache_total_blocks' and 'Qcache_queries_in_cache' status variables), you can increase performance by increasing 'query_cache_min_res_unit'. However, be careful to not make it too large (see the previous item).
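For example, to check for fragmentation and pruning and then reduce the minimum allocation unit when many small results are fragmenting the cache (the 2KB value is an illustrative assumption):

mysql> SHOW STATUS LIKE 'Qcache_free_blocks';
mysql> SHOW STATUS LIKE 'Qcache_lowmem_prunes';
mysql> SET GLOBAL query_cache_min_res_unit = 2048;  -- illustrative value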
File: manual.info.tmp, Node: query-cache-status-and-maintenance, Prev: query-cache-configuration, Up: query-cache
8.10.3.4 Query Cache Status and Maintenance
...........................................
Note:
The query cache is deprecated as of MySQL 5.7.20, and is removed in MySQL 8.0.
To check whether the query cache is present in your MySQL server, use the following statement:
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| have_query_cache | YES   |
+------------------+-------+
You can defragment the query cache to better utilize its memory with the 'FLUSH QUERY CACHE' statement. The statement does not remove any queries from the cache.
The 'RESET QUERY CACHE' statement removes all query results from the query cache. The 'FLUSH TABLES' statement also does this.
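For example:

mysql> FLUSH QUERY CACHE;  -- defragment the cache without removing queries
mysql> RESET QUERY CACHE;  -- remove all query results from the cache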
To monitor query cache performance, use *note 'SHOW STATUS': show-status. to view the cache status variables:
mysql> SHOW STATUS LIKE 'Qcache%';
+-------------------------+--------+
| Variable_name           | Value  |
+-------------------------+--------+
| Qcache_free_blocks      | 36     |
| Qcache_free_memory      | 138488 |
| Qcache_hits             | 79570  |
| Qcache_inserts          | 27087  |
| Qcache_lowmem_prunes    | 3114   |
| Qcache_not_cached       | 22989  |
| Qcache_queries_in_cache | 415    |
| Qcache_total_blocks     | 912    |
+-------------------------+--------+
Descriptions of each of these variables are given in *note server-status-variables::. Some uses for them are described here.
The total number of *note 'SELECT': select. queries is given by this formula:
Com_select
+ Qcache_hits
+ queries with errors found by parser
The 'Com_select' value is given by this formula:
Qcache_inserts
+ Qcache_not_cached
+ queries with errors found during the column-privileges check
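As a worked example using the sample status output above, and ignoring queries with errors:

Com_select    = 27087 + 22989 = 50076   (Qcache_inserts + Qcache_not_cached)
total SELECTs = 50076 + 79570 = 129646  (Com_select + Qcache_hits)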
The query cache uses variable-length blocks, so 'Qcache_total_blocks' and 'Qcache_free_blocks' may indicate query cache memory fragmentation. After 'FLUSH QUERY CACHE', only a single free block remains.
Every cached query requires a minimum of two blocks (one for the query text and one or more for the query results). Also, every table that is used by a query requires one block. However, if two or more queries use the same table, only one table block needs to be allocated.
The information provided by the 'Qcache_lowmem_prunes' status variable can help you tune the query cache size. It counts the number of queries that have been removed from the cache to free up memory for caching new queries. The query cache uses a least recently used (LRU) strategy to decide which queries to remove from the cache. Tuning information is given in *note query-cache-configuration::.
File: manual.info.tmp, Node: statement-caching, Prev: query-cache, Up: buffering-caching
For certain statements that a client might execute multiple times during a session, the server converts the statement to an internal structure and caches that structure to be used during execution. Caching enables the server to perform more efficiently because it avoids the overhead of reconverting the statement should it be needed again during the session. Conversion and caching occurs for these statements:
Prepared statements, both those processed at the SQL level (using the *note 'PREPARE': prepare. statement) and those processed using the binary client/server protocol (using the 'mysql_stmt_prepare()' (https://dev.mysql.com/doc/c-api/5.7/en/mysql-stmt-prepare.html) C API function). The 'max_prepared_stmt_count' system variable controls the total number of statements the server caches. (The sum of the number of prepared statements across all sessions.)
Stored programs (stored procedures and functions, triggers, and events). In this case, the server converts and caches the entire program body. The 'stored_program_cache' system variable indicates the approximate number of stored programs the server caches per session.
The server maintains caches for prepared statements and stored programs on a per-session basis. Statements cached for one session are not accessible to other sessions. When a session ends, the server discards any statements cached for it.
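For example, you can inspect and adjust both limits at runtime (the value shown for 'max_prepared_stmt_count' is an illustrative assumption):

mysql> SHOW VARIABLES LIKE 'max_prepared_stmt_count';
mysql> SHOW VARIABLES LIKE 'stored_program_cache';
mysql> SET GLOBAL max_prepared_stmt_count = 20000;  -- illustrative value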
When the server uses a cached internal statement structure, it must take care that the structure does not go out of date. Metadata changes can occur for an object used by the statement, causing a mismatch between the current object definition and the definition as represented in the internal statement structure. Metadata changes occur for DDL statements such as those that create, drop, alter, rename, or truncate tables, or that analyze, optimize, or repair tables. Table content changes (for example, with *note 'INSERT': insert. or *note 'UPDATE': update.) do not change metadata, nor do *note 'SELECT': select. statements.
Here is an illustration of the problem. Suppose that a client prepares this statement:
PREPARE s1 FROM 'SELECT * FROM t1';
The 'SELECT *' expands in the internal structure to the list of columns in the table. If the set of columns in the table is modified with 'ALTER TABLE', the prepared statement goes out of date. If the server does not detect this change the next time the client executes 's1', the prepared statement returns incorrect results.
To avoid problems caused by metadata changes to tables or views referred to by the prepared statement, the server detects these changes and automatically reprepares the statement when it is next executed. That is, the server reparses the statement and rebuilds the internal structure. Reparsing also occurs after referenced tables or views are flushed from the table definition cache, either implicitly to make room for new entries in the cache, or explicitly due to 'FLUSH TABLES'.
Similarly, if changes occur to objects used by a stored program, the server reparses affected statements within the program.
The server also detects metadata changes for objects in expressions. These might be used in statements specific to stored programs, such as 'DECLARE CURSOR' or flow-control statements such as *note 'IF': if, *note 'CASE': case, and *note 'RETURN': return.
To avoid reparsing entire stored programs, the server reparses affected statements or expressions within a program only as needed. Examples:
Suppose that metadata for a table or view is changed. Reparsing occurs for a 'SELECT *' within the program that accesses the table or view, but not for a 'SELECT *' that does not access the table or view.
When a statement is affected, the server reparses it only partially if possible. Consider this *note 'CASE': case. statement:
CASE CASE_EXPR
WHEN WHEN_EXPR1 ...
WHEN WHEN_EXPR2 ...
WHEN WHEN_EXPR3 ...
...
END CASE
If a metadata change affects only 'WHEN WHEN_EXPR3', that expression is reparsed. CASE_EXPR and the other 'WHEN' expressions are not reparsed.
Reparsing uses the default database and SQL mode that were in effect for the original conversion to internal form.
The server attempts reparsing up to three times. An error occurs if all attempts fail.
Reparsing is automatic, but to the extent that it occurs, it diminishes prepared statement and stored program performance.
For prepared statements, the 'Com_stmt_reprepare' status variable tracks the number of repreparations.
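For example, to check how often repreparation has occurred since the server started:

mysql> SHOW GLOBAL STATUS LIKE 'Com_stmt_reprepare';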
File: manual.info.tmp, Node: locking-issues, Next: optimizing-server, Prev: buffering-caching, Up: optimization