Menu:
optimizing-innodb-storage-layout:: Optimizing Storage Layout for InnoDB Tables
optimizing-innodb-transaction-management:: Optimizing InnoDB Transaction Management
innodb-performance-ro-txn:: Optimizing InnoDB Read-Only Transactions
optimizing-innodb-logging:: Optimizing InnoDB Redo Logging
optimizing-innodb-bulk-data-loading:: Bulk Data Loading for InnoDB Tables
optimizing-innodb-queries:: Optimizing InnoDB Queries
optimizing-innodb-ddl-operations:: Optimizing InnoDB DDL Operations
optimizing-innodb-diskio:: Optimizing InnoDB Disk I/O
optimizing-innodb-configuration-variables:: Optimizing InnoDB Configuration Variables
optimizing-innodb-many-tables:: Optimizing InnoDB for Systems with Many Tables
*note 'InnoDB': innodb-storage-engine. is the storage engine that MySQL customers typically use in production databases where reliability and concurrency are important. 'InnoDB' is the default storage engine in MySQL. This section explains how to optimize database operations for 'InnoDB' tables.
File: manual.info.tmp, Node: optimizing-innodb-storage-layout, Next: optimizing-innodb-transaction-management, Prev: optimizing-innodb, Up: optimizing-innodb
Once your data reaches a stable size, or a growing table has increased by tens or some hundreds of megabytes, consider using the 'OPTIMIZE TABLE' statement to reorganize the table and compact any wasted space. The reorganized tables require less disk I/O to perform full table scans. This is a straightforward technique that can improve performance when other techniques such as improving index usage or tuning application code are not practical.
'OPTIMIZE TABLE' copies the data part of the table and rebuilds the indexes. The benefits come from improved packing of data within indexes, and reduced fragmentation within the tablespaces and on disk. The benefits vary depending on the data in each table. You may find that there are significant gains for some and not for others, or that the gains decrease over time until you next optimize the table. This operation can be slow if the table is large or if the indexes being rebuilt do not fit into the buffer pool. The first run after adding a lot of data to a table is often much slower than later runs.
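For 'InnoDB' tables, 'OPTIMIZE TABLE' is mapped to 'ALTER TABLE ... FORCE', which rebuilds the table and its indexes and updates index statistics. A minimal sketch; 'customer_orders' is a hypothetical table name:

OPTIMIZE TABLE customer_orders;
-- Rebuilds the table copy and indexes, reclaiming space left
-- behind by earlier deletes and updates.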
In 'InnoDB', having a long 'PRIMARY KEY' (either a single column with a lengthy value, or several columns that form a long composite value) wastes a lot of disk space. The primary key value for a row is duplicated in all the secondary index records that point to the same row. (See *note innodb-index-types::.) Create an 'AUTO_INCREMENT' column as the primary key if your primary key is long, or index a prefix of a long 'VARCHAR' column instead of the entire column.
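As an illustrative sketch (all table and column names here are hypothetical), compare a long composite primary key with a short 'AUTO_INCREMENT' key plus a prefix index on the long string column:

-- Long composite primary key: its full value is duplicated in
-- every secondary index record of the table.
CREATE TABLE events_wide (
  host      VARCHAR(255) NOT NULL,
  event_uid VARCHAR(191) NOT NULL,
  payload   TEXT,
  PRIMARY KEY (host, event_uid)
) ENGINE=InnoDB;

-- Short surrogate key: secondary indexes carry only the 8-byte
-- id, and a prefix index still supports lookups on the long
-- column without indexing its entire value.
CREATE TABLE events_narrow (
  id        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  host      VARCHAR(255) NOT NULL,
  event_uid VARCHAR(191) NOT NULL,
  payload   TEXT,
  PRIMARY KEY (id),
  KEY host_prefix (host(16))
) ENGINE=InnoDB;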
Use the *note 'VARCHAR': char. data type instead of *note 'CHAR': char. to store variable-length strings or for columns with many 'NULL' values. A *note 'CHAR(N)': char. column always takes N characters to store data, even if the string is shorter or its value is 'NULL'. Smaller tables fit better in the buffer pool and reduce disk I/O.
When using 'COMPACT' row format (the default 'InnoDB' format) and variable-length character sets, such as 'utf8' or 'sjis', *note 'CHAR(N)': char. columns occupy a variable amount of space, but still at least N bytes.
For tables that are big, or contain lots of repetitive text or numeric data, consider using 'COMPRESSED' row format. Less disk I/O is required to bring data into the buffer pool, or to perform full table scans. Before making a permanent decision, measure the amount of compression you can achieve by using 'COMPRESSED' versus 'COMPACT' row format.
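One hedged way to measure the difference is to build the same table in both row formats and compare the resulting tablespace sizes. The names 'source_table', 't_compact', and 't_compressed' are hypothetical, and 'innodb_file_per_table=ON' (the default) is assumed so that each table has its own '.ibd' file; note that 'CREATE TABLE ... LIKE' copies the source row format, so this sketch assumes the source uses 'COMPACT':

CREATE TABLE t_compact    LIKE source_table;
CREATE TABLE t_compressed LIKE source_table;
ALTER TABLE t_compressed ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

INSERT INTO t_compact    SELECT * FROM source_table;
INSERT INTO t_compressed SELECT * FROM source_table;

-- Compare the .ibd file sizes on disk, or consult
-- INFORMATION_SCHEMA.INNODB_CMP for compression statistics.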
File: manual.info.tmp, Node: optimizing-innodb-transaction-management, Next: innodb-performance-ro-txn, Prev: optimizing-innodb-storage-layout, Up: optimizing-innodb
To optimize 'InnoDB' transaction processing, find the ideal balance between the performance overhead of transactional features and the workload of your server. For example, an application might encounter performance issues if it commits thousands of times per second, and different performance issues if it commits only every 2-3 hours.
The default MySQL setting 'AUTOCOMMIT=1' can impose performance limitations on a busy database server. Where practical, wrap several related data change operations into a single transaction by issuing 'SET AUTOCOMMIT=0' or a 'START TRANSACTION' statement, followed by a 'COMMIT' statement after making all the changes.
'InnoDB' must flush the log to disk at each transaction commit if that transaction made modifications to the database. When each change is followed by a commit (as with the default autocommit setting), the I/O throughput of the storage device puts a cap on the number of potential operations per second.
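For example (a sketch; the 'accounts' and 'transfers' tables are hypothetical), instead of letting each statement commit and flush the log separately:

SET autocommit=0;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
INSERT INTO transfers (from_id, to_id, amount) VALUES (1, 2, 100);
COMMIT;  -- one log flush covers all three changes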
Alternatively, for transactions that consist only of a single *note 'SELECT': select. statement, turning on 'AUTOCOMMIT' helps 'InnoDB' to recognize read-only transactions and optimize them. See *note innodb-performance-ro-txn:: for requirements.
Avoid performing rollbacks after inserting, updating, or deleting huge numbers of rows. If a big transaction is slowing down server performance, rolling it back can make the problem worse, potentially taking several times as long to perform as the original data change operations. Killing the database process does not help, because the rollback starts again on server startup.
To minimize the chance of this issue occurring:
* Increase the size of the buffer pool so that all the data
  changes can be cached rather than immediately written to
  disk.
* Set 'innodb_change_buffering=all' so that update and delete
operations are buffered in addition to inserts.
* Consider issuing 'COMMIT' statements periodically during the
  big data change operation, possibly breaking a single delete
  or update into multiple statements that operate on smaller
  numbers of rows, as shown in the sketch after this list.
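A minimal sketch of the chunked approach mentioned in the last item, assuming 'SET autocommit=0' is in effect; the 'old_rows' table and its date column are hypothetical:

DELETE FROM old_rows WHERE created_at < '2020-01-01' LIMIT 10000;
COMMIT;
-- Repeat (for example, from a client-side loop) until the
-- DELETE reports zero rows affected. Each iteration commits a
-- bounded amount of work, keeping the undo log small.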
To get rid of a runaway rollback once it occurs, increase the buffer pool so that the rollback becomes CPU-bound and runs fast, or kill the server and restart with 'innodb_force_recovery=3', as explained in *note innodb-recovery::.
This issue is expected to be infrequent with the default setting 'innodb_change_buffering=all', which allows update and delete operations to be cached in memory, making them faster to perform in the first place, and also faster to roll back if needed. Make sure to use this parameter setting on servers that process long-running transactions with many inserts, updates, or deletes.
If you can afford to lose some of the latest committed transactions when an unexpected exit occurs, you can set the 'innodb_flush_log_at_trx_commit' parameter to 0. 'InnoDB' tries to flush the log once per second anyway, although the flush is not guaranteed. Also, set the value of 'innodb_support_xa' to 0, which reduces the number of disk flushes due to synchronizing on disk data and the binary log.
Note:
'innodb_support_xa' is deprecated; expect it to be removed in a future release. As of MySQL 5.7.10, 'InnoDB' support for two-phase commit in XA transactions is always enabled and disabling 'innodb_support_xa' is no longer permitted.
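A hedged sketch of the relaxed-durability setting discussed above (on MySQL 5.7.10 and later, leave 'innodb_support_xa' at its default):

-- Risks losing up to roughly one second of committed
-- transactions after a crash; acceptable only if the
-- application tolerates that loss.
SET GLOBAL innodb_flush_log_at_trx_commit = 0;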
When rows are modified or deleted, the rows and associated undo logs are not physically removed immediately, or even immediately after the transaction commits. The old data is preserved until transactions that started earlier or concurrently are finished, so that those transactions can access the previous state of modified or deleted rows. Thus, a long-running transaction can prevent 'InnoDB' from purging data that was changed by a different transaction.
When rows are modified or deleted within a long-running transaction, other transactions using the 'READ COMMITTED' and 'REPEATABLE READ' isolation levels have to do more work to reconstruct the older data if they read those same rows.
When a long-running transaction modifies a table, queries against that table from other transactions do not make use of the covering index technique. Queries that normally could retrieve all the result columns from a secondary index, instead look up the appropriate values from the table data.
If secondary index pages are found to have a 'PAGE_MAX_TRX_ID' that is too new, or if records in the secondary index are delete-marked, 'InnoDB' may need to look up records using a clustered index.
File: manual.info.tmp, Node: innodb-performance-ro-txn, Next: optimizing-innodb-logging, Prev: optimizing-innodb-transaction-management, Up: optimizing-innodb
'InnoDB' can avoid the overhead associated with setting up the transaction ID ('TRX_ID' field) for transactions that are known to be read-only. A transaction ID is only needed for a transaction that might perform write operations or locking reads such as 'SELECT ... FOR UPDATE'. Eliminating unnecessary transaction IDs reduces the size of internal data structures that are consulted each time a query or data change statement constructs a read view.
'InnoDB' detects read-only transactions when:
The transaction is started with the *note 'START TRANSACTION READ ONLY': commit. statement. In this case, attempting to make changes to the database (for 'InnoDB', 'MyISAM', or other types of tables) causes an error, and the transaction continues in read-only state:
ERROR 1792 (25006): Cannot execute statement in a READ ONLY transaction.
You can still make changes to session-specific temporary tables in a read-only transaction, or issue locking queries for them, because those changes and locks are not visible to any other transaction.
The 'autocommit' setting is turned on, so that the transaction is guaranteed to be a single statement, and the single statement making up the transaction is a 'non-locking' *note 'SELECT': select. statement. That is, a 'SELECT' that does not use a 'FOR UPDATE' or 'LOCK IN SHARE MODE' clause.
The transaction is started without the 'READ ONLY' option, but no updates or statements that explicitly lock rows have been executed yet. Until updates or explicit locks are required, a transaction stays in read-only mode.
Thus, for a read-intensive application such as a report generator, you can tune a sequence of 'InnoDB' queries by grouping them inside *note 'START TRANSACTION READ ONLY': commit. and *note 'COMMIT': commit., or by turning on the 'autocommit' setting before running the 'SELECT' statements, or simply by avoiding any data change statements interspersed with the queries.
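For example, a report generator might group its queries like this (the 'orders' table is illustrative):

START TRANSACTION READ ONLY;
SELECT COUNT(*)   FROM orders WHERE order_date = CURDATE();
SELECT SUM(total) FROM orders WHERE order_date = CURDATE();
COMMIT;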
For information about *note 'START TRANSACTION': commit. and 'autocommit', see *note commit::.
Note:
Transactions that qualify as auto-commit, non-locking, and read-only (AC-NL-RO) are kept out of certain internal 'InnoDB' data structures and are therefore not listed in *note 'SHOW ENGINE INNODB STATUS': show-engine. output.
File: manual.info.tmp, Node: optimizing-innodb-logging, Next: optimizing-innodb-bulk-data-loading, Prev: innodb-performance-ro-txn, Up: optimizing-innodb
Consider the following guidelines for optimizing redo logging:
Make your redo log files big, even as big as the buffer pool. When 'InnoDB' has written the redo log files full, it must write the modified contents of the buffer pool to disk in a checkpoint. Small redo log files cause many unnecessary disk writes. Although historically big redo log files caused lengthy recovery times, recovery is now much faster and you can confidently use large redo log files.
The size and number of redo log files are configured using the 'innodb_log_file_size' and 'innodb_log_files_in_group' configuration options. For information about modifying an existing redo log file configuration, see *note innodb-redo-log-file-reconfigure::.
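An illustrative option-file fragment; the sizes are placeholders to adapt to your buffer pool and workload, not recommendations:

[mysqld]
innodb_log_file_size=1G
innodb_log_files_in_group=2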
Consider increasing the size of the log buffer. A large log buffer enables large transactions to run without a need to write the log to disk before the transactions commit. Thus, if you have transactions that update, insert, or delete many rows, making the log buffer larger saves disk I/O. Log buffer size is configured using the 'innodb_log_buffer_size' configuration option.
Configure the 'innodb_log_write_ahead_size' configuration option to avoid 'read-on-write'. This option defines the write-ahead block size for the redo log. Set 'innodb_log_write_ahead_size' to match the operating system or file system cache block size. Read-on-write occurs when redo log blocks are not entirely cached to the operating system or file system due to a mismatch between write-ahead block size for the redo log and operating system or file system cache block size.
Valid values for 'innodb_log_write_ahead_size' are multiples of the 'InnoDB' log file block size (2^n). The minimum value is the 'InnoDB' log file block size (512). Write-ahead does not occur when the minimum value is specified. The maximum value is equal to the 'innodb_page_size' value. If you specify a value for 'innodb_log_write_ahead_size' that is larger than the 'innodb_page_size' value, the 'innodb_log_write_ahead_size' setting is truncated to the 'innodb_page_size' value.
Setting the 'innodb_log_write_ahead_size' value too low in relation to the operating system or file system cache block size results in read-on-write. Setting the value too high may have a slight impact on 'fsync' performance for log file writes due to several blocks being written at once.
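'innodb_log_write_ahead_size' is dynamic, so you can experiment at runtime; 4096 here is only an example matching a hypothetical 4KB file system cache block size:

SET GLOBAL innodb_log_write_ahead_size = 4096;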
File: manual.info.tmp, Node: optimizing-innodb-bulk-data-loading, Next: optimizing-innodb-queries, Prev: optimizing-innodb-logging, Up: optimizing-innodb
These performance tips supplement the general guidelines for fast inserts in *note insert-optimization::.
When importing data into 'InnoDB', turn off autocommit mode, because it performs a log flush to disk for every insert. To disable autocommit during your import operation, surround it with *note 'SET autocommit': commit. and *note 'COMMIT': commit. statements:
SET autocommit=0;
... SQL IMPORT STATEMENTS ...
COMMIT;
The *note 'mysqldump': mysqldump. option '--opt' creates dump files that are fast to import into an 'InnoDB' table, even without wrapping them with the *note 'SET autocommit': commit. and *note 'COMMIT': commit. statements.
If you have 'UNIQUE' constraints on secondary keys, you can speed up table imports by temporarily turning off the uniqueness checks during the import session:
SET unique_checks=0;
... SQL IMPORT STATEMENTS ...
SET unique_checks=1;
For big tables, this saves a lot of disk I/O because 'InnoDB' can use its change buffer to write secondary index records in a batch. Be certain that the data contains no duplicate keys.
If you have 'FOREIGN KEY' constraints in your tables, you can speed up table imports by turning off the foreign key checks for the duration of the import session:
SET foreign_key_checks=0;
... SQL IMPORT STATEMENTS ...
SET foreign_key_checks=1;
For big tables, this can save a lot of disk I/O.
Use the multiple-row *note 'INSERT': insert. syntax to reduce communication overhead between the client and the server if you need to insert many rows:
INSERT INTO yourtable VALUES (1,2), (5,5), ...;
This tip is valid for inserts into any table, not just 'InnoDB' tables.
When doing bulk inserts into tables with auto-increment columns, set 'innodb_autoinc_lock_mode' to 2 instead of the default value 1. See *note innodb-auto-increment-handling:: for details.
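'innodb_autoinc_lock_mode' is not dynamic, so set it in the option file and restart the server (a sketch):

[mysqld]
innodb_autoinc_lock_mode=2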
When performing bulk inserts, it is faster to insert rows in 'PRIMARY KEY' order. 'InnoDB' tables use a clustered index, which makes it relatively fast to use data in the order of the 'PRIMARY KEY'. Performing bulk inserts in 'PRIMARY KEY' order is particularly important for tables that do not fit entirely within the buffer pool.
For optimal performance when loading data into an 'InnoDB' 'FULLTEXT' index, follow this set of steps:
Define a column 'FTS_DOC_ID' at table creation time, of type 'BIGINT UNSIGNED NOT NULL', with a unique index named 'FTS_DOC_ID_INDEX'. For example:
CREATE TABLE t1 (
FTS_DOC_ID BIGINT unsigned NOT NULL AUTO_INCREMENT,
title varchar(255) NOT NULL DEFAULT '',
text mediumtext NOT NULL,
PRIMARY KEY (`FTS_DOC_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE UNIQUE INDEX FTS_DOC_ID_INDEX on t1(FTS_DOC_ID);
Load the data into the table.
Create the 'FULLTEXT' index after the data is loaded.
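Continuing the example above, the final step might look like this (the index name 'ft_title_text' is illustrative):

CREATE FULLTEXT INDEX ft_title_text ON t1 (title, text);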
Note:
When adding the 'FTS_DOC_ID' column at table creation time, ensure that the 'FTS_DOC_ID' column is updated when the 'FULLTEXT' indexed column is updated, as the 'FTS_DOC_ID' must increase monotonically with each *note 'INSERT': insert. or *note 'UPDATE': update. If you choose not to add the 'FTS_DOC_ID' column at table creation time and let 'InnoDB' manage DOC IDs for you, 'InnoDB' adds 'FTS_DOC_ID' as a hidden column with the next *note 'CREATE FULLTEXT INDEX': create-index. call. This approach, however, requires a table rebuild, which can impact performance.
File: manual.info.tmp, Node: optimizing-innodb-queries, Next: optimizing-innodb-ddl-operations, Prev: optimizing-innodb-bulk-data-loading, Up: optimizing-innodb
To tune queries for 'InnoDB' tables, create an appropriate set of indexes on each table. See *note mysql-indexes:: for details. Follow these guidelines for 'InnoDB' indexes:
Because each 'InnoDB' table has a primary key (whether you request one or not), specify a set of primary key columns for each table, columns that are used in the most important and time-critical queries.
Do not specify too many or too long columns in the primary key, because these column values are duplicated in each secondary index. When an index contains unnecessary data, the I/O to read this data and memory to cache it reduce the performance and scalability of the server.
Do not create a separate secondary index for each column, because each query can only make use of one index. Indexes on rarely tested columns or columns with only a few different values might not be helpful for any queries. If you have many queries for the same table, testing different combinations of columns, try to create a small number of concatenated indexes rather than a large number of single-column indexes. If an index contains all the columns needed for the result set (known as a covering index), the query might be able to avoid reading the table data at all.
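For example (a sketch over a hypothetical 'orders' table), a single concatenated index can serve both as a filter and as a covering index:

-- One composite index instead of separate single-column indexes.
CREATE INDEX cust_date_total ON orders (customer_id, order_date, total);

-- This query can be satisfied entirely from the index (a
-- covering index), without reading the table rows:
SELECT order_date, total FROM orders
  WHERE customer_id = 42 AND order_date >= '2024-01-01';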
If an indexed column cannot contain any 'NULL' values, declare it as 'NOT NULL' when you create the table. The optimizer can better determine which index is most effective to use for a query, when it knows whether each column contains 'NULL' values.
You can optimize single-query transactions for 'InnoDB' tables, using the technique in *note innodb-performance-ro-txn::.
File: manual.info.tmp, Node: optimizing-innodb-ddl-operations, Next: optimizing-innodb-diskio, Prev: optimizing-innodb-queries, Up: optimizing-innodb
Many DDL operations on tables and indexes ('CREATE', 'ALTER', and 'DROP' statements) can be performed online. See *note innodb-online-ddl:: for details.
Online DDL support for adding secondary indexes means that you can generally speed up the process of creating and loading a table and associated indexes by creating the table without secondary indexes, then adding secondary indexes after the data is loaded.
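A sketch of that pattern; the table and index names are illustrative:

CREATE TABLE measurements (
  id     BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  sensor INT NOT NULL,
  ts     DATETIME NOT NULL,
  val    DOUBLE,
  PRIMARY KEY (id)
) ENGINE=InnoDB;

-- ... bulk load the data first ...

-- Add the secondary index after loading; online DDL builds it
-- in a single pass over the loaded data.
ALTER TABLE measurements ADD INDEX sensor_ts (sensor, ts);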
Use *note 'TRUNCATE TABLE': truncate-table. to empty a table, not 'DELETE FROM TBL_NAME'. Foreign key constraints can make a 'TRUNCATE' statement work like a regular 'DELETE' statement, in which case a sequence of commands like *note 'DROP TABLE': drop-table. and *note 'CREATE TABLE': create-table. might be fastest.
Because the primary key is integral to the storage layout of each 'InnoDB' table, and changing the definition of the primary key involves reorganizing the whole table, always set up the primary key as part of the *note 'CREATE TABLE': create-table. statement, and plan ahead so that you do not need to 'ALTER' or 'DROP' the primary key afterward.
File: manual.info.tmp, Node: optimizing-innodb-diskio, Next: optimizing-innodb-configuration-variables, Prev: optimizing-innodb-ddl-operations, Up: optimizing-innodb
If you follow best practices for database design and tuning techniques for SQL operations, but your database is still slow due to heavy disk I/O activity, consider these disk I/O optimizations. If the Unix 'top' tool or the Windows Task Manager shows that the CPU usage percentage with your workload is less than 70%, your workload is probably disk-bound.
Increase buffer pool size
When table data is cached in the 'InnoDB' buffer pool, it can be accessed repeatedly by queries without requiring any disk I/O. Specify the size of the buffer pool with the 'innodb_buffer_pool_size' option. This memory area is important enough that it is typically recommended to configure 'innodb_buffer_pool_size' to 50 to 75 percent of system memory. For more information, see *note memory-use::.
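As of MySQL 5.7.5, 'innodb_buffer_pool_size' can be changed dynamically; the value below is a placeholder for roughly two-thirds of memory on a hypothetical 12GB server, and 'InnoDB' rounds it to a multiple of the chunk size:

SET GLOBAL innodb_buffer_pool_size = 8589934592;  -- 8GB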
Adjust the flush method
In some versions of GNU/Linux and Unix, flushing files to disk with the Unix 'fsync()' call (which 'InnoDB' uses by default) and similar methods is surprisingly slow. If database write performance is an issue, conduct benchmarks with the 'innodb_flush_method' parameter set to 'O_DSYNC'.
Use a noop or deadline I/O scheduler with native AIO on Linux
'InnoDB' uses the asynchronous I/O subsystem (native AIO) on Linux to perform read-ahead and write requests for data file pages. This behavior is controlled by the 'innodb_use_native_aio' configuration option, which is enabled by default. With native AIO, the type of I/O scheduler has greater influence on I/O performance. Generally, noop and deadline I/O schedulers are recommended. Conduct benchmarks to determine which I/O scheduler provides the best results for your workload and environment. For more information, see *note innodb-linux-native-aio::.
Use direct I/O on Solaris 10 for x86_64 architecture
When using the 'InnoDB' storage engine on Solaris 10 for x86_64 architecture (AMD Opteron), use direct I/O for 'InnoDB'-related files to avoid degradation of 'InnoDB' performance. To use direct I/O for an entire UFS file system used for storing 'InnoDB'-related files, mount it with the 'forcedirectio' option; see 'mount_ufs(1M)'. (The default on Solaris 10/x86_64 is not to use this option.) To apply direct I/O only to 'InnoDB' file operations rather than the whole file system, set 'innodb_flush_method = O_DIRECT'. With this setting, 'InnoDB' calls 'directio()' instead of 'fcntl()' for I/O to data files (not for I/O to log files).
Use raw storage for data and log files with Solaris 2.6 or later
When using the 'InnoDB' storage engine with a large 'innodb_buffer_pool_size' value on any release of Solaris 2.6 and up and any platform (sparc/x86/x64/amd64), conduct benchmarks with 'InnoDB' data files and log files on raw devices or on a separate direct I/O UFS file system, using the 'forcedirectio' mount option as described previously. (It is necessary to use the mount option rather than setting 'innodb_flush_method' if you want direct I/O for the log files.) Users of the Veritas file system VxFS should use the 'convosync=direct' mount option.
Do not place other MySQL data files, such as those for 'MyISAM' tables, on a direct I/O file system. Executables or libraries must not be placed on a direct I/O file system.
Use additional storage devices
Additional storage devices could be used to set up a RAID configuration. For related information, see *note disk-issues::.
Alternatively, 'InnoDB' tablespace data files and log files can be placed on different physical disks. For more information, refer to the following sections:
* *note innodb-init-startup-configuration::
* *note innodb-create-table-external::
* *note general-tablespaces-creating::
* *note innodb-migration::
Consider non-rotational storage
Non-rotational storage generally provides better performance for random I/O operations, and rotational storage for sequential I/O operations. When distributing data and log files across rotational and non-rotational storage devices, consider the type of I/O operations that are predominantly performed on each file.
Random I/O-oriented files typically include file-per-table and general tablespace data files, undo tablespace files, and temporary tablespace files. Sequential I/O-oriented files include 'InnoDB' system tablespace files (due to doublewrite buffering and change buffering) and log files such as binary log files and redo log files.
Review settings for the following configuration options when using non-rotational storage (a combined example appears after this list):
* 'innodb_checksum_algorithm'
The 'crc32' option uses a faster checksum algorithm and is
recommended for fast storage systems.
* 'innodb_flush_neighbors'
Optimizes I/O for rotational storage devices. Disable it for
non-rotational storage or a mix of rotational and
non-rotational storage.
* 'innodb_io_capacity'
The default setting of 200 is generally sufficient for a
lower-end non-rotational storage device. For higher-end,
bus-attached devices, consider a higher setting such as 1000.
* 'innodb_io_capacity_max'
The default value of 2000 is intended for workloads that use
non-rotational storage. For a high-end, bus-attached
non-rotational storage device, consider a higher setting such
as 2500.
* 'innodb_log_compressed_pages'
If redo logs are on non-rotational storage, consider disabling
this option to reduce logging. See Disable logging of
compressed pages.
* 'innodb_log_file_size'
If redo logs are on non-rotational storage, configure this
option to maximize caching and write combining.
* 'innodb_page_size'
Consider using a page size that matches the internal sector
size of the disk. Early-generation SSD devices often have a
4KB sector size. Some newer devices have a 16KB sector size.
The default 'InnoDB' page size is 16KB. Keeping the page size
close to the storage device block size minimizes the amount of
unchanged data that is rewritten to disk.
* 'binlog_row_image'
If binary logs are on non-rotational storage and all tables
have primary keys, consider setting this option to 'minimal'
to reduce logging.
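A combined, illustrative option-file fragment for a server whose data, redo logs, and binary logs all reside on non-rotational storage; the values restate the starting points suggested above and should be benchmarked, not taken as recommendations:

[mysqld]
innodb_checksum_algorithm=crc32
innodb_flush_neighbors=0
innodb_io_capacity=1000
innodb_io_capacity_max=2500
innodb_log_compressed_pages=OFF
binlog_row_image=minimal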
Ensure that TRIM support is enabled for your operating system. It is typically enabled by default.
Increase I/O capacity to avoid backlogs
If throughput drops periodically because of 'InnoDB' checkpoint operations, consider increasing the value of the 'innodb_io_capacity' configuration option. Higher values cause more frequent flushing, avoiding the backlog of work that can cause dips in throughput.
Lower I/O capacity if flushing does not fall behind
If the system is not falling behind with 'InnoDB' flushing operations, consider lowering the value of the 'innodb_io_capacity' configuration option. Typically, you keep this option value as low as practical, but not so low that it causes periodic drops in throughput as mentioned in the preceding bullet. In a typical scenario where you could lower the option value, you might see a combination like this in the output from *note 'SHOW ENGINE INNODB STATUS': show-engine.:
* History list length low, below a few thousand.
* Insert buffer merges close to rows inserted.
* Modified pages in buffer pool consistently well below
'innodb_max_dirty_pages_pct' of the buffer pool. (Measure at
a time when the server is not doing bulk inserts; it is normal
during bulk inserts for the modified pages percentage to rise
significantly.)
* 'Log sequence number - Last checkpoint' is at less than 7/8 or
ideally less than 6/8 of the total size of the 'InnoDB' log
files.
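If the status output shows the pattern above, you might lower the setting at runtime and observe the effect; 'innodb_io_capacity' is dynamic, and 100 (its minimum) is an arbitrary example:

SHOW ENGINE INNODB STATUS\G
SET GLOBAL innodb_io_capacity = 100;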
Store system tablespace files on Fusion-io devices
You can take advantage of a doublewrite buffer-related I/O optimization by storing system tablespace files ('ibdata files') on Fusion-io devices that support atomic writes. In this case, doublewrite buffering ('innodb_doublewrite') is automatically disabled and Fusion-io atomic writes are used for all data files. This feature is only supported on Fusion-io hardware and is only enabled for Fusion-io NVMFS on Linux. To take full advantage of this feature, an 'innodb_flush_method' setting of 'O_DIRECT' is recommended.
Note:
Because the doublewrite buffer setting is global, doublewrite buffering is also disabled for data files residing on non-Fusion-io hardware.
Disable logging of compressed pages
When using the 'InnoDB' table compression feature, images of re-compressed pages are written to the redo log when changes are made to compressed data. This behavior is controlled by 'innodb_log_compressed_pages', which is enabled by default to prevent corruption that can occur if a different version of the 'zlib' compression algorithm is used during recovery. If you are certain that the 'zlib' version is not subject to change, disable 'innodb_log_compressed_pages' to reduce redo log generation for workloads that modify compressed data.
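'innodb_log_compressed_pages' is dynamic, so disabling it is a single statement (only do this if you are certain the 'zlib' version is not subject to change):

SET GLOBAL innodb_log_compressed_pages = OFF;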
File: manual.info.tmp, Node: optimizing-innodb-configuration-variables, Next: optimizing-innodb-many-tables, Prev: optimizing-innodb-diskio, Up: optimizing-innodb
Different settings work best for servers with light, predictable loads, versus servers that are running near full capacity all the time, or that experience spikes of high activity.
Because the 'InnoDB' storage engine performs many of its optimizations automatically, many performance-tuning tasks involve monitoring to ensure that the database is performing well, and changing configuration options when performance drops. See *note innodb-performance-schema:: for information about detailed 'InnoDB' performance monitoring.
The main configuration steps you can perform include:
Enabling 'InnoDB' to use high-performance memory allocators on systems that include them. See *note innodb-performance-use_sys_malloc::.
Controlling the types of data change operations for which 'InnoDB' buffers the changed data, to avoid frequent small disk writes. See *note innodb-change-buffer-configuration::. Because the default is to buffer all types of data change operations, only change this setting if you need to reduce the amount of buffering.
Turning the adaptive hash indexing feature on and off using the 'innodb_adaptive_hash_index' option. See *note innodb-adaptive-hash:: for more information. You might change this setting during periods of unusual activity, then restore it to its original setting.
Setting a limit on the number of concurrent threads that 'InnoDB' processes, if context switching is a bottleneck. See *note innodb-performance-thread_concurrency::.
Controlling the amount of prefetching that 'InnoDB' does with its read-ahead operations. When the system has unused I/O capacity, more read-ahead can improve the performance of queries. Too much read-ahead can cause periodic drops in performance on a heavily loaded system. See *note innodb-performance-read_ahead::.
Increasing the number of background threads for read or write operations, if you have a high-end I/O subsystem that is not fully utilized by the default values. See *note innodb-performance-multiple_io_threads::.
Controlling how much I/O 'InnoDB' performs in the background. See *note innodb-configuring-io-capacity::. You might scale back this setting if you observe periodic drops in performance.
Controlling the algorithm that determines when 'InnoDB' performs certain types of background writes. See *note innodb-buffer-pool-flushing::. The algorithm works for some types of workloads but not others, so you might turn off this setting if you observe periodic drops in performance.
Taking advantage of multicore processors and their cache memory configuration, to minimize delays in context switching. See *note innodb-performance-spin_lock_polling::.
Preventing one-time operations such as table scans from interfering with the frequently accessed data stored in the 'InnoDB' buffer cache. See *note innodb-performance-midpoint_insertion::.
Adjusting log files to a size that makes sense for reliability and crash recovery. 'InnoDB' log files have often been kept small to avoid long startup times after a crash. Optimizations introduced in MySQL 5.5 speed up certain steps of the crash recovery process. In particular, scanning the redo log and applying the redo log are faster due to improved algorithms for memory management. If you have kept your log files artificially small to avoid long startup times, you can now consider increasing log file size to reduce the I/O that occurs due to recycling of redo log records.
Configuring the size and number of instances for the 'InnoDB' buffer pool, especially important for systems with multi-gigabyte buffer pools. See *note innodb-multiple-buffer-pools::.
Increasing the maximum number of concurrent transactions, which dramatically improves scalability for the busiest databases. See *note innodb-undo-logs::.
Moving purge operations (a type of garbage collection) into a background thread. See *note innodb-purge-configuration::. To effectively measure the results of this setting, tune the other I/O-related and thread-related configuration settings first.
Reducing the amount of switching that 'InnoDB' does between concurrent threads, so that SQL operations on a busy server do not queue up and form a 'traffic jam'. Set a value for the 'innodb_thread_concurrency' option, up to approximately 32 for a high-powered modern system. Increase the value for the 'innodb_concurrency_tickets' option, typically to 5000 or so. This combination of options sets a cap on the number of threads that 'InnoDB' processes at any one time, and allows each thread to do substantial work before being swapped out, so that the number of waiting threads stays low and operations can complete without excessive context switching.
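Both options are dynamic; a hedged starting point for a busy, high-core-count server, following the guidance above, might be:

SET GLOBAL innodb_thread_concurrency = 32;
SET GLOBAL innodb_concurrency_tickets = 5000;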
File: manual.info.tmp, Node: optimizing-innodb-many-tables, Prev: optimizing-innodb-configuration-variables, Up: optimizing-innodb
If you have configured *note non-persistent optimizer statistics: innodb-statistics-estimation. (a non-default configuration), 'InnoDB' computes index cardinality values for a table the first time that table is accessed after startup, instead of storing such values in the table. This step can take significant time on systems that partition the data into many tables. Since this overhead only applies to the initial table open operation, to 'warm up' a table for later use, access it immediately after startup by issuing a statement such as 'SELECT 1 FROM TBL_NAME LIMIT 1'.
Optimizer statistics are persisted to disk by default, enabled by the 'innodb_stats_persistent' configuration option. For information about persistent optimizer statistics, see *note innodb-persistent-stats::.
File: manual.info.tmp, Node: optimizing-myisam, Next: optimizing-memory-tables, Prev: optimizing-innodb, Up: optimization