21.6 Management of NDB Cluster

Managing an NDB Cluster involves a number of tasks, the first of which is to configure and start NDB Cluster. This is covered in *note mysql-cluster-configuration::, and *note mysql-cluster-programs::.

The next few sections cover the management of a running NDB Cluster.

For information about security issues relating to management and deployment of an NDB Cluster, see *note mysql-cluster-security::.

There are essentially two methods of actively managing a running NDB Cluster. The first of these is through the use of commands entered into the management client whereby cluster status can be checked, log levels changed, backups started and stopped, and nodes stopped and started. The second method involves studying the contents of the cluster log 'ndb_NODE_ID_cluster.log'; this is usually found in the management server's 'DataDir' directory, but this location can be overridden using the 'LogDestination' option. (Recall that NODE_ID represents the unique identifier of the node whose activity is being logged.) The cluster log contains event reports generated by *note 'ndbd': mysql-cluster-programs-ndbd. It is also possible to send cluster log entries to a Unix system log.
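
For example, to write the cluster log somewhere other than the management server's 'DataDir', you can set 'LogDestination' in the '[ndb_mgmd]' section of the 'config.ini' file. A minimal sketch follows; the file path and rotation limits shown are illustrative only:

 [ndb_mgmd]
 # Illustrative values: write the cluster log to a custom location,
 # rotating at 1000000 bytes and keeping at most 6 log files
 LogDestination=FILE:filename=/var/log/ndb/ndb_1_cluster.log,maxsize=1000000,maxfiles=6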

Some aspects of the cluster's operation can also be monitored from an SQL node using the *note 'SHOW ENGINE NDB STATUS': show-engine. statement.

More detailed information about NDB Cluster operations is available in real time through an SQL interface using the *note 'ndbinfo': mysql-cluster-ndbinfo. database. For more information, see *note mysql-cluster-ndbinfo::.
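
For example, assuming an SQL node where the 'ndbinfo' database is available, a query such as this one (illustrative, not exhaustive) returns current data node memory usage:

 mysql> SELECT node_id, memory_type, used, total
     -> FROM ndbinfo.memoryusage;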

NDB statistics counters provide improved monitoring using the *note 'mysql': mysql. client. These counters, implemented in the NDB kernel, relate to operations performed by or affecting 'Ndb' (https://dev.mysql.com/doc/ndbapi/en/ndb-ndb.html) objects, such as starting, closing, and aborting transactions; primary key and unique key operations; table, range, and pruned scans; blocked threads waiting for various operations to complete; and data and events sent and received by NDB Cluster. The counters are incremented by the NDB kernel whenever NDB API calls are made or data is sent to or received by the data nodes.

*note 'mysqld': mysqld. exposes the NDB API statistics counters as system status variables, which can be identified from the prefix common to all of their names ('Ndb_api_'). The values of these variables can be read in the *note 'mysql': mysql. client from the output of a *note 'SHOW STATUS': show-status. statement, or by querying either the *note 'SESSION_STATUS': information-schema-status-table. table or the *note 'GLOBAL_STATUS': information-schema-status-table. table (in the 'INFORMATION_SCHEMA' database). By comparing the values of the status variables before and after the execution of an SQL statement that acts on *note 'NDB': mysql-cluster. tables, you can observe the actions taken on the NDB API level that correspond to this statement, which can be beneficial for monitoring and performance tuning of NDB Cluster.
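
A minimal sketch of this technique is shown here; the table 't1' is a hypothetical *note 'NDB': mysql-cluster. table, and 'Ndb_api_read_row_count' stands in for any of the 'Ndb_api_' status variables:

 mysql> SHOW STATUS LIKE 'Ndb_api_read_row_count%';
 mysql> SELECT COUNT(*) FROM t1;
 mysql> SHOW STATUS LIKE 'Ndb_api_read_row_count%';

The difference between the two sets of counter values reflects the NDB API activity generated by the intervening statement.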

MySQL Cluster Manager provides an advanced command-line interface that simplifies many otherwise complex NDB Cluster management tasks, such as starting, stopping, or restarting an NDB Cluster with a large number of nodes. The MySQL Cluster Manager client also supports commands for getting and setting the values of most node configuration parameters as well as *note 'mysqld': mysqld. server options and variables relating to NDB Cluster. See MySQL Cluster Manager 1.4.8 User Manual (https://dev.mysql.com/doc/mysql-cluster-manager/1.4/en/), for more information.

 File: manual.info.tmp, Node: mysql-cluster-mgm-client-commands, Next: mysql-cluster-logs-ndb-messages, Prev: mysql-cluster-management, Up: mysql-cluster-management

21.6.1 Commands in the NDB Cluster Management Client

In addition to the central configuration file, a cluster may also be controlled through a command-line interface available through the management client *note 'ndb_mgm': mysql-cluster-programs-ndb-mgm. This is the primary administrative interface to a running cluster.

Commands for the event logs are given in *note mysql-cluster-event-reports::; commands for creating backups and restoring from them are provided in *note mysql-cluster-backup::.

Using ndb_mgm with MySQL Cluster Manager

MySQL Cluster Manager handles starting and stopping processes and tracks their states internally, so it is not necessary to use *note 'ndb_mgm': mysql-cluster-programs-ndb-mgm. for these tasks for an NDB Cluster that is under MySQL Cluster Manager control. It is recommended not to use the *note 'ndb_mgm': mysql-cluster-programs-ndb-mgm. command-line client that comes with the NDB Cluster distribution to perform operations that involve starting or stopping nodes. These include but are not limited to the 'START', 'STOP', 'RESTART', and 'SHUTDOWN' commands. For more information, see MySQL Cluster Manager Process Commands (https://dev.mysql.com/doc/mysql-cluster-manager/1.4/en/mcm-process-commands.html).

The management client has the following basic commands. In the listing that follows, NODE_ID denotes either a data node ID or the keyword 'ALL', which indicates that the command should be applied to all of the cluster's data nodes.
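
A few such commands are illustrated here: 'SHOW' displays a summary of all cluster nodes, 'ALL STATUS' and 'ALL REPORT MEMORYUSAGE' report on all data nodes, and '5 STOP' stops the data node having node ID 5 (an arbitrary example):

 ndb_mgm> SHOW
 ndb_mgm> ALL STATUS
 ndb_mgm> ALL REPORT MEMORYUSAGE
 ndb_mgm> 5 STOP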

Additional commands

A number of other commands available in the *note 'ndb_mgm': mysql-cluster-programs-ndb-mgm. client are described elsewhere, as shown in the following list:

 File: manual.info.tmp, Node: mysql-cluster-logs-ndb-messages, Next: mysql-cluster-event-reports, Prev: mysql-cluster-mgm-client-commands, Up: mysql-cluster-management

21.6.2 NDB Cluster Log Messages

This section contains information about the messages written to the cluster log in response to different cluster log events. It provides additional, more specific information on *note 'NDB': mysql-cluster. transporter errors.

 File: manual.info.tmp, Node: mysql-cluster-logs-cluster-log, Next: mysql-cluster-log-startup-messages, Prev: mysql-cluster-logs-ndb-messages, Up: mysql-cluster-logs-ndb-messages

21.6.2.1 NDB Cluster: Messages in the Cluster Log
.................................................

The following table lists the most common *note 'NDB': mysql-cluster. cluster log messages. For information about the cluster log, log events, and event types, see *note mysql-cluster-event-reports::. These log messages also correspond to log event types in the MGM API; see The Ndb_logevent_type Type (https://dev.mysql.com/doc/ndbapi/en/mgm-types.html#mgm-ndb-logevent-type), for related information of interest to Cluster API developers.

Common NDB cluster log messages

For each log message shown here, the corresponding event name, event type, priority, and severity are given, followed by a description.

'Node MGM_NODE_ID: Node DATA_NODE_ID Connected'

Event: 'Connected' (type 'Connection', priority 8, severity 'INFO'). The data node having node ID DATA_NODE_ID has connected to the management server (node MGM_NODE_ID).

'Node MGM_NODE_ID: Node DATA_NODE_ID Disconnected'

Event: 'Disconnected' (type 'Connection', priority 8, severity 'ALERT'). The data node having node ID DATA_NODE_ID has disconnected from the management server (node MGM_NODE_ID).

'Node DATA_NODE_ID: Communication to Node API_NODE_ID closed'

Event: 'CommunicationClosed' (type 'Connection', priority 8, severity 'INFO'). The API node or SQL node having node ID API_NODE_ID is no longer communicating with data node DATA_NODE_ID.

'Node DATA_NODE_ID: Communication to Node API_NODE_ID opened'

Event: 'CommunicationOpened' (type 'Connection', priority 8, severity 'INFO'). The API node or SQL node having node ID API_NODE_ID is now communicating with data node DATA_NODE_ID.

'Node MGM_NODE_ID: Node API_NODE_ID: API VERSION'

Event: 'ConnectedApiVersion' (type 'Connection', priority 8, severity 'INFO'). The API node having node ID API_NODE_ID has connected to management node MGM_NODE_ID using *note 'NDB': mysql-cluster. API version VERSION (generally the same as the MySQL version number).

'Node NODE_ID: Global checkpoint GCI started'

Event: 'GlobalCheckpointStarted' (type 'Checkpoint', priority 9, severity 'INFO'). A global checkpoint with the ID GCI has been started; node NODE_ID is the master responsible for this global checkpoint.

'Node NODE_ID: Global checkpoint GCI completed'

Event: 'GlobalCheckpointCompleted' (type 'Checkpoint', priority 10, severity 'INFO'). The global checkpoint having the ID GCI has been completed; node NODE_ID was the master responsible for this global checkpoint.

'Node NODE_ID: Local checkpoint LCP started. Keep GCI = CURRENT_GCI oldest restorable GCI = OLD_GCI'

Event: 'LocalCheckpointStarted' (type 'Checkpoint', priority 7, severity 'INFO'). The local checkpoint having sequence ID LCP has been started on node NODE_ID. The most recent GCI that can be used has the index CURRENT_GCI, and the oldest GCI from which the cluster can be restored has the index OLD_GCI.

'Node NODE_ID: Local checkpoint LCP completed'

Event: 'LocalCheckpointCompleted' (type 'Checkpoint', priority 8, severity 'INFO'). The local checkpoint having sequence ID LCP on node NODE_ID has been completed.

'Node NODE_ID: Local Checkpoint stopped in CALCULATED_KEEP_GCI'

Event: 'LCPStoppedInCalcKeepGci' (type 'Checkpoint', priority 0, severity 'ALERT'). The node was unable to determine the most recent usable GCI.

'Node NODE_ID: Table ID = TABLE_ID, fragment ID = FRAGMENT_ID has completed LCP on Node NODE_ID maxGciStarted: STARTED_GCI maxGciCompleted: COMPLETED_GCI'

Event: 'LCPFragmentCompleted' (type 'Checkpoint', priority 11, severity 'INFO'). A table fragment has been checkpointed to disk on node NODE_ID. The GCI in progress has the index STARTED_GCI, and the most recent GCI to have been completed has the index COMPLETED_GCI.

'Node NODE_ID: ACC Blocked NUM_1 and TUP Blocked NUM_2 times last second'

Event: 'UndoLogBlocked' (type 'Checkpoint', priority 7, severity 'INFO'). Undo logging is blocked because the log buffer is close to overflowing.

'Node NODE_ID: Start initiated VERSION'

Event: 'NDBStartStarted' (type 'StartUp', priority 1, severity 'INFO'). Data node NODE_ID, running *note 'NDB': mysql-cluster. version VERSION, is beginning its startup process.

'Node NODE_ID: Started VERSION'

Event: 'NDBStartCompleted' (type 'StartUp', priority 1, severity 'INFO'). Data node NODE_ID, running *note 'NDB': mysql-cluster. version VERSION, has started successfully.

'Node NODE_ID: STTORRY received after restart finished'

Event: 'STTORRYRecieved' (type 'StartUp', priority 15, severity 'INFO'). The node has received a signal indicating that a cluster restart has completed.

'Node NODE_ID: Start phase PHASE completed (TYPE)'

Event: 'StartPhaseCompleted' (type 'StartUp', priority 4, severity 'INFO'). The node has completed start phase PHASE of a TYPE start. For a listing of start phases, see *note mysql-cluster-start-phases::. (TYPE is one of 'initial', 'system', 'node', 'initial node', or ''.)

'Node NODE_ID: CM_REGCONF president = PRESIDENT_ID, own Node = OWN_ID, our dynamic id = DYNAMIC_ID'

Event: 'CM_REGCONF' (type 'StartUp', priority 3, severity 'INFO'). Node PRESIDENT_ID has been selected as 'president'. OWN_ID and DYNAMIC_ID should always be the same as the ID (NODE_ID) of the reporting node.

'Node NODE_ID: CM_REGREF from Node PRESIDENT_ID to our Node NODE_ID. Cause = CAUSE'

Event: 'CM_REGREF' (type 'StartUp', priority 8, severity 'INFO'). The reporting node (ID NODE_ID) was unable to accept node PRESIDENT_ID as president. The CAUSE of the problem is given as one of 'Busy', 'Election with wait = false', 'Not president', 'Election without selecting new candidate', or 'No such cause'.

'Node NODE_ID: We are Node OWN_ID with dynamic ID DYNAMIC_ID, our left neighbor is Node ID_1, our right is Node ID_2'

Event: 'FIND_NEIGHBOURS' (type 'StartUp', priority 8, severity 'INFO'). The node has discovered its neighboring nodes in the cluster (node ID_1 and node ID_2). NODE_ID, OWN_ID, and DYNAMIC_ID should always be the same; if they are not, this indicates a serious misconfiguration of the cluster nodes.

'Node NODE_ID: TYPE shutdown initiated'

Event: 'NDBStopStarted' (type 'StartUp', priority 1, severity 'INFO'). The node has received a shutdown signal. The TYPE of shutdown is either 'Cluster' or 'Node'.

'Node NODE_ID: Node shutdown completed '[', ACTION']['Initiated by signal SIGNAL.']

Event: 'NDBStopCompleted' (type 'StartUp', priority 1, severity 'INFO'). The node has been shut down. This report may include an ACTION, which if present is one of 'restarting', 'no start', or 'initial'. The report may also include a reference to an *note 'NDB': mysql-cluster. Protocol SIGNAL; for possible signals, refer to Operations and Signals (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-ndb-protocol-operations-signals.html).

'Node NODE_ID: Forced node shutdown completed '[', action']'.' ['Occurred during startphase START_PHASE.'][' Initiated by SIGNAL.'] ['Caused by error ERROR_CODE: 'ERROR_MESSAGE(ERROR_CLASSIFICATION). ERROR_STATUS'.' ['(extra info EXTRA_CODE)']]

Event: 'NDBStopForced' (type 'StartUp', priority 1, severity 'ALERT'). The node has been forcibly shut down. The ACTION (one of 'restarting', 'no start', or 'initial') subsequently being taken, if any, is also reported. If the shutdown occurred while the node was starting, the report includes the START_PHASE during which the node failed. If this was a result of a SIGNAL sent to the node, this information is also provided (see Operations and Signals (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-ndb-protocol-operations-signals.html), for more information). If the error causing the failure is known, this is also included; for more information about *note 'NDB': mysql-cluster. error messages and classifications, see NDB Cluster API Errors (https://dev.mysql.com/doc/ndb-internals/en/ndb-errors.html).

'Node NODE_ID: Node shutdown aborted'

Event: 'NDBStopAborted' (type 'StartUp', priority 1, severity 'INFO'). The node shutdown process was aborted by the user.

'Node NODE_ID: StartLog: [GCI Keep: KEEP_POS LastCompleted: LAST_POS NewestRestorable: RESTORE_POS]'

Event: 'StartREDOLog' (type 'StartUp', priority 4, severity 'INFO'). This reports global checkpoints referenced during a node start. The redo log prior to KEEP_POS is dropped. LAST_POS is the last global checkpoint in which the data node participated; RESTORE_POS is the global checkpoint which is actually used to restore all data nodes.

STARTUP_MESSAGE [Listed separately; see below.]

Event: 'StartReport' (type 'StartUp', priority 4, severity 'INFO'). There are a number of possible startup messages that can be logged under different circumstances. These are listed separately; see *note mysql-cluster-log-startup-messages::.

'Node NODE_ID: Node restart completed copy of dictionary information'

Event: 'NR_CopyDict' (type 'NodeRestart', priority 8, severity 'INFO'). Copying of data dictionary information to the restarted node has been completed.

'Node NODE_ID: Node restart completed copy of distribution information'

Event: 'NR_CopyDistr' (type 'NodeRestart', priority 8, severity 'INFO'). Copying of data distribution information to the restarted node has been completed.

'Node NODE_ID: Node restart starting to copy the fragments to Node NODE_ID'

Event: 'NR_CopyFragsStarted' (type 'NodeRestart', priority 8, severity 'INFO'). Copying of fragments to starting data node NODE_ID has begun.

'Node NODE_ID: Table ID = TABLE_ID, fragment ID = FRAGMENT_ID have been copied to Node NODE_ID'

Event: 'NR_CopyFragDone' (type 'NodeRestart', priority 10, severity 'INFO'). Fragment FRAGMENT_ID from table TABLE_ID has been copied to data node NODE_ID.

'Node NODE_ID: Node restart completed copying the fragments to Node NODE_ID'

Event: 'NR_CopyFragsCompleted' (type 'NodeRestart', priority 8, severity 'INFO'). Copying of all table fragments to restarting data node NODE_ID has been completed.

'Node NODE_ID: Node NODE1_ID completed failure of Node NODE2_ID'

Event: 'NodeFailCompleted' (type 'NodeRestart', priority 8, severity 'ALERT'). Data node NODE1_ID has detected the failure of data node NODE2_ID.

'All nodes completed failure of Node NODE_ID'

Event: 'NodeFailCompleted' (type 'NodeRestart', priority 8, severity 'ALERT'). All (remaining) data nodes have detected the failure of data node NODE_ID.

'Node failure of NODE_ID BLOCK completed'

Event: 'NodeFailCompleted' (type 'NodeRestart', priority 8, severity 'ALERT'). The failure of data node NODE_ID has been detected in the BLOCK *note 'NDB': mysql-cluster. kernel block, where BLOCK is one of 'DBTC' (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks-dbtc.html), 'DBDICT' (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks-dbdict.html), 'DBDIH' (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks-dbdih.html), or 'DBLQH' (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks-dblqh.html); for more information, see NDB Kernel Blocks (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks.html).

'Node MGM_NODE_ID: Node DATA_NODE_ID has failed. The Node state at failure was STATE_CODE'

Event: 'NODE_FAILREP' (type 'NodeRestart', priority 8, severity 'ALERT'). A data node has failed. Its state at the time of failure is described by an arbitration state code STATE_CODE; possible state code values can be found in the file 'include/kernel/signaldata/ArbitSignalData.hpp'.

'President restarts arbitration thread [state=STATE_CODE]' or 'Prepare arbitrator node NODE_ID [ticket=TICKET_ID]' or 'Receive arbitrator node NODE_ID [ticket=TICKET_ID]' or 'Started arbitrator node NODE_ID [ticket=TICKET_ID]' or 'Lost arbitrator node NODE_ID - process failure [state=STATE_CODE]' or 'Lost arbitrator node NODE_ID - process exit [state=STATE_CODE]' or 'Lost arbitrator node NODE_ID - ERROR_MESSAGE [state=STATE_CODE]'

Event: 'ArbitState' (type 'NodeRestart', priority 6, severity 'INFO'). This is a report on the current state and progress of arbitration in the cluster. NODE_ID is the node ID of the management node or SQL node selected as the arbitrator. STATE_CODE is an arbitration state code, as found in 'include/kernel/signaldata/ArbitSignalData.hpp'. When an error has occurred, an ERROR_MESSAGE, also defined in 'ArbitSignalData.hpp', is provided. TICKET_ID is a unique identifier handed out by the arbitrator when it is selected to all the nodes that participated in its selection; this is used to ensure that each node requesting arbitration was one of the nodes that took part in the selection process.

'Arbitration check lost - less than 1/2 nodes left' or 'Arbitration check won - all node groups and more than 1/2 nodes left' or 'Arbitration check won - node group majority' or 'Arbitration check lost - missing node group' or 'Network partitioning - arbitration required' or 'Arbitration won - positive reply from node NODE_ID' or 'Arbitration lost - negative reply from node NODE_ID' or 'Network partitioning - no arbitrator available' or 'Network partitioning - no arbitrator configured' or 'Arbitration failure - ERROR_MESSAGE [state=STATE_CODE]'

Event: 'ArbitResult' (type 'NodeRestart', priority 2, severity 'ALERT'). This message reports on the result of arbitration. In the event of arbitration failure, an ERROR_MESSAGE and an arbitration STATE_CODE are provided; definitions for both of these are found in 'include/kernel/signaldata/ArbitSignalData.hpp'.

'Node NODE_ID: GCP Take over started'

Event: 'GCP_TakeoverStarted' (type 'NodeRestart', priority 7, severity 'INFO'). This node is attempting to assume responsibility for the next global checkpoint (that is, it is becoming the master node).

'Node NODE_ID: GCP Take over completed'

Event: 'GCP_TakeoverCompleted' (type 'NodeRestart', priority 7, severity 'INFO'). This node has become the master, and has assumed responsibility for the next global checkpoint.

'Node NODE_ID: LCP Take over started'

Event: 'LCP_TakeoverStarted' (type 'NodeRestart', priority 7, severity 'INFO'). This node is attempting to assume responsibility for the next set of local checkpoints (that is, it is becoming the master node).

'Node NODE_ID: LCP Take over completed'

Event: 'LCP_TakeoverCompleted' (type 'NodeRestart', priority 7, severity 'INFO'). This node has become the master, and has assumed responsibility for the next set of local checkpoints.

'Node NODE_ID: Trans. Count = TRANSACTIONS, Commit Count = COMMITS, Read Count = READS, Simple Read Count = SIMPLE_READS, Write Count = WRITES, AttrInfo Count = ATTRINFO_OBJECTS, Concurrent Operations = CONCURRENT_OPERATIONS, Abort Count = ABORTS, Scans = SCANS, Range scans = RANGE_SCANS'

Event: 'TransReportCounters' (type 'Statistic', priority 8, severity 'INFO'). This report of transaction activity is given approximately once every 10 seconds.

'Node NODE_ID: Operations=OPERATIONS'

Event: 'OperationReportCounters' (type 'Statistic', priority 8, severity 'INFO'). Number of operations performed by this node, provided approximately once every 10 seconds.

'Node NODE_ID: Table with ID = TABLE_ID created'

Event: 'TableCreated' (type 'Statistic', priority 7, severity 'INFO'). A table having the table ID shown has been created.

'Node NODE_ID: Mean loop Counter in doJob last 8192 times = COUNT'

Event: 'JobStatistic' (type 'Statistic', priority 9, severity 'INFO'). Mean internal job scheduling statistics.

'Mean send size to Node = NODE_ID last 4096 sends = BYTES bytes'

Event: 'SendBytesStatistic' (type 'Statistic', priority 9, severity 'INFO'). This node is sending an average of BYTES bytes per send to node NODE_ID.

'Mean receive size to Node = NODE_ID last 4096 sends = BYTES bytes'

Event: 'ReceiveBytesStatistic' (type 'Statistic', priority 9, severity 'INFO'). This node is receiving an average of BYTES bytes of data each time it receives data from node NODE_ID.

'Node NODE_ID: Data usage is DATA_MEMORY_PERCENTAGE% (DATA_PAGES_USED 32K pages of total DATA_PAGES_TOTAL)' / 'Node NODE_ID: Index usage is INDEX_MEMORY_PERCENTAGE% (INDEX_PAGES_USED 8K pages of total INDEX_PAGES_TOTAL)'

Event: 'MemoryUsage' (type 'Statistic', priority 5, severity 'INFO'). This report is generated when a 'DUMP 1000' (https://dev.mysql.com/doc/ndb-internals/en/dump-command-1000.html) command is issued in the cluster management client.

'Node NODE1_ID: Transporter to node NODE2_ID reported error ERROR_CODE: ERROR_MESSAGE'

Event: 'TransporterError' (type 'Error', priority 2, severity 'ERROR'). A transporter error occurred while communicating with node NODE2_ID; for a listing of transporter error codes and messages, see NDB Transporter Errors (https://dev.mysql.com/doc/ndb-internals/en/ndb-transporter-errors.html), in MySQL NDB Cluster Internals Manual (https://dev.mysql.com/doc/ndb-internals/en/).

'Node NODE1_ID: Transporter to node NODE2_ID reported error ERROR_CODE: ERROR_MESSAGE'

Event: 'TransporterWarning' (type 'Error', priority 8, severity 'WARNING'). A warning of a potential transporter problem while communicating with node NODE2_ID; for a listing of transporter error codes and messages, see NDB Transporter Errors (https://dev.mysql.com/doc/ndb-internals/en/ndb-transporter-errors.html), for more information.

'Node NODE1_ID: Node NODE2_ID missed heartbeat HEARTBEAT_ID'

Event: 'MissedHeartbeat' (type 'Error', priority 8, severity 'WARNING'). This node missed a heartbeat from node NODE2_ID.

'Node NODE1_ID: Node NODE2_ID declared dead due to missed heartbeat'

Event: 'DeadDueToHeartbeat' (type 'Error', priority 8, severity 'ALERT'). This node has missed at least 3 heartbeats from node NODE2_ID, and so has declared that node 'dead'.

'Node NODE1_ID: Node Sent Heartbeat to node = NODE2_ID'

Event: 'SentHeartbeat' (type 'Info', priority 12, severity 'INFO'). This node has sent a heartbeat to node NODE2_ID.

(NDB 7.5.0 and earlier:) 'Node NODE_ID: Event buffer status: used=BYTES_USED (PERCENT_USED%) alloc=BYTES_ALLOCATED (PERCENT_AVAILABLE%) max=BYTES_AVAILABLE apply_epoch=LATEST_RESTORABLE_EPOCH latest_epoch=LATEST_EPOCH'

Event: 'EventBufferStatus' (type 'Info', priority 7, severity 'INFO'). This report is seen during heavy event buffer usage, for example, when many updates are being applied in a relatively short period of time; the report shows the number of bytes and the percentage of event buffer memory used, the bytes allocated and percentage still available, and the latest and latest restorable epochs.

(NDB 7.5.1 and later:) 'Node NODE_ID: Event buffer status (OBJECT_ID): used=BYTES_USED (PERCENT_USED% of alloc) alloc=BYTES_ALLOCATED max=BYTES_AVAILABLE latest_consumed_epoch=LATEST_CONSUMED_EPOCH latest_buffered_epoch=LATEST_BUFFERED_EPOCH report_reason=REPORT_REASON'

Event: 'EventBufferStatus2' (type 'Info', priority 7, severity 'INFO'). This report is seen during heavy event buffer usage, for example, when many updates are being applied in a relatively short period of time; the report shows the number of bytes and the percentage of event buffer memory used, the bytes allocated and percentage still available, and the latest buffered and consumed epochs; for more information, see *note mysql-cluster-logs-event-buffer::.

'Node NODE_ID: Entering single user mode', 'Node NODE_ID: Entered single user mode Node API_NODE_ID has exclusive access', 'Node NODE_ID: Exiting single user mode'

Event: 'SingleUser' (type 'Info', priority 7, severity 'INFO'). These reports are written to the cluster log when entering and exiting single user mode; API_NODE_ID is the node ID of the API node or SQL node having exclusive access to the cluster (for more information, see *note mysql-cluster-single-user-mode::); the message 'Unknown single user report API_NODE_ID' indicates an error has taken place and should never be seen in normal operation.

'Node NODE_ID: Backup BACKUP_ID started from node MGM_NODE_ID'

Event: 'BackupStarted' (type 'Backup', priority 7, severity 'INFO'). A backup has been started using the management node having MGM_NODE_ID; this message is also displayed in the cluster management client when the *note 'START BACKUP': mysql-cluster-backup-using-management-client. command is issued; for more information, see *note mysql-cluster-backup-using-management-client::.

'Node NODE_ID: Backup BACKUP_ID started from node MGM_NODE_ID completed. StartGCP: START_GCP StopGCP: STOP_GCP #Records: RECORDS #LogRecords: LOG_RECORDS Data: DATA_BYTES bytes Log: LOG_BYTES bytes'

Event: 'BackupCompleted' (type 'Backup', priority 7, severity 'INFO'). The backup having the ID BACKUP_ID has been completed; for more information, see *note mysql-cluster-backup-using-management-client::.

'Node NODE_ID: Backup request from MGM_NODE_ID failed to start. Error: ERROR_CODE'

Event: 'BackupFailedToStart' (type 'Backup', priority 7, severity 'ALERT'). The backup failed to start; for error codes, see MGM API Errors (https://dev.mysql.com/doc/ndbapi/en/mgm-errors.html).

'Node NODE_ID: Backup BACKUP_ID started from MGM_NODE_ID has been aborted. Error: ERROR_CODE'

Event: 'BackupAborted' (type 'Backup', priority 7, severity 'ALERT'). The backup was terminated after starting, possibly due to user intervention.

 File: manual.info.tmp, Node: mysql-cluster-log-startup-messages, Next: mysql-cluster-logs-event-buffer, Prev: mysql-cluster-logs-cluster-log, Up: mysql-cluster-logs-ndb-messages

21.6.2.2 NDB Cluster Log Startup Messages
.........................................

Possible startup messages with descriptions are provided in the following list:

 File: manual.info.tmp, Node: mysql-cluster-logs-event-buffer, Next: mysql-cluster-ndb-transporter-errors, Prev: mysql-cluster-log-startup-messages, Up: mysql-cluster-logs-ndb-messages

21.6.2.3 Event Buffer Reporting in the Cluster Log
..................................................

'NDB' uses one or more memory buffers for events received from the data nodes. There is one such buffer for each 'Ndb' (https://dev.mysql.com/doc/ndbapi/en/ndb-ndb.html) object subscribing to table events, which means that there are usually two buffers for each *note 'mysqld': mysqld. performing binary logging (one buffer for schema events, and one for data events). Each buffer contains epochs made up of events. These events consist of operation types (insert, update, delete) and row data (before and after images plus metadata).

'NDB' generates messages in the cluster log to describe the state of these buffers. Although these reports appear in the cluster log, they refer to buffers on API nodes (unlike most other cluster log messages, which are generated by data nodes). These messages and the data structures underlying them were changed significantly in NDB 7.5.1, with the addition of the 'NDB_LE_EventBufferStatus2' event type and the 'ndb_logevent_EventBufferStatus2' data structure (see The Ndb_logevent_type Type (https://dev.mysql.com/doc/ndbapi/en/mgm-types.html#mgm-ndb-logevent-type)). The remainder of this discussion focuses on the implementation based on 'NDB_LE_EventBufferStatus2'.

Event buffer logging reports in the cluster log use the format shown here:

 Node NODE_ID: Event buffer status (OBJECT_ID):
 used=BYTES_USED (PERCENT_USED% of alloc)
 alloc=BYTES_ALLOCATED (PERCENT_ALLOC% of max) max=BYTES_AVAILABLE
 latest_consumed_epoch=LATEST_CONSUMED_EPOCH
 latest_buffered_epoch=LATEST_BUFFERED_EPOCH
 report_reason=REPORT_REASON

The fields making up this report are listed here, with descriptions:

The 'latest_consumed_epoch' and 'latest_buffered_epoch' fields correspond, respectively, to the 'apply_gci' and 'latest_gci' fields of the old-style event buffer logging messages used prior to NDB 7.5.1.

Possible reasons for reporting are described in the following list:

 File: manual.info.tmp, Node: mysql-cluster-ndb-transporter-errors, Prev: mysql-cluster-logs-event-buffer, Up: mysql-cluster-logs-ndb-messages

21.6.2.4 NDB Cluster: NDB Transporter Errors
............................................

This section lists error codes, names, and messages that are written to the cluster log in the event of transporter errors.

  1. 0x00

    TE_NO_ERROR

    'No error'

  2. 0x01

    TE_ERROR_CLOSING_SOCKET

    'Error found during closing of socket'

  3. 0x02

    TE_ERROR_IN_SELECT_BEFORE_ACCEPT

    'Error found before accept. The transporter will retry'

  4. 0x03

    TE_INVALID_MESSAGE_LENGTH

    'Error found in message (invalid message length)'

  5. 0x04

    TE_INVALID_CHECKSUM

    'Error found in message (checksum)'

  6. 0x05

    TE_COULD_NOT_CREATE_SOCKET

    'Error found while creating socket(can't create socket)'

  7. 0x06

    TE_COULD_NOT_BIND_SOCKET

    'Error found while binding server socket'

  8. 0x07

    TE_LISTEN_FAILED

    'Error found while listening to server socket'

  9. 0x08

    TE_ACCEPT_RETURN_ERROR

    'Error found during accept(accept return error)'

  10. 0x0b

    TE_SHM_DISCONNECT

    'The remote node has disconnected'

  11. 0x0c

    TE_SHM_IPC_STAT

    'Unable to check shm segment'

  12. 0x0d

    TE_SHM_UNABLE_TO_CREATE_SEGMENT

    'Unable to create shm segment'

  13. 0x0e

    TE_SHM_UNABLE_TO_ATTACH_SEGMENT

    'Unable to attach shm segment'

  14. 0x0f

    TE_SHM_UNABLE_TO_REMOVE_SEGMENT

    'Unable to remove shm segment'

  15. 0x10

    TE_TOO_SMALL_SIGID

    'Sig ID too small'

  16. 0x11

    TE_TOO_LARGE_SIGID

    'Sig ID too large'

  17. 0x12

    TE_WAIT_STACK_FULL

    'Wait stack was full'

  18. 0x13

    TE_RECEIVE_BUFFER_FULL

    'Receive buffer was full'

  19. 0x14

    TE_SIGNAL_LOST_SEND_BUFFER_FULL

    'Send buffer was full,and trying to force send fails'

  20. 0x15

    TE_SIGNAL_LOST

    'Send failed for unknown reason(signal lost)'

  21. 0x16

    TE_SEND_BUFFER_FULL

    'The send buffer was full, but sleeping for a while solved'

  22. 0x21

    TE_SHM_IPC_PERMANENT

    'Shm ipc Permanent error'

Note:

Transporter error codes 0x17 through 0x20 and 0x22 are reserved for SCI connections, which are not supported in this version of NDB Cluster, and so are not included here.

 File: manual.info.tmp, Node: mysql-cluster-event-reports, Next: mysql-cluster-start-phases, Prev: mysql-cluster-logs-ndb-messages, Up: mysql-cluster-management

21.6.3 Event Reports Generated in NDB Cluster

In this section, we discuss the types of event logs provided by NDB Cluster, and the types of events that are logged.

NDB Cluster provides two types of event log:

  * The cluster log, which includes events generated by all cluster nodes and is written on the management server.

  * Node logs, which are local to each node.

Each reportable event can be distinguished according to three different criteria:

  * Category: such as 'STARTUP', 'CHECKPOINT', 'STATISTICS', or 'ERROR' (the full list of categories appears in the threshold table in *note mysql-cluster-logging-management-commands::).

  * Priority: a number from 0 to 15 inclusive, where 0 indicates 'most important' and 15 'least important'.

  * Severity level: one of 'ALERT', 'CRITICAL', 'ERROR', 'WARNING', 'INFO', or 'DEBUG'.

The cluster log can be filtered on these properties using the NDB management client *note 'CLUSTERLOG': mysql-cluster-logging-management-commands. command. This command affects the cluster log only, and has no effect on the node logs; debug logging in one or more node logs can be turned on and off using the *note 'ndb_mgm': mysql-cluster-programs-ndb-mgm. 'NODELOG DEBUG' command.
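
For example, the following management client commands (with illustrative values) set the cluster log threshold for 'CHECKPOINT' events on all data nodes, and turn on debug logging in the node log of data node 2:

 ndb_mgm> ALL CLUSTERLOG CHECKPOINT=8
 ndb_mgm> 2 NODELOG DEBUG ON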

The format used in the cluster log is as shown here:

 2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 1: Data usage is 2%(60 32K pages of total 2560)
 2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 1: Index usage is 1%(24 8K pages of total 2336)
 2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 1: Resource 0 min: 0 max: 639 curr: 0
 2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 2: Data usage is 2%(76 32K pages of total 2560)
 2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 2: Index usage is 1%(24 8K pages of total 2336)
 2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 2: Resource 0 min: 0 max: 639 curr: 0
 2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 3: Data usage is 2%(58 32K pages of total 2560)
 2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 3: Index usage is 1%(25 8K pages of total 2336)
 2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 3: Resource 0 min: 0 max: 639 curr: 0
 2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 4: Data usage is 2%(74 32K pages of total 2560)
 2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 4: Index usage is 1%(25 8K pages of total 2336)
 2007-01-26 19:35:55 [MgmSrvr] INFO     -- Node 4: Resource 0 min: 0 max: 639 curr: 0
 2007-01-26 19:39:42 [MgmSrvr] INFO     -- Node 4: Node 9 Connected
 2007-01-26 19:39:42 [MgmSrvr] INFO     -- Node 1: Node 9 Connected
 2007-01-26 19:39:42 [MgmSrvr] INFO     -- Node 1: Node 9: API 5.7.44-ndb-7.5.36
 2007-01-26 19:39:42 [MgmSrvr] INFO     -- Node 2: Node 9 Connected
 2007-01-26 19:39:42 [MgmSrvr] INFO     -- Node 2: Node 9: API 5.7.44-ndb-7.5.36
 2007-01-26 19:39:42 [MgmSrvr] INFO     -- Node 3: Node 9 Connected
 2007-01-26 19:39:42 [MgmSrvr] INFO     -- Node 3: Node 9: API 5.7.44-ndb-7.5.36
 2007-01-26 19:39:42 [MgmSrvr] INFO     -- Node 4: Node 9: API 5.7.44-ndb-7.5.36
 2007-01-26 19:59:22 [MgmSrvr] ALERT    -- Node 2: Node 7 Disconnected
 2007-01-26 19:59:22 [MgmSrvr] ALERT    -- Node 2: Node 7 Disconnected

Each line in the cluster log contains the following information:

  * A timestamp in YYYY-MM-DD HH:MM:SS format

  * The type of node performing the logging (in the cluster log, this is always '[MgmSrvr]')

  * The severity of the event ('INFO', 'ALERT', and so on)

  * A description of the event, generally beginning with the ID of the node reporting it

For additional information, see *note mysql-cluster-log-events::.

 File: manual.info.tmp, Node: mysql-cluster-logging-management-commands, Next: mysql-cluster-log-events, Prev: mysql-cluster-event-reports, Up: mysql-cluster-event-reports

21.6.3.1 NDB Cluster Logging Management Commands
................................................

*note 'ndb_mgm': mysql-cluster-programs-ndb-mgm. supports a number of management commands related to the cluster log and node logs. In the listing that follows, NODE_ID denotes either a data node ID or the keyword 'ALL', which indicates that the command should be applied to all of the cluster's data nodes.

The following table describes the default setting (for all data nodes) of the cluster log category threshold. If an event has a priority with a value lower than or equal to the priority threshold, it is reported in the cluster log.

Note:

Events are reported per data node, and the threshold can be set to different values on different nodes.

Cluster log categories, with default threshold setting

Category Default threshold (All data nodes)

'STARTUP' '7'

'SHUTDOWN' '7'

'STATISTICS' '7'

'CHECKPOINT' '7'

'NODERESTART' '7'

'CONNECTION' '8'

'ERROR' '15'

'INFO' '7'

'BACKUP' '15'

'CONGESTION' '7'

'SCHEMA' '7'

The 'STATISTICS' category can provide a great deal of useful data. See *note mysql-cluster-log-statistics::, for more information.

Thresholds are used to filter events within each category. For example, a 'STARTUP' event with a priority of 3 is not logged unless the threshold for 'STARTUP' is set to 3 or higher. Only events with priority 3 or lower are sent if the threshold is 3.
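
Continuing the example just given, the following command raises the 'STARTUP' threshold to 3 on all data nodes, so that 'STARTUP' events having priority 3 or lower are logged:

 ndb_mgm> ALL CLUSTERLOG STARTUP=3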

The following table shows the event severity levels.

Note:

These correspond to Unix 'syslog' levels, except for 'LOG_EMERG' and 'LOG_NOTICE', which are not used or mapped.

Event severity levels

Severity Level Value   Severity     Description

1                      'ALERT'      A condition that should be corrected immediately, such as a corrupted system database

2                      'CRITICAL'   Critical conditions, such as device errors or insufficient resources

3                      'ERROR'      Conditions that should be corrected, such as configuration errors

4                      'WARNING'    Conditions that are not errors, but that might require special handling

5                      'INFO'       Informational messages

6                      'DEBUG'      Debugging messages used for *note 'NDBCLUSTER': mysql-cluster. development

Event severity levels can be turned on or off (using 'CLUSTERLOG FILTER'--see above). If a severity level is turned on, then all events with a priority less than or equal to the category thresholds are logged. If the severity level is turned off, then no events belonging to that severity level are logged.
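
For example, the following command toggles the logging of events having severity 'ALERT' (any of the severity names shown in the preceding table can be used):

 ndb_mgm> CLUSTERLOG FILTER ALERT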

Important:

Cluster log levels are set on a per *note 'ndb_mgmd': mysql-cluster-programs-ndb-mgmd, per subscriber basis. This means that, in an NDB Cluster with multiple management servers, using a 'CLUSTERLOG' command in an instance of *note 'ndb_mgm': mysql-cluster-programs-ndb-mgm. connected to one management server affects only logs generated by that management server but not by any of the others. This also means that, should one of the management servers be restarted, only logs generated by that management server are affected by the resetting of log levels caused by the restart.

 File: manual.info.tmp, Node: mysql-cluster-log-events, Next: mysql-cluster-log-statistics, Prev: mysql-cluster-logging-management-commands, Up: mysql-cluster-event-reports

21.6.3.2 NDB Cluster Log Events
...............................

An event report reported in the event logs has the following format:

 DATETIME [STRING] SEVERITY -- MESSAGE

For example:

 09:19:30 2005-07-24 [NDB] INFO -- Node 4 Start phase 4 completed

This section discusses all reportable events, ordered by category and severity level within each category.

In the event descriptions, GCP and LCP mean 'Global Checkpoint' and 'Local Checkpoint', respectively.

CONNECTION Events

These events are associated with connections between Cluster nodes.

Events associated with connections between cluster nodes

Event    Priority    Severity Level    Description

'Connected'

8 'INFO' Data nodes connected

'Disconnected'

8 'ALERT' Data nodes disconnected

'CommunicationClosed'

8 'INFO' SQL node or data node connection closed

'CommunicationOpened'

8 'INFO' SQL node or data node connection open

'ConnectedApiVersion'

8 'INFO' Connection using API version

CHECKPOINT Events

The logging messages shown here are associated with checkpoints.

Events associated with checkpoints

Event    Priority    Severity Level    Description

'GlobalCheckpointStarted'

9 'INFO' Start of GCP: REDO log is written to disk

'GlobalCheckpointCompleted'

10 'INFO' GCP finished

'LocalCheckpointStarted'

7 'INFO' Start of LCP: data written to disk

'LocalCheckpointCompleted'

7 'INFO' LCP completed normally

'LCPStoppedInCalcKeepGci'

0 'ALERT' LCP stopped

'LCPFragmentCompleted'

11 'INFO' LCP on a fragment has been completed

'UndoLogBlocked'

7 'INFO' UNDO logging blocked; buffer near overflow

'RedoStatus'

7 'INFO' Redo status

STARTUP Events

The following events are generated in response to the startup of a node or of the cluster, and to its success or failure. They also provide information relating to the progress of the startup process, including information concerning logging activities.

Events relating to the startup of a node or cluster

Event    Priority    Severity Level    Description

'NDBStartStarted'

1 'INFO' Data node start phases initiated (all nodes starting)

'NDBStartCompleted'

1 'INFO' Start phases completed, all data nodes

'STTORRYRecieved'

15 'INFO' Blocks received after completion of restart

'StartPhaseCompleted'

4 'INFO' Data node start phase X completed

'CM_REGCONF'

3 'INFO' Node has been successfully included into the cluster; shows the node, managing node, and dynamic ID

'CM_REGREF'

8 'INFO' Node has been refused for inclusion in the cluster; cannot be included in cluster due to misconfiguration, inability to establish communication, or other problem

'FIND_NEIGHBOURS'

8 'INFO' Shows neighboring data nodes

'NDBStopStarted'

1 'INFO' Data node shutdown initiated

'NDBStopCompleted'

1 'INFO' Data node shutdown complete

'NDBStopForced'

1 'ALERT' Forced shutdown of data node

'NDBStopAborted'

1 'INFO' Unable to shut down data node normally

'StartREDOLog'

4 'INFO' New redo log started; GCI keep X, newest restorable GCI Y

'StartLog'

10 'INFO' New log started; log part X, start MB Y, stop MB Z

'UNDORecordsExecuted'

15 'INFO' Undo records executed

'StartReport'

4 'INFO' Report started

'LogFileInitStatus'

7 'INFO' Log file initialization status

'LogFileInitCompStatus'

7 'INFO' Log file completion status

'StartReadLCP'

10 'INFO' Start read for local checkpoint

'ReadLCPComplete'

10 'INFO' Read for local checkpoint completed

'RunRedo'

8 'INFO' Running the redo log

'RebuildIndex'

10 'INFO' Rebuilding indexes

NODERESTART Events

The following events are generated when restarting a node and relate to the success or failure of the node restart process.

Events relating to restarting a node

Event    Priority    Severity Level    Description

'NR_CopyDict'

7 'INFO' Completed copying of dictionary information

'NR_CopyDistr'

7 'INFO' Completed copying distribution information

'NR_CopyFragsStarted'

7 'INFO' Starting to copy fragments

'NR_CopyFragDone'

10 'INFO' Completed copying a fragment

'NR_CopyFragsCompleted'

7 'INFO' Completed copying all fragments

'NodeFailCompleted'

8 'ALERT' Node failure phase completed

'NODE_FAILREP'

8 'ALERT' Reports that a node has failed

'ArbitState'

6 'INFO' Report whether an arbitrator is found or not; there are seven different possible outcomes when seeking an arbitrator, listed here:

   * Management server restarts arbitration thread [state=X]

   * Prepare arbitrator node X [ticket=Y]

   * Receive arbitrator node X [ticket=Y]

   * Started arbitrator node X [ticket=Y]

   * Lost arbitrator node X - process failure [state=Y]

   * Lost arbitrator node X - process exit [state=Y]

   * Lost arbitrator node X <error msg> [state=Y]

'ArbitResult'

2 'ALERT' Report arbitrator results; there are eight different possible results for arbitration attempts, listed here:

   * Arbitration check failed: less than 1/2 nodes left

   * Arbitration check succeeded: node group majority

   * Arbitration check failed: missing node group

   * Network partitioning: arbitration required

   * Arbitration succeeded: affirmative response from node X

   * Arbitration failed: negative response from node X

   * Network partitioning: no arbitrator available

   * Network partitioning: no arbitrator configured

'GCP_TakeoverStarted'

7 'INFO' GCP takeover started

'GCP_TakeoverCompleted'

7 'INFO' GCP takeover complete

'LCP_TakeoverStarted'

7 'INFO' LCP takeover started

'LCP_TakeoverCompleted'

7 'INFO' LCP takeover complete (state = X)

'ConnectCheckStarted'

6 'INFO' Connection check started

'ConnectCheckCompleted'

6 'INFO' Connection check completed

'NodeFailRejected'

6 'ALERT' Node failure phase failed

STATISTICS Events

The following events are of a statistical nature. They provide information such as numbers of transactions and other operations, amount of data sent or received by individual nodes, and memory usage.

Events of a statistical nature

Event    Priority    Severity Level    Description

'TransReportCounters'

8 'INFO' Report transaction statistics, including numbers of transactions, commits, reads, simple reads, writes, concurrent operations, attribute information, and aborts

'OperationReportCounters'

8 'INFO' Number of operations

'TableCreated'

7 'INFO' Report number of tables created

'JobStatistic'

9 'INFO' Mean internal job scheduling statistics

'ThreadConfigLoop'

9 'INFO' Number of thread configuration loops

'SendBytesStatistic'

9 'INFO' Mean number of bytes sent to node X

'ReceiveBytesStatistic'

9 'INFO' Mean number of bytes received from node X

'MemoryUsage'

5 'INFO' Data and index memory usage (80%, 90%, and 100%)

'MTSignalStatistics'

9 'INFO' Multithreaded signals

SCHEMA Events

These events relate to NDB Cluster schema operations.

Events relating to NDB Cluster schema operations

Event    Priority    Severity Level    Description

'CreateSchemaObject'

8 'INFO' Schema object created

'AlterSchemaObject'

8 'INFO' Schema object updated

'DropSchemaObject'

8 'INFO' Schema object dropped

ERROR Events

These events relate to Cluster errors and warnings. The presence of one or more of these generally indicates that a major malfunction or failure has occurred.

Events relating to cluster errors and warnings

Event    Priority    Severity Level    Description

'TransporterError'

2 'ERROR' Transporter error

'TransporterWarning'

8 'WARNING' Transporter warning

'MissedHeartbeat'

8 'WARNING' Node X missed heartbeat number Y

'DeadDueToHeartbeat'

8 'ALERT' Node X declared 'dead' due to missed heartbeat

'WarningEvent'

2 'WARNING' General warning event

'SubscriptionStatus'

4 'WARNING' Change in subscription status

INFO Events

These events provide general information about the state of the cluster and activities associated with Cluster maintenance, such as logging and heartbeat transmission.

Information events

Event    Priority    Severity Level    Description

'SentHeartbeat'

12 'INFO' Sent heartbeat

'CreateLogBytes'

11 'INFO' Create log: Log part, log file, size in MB

'InfoEvent'

2 'INFO' General informational event

'EventBufferStatus'

7 'INFO' Event buffer status

'EventBufferStatus2'

7 'INFO' Improved event buffer status information; added in NDB 7.5.1

Note:

'SentHeartbeat' events are available only if NDB Cluster was compiled with 'VM_TRACE' enabled.

SINGLEUSER Events

These events are associated with entering and exiting single user mode.

Events relating to single user mode

Event    Priority    Severity Level    Description

'SingleUser'

7 'INFO' Entering or exiting single user mode

BACKUP Events

These events provide information about backups being created or restored.

Backup events

Event    Priority    Severity Level    Description

'BackupStarted'

7 'INFO' Backup started

'BackupStatus'

7 'INFO' Backup status

'BackupCompleted'

7 'INFO' Backup completed

'BackupFailedToStart'

7 'ALERT' Backup failed to start

'BackupAborted'

7 'ALERT' Backup aborted by user

'RestoreStarted'

7 'INFO' Started restoring from backup

'RestoreMetaData'

7 'INFO' Restoring metadata

'RestoreData'

7 'INFO' Restoring data

'RestoreLog'

7 'INFO' Restoring log files

'RestoreCompleted'

7 'INFO' Completed restoring from backup

'SavedEvent'

7 'INFO' Event saved

 File: manual.info.tmp, Node: mysql-cluster-log-statistics, Prev: mysql-cluster-log-events, Up: mysql-cluster-event-reports

21.6.3.3 Using CLUSTERLOG STATISTICS in the NDB Cluster Management Client
.........................................................................

The *note 'NDB': mysql-cluster. management client's *note 'CLUSTERLOG STATISTICS': mysql-cluster-logging-management-commands. command can provide a number of useful statistics in its output. Counters providing information about the state of the cluster are updated at 5-second reporting intervals by the transaction coordinator (TC) and the local query handler (LQH), and written to the cluster log.

Transaction coordinator statistics

Each transaction has one transaction coordinator, which is chosen by one of the following methods:

Note:

You can determine which TC selection method is used for transactions started from a given SQL node using the 'ndb_optimized_node_selection' system variable.
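
For example, the current setting can be checked on the SQL node in question like this:

 mysql> SHOW VARIABLES LIKE 'ndb_optimized_node_selection';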

All operations within the same transaction use the same transaction coordinator, which reports the following statistics:

Local query handler statistics (Operations)

There is 1 cluster event per local query handler block (that is, 1 per data node process). Operations are recorded in the LQH where the data they are operating on resides.

Note:

A single transaction may operate on data stored in multiple LQH blocks.

The 'Operations' statistic provides the number of local operations performed by this LQH block in the last reporting interval, and includes all types of read and write operations (insert, update, write, and delete operations). This also includes operations used to replicate writes. For example, in a cluster having two fragment replicas, the write to the primary fragment replica is recorded in the primary LQH, and the write to the backup is recorded in the backup LQH. Unique key operations may result in multiple local operations; however, local operations generated as a result of a table scan or ordered index scan are not counted.

Process scheduler statistics

In addition to the statistics reported by the transaction coordinator and local query handler, each *note 'ndbd': mysql-cluster-programs-ndbd. process has a scheduler which also provides useful metrics relating to the performance of an NDB Cluster. This scheduler runs in an infinite loop; during each loop the scheduler performs the following tasks:

  1. Read any incoming messages from sockets into a job buffer.

  2. Check whether there are any timed messages to be executed; if so, put these into the job buffer as well.

  3. Execute (in a loop) any messages in the job buffer.

  4. Send any distributed messages that were generated by executing the messages in the job buffer.

  5. Wait for any new incoming messages.

Process scheduler statistics include the following:

To cause all cluster log statistics to be logged, you can use the following command in the *note 'NDB': mysql-cluster. management client:

 ndb_mgm> ALL CLUSTERLOG STATISTICS=15

Note:

Setting the threshold for 'STATISTICS' to 15 causes the cluster log to become very verbose, and to grow quite rapidly in size, in direct proportion to the number of cluster nodes and the amount of activity in the NDB Cluster.

For more information about NDB Cluster management client commands relating to logging and reporting, see *note mysql-cluster-logging-management-commands::.

 File: manual.info.tmp, Node: mysql-cluster-start-phases, Next: mysql-cluster-rolling-restart, Prev: mysql-cluster-event-reports, Up: mysql-cluster-management

21.6.4 Summary of NDB Cluster Start Phases

This section provides a simplified outline of the steps involved when NDB Cluster data nodes are started. More complete information can be found in NDB Cluster Start Phases (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-start-phases.html), in the NDB Internals Guide.

These phases are the same as those reported in the output from the 'NODE_ID STATUS' command in the management client (see *note mysql-cluster-mgm-client-commands::). These start phases are also reported in the 'start_phase' column of the *note 'ndbinfo.nodes': mysql-cluster-ndbinfo-nodes. table.
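
For example, while a data node is starting, either of the following can be used to check its progress through the start phases; the node ID shown (2) is arbitrary:

 ndb_mgm> 2 STATUS

 mysql> SELECT node_id, status, start_phase
     -> FROM ndbinfo.nodes;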

Start types

There are several different startup types and modes, as shown in the following list:

Setup and initialization (phase -1)

Prior to startup, each data node (*note 'ndbd': mysql-cluster-programs-ndbd. process) must be initialized. Initialization consists of the following steps:

  1. Obtain a node ID

  2. Fetch configuration data

  3. Allocate ports to be used for inter-node communications

  4. Allocate memory according to settings obtained from the configuration file

When a data node or SQL node first connects to the management node, it reserves a cluster node ID. To make sure that no other node allocates the same node ID, this ID is retained until the node has managed to connect to the cluster and at least one *note 'ndbd': mysql-cluster-programs-ndbd. reports that this node is connected. This retention of the node ID is guarded by the connection between the node in question and *note 'ndb_mgmd': mysql-cluster-programs-ndb-mgmd.

After each data node has been initialized, the cluster startup process can proceed. The stages which the cluster goes through during this process are listed here:

After this process is completed for an initial start or system restart, transaction handling is enabled. For a node restart or initial node restart, completion of the startup process means that the node may now act as a transaction coordinator.

 File: manual.info.tmp, Node: mysql-cluster-rolling-restart, Next: mysql-cluster-single-user-mode, Prev: mysql-cluster-start-phases, Up: mysql-cluster-management

21.6.5 Performing a Rolling Restart of an NDB Cluster

This section discusses how to perform a rolling restart of an NDB Cluster installation, so called because it involves stopping and starting (or restarting) each node in turn, so that the cluster itself remains operational. This is often done as part of a rolling upgrade or rolling downgrade, where high availability of the cluster is mandatory and no downtime of the cluster as a whole is permissible. Where we refer to upgrades, the information provided here generally applies to downgrades as well.

There are a number of reasons why a rolling restart might be desirable. These are described in the next few paragraphs.

Configuration change

To make a change in the cluster's configuration, such as adding an SQL node to the cluster, or setting a configuration parameter to a new value.

NDB Cluster software upgrade or downgrade

To upgrade the cluster to a newer version of the NDB Cluster software (or to downgrade it to an older version). This is usually referred to as a 'rolling upgrade' (or 'rolling downgrade', when reverting to an older version of NDB Cluster).

Change on node host

To make changes in the hardware or operating system on which one or more NDB Cluster node processes are running.

System reset (cluster reset)

To reset the cluster because it has reached an undesirable state. In such cases it is often desirable to reload the data and metadata of one or more data nodes. This can be done in any of three ways:

Resource Recovery

To free memory previously allocated to a table by successive *note 'INSERT': insert. and *note 'DELETE': delete. operations, for re-use by other NDB Cluster tables.

The process for performing a rolling restart may be generalized as follows:

  1. Stop all cluster management nodes (note 'ndb_mgmd': mysql-cluster-programs-ndb-mgmd. processes), reconfigure them, then restart them. (See note mysql-cluster-rolling-restart-multiple-ndb-mgmd::.)

  2. Stop, reconfigure, then restart each cluster data node (*note 'ndbd': mysql-cluster-programs-ndbd. process) in turn.

    Some node configuration parameters can be updated by issuing 'RESTART' for each of the data nodes in the note 'ndb_mgm': mysql-cluster-programs-ndb-mgm. client following the previous step. Other parameters require that the data node be stopped completely using the management client 'STOP' command, then started again from a system shell by invoking the note 'ndbd': mysql-cluster-programs-ndbd. or note 'ndbmtd': mysql-cluster-programs-ndbmtd. executable as appropriate; see the example following this list. (A shell command such as note 'kill': kill. can also be used on most Unix systems to stop a data node process, but the 'STOP' command is preferred and usually simpler.)

    Note:

    On Windows, you can also use 'SC STOP' and 'SC START' commands, 'NET STOP' and 'NET START' commands, or the Windows Service Manager to stop and start nodes which have been installed as Windows services (see *note mysql-cluster-install-windows-service::).

    The type of restart required is indicated in the documentation for each node configuration parameter. See *note mysql-cluster-config-file::.

  3. Stop, reconfigure, then restart each cluster SQL node (*note 'mysqld': mysqld. process) in turn.
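As an illustration of stopping a data node completely and then starting it again from a system shell, the sequence might look like the one shown here. The node ID (3) is hypothetical; the management server address (198.51.100.10) is the one used elsewhere in this chapter, and we assume the multithreaded data node binary is in use:

 ndb_mgm> 3 STOP

 $> ndbmtd -c 198.51.100.10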

NDB Cluster supports a somewhat flexible order for upgrading nodes. When upgrading an NDB Cluster, you may upgrade API nodes (including SQL nodes) before upgrading the management nodes, data nodes, or both; in other words, the API and SQL nodes do not have to be upgraded last. This is subject to the following provisions:

It is not possible for any API node to perform schema operations (such as data definition statements) during a node restart. Due in part to this limitation, schema operations are also not supported during an online upgrade or downgrade. In addition, it is not possible to perform native backups while an upgrade or downgrade is ongoing.

Rolling restarts with multiple management servers

When performing a rolling restart of an NDB Cluster with multiple management nodes, you should keep in mind that note 'ndb_mgmd': mysql-cluster-programs-ndb-mgmd. checks to see if any other management node is running, and, if so, tries to use that node's configuration data. To keep this from occurring, and to force note 'ndb_mgmd': mysql-cluster-programs-ndb-mgmd. to re-read its configuration file, perform the following steps:

  1. Stop all NDB Cluster *note 'ndb_mgmd': mysql-cluster-programs-ndb-mgmd. processes.

  2. Update all 'config.ini' files.

  3. Start a single *note 'ndb_mgmd': mysql-cluster-programs-ndb-mgmd. with '--reload', '--initial', or both options as desired.

  4. If you started the first note 'ndb_mgmd': mysql-cluster-programs-ndb-mgmd. with the '--initial' option, you must also start any remaining note 'ndb_mgmd': mysql-cluster-programs-ndb-mgmd. processes using '--initial'.

    Regardless of any other options used when starting the first note 'ndb_mgmd': mysql-cluster-programs-ndb-mgmd, you should not start any remaining note 'ndb_mgmd': mysql-cluster-programs-ndb-mgmd. processes after the first one using '--reload'.

  5. Complete the rolling restarts of the data nodes and API nodes as normal.

When performing a rolling restart to update the cluster's configuration, you can use the 'config_generation' column of the note 'ndbinfo.nodes': mysql-cluster-ndbinfo-nodes. table to keep track of which data nodes have been successfully restarted with the new configuration. See note mysql-cluster-ndbinfo-nodes::.
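For example, a query such as this one can be used to see the configuration generation in use by each node, using columns of the 'ndbinfo.nodes' table:

 SELECT node_id, config_generation FROM ndbinfo.nodes;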

 File: manual.info.tmp, Node: mysql-cluster-single-user-mode, Next: mysql-cluster-online-add-node, Prev: mysql-cluster-rolling-restart, Up: mysql-cluster-management

21.6.6 NDB Cluster Single User Mode

Single user mode enables the database administrator to restrict access to the database system to a single API node, such as a MySQL server (SQL node) or an instance of *note 'ndb_restore': mysql-cluster-programs-ndb-restore. When entering single user mode, connections to all other API nodes are closed gracefully and all running transactions are aborted. No new transactions are permitted to start.

Once the cluster has entered single user mode, only the designated API node is granted access to the database.

You can use the 'ALL STATUS' command in the note 'ndb_mgm': mysql-cluster-programs-ndb-mgm. client to see when the cluster has entered single user mode. You can also check the 'status' column of the note 'ndbinfo.nodes': mysql-cluster-ndbinfo-nodes. table (see *note mysql-cluster-ndbinfo-nodes::, for more information).
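For example, the following query returns the status reported by each data node:

 SELECT node_id, status FROM ndbinfo.nodes;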

Example:

 ndb_mgm> ENTER SINGLE USER MODE 5

After this command has executed and the cluster has entered single user mode, the API node whose node ID is '5' becomes the cluster's only permitted user.

The node specified in the preceding command must be an API node; attempting to specify any other type of node is rejected.

Note:

When the preceding command is invoked, all transactions running on the designated node are aborted, the connection is closed, and the server must be restarted.

The command 'EXIT SINGLE USER MODE' changes the state of the cluster's data nodes from single user mode to normal mode. API nodes--such as MySQL Servers--waiting for a connection (that is, waiting for the cluster to become ready and available), are again permitted to connect. The API node denoted as the single-user node continues to run (if still connected) during and after the state change.

Example:

 ndb_mgm> EXIT SINGLE USER MODE

There are two recommended ways to handle a node failure when running in single user mode:

 File: manual.info.tmp, Node: mysql-cluster-online-add-node, Next: mysql-cluster-backup, Prev: mysql-cluster-single-user-mode, Up: mysql-cluster-management

21.6.7 Adding NDB Cluster Data Nodes Online

This section describes how to add NDB Cluster data nodes 'online'--that is, without needing to shut down the cluster completely and restart it as part of the process.

Important:

Currently, you must add new data nodes to an NDB Cluster as part of a new node group. In addition, it is not possible to change the number of fragment replicas (or the number of nodes per node group) online.

 File: manual.info.tmp, Node: mysql-cluster-online-add-node-remarks, Next: mysql-cluster-online-add-node-basics, Prev: mysql-cluster-online-add-node, Up: mysql-cluster-online-add-node

21.6.7.1 Adding NDB Cluster Data Nodes Online: General Issues
.............................................................

This section provides general information about the behavior of and current limitations in adding NDB Cluster nodes online.

Redistribution of Data

The ability to add new nodes online includes a means to reorganize note 'NDBCLUSTER': mysql-cluster. table data and indexes so that they are distributed across all data nodes, including the new ones, by means of the note 'ALTER TABLE ... REORGANIZE PARTITION': alter-table-partition-operations. statement. Table reorganization of both in-memory and Disk Data tables is supported. This redistribution does not currently include unique indexes (only ordered indexes are redistributed).

The redistribution for note 'NDBCLUSTER': mysql-cluster. tables already existing before the new data nodes were added is not automatic, but can be accomplished using simple SQL statements in note 'mysql': mysql. or another MySQL client application. However, all data and indexes added to tables created after a new node group has been added are distributed automatically among all cluster data nodes, including those added as part of the new node group.

Partial starts

It is possible to add a new node group without all of the new data nodes being started. It is also possible to add a new node group to a degraded cluster--that is, a cluster that is only partially started, or where one or more data nodes are not running. In the latter case, the cluster must have enough nodes running to be viable before the new node group can be added.

Effects on ongoing operations

Normal DML operations using NDB Cluster data are not prevented by the creation or addition of a new node group, or by table reorganization. However, it is not possible to perform DDL concurrently with table reorganization--that is, no other DDL statements can be issued while an *note 'ALTER TABLE ... REORGANIZE PARTITION': alter-table. statement is executing. In addition, during the execution of 'ALTER TABLE ... REORGANIZE PARTITION' (or the execution of any other DDL statement), it is not possible to restart cluster data nodes.

Failure handling

Failures of data nodes during node group creation and table reorganization are handled as shown in the following table:

Data node failure handling during node group creation and table reorganization

Failure during    Failure in 'Old' data node    Failure in 'New' data node    System failure

Node group creation

Table reorganization

Dropping node groups

The *note 'ndb_mgm': mysql-cluster-programs-ndb-mgm. client supports a 'DROP NODEGROUP' command, but it is possible to drop a node group only when no data nodes in the node group contain any data. Since there is currently no way to 'empty' a specific data node or node group, this command works only in the following two cases:

  1. After issuing 'CREATE NODEGROUP' in the note 'ndb_mgm': mysql-cluster-programs-ndb-mgm. client, but before issuing any note 'ALTER TABLE ... REORGANIZE PARTITION': alter-table. statements in the *note 'mysql': mysql. client.

  2. After dropping all note 'NDBCLUSTER': mysql-cluster. tables using note 'DROP TABLE': drop-table.

    *note 'TRUNCATE TABLE': truncate-table. does not work for this purpose because the data nodes continue to store the table definitions.

 File: manual.info.tmp, Node: mysql-cluster-online-add-node-basics, Next: mysql-cluster-online-add-node-example, Prev: mysql-cluster-online-add-node-remarks, Up: mysql-cluster-online-add-node

21.6.7.2 Adding NDB Cluster Data Nodes Online: Basic procedure
..............................................................

In this section, we list the basic steps required to add new data nodes to an NDB Cluster. This procedure applies whether you are using note 'ndbd': mysql-cluster-programs-ndbd. or note 'ndbmtd': mysql-cluster-programs-ndbmtd. binaries for the data node processes. For a more detailed example, see *note mysql-cluster-online-add-node-example::.

Assuming that you already have a running NDB Cluster, adding data nodes online requires the following steps:

  1. Edit the cluster configuration 'config.ini' file, adding new '[ndbd]' sections corresponding to the nodes to be added. In the case where the cluster uses multiple management servers, these changes need to be made to all 'config.ini' files used by the management servers.

    You must be careful that node IDs for any new data nodes added in the 'config.ini' file do not overlap node IDs used by existing nodes. In the event that you have API nodes using dynamically allocated node IDs and these IDs match node IDs that you want to use for new data nodes, it is possible to force any such API nodes to 'migrate', as described later in this procedure.

  2. Perform a rolling restart of all NDB Cluster management servers.

    Important:

    All management servers must be restarted with the '--reload' or '--initial' option to force the reading of the new configuration.

  3. Perform a rolling restart of all existing NDB Cluster data nodes. It is not necessary (or usually even desirable) to use '--initial' when restarting the existing data nodes.

    If you are using API nodes with dynamically allocated IDs matching any node IDs that you wish to assign to new data nodes, you must restart all API nodes (including SQL nodes) before restarting any of the data node processes in this step. This causes any API nodes with node IDs that were previously not explicitly assigned to relinquish those node IDs and acquire new ones.

  4. Perform a rolling restart of any SQL or API nodes connected to the NDB Cluster.

  5. Start the new data nodes.

    The new data nodes may be started in any order. They can also be started concurrently, as long as they are started after the rolling restarts of all existing data nodes have been completed, and before proceeding to the next step.

  6. Execute one or more 'CREATE NODEGROUP' commands in the NDB Cluster management client to create the new node group or node groups to which the new data nodes belong.

  7. Redistribute the cluster's data among all data nodes, including the new ones. Normally this is done by issuing an note 'ALTER TABLE ... ALGORITHM=INPLACE, REORGANIZE PARTITION': alter-table. statement in the note 'mysql': mysql. client for each *note 'NDBCLUSTER': mysql-cluster. table.

    Exception: For tables created using the 'MAX_ROWS' option, this statement does not work; instead, use 'ALTER TABLE ... ALGORITHM=INPLACE, MAX_ROWS=...' to reorganize such tables. Bear in mind that using 'MAX_ROWS' to set the number of partitions in this fashion is deprecated in NDB 7.5.4 and later, where you should use 'PARTITION_BALANCE' instead; see *note create-table-ndb-comment-options::, for more information.

    Note:

    This needs to be done only for tables already existing at the time the new node group is added. Data in tables created after the new node group is added is distributed automatically; however, data added to any given table 'tbl' that existed before the new nodes were added is not distributed using the new nodes until that table has been reorganized.

  8. 'ALTER TABLE ... REORGANIZE PARTITION ALGORITHM=INPLACE' reorganizes partitions but does not reclaim the space freed on the 'old' nodes. You can do this by issuing, for each note 'NDBCLUSTER': mysql-cluster. table, an note 'OPTIMIZE TABLE': optimize-table. statement in the *note 'mysql': mysql. client.

    This works for space used by variable-width columns of in-memory 'NDB' tables. 'OPTIMIZE TABLE' is not supported for fixed-width columns of in-memory tables; it is also not supported for Disk Data tables.
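For example, for a hypothetical 'NDB' table named 't1', the statements for this step and the previous one might be issued as shown here:

 ALTER TABLE t1 ALGORITHM=INPLACE, REORGANIZE PARTITION;

 OPTIMIZE TABLE t1;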

You can add all the nodes desired, then issue several 'CREATE NODEGROUP' commands in succession to add the new node groups to the cluster.

 File: manual.info.tmp, Node: mysql-cluster-online-add-node-example, Prev: mysql-cluster-online-add-node-basics, Up: mysql-cluster-online-add-node

21.6.7.3 Adding NDB Cluster Data Nodes Online: Detailed Example
...............................................................

In this section we provide a detailed example illustrating how to add new NDB Cluster data nodes online, starting with an NDB Cluster having 2 data nodes in a single node group and concluding with a cluster having 4 data nodes in 2 node groups.

Starting configuration

For purposes of illustration, we assume a minimal configuration, and that the cluster uses a 'config.ini' file containing only the following information:

 [ndbd default]
 DataMemory = 100M
 IndexMemory = 100M
 NoOfReplicas = 2
 DataDir = /usr/local/mysql/var/mysql-cluster

 [ndbd]
 Id = 1
 HostName = 198.51.100.1

 [ndbd]
 Id = 2
 HostName = 198.51.100.2

 [mgm]
 HostName = 198.51.100.10
 Id = 10

 [api]
 Id=20
 HostName = 198.51.100.20

 [api]
 Id=21
 HostName = 198.51.100.21

Note:

We have left a gap in the sequence between data node IDs and the IDs of other nodes. This makes it easier later to assign node IDs that are not already in use to newly added data nodes.

We also assume that you have already started the cluster using the appropriate command line or 'my.cnf' options, and that running 'SHOW' in the management client produces output similar to what is shown here:

 -- NDB Cluster -- Management Client --
 ndb_mgm> SHOW
 Connected to Management Server at: 198.51.100.10:1186
 Cluster Configuration
 ---------------------
 [ndbd(NDB)]     2 node(s)
 id=1    @198.51.100.1  (5.7.44-ndb-7.5.36, Nodegroup: 0, *)
 id=2    @198.51.100.2  (5.7.44-ndb-7.5.36, Nodegroup: 0)

 [ndb_mgmd(MGM)] 1 node(s)
 id=10   @198.51.100.10  (5.7.44-ndb-7.5.36)

 [mysqld(API)]   2 node(s)
 id=20   @198.51.100.20  (5.7.44-ndb-7.5.36)
 id=21   @198.51.100.21  (5.7.44-ndb-7.5.36)

Finally, we assume that the cluster contains a single *note 'NDBCLUSTER': mysql-cluster. table created as shown here:

 USE n;

 CREATE TABLE ips (
     id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
     country_code CHAR(2) NOT NULL,
     type CHAR(4) NOT NULL,
     ip_address VARCHAR(15) NOT NULL,
     addresses BIGINT UNSIGNED DEFAULT NULL,
     date BIGINT UNSIGNED DEFAULT NULL
 )   ENGINE NDBCLUSTER;

The memory usage and related information shown later in this section was generated after inserting approximately 50000 rows into this table.

Note:

In this example, we show the single-threaded note 'ndbd': mysql-cluster-programs-ndbd. being used for the data node processes. If you are using the multithreaded note 'ndbmtd': mysql-cluster-programs-ndbmtd, you can also apply this example by substituting note 'ndbmtd': mysql-cluster-programs-ndbmtd. for note 'ndbd': mysql-cluster-programs-ndbd. wherever it appears in the steps that follow.

Step 1: Update configuration file

Open the cluster global configuration file in a text editor and add '[ndbd]' sections corresponding to the 2 new data nodes. (We give these data nodes IDs 3 and 4, and assume that they are to be run on host machines at addresses 198.51.100.3 and 198.51.100.4, respectively.) After you have added the new sections, the contents of the 'config.ini' file should look like what is shown here, where the additions to the file are shown in bold type:

 [ndbd default]
 DataMemory = 100M
 IndexMemory = 100M
 NoOfReplicas = 2
 DataDir = /usr/local/mysql/var/mysql-cluster

 [ndbd]
 Id = 1
 HostName = 198.51.100.1

 [ndbd]
 Id = 2
 HostName = 198.51.100.2

 *[ndbd]
 Id = 3
 HostName = 198.51.100.3

 [ndbd]
 Id = 4
 HostName = 198.51.100.4*

 [mgm]
 HostName = 198.51.100.10
 Id = 10

 [api]
 Id=20
 HostName = 198.51.100.20

 [api]
 Id=21
 HostName = 198.51.100.21

Once you have made the necessary changes, save the file.

Step 2: Restart the management server

Restarting the cluster management server requires that you issue separate commands to stop the management server and then to start it again, as follows:

  1. Stop the management server using the management client 'STOP' command, as shown here:

      ndb_mgm> 10 STOP
      Node 10 has shut down.
      Disconnecting to allow Management Server to shutdown
    
      $>
  2. Because shutting down the management server causes the management client to terminate, you must start the management server from the system shell. For simplicity, we assume that 'config.ini' is in the same directory as the management server binary, but in practice, you must supply the correct path to the configuration file. You must also supply the '--reload' or '--initial' option so that the management server reads the new configuration from the file rather than its configuration cache. If your shell's current directory is also the same as the directory where the management server binary is located, then you can invoke the management server as shown here:

      $> ndb_mgmd -f config.ini --reload
      2008-12-08 17:29:23 [MgmSrvr] INFO     -- NDB Cluster Management Server. 5.7.44-ndb-7.5.36
      2008-12-08 17:29:23 [MgmSrvr] INFO     -- Reading cluster configuration from 'config.ini'

If you check the output of 'SHOW' in the management client after restarting the *note 'ndb_mgm': mysql-cluster-programs-ndb-mgm. process, you should now see something like this:

 -- NDB Cluster -- Management Client --
 ndb_mgm> SHOW
 Connected to Management Server at: 198.51.100.10:1186
 Cluster Configuration
 ---------------------
 [ndbd(NDB)]     2 node(s)
 id=1    @198.51.100.1  (5.7.44-ndb-7.5.36, Nodegroup: 0, *)
 id=2    @198.51.100.2  (5.7.44-ndb-7.5.36, Nodegroup: 0)
 id=3 (not connected, accepting connect from 198.51.100.3)
 id=4 (not connected, accepting connect from 198.51.100.4)

 [ndb_mgmd(MGM)] 1 node(s)
 id=10   @198.51.100.10  (5.7.44-ndb-7.5.36)

 [mysqld(API)]   2 node(s)
 id=20   @198.51.100.20  (5.7.44-ndb-7.5.36)
 id=21   @198.51.100.21  (5.7.44-ndb-7.5.36)

Step 3: Perform a rolling restart of the existing data nodes

This step can be accomplished entirely within the cluster management client using the 'RESTART' command, as shown here:

 ndb_mgm> 1 RESTART
 Node 1: Node shutdown initiated
 Node 1: Node shutdown completed, restarting, no start.
 Node 1 is being restarted

 ndb_mgm> Node 1: Start initiated (version 7.5.36)
 Node 1: Started (version 7.5.36)

 ndb_mgm> 2 RESTART
 Node 2: Node shutdown initiated
 Node 2: Node shutdown completed, restarting, no start.
 Node 2 is being restarted

 ndb_mgm> Node 2: Start initiated (version 7.5.36)

 ndb_mgm> Node 2: Started (version 7.5.36)

Important:

After issuing each 'X RESTART' command, wait until the management client reports 'Node X: Started (version ...)' before proceeding any further.

You can verify that all existing data nodes were restarted using the updated configuration by checking the note 'ndbinfo.nodes': mysql-cluster-ndbinfo-nodes. table in the note 'mysql': mysql. client.

Step 4: Perform a rolling restart of all cluster API nodes

Shut down and restart each MySQL server acting as an SQL node in the cluster using note 'mysqladmin shutdown': mysqladmin. followed by note 'mysqld_safe': mysqld-safe. (or another startup script). This should be similar to what is shown here, where PASSWORD is the MySQL 'root' password for a given MySQL server instance:

 $> mysqladmin -uroot -pPASSWORD shutdown
 081208 20:19:56 mysqld_safe mysqld from pid file
 /usr/local/mysql/var/tonfisk.pid ended
 $> mysqld_safe --ndbcluster --ndb-connectstring=198.51.100.10 &
 081208 20:20:06 mysqld_safe Logging to '/usr/local/mysql/var/tonfisk.err'.
 081208 20:20:06 mysqld_safe Starting mysqld daemon with databases
 from /usr/local/mysql/var

Of course, the exact input and output depend on how and where MySQL is installed on the system, as well as which options you choose to start it (and whether or not some or all of these options are specified in a 'my.cnf' file).

Step 5: Perform an initial start of the new data nodes

From a system shell on each of the hosts for the new data nodes, start the data nodes as shown here, using the '--initial' option:

 $> ndbd -c 198.51.100.10 --initial

Note:

Unlike the case with restarting the existing data nodes, you can start the new data nodes concurrently; you do not need to wait for one to finish starting before starting the other.

Wait until both of the new data nodes have started before proceeding with the next step. Once the new data nodes have started, you can see in the output of the management client 'SHOW' command that they do not yet belong to any node group (as indicated with bold type here):

 ndb_mgm> SHOW
 Connected to Management Server at: 198.51.100.10:1186
 Cluster Configuration
 ---------------------
 [ndbd(NDB)]     2 node(s)
 id=1    @198.51.100.1  (5.7.44-ndb-7.5.36, Nodegroup: 0, *)
 id=2    @198.51.100.2  (5.7.44-ndb-7.5.36, Nodegroup: 0)
 id=3    @198.51.100.3  (5.7.44-ndb-7.5.36, *no nodegroup*)
 id=4    @198.51.100.4  (5.7.44-ndb-7.5.36, *no nodegroup*)

 [ndb_mgmd(MGM)] 1 node(s)
 id=10   @198.51.100.10  (5.7.44-ndb-7.5.36)

 [mysqld(API)]   2 node(s)
 id=20   @198.51.100.20  (5.7.44-ndb-7.5.36)
 id=21   @198.51.100.21  (5.7.44-ndb-7.5.36)

Step 6: Create a new node group

You can do this by issuing a 'CREATE NODEGROUP' command in the cluster management client. This command takes as its argument a comma-separated list of the node IDs of the data nodes to be included in the new node group, as shown here:

 ndb_mgm> CREATE NODEGROUP 3,4
 Nodegroup 1 created

By issuing 'SHOW' again, you can verify that data nodes 3 and 4 have joined the new node group (again indicated in bold type):

 ndb_mgm> SHOW
 Connected to Management Server at: 198.51.100.10:1186
 Cluster Configuration
 ---------------------
 [ndbd(NDB)]     2 node(s)
 id=1    @198.51.100.1  (5.7.44-ndb-7.5.36, Nodegroup: 0, *)
 id=2    @198.51.100.2  (5.7.44-ndb-7.5.36, Nodegroup: 0)
 id=3    @198.51.100.3  (5.7.44-ndb-7.5.36, *Nodegroup: 1*)
 id=4    @198.51.100.4  (5.7.44-ndb-7.5.36, *Nodegroup: 1*)

 [ndb_mgmd(MGM)] 1 node(s)
 id=10   @198.51.100.10  (5.7.44-ndb-7.5.36)

 [mysqld(API)]   2 node(s)
 id=20   @198.51.100.20  (5.7.44-ndb-7.5.36)
 id=21   @198.51.100.21  (5.7.44-ndb-7.5.36)

Step 7: Redistribute cluster data

When a node group is created, existing data and indexes are not automatically distributed to the new node group's data nodes, as you can see by issuing the appropriate 'REPORT' command in the management client:

 ndb_mgm> ALL REPORT MEMORY

 Node 1: Data usage is 5%(177 32K pages of total 3200)
 Node 1: Index usage is 0%(108 8K pages of total 12832)
 Node 2: Data usage is 5%(177 32K pages of total 3200)
 Node 2: Index usage is 0%(108 8K pages of total 12832)
 *Node 3: Data usage is 0%(0 32K pages of total 3200)
 Node 3: Index usage is 0%(0 8K pages of total 12832)
 Node 4: Data usage is 0%(0 32K pages of total 3200)
 Node 4: Index usage is 0%(0 8K pages of total 12832)*

By using *note 'ndb_desc': mysql-cluster-programs-ndb-desc. with the '-p' option, which causes the output to include partitioning information, you can see that the table still uses only 2 partitions (in the 'Per partition info' section of the output, shown here in bold text):

 $> ndb_desc -c 198.51.100.10 -d n ips -p
 -- ips --
 Version: 1
 Fragment type: 9
 K Value: 6
 Min load factor: 78
 Max load factor: 80
 Temporary table: no
 Number of attributes: 6
 Number of primary keys: 1
 Length of frm data: 340
 Row Checksum: 1
 Row GCI: 1
 SingleUserMode: 0
 ForceVarPart: 1
 FragmentCount: 2
 TableStatus: Retrieved
 -- Attributes --
 id Bigint PRIMARY KEY DISTRIBUTION KEY AT=FIXED ST=MEMORY AUTO_INCR
 country_code Char(2;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
 type Char(4;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
 ip_address Varchar(15;latin1_swedish_ci) NOT NULL AT=SHORT_VAR ST=MEMORY
 addresses Bigunsigned NULL AT=FIXED ST=MEMORY
 date Bigunsigned NULL AT=FIXED ST=MEMORY

 -- Indexes --
 PRIMARY KEY(id) - UniqueHashIndex
 PRIMARY(id) - OrderedIndex

 *-- Per partition info --
 Partition   Row count   Commit count  Frag fixed memory   Frag varsized memory
 0           26086       26086         1572864             557056
 1           26329       26329         1605632             557056*

 NDBT_ProgramExit: 0 - OK

You can cause the data to be redistributed among all of the data nodes by performing, for each note 'NDB': mysql-cluster. table, an note 'ALTER TABLE ... ALGORITHM=INPLACE, REORGANIZE PARTITION': alter-table. statement in the *note 'mysql': mysql. client.

Important:

'ALTER TABLE ... ALGORITHM=INPLACE, REORGANIZE PARTITION' does not work on tables that were created with the 'MAX_ROWS' option. Instead, use 'ALTER TABLE ... ALGORITHM=INPLACE, MAX_ROWS=...' to reorganize such tables.

Keep in mind that using 'MAX_ROWS' to set the number of partitions per table is deprecated in NDB 7.5.4 and later, where you should use 'PARTITION_BALANCE' instead; see *note create-table-ndb-comment-options::, for more information.

After issuing the statement 'ALTER TABLE ips ALGORITHM=INPLACE, REORGANIZE PARTITION', you can see using *note 'ndb_desc': mysql-cluster-programs-ndb-desc. that the data for this table is now stored using 4 partitions, as shown here (with the relevant portions of the output in bold type):

 $> ndb_desc -c 198.51.100.10 -d n ips -p
 -- ips --
 Version: 16777217
 Fragment type: 9
 K Value: 6
 Min load factor: 78
 Max load factor: 80
 Temporary table: no
 Number of attributes: 6
 Number of primary keys: 1
 Length of frm data: 341
 Row Checksum: 1
 Row GCI: 1
 SingleUserMode: 0
 ForceVarPart: 1
 FragmentCount: 4
 TableStatus: Retrieved
 -- Attributes --
 id Bigint PRIMARY KEY DISTRIBUTION KEY AT=FIXED ST=MEMORY AUTO_INCR
 country_code Char(2;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
 type Char(4;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
 ip_address Varchar(15;latin1_swedish_ci) NOT NULL AT=SHORT_VAR ST=MEMORY
 addresses Bigunsigned NULL AT=FIXED ST=MEMORY
 date Bigunsigned NULL AT=FIXED ST=MEMORY

 -- Indexes --
 PRIMARY KEY(id) - UniqueHashIndex
 PRIMARY(id) - OrderedIndex

 *-- Per partition info --
 Partition   Row count   Commit count  Frag fixed memory   Frag varsized memory
 0           12981       52296         1572864             557056
 1           13236       52515         1605632             557056
 2           13105       13105         819200              294912
 3           13093       13093         819200              294912*

 NDBT_ProgramExit: 0 - OK

Note:

Normally, *note 'ALTER TABLE TABLE_NAME [ALGORITHM=INPLACE,] REORGANIZE PARTITION': alter-table. is used with a list of partition identifiers and a set of partition definitions to create a new partitioning scheme for a table that has already been explicitly partitioned. Its use here to redistribute data onto a new NDB Cluster node group is an exception in this regard; when used in this way, no other keywords or identifiers follow 'REORGANIZE PARTITION'.

For more information, see *note alter-table::.

In addition, for each table, the note 'ALTER TABLE': alter-table. statement should be followed by an note 'OPTIMIZE TABLE': optimize-table. to reclaim wasted space. You can obtain a list of all note 'NDBCLUSTER': mysql-cluster. tables using the following query against the Information Schema note 'TABLES': information-schema-tables-table. table:

 SELECT TABLE_SCHEMA, TABLE_NAME
     FROM INFORMATION_SCHEMA.TABLES
     WHERE ENGINE = 'NDBCLUSTER';

Note:

The 'INFORMATION_SCHEMA.TABLES.ENGINE' value for an NDB Cluster table is always note 'NDBCLUSTER': mysql-cluster, regardless of whether the 'CREATE TABLE' statement used to create the table (or note 'ALTER TABLE': alter-table. statement used to convert an existing table from a different storage engine) used note 'NDB': mysql-cluster. or note 'NDBCLUSTER': mysql-cluster. in its 'ENGINE' option.

You can see after performing these statements in the output of 'ALL REPORT MEMORY' that the data and indexes are now redistributed between all cluster data nodes, as shown here:

 ndb_mgm> ALL REPORT MEMORY

 Node 1: Data usage is 5%(176 32K pages of total 3200)
 Node 1: Index usage is 0%(76 8K pages of total 12832)
 Node 2: Data usage is 5%(176 32K pages of total 3200)
 Node 2: Index usage is 0%(76 8K pages of total 12832)
 Node 3: Data usage is 2%(80 32K pages of total 3200)
 Node 3: Index usage is 0%(51 8K pages of total 12832)
 Node 4: Data usage is 2%(80 32K pages of total 3200)
 Node 4: Index usage is 0%(50 8K pages of total 12832)

Note:

Since only one DDL operation on note 'NDBCLUSTER': mysql-cluster. tables can be executed at a time, you must wait for each note 'ALTER TABLE ... REORGANIZE PARTITION': alter-table. statement to finish before issuing the next one.

It is not necessary to issue note 'ALTER TABLE ... REORGANIZE PARTITION': alter-table. statements for note 'NDBCLUSTER': mysql-cluster. tables created after the new data nodes have been added; data added to such tables is distributed among all data nodes automatically. However, in note 'NDBCLUSTER': mysql-cluster. tables that existed prior to the addition of the new nodes, neither existing nor new data is distributed using the new nodes until these tables have been reorganized using note 'ALTER TABLE ... REORGANIZE PARTITION': alter-table.

Alternative procedure, without rolling restart

It is possible to avoid the need for a rolling restart by configuring the extra data nodes, but not starting them, when first starting the cluster. We assume, as before, that you wish to start with two data nodes--nodes 1 and 2--in one node group and later to expand the cluster to four data nodes, by adding a second node group consisting of nodes 3 and 4:

 [ndbd default]
 DataMemory = 100M
 IndexMemory = 100M
 NoOfReplicas = 2
 DataDir = /usr/local/mysql/var/mysql-cluster

 [ndbd]
 Id = 1
 HostName = 198.51.100.1

 [ndbd]
 Id = 2
 HostName = 198.51.100.2

 [ndbd]
 Id = 3
 HostName = 198.51.100.3
 Nodegroup = 65536

 [ndbd]
 Id = 4
 HostName = 198.51.100.4
 Nodegroup = 65536

 [mgm]
 HostName = 198.51.100.10
 Id = 10

 [api]
 Id=20
 HostName = 198.51.100.20

 [api]
 Id=21
 HostName = 198.51.100.21

The data nodes to be brought online at a later time (nodes 3 and 4) can be configured with 'NodeGroup = 65536', in which case nodes 1 and 2 can each be started as shown here:

 $> ndbd -c 198.51.100.10 --initial

The data nodes configured with 'NodeGroup = 65536' are treated by the management server as though you had started nodes 1 and 2 using '--nowait-nodes=3,4' after waiting for a period of time determined by the setting for the 'StartNoNodeGroupTimeout' data node configuration parameter. By default, this is 15 seconds (15000 milliseconds).

Note:

'StartNoNodeGroupTimeout' must be the same for all data nodes in the cluster; for this reason, you should always set it in the '[ndbd default]' section of the 'config.ini' file, rather than for individual data nodes.
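For example, to have the cluster wait 30 seconds for the no-nodegroup nodes (the value shown here is chosen only for illustration), you might add a setting such as this one to the '[ndbd default]' section:

 [ndbd default]
 StartNoNodeGroupTimeout = 30000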

When you are ready to add the second node group, you need only perform the following additional steps:

  1. Start data nodes 3 and 4, invoking the data node process once for each new node:

      $> ndbd -c 198.51.100.10 --initial
  2. Issue the appropriate 'CREATE NODEGROUP' command in the management client:

      ndb_mgm> CREATE NODEGROUP 3,4
  3. In the note 'mysql': mysql. client, issue note 'ALTER TABLE ... REORGANIZE PARTITION': alter-table. and note 'OPTIMIZE TABLE': optimize-table. statements for each existing note 'NDBCLUSTER': mysql-cluster. table. (As noted elsewhere in this section, existing NDB Cluster tables cannot use the new nodes for data distribution until this has been done.)

 File: manual.info.tmp, Node: mysql-cluster-backup, Next: mysql-cluster-importing-data, Prev: mysql-cluster-online-add-node, Up: mysql-cluster-management

21.6.8 Online Backup of NDB Cluster

The next few sections describe how to prepare for and then to create an NDB Cluster backup using the functionality for this purpose found in the note 'ndb_mgm': mysql-cluster-programs-ndb-mgm. management client. To distinguish this type of backup from a backup made using note 'mysqldump': mysqldump, we sometimes refer to it as a 'native' NDB Cluster backup. (For information about the creation of backups with note 'mysqldump': mysqldump, see note mysqldump::.) Restoration of NDB Cluster backups is done using the note 'ndb_restore': mysql-cluster-programs-ndb-restore. utility provided with the NDB Cluster distribution; for information about note 'ndb_restore': mysql-cluster-programs-ndb-restore. and its use in restoring NDB Cluster backups, see *note mysql-cluster-programs-ndb-restore::.

 File: manual.info.tmp, Node: mysql-cluster-backup-concepts, Next: mysql-cluster-backup-using-management-client, Prev: mysql-cluster-backup, Up: mysql-cluster-backup

21.6.8.1 NDB Cluster Backup Concepts
....................................

A backup is a snapshot of the database at a given time. The backup consists of three main parts:

  * Metadata: the names and definitions of all database tables

  * Table records: the data actually stored in the database tables at the time that the backup was made

  * A transaction log: a sequential record telling how and when data was stored in the database

Each of these parts is saved on all nodes participating in the backup. During backup, each node saves these three parts into three files on disk:

  * 'BACKUP-BACKUP_ID.NODE_ID.ctl': a control file containing control information and metadata

  * 'BACKUP-BACKUP_ID-0.NODE_ID.data': a data file containing the table records

  * 'BACKUP-BACKUP_ID.NODE_ID.log': a log file containing records of committed transactions

In the listing just shown, BACKUP_ID stands for the backup identifier and NODE_ID is the unique identifier for the node creating the file.

The location of the backup files is determined by the 'BackupDataDir' parameter.

 File: manual.info.tmp, Node: mysql-cluster-backup-using-management-client, Next: mysql-cluster-backup-configuration, Prev: mysql-cluster-backup-concepts, Up: mysql-cluster-backup

21.6.8.2 Using The NDB Cluster Management Client to Create a Backup
...................................................................

Before starting a backup, make sure that the cluster is properly configured for performing one. (See *note mysql-cluster-backup-configuration::.)

The 'START BACKUP' command is used to create a backup:

 START BACKUP [BACKUP_ID] [WAIT_OPTION] [SNAPSHOT_OPTION]

 WAIT_OPTION:
 WAIT {STARTED | COMPLETED} | NOWAIT

 SNAPSHOT_OPTION:
 SNAPSHOTSTART | SNAPSHOTEND

Successive backups are automatically identified sequentially, so the BACKUP_ID, an integer greater than or equal to 1, is optional; if it is omitted, the next available value is used. If an existing BACKUP_ID value is used, the backup fails with the error 'Backup failed: file already exists'. If used, the BACKUP_ID must follow 'START BACKUP' immediately, before any other options are used.

The WAIT_OPTION can be used to determine when control is returned to the management client after a 'START BACKUP' command is issued, as shown in the following list:

  * If 'NOWAIT' is specified, the management client displays a prompt immediately, and the backup proceeds in the background.

  * With 'WAIT STARTED', the management client waits until the backup has started before returning control to the user.

  * With 'WAIT COMPLETED', the management client waits until the backup process is complete before returning control to the user.

'WAIT COMPLETED' is the default.

A SNAPSHOT_OPTION can be used to determine whether the backup matches the state of the cluster when 'START BACKUP' was issued, or when it was completed. 'SNAPSHOTSTART' causes the backup to match the state of the cluster when the backup began; 'SNAPSHOTEND' causes the backup to reflect the state of the cluster when the backup was finished. 'SNAPSHOTEND' is the default, and matches the behavior found in previous NDB Cluster releases.

Note:

If you use the 'SNAPSHOTSTART' option with 'START BACKUP', and the 'CompressedBackup' parameter is enabled, only the data and control files are compressed--the log file is not compressed.

If both a WAIT_OPTION and a SNAPSHOT_OPTION are used, they may be specified in either order. For example, all of the following commands are valid, assuming that there is no existing backup having 4 as its ID:

 START BACKUP WAIT STARTED SNAPSHOTSTART
 START BACKUP SNAPSHOTSTART WAIT STARTED
 START BACKUP 4 WAIT COMPLETED SNAPSHOTSTART
 START BACKUP SNAPSHOTEND WAIT COMPLETED
 START BACKUP 4 NOWAIT SNAPSHOTSTART

The procedure for creating a backup consists of the following steps:

  1. Start the management client (*note 'ndb_mgm': mysql-cluster-programs-ndb-mgm.), if it not running already.

  2. Execute the START BACKUP command. This produces several lines of output indicating the progress of the backup, as shown here:

      ndb_mgm> START BACKUP
      Waiting for completed, this may take several minutes
      Node 2: Backup 1 started from node 1
      Node 2: Backup 1 started from node 1 completed
       StartGCP: 177 StopGCP: 180
       #Records: 7362 #LogRecords: 0
       Data: 453648 bytes Log: 0 bytes
      ndb_mgm>
  3. When the backup has started the management client displays this message:

      Backup BACKUP_ID started from node NODE_ID

    BACKUP_ID is the unique identifier for this particular backup. This identifier is saved in the cluster log, if it has not been configured otherwise. NODE_ID is the identifier of the management server that is coordinating the backup with the data nodes. At this point in the backup process the cluster has received and processed the backup request. It does not mean that the backup has finished. An example of this statement is shown here:

      Node 2: Backup 1 started from node 1
  4. The management client indicates with a message like this one that the backup has started:

      Backup BACKUP_ID started from node NODE_ID completed

    As is the case for the notification that the backup has started, BACKUP_ID is the unique identifier for this particular backup, and NODE_ID is the node ID of the management server that is coordinating the backup with the data nodes. This output is accompanied by additional information including relevant global checkpoints, the number of records backed up, and the size of the data, as shown here:

      Node 2: Backup 1 started from node 1 completed
       StartGCP: 177 StopGCP: 180
       #Records: 7362 #LogRecords: 0
       Data: 453648 bytes Log: 0 bytes

It is also possible to perform a backup from the system shell by invoking *note 'ndb_mgm': mysql-cluster-programs-ndb-mgm. with the '-e' or '--execute' option, as shown in this example:

 $> ndb_mgm -e "START BACKUP 6 WAIT COMPLETED SNAPSHOTSTART"

When using 'START BACKUP' in this way, you must specify the backup ID.

Cluster backups are created by default in the 'BACKUP' subdirectory of the 'DataDir' on each data node. This can be overridden for one or more data nodes individually, or for all cluster data nodes in the 'config.ini' file using the 'BackupDataDir' configuration parameter. The backup files created for a backup with a given BACKUP_ID are stored in a subdirectory named 'BACKUP-BACKUP_ID' in the backup directory.
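For example, to place all backup files under a common directory on each data node (the path shown here is merely illustrative), you could add a line such as this one to the '[ndbd default]' section of 'config.ini':

 [ndbd default]
 BackupDataDir = /var/lib/mysql-cluster/backups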

Cancelling backups

To cancel or abort a backup that is already in progress, perform the following steps:

  1. Start the management client.

  2. Execute this command:

      ndb_mgm> ABORT BACKUP BACKUP_ID

    The number BACKUP_ID is the identifier of the backup that was included in the response of the management client when the backup was started (in the message 'Backup BACKUP_ID started from node MANAGEMENT_NODE_ID').

  3. The management client acknowledges the abort request with 'Abort of backup BACKUP_ID ordered'.

    Note:

    At this point, the management client has not yet received a response from the cluster data nodes to this request, and the backup has not yet actually been aborted.

  4. After the backup has been aborted, the management client reports this fact in a manner similar to what is shown here:

      Node 1: Backup 3 started from 5 has been aborted.
        Error: 1321 - Backup aborted by user request: Permanent error: User defined error
      Node 3: Backup 3 started from 5 has been aborted.
        Error: 1323 - 1323: Permanent error: Internal error
      Node 2: Backup 3 started from 5 has been aborted.
        Error: 1323 - 1323: Permanent error: Internal error
      Node 4: Backup 3 started from 5 has been aborted.
        Error: 1323 - 1323: Permanent error: Internal error

    In this example, we have shown sample output for a cluster with 4 data nodes, where the sequence number of the backup to be aborted is '3', and the management node to which the cluster management client is connected has the node ID '5'. The first node to complete its part in aborting the backup reports that the reason for the abort was due to a request by the user. (The remaining nodes report that the backup was aborted due to an unspecified internal error.)

    Note:

    There is no guarantee that the cluster nodes respond to an 'ABORT BACKUP' command in any particular order.

    The 'Backup BACKUP_ID started from node MANAGEMENT_NODE_ID has been aborted' messages mean that the backup has been terminated and that all files relating to this backup have been removed from the cluster file system.

It is also possible to abort a backup in progress from a system shell using this command:

 $> ndb_mgm -e "ABORT BACKUP BACKUP_ID"

Note:

If there is no backup having the ID BACKUP_ID running when an 'ABORT BACKUP' is issued, the management client makes no response, nor is it indicated in the cluster log that an invalid abort command was sent.

 File: manual.info.tmp, Node: mysql-cluster-backup-configuration, Next: mysql-cluster-backup-troubleshooting, Prev: mysql-cluster-backup-using-management-client, Up: mysql-cluster-backup

21.6.8.3 Configuration for NDB Cluster Backups
..............................................

Five configuration parameters are essential for backup:

  * 'BackupDataBufferSize': the amount of memory used to buffer data before it is written to disk

  * 'BackupLogBufferSize': the amount of memory used to buffer log records before these are written to disk

  * 'BackupMemory': the total memory allocated in a data node for backups; this should be the sum of the memory allocated for the backup data buffer and the backup log buffer

  * 'BackupWriteSize': the default size of blocks written to disk, applying to both the backup data buffer and the backup log buffer

  * 'BackupMaxWriteSize': the maximum size of blocks written to disk, applying to both the backup data buffer and the backup log buffer

In addition, 'CompressedBackup' causes 'NDB' to use compression when creating and writing to backup files.

More detailed information about these parameters can be found in Backup Parameters.

You can also set a location for the backup files using the 'BackupDataDir' configuration parameter. The default is 'FileSystemPath''/BACKUP/BACKUP-BACKUP_ID'.

 File: manual.info.tmp, Node: mysql-cluster-backup-troubleshooting, Prev: mysql-cluster-backup-configuration, Up: mysql-cluster-backup

21.6.8.4 NDB Cluster Backup Troubleshooting
...........................................

If an error code is returned when issuing a backup request, the most likely cause is insufficient memory or disk space. You should check that there is enough memory allocated for the backup.

Important:

If you have set 'BackupDataBufferSize' and 'BackupLogBufferSize' and their sum is greater than 4MB, then you must also set 'BackupMemory'.

You should also make sure that there is sufficient space on the hard drive partition of the backup target.

*note 'NDB': mysql-cluster. does not support repeatable reads, which can cause problems with the restoration process. Although the backup process is 'hot', restoring an NDB Cluster from backup is not a 100% 'hot' process, because running transactions get nonrepeatable reads from the restored data for the duration of the restore. This means that the state of the data is inconsistent while the restore is in progress.

 File: manual.info.tmp, Node: mysql-cluster-importing-data, Next: mysql-cluster-mysqld, Prev: mysql-cluster-backup, Up: mysql-cluster-management

21.6.9 Importing Data Into MySQL Cluster

It is common when setting up a new instance of NDB Cluster to need to import data from an existing NDB Cluster, an instance of MySQL, or another source. This data is most often available in one or more of the following formats:

When importing data from an SQL file, it is often not necessary to enforce transactions or foreign keys, and temporarily disabling these features can speed up the import process greatly. This can be done using the note 'mysql': mysql. client, either from a client session, or by invoking it on the command line. Within a note 'mysql': mysql. client session, you can perform the import using the following SQL statements:

 *note SET: set-variable. ndb_use_transactions=0;
 SET foreign_key_checks=0;

 source PATH/TO/DUMPFILE;

 SET ndb_use_transactions=1;
 SET foreign_key_checks=1;

When performing the import in this fashion, you must enable 'ndb_use_transactions' and 'foreign_key_checks' again following execution of the *note 'mysql': mysql. client's 'source' command. Otherwise, later statements in the same session may also be executed without enforcing transactions or foreign key constraints, which could lead to data inconsistency.

From the system shell, you can import the SQL file while disabling enforcement of transaction and foreign keys by using the *note 'mysql': mysql. client with the '--init-command' option, like this:

 $> mysql --init-command='SET ndb_use_transactions=0; SET foreign_key_checks=0' < PATH/TO/DUMPFILE

It is also possible to load the data into an *note 'InnoDB': innodb-storage-engine. table, and convert it to use the NDB storage engine afterwards using 'ALTER TABLE ... ENGINE NDB'. If there are many tables, you should take into account that this requires one such operation per table; in addition, if foreign keys are used, you must order the 'ALTER TABLE' statements carefully, because foreign keys do not work between tables using different MySQL storage engines.
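For example, for a hypothetical table 't1' that was initially loaded using 'InnoDB', the conversion is performed like this:

 ALTER TABLE t1 ENGINE = NDBCLUSTER;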

You should be aware that the methods described previously in this section are not optimized for very large data sets or large transactions. Should an application really need big transactions or many concurrent transactions as part of normal operation, you may wish to increase the value of the 'MaxNoOfConcurrentOperations' data node configuration parameter, which reserves more memory to allow a data node to take over a transaction if its transaction coordinator stops unexpectedly.

You may also wish to do this when performing bulk note 'DELETE': delete. or note 'UPDATE': update. operations on NDB Cluster tables. If possible, try to have applications perform these operations in chunks, for example, by adding 'LIMIT' to such statements.
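For example, a bulk delete on a large table (here, a hypothetical table 't1') might be performed in chunks by repeating a statement such as this one until it affects zero rows:

 DELETE FROM t1 LIMIT 10000;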

If a data import operation does not complete successfully, for whatever reason, you should be prepared to perform any necessary cleanup including possibly one or more note 'DROP TABLE': drop-table. statements, note 'DROP DATABASE': drop-database. statements, or both. Failing to do so may leave the database in an inconsistent state.

 File: manual.info.tmp, Node: mysql-cluster-mysqld, Next: mysql-cluster-disk-data, Prev: mysql-cluster-importing-data, Up: mysql-cluster-management

21.6.10 MySQL Server Usage for NDB Cluster

note 'mysqld': mysqld. is the traditional MySQL server process. To be used with NDB Cluster, note 'mysqld': mysqld. needs to be built with support for the *note 'NDB': mysql-cluster. storage engine, as it is in the precompiled binaries available from https://dev.mysql.com/downloads/. If you build MySQL from source, you must invoke 'CMake' with the '-DWITH_NDBCLUSTER=1' option to include support for 'NDB'.
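For example, a minimal 'CMake' invocation enabling 'NDB' support might look like this, when run from the top level of the source tree (a real build typically requires additional options):

 $> cmake . -DWITH_NDBCLUSTER=1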

For more information about compiling NDB Cluster from source, see note mysql-cluster-install-linux-source::, and note mysql-cluster-install-windows-source::.

(For information about note 'mysqld': mysqld. options and variables, in addition to those discussed in this section, which are relevant to NDB Cluster, see note mysql-cluster-options-variables::.)

If the note 'mysqld': mysqld. binary has been built with Cluster support, the note 'NDBCLUSTER': mysql-cluster. storage engine is still disabled by default. You can use either of two possible options to enable this engine:

  * Use '--ndbcluster' as a startup option on the command line when starting *note 'mysqld': mysqld.

  * Insert a line containing 'ndbcluster' in the '[mysqld]' section of your 'my.cnf' file.

An easy way to verify that your server is running with the note 'NDBCLUSTER': mysql-cluster. storage engine enabled is to issue the note 'SHOW ENGINES': show-engines. statement in the MySQL Monitor (note 'mysql': mysql.). You should see the value 'YES' as the 'Support' value in the row for note 'NDBCLUSTER': mysql-cluster. If you see 'NO' in this row or if there is no such row displayed in the output, you are not running an *note 'NDB': mysql-cluster.-enabled version of MySQL. If you see 'DISABLED' in this row, you need to enable it in either one of the two ways just described.
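For example, the following query returns the same 'Support' value for the 'NDBCLUSTER' row from the Information Schema:

 SELECT ENGINE, SUPPORT
     FROM INFORMATION_SCHEMA.ENGINES
     WHERE ENGINE = 'NDBCLUSTER';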

To read cluster configuration data, the MySQL server requires at a minimum three pieces of information:

  * The MySQL server's own cluster node ID

  * The host name or IP address where the management server (management node) is located

  * The TCP/IP port on which the MySQL server can connect to the management server

Node IDs can be allocated dynamically, so it is not strictly necessary to specify them explicitly.

The note 'mysqld': mysqld. parameter 'ndb-connectstring' is used to specify the connection string either on the command line when starting note 'mysqld': mysqld. or in 'my.cnf'. The connection string contains the host name or IP address where the management server can be found, as well as the TCP/IP port it uses.

In the following example, 'ndb_mgmd.mysql.com' is the host where the management server resides, and the management server listens for cluster messages on port 1186:

 $> mysqld --ndbcluster --ndb-connectstring=ndb_mgmd.mysql.com:1186
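The same connection information can instead be supplied in the '[mysqld]' section of the 'my.cnf' file, like this:

 [mysqld]
 ndbcluster
 ndb-connectstring=ndb_mgmd.mysql.com:1186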

See *note mysql-cluster-connection-strings::, for more information on connection strings.

Given this information, the MySQL server can act as a full participant in the cluster. (We often refer to a *note 'mysqld': mysqld. process running in this manner as an SQL node.) It is fully aware of all cluster data nodes as well as their status, and establishes connections to all data nodes. In this case, it is able to use any data node as a transaction coordinator and to read and update node data.

You can see in the note 'mysql': mysql. client whether a MySQL server is connected to the cluster using note 'SHOW PROCESSLIST': show-processlist. If the MySQL server is connected to the cluster, and you have the 'PROCESS' privilege, then the first row of the output is as shown here:

 mysql> SHOW PROCESSLIST \G
 *************************** 1. row ***************************
      Id: 1
    User: system user
    Host:
      db:
 Command: Daemon
    Time: 1
   State: Waiting for event from ndbcluster
    Info: NULL

Important:

To participate in an NDB Cluster, the note 'mysqld': mysqld. process must be started with both the options '--ndbcluster' and '--ndb-connectstring' (or their equivalents in 'my.cnf'). If note 'mysqld': mysqld. is started with only the '--ndbcluster' option, or if it is unable to contact the cluster, it is not possible to work with note 'NDB': mysql-cluster. tables, nor is it possible to create any new tables regardless of storage engine. The latter restriction is a safety measure intended to prevent the creation of tables having the same names as note 'NDB': mysql-cluster. tables while the SQL node is not connected to the cluster. If you wish to create tables using a different storage engine while the *note 'mysqld': mysqld. process is not participating in an NDB Cluster, you must restart the server without the '--ndbcluster' option.

 File: manual.info.tmp, Node: mysql-cluster-disk-data, Next: mysql-cluster-online-operations, Prev: mysql-cluster-mysqld, Up: mysql-cluster-management

21.6.11 NDB Cluster Disk Data Tables

It is possible to store the nonindexed columns of *note 'NDB': mysql-cluster. tables on disk, rather than in RAM.

As part of implementing NDB Cluster Disk Data work, a number of improvements were made in NDB Cluster for the efficient handling of very large amounts (terabytes) of data during node recovery and restart. These include a 'no-steal' algorithm for synchronizing a starting node with very large data sets. For more information, see the paper 'Recovery Principles of NDB Cluster 5.1 (http://www.vldb2005.org/program/paper/wed/p1108-ronstrom.pdf)', by NDB Cluster developers Mikael Ronström and Jonas Oreland.

NDB Cluster Disk Data performance can be influenced by a number of configuration parameters. For information about these parameters and their effects, see 'NDB Cluster Disk Data configuration parameters' and 'NDB Cluster Disk Data storage and 'GCP Stop' errors'.

The performance of an NDB Cluster that uses Disk Data storage can also be greatly improved by separating data node file systems from undo log files and tablespace data files, which can be done using symbolic links. For more information, see *note mysql-cluster-disk-data-symlinks::.

 File: manual.info.tmp, Node: mysql-cluster-disk-data-objects, Next: mysql-cluster-disk-data-symlinks, Prev: mysql-cluster-disk-data, Up: mysql-cluster-disk-data

21.6.11.1 NDB Cluster Disk Data Objects
.......................................

NDB Cluster Disk Data storage is implemented using a number of Disk Data objects. These include the following:

  * Tablespaces act as containers for other Disk Data objects.

  * Undo log files store information required for rolling back transactions; one or more undo log files are assigned to a log file group, which is in turn assigned to a tablespace.

  * Data files store Disk Data table data; a data file is assigned directly to a tablespace.

Undo log files and data files are actual files in the file system of each data node; by default they are placed in 'ndb_NODE_ID_fs' in the DATADIR specified in the NDB Cluster 'config.ini' file, and where NODE_ID is the data node's node ID. It is possible to place these elsewhere by specifying either an absolute or relative path as part of the filename when creating the undo log or data file. Statements that create these files are shown later in this section.

NDB Cluster tablespaces and log file groups are not implemented as files.

Important:

Although not all Disk Data objects are implemented as files, they all share the same namespace. This means that each Disk Data object must be uniquely named (and not merely each Disk Data object of a given type). For example, you cannot have a tablespace and a log file group both named 'dd1'.

Assuming that you have already set up an NDB Cluster with all nodes (including management and SQL nodes), the basic steps for creating an NDB Cluster table on disk are as follows:

  1. Create a log file group, and assign one or more undo log files to it (an undo log file is also sometimes referred to as an undofile).

    Note:

    Undo log files are necessary only for Disk Data tables; they are not used for *note 'NDBCLUSTER': mysql-cluster. tables that are stored only in memory.

  2. Create a tablespace; assign the log file group, as well as one or more data files, to the tablespace.

  3. Create a Disk Data table that uses this tablespace for data storage.

Each of these tasks can be accomplished using SQL statements in the *note 'mysql': mysql. client or other MySQL client application, as shown in the example that follows.

  1. We create a log file group named 'lg_1' using note 'CREATE LOGFILE GROUP': create-logfile-group. This log file group is to be made up of two undo log files, which we name 'undo_1.log' and 'undo_2.log', whose initial sizes are 16 MB and 12 MB, respectively. (The default initial size for an undo log file is 128 MB.) Optionally, you can also specify a size for the log file group's undo buffer, or permit it to assume the default value of 8 MB. In this example, we set the UNDO buffer's size at 2 MB. A log file group must be created with an undo log file; so we add 'undo_1.log' to 'lg_1' in this note 'CREATE LOGFILE GROUP': create-logfile-group. statement:

      CREATE LOGFILE GROUP lg_1
          ADD UNDOFILE 'undo_1.log'
          INITIAL_SIZE 16M
          UNDO_BUFFER_SIZE 2M
          ENGINE NDBCLUSTER;

    To add 'undo_2.log' to the log file group, use the following *note 'ALTER LOGFILE GROUP': alter-logfile-group. statement:

      ALTER LOGFILE GROUP lg_1
          ADD UNDOFILE 'undo_2.log'
          INITIAL_SIZE 12M
          ENGINE NDBCLUSTER;

    Some items of note:

    * The '.log' file extension used here is not required.  We use
      it merely to make the log files easily recognizable.
    
    * Every *note 'CREATE LOGFILE GROUP': create-logfile-group. and
      *note 'ALTER LOGFILE GROUP': alter-logfile-group. statement
      must include an 'ENGINE' option.  The only permitted values
      for this option are *note 'NDBCLUSTER': mysql-cluster. and
      *note 'NDB': mysql-cluster.
    
      *Important*:
    
      There can exist at most one log file group in the same NDB
      Cluster at any given time.
    
    * When you add an undo log file to a log file group using 'ADD
      UNDOFILE 'FILENAME'', a file with the name FILENAME is created
      in the 'ndb_NODE_ID_fs' directory within the 'DataDir' of each
      data node in the cluster, where NODE_ID is the node ID of the
      data node.  Each undo log file is of the size specified in the
      SQL statement.  For example, if an NDB Cluster has 4 data
      nodes, then the *note 'ALTER LOGFILE GROUP':
      alter-logfile-group. statement just shown creates 4 undo log
      files, one in the data directory of each of the 4 data
      nodes; each of these files is named 'undo_2.log' and each file
      is 12 MB in size.
    
    * 'UNDO_BUFFER_SIZE' is limited by the amount of system memory
      available.
    
    * For more information about the *note 'CREATE LOGFILE GROUP':
      create-logfile-group. statement, see *note
      create-logfile-group::.  For more information about *note
      'ALTER LOGFILE GROUP': alter-logfile-group, see *note
      alter-logfile-group::.
  2. Now we can create a tablespace, which contains files to be used by NDB Cluster Disk Data tables for storing their data. A tablespace is also associated with a particular log file group. When creating a new tablespace, you must specify the log file group which it is to use for undo logging; you must also specify a data file. You can add more data files to the tablespace after the tablespace is created; it is also possible to drop data files from a tablespace (an example of dropping data files is provided later in this section).

    Assume that we wish to create a tablespace named 'ts_1' which uses 'lg_1' as its log file group. This tablespace is to contain two data files named 'data_1.dat' and 'data_2.dat', whose initial sizes are 32 MB and 48 MB, respectively. (The default value for 'INITIAL_SIZE' is 128 MB.) We can do this using two SQL statements, as shown here:

      CREATE TABLESPACE ts_1
          ADD DATAFILE 'data_1.dat'
          USE LOGFILE GROUP lg_1
          INITIAL_SIZE 32M
          ENGINE NDBCLUSTER;
    
      ALTER TABLESPACE ts_1
          ADD DATAFILE 'data_2.dat'
          INITIAL_SIZE 48M
          ENGINE NDBCLUSTER;

    The note 'CREATE TABLESPACE': create-tablespace. statement creates a tablespace 'ts_1' with the data file 'data_1.dat', and associates 'ts_1' with log file group 'lg_1'. The note 'ALTER TABLESPACE': alter-tablespace. adds the second data file ('data_2.dat').

    Some items of note:

    * As is the case with the '.log' file extension used in this
      example for undo log files, there is no special significance
      for the '.dat' file extension; it is used merely for easy
      recognition of data files.
    
    * When you add a data file to a tablespace using 'ADD DATAFILE
      'FILENAME'', a file with the name FILENAME is created in the
      'ndb_NODE_ID_fs' directory within the 'DataDir' of each data
      node in the cluster, where NODE_ID is the node ID of the data
      node.  Each data file is of the size specified in the SQL
      statement.  For example, if an NDB Cluster has 4 data nodes,
      then the *note 'ALTER TABLESPACE': alter-tablespace. statement
      just shown creates 4 data files, 1 each in the data directory
      of each of the 4 data nodes; each of these files is named
      'data_2.dat' and each file is 48 MB in size.
    
    * NDB 7.6 (and later) reserves 4% of each tablespace for use
      during data node restarts.  This space is not available for
      storing data.
    
    * All *note 'CREATE TABLESPACE': create-tablespace. and *note
      'ALTER TABLESPACE': alter-tablespace. statements must contain
      an 'ENGINE' clause; only tables using the same storage engine
      as the tablespace can be created in the tablespace.  For NDB
      Cluster tablespaces, the only permitted values for this option
      are *note 'NDBCLUSTER': mysql-cluster. and *note 'NDB':
      mysql-cluster.
    
    * For more information about the *note 'CREATE TABLESPACE':
      create-tablespace. and *note 'ALTER TABLESPACE':
      alter-tablespace. statements, see *note create-tablespace::,
      and *note alter-tablespace::.
  3. Now it is possible to create a table whose nonindexed columns are stored on disk in the tablespace 'ts_1':

      CREATE TABLE dt_1 (
          member_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          last_name VARCHAR(50) NOT NULL,
          first_name VARCHAR(50) NOT NULL,
          dob DATE NOT NULL,
          joined DATE NOT NULL,
          INDEX(last_name, first_name)
          )
          TABLESPACE ts_1 STORAGE DISK
          ENGINE NDBCLUSTER;

    The 'TABLESPACE ... STORAGE DISK' option tells the *note 'NDBCLUSTER': mysql-cluster. storage engine to use tablespace 'ts_1' for disk data storage.

    Once table 'dt_1' has been created as shown, you can perform note 'INSERT': insert, note 'SELECT': select, note 'UPDATE': update, and note 'DELETE': delete. statements on it just as you would with any other MySQL table.

    It is also possible to specify whether an individual column is stored on disk or in memory by using a 'STORAGE' clause as part of the column's definition in a note 'CREATE TABLE': create-table. or note 'ALTER TABLE': alter-table. statement. 'STORAGE DISK' causes the column to be stored on disk, and 'STORAGE MEMORY' causes in-memory storage to be used. See *note create-table::, for more information.

Indexing of columns implicitly stored on disk

For table 'dt_1' as defined in the example just shown, only the 'dob' and 'joined' columns are stored on disk. This is because there are indexes on the 'member_id', 'last_name', and 'first_name' columns, and so data belonging to these columns is stored in RAM. Only nonindexed columns can be held on disk; indexes and indexed column data continue to be stored in memory. This tradeoff between the use of indexes and conservation of RAM is something you must keep in mind as you design Disk Data tables.

You cannot add an index to a column that has been explicitly declared 'STORAGE DISK' without first changing its storage type to 'MEMORY'; any attempt to do so fails with an error. A column that implicitly uses disk storage can be indexed; when this is done, the column's storage type is changed to 'MEMORY' automatically. By 'implicitly', we mean a column whose storage type is not declared, but which is inherited from the parent table. In the following CREATE TABLE statement (using the tablespace 'ts_1' defined previously), columns 'c2' and 'c3' use disk storage implicitly:

 mysql> CREATE TABLE ti (
     ->     c1 INT PRIMARY KEY,
     ->     c2 INT,
     ->     c3 INT,
     ->     c4 INT
     -> )
     ->     STORAGE DISK
     ->     TABLESPACE ts_1
     ->     ENGINE NDBCLUSTER;
 Query OK, 0 rows affected (1.31 sec)

Because 'c2', 'c3', and 'c4' are themselves not declared with 'STORAGE DISK', it is possible to index them. Here, we add indexes to 'c2' and 'c3', using, respectively, 'CREATE INDEX' and 'ALTER TABLE':

 mysql> CREATE INDEX i1 ON ti(c2);
 Query OK, 0 rows affected (2.72 sec)
 Records: 0  Duplicates: 0  Warnings: 0

 mysql> ALTER TABLE ti ADD INDEX i2(c3);
 Query OK, 0 rows affected (0.92 sec)
 Records: 0  Duplicates: 0  Warnings: 0

*note 'SHOW CREATE TABLE': show-create-table. confirms that the indexes were added.

 mysql> SHOW CREATE TABLE ti\G
 *************************** 1. row ***************************
        Table: ti
 Create Table: CREATE TABLE `ti` (
   `c1` int(11) NOT NULL,
   `c2` int(11) DEFAULT NULL,
   `c3` int(11) DEFAULT NULL,
   `c4` int(11) DEFAULT NULL,
   PRIMARY KEY (`c1`),
   KEY `i1` (`c2`),
   KEY `i2` (`c3`)
 ) /*!50100 TABLESPACE `ts_1` STORAGE DISK */ ENGINE=ndbcluster DEFAULT CHARSET=latin1
 1 row in set (0.00 sec)

You can see using *note 'ndb_desc': mysql-cluster-programs-ndb-desc. that the indexed columns 'c2' and 'c3' now use in-memory rather than on-disk storage:

 $> ./ndb_desc -d test ti
 -- ti --
 Version: 33554433
 Fragment type: HashMapPartition
 K Value: 6
 Min load factor: 78
 Max load factor: 80
 Temporary table: no
 Number of attributes: 4
 Number of primary keys: 1
 Length of frm data: 317
 Max Rows: 0
 Row Checksum: 1
 Row GCI: 1
 SingleUserMode: 0
 ForceVarPart: 1
 PartitionCount: 4
 FragmentCount: 4
 PartitionBalance: FOR_RP_BY_LDM
 ExtraRowGciBits: 0
 ExtraRowAuthorBits: 0
 TableStatus: Retrieved
 Table options:
 HashMap: DEFAULT-HASHMAP-3840-4
 -- Attributes --
 c1 Int PRIMARY KEY DISTRIBUTION KEY AT=FIXED ST=MEMORY
 c2 Int NULL AT=FIXED ST=MEMORY
 c3 Int NULL AT=FIXED ST=MEMORY
 c4 Int NULL AT=FIXED ST=DISK
 -- Indexes --
 PRIMARY KEY(c1) - UniqueHashIndex
 i2(c3) - OrderedIndex
 PRIMARY(c1) - OrderedIndex
 i1(c2) - OrderedIndex

 NDBT_ProgramExit: 0 - OK

Performance note

The performance of a cluster using Disk Data storage is greatly improved if Disk Data files are kept on a separate physical disk from the data node file system. This must be done for each data node in the cluster to derive any noticeable benefit.

You may use absolute and relative file system paths with 'ADD UNDOFILE' and 'ADD DATAFILE'. Relative paths are calculated relative to the data node's data directory. You may also use symbolic links; see *note mysql-cluster-disk-data-symlinks::, for more information and examples.
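For example, assuming that the directory '/data/dd' exists and is writable on every data node (the path shown here is hypothetical), the following statement places a third data file outside the data node file system:

  ALTER TABLESPACE ts_1
      ADD DATAFILE '/data/dd/data_3.dat'
      INITIAL_SIZE 32M
      ENGINE NDBCLUSTER;

Because the file is created on each data node, the same path must be available on all of them.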

A log file group, a tablespace, and any Disk Data tables using these must be created in a particular order. They must also be dropped in a particular order: tables using a tablespace must be dropped before that tablespace, the tablespace's data files must be dropped before the tablespace itself, and a log file group cannot be dropped while any tablespace still uses it.

For example, to drop all the objects created so far in this section, you would use the following statements:

 mysql> DROP TABLE dt_1;

 mysql> ALTER TABLESPACE ts_1
     -> DROP DATAFILE 'data_2.dat'
     -> ENGINE NDBCLUSTER;

 mysql> ALTER TABLESPACE ts_1
     -> DROP DATAFILE 'data_1.dat'
     -> ENGINE NDBCLUSTER;

 mysql> DROP TABLESPACE ts_1
     -> ENGINE NDBCLUSTER;

 mysql> DROP LOGFILE GROUP lg_1
     -> ENGINE NDBCLUSTER;

These statements must be performed in the order shown, except that the two 'ALTER TABLESPACE ... DROP DATAFILE' statements may be executed in either order.

You can obtain information about data files used by Disk Data tables by querying the note 'FILES': information-schema-files-table. table in the 'INFORMATION_SCHEMA' database. An extra 'NULL' row provides additional information about undo log files. For more information and examples, see note information-schema-files-table::.
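For example, a query such as this one (a sketch; the columns selected are illustrative) shows the names and remaining free extents of all NDB Disk Data files:

  SELECT FILE_NAME, FILE_TYPE, TABLESPACE_NAME, LOGFILE_GROUP_NAME,
         FREE_EXTENTS, TOTAL_EXTENTS
      FROM INFORMATION_SCHEMA.FILES
      WHERE ENGINE = 'ndbcluster';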

 File: manual.info.tmp, Node: mysql-cluster-disk-data-symlinks, Next: mysql-cluster-disk-data-storage-requirements, Prev: mysql-cluster-disk-data-objects, Up: mysql-cluster-disk-data

21.6.11.2 Using Symbolic Links with Disk Data Objects

The performance of an NDB Cluster that uses Disk Data storage can be greatly improved by separating the data node file system from any tablespace files (undo log files and data files), and placing these on different disks. In early versions of NDB Cluster, there was no direct support for this, and it was necessary to achieve the separation using symbolic links. NDB Cluster now supports the data node configuration parameters 'FileSystemPathDD', 'FileSystemPathDataFiles', and 'FileSystemPathUndoFiles', which make the use of symbolic links for this purpose unnecessary. For more information about these parameters, see *note mysql-cluster-ndbd-disk-data-filesystem-parameters::.

 File: manual.info.tmp, Node: mysql-cluster-disk-data-storage-requirements, Prev: mysql-cluster-disk-data-symlinks, Up: mysql-cluster-disk-data

21.6.11.3 NDB Cluster Disk Data Storage Requirements

The following items apply to Disk Data storage requirements:

Important:

Starting the cluster with the '--initial' option does not remove Disk Data files. You must remove these manually prior to performing an initial restart of the cluster.

Performance of Disk Data tables can be improved by minimizing the number of disk seeks by making sure that 'DiskPageBufferMemory' is of sufficient size. You can query the *note 'diskpagebuffer': mysql-cluster-ndbinfo-diskpagebuffer. table to help determine whether the value for this parameter needs to be increased.
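For example, a query similar to the following (a sketch based on this table's 'page_requests_*' columns) yields an approximate page-cache hit ratio for each data node; a consistently low ratio suggests that 'DiskPageBufferMemory' should be increased:

  SELECT node_id,
         SUM(page_requests_direct_return) * 100 /
         SUM(page_requests_direct_return + page_requests_wait_io) AS hit_pct
      FROM ndbinfo.diskpagebuffer
      GROUP BY node_id;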

 File: manual.info.tmp, Node: mysql-cluster-online-operations, Next: mysql-cluster-privilege-distribution, Prev: mysql-cluster-disk-data, Up: mysql-cluster-management

21.6.12 Online Operations with ALTER TABLE in NDB Cluster

MySQL NDB Cluster 7.5 and 7.6 support online table schema changes using 'ALTER TABLE ... ALGORITHM=DEFAULT|INPLACE|COPY'. NDB Cluster handles 'COPY' and 'INPLACE' as described in the next few paragraphs.

For 'ALGORITHM=COPY', the *note 'mysqld': mysqld. NDB Cluster handler creates a copy of the table with the requested schema changes, copies all rows from the original table into the copy, and then swaps the copy in for the original table.

We sometimes refer to this as a 'copying' or 'offline' 'ALTER TABLE'.

DML operations are not permitted concurrently with a copying 'ALTER TABLE'.

The note 'mysqld': mysqld. on which the copying 'ALTER TABLE' statement is issued takes a metadata lock, but this lock is in effect only on that note 'mysqld': mysqld. Other 'NDB' clients can therefore modify row data during a copying 'ALTER TABLE', resulting in inconsistency.

For 'ALGORITHM=INPLACE', the NDB Cluster handler tells the data nodes to make the required changes, and does not perform any copying of data.

We also refer to this as a 'non-copying' or 'online' 'ALTER TABLE'.

A non-copying 'ALTER TABLE' allows concurrent DML operations.

Regardless of the algorithm used, the *note 'mysqld': mysqld. takes a Global Schema Lock (GSL) while executing 'ALTER TABLE'; this prevents execution of any (other) DDL or backups concurrently on this or any other SQL node in the cluster. This is normally not problematic, unless the 'ALTER TABLE' takes a very long time.

Note:

Some older releases of NDB Cluster used a syntax specific to *note 'NDB': mysql-cluster. for online 'ALTER TABLE' operations. That syntax has since been removed.

Operations that add and drop indexes on variable-width columns of note 'NDB': mysql-cluster. tables occur online. Online operations are noncopying; that is, they do not require that indexes be re-created. They do not lock the table being altered from access by other API nodes in an NDB Cluster (but see note mysql-cluster-online-limitations::, later in this section). Such operations do not require single user mode for *note 'NDB': mysql-cluster. table alterations made in an NDB cluster with multiple API nodes; transactions can continue uninterrupted during online DDL operations.

'ALGORITHM=INPLACE' can be used to perform online 'ADD COLUMN', 'ADD INDEX' (including 'CREATE INDEX' statements), and 'DROP INDEX' operations on note 'NDB': mysql-cluster. tables. Online renaming of note 'NDB': mysql-cluster. tables is also supported.
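For example, given an 'NDB' table 't' with an in-memory column 'c' (both names are hypothetical), each of the following statements executes online:

  ALTER TABLE t ADD INDEX i_c (c), ALGORITHM=INPLACE;
  ALTER TABLE t DROP INDEX i_c, ALGORITHM=INPLACE;
  ALTER TABLE t RENAME TO t_new, ALGORITHM=INPLACE;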

Disk-based columns cannot be added to note 'NDB': mysql-cluster. tables online. This means that, if you wish to add an in-memory column to an note 'NDB': mysql-cluster. table that uses a table-level 'STORAGE DISK' option, you must declare the new column as using memory-based storage explicitly. For example, assuming that you have already created tablespace 'ts_1', suppose that you create table 't1' as follows:

 mysql> CREATE TABLE t1 (
      >     c1 INT NOT NULL PRIMARY KEY,
      >     c2 VARCHAR(30)
      >     )
      >     TABLESPACE ts_1 STORAGE DISK
      >     ENGINE NDB;
 Query OK, 0 rows affected (1.73 sec)
 Records: 0  Duplicates: 0  Warnings: 0

You can add a new in-memory column to this table online as shown here:

 mysql> ALTER TABLE t1
      >     ADD COLUMN c3 INT COLUMN_FORMAT DYNAMIC STORAGE MEMORY,
      >     ALGORITHM=INPLACE;
 Query OK, 0 rows affected (1.25 sec)
 Records: 0  Duplicates: 0  Warnings: 0

This statement fails if the 'STORAGE MEMORY' option is omitted:

 mysql> ALTER TABLE t1
      >     ADD COLUMN c4 INT COLUMN_FORMAT DYNAMIC,
      >     ALGORITHM=INPLACE;
 ERROR 1846 (0A000): ALGORITHM=INPLACE is not supported. Reason:
 Adding column(s) or add/reorganize partition not supported online. Try
 ALGORITHM=COPY.

If you omit the 'COLUMN_FORMAT DYNAMIC' option, the dynamic column format is employed automatically, but a warning is issued, as shown here:

 mysql> ALTER TABLE t1 ADD COLUMN c4 INT STORAGE MEMORY, ALGORITHM=INPLACE;
 Query OK, 0 rows affected, 1 warning (1.17 sec)
 Records: 0  Duplicates: 0  Warnings: 0

 mysql> SHOW WARNINGS\G
 *************************** 1. row ***************************
   Level: Warning
    Code: 1478
 Message: DYNAMIC column c4 with STORAGE DISK is not supported, column will
 become FIXED

 mysql> SHOW CREATE TABLE t1\G
 *************************** 1. row ***************************
        Table: t1
 Create Table: CREATE TABLE `t1` (
   `c1` int(11) NOT NULL,
   `c2` varchar(30) DEFAULT NULL,
   `c3` int(11) /*!50606 STORAGE MEMORY */ /*!50606 COLUMN_FORMAT DYNAMIC */ DEFAULT NULL,
   `c4` int(11) /*!50606 STORAGE MEMORY */ DEFAULT NULL,
   PRIMARY KEY (`c1`)
 ) /*!50606 TABLESPACE ts_1 STORAGE DISK */ ENGINE=ndbcluster DEFAULT CHARSET=latin1
 1 row in set (0.03 sec)

Note:

The 'STORAGE' and 'COLUMN_FORMAT' keywords are supported only in NDB Cluster; in any other version of MySQL, attempting to use either of these keywords in a note 'CREATE TABLE': create-table. or note 'ALTER TABLE': alter-table. statement results in an error.

It is also possible to use the statement 'ALTER TABLE ... REORGANIZE PARTITION, ALGORITHM=INPLACE' with no 'PARTITION_NAMES INTO (PARTITION_DEFINITIONS)' option on note 'NDB': mysql-cluster. tables. This can be used to redistribute NDB Cluster data among new data nodes that have been added to the cluster online. This does not perform any defragmentation, which requires an note 'OPTIMIZE TABLE': optimize-table. or null note 'ALTER TABLE': alter-table. statement. For more information, see note mysql-cluster-online-add-node::.
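For example, after adding new data nodes to the cluster online, redistribution and subsequent defragmentation of a table 't1' (the table name is hypothetical) might look like this:

  ALTER TABLE t1 REORGANIZE PARTITION, ALGORITHM=INPLACE;
  OPTIMIZE TABLE t1;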

Limitations of NDB online operations

Online 'DROP COLUMN' operations are not supported.

Online note 'ALTER TABLE': alter-table, note 'CREATE INDEX': create-index, or *note 'DROP INDEX': drop-index. statements that add columns or add or drop indexes are subject to the following limitations:

Columns to be added online cannot use the note 'BLOB': blob. or note 'TEXT': blob. type, and must use (or be convertible to) the dynamic column format, as described next.

For online note 'ALTER TABLE': alter-table. operations on note 'NDB': mysql-cluster. tables, fixed-format columns are converted to dynamic when they are added online, or when indexes are created or dropped online, as shown here:

 mysql> CREATE TABLE t2 (
      >     c1 INT NOT NULL AUTO_INCREMENT PRIMARY KEY
      >     ) ENGINE=NDB;
 Query OK, 0 rows affected (1.44 sec)

 mysql> ALTER TABLE t2
      >     ADD COLUMN c2 INT,
      >     ADD COLUMN c3 INT,
      >     ALGORITHM=INPLACE;
 Query OK, 0 rows affected, 2 warnings (0.93 sec)

 mysql> SHOW WARNINGS;
 *************************** 1. row ***************************
   Level: Warning
    Code: 1478
 Message: Converted FIXED field 'c2' to DYNAMIC to enable online ADD COLUMN
 *************************** 2. row ***************************
   Level: Warning
    Code: 1478
 Message: Converted FIXED field 'c3' to DYNAMIC to enable online ADD COLUMN
 2 rows in set (0.00 sec)

Only the column or columns to be added online must be dynamic. Existing columns need not be; this includes the table's primary key, which may also be 'FIXED', as shown here:

 mysql> CREATE TABLE t3 (
      >     c1 INT NOT NULL AUTO_INCREMENT PRIMARY KEY COLUMN_FORMAT FIXED
      >     ) ENGINE=NDB;
 Query OK, 0 rows affected (2.10 sec)

 mysql> ALTER TABLE t3 ADD COLUMN c2 INT, ALGORITHM=INPLACE;
 Query OK, 0 rows affected, 1 warning (0.78 sec)
 Records: 0  Duplicates: 0  Warnings: 0

 mysql> SHOW WARNINGS;
 *************************** 1. row ***************************
   Level: Warning
    Code: 1478
 Message: Converted FIXED field 'c2' to DYNAMIC to enable online ADD COLUMN
 1 row in set (0.00 sec)

Columns are not converted from 'FIXED' to 'DYNAMIC' column format by renaming operations. For more information about 'COLUMN_FORMAT', see *note create-table::.

The 'KEY', 'CONSTRAINT', and 'IGNORE' keywords are supported in *note 'ALTER TABLE': alter-table. statements using 'ALGORITHM=INPLACE'.

Beginning with NDB Cluster 7.5.7, setting 'MAX_ROWS' to 0 using an online 'ALTER TABLE' statement is disallowed. You must use a copying 'ALTER TABLE' to perform this operation. (Bug #21960004)
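In other words, assuming a hypothetical table 't1', the first of these statements is rejected in NDB 7.5.7 and later, while the second succeeds:

  ALTER TABLE t1 MAX_ROWS=0, ALGORITHM=INPLACE;  -- fails in NDB 7.5.7 and later
  ALTER TABLE t1 MAX_ROWS=0, ALGORITHM=COPY;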

 File: manual.info.tmp, Node: mysql-cluster-privilege-distribution, Next: mysql-cluster-ndb-api-statistics, Prev: mysql-cluster-online-operations, Up: mysql-cluster-management

21.6.13 Distributed Privileges Using Shared Grant Tables

NDB Cluster supports distribution of MySQL users and privileges across all SQL nodes in an NDB Cluster. This support is not enabled by default; you should follow the procedure outlined in this section in order to do so.

Normally, each MySQL server's user privilege tables in the 'mysql' database must use the *note 'MyISAM': myisam-storage-engine. storage engine, which means that a user account and its associated privileges created on one SQL node are not available on the cluster's other SQL nodes. To enable distribution, NDB Cluster provides an SQL file 'ndb_dist_priv.sql', which can be found in the 'share' directory in the MySQL installation directory.

The first step in enabling distributed privileges is to load this script into a MySQL Server that functions as an SQL node (which we refer to after this as the target SQL node or MySQL Server). You can do this by executing the following command from the system shell on the target SQL node after changing to its MySQL installation directory (where OPTIONS stands for any additional options needed to connect to this SQL node):

 $> mysql OPTIONS -uroot < share/ndb_dist_priv.sql

Importing 'ndb_dist_priv.sql' creates a number of stored routines (six stored procedures and one stored function) in the 'mysql' database on the target SQL node. After connecting to the SQL node in the *note 'mysql': mysql. client (as the MySQL 'root' user), you can verify that these were created as shown here:

 mysql> SELECT ROUTINE_NAME, ROUTINE_SCHEMA, ROUTINE_TYPE
     ->     FROM INFORMATION_SCHEMA.ROUTINES
     ->     WHERE ROUTINE_NAME LIKE 'mysql_cluster%'
     ->     ORDER BY ROUTINE_TYPE;
 +---------------------------------------------+----------------+--------------+
 | ROUTINE_NAME                                | ROUTINE_SCHEMA | ROUTINE_TYPE |
 +---------------------------------------------+----------------+--------------+
 | mysql_cluster_privileges_are_distributed    | mysql          | FUNCTION     |
 | mysql_cluster_backup_privileges             | mysql          | PROCEDURE    |
 | mysql_cluster_move_grant_tables             | mysql          | PROCEDURE    |
 | mysql_cluster_move_privileges               | mysql          | PROCEDURE    |
 | mysql_cluster_restore_local_privileges      | mysql          | PROCEDURE    |
 | mysql_cluster_restore_privileges            | mysql          | PROCEDURE    |
 | mysql_cluster_restore_privileges_from_local | mysql          | PROCEDURE    |
 +---------------------------------------------+----------------+--------------+
 7 rows in set (0.01 sec)

The stored procedure named 'mysql_cluster_move_privileges' creates backup copies of the existing privilege tables, then converts them to *note 'NDB': mysql-cluster.

'mysql_cluster_move_privileges' performs the backup and conversion in two steps. The first step is to call 'mysql_cluster_backup_privileges', which creates two sets of copies in the 'mysql' database: a set of local *note 'MyISAM': myisam-storage-engine. copies whose names carry the suffix '_backup' (for example, 'user_backup'), and a set of distributed copies using the 'NDBCLUSTER' storage engine, whose names carry the prefix 'ndb_' as well as the suffix '_backup' (for example, 'ndb_user_backup'). Both sets appear in the query output shown later in this section.

After the copies are created, 'mysql_cluster_move_privileges' invokes 'mysql_cluster_move_grant_tables', which contains the note 'ALTER TABLE ... ENGINE = NDB': alter-table. statements that convert the mysql system tables to note 'NDB': mysql-cluster.

Normally, you should not invoke either 'mysql_cluster_backup_privileges' or 'mysql_cluster_move_grant_tables' manually; these stored procedures are intended only for use by 'mysql_cluster_move_privileges'.

Although the original privilege tables are backed up automatically, it is always a good idea to create backups manually of the existing privilege tables on all affected SQL nodes before proceeding. You can do this using *note 'mysqldump': mysqldump. in a manner similar to what is shown here:

 $> mysqldump OPTIONS -uroot \
     mysql user db tables_priv columns_priv procs_priv proxies_priv > BACKUP_FILE

To perform the conversion, you must be connected to the target SQL node using the *note 'mysql': mysql. client (again, as the MySQL 'root' user). Invoke the stored procedure like this:

 mysql> CALL mysql.mysql_cluster_move_privileges();
 Query OK, 0 rows affected (22.32 sec)

Depending on the number of rows in the privilege tables, this procedure may take some time to execute. If some of the privilege tables are empty, you may see one or more 'No data - zero rows fetched, selected, or processed' warnings when 'mysql_cluster_move_privileges' returns. In such cases, the warnings may be safely ignored. To verify that the conversion was successful, you can use the stored function 'mysql_cluster_privileges_are_distributed' as shown here:

 mysql> SELECT CONCAT(
     ->    'Conversion ',
     ->    IF(mysql.mysql_cluster_privileges_are_distributed(), 'succeeded', 'failed'),
     ->    '.')
     ->    AS Result;
 +-----------------------+
 | Result                |
 +-----------------------+
 | Conversion succeeded. |
 +-----------------------+
 1 row in set (0.00 sec)

'mysql_cluster_privileges_are_distributed' checks for the existence of the distributed privilege tables and returns '1' if all of the privilege tables are distributed; otherwise, it returns '0'.

You can verify that the backups have been created using a query such as this one:

 mysql> SELECT TABLE_NAME, ENGINE FROM INFORMATION_SCHEMA.TABLES
     ->     WHERE TABLE_SCHEMA = 'mysql' AND TABLE_NAME LIKE '%backup'
     ->     ORDER BY ENGINE;
 +-------------------------+------------+
 | TABLE_NAME              | ENGINE     |
 +-------------------------+------------+
 | db_backup               | MyISAM     |
 | user_backup             | MyISAM     |
 | columns_priv_backup     | MyISAM     |
 | tables_priv_backup      | MyISAM     |
 | proxies_priv_backup     | MyISAM     |
 | procs_priv_backup       | MyISAM     |
 | ndb_columns_priv_backup | ndbcluster |
 | ndb_user_backup         | ndbcluster |
 | ndb_tables_priv_backup  | ndbcluster |
 | ndb_proxies_priv_backup | ndbcluster |
 | ndb_procs_priv_backup   | ndbcluster |
 | ndb_db_backup           | ndbcluster |
 +-------------------------+------------+
 12 rows in set (0.00 sec)

Once the conversion to distributed privileges has been made, any time a MySQL user account is created, dropped, or has its privileges updated on any SQL node, the changes take effect immediately on all other MySQL servers attached to the cluster. Once privileges are distributed, any new MySQL Servers that connect to the cluster automatically participate in the distribution.

Note:

For clients connected to SQL nodes at the time that 'mysql_cluster_move_privileges' is executed, you may need to execute 'FLUSH PRIVILEGES' on those SQL nodes, or to disconnect and then reconnect the clients, in order for those clients to be able to see the changes in privileges.

All MySQL user privileges are distributed across all connected MySQL Servers. This includes any privileges associated with views and stored routines, even though distribution of views and stored routines themselves is not currently supported.

In the event that an SQL node becomes disconnected from the cluster while 'mysql_cluster_move_privileges' is running, you must drop its privilege tables after reconnecting to the cluster, using a statement such as *note 'DROP TABLE IF EXISTS mysql.user, mysql.db, mysql.tables_priv, mysql.columns_priv, mysql.procs_priv': drop-table. This causes the SQL node to use the shared privilege tables rather than its own local versions of them. This is not needed when connecting a new SQL node to the cluster for the first time.

In the event of an initial restart of the entire cluster (all data nodes shut down, then started again with '--initial'), the shared privilege tables are lost. If this happens, you can restore them using the original target SQL node either from the backups made by 'mysql_cluster_move_privileges' or from a dump file created with *note 'mysqldump': mysqldump. If you need to use a new MySQL Server to perform the restoration, you should start it with '--skip-grant-tables' when connecting to the cluster for the first time; after this, you can restore the privilege tables locally, then distribute them again using 'mysql_cluster_move_privileges'. After restoring and distributing the tables, you should restart this MySQL Server without the '--skip-grant-tables' option.

You can also restore the distributed tables using note 'ndb_restore': mysql-cluster-programs-ndb-restore. '--restore-privilege-tables' from a backup made using note 'START BACKUP': mysql-cluster-backup-using-management-client. in the note 'ndb_mgm': mysql-cluster-programs-ndb-mgm. client. (The note 'MyISAM': myisam-storage-engine. tables created by 'mysql_cluster_move_privileges' are not backed up by the 'START BACKUP' command.) *note 'ndb_restore': mysql-cluster-programs-ndb-restore. does not restore the privilege tables by default; the '--restore-privilege-tables' option causes it to do so.

You can restore the SQL node's local privileges using either of two procedures. 'mysql_cluster_restore_privileges' works as follows:

  1. If copies of the 'mysql.ndb_*_backup' tables are available, attempt to restore the system tables from these.

  2. Otherwise, attempt to restore the system tables from the local backups whose names end in '_backup' (without the 'ndb_' prefix).

The other procedure, named 'mysql_cluster_restore_local_privileges', restores the system tables from the local backups only, without checking the 'ndb_*' backups.
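In either case, you invoke the chosen procedure in the *note 'mysql': mysql. client on the affected SQL node, like this:

 mysql> CALL mysql.mysql_cluster_restore_local_privileges();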

The system tables re-created by 'mysql_cluster_restore_privileges' or 'mysql_cluster_restore_local_privileges' use the MySQL server default storage engine; they are not shared or distributed in any way, and do not use NDB Cluster's *note 'NDB': mysql-cluster. storage engine.

The additional stored procedure 'mysql_cluster_restore_privileges_from_local' is intended for use by 'mysql_cluster_restore_privileges' and 'mysql_cluster_restore_local_privileges'; it should not be invoked directly.

Important:

Applications that access NDB Cluster data directly, including NDB API and ClusterJ applications, are not subject to the MySQL privilege system. This means that, once you have distributed the grant tables, they can be freely accessed by such applications, just as they can any other *note 'NDB': mysql-cluster. tables. In particular, you should keep in mind that NDB API and ClusterJ applications can read and write user names, host names, password hashes, and any other contents of the distributed grant tables without any restrictions.

 File: manual.info.tmp, Node: mysql-cluster-ndb-api-statistics, Next: mysql-cluster-ndbinfo, Prev: mysql-cluster-privilege-distribution, Up: mysql-cluster-management

21.6.14 NDB API Statistics Counters and Variables

A number of types of statistical counters relating to actions performed by or affecting 'Ndb' (https://dev.mysql.com/doc/ndbapi/en/ndb-ndb.html) objects are available. Such actions include starting and closing (or aborting) transactions; primary key and unique key operations; table, range, and pruned scans; threads blocked while waiting for the completion of various operations; and data and events sent and received by 'NDBCLUSTER'. The counters are incremented inside the NDB kernel whenever NDB API calls are made or data is sent to or received by the data nodes. note 'mysqld': mysqld. exposes these counters as system status variables; their values can be read in the output of note 'SHOW STATUS': show-status, or by querying the Information Schema note 'SESSION_STATUS': information-schema-status-table. or note 'GLOBAL_STATUS': information-schema-status-table. table. By comparing the values before and after statements operating on *note 'NDB': mysql-cluster. tables, you can observe the corresponding actions taken on the API level, and thus the cost of performing the statement.
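For example, to see the read activity generated by a single query in the current session, you can read a '_session' counter before and after issuing it, as in this sketch (table 't1' is hypothetical; the difference between the two readings reflects the work done for the *note 'SELECT': select. statement):

 mysql> SHOW STATUS LIKE 'ndb_api_read_row_count_session';
 mysql> SELECT COUNT(*) FROM t1;
 mysql> SHOW STATUS LIKE 'ndb_api_read_row_count_session';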

You can list all of these status variables using the following *note 'SHOW STATUS': show-status. statement:

 mysql> SHOW STATUS LIKE 'ndb_api%';
 +----------------------------------------------+-------------+
 | Variable_name                                | Value       |
 +----------------------------------------------+-------------+
 | Ndb_api_wait_exec_complete_count             | 2           |
 | Ndb_api_wait_scan_result_count               | 3           |
 | Ndb_api_wait_meta_request_count              | 101         |
 | Ndb_api_wait_nanos_count                     | 83664697215 |
 | Ndb_api_bytes_sent_count                     | 13608       |
 | Ndb_api_bytes_received_count                 | 142800      |
 | Ndb_api_trans_start_count                    | 2           |
 | Ndb_api_trans_commit_count                   | 1           |
 | Ndb_api_trans_abort_count                    | 0           |
 | Ndb_api_trans_close_count                    | 2           |
 | Ndb_api_pk_op_count                          | 1           |
 | Ndb_api_uk_op_count                          | 0           |
 | Ndb_api_table_scan_count                     | 1           |
 | Ndb_api_range_scan_count                     | 0           |
 | Ndb_api_pruned_scan_count                    | 0           |
 | Ndb_api_scan_batch_count                     | 0           |
 | Ndb_api_read_row_count                       | 1           |
 | Ndb_api_trans_local_read_row_count           | 1           |
 | Ndb_api_adaptive_send_forced_count           | 0           |
 | Ndb_api_adaptive_send_unforced_count         | 3           |
 | Ndb_api_adaptive_send_deferred_count         | 0           |
 | Ndb_api_event_data_count                     | 0           |
 | Ndb_api_event_nondata_count                  | 0           |
 | Ndb_api_event_bytes_count                    | 0           |
 | Ndb_api_wait_exec_complete_count_slave       | 0           |
 | Ndb_api_wait_scan_result_count_slave         | 0           |
 | Ndb_api_wait_meta_request_count_slave        | 0           |
 | Ndb_api_wait_nanos_count_slave               | 0           |
 | Ndb_api_bytes_sent_count_slave               | 0           |
 | Ndb_api_bytes_received_count_slave           | 0           |
 | Ndb_api_trans_start_count_slave              | 0           |
 | Ndb_api_trans_commit_count_slave             | 0           |
 | Ndb_api_trans_abort_count_slave              | 0           |
 | Ndb_api_trans_close_count_slave              | 0           |
 | Ndb_api_pk_op_count_slave                    | 0           |
 | Ndb_api_uk_op_count_slave                    | 0           |
 | Ndb_api_table_scan_count_slave               | 0           |
 | Ndb_api_range_scan_count_slave               | 0           |
 | Ndb_api_pruned_scan_count_slave              | 0           |
 | Ndb_api_scan_batch_count_slave               | 0           |
 | Ndb_api_read_row_count_slave                 | 0           |
 | Ndb_api_trans_local_read_row_count_slave     | 0           |
 | Ndb_api_adaptive_send_forced_count_slave     | 0           |
 | Ndb_api_adaptive_send_unforced_count_slave   | 0           |
 | Ndb_api_adaptive_send_deferred_count_slave   | 0           |
 | Ndb_api_event_data_count_injector            | 0           |
 | Ndb_api_event_nondata_count_injector         | 0           |
 | Ndb_api_event_bytes_count_injector           | 0           |
 | Ndb_api_wait_exec_complete_count_session     | 0           |
 | Ndb_api_wait_scan_result_count_session       | 0           |
 | Ndb_api_wait_meta_request_count_session      | 0           |
 | Ndb_api_wait_nanos_count_session             | 0           |
 | Ndb_api_bytes_sent_count_session             | 0           |
 | Ndb_api_bytes_received_count_session         | 0           |
 | Ndb_api_trans_start_count_session            | 0           |
 | Ndb_api_trans_commit_count_session           | 0           |
 | Ndb_api_trans_abort_count_session            | 0           |
 | Ndb_api_trans_close_count_session            | 0           |
 | Ndb_api_pk_op_count_session                  | 0           |
 | Ndb_api_uk_op_count_session                  | 0           |
 | Ndb_api_table_scan_count_session             | 0           |
 | Ndb_api_range_scan_count_session             | 0           |
 | Ndb_api_pruned_scan_count_session            | 0           |
 | Ndb_api_scan_batch_count_session             | 0           |
 | Ndb_api_read_row_count_session               | 0           |
 | Ndb_api_trans_local_read_row_count_session   | 0           |
 | Ndb_api_adaptive_send_forced_count_session   | 0           |
 | Ndb_api_adaptive_send_unforced_count_session | 0           |
 | Ndb_api_adaptive_send_deferred_count_session | 0           |
 +----------------------------------------------+-------------+
 69 rows in set (0.00 sec)

These status variables are also available from the note 'SESSION_STATUS': information-schema-status-table. and note 'GLOBAL_STATUS': information-schema-status-table. tables of the 'INFORMATION_SCHEMA' database, as shown here:

 mysql> SELECT * FROM INFORMATION_SCHEMA.SESSION_STATUS
     ->   WHERE VARIABLE_NAME LIKE 'ndb_api%';
 +----------------------------------------------+----------------+
 | VARIABLE_NAME                                | VARIABLE_VALUE |
 +----------------------------------------------+----------------+
 | Ndb_api_wait_exec_complete_count             | 2              |
 | Ndb_api_wait_scan_result_count               | 3              |
 | Ndb_api_wait_meta_request_count              | 101            |
 | Ndb_api_wait_nanos_count                     | 74890499869    |
 | Ndb_api_bytes_sent_count                     | 13608          |
 | Ndb_api_bytes_received_count                 | 142800         |
 | Ndb_api_trans_start_count                    | 2              |
 | Ndb_api_trans_commit_count                   | 1              |
 | Ndb_api_trans_abort_count                    | 0              |
 | Ndb_api_trans_close_count                    | 2              |
 | Ndb_api_pk_op_count                          | 1              |
 | Ndb_api_uk_op_count                          | 0              |
 | Ndb_api_table_scan_count                     | 1              |
 | Ndb_api_range_scan_count                     | 0              |
 | Ndb_api_pruned_scan_count                    | 0              |
 | Ndb_api_scan_batch_count                     | 0              |
 | Ndb_api_read_row_count                       | 1              |
 | Ndb_api_trans_local_read_row_count           | 1              |
 | Ndb_api_adaptive_send_forced_count           | 0              |
 | Ndb_api_adaptive_send_unforced_count         | 3              |
 | Ndb_api_adaptive_send_deferred_count         | 0              |
 | Ndb_api_event_data_count                     | 0              |
 | Ndb_api_event_nondata_count                  | 0              |
 | Ndb_api_event_bytes_count                    | 0              |
 | Ndb_api_wait_exec_complete_count_slave       | 0              |
 | Ndb_api_wait_scan_result_count_slave         | 0              |
 | Ndb_api_wait_meta_request_count_slave        | 0              |
 | Ndb_api_wait_nanos_count_slave               | 0              |
 | Ndb_api_bytes_sent_count_slave               | 0              |
 | Ndb_api_bytes_received_count_slave           | 0              |
 | Ndb_api_trans_start_count_slave              | 0              |
 | Ndb_api_trans_commit_count_slave             | 0              |
 | Ndb_api_trans_abort_count_slave              | 0              |
 | Ndb_api_trans_close_count_slave              | 0              |
 | Ndb_api_pk_op_count_slave                    | 0              |
 | Ndb_api_uk_op_count_slave                    | 0              |
 | Ndb_api_table_scan_count_slave               | 0              |
 | Ndb_api_range_scan_count_slave               | 0              |
 | Ndb_api_pruned_scan_count_slave              | 0              |
 | Ndb_api_scan_batch_count_slave               | 0              |
 | Ndb_api_read_row_count_slave                 | 0              |
 | Ndb_api_trans_local_read_row_count_slave     | 0              |
 | Ndb_api_adaptive_send_forced_count_slave     | 0              |
 | Ndb_api_adaptive_send_unforced_count_slave   | 0              |
 | Ndb_api_adaptive_send_deferred_count_slave   | 0              |
 | Ndb_api_event_data_count_injector            | 0              |
 | Ndb_api_event_nondata_count_injector         | 0              |
 | Ndb_api_event_bytes_count_injector           | 0              |
 | Ndb_api_wait_exec_complete_count_session     | 0              |
 | Ndb_api_wait_scan_result_count_session       | 0              |
 | Ndb_api_wait_meta_request_count_session      | 0              |
 | Ndb_api_wait_nanos_count_session             | 0              |
 | Ndb_api_bytes_sent_count_session             | 0              |
 | Ndb_api_bytes_received_count_session         | 0              |
 | Ndb_api_trans_start_count_session            | 0              |
 | Ndb_api_trans_commit_count_session           | 0              |
 | Ndb_api_trans_abort_count_session            | 0              |
 | Ndb_api_trans_close_count_session            | 0              |
 | Ndb_api_pk_op_count_session                  | 0              |
 | Ndb_api_uk_op_count_session                  | 0              |
 | Ndb_api_table_scan_count_session             | 0              |
 | Ndb_api_range_scan_count_session             | 0              |
 | Ndb_api_pruned_scan_count_session            | 0              |
 | Ndb_api_scan_batch_count_session             | 0              |
 | Ndb_api_read_row_count_session               | 0              |
 | Ndb_api_trans_local_read_row_count_session   | 0              |
 | Ndb_api_adaptive_send_forced_count_session   | 0              |
 | Ndb_api_adaptive_send_unforced_count_session | 0              |
 | Ndb_api_adaptive_send_deferred_count_session | 0              |
 +----------------------------------------------+----------------+
 69 rows in set (0.00 sec)

 mysql> SELECT * FROM INFORMATION_SCHEMA.GLOBAL_STATUS
     ->     WHERE VARIABLE_NAME LIKE 'ndb_api%';
 +----------------------------------------------+----------------+
 | VARIABLE_NAME                                | VARIABLE_VALUE |
 +----------------------------------------------+----------------+
 | Ndb_api_wait_exec_complete_count             | 2              |
 | Ndb_api_wait_scan_result_count               | 3              |
 | Ndb_api_wait_meta_request_count              | 101            |
 | Ndb_api_wait_nanos_count                     | 13640285623    |
 | Ndb_api_bytes_sent_count                     | 13608          |
 | Ndb_api_bytes_received_count                 | 142800         |
 | Ndb_api_trans_start_count                    | 2              |
 | Ndb_api_trans_commit_count                   | 1              |
 | Ndb_api_trans_abort_count                    | 0              |
 | Ndb_api_trans_close_count                    | 2              |
 | Ndb_api_pk_op_count                          | 1              |
 | Ndb_api_uk_op_count                          | 0              |
 | Ndb_api_table_scan_count                     | 1              |
 | Ndb_api_range_scan_count                     | 0              |
 | Ndb_api_pruned_scan_count                    | 0              |
 | Ndb_api_scan_batch_count                     | 0              |
 | Ndb_api_read_row_count                       | 1              |
 | Ndb_api_trans_local_read_row_count           | 1              |
 | Ndb_api_adaptive_send_forced_count           | 0              |
 | Ndb_api_adaptive_send_unforced_count         | 3              |
 | Ndb_api_adaptive_send_deferred_count         | 0              |
 | Ndb_api_event_data_count                     | 0              |
 | Ndb_api_event_nondata_count                  | 0              |
 | Ndb_api_event_bytes_count                    | 0              |
 | Ndb_api_wait_exec_complete_count_slave       | 0              |
 | Ndb_api_wait_scan_result_count_slave         | 0              |
 | Ndb_api_wait_meta_request_count_slave        | 0              |
 | Ndb_api_wait_nanos_count_slave               | 0              |
 | Ndb_api_bytes_sent_count_slave               | 0              |
 | Ndb_api_bytes_received_count_slave           | 0              |
 | Ndb_api_trans_start_count_slave              | 0              |
 | Ndb_api_trans_commit_count_slave             | 0              |
 | Ndb_api_trans_abort_count_slave              | 0              |
 | Ndb_api_trans_close_count_slave              | 0              |
 | Ndb_api_pk_op_count_slave                    | 0              |
 | Ndb_api_uk_op_count_slave                    | 0              |
 | Ndb_api_table_scan_count_slave               | 0              |
 | Ndb_api_range_scan_count_slave               | 0              |
 | Ndb_api_pruned_scan_count_slave              | 0              |
 | Ndb_api_scan_batch_count_slave               | 0              |
 | Ndb_api_read_row_count_slave                 | 0              |
 | Ndb_api_trans_local_read_row_count_slave     | 0              |
 | Ndb_api_adaptive_send_forced_count_slave     | 0              |
 | Ndb_api_adaptive_send_unforced_count_slave   | 0              |
 | Ndb_api_adaptive_send_deferred_count_slave   | 0              |
 | Ndb_api_event_data_count_injector            | 0              |
 | Ndb_api_event_nondata_count_injector         | 0              |
 | Ndb_api_event_bytes_count_injector           | 0              |
 | Ndb_api_wait_exec_complete_count_session     | 0              |
 | Ndb_api_wait_scan_result_count_session       | 0              |
 | Ndb_api_wait_meta_request_count_session      | 0              |
 | Ndb_api_wait_nanos_count_session             | 0              |
 | Ndb_api_bytes_sent_count_session             | 0              |
 | Ndb_api_bytes_received_count_session         | 0              |
 | Ndb_api_trans_start_count_session            | 0              |
 | Ndb_api_trans_commit_count_session           | 0              |
 | Ndb_api_trans_abort_count_session            | 0              |
 | Ndb_api_trans_close_count_session            | 0              |
 | Ndb_api_pk_op_count_session                  | 0              |
 | Ndb_api_uk_op_count_session                  | 0              |
 | Ndb_api_table_scan_count_session             | 0              |
 | Ndb_api_range_scan_count_session             | 0              |
 | Ndb_api_pruned_scan_count_session            | 0              |
 | Ndb_api_scan_batch_count_session             | 0              |
 | Ndb_api_read_row_count_session               | 0              |
 | Ndb_api_trans_local_read_row_count_session   | 0              |
 | Ndb_api_adaptive_send_forced_count_session   | 0              |
 | Ndb_api_adaptive_send_unforced_count_session | 0              |
 | Ndb_api_adaptive_send_deferred_count_session | 0              |
 +----------------------------------------------+----------------+
 69 rows in set (0.01 sec)

Each 'Ndb' (https://dev.mysql.com/doc/ndbapi/en/ndb-ndb.html) object has its own counters. NDB API applications can read the values of the counters for use in optimization or monitoring. For multithreaded clients which use more than one 'Ndb' (https://dev.mysql.com/doc/ndbapi/en/ndb-ndb.html) object concurrently, it is also possible to obtain a summed view of counters from all 'Ndb' (https://dev.mysql.com/doc/ndbapi/en/ndb-ndb.html) objects belonging to a given 'Ndb_cluster_connection' (https://dev.mysql.com/doc/ndbapi/en/ndb-ndb-cluster-connection.html).

Four sets of these counters are exposed. One set applies to the current session only; the other 3 are global. _This is in spite of the fact that their values can be obtained as either session or global status variables in the note 'mysql': mysql. client_. This means that specifying the 'SESSION' or 'GLOBAL' keyword with note 'SHOW STATUS': show-status. has no effect on the values reported for NDB API statistics status variables, and the value for each of these variables is the same whether the value is obtained from the equivalent column of the note 'SESSION_STATUS': information-schema-status-table. or the note 'GLOBAL_STATUS': information-schema-status-table. table.
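For example, both of the following statements return the same value for this counter:

 mysql> SHOW GLOBAL STATUS LIKE 'Ndb_api_trans_start_count';
 mysql> SHOW SESSION STATUS LIKE 'Ndb_api_trans_start_count';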

You can obtain values for a particular set of variables by additionally filtering for the substring 'session', 'slave', or 'injector' in the variable name (along with the common prefix 'Ndb_api'). For '_session' variables, this can be done as shown here:

 mysql> SHOW STATUS LIKE 'ndb_api%session';
 +--------------------------------------------+---------+
 | Variable_name                              | Value   |
 +--------------------------------------------+---------+
 | Ndb_api_wait_exec_complete_count_session   | 2       |
 | Ndb_api_wait_scan_result_count_session     | 0       |
 | Ndb_api_wait_meta_request_count_session    | 1       |
 | Ndb_api_wait_nanos_count_session           | 8144375 |
 | Ndb_api_bytes_sent_count_session           | 68      |
 | Ndb_api_bytes_received_count_session       | 84      |
 | Ndb_api_trans_start_count_session          | 1       |
 | Ndb_api_trans_commit_count_session         | 1       |
 | Ndb_api_trans_abort_count_session          | 0       |
 | Ndb_api_trans_close_count_session          | 1       |
 | Ndb_api_pk_op_count_session                | 1       |
 | Ndb_api_uk_op_count_session                | 0       |
 | Ndb_api_table_scan_count_session           | 0       |
 | Ndb_api_range_scan_count_session           | 0       |
 | Ndb_api_pruned_scan_count_session          | 0       |
 | Ndb_api_scan_batch_count_session           | 0       |
 | Ndb_api_read_row_count_session             | 1       |
 | Ndb_api_trans_local_read_row_count_session | 1       |
 +--------------------------------------------+---------+
 18 rows in set (0.50 sec)

To obtain a listing of the NDB API *note 'mysqld': mysqld.-level status variables, filter for variable names beginning with 'ndb_api' and ending in '_count', like this:

 mysql> SELECT * FROM INFORMATION_SCHEMA.SESSION_STATUS
     ->     WHERE VARIABLE_NAME LIKE 'ndb_api%count';
 +------------------------------------+----------------+
 | VARIABLE_NAME                      | VARIABLE_VALUE |
 +------------------------------------+----------------+
 | NDB_API_WAIT_EXEC_COMPLETE_COUNT   | 4              |
 | NDB_API_WAIT_SCAN_RESULT_COUNT     | 3              |
 | NDB_API_WAIT_META_REQUEST_COUNT    | 28             |
 | NDB_API_WAIT_NANOS_COUNT           | 53756398       |
 | NDB_API_BYTES_SENT_COUNT           | 1060           |
 | NDB_API_BYTES_RECEIVED_COUNT       | 9724           |
 | NDB_API_TRANS_START_COUNT          | 3              |
 | NDB_API_TRANS_COMMIT_COUNT         | 2              |
 | NDB_API_TRANS_ABORT_COUNT          | 0              |
 | NDB_API_TRANS_CLOSE_COUNT          | 3              |
 | NDB_API_PK_OP_COUNT                | 2              |
 | NDB_API_UK_OP_COUNT                | 0              |
 | NDB_API_TABLE_SCAN_COUNT           | 1              |
 | NDB_API_RANGE_SCAN_COUNT           | 0              |
 | NDB_API_PRUNED_SCAN_COUNT          | 0              |
 | NDB_API_SCAN_BATCH_COUNT           | 0              |
 | NDB_API_READ_ROW_COUNT             | 2              |
 | NDB_API_TRANS_LOCAL_READ_ROW_COUNT | 2              |
 | NDB_API_EVENT_DATA_COUNT           | 0              |
 | NDB_API_EVENT_NONDATA_COUNT        | 0              |
 | NDB_API_EVENT_BYTES_COUNT          | 0              |
 +------------------------------------+----------------+
 21 rows in set (0.09 sec)

Not all counters are reflected in all 4 sets of status variables. For the event counters 'DataEventsRecvdCount', 'NondataEventsRecvdCount', and 'EventBytesRecvdCount', only '_injector' and *note 'mysqld': mysqld.-level NDB API status variables are available:

 mysql> SHOW STATUS LIKE 'ndb_api%event%';
 +--------------------------------------+-------+
 | Variable_name                        | Value |
 +--------------------------------------+-------+
 | Ndb_api_event_data_count_injector    | 0     |
 | Ndb_api_event_nondata_count_injector | 0     |
 | Ndb_api_event_bytes_count_injector   | 0     |
 | Ndb_api_event_data_count             | 0     |
 | Ndb_api_event_nondata_count          | 0     |
 | Ndb_api_event_bytes_count            | 0     |
 +--------------------------------------+-------+
 6 rows in set (0.00 sec)

'_injector' status variables are not implemented for any other NDB API counters, as shown here:

 mysql> SHOW STATUS LIKE 'ndb_api%injector%';
 +--------------------------------------+-------+
 | Variable_name                        | Value |
 +--------------------------------------+-------+
 | Ndb_api_event_data_count_injector    | 0     |
 | Ndb_api_event_nondata_count_injector | 0     |
 | Ndb_api_event_bytes_count_injector   | 0     |
 +--------------------------------------+-------+
 3 rows in set (0.00 sec)

The names of the status variables can easily be associated with the names of the corresponding counters. Each NDB API statistics counter is listed in the following table with a description as well as the names of any MySQL server status variables corresponding to this counter.

NDB API statistics counters

For each counter, the corresponding status variables are given by
statistic type, always in the following order:

   * Session

   * Slave (replica)

   * Injector

   * Server

'[none]' indicates that no status variable of the given type is
implemented for that counter.

'WaitExecCompleteCount'

Number of times a thread has been blocked while waiting for execution
of an operation to complete. Includes all 'execute()'
(https://dev.mysql.com/doc/ndbapi/en/ndb-ndbtransaction.html#ndb-ndbtransaction-execute)
calls as well as implicit executes for blob operations and
auto-increment not visible to clients.

   * 'Ndb_api_wait_exec_complete_count_session'

   * 'Ndb_api_wait_exec_complete_count_slave'

   * [none]

   * 'Ndb_api_wait_exec_complete_count'

'WaitScanResultCount'

Number of times a thread has been blocked while waiting for a
scan-based signal, such as when waiting for additional results, or for
a scan to close.

   * 'Ndb_api_wait_scan_result_count_session'

   * 'Ndb_api_wait_scan_result_count_slave'

   * [none]

   * 'Ndb_api_wait_scan_result_count'

'WaitMetaRequestCount'

Number of times a thread has been blocked waiting for a metadata-based
signal; this can occur when waiting for a DDL operation or for an
epoch to be started (or ended).

   * 'Ndb_api_wait_meta_request_count_session'

   * 'Ndb_api_wait_meta_request_count_slave'

   * [none]

   * 'Ndb_api_wait_meta_request_count'

'WaitNanosCount'

Total time (in nanoseconds) spent waiting for some type of signal from
the data nodes.

   * 'Ndb_api_wait_nanos_count_session'

   * 'Ndb_api_wait_nanos_count_slave'

   * [none]

   * 'Ndb_api_wait_nanos_count'

'BytesSentCount'

Amount of data (in bytes) sent to the data nodes.

   * 'Ndb_api_bytes_sent_count_session'

   * 'Ndb_api_bytes_sent_count_slave'

   * [none]

   * 'Ndb_api_bytes_sent_count'

'BytesRecvdCount'

Amount of data (in bytes) received from the data nodes.

   * 'Ndb_api_bytes_received_count_session'

   * 'Ndb_api_bytes_received_count_slave'

   * [none]

   * 'Ndb_api_bytes_received_count'

'TransStartCount'

Number of transactions started.

   * 'Ndb_api_trans_start_count_session'

   * 'Ndb_api_trans_start_count_slave'

   * [none]

   * 'Ndb_api_trans_start_count'

'TransCommitCount'

Number of transactions committed.

   * 'Ndb_api_trans_commit_count_session'

   * 'Ndb_api_trans_commit_count_slave'

   * [none]

   * 'Ndb_api_trans_commit_count'

'TransAbortCount'

Number of transactions aborted.

   * 'Ndb_api_trans_abort_count_session'

   * 'Ndb_api_trans_abort_count_slave'

   * [none]

   * 'Ndb_api_trans_abort_count'

'TransCloseCount'

Number of transactions closed. (This value may be greater than the sum
of 'TransCommitCount' and 'TransAbortCount'.)

   * 'Ndb_api_trans_close_count_session'

   * 'Ndb_api_trans_close_count_slave'

   * [none]

   * 'Ndb_api_trans_close_count'

'PkOpCount'

Number of operations based on or using primary keys. This count
includes blob-part table operations, implicit unlocking operations,
and auto-increment operations, as well as primary key operations
normally visible to MySQL clients.

   * 'Ndb_api_pk_op_count_session'

   * 'Ndb_api_pk_op_count_slave'

   * [none]

   * 'Ndb_api_pk_op_count'

'UkOpCount'

Number of operations based on or using unique keys.

   * 'Ndb_api_uk_op_count_session'

   * 'Ndb_api_uk_op_count_slave'

   * [none]

   * 'Ndb_api_uk_op_count'

'TableScanCount'

Number of table scans that have been started. This includes scans of
internal tables.

   * 'Ndb_api_table_scan_count_session'

   * 'Ndb_api_table_scan_count_slave'

   * [none]

   * 'Ndb_api_table_scan_count'

'RangeScanCount'

Number of range scans that have been started.

   * 'Ndb_api_range_scan_count_session'

   * 'Ndb_api_range_scan_count_slave'

   * [none]

   * 'Ndb_api_range_scan_count'

'PrunedScanCount'

Number of scans that have been pruned to a single partition.

   * 'Ndb_api_pruned_scan_count_session'

   * 'Ndb_api_pruned_scan_count_slave'

   * [none]

   * 'Ndb_api_pruned_scan_count'

'ScanBatchCount'

Number of batches of rows received. (A batch in this context is a set
of scan results from a single fragment.)

   * 'Ndb_api_scan_batch_count_session'

   * 'Ndb_api_scan_batch_count_slave'

   * [none]

   * 'Ndb_api_scan_batch_count'

'ReadRowCount'

Total number of rows that have been read. Includes rows read using
primary key, unique key, and scan operations.

   * 'Ndb_api_read_row_count_session'

   * 'Ndb_api_read_row_count_slave'

   * [none]

   * 'Ndb_api_read_row_count'

'TransLocalReadRowCount'

Number of rows read from the same data node on which the transaction
was being run.

   * 'Ndb_api_trans_local_read_row_count_session'

   * 'Ndb_api_trans_local_read_row_count_slave'

   * [none]

   * 'Ndb_api_trans_local_read_row_count'

'DataEventsRecvdCount'

Number of row change events received.

   * [none]

   * [none]

   * 'Ndb_api_event_data_count_injector'

   * 'Ndb_api_event_data_count'

'NondataEventsRecvdCount'

Number of events received, other than row change events.

   * [none]

   * [none]

   * 'Ndb_api_event_nondata_count_injector'

   * 'Ndb_api_event_nondata_count'

'EventBytesRecvdCount'

Number of bytes of events received.

   * [none]

   * [none]

   * 'Ndb_api_event_bytes_count_injector'

   * 'Ndb_api_event_bytes_count'
To see all counts of committed transactions--that is, all 'TransCommitCount' counter status variables--you can filter the results of *note 'SHOW STATUS': show-status. for the substring 'trans_commit_count', like this:

 mysql> SHOW STATUS LIKE '%trans_commit_count%';
 +------------------------------------+-------+
 | Variable_name                      | Value |
 +------------------------------------+-------+
 | Ndb_api_trans_commit_count_session | 1     |
 | Ndb_api_trans_commit_count_slave   | 0     |
 | Ndb_api_trans_commit_count         | 2     |
 +------------------------------------+-------+
 3 rows in set (0.00 sec)

From this you can determine that 1 transaction has been committed in the current note 'mysql': mysql. client session, and 2 transactions have been committed on this note 'mysqld': mysqld. since it was last restarted.

You can see how various NDB API counters are incremented by a given SQL statement by comparing the values of the corresponding '_session' status variables immediately before and after performing the statement. In this example, after getting the initial values from note 'SHOW STATUS': show-status, we create in the 'test' database an note 'NDB': mysql-cluster. table, named 't', that has a single column:

 mysql> SHOW STATUS LIKE 'ndb_api%session%';
 +--------------------------------------------+--------+
 | Variable_name                              | Value  |
 +--------------------------------------------+--------+
 | Ndb_api_wait_exec_complete_count_session   | 2      |
 | Ndb_api_wait_scan_result_count_session     | 0      |
 | Ndb_api_wait_meta_request_count_session    | 3      |
 | Ndb_api_wait_nanos_count_session           | 820705 |
 | Ndb_api_bytes_sent_count_session           | 132    |
 | Ndb_api_bytes_received_count_session       | 372    |
 | Ndb_api_trans_start_count_session          | 1      |
 | Ndb_api_trans_commit_count_session         | 1      |
 | Ndb_api_trans_abort_count_session          | 0      |
 | Ndb_api_trans_close_count_session          | 1      |
 | Ndb_api_pk_op_count_session                | 1      |
 | Ndb_api_uk_op_count_session                | 0      |
 | Ndb_api_table_scan_count_session           | 0      |
 | Ndb_api_range_scan_count_session           | 0      |
 | Ndb_api_pruned_scan_count_session          | 0      |
 | Ndb_api_scan_batch_count_session           | 0      |
 | Ndb_api_read_row_count_session             | 1      |
 | Ndb_api_trans_local_read_row_count_session | 1      |
 +--------------------------------------------+--------+
 18 rows in set (0.00 sec)

 mysql> USE test;
 Database changed
 mysql> CREATE TABLE t (c INT) ENGINE NDBCLUSTER;
 Query OK, 0 rows affected (0.85 sec)

Now you can execute a new *note 'SHOW STATUS': show-status. statement and observe the changes, as shown here (with the changed rows highlighted in the output):

 mysql> SHOW STATUS LIKE 'ndb_api%session%';
 +--------------------------------------------+-----------+
 | Variable_name                              | Value     |
 +--------------------------------------------+-----------+
 _| Ndb_api_wait_exec_complete_count_session   | 8         |_
 | Ndb_api_wait_scan_result_count_session     | 0         |
 _| Ndb_api_wait_meta_request_count_session    | 17        |_
 _| Ndb_api_wait_nanos_count_session           | 706871709 |_
 _| Ndb_api_bytes_sent_count_session           | 2376      |_
 _| Ndb_api_bytes_received_count_session       | 3844      |_
 _| Ndb_api_trans_start_count_session          | 4         |_
 _| Ndb_api_trans_commit_count_session         | 4         |_
 | Ndb_api_trans_abort_count_session          | 0         |
 _| Ndb_api_trans_close_count_session          | 4         |_
 _| Ndb_api_pk_op_count_session                | 6         |_
 | Ndb_api_uk_op_count_session                | 0         |
 | Ndb_api_table_scan_count_session           | 0         |
 | Ndb_api_range_scan_count_session           | 0         |
 | Ndb_api_pruned_scan_count_session          | 0         |
 | Ndb_api_scan_batch_count_session           | 0         |
 _| Ndb_api_read_row_count_session             | 2         |_
 | Ndb_api_trans_local_read_row_count_session | 1         |
 +--------------------------------------------+-----------+
 18 rows in set (0.00 sec)

Similarly, you can see the changes in the NDB API statistics counters caused by inserting a row into 't': Insert the row, then run the same *note 'SHOW STATUS': show-status. statement used in the previous example, as shown here:

 mysql> INSERT INTO t VALUES (100);
 Query OK, 1 row affected (0.00 sec)

 mysql> SHOW STATUS LIKE 'ndb_api%session%';
 +--------------------------------------------+-----------+
 | Variable_name                              | Value     |
 +--------------------------------------------+-----------+
 _| Ndb_api_wait_exec_complete_count_session   | 11        |_
 _| Ndb_api_wait_scan_result_count_session     | 6         |_
 _| Ndb_api_wait_meta_request_count_session    | 20        |_
 _| Ndb_api_wait_nanos_count_session           | 707370418 |_
 _| Ndb_api_bytes_sent_count_session           | 2724      |_
 _| Ndb_api_bytes_received_count_session       | 4116      |_
 _| Ndb_api_trans_start_count_session          | 7         |_
 _| Ndb_api_trans_commit_count_session         | 6         |_
 | Ndb_api_trans_abort_count_session          | 0         |
 _| Ndb_api_trans_close_count_session          | 7         |_
 _| Ndb_api_pk_op_count_session                | 8         |_
 | Ndb_api_uk_op_count_session                | 0         |
 _| Ndb_api_table_scan_count_session           | 1         |_
 | Ndb_api_range_scan_count_session           | 0         |
 | Ndb_api_pruned_scan_count_session          | 0         |
 | Ndb_api_scan_batch_count_session           | 0         |
 _| Ndb_api_read_row_count_session             | 3         |_
 _| Ndb_api_trans_local_read_row_count_session | 2         |_
 +--------------------------------------------+-----------+
 18 rows in set (0.00 sec)

We can make a number of observations from these results. For example, inserting the single row incremented 'Ndb_api_pk_op_count_session' by 2 and started one table scan ('Ndb_api_table_scan_count_session'); 3 more transactions were started and closed ('Ndb_api_trans_start_count_session' and 'Ndb_api_trans_close_count_session'), but only 2 more were committed; and 6 scan-based waits were recorded in 'Ndb_api_wait_scan_result_count_session'.
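
Because these are ordinary status variables, such before-and-after comparisons can also be scripted. The following is only a sketch; it assumes that the 'INFORMATION_SCHEMA' 'SESSION_STATUS' table is accessible on this server (in some MySQL 5.7 releases this requires enabling 'show_compatibility_56'):

 -- Record the session-level commit count before the statement
 SET @commits :=
   (SELECT VARIABLE_VALUE FROM INFORMATION_SCHEMA.SESSION_STATUS
     WHERE VARIABLE_NAME = 'Ndb_api_trans_commit_count_session');

 INSERT INTO test.t VALUES (200);

 -- NDB transactions committed by the INSERT just executed
 SELECT VARIABLE_VALUE - @commits AS trans_commit_delta
 FROM INFORMATION_SCHEMA.SESSION_STATUS
 WHERE VARIABLE_NAME = 'Ndb_api_trans_commit_count_session';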

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo, Next: mysql-cluster-information-schema-tables, Prev: mysql-cluster-ndb-api-statistics, Up: mysql-cluster-management

21.6.15 ndbinfo: The NDB Cluster Information Database

'ndbinfo' is a database containing information specific to NDB Cluster.

This database contains a number of tables, each providing a different sort of data about NDB Cluster node status, resource usage, and operations. You can find more detailed information about each of these tables in the next several sections.

'ndbinfo' is included with NDB Cluster support in the MySQL Server; no special compilation or configuration steps are required; the tables are created by the MySQL Server when it connects to the cluster. You can verify that 'ndbinfo' support is active in a given MySQL Server instance using *note 'SHOW PLUGINS': show-plugins.; if 'ndbinfo' support is enabled, you should see a row containing 'ndbinfo' in the 'Name' column and 'ACTIVE' in the 'Status' column, as shown here (emphasized text):

 mysql> SHOW PLUGINS;
 +----------------------------------+--------+--------------------+---------+---------+
 | Name                             | Status | Type               | Library | License |
 +----------------------------------+--------+--------------------+---------+---------+
 | binlog                           | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
 | mysql_native_password            | ACTIVE | AUTHENTICATION     | NULL    | GPL     |
 | sha256_password                  | ACTIVE | AUTHENTICATION     | NULL    | GPL     |
 | MRG_MYISAM                       | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
 | MEMORY                           | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
 | CSV                              | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
 | MyISAM                           | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
 | InnoDB                           | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
 | INNODB_TRX                       | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_LOCKS                     | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_LOCK_WAITS                | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_CMP                       | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_CMP_RESET                 | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_CMPMEM                    | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_CMPMEM_RESET              | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_CMP_PER_INDEX             | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_CMP_PER_INDEX_RESET       | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_BUFFER_PAGE               | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_BUFFER_PAGE_LRU           | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_BUFFER_POOL_STATS         | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_TEMP_TABLE_INFO           | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_METRICS                   | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_FT_DEFAULT_STOPWORD       | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_FT_DELETED                | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_FT_BEING_DELETED          | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_FT_CONFIG                 | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_FT_INDEX_CACHE            | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_FT_INDEX_TABLE            | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_SYS_TABLES                | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_SYS_TABLESTATS            | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_SYS_INDEXES               | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_SYS_COLUMNS               | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_SYS_FIELDS                | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_SYS_FOREIGN               | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_SYS_FOREIGN_COLS          | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_SYS_TABLESPACES           | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_SYS_DATAFILES             | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | INNODB_SYS_VIRTUAL               | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | PERFORMANCE_SCHEMA               | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
 | ndbcluster                       | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
 _| ndbinfo                          | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |_
 | ndb_transid_mysql_connection_map | ACTIVE | INFORMATION SCHEMA | NULL    | GPL     |
 | BLACKHOLE                        | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
 | ARCHIVE                          | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
 | partition                        | ACTIVE | STORAGE ENGINE     | NULL    | GPL     |
 | ngram                            | ACTIVE | FTPARSER           | NULL    | GPL     |
 +----------------------------------+--------+--------------------+---------+---------+
 46 rows in set (0.00 sec)

You can also do this by checking the output of *note 'SHOW ENGINES': show-engines. for a line including 'ndbinfo' in the 'Engine' column and 'YES' in the 'Support' column, as shown here (emphasized text):

 mysql> SHOW ENGINES\G
 *************************** 1. row ***************************
       Engine: ndbcluster
      Support: YES
      Comment: Clustered, fault-tolerant tables
 Transactions: YES
           XA: NO
   Savepoints: NO
 *************************** 2. row ***************************
       Engine: CSV
      Support: YES
      Comment: CSV storage engine
 Transactions: NO
           XA: NO
   Savepoints: NO
 *************************** 3. row ***************************
       Engine: InnoDB
      Support: DEFAULT
      Comment: Supports transactions, row-level locking, and foreign keys
 Transactions: YES
           XA: YES
   Savepoints: YES
 *************************** 4. row ***************************
       Engine: BLACKHOLE
      Support: YES
      Comment: /dev/null storage engine (anything you write to it disappears)
 Transactions: NO
           XA: NO
   Savepoints: NO
 *************************** 5. row ***************************
       Engine: MyISAM
      Support: YES
      Comment: MyISAM storage engine
 Transactions: NO
           XA: NO
   Savepoints: NO
 *************************** 6. row ***************************
       Engine: MRG_MYISAM
      Support: YES
      Comment: Collection of identical MyISAM tables
 Transactions: NO
           XA: NO
   Savepoints: NO
 *************************** 7. row ***************************
       Engine: ARCHIVE
      Support: YES
      Comment: Archive storage engine
 Transactions: NO
           XA: NO
   Savepoints: NO
 _*************************** 8. row ***************************
       Engine: ndbinfo
      Support: YES
      Comment: NDB Cluster system information storage engine
 Transactions: NO
           XA: NO
   Savepoints: NO_
 *************************** 9. row ***************************
       Engine: PERFORMANCE_SCHEMA
      Support: YES
      Comment: Performance Schema
 Transactions: NO
           XA: NO
   Savepoints: NO
 *************************** 10. row ***************************
       Engine: MEMORY
      Support: YES
      Comment: Hash based, stored in memory, useful for temporary tables
 Transactions: NO
           XA: NO
   Savepoints: NO
 10 rows in set (0.00 sec)

If 'ndbinfo' support is enabled, then you can access 'ndbinfo' using SQL statements in note 'mysql': mysql. or another MySQL client. For example, you can see 'ndbinfo' listed in the output of note 'SHOW DATABASES': show-databases, as shown here (emphasized text):

 mysql> SHOW DATABASES;
 +--------------------+
 | Database           |
 +--------------------+
 | information_schema |
 | mysql              |
 _| ndbinfo            |_
 | performance_schema |
 | sys                |
 +--------------------+
 5 rows in set (0.04 sec)

If the note 'mysqld': mysqld. process was not started with the '--ndbcluster' option, 'ndbinfo' is not available and is not displayed by note 'SHOW DATABASES': show-databases. If *note 'mysqld': mysqld. was formerly connected to an NDB Cluster but the cluster becomes unavailable (due to events such as cluster shutdown, loss of network connectivity, and so forth), 'ndbinfo' and its tables remain visible, but an attempt to access any tables (other than 'blocks' or 'config_params') fails with 'Got error 157 'Connection to NDB failed' from NDBINFO'.

With the exception of the note 'blocks': mysql-cluster-ndbinfo-blocks. and note 'config_params': mysql-cluster-ndbinfo-config-params. tables, what we refer to as 'ndbinfo' 'tables' are actually views generated from internal *note 'NDB': mysql-cluster. tables not normally visible to the MySQL Server. You can make these tables visible by setting the 'ndbinfo_show_hidden' system variable to 'ON' (or '1'), but this is normally not necessary.
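
For example, to look at the underlying hidden tables directly, you might enable this variable temporarily, as sketched here (the set of 'ndb$' tables varies between releases):

 mysql> SET GLOBAL ndbinfo_show_hidden = ON;
 mysql> SHOW TABLES IN ndbinfo LIKE 'ndb$%';
 mysql> SET GLOBAL ndbinfo_show_hidden = OFF;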

All 'ndbinfo' tables are read-only, and are generated on demand when queried. Because many of them are generated in parallel by the data nodes while others are specific to a given SQL node, they are not guaranteed to provide a consistent snapshot.

In addition, pushing down of joins is not supported on 'ndbinfo' tables, so joining large 'ndbinfo' tables can require transfer of a large amount of data to the requesting API node, even when the query makes use of a 'WHERE' clause.

'ndbinfo' tables are not included in the query cache. (Bug #59831)

You can select the 'ndbinfo' database with a note 'USE': use. statement, and then issue a note 'SHOW TABLES': show-tables. statement to obtain a list of tables, just as for any other database, like this:

 mysql> USE ndbinfo;
 Database changed

 mysql> SHOW TABLES;
 +---------------------------------+
 | Tables_in_ndbinfo               |
 +---------------------------------+
 | arbitrator_validity_detail      |
 | arbitrator_validity_summary     |
 | blocks                          |
 | cluster_locks                   |
 | cluster_operations              |
 | cluster_transactions            |
 | config_nodes                    |
 | config_params                   |
 | config_values                   |
 | counters                        |
 | cpustat                         |
 | cpustat_1sec                    |
 | cpustat_20sec                   |
 | cpustat_50ms                    |
 | dict_obj_info                   |
 | dict_obj_types                  |
 | disk_write_speed_aggregate      |
 | disk_write_speed_aggregate_node |
 | disk_write_speed_base           |
 | diskpagebuffer                  |
 | error_messages                  |
 | locks_per_fragment              |
 | logbuffers                      |
 | logspaces                       |
 | membership                      |
 | memory_per_fragment             |
 | memoryusage                     |
 | nodes                           |
 | operations_per_fragment         |
 | processes                       |
 | resources                       |
 | restart_info                    |
 | server_locks                    |
 | server_operations               |
 | server_transactions             |
 | table_distribution_status       |
 | table_fragments                 |
 | table_info                      |
 | table_replicas                  |
 | tc_time_track_stats             |
 | threadblocks                    |
 | threads                         |
 | threadstat                      |
 | transporters                    |
 +---------------------------------+
 44 rows in set (0.00 sec)

In NDB 7.5.0 (and later), all 'ndbinfo' tables use the 'NDB' storage engine; however, an 'ndbinfo' entry still appears in the output of note 'SHOW ENGINES': show-engines. and note 'SHOW PLUGINS': show-plugins. as described previously.

The *note 'config_values': mysql-cluster-ndbinfo-config-values. table was added in NDB 7.5.0.

The note 'cpustat': mysql-cluster-ndbinfo-cpustat, note 'cpustat_50ms': mysql-cluster-ndbinfo-cpustat-50ms, note 'cpustat_1sec': mysql-cluster-ndbinfo-cpustat-1sec, note 'cpustat_20sec': mysql-cluster-ndbinfo-cpustat-20sec, and *note 'threads': mysql-cluster-ndbinfo-threads. tables were added in NDB 7.5.2.

The note 'cluster_locks': mysql-cluster-ndbinfo-cluster-locks, note 'locks_per_fragment': mysql-cluster-ndbinfo-locks-per-fragment, and *note 'server_locks': mysql-cluster-ndbinfo-server-locks. tables were added in NDB 7.5.3.

The note 'dict_obj_info': mysql-cluster-ndbinfo-dict-obj-info, note 'table_distribution_status': mysql-cluster-ndbinfo-table-distribution-status, note 'table_fragments': mysql-cluster-ndbinfo-table-fragments, note 'table_info': mysql-cluster-ndbinfo-table-info, and *note 'table_replicas': mysql-cluster-ndbinfo-table-replicas. tables were added in NDB 7.5.4.

The note 'config_nodes': mysql-cluster-ndbinfo-config-nodes. and note 'processes': mysql-cluster-ndbinfo-processes. tables were added in NDB 7.5.7.

The *note 'error_messages': mysql-cluster-ndbinfo-error-messages. table was added in NDB 7.6.

You can execute *note 'SELECT': select. statements against these tables, just as you would normally expect:

 mysql> SELECT * FROM memoryusage;
 +---------+---------------------+--------+------------+------------+-------------+
 | node_id | memory_type         | used   | used_pages | total      | total_pages |
 +---------+---------------------+--------+------------+------------+-------------+
 |       5 | Data memory         | 753664 |         23 | 1073741824 |       32768 |
 |       5 | Index memory        | 163840 |         20 | 1074003968 |      131104 |
 |       5 | Long message buffer |   2304 |          9 |   67108864 |      262144 |
 |       6 | Data memory         | 753664 |         23 | 1073741824 |       32768 |
 |       6 | Index memory        | 163840 |         20 | 1074003968 |      131104 |
 |       6 | Long message buffer |   2304 |          9 |   67108864 |      262144 |
 +---------+---------------------+--------+------------+------------+-------------+
 6 rows in set (0.02 sec)

More complex queries, such as the two following note 'SELECT': select. statements using the note 'memoryusage': mysql-cluster-ndbinfo-memoryusage. table, are possible:

 mysql> SELECT SUM(used) as 'Data Memory Used, All Nodes'
      >     FROM memoryusage
      >     WHERE memory_type = 'Data memory';
 +-----------------------------+
 | Data Memory Used, All Nodes |
 +-----------------------------+
 |                        6460 |
 +-----------------------------+
 1 row in set (0.37 sec)

 mysql> SELECT SUM(total) as 'Total IndexMemory Available'
      >     FROM memoryusage
      >     WHERE memory_type = 'Index memory';
 +-----------------------------+
 | Total IndexMemory Available |
 +-----------------------------+
 |                       25664 |
 +-----------------------------+
 1 row in set (0.33 sec)

'ndbinfo' table and column names are case-sensitive (as is the name of the 'ndbinfo' database itself). These identifiers are in lowercase. Trying to use the wrong lettercase results in an error, as shown in this example:

 mysql> SELECT * FROM nodes;
 +---------+--------+---------+-------------+
 | node_id | uptime | status  | start_phase |
 +---------+--------+---------+-------------+
 |       1 |  13602 | STARTED |           0 |
 |       2 |     16 | STARTED |           0 |
 +---------+--------+---------+-------------+
 2 rows in set (0.04 sec)

 mysql> SELECT * FROM Nodes;
 ERROR 1146 (42S02): Table 'ndbinfo.Nodes' doesn't exist

*note 'mysqldump': mysqldump. ignores the 'ndbinfo' database entirely, and excludes it from any output. This is true even when using the '--databases' or '--all-databases' option.

NDB Cluster also maintains tables in the 'INFORMATION_SCHEMA' information database, including the note 'FILES': information-schema-files-table. table which contains information about files used for NDB Cluster Disk Data storage, and the note 'ndb_transid_mysql_connection_map': information-schema-ndb-transid-mysql-connection-map-table. table, which shows the relationships between transactions, transaction coordinators, and NDB Cluster API nodes. For more information, see the descriptions of the tables or *note mysql-cluster-information-schema-tables::.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-arbitrator-validity-detail, Next: mysql-cluster-ndbinfo-arbitrator-validity-summary, Prev: mysql-cluster-ndbinfo, Up: mysql-cluster-ndbinfo

21.6.15.1 The ndbinfo arbitrator_validity_detail Table
......................................................

The 'arbitrator_validity_detail' table shows the view that each data node in the cluster has of the arbitrator. It is a subset of the *note 'membership': mysql-cluster-ndbinfo-membership. table.

The 'arbitrator_validity_detail' table contains the following columns:

Notes

The node ID is the same as that reported by *note 'ndb_mgm -e "SHOW"': mysql-cluster-programs-ndb-mgm.

All nodes should show the same 'arbitrator' and 'arb_ticket' values as well as the same 'arb_state' value. Possible 'arb_state' values are 'ARBIT_NULL', 'ARBIT_INIT', 'ARBIT_FIND', 'ARBIT_PREP1', 'ARBIT_PREP2', 'ARBIT_START', 'ARBIT_RUN', 'ARBIT_CHOOSE', 'ARBIT_CRASH', and 'UNKNOWN'.

'arb_connected' shows whether the current node is connected to the 'arbitrator'.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-arbitrator-validity-summary, Next: mysql-cluster-ndbinfo-blocks, Prev: mysql-cluster-ndbinfo-arbitrator-validity-detail, Up: mysql-cluster-ndbinfo

21.6.15.2 The ndbinfo arbitrator_validity_summary Table
.......................................................

The 'arbitrator_validity_summary' table provides a composite view of the arbitrator with regard to the cluster's data nodes.

The 'arbitrator_validity_summary' table contains the following columns:

Notes

In normal operations, this table should have only 1 row for any appreciable length of time. If it has more than 1 row for longer than a few moments, then either not all nodes are connected to the arbitrator, or all nodes are connected, but do not agree on the same arbitrator.

The 'arbitrator' column shows the arbitrator's node ID.

'arb_ticket' is the internal identifier used by this arbitrator.

'arb_connected' shows whether this node is connected to the cluster as an arbitrator.
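
A simple health check can be built on the note above. This sketch of a monitoring query should return 1 in normal operation; a higher count persisting for more than a few moments means the data nodes do not agree on a single arbitrator:

 SELECT COUNT(*) AS arbitrator_views
 FROM   ndbinfo.arbitrator_validity_summary;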

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-blocks, Next: mysql-cluster-ndbinfo-cluster-locks, Prev: mysql-cluster-ndbinfo-arbitrator-validity-summary, Up: mysql-cluster-ndbinfo

21.6.15.3 The ndbinfo blocks Table
..................................

The 'blocks' table is a static table which simply contains the names and internal IDs of all NDB kernel blocks (see NDB Kernel Blocks (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks.html)). It is for use by the other *note 'ndbinfo': mysql-cluster-ndbinfo. tables (most of which are actually views) in mapping block numbers to block names for producing human-readable output.

The 'blocks' table contains the following columns:

Notes

To obtain a list of all block names, simply execute 'SELECT block_name FROM ndbinfo.blocks'. Although this is a static table, its content can vary between different NDB Cluster releases.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-cluster-locks, Next: mysql-cluster-ndbinfo-cluster-operations, Prev: mysql-cluster-ndbinfo-blocks, Up: mysql-cluster-ndbinfo

21.6.15.4 The ndbinfo cluster_locks Table
.........................................

The 'cluster_locks' table provides information about current lock requests holding and waiting for locks on 'NDB' tables in an NDB Cluster, and is intended as a companion table to *note 'cluster_operations': mysql-cluster-ndbinfo-cluster-operations. Information obtained from the 'cluster_locks' table may be useful in investigating stalls and deadlocks.

The 'cluster_locks' table contains the following columns:

Notes

The table ID ('tableid' column) is assigned internally, and is the same as that used in other 'ndbinfo' tables. It is also shown in the output of *note 'ndb_show_tables': mysql-cluster-programs-ndb-show-tables.

The transaction ID ('transid' column) is the identifier generated by the NDB API for the transaction requesting or holding the current lock.

The 'mode' column shows the lock mode; this is always one of 'S' (indicating a shared lock) or 'X' (an exclusive lock). If a transaction holds an exclusive lock on a given row, all other locks on that row have the same transaction ID.

The 'state' column shows the lock state. Its value is always one of 'H' (holding) or 'W' (waiting). A waiting lock request waits for a lock held by a different transaction.

When the 'detail' column contains a '*' (asterisk character), this means that this lock is the first holding lock in the affected row's lock queue; otherwise, this column is empty. This information can be used to help identify the unique entries in a list of lock requests.

The 'op' column shows the type of operation requesting the lock. This is always one of the values 'READ', 'INSERT', 'UPDATE', 'DELETE', 'SCAN', or 'REFRESH'.

The 'duration_millis' column shows the number of milliseconds for which this lock request has been waiting or holding the lock. This is reset to 0 when a lock is granted for a waiting request.

The lock ID ('lockid' column) is unique to this node and block instance.

The lock state is shown in the 'lock_state' column; if this is 'W', the lock is waiting to be granted, and the 'waiting_for' column shows the lock ID of the lock object this request is waiting for. Otherwise, the 'waiting_for' column is empty. 'waiting_for' can refer only to locks on the same row, as identified by 'node_id', 'block_instance', 'tableid', 'fragmentid', and 'rowid'.
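
Putting these columns together, a query along the following lines pairs each waiting lock request with the request it is waiting on. This is only a sketch using the columns described in these notes; it assumes that 'lockid' and 'waiting_for' are comparable within the same node and block instance, as stated above:

 SELECT  w.node_id, w.tableid, w.op AS waiting_op,
         w.transid AS waiting_trans, h.transid AS holding_trans,
         w.duration_millis AS wait_ms
 FROM    ndbinfo.cluster_locks w
 JOIN    ndbinfo.cluster_locks h
 ON      h.lockid = w.waiting_for
 AND     h.node_id = w.node_id
 AND     h.block_instance = w.block_instance
 WHERE   w.lock_state = 'W';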

The 'cluster_locks' table was added in NDB 7.5.3.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-cluster-operations, Next: mysql-cluster-ndbinfo-cluster-transactions, Prev: mysql-cluster-ndbinfo-cluster-locks, Up: mysql-cluster-ndbinfo

21.6.15.5 The ndbinfo cluster_operations Table
..............................................

The 'cluster_operations' table provides a per-operation (stateful primary key op) view of all activity in the NDB Cluster from the point of view of the local data management (LQH) blocks (see The DBLQH Block (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks-dblqh.html)).

The 'cluster_operations' table contains the following columns:

Notes

The transaction ID is a unique 64-bit number which can be obtained using the NDB API's 'getTransactionId()' (https://dev.mysql.com/doc/ndbapi/en/ndb-ndbtransaction.html#ndb-ndbtransaction-gettransactionid) method. (Currently, the MySQL Server does not expose the NDB API transaction ID of an ongoing transaction.)

The 'operation_type' column can take any one of the values 'READ', 'READ-SH', 'READ-EX', 'INSERT', 'UPDATE', 'DELETE', 'WRITE', 'UNLOCK', 'REFRESH', 'SCAN', 'SCAN-SH', 'SCAN-EX', or '<unknown>'.

The 'state' column can have any one of the values 'ABORT_QUEUED', 'ABORT_STOPPED', 'COMMITTED', 'COMMIT_QUEUED', 'COMMIT_STOPPED', 'COPY_CLOSE_STOPPED', 'COPY_FIRST_STOPPED', 'COPY_STOPPED', 'COPY_TUPKEY', 'IDLE', 'LOG_ABORT_QUEUED', 'LOG_COMMIT_QUEUED', 'LOG_COMMIT_QUEUED_WAIT_SIGNAL', 'LOG_COMMIT_WRITTEN', 'LOG_COMMIT_WRITTEN_WAIT_SIGNAL', 'LOG_QUEUED', 'PREPARED', 'PREPARED_RECEIVED_COMMIT', 'SCAN_CHECK_STOPPED', 'SCAN_CLOSE_STOPPED', 'SCAN_FIRST_STOPPED', 'SCAN_RELEASE_STOPPED', 'SCAN_STATE_USED', 'SCAN_STOPPED', 'SCAN_TUPKEY', 'STOPPED', 'TC_NOT_CONNECTED', 'WAIT_ACC', 'WAIT_ACC_ABORT', 'WAIT_AI_AFTER_ABORT', 'WAIT_ATTR', 'WAIT_SCAN_AI', 'WAIT_TUP', 'WAIT_TUPKEYINFO', 'WAIT_TUP_COMMIT', or 'WAIT_TUP_TO_ABORT'. (If the MySQL Server is running with 'ndbinfo_show_hidden' enabled, you can view this list of states by selecting from the 'ndb$dblqh_tcconnect_state' table, which is normally hidden.)

You can obtain the name of an 'NDB' table from its table ID by checking the output of *note 'ndb_show_tables': mysql-cluster-programs-ndb-show-tables.

The 'fragid' is the same as the partition number seen in the output of *note 'ndb_desc': mysql-cluster-programs-ndb-desc. '--extra-partition-info' (short form '-p').

In 'client_node_id' and 'client_block_ref', 'client' refers to an NDB Cluster API or SQL node (that is, an NDB API client or a MySQL Server attached to the cluster).

The 'block_instance' and 'tc_block_instance' columns provide, respectively, the 'DBLQH' (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks-dblqh.html) and 'DBTC' (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks-dbtc.html) block instance numbers. You can use these along with the block names to obtain information about specific threads from the *note 'threadblocks': mysql-cluster-ndbinfo-threadblocks. table.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-cluster-transactions, Next: mysql-cluster-ndbinfo-config-nodes, Prev: mysql-cluster-ndbinfo-cluster-operations, Up: mysql-cluster-ndbinfo

21.6.15.6 The ndbinfo cluster_transactions Table
................................................

The 'cluster_transactions' table shows information about all ongoing transactions in an NDB Cluster.

The 'cluster_transactions' table contains the following columns:

Notes

The transaction ID is a unique 64-bit number which can be obtained using the NDB API's 'getTransactionId()' (https://dev.mysql.com/doc/ndbapi/en/ndb-ndbtransaction.html#ndb-ndbtransaction-gettransactionid) method. (Currently, the MySQL Server does not expose the NDB API transaction ID of an ongoing transaction.)

'block_instance' refers to an instance of a kernel block. Together with the block name, this number can be used to look up a given instance in the *note 'threadblocks': mysql-cluster-ndbinfo-threadblocks. table.

The 'state' column can have any one of the values 'CS_ABORTING', 'CS_COMMITTING', 'CS_COMMIT_SENT', 'CS_COMPLETE_SENT', 'CS_COMPLETING', 'CS_CONNECTED', 'CS_DISCONNECTED', 'CS_FAIL_ABORTED', 'CS_FAIL_ABORTING', 'CS_FAIL_COMMITTED', 'CS_FAIL_COMMITTING', 'CS_FAIL_COMPLETED', 'CS_FAIL_PREPARED', 'CS_PREPARE_TO_COMMIT', 'CS_RECEIVING', 'CS_REC_COMMITTING', 'CS_RESTART', 'CS_SEND_FIRE_TRIG_REQ', 'CS_STARTED', 'CS_START_COMMITTING', 'CS_START_SCAN', 'CS_WAIT_ABORT_CONF', 'CS_WAIT_COMMIT_CONF', 'CS_WAIT_COMPLETE_CONF', or 'CS_WAIT_FIRE_TRIG_REQ'. (If the MySQL Server is running with 'ndbinfo_show_hidden' enabled, you can view this list of states by selecting from the 'ndb$dbtc_apiconnect_state' table, which is normally hidden.)

In 'client_node_id' and 'client_block_ref', 'client' refers to an NDB Cluster API or SQL node (that is, an NDB API client or a MySQL Server attached to the cluster).

The 'tc_block_instance' column provides the 'DBTC' (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks-dbtc.html) block instance number. You can use this along with the block name to obtain information about specific threads from the *note 'threadblocks': mysql-cluster-ndbinfo-threadblocks. table.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-config-nodes, Next: mysql-cluster-ndbinfo-config-params, Prev: mysql-cluster-ndbinfo-cluster-transactions, Up: mysql-cluster-ndbinfo

21.6.15.7 The ndbinfo config_nodes Table
........................................

The 'config_nodes' table shows nodes configured in an NDB Cluster 'config.ini' file. For each node, the table displays a row containing the node ID, the type of node (management node, data node, or API node), and the name or IP address of the host on which the node is configured to run.

This table does not indicate whether a given node is actually running, or whether it is currently connected to the cluster. Information about nodes connected to an NDB Cluster can be obtained from the note 'nodes': mysql-cluster-ndbinfo-nodes. and note 'processes': mysql-cluster-ndbinfo-processes. tables.

The 'config_nodes' table contains the following columns:

Notes

The 'node_id' column shows the node ID used in the 'config.ini' file for this node; if none is specified, the node ID that would be assigned automatically to this node is displayed.

The 'node_type' column displays one of the following three values:

   * 'MGM': management node

   * 'NDB': data node

   * 'API': API node (including SQL nodes)

The 'node_hostname' column shows the node host as specified in the 'config.ini' file. This can be empty for an API node, if 'HostName' has not been set in the cluster configuration file. If 'HostName' has not been set for a data node in the configuration file, 'localhost' is used here. 'localhost' is also used if 'HostName' has not been specified for a management node.
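
For example, this query (a sketch) summarizes the nodes configured in the 'config.ini' file by type:

 SELECT   node_type, COUNT(*) AS nodes_configured
 FROM     ndbinfo.config_nodes
 GROUP BY node_type;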

The 'config_nodes' table was added in NDB 7.5.7.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-config-params, Next: mysql-cluster-ndbinfo-config-values, Prev: mysql-cluster-ndbinfo-config-nodes, Up: mysql-cluster-ndbinfo

21.6.15.8 The ndbinfo config_params Table
.........................................

The 'config_params' table is a static table which provides the names, internal ID numbers, and other information about NDB Cluster configuration parameters.

The 'config_params' table contains the following columns:

Notes

In NDB Cluster 7.5 (and later), this table is read-only. The 'param_description', 'param_type', 'param_default', 'param_min', 'param_max', 'param_mandatory', and 'param_status' columns were all added in NDB 7.5.0.

Although this is a static table, its content can vary between NDB Cluster installations, since supported parameters can vary due to differences between software releases, cluster hardware configurations, and other factors.
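
Since the table is read-only, it is typically used for lookups. As a sketch, this query retrieves the identifiers, names, and defaults of all backup-related parameters (the 'Backup%' pattern is merely illustrative):

 SELECT param_number, param_name, param_default
 FROM   ndbinfo.config_params
 WHERE  param_name LIKE 'Backup%';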

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-config-values, Next: mysql-cluster-ndbinfo-counters, Prev: mysql-cluster-ndbinfo-config-params, Up: mysql-cluster-ndbinfo

21.6.15.9 The ndbinfo config_values Table
.........................................

The 'config_values' table, implemented in NDB 7.5.0, provides information about the current state of node configuration parameter values. Each row in the table corresponds to the current value of a parameter on a given node.

Notes

This table's 'config_param' column and the *note 'config_params': mysql-cluster-ndbinfo-config-params. table's 'param_number' column use the same parameter identifiers. By joining the two tables on these columns, you can obtain detailed information about desired node configuration parameters. The query shown here provides the current values for all parameters on each data node in the cluster, ordered by node ID and parameter name:

 SELECT    v.node_id AS 'Node Id',
           p.param_name AS 'Parameter',
           v.config_value AS 'Value'
 FROM      config_values v
 JOIN      config_params p
 ON        v.config_param=p.param_number
 WHERE     p.param_name NOT LIKE '\_\_%'
 ORDER BY  v.node_id, p.param_name;

Partial output from the previous query when run on a small example cluster used for simple testing:

 +---------+------------------------------------------+----------------+
 | Node Id | Parameter                                | Value          |
 +---------+------------------------------------------+----------------+
 |       2 | Arbitration                              | 1              |
 |       2 | ArbitrationTimeout                       | 7500           |
 |       2 | BackupDataBufferSize                     | 16777216       |
 |       2 | BackupDataDir                            | /home/jon/data |
 |       2 | BackupDiskWriteSpeedPct                  | 50             |
 |       2 | BackupLogBufferSize                      | 16777216       |

 ...

 |       3 | TotalSendBufferMemory                    | 0              |
 |       3 | TransactionBufferMemory                  | 1048576        |
 |       3 | TransactionDeadlockDetectionTimeout      | 1200           |
 |       3 | TransactionInactiveTimeout               | 4294967039     |
 |       3 | TwoPassInitialNodeRestartCopy            | 0              |
 |       3 | UndoDataBuffer                           | 16777216       |
 |       3 | UndoIndexBuffer                          | 2097152        |
 +---------+------------------------------------------+----------------+
 248 rows in set (0.02 sec)

The 'WHERE' clause filters out parameters whose names begin with a double underscore ('__'); these parameters are reserved for testing and other internal uses by the NDB developers, and are not intended for use in a production NDB Cluster.

You can obtain output that is more specific, more detailed, or both by issuing the proper queries. This example provides all types of available information about the 'NodeId', 'NoOfReplicas', 'HostName', 'DataMemory', 'IndexMemory', and 'TotalSendBufferMemory' parameters as currently set for all data nodes in the cluster:

 SELECT  p.param_name AS Name,
         v.node_id AS Node,
         p.param_type AS Type,
         p.param_default AS 'Default',
         p.param_min AS Minimum,
         p.param_max AS Maximum,
         CASE p.param_mandatory WHEN 1 THEN 'Y' ELSE 'N' END AS 'Required',
         v.config_value AS Current
 FROM    config_params p
 JOIN    config_values v
 ON      p.param_number = v.config_param
 WHERE   p.param_name
   IN ('NodeId', 'NoOfReplicas', 'HostName',
       'DataMemory', 'IndexMemory', 'TotalSendBufferMemory')\G

The output from this query when run on a small NDB Cluster with 2 data nodes used for simple testing is shown here:

 *************************** 1. row ***************************
     Name: NodeId
     Node: 2
     Type: unsigned
  Default:
  Minimum: 1
  Maximum: 48
 Required: Y
  Current: 2
 *************************** 2. row ***************************
     Name: HostName
     Node: 2
     Type: string
  Default: localhost
  Minimum:
  Maximum:
 Required: N
  Current: 127.0.0.1
 *************************** 3. row ***************************
     Name: TotalSendBufferMemory
     Node: 2
     Type: unsigned
  Default: 0
  Minimum: 262144
  Maximum: 4294967039
 Required: N
  Current: 0
 *************************** 4. row ***************************
     Name: NoOfReplicas
     Node: 2
     Type: unsigned
  Default: 2
  Minimum: 1
  Maximum: 4
 Required: N
  Current: 2
 *************************** 5. row ***************************
     Name: DataMemory
     Node: 2
     Type: unsigned
  Default: 102760448
  Minimum: 1048576
  Maximum: 1099511627776
 Required: N
  Current: 524288000
 *************************** 6. row ***************************
     Name: NodeId
     Node: 3
     Type: unsigned
  Default:
  Minimum: 1
  Maximum: 48
 Required: Y
  Current: 3
 *************************** 7. row ***************************
     Name: HostName
     Node: 3
     Type: string
  Default: localhost
  Minimum:
  Maximum:
 Required: N
  Current: 127.0.0.1
 *************************** 8. row ***************************
     Name: TotalSendBufferMemory
     Node: 3
     Type: unsigned
  Default: 0
  Minimum: 262144
  Maximum: 4294967039
 Required: N
  Current: 0
 *************************** 9. row ***************************
     Name: NoOfReplicas
     Node: 3
     Type: unsigned
  Default: 2
  Minimum: 1
  Maximum: 4
 Required: N
  Current: 2
 *************************** 10. row ***************************
     Name: DataMemory
     Node: 3
     Type: unsigned
  Default: 102760448
  Minimum: 1048576
  Maximum: 1099511627776
 Required: N
  Current: 524288000
 10 rows in set (0.01 sec)

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-counters, Next: mysql-cluster-ndbinfo-cpustat, Prev: mysql-cluster-ndbinfo-config-values, Up: mysql-cluster-ndbinfo

21.6.15.10 The ndbinfo counters Table
.....................................

The 'counters' table provides running totals of events such as reads and writes for specific kernel blocks and data nodes. Counts are kept from the most recent node start or restart; a node start or restart resets all counters on that node. Not all kernel blocks have all types of counters.

The 'counters' table contains the following columns:

Notes

Each counter is associated with a particular NDB kernel block.

The 'OPERATIONS' counter is associated with the 'DBLQH' (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks-dblqh.html) (local query handler) kernel block (see The DBLQH Block (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks-dblqh.html)). A primary-key read counts as one operation, as does a primary-key update. For reads, there is one operation in 'DBLQH' (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks-dblqh.html) per operation in 'DBTC' (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks-dbtc.html). For writes, there is one operation counted per fragment replica.

The 'ATTRINFO', 'TRANSACTIONS', 'COMMITS', 'READS', 'LOCAL_READS', 'SIMPLE_READS', 'WRITES', 'LOCAL_WRITES', 'ABORTS', 'TABLE_SCANS', and 'RANGE_SCANS' counters are associated with the DBTC (transaction co-ordinator) kernel block (see The DBTC Block (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks-dbtc.html)).

'LOCAL_WRITES' and 'LOCAL_READS' are primary-key operations using a transaction coordinator in a node that also holds the primary fragment replica of the record.

The 'READS' counter includes all reads. 'LOCAL_READS' includes only those reads of the primary fragment replica on the same node as this transaction coordinator. 'SIMPLE_READS' includes only those reads in which the read operation is the beginning and ending operation for a given transaction. Simple reads do not hold locks but are part of a transaction, in that they observe uncommitted changes made by the transaction containing them but not those of any other uncommitted transactions. Such reads are 'simple' from the point of view of the TC block; since they hold no locks they are not durable, and once 'DBTC' (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks-dbtc.html) has routed them to the relevant LQH block, it holds no state for them.

'ATTRINFO' keeps a count of the number of times an interpreted program is sent to the data node. See NDB Protocol Messages (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-ndb-protocol-messages.html), for more information about 'ATTRINFO' messages in the 'NDB' kernel.

The 'LOCAL_TABLE_SCANS_SENT', 'READS_RECEIVED', 'PRUNED_RANGE_SCANS_RECEIVED', 'RANGE_SCANS_RECEIVED', 'LOCAL_READS_SENT', 'CONST_PRUNED_RANGE_SCANS_RECEIVED', 'LOCAL_RANGE_SCANS_SENT', 'REMOTE_READS_SENT', 'REMOTE_RANGE_SCANS_SENT', 'READS_NOT_FOUND', 'SCAN_BATCHES_RETURNED', 'TABLE_SCANS_RECEIVED', and 'SCAN_ROWS_RETURNED' counters are associated with the 'DBSPJ' (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks-dbspj.html) (select push-down join) kernel block (see The DBSPJ Block (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks-dbspj.html)).

The 'block_name' and 'block_instance' columns provide, respectively, the applicable NDB kernel block name and instance number. You can use these to obtain information about specific threads from the *note 'threadblocks': mysql-cluster-ndbinfo-threadblocks. table.
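
Because each row of the 'counters' table is specific to a single block instance on a single node, per-node totals require aggregation. This sketch sums three of the 'DBTC' counters named earlier for each node; it assumes that the counter value column is named 'val':

 SELECT   node_id, counter_name, SUM(val) AS total
 FROM     ndbinfo.counters
 WHERE    block_name = 'DBTC'
 AND      counter_name IN ('TRANSACTIONS', 'COMMITS', 'ABORTS')
 GROUP BY node_id, counter_name;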

A number of counters provide information about transporter overload and send buffer sizing when troubleshooting such issues. For each LQH instance, there is one instance of each counter in the following list:

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-cpustat, Next: mysql-cluster-ndbinfo-cpustat-50ms, Prev: mysql-cluster-ndbinfo-counters, Up: mysql-cluster-ndbinfo

21.6.15.11 The ndbinfo cpustat Table
....................................

The 'cpustat' table provides per-thread CPU statistics gathered each second, for each thread running in the 'NDB' kernel.

The 'cpustat' table contains the following columns:

Notes

This table was added in NDB 7.5.2.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-cpustat-50ms, Next: mysql-cluster-ndbinfo-cpustat-1sec, Prev: mysql-cluster-ndbinfo-cpustat, Up: mysql-cluster-ndbinfo

21.6.15.12 The ndbinfo cpustat_50ms Table
.........................................

The 'cpustat_50ms' table provides raw, per-thread CPU data obtained each 50 milliseconds for each thread running in the 'NDB' kernel.

Like note 'cpustat_1sec': mysql-cluster-ndbinfo-cpustat-1sec. and note 'cpustat_20sec': mysql-cluster-ndbinfo-cpustat-20sec, this table shows 20 measurement sets per thread, each referencing a period of the named duration. Thus, 'cpustat_50ms' provides 1 second of history.

The 'cpustat_50ms' table contains the following columns:

Notes

This table was added in NDB 7.5.2.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-cpustat-1sec, Next: mysql-cluster-ndbinfo-cpustat-20sec, Prev: mysql-cluster-ndbinfo-cpustat-50ms, Up: mysql-cluster-ndbinfo

21.6.15.13 The ndbinfo cpustat_1sec Table
.........................................

The 'cpustat_1sec' table provides raw, per-thread CPU data obtained each second for each thread running in the 'NDB' kernel.

Like note 'cpustat_50ms': mysql-cluster-ndbinfo-cpustat-50ms. and note 'cpustat_20sec': mysql-cluster-ndbinfo-cpustat-20sec, this table shows 20 measurement sets per thread, each referencing a period of the named duration. Thus, 'cpustat_1sec' provides 20 seconds of history.

The 'cpustat_1sec' table contains the following columns:

Notes

This table was added in NDB 7.5.2.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-cpustat-20sec, Next: mysql-cluster-ndbinfo-dict-obj-info, Prev: mysql-cluster-ndbinfo-cpustat-1sec, Up: mysql-cluster-ndbinfo

21.6.15.14 The ndbinfo cpustat_20sec Table
..........................................

The 'cpustat_20sec' table provides raw, per-thread CPU data obtained each 20 seconds, for each thread running in the 'NDB' kernel.

Like note 'cpustat_50ms': mysql-cluster-ndbinfo-cpustat-50ms. and note 'cpustat_1sec': mysql-cluster-ndbinfo-cpustat-1sec, this table shows 20 measurement sets per thread, each referencing a period of the named duration. Thus, 'cpustat_20sec' provides 400 seconds of history.

The 'cpustat_20sec' table contains the following columns:

Notes

This table was added in NDB 7.5.2.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-dict-obj-info, Next: mysql-cluster-ndbinfo-dict-obj-types, Prev: mysql-cluster-ndbinfo-cpustat-20sec, Up: mysql-cluster-ndbinfo

21.6.15.15 The ndbinfo dict_obj_info Table
..........................................

The 'dict_obj_info' table provides information about 'NDB' data dictionary ('DICT' (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks-dbdict.html)) objects such as tables and indexes. (The *note 'dict_obj_types': mysql-cluster-ndbinfo-dict-obj-types. table can be queried for a list of all the types.) This information includes the object's type, state, parent object (if any), and fully qualified name.

The 'dict_obj_info' table contains the following columns:

Notes

This table was added in NDB 7.5.4.
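
As a sketch, assuming that 'dict_obj_info' has a numeric 'type' column and that 'dict_obj_types' (described in the next section) has 'type_id' and 'type_name' columns, the two tables can be joined to count dictionary objects by type:

 SELECT   t.type_name, COUNT(*) AS objects
 FROM     ndbinfo.dict_obj_info i
 JOIN     ndbinfo.dict_obj_types t
 ON       i.type = t.type_id
 GROUP BY t.type_name;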

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-dict-obj-types, Next: mysql-cluster-ndbinfo-disk-write-speed-base, Prev: mysql-cluster-ndbinfo-dict-obj-info, Up: mysql-cluster-ndbinfo

21.6.15.16 The ndbinfo dict_obj_types Table
...........................................

The 'dict_obj_types' table is a static table listing possible dictionary object types used in the NDB kernel. These are the same types defined by 'Object::Type' (https://dev.mysql.com/doc/ndbapi/en/ndb-object.html#ndb-object-type) in the NDB API.

The 'dict_obj_types' table contains the following columns:

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-disk-write-speed-base, Next: mysql-cluster-ndbinfo-disk-write-speed-aggregate, Prev: mysql-cluster-ndbinfo-dict-obj-types, Up: mysql-cluster-ndbinfo

21.6.15.17 The ndbinfo disk_write_speed_base Table
..................................................

The 'disk_write_speed_base' table provides base information about the speed of disk writes during LCP, backup, and restore operations.

The 'disk_write_speed_base' table contains the following columns:

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-disk-write-speed-aggregate, Next: mysql-cluster-ndbinfo-disk-write-speed-aggregate-node, Prev: mysql-cluster-ndbinfo-disk-write-speed-base, Up: mysql-cluster-ndbinfo

21.6.15.18 The ndbinfo disk_write_speed_aggregate Table
.......................................................

The 'disk_write_speed_aggregate' table provides aggregated information about the speed of disk writes during LCP, backup, and restore operations.

The 'disk_write_speed_aggregate' table contains the following columns:

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-disk-write-speed-aggregate-node, Next: mysql-cluster-ndbinfo-diskpagebuffer, Prev: mysql-cluster-ndbinfo-disk-write-speed-aggregate, Up: mysql-cluster-ndbinfo

21.6.15.19 The ndbinfo disk_write_speed_aggregate_node Table
............................................................

The 'disk_write_speed_aggregate_node' table provides aggregated information per node about the speed of disk writes during LCP, backup, and restore operations.

The 'disk_write_speed_aggregate_node' table contains the following columns:

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-diskpagebuffer, Next: mysql-cluster-ndbinfo-error-messages, Prev: mysql-cluster-ndbinfo-disk-write-speed-aggregate-node, Up: mysql-cluster-ndbinfo

21.6.15.20 The ndbinfo diskpagebuffer Table
...........................................

The 'diskpagebuffer' table provides statistics about disk page buffer usage by NDB Cluster Disk Data tables.

The 'diskpagebuffer' table contains the following columns:

Notes

You can use this table with NDB Cluster Disk Data tables to determine whether 'DiskPageBufferMemory' is sufficiently large to allow data to be read from the buffer rather than from disk; minimizing disk seeks can help improve performance of such tables.

You can determine the proportion of reads from 'DiskPageBufferMemory' to the total number of reads using a query such as this one, which obtains this ratio as a percentage:

 SELECT
   node_id,
   100 * page_requests_direct_return /
     (page_requests_direct_return + page_requests_wait_io)
       AS hit_ratio
 FROM ndbinfo.diskpagebuffer;

The result from this query should be similar to what is shown here, with one row for each data node in the cluster (in this example, the cluster has 4 data nodes):

 +---------+-----------+
 | node_id | hit_ratio |
 +---------+-----------+
 |       5 |   97.6744 |
 |       6 |   97.6879 |
 |       7 |   98.1776 |
 |       8 |   98.1343 |
 +---------+-----------+
 4 rows in set (0.00 sec)

'hit_ratio' values approaching 100% indicate that only a very small number of reads are being made from disk rather than from the buffer, which means that Disk Data read performance is approaching an optimum level. If any of these values are less than 95%, this is a strong indicator that the setting for 'DiskPageBufferMemory' needs to be increased in the 'config.ini' file.
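
For example, the following variation on the previous query (a sketch; adjust the 95% threshold to suit your own requirements) aggregates the counters for each data node and lists only those nodes whose hit ratio falls below 95%:

 SELECT
   node_id,
   100 * SUM(page_requests_direct_return) /
     SUM(page_requests_direct_return + page_requests_wait_io)
       AS hit_ratio
 FROM ndbinfo.diskpagebuffer
 GROUP BY node_id
 HAVING hit_ratio < 95;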

Note:

A change in 'DiskPageBufferMemory' requires a rolling restart of all of the cluster's data nodes before it takes effect.

'block_instance' refers to an instance of a kernel block. Together with the block name, this number can be used to look up a given instance in the *note 'threadblocks': mysql-cluster-ndbinfo-threadblocks. table. Using this information, you can obtain information about disk page buffer metrics relating to individual threads; an example query using 'LIMIT 1' to limit the output to a single thread is shown here:

 mysql> SELECT
      >   node_id, thr_no, block_name, thread_name, pages_written,
      >   pages_written_lcp, pages_read, log_waits,
      >   page_requests_direct_return, page_requests_wait_queue,
      >   page_requests_wait_io
      > FROM ndbinfo.diskpagebuffer
      >   INNER JOIN ndbinfo.threadblocks USING (node_id, block_instance)
      >   INNER JOIN ndbinfo.threads USING (node_id, thr_no)
      > WHERE block_name = 'PGMAN' LIMIT 1\G
 *************************** 1. row ***************************
                     node_id: 1
                      thr_no: 1
                  block_name: PGMAN
                 thread_name: rep
               pages_written: 0
           pages_written_lcp: 0
                  pages_read: 1
                   log_waits: 0
 page_requests_direct_return: 4
    page_requests_wait_queue: 0
       page_requests_wait_io: 1
 1 row in set (0.01 sec)

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-error-messages, Next: mysql-cluster-ndbinfo-locks-per-fragment, Prev: mysql-cluster-ndbinfo-diskpagebuffer, Up: mysql-cluster-ndbinfo

21.6.15.21 The ndbinfo error_messages Table ...........................................

The 'error_messages' table provides information about 'NDB' errors, including error codes, status values, brief descriptions, and classifications.

The 'error_messages' table contains the following columns:

Notes

'error_code' is a numeric NDB error code. This is the same error code that can be supplied to *note 'ndb_perror': mysql-cluster-programs-ndb-perror. or *note 'perror': perror. '--ndb'.

'error_description' provides a basic description of the condition causing the error.

The 'error_status' column provides status information relating to the error. Possible values for this column are listed here:

The 'error_classification' column shows the error classification. See NDB Error Classifications (https://dev.mysql.com/doc/ndbapi/en/ndb-error-classifications.html), for information about classification codes and their meanings.
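
For example, the description, status, and classification of a given error code (here, error code 157 is used purely as an arbitrary illustration) can be looked up with a query such as this one:

 SELECT error_code, error_status, error_classification, error_description
 FROM ndbinfo.error_messages
 WHERE error_code = 157;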

The 'error_messages' table was added in NDB 7.6.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-locks-per-fragment, Next: mysql-cluster-ndbinfo-logbuffers, Prev: mysql-cluster-ndbinfo-error-messages, Up: mysql-cluster-ndbinfo

21.6.15.22 The ndbinfo locks_per_fragment Table ...............................................

The 'locks_per_fragment' table provides information about counts of lock claim requests, and the outcomes of these requests, on a per-fragment basis, serving as a companion table to *note 'operations_per_fragment': mysql-cluster-ndbinfo-operations-per-fragment. and *note 'memory_per_fragment': mysql-cluster-ndbinfo-memory-per-fragment. This table also shows the total time spent waiting for locks, both for requests that were granted and for those that failed, since fragment or table creation or since the most recent restart.

The 'locks_per_fragment' table contains the following columns:

Notes

'block_instance' refers to an instance of a kernel block. Together with the block name, this number can be used to look up a given instance in the *note 'threadblocks': mysql-cluster-ndbinfo-threadblocks. table.

'fq_name' is a fully qualified database object name in DATABASE/SCHEMA/NAME format, such as 'test/def/t1' or 'sys/def/10/b$unique'.

'parent_fq_name' is the fully qualified name of this object's parent object (table).

'table_id' is the table's internal ID generated by 'NDB'. This is the same internal table ID shown in other 'ndbinfo' tables; it is also visible in the output of *note 'ndb_show_tables': mysql-cluster-programs-ndb-show-tables.

The 'type' column shows the type of table. This is always one of 'System table', 'User table', 'Unique hash index', 'Hash index', 'Unique ordered index', 'Ordered index', 'Hash index trigger', 'Subscription trigger', 'Read only constraint', 'Index trigger', 'Reorganize trigger', 'Tablespace', 'Log file group', 'Data file', 'Undo file', 'Hash map', 'Foreign key definition', 'Foreign key parent trigger', 'Foreign key child trigger', or 'Schema transaction'.

The values shown in all of the columns 'ex_req', 'ex_req_imm_ok', 'ex_wait_ok', 'ex_wait_fail', 'sh_req', 'sh_req_imm_ok', 'sh_wait_ok', and 'sh_wait_fail' represent cumulative numbers of requests since the table or fragment was created, or since the last restart of this node, whichever of these occurred later. This is also true for the time values shown in the 'wait_ok_millis' and 'wait_fail_millis' columns.

Every lock request is considered either to be in progress, or to have completed in some way (that is, to have succeeded or failed). This means that the following relationships are true:

 ex_req >= (ex_req_imm_ok + ex_wait_ok + ex_wait_fail)

 sh_req >= (sh_req_imm_ok + sh_wait_ok + sh_wait_fail)

The number of requests currently in progress is the current number of incomplete requests, which can be found as shown here:

 [exclusive lock requests in progress] =
     ex_req - (ex_req_imm_ok + ex_wait_ok + ex_wait_fail)

 [shared lock requests in progress] =
     sh_req - (sh_req_imm_ok + sh_wait_ok + sh_wait_fail)

A failed wait indicates an aborted transaction, but the abort may or may not be caused by a lock wait timeout. You can obtain the total number of aborts while waiting for locks as shown here:

 [aborts while waiting for locks] = ex_wait_fail + sh_wait_fail
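
Combining these relationships, a query such as the following (a sketch using only the columns described above) reports, for each table or index, the number of lock requests currently in progress as well as the number of aborts that occurred while waiting for locks:

 SELECT
   fq_name,
   SUM(ex_req - (ex_req_imm_ok + ex_wait_ok + ex_wait_fail))
     AS ex_in_progress,
   SUM(sh_req - (sh_req_imm_ok + sh_wait_ok + sh_wait_fail))
     AS sh_in_progress,
   SUM(ex_wait_fail + sh_wait_fail) AS lock_wait_aborts
 FROM ndbinfo.locks_per_fragment
 GROUP BY fq_name;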

The 'locks_per_fragment' table was added in NDB 7.5.3.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-logbuffers, Next: mysql-cluster-ndbinfo-logspaces, Prev: mysql-cluster-ndbinfo-locks-per-fragment, Up: mysql-cluster-ndbinfo

21.6.15.23 The ndbinfo logbuffers Table .......................................

The 'logbuffers' table provides information on NDB Cluster log buffer usage.

The 'logbuffers' table contains the following columns:

Notes

In NDB 7.6.6 and later, the 'logbuffers' table shows rows for two additional log types while an NDB backup is in progress. One of these rows has the log type 'BACKUP-DATA', which shows the amount of data buffer used during the backup to copy fragments to backup files. The other row has the log type 'BACKUP-LOG', which shows the amount of log buffer used during the backup to record changes made after the backup started. One row for each of these 'log_type' values is shown in the 'logbuffers' table for each data node in the cluster. These rows are not present unless an NDB backup is currently being performed. (Bug #25822988)
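
While a backup is in progress, you can watch buffer usage for these two log types with a query similar to this sketch (the 'total' and 'used' columns show bytes):

 SELECT node_id, log_type, total, used
 FROM ndbinfo.logbuffers
 WHERE log_type IN ('BACKUP-DATA', 'BACKUP-LOG');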

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-logspaces, Next: mysql-cluster-ndbinfo-membership, Prev: mysql-cluster-ndbinfo-logbuffers, Up: mysql-cluster-ndbinfo

21.6.15.24 The ndbinfo logspaces Table ......................................

This table provides information about NDB Cluster log space usage.

The 'logspaces' table contains the following columns:

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-membership, Next: mysql-cluster-ndbinfo-memoryusage, Prev: mysql-cluster-ndbinfo-logspaces, Up: mysql-cluster-ndbinfo

21.6.15.25 The ndbinfo membership Table .......................................

The 'membership' table describes the view that each data node has of all the others in the cluster, including node group membership, president node, arbitrator, arbitrator successor, arbitrator connection states, and other information.

The 'membership' table contains the following columns:

Notes

The node ID and node group ID are the same as reported by *note 'ndb_mgm -e "SHOW"': mysql-cluster-programs-ndb-mgm.

'left_node' and 'right_node' are defined in terms of a model that connects all data nodes in a circle, in order of their node IDs, similar to the ordering of the numbers on a clock dial, as shown here:

FIGURE GOES HERE: Circular Arrangement of NDB Cluster Nodes

In this example, we have 8 data nodes, numbered 5, 6, 7, 8, 12, 13, 14, and 15, ordered clockwise in a circle. We determine 'left' and 'right' from the interior of the circle. The node to the left of node 5 is node 15, and the node to the right of node 5 is node 6. You can see all these relationships by running the following query and observing the output:

 mysql> SELECT node_id,left_node,right_node
     -> FROM ndbinfo.membership;
 +---------+-----------+------------+
 | node_id | left_node | right_node |
 +---------+-----------+------------+
 |       5 |        15 |          6 |
 |       6 |         5 |          7 |
 |       7 |         6 |          8 |
 |       8 |         7 |         12 |
 |      12 |         8 |         13 |
 |      13 |        12 |         14 |
 |      14 |        13 |         15 |
 |      15 |        14 |          5 |
 +---------+-----------+------------+
 8 rows in set (0.00 sec)

The designations 'left' and 'right' are used in the event log in the same way.

The 'president' node is the node viewed by the current node as responsible for setting an arbitrator (see NDB Cluster Start Phases (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-start-phases.html)). If the president fails or becomes disconnected, the current node expects the node whose ID is shown in the 'successor' column to become the new president. The 'succession_order' column shows the place in the succession queue that the current node views itself as having.

In a normal NDB Cluster, all data nodes should see the same node as 'president', and the same node (other than the president) as its 'successor'. In addition, the current president should see itself as '1' in the order of succession, the 'successor' node should see itself as '2', and so on.

All nodes should show the same 'arb_ticket' values as well as the same 'arb_state' values. Possible 'arb_state' values are 'ARBIT_NULL', 'ARBIT_INIT', 'ARBIT_FIND', 'ARBIT_PREP1', 'ARBIT_PREP2', 'ARBIT_START', 'ARBIT_RUN', 'ARBIT_CHOOSE', 'ARBIT_CRASH', and 'UNKNOWN'.

'arb_connected' shows whether this node is connected to the node shown as this node's 'arbitrator'.

The 'connected_rank1_arbs' and 'connected_rank2_arbs' columns each display a list of 0 or more arbitrators having an 'ArbitrationRank' equal to 1, or to 2, respectively.
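
As a quick consistency check (a sketch based on the columns just described), you can verify that all data nodes report the same arbitration state; if the following query returns more than one row, the nodes disagree:

 SELECT arb_ticket, arb_state, COUNT(*) AS nodes
 FROM ndbinfo.membership
 GROUP BY arb_ticket, arb_state;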

Note:

Both management nodes and API nodes are eligible to become arbitrators.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-memoryusage, Next: mysql-cluster-ndbinfo-memory-per-fragment, Prev: mysql-cluster-ndbinfo-membership, Up: mysql-cluster-ndbinfo

21.6.15.26 The ndbinfo memoryusage Table ........................................

Querying this table provides information similar to that provided by the 'ALL REPORT MemoryUsage' command in the *note 'ndb_mgm': mysql-cluster-programs-ndb-mgm. client, or logged by 'ALL DUMP 1000' (https://dev.mysql.com/doc/ndb-internals/en/dump-command-1000.html).

The 'memoryusage' table contains the following columns:

Notes

The 'total' column represents the total amount of memory in bytes available for the given resource (data memory or index memory) on a particular data node. This number should be approximately equal to the setting of the corresponding configuration parameter in the 'config.ini' file.

Suppose that the cluster has 2 data nodes having node IDs '5' and '6', and the 'config.ini' file contains the following:

 [ndbd default]
 DataMemory = 1G
 IndexMemory = 1G

Suppose also that the value of the 'LongMessageBuffer' configuration parameter is allowed to assume its default (64 MB).

The following query shows approximately the same values:

 mysql> SELECT node_id, memory_type, total
      > FROM ndbinfo.memoryusage;
 +---------+---------------------+------------+
 | node_id | memory_type         | total      |
 +---------+---------------------+------------+
 |       5 | Data memory         | 1073741824 |
 |       5 | Index memory        | 1074003968 |
 |       5 | Long message buffer |   67108864 |
 |       6 | Data memory         | 1073741824 |
 |       6 | Index memory        | 1074003968 |
 |       6 | Long message buffer |   67108864 |
 +---------+---------------------+------------+
 6 rows in set (0.00 sec)

In this case, the 'total' column values for index memory are slightly higher than the value set for 'IndexMemory' due to internal rounding.

For the 'used_pages' and 'total_pages' columns, resources are measured in pages, which are 32K in size for 'DataMemory' and 8K for 'IndexMemory'. For long message buffer memory, the page size is 256 bytes.
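
Because 'used_pages' and 'total_pages' are expressed in pages of these sizes, the percentage of each resource in use can be computed directly from them, as in this sketch:

 SELECT
   node_id,
   memory_type,
   used_pages,
   total_pages,
   100 * used_pages / total_pages AS pct_used
 FROM ndbinfo.memoryusage;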

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-memory-per-fragment, Next: mysql-cluster-ndbinfo-nodes, Prev: mysql-cluster-ndbinfo-memoryusage, Up: mysql-cluster-ndbinfo

21.6.15.27 The ndbinfo memory_per_fragment Table ................................................

The 'memory_per_fragment' table provides information about the usage of memory by individual fragments. See the Notes later in this section for information about how you can use this table to find out how much memory is used by 'NDB' tables.

The 'memory_per_fragment' table contains the following columns:

memory_per_fragment Table: Notes

The 'memory_per_fragment' table contains one row for every table fragment replica and every index fragment replica in the system; this means that, for example, when 'NoOfReplicas=2', there are normally two fragment replicas for each fragment. This is true as long as all data nodes are running and connected to the cluster; for a data node that is missing, there are no rows for the fragment replicas that it hosts.

The columns of the 'memory_per_fragment' table can be grouped according to their function or purpose as follows:

The 'parent_fq_name' and 'fq_name' columns can be used to identify indexes associated with a table. Similar schema object hierarchy information is available in other 'ndbinfo' tables.

Table and index fragment replicas allocate 'DataMemory' in 32KB pages. These memory pages are managed as listed here:

Each row in an 'NDB' table has a fixed-size part, consisting of a row header, and one or more fixed-size columns. The row may also contain one or more variable-size part references, one or more disk part references, or both. Each row also has a primary key hash index entry (corresponding to the hidden primary key that is part of every 'NDB' table).

From the foregoing we can see that each table fragment and index fragment together allocate the amount of 'DataMemory' calculated as shown here:

 DataMemory =
   (NUMBER_OF_FIXED_PAGES + NUMBER_OF_VAR_PAGES) * 32KB
     + NUMBER_OF_HASH_PAGES * 8KB

Since 'fixed_elem_alloc_bytes' and 'var_elem_alloc_bytes' are always multiples of 32768 bytes, we can further determine that 'NUMBER_OF_FIXED_PAGES = fixed_elem_alloc_bytes / 32768' and 'NUMBER_OF_VAR_PAGES = var_elem_alloc_bytes / 32768'. 'hash_index_alloc_bytes' is always a multiple of 8192 bytes, so 'NUMBER_OF_HASH_PAGES = hash_index_alloc_bytes / 8192'.
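
Expressed as a query against 'memory_per_fragment' (a sketch; the table name 'test/def/t1' is the example table used later in this section), these page counts can be obtained like this:

 SELECT
   fq_name,
   fragment_num,
   fixed_elem_alloc_bytes / 32768 AS fixed_pages,
   var_elem_alloc_bytes / 32768   AS var_pages,
   hash_index_alloc_bytes / 8192  AS hash_pages
 FROM ndbinfo.memory_per_fragment
 WHERE fq_name = 'test/def/t1';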

A fixed size page has an internal header and a number of fixed-size slots, each of which can contain one row's fixed-size part. The size of a given row's fixed size part is schema-dependent, and is provided by the 'fixed_elem_size_bytes' column; the number of fixed-size slots per page can be determined by calculating the total number of slots and the total number of pages, like this:

 FIXED_SLOTS = fixed_elem_count + fixed_elem_free_count

 FIXED_PAGES = fixed_elem_alloc_bytes / 32768

 SLOTS_PER_PAGE = FIXED_SLOTS / FIXED_PAGES

'fixed_elem_count' is in effect the row count for a given table fragment, since each row has one fixed element. 'fixed_elem_free_count' is the total number of free fixed-size slots across the allocated pages. 'fixed_elem_free_bytes' is equal to 'fixed_elem_free_count * fixed_elem_size_bytes'.

A fragment can have any number of fixed-size pages; when the last row on a fixed-size page is deleted, the page is released to the 'DataMemory' page pool. Fixed-size pages can be fragmented, with more pages allocated than is required by the number of fixed-size slots in use. You can check whether this is the case by comparing the pages required to the pages allocated, which you can calculate like this:

 FIXED_PAGES_REQUIRED = 1 + (fixed_elem_count / SLOTS_PER_PAGE)

 FIXED_PAGE_UTILIZATION = FIXED_PAGES_REQUIRED / FIXED_PAGES
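
The same calculation can be performed in SQL, as shown in this sketch (again using the example table from later in this section):

 SELECT
   fq_name,
   fragment_num,
   (1 + fixed_elem_count /
      ((fixed_elem_count + fixed_elem_free_count) /
       (fixed_elem_alloc_bytes / 32768)))
   / (fixed_elem_alloc_bytes / 32768) AS fixed_page_utilization
 FROM ndbinfo.memory_per_fragment
 WHERE fq_name = 'test/def/t1';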

A variable-sized page has an internal header and uses the remaining space to store one or more variable-sized row parts; the number of parts stored depends on the schema and the actual data stored. Since not all schemas or rows have a variable-sized part, 'var_elem_count' can be less than 'fixed_elem_count'. The total free space available on all variable-sized pages in the fragment is shown by the 'var_elem_free_bytes' column; because this space may be spread over multiple pages, it cannot necessarily be used to store an entry of a particular size. Each variable-sized page is reorganized as needed to fit the changing size of variable-sized row parts as they are inserted, updated, and deleted; if a given row part grows too large for the page it is in, it can be moved to a different page.

Variable-sized page utilisation can be calculated as shown here:

 VAR_PAGE_USED_BYTES = var_elem_alloc_bytes - var_elem_free_bytes

 VAR_PAGE_UTILISATION = VAR_PAGE_USED_BYTES / var_elem_alloc_bytes

We can obtain the average variable part size per row like this:

 AVG_ROW_VAR_PART_SIZE = VAR_PAGE_USED_BYTES / fixed_elem_count
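
In SQL, this becomes (a sketch, using the example table from later in this section):

 SELECT
   fq_name,
   fragment_num,
   (var_elem_alloc_bytes - var_elem_free_bytes) / fixed_elem_count
     AS avg_row_var_part_size
 FROM ndbinfo.memory_per_fragment
 WHERE fq_name = 'test/def/t1';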

Secondary unique indexes are implemented internally as independent tables with the following schema:

These tables are distributed and fragmented as normal. This means that their fragment replicas use fixed, variable, and hash index pages as with any other 'NDB' table.

Secondary ordered indexes are fragmented and distributed in the same way as the base table. Ordered index fragments are T-tree structures which maintain a balanced tree containing row references in the order implied by the indexed columns. Since the tree contains references rather than actual data, the T-tree storage cost is not dependent on the size or number of indexed columns, but is rather a function of the number of rows. The tree is constructed using fixed-size node structures, each of which may contain a number of row references; the number of nodes required depends on the number of rows in the table, and the tree structure necessary to represent the ordering. In the 'memory_per_fragment' table, we can see that ordered indexes allocate only fixed-size pages, so as usual the relevant columns from this table are as listed here:

When free space on variable-sized pages becomes fragmented, *note 'OPTIMIZE TABLE': optimize-table. can be used to defragment a table's variable-sized pages; this moves row variable-sized parts between pages so that some whole pages can be freed for re-use.

memory_per_fragment Table: Examples

For the following examples, we create a simple table with three integer columns, the first serving as the primary key, the second having no index, and the third having a unique index, as well as one *note 'VARCHAR': char. column with no indexes, as shown here:

 mysql> CREATE DATABASE IF NOT EXISTS test;
 Query OK, 1 row affected (0.06 sec)

 mysql> USE test;
 Database changed

 mysql> CREATE TABLE t1 (
     ->    c1 BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
     ->    c2 INT,
     ->    c3 INT UNIQUE,
     ->    c4 VARCHAR(100)
     -> )  ENGINE=NDBCLUSTER;
 Query OK, 0 rows affected (0.27 sec)

Following creation of the table, we insert 50,000 rows containing random data; the precise method of generating and inserting these rows makes no practical difference, and we leave accomplishing this as an exercise for the reader.

Getting general information about fragments and memory usage

This query shows general information about memory usage for each fragment:

 mysql> SELECT
     ->   fq_name, node_id, block_instance, fragment_num, fixed_elem_alloc_bytes,
     ->   fixed_elem_free_bytes, fixed_elem_size_bytes, fixed_elem_count,
     ->   fixed_elem_free_count, var_elem_alloc_bytes, var_elem_free_bytes,
     ->   var_elem_count
     -> FROM ndbinfo.memory_per_fragment
     -> WHERE fq_name = "test/def/t1"\G
 *************************** 1. row ***************************
                fq_name: test/def/t1
                node_id: 5
         block_instance: 1
           fragment_num: 0
 fixed_elem_alloc_bytes: 1114112
  fixed_elem_free_bytes: 11836
  fixed_elem_size_bytes: 44
       fixed_elem_count: 24925
  fixed_elem_free_count: 269
   var_elem_alloc_bytes: 1245184
    var_elem_free_bytes: 32552
         var_elem_count: 24925
 *************************** 2. row ***************************
                fq_name: test/def/t1
                node_id: 5
         block_instance: 1
           fragment_num: 1
 fixed_elem_alloc_bytes: 1114112
  fixed_elem_free_bytes: 5236
  fixed_elem_size_bytes: 44
       fixed_elem_count: 25075
  fixed_elem_free_count: 119
   var_elem_alloc_bytes: 1277952
    var_elem_free_bytes: 54232
         var_elem_count: 25075
 *************************** 3. row ***************************
                fq_name: test/def/t1
                node_id: 6
         block_instance: 1
           fragment_num: 0
 fixed_elem_alloc_bytes: 1114112
  fixed_elem_free_bytes: 11836
  fixed_elem_size_bytes: 44
       fixed_elem_count: 24925
  fixed_elem_free_count: 269
   var_elem_alloc_bytes: 1245184
    var_elem_free_bytes: 32552
         var_elem_count: 24925
 *************************** 4. row ***************************
                fq_name: test/def/t1
                node_id: 6
         block_instance: 1
           fragment_num: 1
 fixed_elem_alloc_bytes: 1114112
  fixed_elem_free_bytes: 5236
  fixed_elem_size_bytes: 44
       fixed_elem_count: 25075
  fixed_elem_free_count: 119
   var_elem_alloc_bytes: 1277952
    var_elem_free_bytes: 54232
         var_elem_count: 25075
 4 rows in set (0.12 sec)

Finding a table and its indexes

This query can be used to find a specific table and its indexes:

 mysql> SELECT fq_name
     -> FROM ndbinfo.memory_per_fragment
     -> WHERE fq_name = 'test/def/t1' OR parent_fq_name='test/def/t1'
     -> GROUP BY fq_name;
 +----------------------+
 | fq_name              |
 +----------------------+
 | test/def/t1          |
 | sys/def/13/PRIMARY   |
 | sys/def/13/c3        |
 | sys/def/13/c3$unique |
 +----------------------+
 4 rows in set (0.13 sec)

 mysql> SELECT COUNT(*) FROM t1;
 +----------+
 | COUNT(*) |
 +----------+
 |    50000 |
 +----------+
 1 row in set (0.00 sec)

Finding the memory allocated by schema elements

This query shows the memory allocated by each schema element (in total across all replicas):

 mysql> SELECT
     ->   fq_name AS Name,
     ->   SUM(fixed_elem_alloc_bytes) AS Fixed,
     ->   SUM(var_elem_alloc_bytes) AS Var,
     ->   SUM(hash_index_alloc_bytes) AS Hash,
     ->   SUM(fixed_elem_alloc_bytes+var_elem_alloc_bytes+hash_index_alloc_bytes) AS Total
     -> FROM ndbinfo.memory_per_fragment
     -> WHERE fq_name = 'test/def/t1' OR parent_fq_name='test/def/t1'
     -> GROUP BY fq_name;
 +----------------------+---------+---------+---------+----------+
 | Name                 | Fixed   | Var     | Hash    | Total    |
 +----------------------+---------+---------+---------+----------+
 | test/def/t1          | 4456448 | 5046272 | 1425408 | 10928128 |
 | sys/def/13/PRIMARY   | 1966080 |       0 |       0 |  1966080 |
 | sys/def/13/c3        | 1441792 |       0 |       0 |  1441792 |
 | sys/def/13/c3$unique | 3276800 |       0 | 1425408 |  4702208 |
 +----------------------+---------+---------+---------+----------+
 4 rows in set (0.11 sec)

Finding the memory allocated for a table and all indexes

The sum of memory allocated for the table and all its indexes (in total across all replicas) can be obtained using the query shown here:

 mysql> SELECT
     ->   SUM(fixed_elem_alloc_bytes) AS Fixed,
     ->   SUM(var_elem_alloc_bytes) AS Var,
     ->   SUM(hash_index_alloc_bytes) AS Hash,
     ->   SUM(fixed_elem_alloc_bytes+var_elem_alloc_bytes+hash_index_alloc_bytes) AS Total
     -> FROM ndbinfo.memory_per_fragment
     -> WHERE fq_name = 'test/def/t1' OR parent_fq_name='test/def/t1';
 +----------+---------+---------+----------+
 | Fixed    | Var     | Hash    | Total    |
 +----------+---------+---------+----------+
 | 11141120 | 5046272 | 2850816 | 19038208 |
 +----------+---------+---------+----------+
 1 row in set (0.12 sec)

This is an abbreviated version of the previous query which shows only the total memory used by the table:

 mysql> SELECT
     ->   SUM(fixed_elem_alloc_bytes+var_elem_alloc_bytes+hash_index_alloc_bytes) AS Total
     -> FROM ndbinfo.memory_per_fragment
     -> WHERE fq_name = 'test/def/t1' OR parent_fq_name='test/def/t1';
 +----------+
 | Total    |
 +----------+
 | 19038208 |
 +----------+
 1 row in set (0.12 sec)

Finding the memory allocated per row

The following query shows the total memory allocated per row (across all replicas):

 mysql> SELECT
     ->   SUM(fixed_elem_alloc_bytes+var_elem_alloc_bytes+hash_index_alloc_bytes)
     ->   /
     ->   SUM(fixed_elem_count) AS Total_alloc_per_row
     -> FROM ndbinfo.memory_per_fragment
     -> WHERE fq_name = 'test/def/t1';
 +---------------------+
 | Total_alloc_per_row |
 +---------------------+
 |            109.2813 |
 +---------------------+
 1 row in set (0.12 sec)

Finding the total memory in use per row

To obtain the total memory in use per row (across all replicas), we divide the total memory used by the row count; the row count is the 'fixed_elem_count' for the base table, as shown here:

 mysql> SELECT
     ->   SUM(
     ->     (fixed_elem_alloc_bytes - fixed_elem_free_bytes)
     ->     + (var_elem_alloc_bytes - var_elem_free_bytes)
     ->     + hash_index_alloc_bytes
     ->   )
     ->   /
     ->   SUM(fixed_elem_count)
     ->   AS total_in_use_per_row
     -> FROM ndbinfo.memory_per_fragment
     -> WHERE fq_name = 'test/def/t1';
 +----------------------+
 | total_in_use_per_row |
 +----------------------+
 |             107.2042 |
 +----------------------+
 1 row in set (0.12 sec)

Finding the memory allocated per element

The memory allocated by each schema element (in total across all replicas) can be found using the following query:

 mysql> SELECT
     ->   fq_name AS Name,
     ->   SUM(fixed_elem_alloc_bytes) AS Fixed,
     ->   SUM(var_elem_alloc_bytes) AS Var,
     ->   SUM(hash_index_alloc_bytes) AS Hash,
     ->   SUM(fixed_elem_alloc_bytes + var_elem_alloc_bytes + hash_index_alloc_bytes)
     ->     AS Total_alloc
     -> FROM ndbinfo.memory_per_fragment
     -> WHERE fq_name = 'test/def/t1' OR parent_fq_name='test/def/t1'
     -> GROUP BY fq_name;
 +----------------------+---------+---------+---------+-------------+
 | Name                 | Fixed   | Var     | Hash    | Total_alloc |
 +----------------------+---------+---------+---------+-------------+
 | test/def/t1          | 4456448 | 5046272 | 1425408 |    10928128 |
 | sys/def/13/PRIMARY   | 1966080 |       0 |       0 |     1966080 |
 | sys/def/13/c3        | 1441792 |       0 |       0 |     1441792 |
 | sys/def/13/c3$unique | 3276800 |       0 | 1425408 |     4702208 |
 +----------------------+---------+---------+---------+-------------+
 4 rows in set (0.11 sec)

Finding the average memory allocated per row, by element

To obtain the average memory allocated per row by each schema element (in total across all replicas), we use a subquery to obtain the base table's fixed element count each time, since 'fixed_elem_count' for an index is not necessarily the same as for the base table, as shown here:

 mysql> SELECT
     ->   fq_name AS Name,
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') AS Table_rows,
     ->
     ->   SUM(fixed_elem_alloc_bytes)
     ->   /
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') AS Avg_fixed_alloc,
     ->
     ->   SUM(var_elem_alloc_bytes)
     ->   /
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') as Avg_var_alloc,
     ->
     ->   SUM(hash_index_alloc_bytes)
     ->   /
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') as Avg_hash_alloc,
     ->
     ->   SUM(fixed_elem_alloc_bytes+var_elem_alloc_bytes+hash_index_alloc_bytes)
     ->   /
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') as Avg_total_alloc
     ->
     -> FROM ndbinfo.memory_per_fragment
     -> WHERE fq_name = 'test/def/t1' or parent_fq_name='test/def/t1'
     -> GROUP BY fq_name;
 +----------------------+------------+-----------------+---------------+----------------+-----------------+
 | Name                 | Table_rows | Avg_fixed_alloc | Avg_var_alloc | Avg_hash_alloc | Avg_total_alloc |
 +----------------------+------------+-----------------+---------------+----------------+-----------------+
 | test/def/t1          |     100000 |         44.5645 |       50.4627 |        14.2541 |        109.2813 |
 | sys/def/13/PRIMARY   |     100000 |         19.6608 |        0.0000 |         0.0000 |         19.6608 |
 | sys/def/13/c3        |     100000 |         14.4179 |        0.0000 |         0.0000 |         14.4179 |
 | sys/def/13/c3$unique |     100000 |         32.7680 |        0.0000 |        14.2541 |         47.0221 |
 +----------------------+------------+-----------------+---------------+----------------+-----------------+
 4 rows in set (0.70 sec)

Finding the average memory allocated per row

Average memory allocated per row (in total across all replicas):

 mysql> SELECT
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') AS Table_rows,
     ->
     ->   SUM(fixed_elem_alloc_bytes)
     ->   /
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') AS Avg_fixed_alloc,
     ->
     ->   SUM(var_elem_alloc_bytes)
     ->   /
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') AS Avg_var_alloc,
     ->
     ->   SUM(hash_index_alloc_bytes)
     ->   /
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') AS Avg_hash_alloc,
     ->
     ->   SUM(fixed_elem_alloc_bytes + var_elem_alloc_bytes + hash_index_alloc_bytes)
     ->   /
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') AS Avg_total_alloc
     ->
     -> FROM ndbinfo.memory_per_fragment
     -> WHERE fq_name = 'test/def/t1' OR parent_fq_name='test/def/t1';
 +------------+-----------------+---------------+----------------+-----------------+
 | Table_rows | Avg_fixed_alloc | Avg_var_alloc | Avg_hash_alloc | Avg_total_alloc |
 +------------+-----------------+---------------+----------------+-----------------+
 |     100000 |        111.4112 |       50.4627 |        28.5082 |        190.3821 |
 +------------+-----------------+---------------+----------------+-----------------+
 1 row in set (0.71 sec)

Finding the average memory allocated per row for a table

To get the average amount of memory allocated per row for the entire table across all replicas, we can use the query shown here:

 mysql> SELECT
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') AS table_rows,
     ->
     ->   SUM(fixed_elem_alloc_bytes + var_elem_alloc_bytes + hash_index_alloc_bytes)
     ->   /
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') AS avg_total_alloc
     ->
     -> FROM ndbinfo.memory_per_fragment
     -> WHERE fq_name = 'test/def/t1' OR parent_fq_name='test/def/t1';
 +------------+-----------------+
 | table_rows | avg_total_alloc |
 +------------+-----------------+
 |     100000 |        190.3821 |
 +------------+-----------------+
 1 row in set (0.33 sec)

Finding the memory in use by each schema element

To obtain the memory in use per schema element across all replicas, we need to sum the difference between allocated and free memory for each element, like this:

 mysql> SELECT
     ->   fq_name,
     ->   SUM(fixed_elem_alloc_bytes - fixed_elem_free_bytes) AS fixed_inuse,
     ->   SUM(var_elem_alloc_bytes-var_elem_free_bytes) AS var_inuse,
     ->   SUM(hash_index_alloc_bytes) AS hash,
     ->   SUM(  (fixed_elem_alloc_bytes - fixed_elem_free_bytes)
     ->       + (var_elem_alloc_bytes - var_elem_free_bytes)
     ->       + hash_index_alloc_bytes) AS total_alloc
     -> FROM ndbinfo.memory_per_fragment
     -> WHERE fq_name = 'test/def/t1' OR parent_fq_name='test/def/t1'
     -> GROUP BY fq_name;
 +----------------------+-------------+-----------+---------+-------------+
 | fq_name              | fixed_inuse | var_inuse | hash    | total_alloc |
 +----------------------+-------------+-----------+---------+-------------+
 | test/def/t1          |     4422304 |   4872704 | 1425408 |    10720416 |
 | sys/def/13/PRIMARY   |     1950848 |         0 |       0 |     1950848 |
 | sys/def/13/c3        |     1428736 |         0 |       0 |     1428736 |
 | sys/def/13/c3$unique |     3212800 |         0 | 1425408 |     4638208 |
 +----------------------+-------------+-----------+---------+-------------+
 4 rows in set (0.13 sec)

Finding the average memory in use by each schema element

This query gets the average memory in use per schema element across all replicas:

 mysql> SELECT
     ->   fq_name AS Name,
     ->
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') AS table_rows,
     ->
     ->   SUM(fixed_elem_alloc_bytes - fixed_elem_free_bytes)
     ->   /
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') AS avg_fixed_inuse,
     ->
     ->   SUM(var_elem_alloc_bytes - var_elem_free_bytes)
     ->   /
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') AS avg_var_inuse,
     ->
     ->   SUM(hash_index_alloc_bytes)
     ->   /
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') AS avg_hash,
     ->
     ->   SUM(
     ->       (fixed_elem_alloc_bytes - fixed_elem_free_bytes)
     ->     + (var_elem_alloc_bytes - var_elem_free_bytes) + hash_index_alloc_bytes)
     ->   /
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') AS avg_total_inuse
     ->
     -> FROM ndbinfo.memory_per_fragment
     -> WHERE fq_name = 'test/def/t1' OR parent_fq_name='test/def/t1'
     -> GROUP BY fq_name;
 +----------------------+------------+-----------------+---------------+----------+-----------------+
 | Name                 | table_rows | avg_fixed_inuse | avg_var_inuse | avg_hash | avg_total_inuse |
 +----------------------+------------+-----------------+---------------+----------+-----------------+
 | test/def/t1          |     100000 |         44.2230 |       48.7270 |  14.2541 |        107.2042 |
 | sys/def/13/PRIMARY   |     100000 |         19.5085 |        0.0000 |   0.0000 |         19.5085 |
 | sys/def/13/c3        |     100000 |         14.2874 |        0.0000 |   0.0000 |         14.2874 |
 | sys/def/13/c3$unique |     100000 |         32.1280 |        0.0000 |  14.2541 |         46.3821 |
 +----------------------+------------+-----------------+---------------+----------+-----------------+
 4 rows in set (0.72 sec)

Finding the average memory in use per row, by element

This query gets the average memory in use per row, by element, across all replicas:

 mysql> SELECT
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') AS table_rows,
     ->
     ->   SUM(fixed_elem_alloc_bytes - fixed_elem_free_bytes)
     ->   /
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') AS avg_fixed_inuse,
     ->
     ->   SUM(var_elem_alloc_bytes - var_elem_free_bytes)
     ->   /
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') AS avg_var_inuse,
     ->
     ->   SUM(hash_index_alloc_bytes)
     ->   /
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') AS avg_hash,
     ->
     ->   SUM(
     ->     (fixed_elem_alloc_bytes - fixed_elem_free_bytes)
     ->     + (var_elem_alloc_bytes - var_elem_free_bytes)
     ->     + hash_index_alloc_bytes)
     ->   /
     ->   ( SELECT SUM(fixed_elem_count)
     ->     FROM ndbinfo.memory_per_fragment
     ->     WHERE fq_name='test/def/t1') AS avg_total_inuse
     ->
     -> FROM ndbinfo.memory_per_fragment
     -> WHERE fq_name = 'test/def/t1' OR parent_fq_name='test/def/t1';
 +------------+-----------------+---------------+----------+-----------------+
 | table_rows | avg_fixed_inuse | avg_var_inuse | avg_hash | avg_total_inuse |
 +------------+-----------------+---------------+----------+-----------------+
 |     100000 |        110.1469 |       48.7270 |  28.5082 |        187.3821 |
 +------------+-----------------+---------------+----------+-----------------+
 1 row in set (0.68 sec)

Finding the total average memory in use per row

This query obtains the total average memory in use, per row:

 mysql> SELECT
     ->   SUM(
     ->     (fixed_elem_alloc_bytes - fixed_elem_free_bytes)
     ->     + (var_elem_alloc_bytes - var_elem_free_bytes)
     ->     + hash_index_alloc_bytes)
     ->   /
     ->   ( SELECT
     ->       SUM(fixed_elem_count)
     ->       FROM ndbinfo.memory_per_fragment
     ->       WHERE fq_name='test/def/t1') AS avg_total_in_use
     -> FROM ndbinfo.memory_per_fragment
     -> WHERE fq_name = 'test/def/t1' OR parent_fq_name='test/def/t1';
 +------------------+
 | avg_total_in_use |
 +------------------+
 |         187.3821 |
 +------------------+
 1 row in set (0.24 sec)

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-nodes, Next: mysql-cluster-ndbinfo-operations-per-fragment, Prev: mysql-cluster-ndbinfo-memory-per-fragment, Up: mysql-cluster-ndbinfo

21.6.15.28 The ndbinfo nodes Table ..................................

This table contains information on the status of data nodes. For each data node that is running in the cluster, a corresponding row in this table provides the node's node ID, status, and uptime. For nodes that are starting, it also shows the current start phase.

The 'nodes' table contains the following columns:

Notes

The 'uptime' column shows the time in seconds that this node has been running since it was last started or restarted. This is a *note 'BIGINT': integer-types. value. This figure includes the time actually needed to start the node; in other words, this counter starts running the moment that *note 'ndbd': mysql-cluster-programs-ndbd. or *note 'ndbmtd': mysql-cluster-programs-ndbmtd. is first invoked; thus, even for a node that has not yet finished starting, 'uptime' may show a nonzero value.

The 'status' column shows the node's current status. This is one of: 'NOTHING', 'CMVMI', 'STARTING', 'STARTED', 'SINGLEUSER', 'STOPPING_1', 'STOPPING_2', 'STOPPING_3', or 'STOPPING_4'. When the status is 'STARTING', you can see the current start phase in the 'start_phase' column (see later in this section). 'SINGLEUSER' is displayed in the 'status' column for all data nodes when the cluster is in single user mode (see *note mysql-cluster-single-user-mode::). Seeing one of the 'STOPPING' states does not necessarily mean that the node is shutting down; it may instead mean that the node is entering a new state. For example, if you put the cluster in single user mode, you can sometimes see data nodes report their state briefly as 'STOPPING_2' before the status changes to 'SINGLEUSER'.

The 'start_phase' column uses the same range of values as those used in the output of the *note 'ndb_mgm': mysql-cluster-programs-ndb-mgm. client 'NODE_ID STATUS' command (see *note mysql-cluster-mgm-client-commands::). If the node is not currently starting, then this column shows '0'. For a listing of NDB Cluster start phases with descriptions, see *note mysql-cluster-start-phases::.

The 'config_generation' column shows which version of the cluster configuration is in effect on each data node. This can be useful when performing a rolling restart of the cluster in order to make changes in configuration parameters. For example, from the output of the following *note 'SELECT': select. statement, you can see that node 3 is not yet using the latest version of the cluster configuration ('6') although nodes 1, 2, and 4 are doing so:

 mysql> USE ndbinfo;
 Database changed
 mysql> SELECT * FROM nodes;
 +---------+--------+---------+-------------+-------------------+
 | node_id | uptime | status  | start_phase | config_generation |
 +---------+--------+---------+-------------+-------------------+
 |       1 |  10462 | STARTED |           0 |                 6 |
 |       2 |  10460 | STARTED |           0 |                 6 |
 |       3 |  10457 | STARTED |           0 |                 5 |
 |       4 |  10455 | STARTED |           0 |                 6 |
 +---------+--------+---------+-------------+-------------------+
 4 rows in set (0.04 sec)

Therefore, for the case just shown, you should restart node 3 to complete the rolling restart of the cluster.
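
A query such as the following (a sketch) lists any data nodes that are still running an older configuration generation than the most recent one seen in the cluster:

 SELECT node_id, config_generation
 FROM ndbinfo.nodes
 WHERE config_generation <
   (SELECT MAX(config_generation) FROM ndbinfo.nodes);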

Nodes that are stopped are not accounted for in this table. Suppose that you have an NDB Cluster with 4 data nodes (node IDs 1, 2, 3, and 4), all running normally; in this case, the table contains 4 rows, 1 for each data node:

 mysql> USE ndbinfo;
 Database changed
 mysql> SELECT * FROM nodes;
 +---------+--------+---------+-------------+-------------------+
 | node_id | uptime | status  | start_phase | config_generation |
 +---------+--------+---------+-------------+-------------------+
 |       1 |  11776 | STARTED |           0 |                 6 |
 |       2 |  11774 | STARTED |           0 |                 6 |
 |       3 |  11771 | STARTED |           0 |                 6 |
 |       4 |  11769 | STARTED |           0 |                 6 |
 +---------+--------+---------+-------------+-------------------+
 4 rows in set (0.04 sec)

If you shut down one of the nodes, only the nodes that are still running are represented in the output of this *note 'SELECT': select. statement, as shown here:

 ndb_mgm> 2 STOP
 Node 2: Node shutdown initiated
 Node 2: Node shutdown completed.
 Node 2 has shutdown.

 mysql> SELECT * FROM nodes;
 +---------+--------+---------+-------------+-------------------+
 | node_id | uptime | status  | start_phase | config_generation |
 +---------+--------+---------+-------------+-------------------+
 |       1 |  11807 | STARTED |           0 |                 6 |
 |       3 |  11802 | STARTED |           0 |                 6 |
 |       4 |  11800 | STARTED |           0 |                 6 |
 +---------+--------+---------+-------------+-------------------+
 3 rows in set (0.02 sec)

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-operations-per-fragment, Next: mysql-cluster-ndbinfo-processes, Prev: mysql-cluster-ndbinfo-nodes, Up: mysql-cluster-ndbinfo

21.6.15.29 The ndbinfo operations_per_fragment Table ....................................................

The 'operations_per_fragment' table provides information about the operations performed on individual fragments and fragment replicas, as well as about some of the results from these operations.

The 'operations_per_fragment' table contains the following columns:

Notes

The 'fq_name' column contains the fully qualified name of the schema object to which this fragment replica belongs. This currently has the following formats:

The '$unique' suffix shown for unique indexes is added by *note 'mysqld': mysqld.; for an index created by a different NDB API client application, this may differ, or not be present.

The syntax just shown for fully qualified object names is an internal interface which is subject to change in future releases.

Consider a table 't1' created and modified by the following SQL statements:

 CREATE DATABASE mydb;

 USE mydb;

 CREATE TABLE t1 (
   a INT NOT NULL,
   b INT NOT NULL,
   t TEXT NOT NULL,
   PRIMARY KEY (b)
 ) ENGINE=ndbcluster;

 CREATE UNIQUE INDEX ix1 ON t1(b) USING HASH;

If 't1' is assigned table ID 11, this yields the 'fq_name' values shown here:

For indexes or 'BLOB' tables, the 'parent_fq_name' column contains the 'fq_name' of the corresponding base table. For base tables, this column is always 'NULL'.

The 'type' column shows the schema object type used for this fragment, which can take any one of the values 'System table', 'User table', 'Unique hash index', or 'Ordered index'. 'BLOB' tables are shown as 'User table'.

The 'table_id' column value is unique at any given time, but can be reused if the corresponding object has been deleted. The same ID can be seen using the *note 'ndb_show_tables': mysql-cluster-programs-ndb-show-tables. utility.

The 'block_instance' column shows which LDM instance this fragment replica belongs to. You can use this to obtain information about specific threads from the *note 'threadblocks': mysql-cluster-ndbinfo-threadblocks. table. The first such instance is always numbered 0.

Since there are typically two replicas, and assuming that this is so, each 'fragment_num' value should appear twice in the table, on two different data nodes from the same node group.

Since 'NDB' does not use single-key access for ordered indexes, the counts for 'tot_key_reads', 'tot_key_inserts', 'tot_key_updates', 'tot_key_writes', and 'tot_key_deletes' are not incremented by ordered index operations.

Note:

When using 'tot_key_writes', you should keep in mind that a write operation in this context updates the row if the key exists, and inserts a new row otherwise. (One use of this is in the 'NDB' implementation of the *note 'REPLACE': replace. SQL statement.)

The 'tot_key_refs' column shows the number of key operations refused by the LDM. Generally, such a refusal is due to a duplicate key (inserts), a 'Key not found' error (updates, deletes, and reads), or rejection of the operation by an interpreted program used as a predicate on the row matching the key.

The 'attrinfo' and 'keyinfo' attributes counted by the 'tot_key_attrinfo_bytes' and 'tot_key_keyinfo_bytes' columns are attributes of an 'LQHKEYREQ' signal (see The NDB Communication Protocol (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-ndb-protocol.html)) used to initiate a key operation by the LDM. An 'attrinfo' typically contains tuple field values (inserts and updates) or projection specifications (for reads); 'keyinfo' contains the primary or unique key needed to locate a given tuple in this schema object.

The value shown by 'tot_frag_scans' includes both full scans (that examine every row) and scans of subsets. Unique indexes and 'BLOB' tables are never scanned, so this value, like other scan-related counts, is 0 for fragment replicas of these.

'tot_scan_rows_examined' may display less than the total number of rows in a given fragment replica, since ordered index scans can be limited by bounds. In addition, a client may choose to end a scan before all potentially matching rows have been examined; this occurs when using an SQL statement containing a 'LIMIT' or 'EXISTS' clause, for example. 'tot_scan_rows_returned' is always less than or equal to 'tot_scan_rows_examined'.

'tot_scan_bytes_returned' includes, in the case of pushed joins, projections returned to the 'DBSPJ' (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks-dbspj.html) block in the NDB kernel.

'tot_qd_frag_scans' can be affected by the setting for the 'MaxParallelScansPerFragment' data node configuration parameter, which limits the number of scans that may execute concurrently on a single fragment replica.
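
For example, a query like this sketch can help identify fragment replicas on which scans are frequently queued, which may suggest raising 'MaxParallelScansPerFragment':

 SELECT
   fq_name,
   fragment_num,
   tot_frag_scans,
   tot_qd_frag_scans,
   100 * tot_qd_frag_scans / tot_frag_scans AS pct_queued
 FROM ndbinfo.operations_per_fragment
 WHERE tot_frag_scans > 0
 ORDER BY pct_queued DESC;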

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-processes, Next: mysql-cluster-ndbinfo-resources, Prev: mysql-cluster-ndbinfo-operations-per-fragment, Up: mysql-cluster-ndbinfo

21.6.15.30 The ndbinfo processes Table ......................................

This table contains information about NDB Cluster node processes; each node is represented by a row in the table. Only nodes that are connected to the cluster are shown in this table. You can obtain information about nodes that are configured but not connected to the cluster from the *note 'nodes': mysql-cluster-ndbinfo-nodes. and *note 'config_nodes': mysql-cluster-ndbinfo-config-nodes. tables.

The 'processes' table contains the following columns:

Notes

'node_id' is the ID assigned to this node in the cluster.

The 'node_type' column displays one of the following three values:

For an executable shipped with the NDB Cluster distribution, 'node_version' shows the two-part MySQL NDB Cluster version string, such as '5.7.44-ndb-7.5.36' or '5.7.44-ndb-7.6.34', that it was compiled with. See *note mysql-cluster-version-strings::, for more information.

'process_id' is the node executable's process ID as shown by the host operating system using a process display application such as 'top' on Linux, or the Task Manager on Windows platforms.

'angel_process_id' is the system process ID for the node's angel process, which ensures that a data node or SQL node is automatically restarted in case of failure. For management nodes and API nodes other than SQL nodes, the value of this column is 'NULL'.

The 'process_name' column shows the name of the running executable. For management nodes, this is 'ndb_mgmd'. For data nodes, this is 'ndbd' (single-threaded) or 'ndbmtd' (multithreaded). For SQL nodes, this is 'mysqld'. For other types of API nodes, it is the name of the executable program connected to the cluster; NDB API applications can set a custom value for this using 'Ndb_cluster_connection::set_name()' (https://dev.mysql.com/doc/ndbapi/en/ndb-ndb-cluster-connection.html#ndb-ndb-cluster-connection-set-name).

'service_URI' shows the service network address. For management nodes and data nodes, the scheme used is 'ndb://'. For SQL nodes, this is 'mysql://'. By default, API nodes other than SQL nodes use 'ndb://' for the scheme; NDB API applications can set this to a custom value using 'Ndb_cluster_connection::set_service_uri()'. Regardless of the node type, the scheme is followed by the IP address used by the NDB transporter for the node in question. For management nodes and SQL nodes, this address includes the port number (usually 1186 for management nodes and 3306 for SQL nodes). If the SQL node was started with the 'bind_address' system variable set, this address is used instead of the transporter address, unless the bind address is set to '*', '0.0.0.0', or '::'.

Additional path information may be included in the 'service_URI' value for an SQL node reflecting various configuration options. For example, 'mysql://198.51.100.3/tmp/mysql.sock' indicates that the SQL node was started with the 'skip_networking' system variable enabled, and 'mysql://198.51.100.3:3306/?server-id=1' shows that replication is enabled for this SQL node.
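
A simple query such as this sketch shows one row for each connected node, together with the process information described above:

 SELECT node_id, node_type, process_name, process_id, service_URI
 FROM ndbinfo.processes
 ORDER BY node_id;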

The 'processes' table was added in NDB 7.5.7.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-resources, Next: mysql-cluster-ndbinfo-restart-info, Prev: mysql-cluster-ndbinfo-processes, Up: mysql-cluster-ndbinfo

21.6.15.31 The ndbinfo resources Table ......................................

This table provides information about data node resource availability and usage.

These resources are sometimes known as super-pools.

The 'resources' table contains the following columns:

Notes

The 'resource_name' can be any one of the names shown in the following table:

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-restart-info, Next: mysql-cluster-ndbinfo-server-locks, Prev: mysql-cluster-ndbinfo-resources, Up: mysql-cluster-ndbinfo

21.6.15.32 The ndbinfo restart_info Table .........................................

The 'restart_info' table contains information about node restart operations. Each entry in the table corresponds to a node restart status report in real time from a data node with the given node ID. Only the most recent report for any given node is shown.

The 'restart_info' table contains the following columns:

Notes

The following list contains values defined for the 'node_restart_status_int' column with their internal status names (in parentheses), and the corresponding messages shown in the 'node_restart_status' column:

Status numbers 0 through 12 apply on master nodes only; the remainder of those shown in the table apply to all restarting data nodes. Status numbers 13 and 14 define node failure states; 20 and 21 occur when no information about the restart of a given node is available.
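
For example, the current restart status reported by each data node can be read with a query like this sketch (it assumes the 'node_id', 'node_restart_status', and 'node_restart_status_int' columns described above):

 SELECT node_id, node_restart_status, node_restart_status_int
 FROM ndbinfo.restart_info;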

See also *note mysql-cluster-start-phases::.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-server-locks, Next: mysql-cluster-ndbinfo-server-operations, Prev: mysql-cluster-ndbinfo-restart-info, Up: mysql-cluster-ndbinfo

21.6.15.33 The ndbinfo server_locks Table .........................................

The 'server_locks' table is similar in structure to the 'cluster_locks' table, and provides a subset of the information found in the latter table, but which is specific to the SQL node (MySQL server) where it resides. (The 'cluster_locks' table provides information about all locks in the cluster.) More precisely, 'server_locks' contains information about locks requested by threads belonging to the current *note 'mysqld': mysqld. instance, and serves as a companion table to *note 'server_operations': mysql-cluster-ndbinfo-server-operations. This may be useful for correlating locking patterns with specific MySQL user sessions, queries, or use cases.

The 'server_locks' table contains the following columns:

Notes

The 'mysql_connection_id' column shows the MySQL connection or thread ID as shown by *note 'SHOW PROCESSLIST': show-processlist.

'block_instance' refers to an instance of a kernel block. Together with the block name, this number can be used to look up a given instance in the *note 'threadblocks': mysql-cluster-ndbinfo-threadblocks. table.

The 'tableid' is assigned to the table by 'NDB'; the same ID is used for this table in other 'ndbinfo' tables, as well as in the output of *note 'ndb_show_tables': mysql-cluster-programs-ndb-show-tables.

The transaction ID shown in the 'transid' column is the identifier generated by the NDB API for the transaction requesting or holding the current lock.

The 'mode' column shows the lock mode, which is always one of 'S' (shared lock) or 'X' (exclusive lock). If a transaction has an exclusive lock on a given row, all other locks on that row have the same transaction ID.

The 'state' column shows the lock state. Its value is always one of 'H' (holding) or 'W' (waiting). A waiting lock request waits for a lock held by a different transaction.

The 'detail' column indicates whether this lock is the first holding lock in the affected row's lock queue, in which case it contains a '*' (asterisk character); otherwise, this column is empty. This information can be used to help identify the unique entries in a list of lock requests.

The 'op' column shows the type of operation requesting the lock. This is always one of the values 'READ', 'INSERT', 'UPDATE', 'DELETE', 'SCAN', or 'REFRESH'.

The 'duration_millis' column shows the number of milliseconds for which this lock request has been waiting or holding the lock. This is reset to 0 when a lock is granted for a waiting request.

The lock ID ('lockid' column) is unique to this node and block instance.

If the 'state' column's value is 'W', this lock is waiting to be granted, and the 'waiting_for' column shows the lock ID of the lock object this request is waiting for. Otherwise, 'waiting_for' is empty. 'waiting_for' can refer only to locks on the same row (as identified by 'node_id', 'block_instance', 'tableid', 'fragmentid', and 'rowid').
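
For example, a query such as this minimal sketch pairs each waiting lock request on this SQL node with the holding lock it waits for, using the row identity and lock ID columns described in these notes:

 -- Minimal sketch: pair each waiting request with the lock it waits
 -- for (column usage as described in the notes above)
 SELECT w.mysql_connection_id AS waiter_conn,
        h.mysql_connection_id AS holder_conn,
        w.tableid, w.rowid, w.duration_millis
   FROM ndbinfo.server_locks AS w
   JOIN ndbinfo.server_locks AS h
     ON h.lockid = w.waiting_for
    AND h.node_id = w.node_id
    AND h.block_instance = w.block_instance
  WHERE w.state = 'W';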

The 'server_locks' table was added in NDB 7.5.3.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-server-operations, Next: mysql-cluster-ndbinfo-server-transactions, Prev: mysql-cluster-ndbinfo-server-locks, Up: mysql-cluster-ndbinfo

21.6.15.34 The ndbinfo server_operations Table ..............................................

The 'server_operations' table contains entries for all ongoing *note 'NDB': mysql-cluster. operations in which the current SQL node (MySQL Server) is involved. It is effectively a subset of the *note 'cluster_operations': mysql-cluster-ndbinfo-cluster-operations. table, in which operations for other SQL and API nodes are not shown.

The 'server_operations' table contains the following columns:

Notes

The 'mysql_connection_id' is the same as the connection or session ID shown in the output of *note 'SHOW PROCESSLIST': show-processlist. It is obtained from the 'INFORMATION_SCHEMA' table *note 'NDB_TRANSID_MYSQL_CONNECTION_MAP': information-schema-ndb-transid-mysql-connection-map-table.

'block_instance' refers to an instance of a kernel block. Together with the block name, this number can be used to look up a given instance in the *note 'threadblocks': mysql-cluster-ndbinfo-threadblocks. table.

The transaction ID ('transid') is a unique 64-bit number which can be obtained using the NDB API's 'getTransactionId()' (https://dev.mysql.com/doc/ndbapi/en/ndb-ndbtransaction.html#ndb-ndbtransaction-gettransactionid) method. (Currently, the MySQL Server does not expose the NDB API transaction ID of an ongoing transaction.)

The 'operation_type' column can take any one of the values 'READ', 'READ-SH', 'READ-EX', 'INSERT', 'UPDATE', 'DELETE', 'WRITE', 'UNLOCK', 'REFRESH', 'SCAN', 'SCAN-SH', 'SCAN-EX', or ''.

The 'state' column can have any one of the values 'ABORT_QUEUED', 'ABORT_STOPPED', 'COMMITTED', 'COMMIT_QUEUED', 'COMMIT_STOPPED', 'COPY_CLOSE_STOPPED', 'COPY_FIRST_STOPPED', 'COPY_STOPPED', 'COPY_TUPKEY', 'IDLE', 'LOG_ABORT_QUEUED', 'LOG_COMMIT_QUEUED', 'LOG_COMMIT_QUEUED_WAIT_SIGNAL', 'LOG_COMMIT_WRITTEN', 'LOG_COMMIT_WRITTEN_WAIT_SIGNAL', 'LOG_QUEUED', 'PREPARED', 'PREPARED_RECEIVED_COMMIT', 'SCAN_CHECK_STOPPED', 'SCAN_CLOSE_STOPPED', 'SCAN_FIRST_STOPPED', 'SCAN_RELEASE_STOPPED', 'SCAN_STATE_USED', 'SCAN_STOPPED', 'SCAN_TUPKEY', 'STOPPED', 'TC_NOT_CONNECTED', 'WAIT_ACC', 'WAIT_ACC_ABORT', 'WAIT_AI_AFTER_ABORT', 'WAIT_ATTR', 'WAIT_SCAN_AI', 'WAIT_TUP', 'WAIT_TUPKEYINFO', 'WAIT_TUP_COMMIT', or 'WAIT_TUP_TO_ABORT'. (If the MySQL Server is running with 'ndbinfo_show_hidden' enabled, you can view this list of states by selecting from the 'ndb$dblqh_tcconnect_state' table, which is normally hidden.)

You can obtain the name of an 'NDB' table from its table ID by checking the output of *note 'ndb_show_tables': mysql-cluster-programs-ndb-show-tables.

The 'fragid' is the same as the partition number seen in the output of *note 'ndb_desc': mysql-cluster-programs-ndb-desc. '--extra-partition-info' (short form '-p').

In 'client_node_id' and 'client_block_ref', 'client' refers to an NDB Cluster API or SQL node (that is, an NDB API client or a MySQL Server attached to the cluster).

The 'block_instance' and 'tc_block_instance' columns provide NDB kernel block instance numbers. You can use these to obtain information about specific threads from the *note 'threadblocks': mysql-cluster-ndbinfo-threadblocks. table.
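
For example, a query such as this minimal sketch lists the ongoing operations belonging to the current MySQL session, using 'CONNECTION_ID()' to match 'mysql_connection_id'; the remaining column names are assumptions to be verified against this table's column list:

 -- Minimal sketch: remaining column names are assumptions
 SELECT node_id, block_instance, transid, operation_type, state
   FROM ndbinfo.server_operations
  WHERE mysql_connection_id = CONNECTION_ID();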

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-server-transactions, Next: mysql-cluster-ndbinfo-table-distribution-status, Prev: mysql-cluster-ndbinfo-server-operations, Up: mysql-cluster-ndbinfo

21.6.15.35 The ndbinfo server_transactions Table ................................................

The 'server_transactions' table is a subset of the *note 'cluster_transactions': mysql-cluster-ndbinfo-cluster-transactions. table, but includes only those transactions in which the current SQL node (MySQL Server) is a participant, while including the relevant connection IDs.

The 'server_transactions' table contains the following columns:

Notes

The 'mysql_connection_id' is the same as the connection or session ID shown in the output of *note 'SHOW PROCESSLIST': show-processlist. It is obtained from the 'INFORMATION_SCHEMA' table *note 'NDB_TRANSID_MYSQL_CONNECTION_MAP': information-schema-ndb-transid-mysql-connection-map-table.

'block_instance' refers to an instance of a kernel block. Together with the block name, this number can be used to look up a given instance in the *note 'threadblocks': mysql-cluster-ndbinfo-threadblocks. table.

The transaction ID ('transid') is a unique 64-bit number which can be obtained using the NDB API's 'getTransactionId()' (https://dev.mysql.com/doc/ndbapi/en/ndb-ndbtransaction.html#ndb-ndbtransaction-gettransactionid) method. (Currently, the MySQL Server does not expose the NDB API transaction ID of an ongoing transaction.)

The 'state' column can have any one of the values 'CS_ABORTING', 'CS_COMMITTING', 'CS_COMMIT_SENT', 'CS_COMPLETE_SENT', 'CS_COMPLETING', 'CS_CONNECTED', 'CS_DISCONNECTED', 'CS_FAIL_ABORTED', 'CS_FAIL_ABORTING', 'CS_FAIL_COMMITTED', 'CS_FAIL_COMMITTING', 'CS_FAIL_COMPLETED', 'CS_FAIL_PREPARED', 'CS_PREPARE_TO_COMMIT', 'CS_RECEIVING', 'CS_REC_COMMITTING', 'CS_RESTART', 'CS_SEND_FIRE_TRIG_REQ', 'CS_STARTED', 'CS_START_COMMITTING', 'CS_START_SCAN', 'CS_WAIT_ABORT_CONF', 'CS_WAIT_COMMIT_CONF', 'CS_WAIT_COMPLETE_CONF', or 'CS_WAIT_FIRE_TRIG_REQ'. (If the MySQL Server is running with 'ndbinfo_show_hidden' enabled, you can view this list of states by selecting from the 'ndb$dbtc_apiconnect_state' table, which is normally hidden.)

In 'client_node_id' and 'client_block_ref', 'client' refers to an NDB Cluster API or SQL node (that is, an NDB API client or a MySQL Server attached to the cluster).

The 'block_instance' column provides the 'DBTC' (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks-dbtc.html) kernel block instance number. You can use this to obtain information about specific threads from the *note 'threadblocks': mysql-cluster-ndbinfo-threadblocks. table.
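
For example, a query such as this minimal sketch lists the transactions belonging to the current MySQL session, using 'CONNECTION_ID()' to match 'mysql_connection_id'; the remaining column names are assumptions to be verified against this table's column list:

 -- Minimal sketch: remaining column names are assumptions
 SELECT node_id, block_instance, transid, state
   FROM ndbinfo.server_transactions
  WHERE mysql_connection_id = CONNECTION_ID();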

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-table-distribution-status, Next: mysql-cluster-ndbinfo-table-fragments, Prev: mysql-cluster-ndbinfo-server-transactions, Up: mysql-cluster-ndbinfo

21.6.15.36 The ndbinfo table_distribution_status Table ......................................................

The 'table_distribution_status' table provides information about the progress of table distribution for 'NDB' tables.

The 'table_distribution_status' table contains the following columns:

Notes

The 'table_distribution_status' table was added in NDB 7.5.4.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-table-fragments, Next: mysql-cluster-ndbinfo-table-info, Prev: mysql-cluster-ndbinfo-table-distribution-status, Up: mysql-cluster-ndbinfo

21.6.15.37 The ndbinfo table_fragments Table ............................................

The 'table_fragments' table provides information about the fragmentation, partitioning, distribution, and (internal) replication of 'NDB' tables.

The 'table_fragments' table contains the following columns:

Notes

The 'table_fragments' table was added in NDB 7.5.4.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-table-info, Next: mysql-cluster-ndbinfo-table-replicas, Prev: mysql-cluster-ndbinfo-table-fragments, Up: mysql-cluster-ndbinfo

21.6.15.38 The ndbinfo table_info Table .......................................

The 'table_info' table provides information about logging, checkpointing, distribution, and storage options in effect for individual 'NDB' tables.

The 'table_info' table contains the following columns:

Notes

The 'table_info' table was added in NDB 7.5.4.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-table-replicas, Next: mysql-cluster-ndbinfo-tc-time-track-stats, Prev: mysql-cluster-ndbinfo-table-info, Up: mysql-cluster-ndbinfo

21.6.15.39 The ndbinfo table_replicas Table ...........................................

The 'table_replicas' table provides information about the copying, distribution, and checkpointing of 'NDB' table fragments and fragment replicas.

The 'table_replicas' table contains the following columns:

Notes

The 'table_replicas' table was added in NDB 7.5.4.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-tc-time-track-stats, Next: mysql-cluster-ndbinfo-threadblocks, Prev: mysql-cluster-ndbinfo-table-replicas, Up: mysql-cluster-ndbinfo

21.6.15.40 The ndbinfo tc_time_track_stats Table ................................................

The 'tc_time_track_stats' table provides time-tracking information obtained from the 'DBTC' (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks-dbtc.html) block (TC) instances in the data nodes, through which API nodes access 'NDB'. Each TC instance tracks latencies for a set of activities it undertakes on behalf of API nodes or other data nodes; these activities include transactions, transaction errors, key reads, key writes, unique index operations, failed key operations of any type, scans, failed scans, fragment scans, and failed fragment scans.

A set of counters is maintained for each activity, each counter covering a range of latencies less than or equal to an upper bound. At the conclusion of each activity, its latency is determined and the appropriate counter incremented. 'tc_time_track_stats' presents this information as rows, with a row for each instance of the following:

Notes

Each row contains a value for each activity type. This is the number of times that this activity occurred with a latency within the range specified by the row (that is, where the latency does not exceed the upper bound).

The 'tc_time_track_stats' table contains the following columns:

The 'block_instance' column provides the 'DBTC' (https://dev.mysql.com/doc/ndb-internals/en/ndb-internals-kernel-blocks-dbtc.html) kernel block instance number. You can use this together with the block name to obtain information about specific threads from the *note 'threadblocks': mysql-cluster-ndbinfo-threadblocks. table.
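
For example, a latency histogram for transactions can be read with a query such as this minimal sketch; the 'upper_bound' and 'transactions' column names are assumptions to be verified against this table's column list:

 -- Minimal sketch: 'upper_bound' and 'transactions' are assumed
 -- column names; verify against this table's definition
 SELECT node_id, block_instance, upper_bound, transactions
   FROM ndbinfo.tc_time_track_stats
  WHERE transactions > 0
  ORDER BY node_id, block_instance, upper_bound;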

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-threadblocks, Next: mysql-cluster-ndbinfo-threads, Prev: mysql-cluster-ndbinfo-tc-time-track-stats, Up: mysql-cluster-ndbinfo

21.6.15.41 The ndbinfo threadblocks Table .........................................

The 'threadblocks' table associates data nodes, threads, and instances of 'NDB' kernel blocks.

The 'threadblocks' table contains the following columns:

Notes

The value of the 'block_name' in this table is one of the values found in the 'block_name' column when selecting from the *note 'ndbinfo.blocks': mysql-cluster-ndbinfo-blocks. table. Although the list of possible values is static for a given NDB Cluster release, the list may vary between releases.

The 'block_instance' column provides the kernel block instance number.
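
For example, a query such as this minimal sketch locates every 'DBLQH' instance and the thread hosting it on each data node:

 -- Minimal sketch: find the thread hosting each DBLQH instance
 SELECT node_id, thr_no, block_instance
   FROM ndbinfo.threadblocks
  WHERE block_name = 'DBLQH'
  ORDER BY node_id, block_instance;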

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-threads, Next: mysql-cluster-ndbinfo-threadstat, Prev: mysql-cluster-ndbinfo-threadblocks, Up: mysql-cluster-ndbinfo

21.6.15.42 The ndbinfo threads Table ....................................

The 'threads' table provides information about threads running in the 'NDB' kernel.

The 'threads' table contains the following columns:

Notes

Sample output from a 2-node example cluster, including thread descriptions, is shown here:

 mysql> SELECT * FROM threads;
 +---------+--------+-------------+------------------------------------------------------------------+
 | node_id | thr_no | thread_name | thread_description                                               |
 +---------+--------+-------------+------------------------------------------------------------------+
 |       5 |      0 | main        | main thread, schema and distribution handling                    |
 |       5 |      1 | rep         | rep thread, asynch replication and proxy block handling          |
 |       5 |      2 | ldm         | ldm thread, handling a set of data partitions                    |
 |       5 |      3 | recv        | receive thread, performing receive and polling for new receives  |
 |       6 |      0 | main        | main thread, schema and distribution handling                    |
 |       6 |      1 | rep         | rep thread, asynch replication and proxy block handling          |
 |       6 |      2 | ldm         | ldm thread, handling a set of data partitions                    |
 |       6 |      3 | recv        | receive thread, performing receive and polling for new receives  |
 +---------+--------+-------------+------------------------------------------------------------------+
 8 rows in set (0.01 sec)

This table was added in NDB 7.5.2.

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-threadstat, Next: mysql-cluster-ndbinfo-transporters, Prev: mysql-cluster-ndbinfo-threads, Up: mysql-cluster-ndbinfo

21.6.15.43 The ndbinfo threadstat Table .......................................

The 'threadstat' table provides a rough snapshot of statistics for threads running in the 'NDB' kernel.

The 'threadstat' table contains the following columns:

Notes

'os_time' uses the system 'gettimeofday()' call.

The values of the 'os_ru_utime', 'os_ru_stime', 'os_ru_minflt', 'os_ru_majflt', 'os_ru_nvcsw', and 'os_ru_nivcsw' columns are obtained using the system 'getrusage()' call, or the equivalent.

Since this table contains counts taken at a given point in time, for best results it is necessary to query this table periodically and store the results in an intermediate table or tables. The MySQL Server's Event Scheduler can be employed to automate such monitoring. For more information, see *note event-scheduler::.
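
A minimal sketch of such automation is shown here; it assumes a 'monitor' database created for this purpose (the name is hypothetical), and requires the Event Scheduler to be enabled:

 -- Minimal sketch: 'monitor' is a hypothetical database created for
 -- this purpose; the Event Scheduler must be enabled
 -- (event_scheduler=ON)
 CREATE TABLE monitor.threadstat_history
     AS SELECT NOW() AS sampled_at, t.* FROM ndbinfo.threadstat AS t;

 CREATE EVENT monitor.sample_threadstat
     ON SCHEDULE EVERY 10 SECOND
     DO INSERT INTO monitor.threadstat_history
        SELECT NOW(), t.* FROM ndbinfo.threadstat AS t;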

 File: manual.info.tmp, Node: mysql-cluster-ndbinfo-transporters, Prev: mysql-cluster-ndbinfo-threadstat, Up: mysql-cluster-ndbinfo

21.6.15.44 The ndbinfo transporters Table .........................................

This table contains information about NDB transporters.

The 'transporters' table contains the following columns:

Notes

For each running data node in the cluster, the 'transporters' table displays a row showing the status of each of that node's connections with all nodes in the cluster, including itself. This information is shown in the table's status column, which can have any one of the following values: 'CONNECTING', 'CONNECTED', 'DISCONNECTING', or 'DISCONNECTED'.

Connections to API and management nodes which are configured but not currently connected to the cluster are shown with status 'DISCONNECTED'. Rows where the 'node_id' is that of a data node which is not currently connected are not shown in this table. (This is similar to the omission of disconnected nodes in the *note 'ndbinfo.nodes': mysql-cluster-ndbinfo-nodes. table.)

The 'remote_address' is the host name or address for the node whose ID is shown in the 'remote_node_id' column. The 'bytes_sent' from this node and 'bytes_received' by this node are the numbers, respectively, of bytes sent and received by the node using this connection since it was established. For nodes whose status is 'CONNECTING' or 'DISCONNECTED', these columns always display '0'.

Assume you have a 5-node cluster consisting of 2 data nodes, 2 SQL nodes, and 1 management node, as shown in the output of the 'SHOW' command in the *note 'ndb_mgm': mysql-cluster-programs-ndb-mgm. client:

 ndb_mgm> SHOW
 Connected to Management Server at: localhost:1186
 Cluster Configuration
 ---------------------
 [ndbd(NDB)]     2 node(s)
 id=1    @10.100.10.1  (5.7.44-ndb-7.6.34, Nodegroup: 0, *)
 id=2    @10.100.10.2  (5.7.44-ndb-7.6.34, Nodegroup: 0)

 [ndb_mgmd(MGM)] 1 node(s)
 id=10   @10.100.10.10  (5.7.44-ndb-7.6.34)

 [mysqld(API)]   2 node(s)
 id=20   @10.100.10.20  (5.7.44-ndb-7.6.34)
 id=21   @10.100.10.21  (5.7.44-ndb-7.6.34)

There are 10 rows in the 'transporters' table--5 for the first data node, and 5 for the second--assuming that all data nodes are running, as shown here:

 mysql> SELECT node_id, remote_node_id, status
     ->   FROM ndbinfo.transporters;
 +---------+----------------+---------------+
 | node_id | remote_node_id | status        |
 +---------+----------------+---------------+
 |       1 |              1 | DISCONNECTED  |
 |       1 |              2 | CONNECTED     |
 |       1 |             10 | CONNECTED     |
 |       1 |             20 | CONNECTED     |
 |       1 |             21 | CONNECTED     |
 |       2 |              1 | CONNECTED     |
 |       2 |              2 | DISCONNECTED  |
 |       2 |             10 | CONNECTED     |
 |       2 |             20 | CONNECTED     |
 |       2 |             21 | CONNECTED     |
 +---------+----------------+---------------+
 10 rows in set (0.04 sec)

If you shut down one of the data nodes in this cluster using the command '2 STOP' in the *note 'ndb_mgm': mysql-cluster-programs-ndb-mgm. client, then repeat the previous query (again using the *note 'mysql': mysql. client), this table now shows only 5 rows--1 row for each connection from the remaining data node to another node, including both itself and the data node that is currently offline--and displays 'CONNECTING' for the status of each remaining connection to the data node that is currently offline, as shown here:

 mysql> SELECT node_id, remote_node_id, status
     ->   FROM ndbinfo.transporters;
 +---------+----------------+---------------+
 | node_id | remote_node_id | status        |
 +---------+----------------+---------------+
 |       1 |              1 | DISCONNECTED  |
 |       1 |              2 | CONNECTING    |
 |       1 |             10 | CONNECTED     |
 |       1 |             20 | CONNECTED     |
 |       1 |             21 | CONNECTED     |
 +---------+----------------+---------------+
 5 rows in set (0.02 sec)

The 'connect_count', 'overloaded', 'overload_count', 'slowdown', and 'slowdown_count' counters are reset on connection, and retain their values after the remote node disconnects. The 'bytes_sent' and 'bytes_received' counters are also reset on connection, and so retain their values following disconnection (until the next connection resets them).

The overload state referred to by the 'overloaded' and 'overload_count' columns occurs when this transporter's send buffer contains more than 'OverloadLimit' bytes (default is 80% of 'SendBufferMemory', that is, 0.8 * 2097152 = 1677721 bytes). When a given transporter is in a state of overload, any new transaction that tries to use this transporter fails with Error 1218 ('Send Buffers overloaded in NDB kernel'). This affects both scans and primary key operations.

The slowdown state referenced by the 'slowdown' and 'slowdown_count' columns of this table occurs when the transporter's send buffer contains more than 60% of the overload limit (equal to 0.6 * 2097152 = 1258291 bytes by default). In this state, any new scan using this transporter has its batch size reduced to minimize the load on the transporter.
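
To check whether any transporter is currently in either state, or has entered it since the counters were last reset, a query such as this minimal sketch can be used (column names as described in these notes):

 -- Minimal sketch: flag transporters that are overloaded or slowed
 -- down now, or have been since the last connection
 SELECT node_id, remote_node_id, status,
        overloaded, overload_count, slowdown, slowdown_count
   FROM ndbinfo.transporters
  WHERE overloaded = 1 OR overload_count > 0
     OR slowdown = 1 OR slowdown_count > 0;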

Common causes of send buffer slowdown or overloading include the following:

See also *note mysql-cluster-config-send-buffers::.

 File: manual.info.tmp, Node: mysql-cluster-information-schema-tables, Next: mysql-cluster-sql-statements, Prev: mysql-cluster-ndbinfo, Up: mysql-cluster-management

21.6.16 INFORMATION_SCHEMA Tables for NDB Cluster

Two *note 'INFORMATION_SCHEMA': information-schema. tables provide information that is of particular use when managing an NDB Cluster. The *note 'FILES': information-schema-files-table. table provides information about NDB Cluster Disk Data files. The *note 'ndb_transid_mysql_connection_map': information-schema-ndb-transid-mysql-connection-map-table. table provides a mapping between transactions, transaction coordinators, and API nodes.
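
For example, a query such as this minimal sketch lists the mapping for all current NDB transactions; the column names are assumptions to be verified against this table's definition:

 -- Minimal sketch: column names are assumptions; verify against the
 -- ndb_transid_mysql_connection_map table definition
 SELECT mysql_connection_id, node_id, ndb_transid
   FROM INFORMATION_SCHEMA.ndb_transid_mysql_connection_map;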

Additional statistical and other data about NDB Cluster transactions, operations, threads, blocks, and other aspects of performance can be obtained from the tables in the *note 'ndbinfo': mysql-cluster-ndbinfo. database. For information about these tables, see *note mysql-cluster-ndbinfo::.

 File: manual.info.tmp, Node: mysql-cluster-sql-statements, Next: mysql-cluster-security, Prev: mysql-cluster-information-schema-tables, Up: mysql-cluster-management

21.6.17 Quick Reference: NDB Cluster SQL Statements

This section discusses several SQL statements that can prove useful in managing and monitoring a MySQL server that is connected to an NDB Cluster, and in some cases provide information about the cluster itself.

You can also query the tables in the *note 'ndbinfo': mysql-cluster-ndbinfo. information database for real-time data about many NDB Cluster operations. See *note mysql-cluster-ndbinfo::.
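
For example, the following statements, shown here as a sketch of the kinds of statements this section covers, can be issued from any connected SQL node:

 SHOW ENGINE NDB STATUS;        -- cluster connection and binlog status
 SHOW STATUS LIKE 'Ndb%';       -- NDB-related status variables
 SHOW VARIABLES LIKE 'ndb%';    -- NDB-related system variables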

 File: manual.info.tmp, Node: mysql-cluster-security, Prev: mysql-cluster-sql-statements, Up: mysql-cluster-management

21.6.18 NDB Cluster Security Issues

This section discusses security considerations to take into account when setting up and running NDB Cluster.

Topics covered in this section include the following:

 File: manual.info.tmp, Node: mysql-cluster-security-networking-issues, Next: mysql-cluster-security-mysql-privileges, Prev: mysql-cluster-security, Up: mysql-cluster-security

21.6.18.1 NDB Cluster Security and Networking Issues ....................................................

In this section, we discuss basic network security issues as they relate to NDB Cluster. It is extremely important to remember that NDB Cluster 'out of the box' is not secure; you or your network administrator must take the proper steps to ensure that your cluster cannot be compromised over the network.

Cluster communication protocols are inherently insecure, and no encryption or similar security measures are used in communications between nodes in the cluster. Because network speed and latency have a direct impact on the cluster's efficiency, it is also not advisable to employ SSL or other encryption to network connections between nodes, as such schemes effectively slow communications.

It is also true that no authentication is used for controlling API node access to an NDB Cluster. As with encryption, the overhead of imposing authentication requirements would have an adverse impact on Cluster performance.

In addition, there is no checking of the source IP address for either of the following when accessing the cluster:

For these reasons, it is necessary to protect the cluster on the network level. The safest network configuration for Cluster is one which isolates connections between Cluster nodes from any other network communications. This can be accomplished by any of the following methods:

  1. Keeping Cluster nodes on a network that is physically separate from any public networks. This option is the most dependable; however, it is the most expensive to implement.

    We show an example of an NDB Cluster setup using such a physically segregated network here:

    FIGURE GOES HERE: NDB Cluster with Hardware Firewall

    This setup has two networks, one private (solid box) for the Cluster management servers and data nodes, and one public (dotted box) where the SQL nodes reside. (We show the management and data nodes connected using a gigabit switch since this provides the best performance.) Both networks are protected from the outside by a hardware firewall, sometimes also known as a network-based firewall.

    This network setup is safest because no packets can reach the cluster's management or data nodes from outside the network--and none of the cluster's internal communications can reach the outside--without going through the SQL nodes, as long as the SQL nodes do not permit any packets to be forwarded. This means, of course, that all SQL nodes must be secured against hacking attempts.

    Important:

    With regard to potential security vulnerabilities, an SQL node is no different from any other MySQL server. See *note security-against-attack::, for a description of techniques you can use to secure MySQL servers.

  2. Using one or more software firewalls (also known as host-based firewalls) to control which packets pass through to the cluster from portions of the network that do not require access to it. In this type of setup, a software firewall must be installed on every host in the cluster which might otherwise be accessible from outside the local network.

    The host-based option is the least expensive to implement, but relies purely on software to provide protection and so is the most difficult to keep secure.

    This type of network setup for NDB Cluster is illustrated here:

    FIGURE GOES HERE: NDB Cluster with Software Firewalls

    Using this type of network setup means that there are two zones of NDB Cluster hosts. Each cluster host must be able to communicate with all of the other machines in the cluster, but only those hosting SQL nodes (dotted box) can be permitted to have any contact with the outside, while those in the zone containing the data nodes and management nodes (solid box) must be isolated from any machines that are not part of the cluster. Applications using the cluster and users of those applications must not be permitted to have direct access to the management and data node hosts.

    To accomplish this, you must set up software firewalls that limit the traffic to the type or types shown in the following table, according to the type of node that is running on each cluster host computer:

    Node types in a host-based firewall cluster configuration

    Node Type            Permitted Traffic

    SQL or API node      * It originates from the IP address of a
                           management or data node (using any TCP or
                           UDP port).

                         * It originates from within the network in
                           which the cluster resides and is on the port
                           that your application is using.

    Data node or         * It originates from the IP address of a
    management node        management or data node (using any TCP or
                           UDP port).

                         * It originates from the IP address of an SQL
                           or API node.

    Any traffic other than that shown in the table for a given node type should be denied.

    The specifics of configuring a firewall vary from firewall application to firewall application, and are beyond the scope of this Manual. 'iptables' is a very common and reliable firewall application, which is often used with 'APF' as a front end to make configuration easier. You can (and should) consult the documentation for the software firewall that you employ, should you choose to implement an NDB Cluster network setup of this type, or of a 'mixed' type as discussed under the next item.

  3. It is also possible to employ a combination of the first two methods, using both hardware and software to secure the cluster--that is, using both network-based and host-based firewalls. This is between the first two schemes in terms of both security level and cost. This type of network setup keeps the cluster behind the hardware firewall, but permits incoming packets to travel beyond the router connecting all cluster hosts to reach the SQL nodes.

    One possible network deployment of an NDB Cluster using hardware and software firewalls in combination is shown here:

    FIGURE GOES HERE: NDB Cluster with a Combination of Hardware and Software Firewalls

    In this case, you can set the rules in the hardware firewall to deny any external traffic except to SQL nodes and API nodes, and then permit traffic to them only on the ports required by your application.

Whatever network configuration you use, remember that your objective from the viewpoint of keeping the cluster secure remains the same--to prevent any unessential traffic from reaching the cluster while ensuring the most efficient communication between the nodes in the cluster.

Because NDB Cluster requires large numbers of ports to be open for communications between nodes, the recommended option is to use a segregated network. This represents the simplest way to prevent unwanted traffic from reaching the cluster.

Note:

If you wish to administer an NDB Cluster remotely (that is, from outside the local network), the recommended way to do this is to use 'ssh' or another secure login shell to access an SQL node host. From this host, you can then run the management client to access the management server safely, from within the cluster's own local network.

Even though it is possible to do so in theory, it is not recommended to use *note 'ndb_mgm': mysql-cluster-programs-ndb-mgm. to manage a Cluster directly from outside the local network on which the Cluster is running. Since neither authentication nor encryption takes place between the management client and the management server, this represents an extremely insecure means of managing the cluster, and is almost certain to be compromised sooner or later.

 File: manual.info.tmp, Node: mysql-cluster-security-mysql-privileges, Next: mysql-cluster-security-mysql-security-procedures, Prev: mysql-cluster-security-networking-issues, Up: mysql-cluster-security

21.6.18.2 NDB Cluster and MySQL Privileges ..........................................

In this section, we discuss how the MySQL privilege system works in relation to NDB Cluster and the implications of this for keeping an NDB Cluster secure.

Standard MySQL privileges apply to NDB Cluster tables. This includes all MySQL privilege types ('SELECT' privilege, 'UPDATE' privilege, 'DELETE' privilege, and so on) granted on the database, table, and column level. As with any other MySQL Server, user and privilege information is stored in the 'mysql' system database. The SQL statements used to grant and revoke privileges on *note 'NDB': mysql-cluster. tables, databases containing such tables, and columns within such tables are identical in all respects with the *note 'GRANT': grant. and *note 'REVOKE': revoke. statements used in connection with database objects involving any (other) MySQL storage engine. The same thing is true with respect to the *note 'CREATE USER': create-user. and *note 'DROP USER': drop-user. statements.
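
For example (a minimal sketch; the database, table, and account names are hypothetical):

 -- Minimal sketch: database, table, and account names are
 -- hypothetical; the statements are identical in form to those used
 -- with any other storage engine
 GRANT SELECT, UPDATE ON clusterdb.t1 TO 'jon'@'node20.example.com';
 REVOKE UPDATE ON clusterdb.t1 FROM 'jon'@'node20.example.com';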

It is important to keep in mind that, by default, the MySQL grant tables use the *note 'MyISAM': myisam-storage-engine. storage engine. Because of this, those tables are not normally duplicated or shared among MySQL servers acting as SQL nodes in an NDB Cluster. In other words, changes in users and their privileges do not automatically propagate between SQL nodes by default. If you wish, you can enable automatic distribution of MySQL users and privileges across NDB Cluster SQL nodes; see *note mysql-cluster-privilege-distribution::, for details.

Conversely, because there is no way in MySQL to deny privileges (privileges can either be revoked or not granted in the first place, but not denied as such), there is no special protection for *note 'NDB': mysql-cluster. tables on one SQL node from users that have privileges on another SQL node. (This is true even if you are not using automatic distribution of user privileges.) The definitive example of this is the MySQL 'root' account, which can perform any action on any database object. In combination with empty '[mysqld]' or '[api]' sections of the 'config.ini' file, this account can be especially dangerous. To understand why, consider the following scenario:

If these conditions are true, then anyone, anywhere can start a MySQL Server with '--ndbcluster' '--ndb-connectstring=MANAGEMENT_HOST' and access this NDB Cluster. Using the MySQL 'root' account, this person can then perform the following actions:

In sum, you cannot have a safe NDB Cluster if it is directly accessible from outside your local network.

Important:

Never leave the MySQL root account password empty. This is just as true when running MySQL as an NDB Cluster SQL node as it is when running it as a standalone (non-Cluster) MySQL Server, and should be done as part of the MySQL installation process before configuring the MySQL Server as an SQL node in an NDB Cluster.

If you wish to employ NDB Cluster's distributed privilege capabilities, you should not simply convert the system tables in the 'mysql' database to use the *note 'NDB': mysql-cluster. storage engine manually. Use the stored procedure provided for this purpose instead; see *note mysql-cluster-privilege-distribution::.

Otherwise, if you need to synchronize 'mysql' system tables between SQL nodes, you can use standard MySQL replication to do so, or employ a script to copy table entries between the MySQL servers.

Summary

The most important points to remember regarding the MySQL privilege system with regard to NDB Cluster are listed here:

  1. Users and privileges established on one SQL node do not automatically exist or take effect on other SQL nodes in the cluster. Conversely, removing a user or privilege on one SQL node in the cluster does not remove the user or privilege from any other SQL nodes.

  2. You can distribute MySQL users and privileges among SQL nodes using the SQL script, and the stored procedures it contains, that are supplied for this purpose in the NDB Cluster distribution.

  3. Once a MySQL user is granted privileges on an *note 'NDB': mysql-cluster. table from one SQL node in an NDB Cluster, that user can 'see' any data in that table regardless of the SQL node from which the data originated, even if you are not using privilege distribution.

 File: manual.info.tmp, Node: mysql-cluster-security-mysql-security-procedures, Prev: mysql-cluster-security-mysql-privileges, Up: mysql-cluster-security

21.6.18.3 NDB Cluster and MySQL Security Procedures ...................................................

In this section, we discuss MySQL standard security procedures as they apply to running NDB Cluster.

In general, any standard procedure for running MySQL securely also applies to running a MySQL Server as part of an NDB Cluster. First and foremost, you should always run a MySQL Server as the 'mysql' operating system user; this is no different from running MySQL in a standard (non-Cluster) environment. The 'mysql' system account should be uniquely and clearly defined. Fortunately, this is the default behavior for a new MySQL installation. You can verify that the *note 'mysqld': mysqld. process is running as the 'mysql' operating system user by using a system command such as the one shown here:

 $> ps aux | grep mysql
 root     10467  0.0  0.1   3616  1380 pts/3    S    11:53   0:00 \
   /bin/sh ./mysqld_safe --ndbcluster --ndb-connectstring=localhost:1186
 mysql    10512  0.2  2.5  58528 26636 pts/3    Sl   11:53   0:00 \
   /usr/local/mysql/libexec/mysqld --basedir=/usr/local/mysql \
   --datadir=/usr/local/mysql/var --user=mysql --ndbcluster \
   --ndb-connectstring=localhost:1186 --pid-file=/usr/local/mysql/var/mothra.pid \
   --log-error=/usr/local/mysql/var/mothra.err
 jon      10579  0.0  0.0   2736   688 pts/0    S+   11:54   0:00 grep mysql

If the *note 'mysqld': mysqld. process is running as any user other than 'mysql', you should immediately shut it down and restart it as the 'mysql' user. If this user does not exist on the system, the 'mysql' user account should be created, and this user should be part of the 'mysql' user group; in this case, you should also make sure that the MySQL data directory on this system (as set using the '--datadir' option for *note 'mysqld': mysqld.) is owned by the 'mysql' user, and that the SQL node's 'my.cnf' file includes 'user=mysql' in the '[mysqld]' section. Alternatively, you can start the MySQL server process with '--user=mysql' on the command line, but it is preferable to use the 'my.cnf' option, since you might forget to use the command-line option and so have *note 'mysqld': mysqld. running as another user unintentionally. The *note 'mysqld_safe': mysqld-safe. startup script forces MySQL to run as the 'mysql' user.

Important:

Never run *note 'mysqld': mysqld. as the system root user. Doing so means that potentially any file on the system can be read by MySQL, and thus--should MySQL be compromised--by an attacker.

As mentioned in the previous section (see *note mysql-cluster-security-mysql-privileges::), you should always set a root password for the MySQL Server as soon as you have it running. You should also delete the anonymous user account that is installed by default. You can accomplish these tasks using the following statements:

 $> mysql -u root

 mysql> ALTER USER 'root'@'localhost'
     ->     IDENTIFIED BY 'SECURE_PASSWORD';

 mysql> DELETE FROM mysql.user
     ->     WHERE User='';

 mysql> FLUSH PRIVILEGES;

Be very careful when executing the *note 'DELETE': delete. statement not to omit the 'WHERE' clause, or you risk deleting all MySQL users. Be sure to run the 'FLUSH PRIVILEGES' statement as soon as you have modified the 'mysql.user' table, so that the changes take immediate effect. Without 'FLUSH PRIVILEGES', the changes do not take effect until the next time that the server is restarted.

Note:

Many of the NDB Cluster utilities such as *note 'ndb_show_tables': mysql-cluster-programs-ndb-show-tables, *note 'ndb_desc': mysql-cluster-programs-ndb-desc, and *note 'ndb_select_all': mysql-cluster-programs-ndb-select-all. also work without authentication and can reveal table names, schemas, and data. By default these are installed on Unix-style systems with the permissions 'rwxr-xr-x' (755), which means they can be executed by any user that can access the 'mysql/bin' directory.

See *note mysql-cluster-programs::, for more information about these utilities.

 File: manual.info.tmp, Node: mysql-cluster-replication, Next: mysql-cluster-news, Prev: mysql-cluster-management, Up: mysql-cluster