Menu:
group-replication-deploying-locally:: Deploying Group Replication Locally
MySQL Group Replication is provided as a plugin for the MySQL server; each server in a group requires configuration and installation of the plugin. This section provides a detailed tutorial with the steps required to create a replication group with at least three members.
Tip:
An alternative way to deploy multiple instances of MySQL is by using InnoDB Cluster, which uses Group Replication and wraps it in a programmatic environment that enables you to easily work with groups of MySQL server instances in the MySQL Shell 8.0 (https://dev.mysql.com/doc/mysql-shell/8.0/en/). In addition, InnoDB Cluster interfaces seamlessly with MySQL Router and simplifies deploying MySQL with high availability. See MySQL AdminAPI (https://dev.mysql.com/doc/mysql-shell/8.0/en/admin-api-userguide.html).
File: manual.info.tmp, Node: group-replication-deploying-in-single-primary-mode, Next: group-replication-deploying-locally, Prev: group-replication-getting-started, Up: group-replication-getting-started
Menu:
group-replication-adding-instances:: Adding Instances to the Group
Each of the MySQL server instances in a group can run on an independent physical host machine, which is the recommended way to deploy Group Replication. This section explains how to create a replication group with three MySQL Server instances, each running on a different host machine. See *note group-replication-deploying-locally:: for information about deploying multiple MySQL server instances running Group Replication on the same host machine, for example for testing purposes.
FIGURE GOES HERE: Group Architecture
This tutorial explains how to get and deploy MySQL Server with the Group Replication plugin, how to configure each server instance before creating a group, and how to use Performance Schema monitoring to verify that everything is working correctly.
File: manual.info.tmp, Node: group-replication-deploying-instances, Next: group-replication-configuring-instances, Prev: group-replication-deploying-in-single-primary-mode, Up: group-replication-deploying-in-single-primary-mode
17.2.1.1 Deploying Instances for Group Replication ..................................................
The first step is to deploy at least three instances of MySQL Server. This procedure demonstrates using multiple hosts for the instances, named s1, s2, and s3. It is assumed that MySQL Server is installed on each host (see *note installing::). The Group Replication plugin is provided with MySQL Server 5.7.17 and later; no additional software is required, although the plugin must be installed in the running MySQL server. See *note group-replication-deploying-instances::; for additional information, see *note server-plugins::.
In this example, three instances are used for the group, which is the minimum number of instances needed to create a group. Adding more instances increases the fault tolerance of the group. For example, if the group consists of three members, in the event of failure of one instance the group can continue, but in the event of a further failure the group can no longer process write transactions. By adding more instances, the number of servers which can fail while the group continues to process transactions also increases. The maximum number of instances which can be used in a group is nine. For more information see *note group-replication-failure-detection::.
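As an illustration of how this scales, the figures below follow from the requirement that a majority of members remain reachable for the group to continue, and are shown here only for guidance:

Group Size   Majority Required   Failures Tolerated
3            2                   1
5            3                   2
7            4                   3
9            5                   4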
File: manual.info.tmp, Node: group-replication-configuring-instances, Next: group-replication-user-credentials, Prev: group-replication-deploying-instances, Up: group-replication-deploying-in-single-primary-mode
17.2.1.2 Configuring an Instance for Group Replication ......................................................
This section explains the configuration settings required for MySQL Server instances that you want to use for Group Replication. For background information, see *note group-replication-requirements-and-limitations::.
*note group-replication-storage-engines::
*note group-replication-configure-replication-framework::
*note group-replication-configure-plugin::
Storage Engines
For Group Replication, data must be stored in the InnoDB transactional storage engine (for details of why, see *note group-replication-requirements::). The use of other storage engines, including the temporary *note 'MEMORY': memory-storage-engine. storage engine, might cause errors in Group Replication. Set the 'disabled_storage_engines' system variable as follows to prevent their use:
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
Note that with the *note 'MyISAM': myisam-storage-engine. storage engine disabled, when you are upgrading a MySQL instance to a release where *note 'mysql_upgrade': mysql-upgrade. is still used (before MySQL 8.0.16), *note 'mysql_upgrade': mysql-upgrade. might fail with an error. To handle this, you can re-enable that storage engine while you run *note 'mysql_upgrade': mysql-upgrade, then disable it again when you restart the server. For more information, see *note mysql-upgrade::.
Replication Framework
The following settings configure replication according to the MySQL Group Replication requirements.
server_id=1
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
log_slave_updates=ON
log_bin=binlog
binlog_format=ROW
These settings configure the server to use the unique identifier number 1, to enable global transaction identifiers, and to store replication metadata in system tables instead of files. Additionally, they instruct the server to turn on binary logging, use row-based format, and disable binary log event checksums. For more details see *note group-replication-requirements::.
Group Replication Settings
At this point the option file ensures that the server is configured and instructed to instantiate the replication infrastructure under the given configuration. The following section configures the Group Replication settings for the server.
plugin_load_add='group_replication.so'
transaction_write_set_extraction=XXHASH64
group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
group_replication_start_on_boot=off
group_replication_local_address= "s1:33061"
group_replication_group_seeds= "s1:33061,s2:33061,s3:33061"
group_replication_bootstrap_group=off
'plugin_load_add' adds the Group Replication plugin to the list of plugins which the server loads at startup. In a production deployment this is preferable to installing the plugin manually.
Configuring 'group_replication_group_name' tells the plugin that the group that it is joining, or creating, is named "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa".
The value of 'group_replication_group_name' must be a valid UUID. This UUID is used internally when setting GTIDs for Group Replication events in the binary log. You can use 'SELECT UUID()' to generate a UUID.
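For example, you could generate a UUID like this (the value shown is illustrative; your server returns a different one):

mysql> SELECT UUID();
+--------------------------------------+
| UUID()                               |
+--------------------------------------+
| 59ef45e9-96cc-11e6-8f96-00212844f856 |
+--------------------------------------+

The returned value can then be used as the setting for 'group_replication_group_name' in each member's option file.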
Configuring the 'group_replication_start_on_boot' variable to 'off' instructs the plugin to not start operations automatically when the server starts. This is important when setting up Group Replication as it ensures you can configure the server before manually starting the plugin. Once the member is configured you can set 'group_replication_start_on_boot' to 'on' so that Group Replication starts automatically upon server boot.
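For example, once the initial setup is complete and verified, you could change the corresponding line in the option file to:

group_replication_start_on_boot=on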
Configuring 'group_replication_local_address' sets the network address and port which the member uses for internal communication with other members in the group. Group Replication uses this address for internal member-to-member connections involving remote instances of the group communication engine (XCom, a Paxos variant).
Important:
This address must be different from the 'hostname' and 'port' used for SQL, and it must not be used for client applications. It must only be used for internal communication between the members of the group while running Group Replication.
The network address configured by 'group_replication_local_address' must be resolvable by all group members. For example, if each server instance is on a different machine with a fixed network address, you could use the IP address of the machine, such as 10.0.0.1. If you use a host name, you must use a fully qualified name, and ensure it is resolvable through DNS, correctly configured '/etc/hosts' files, or other name resolution processes. From MySQL 8.0.14, IPv6 addresses (or host names that resolve to them) can be used as well as IPv4 addresses. A group can contain a mix of members using IPv6 and members using IPv4. For more information on Group Replication support for IPv6 networks and on mixed IPv4 and IPv6 groups, see Support For IPv6 And For Mixed IPv6 And IPv4 Groups (https://dev.mysql.com/doc/refman/8.0/en/group-replication-ipv6.html).
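For example, if you rely on '/etc/hosts' entries rather than DNS, each of the three hosts could carry entries like the following (the IP addresses are illustrative):

10.0.0.1   s1
10.0.0.2   s2
10.0.0.3   s3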
The recommended port for 'group_replication_local_address' is 33061. Configuring 'group_replication_group_seeds' sets the hostname and port of the group members which are used by the new member to establish its connection to the group. These members are called the seed members. Once the connection is established, the group membership information is listed in *note 'performance_schema.replication_group_members': performance-schema-replication-group-members-table. Usually the 'group_replication_group_seeds' list contains the 'hostname:port' of each group member's 'group_replication_local_address', but this is not obligatory and a subset of the group members can be chosen as seeds.
Important:
The 'hostname:port' listed in 'group_replication_group_seeds' is the seed member's internal network address, configured by 'group_replication_local_address' and not the SQL 'hostname:port' used for client connections, and shown for example in *note 'performance_schema.replication_group_members': performance-schema-replication-group-members-table. table.
The server that starts the group does not make use of this option, since it is the initial server and, as such, it is in charge of bootstrapping the group. In other words, any existing data on the server bootstrapping the group is used as the data for the next joining member. The second server joining asks the one and only member in the group to join; any missing data on the second server is replicated from the donor data on the bootstrapping member, and then the group expands. The third server joining can ask either of these two to join, data is synchronized to the new member, and then the group expands again. Subsequent servers repeat this procedure when joining.
Warning:
When joining multiple servers at the same time, make sure that they point to seed members that are already in the group. Do not use members that are also joining the group as seeds, because they might not yet be in the group when contacted.
It is good practice to start the bootstrap member first, and let it create the group. Then make it the seed member for the rest of the members that are joining. This ensures that there is a group formed when joining the rest of the members.
Creating a group and joining multiple members at the same time is not supported. It might work, but chances are that the operations race and the act of joining the group then ends in an error or a timeout.
Configuring 'group_replication_bootstrap_group' instructs the plugin whether to bootstrap the group or not. In this case, even though s1 is the first member of the group we set this variable to off in the option file. Instead we configure 'group_replication_bootstrap_group' when the instance is running, to ensure that only one member actually bootstraps the group.
Important:
The 'group_replication_bootstrap_group' variable must only be enabled on one server instance belonging to a group at any time, usually the first time you bootstrap the group (or in case the entire group is brought down and back up again). If you bootstrap the group multiple times, for example when multiple server instances have this option set, then they could create an artificial split brain scenario, in which two distinct groups with the same name exist. Always set 'group_replication_bootstrap_group=off' after the first server instance comes online.
Configuration for all servers in the group is quite similar. You need to change the specifics about each server (for example 'server_id', 'datadir', 'group_replication_local_address'). This is illustrated later in this tutorial.
File: manual.info.tmp, Node: group-replication-user-credentials, Next: group-replication-launching, Prev: group-replication-configuring-instances, Up: group-replication-deploying-in-single-primary-mode
17.2.1.3 User Credentials .........................
Group Replication uses the asynchronous replication protocol to achieve *note group-replication-distributed-recovery::, synchronizing group members before joining them to the group. The distributed recovery process relies on a replication channel named 'group_replication_recovery' which is used to transfer transactions from donor members to members that join the group. Therefore you need to set up a replication user with the correct permissions so that Group Replication can establish direct member-to-member recovery replication channels.
Start the MySQL server instance and then connect a client to it. Create a MySQL user with the 'REPLICATION SLAVE' privilege. This process can be captured in the binary log and then you can rely on distributed recovery to replicate the statements used to create the user. Alternatively, you can disable binary logging using 'SET SQL_LOG_BIN=0;' and then create the user manually on each member, for example if you want to avoid the changes being propagated to other server instances. If you do decide to disable binary logging, ensure you re-enable it once you have configured the user.
The following example uses the user RPL_USER with the password PASSWORD. When configuring your servers, use a suitable user name and password.
mysql> CREATE USER RPL_USER@'%' IDENTIFIED BY 'PASSWORD';
mysql> GRANT REPLICATION SLAVE ON *.* TO RPL_USER@'%';
mysql> FLUSH PRIVILEGES;
If binary logging was disabled, enable it again once the user has been created using 'SET SQL_LOG_BIN=1;'.
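For example, the complete sequence with binary logging disabled while the user is created looks like this (the same statements are used later in this tutorial when configuring the other members):

mysql> SET SQL_LOG_BIN=0;
mysql> CREATE USER RPL_USER@'%' IDENTIFIED BY 'PASSWORD';
mysql> GRANT REPLICATION SLAVE ON *.* TO RPL_USER@'%';
mysql> FLUSH PRIVILEGES;
mysql> SET SQL_LOG_BIN=1;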
Once the user has been configured, use the *note 'CHANGE MASTER TO': change-master-to. statement to configure the server to use the given credentials for the 'group_replication_recovery' replication channel the next time it needs to recover its state from another member. Issue the following, replacing RPL_USER and PASSWORD with the values used when creating the user.
mysql> CHANGE MASTER TO MASTER_USER='RPL_USER', MASTER_PASSWORD='PASSWORD' \\
FOR CHANNEL 'group_replication_recovery';
Distributed recovery is the first step taken by a server that joins the group and does not have the same set of transactions as the group members. If these credentials are not set correctly for the 'group_replication_recovery' replication channel and the RPL_USER user as shown, the server cannot connect to the donor members and run the distributed recovery process to gain synchrony with the other group members, and hence ultimately cannot join the group. See *note group-replication-distributed-recovery::.
Similarly, if the server cannot correctly identify the other members via the server's 'hostname' the recovery process can fail. It is recommended that operating systems running MySQL have a properly configured unique 'hostname', either using DNS or local settings. This 'hostname' can be verified in the 'Member_host' column of the *note 'performance_schema.replication_group_members': performance-schema-replication-group-members-table. table. If multiple group members externalize a default 'hostname' set by the operating system, there is a chance of the member not resolving to the correct member address and not being able to join the group. In such a situation use 'report_host' to configure a unique 'hostname' to be externalized by each of the servers.
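For example, to externalize a unique host name you could add a line such as the following to each member's option file (the host name is illustrative; use a name that the other members can resolve):

report_host=s1.example.com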
File: manual.info.tmp, Node: group-replication-launching, Next: group-replication-bootstrap, Prev: group-replication-user-credentials, Up: group-replication-deploying-in-single-primary-mode
17.2.1.4 Launching Group Replication ....................................
It is first necessary to ensure that the Group Replication plugin is installed on server s1. If you used 'plugin_load_add='group_replication.so'' in the option file then the Group Replication plugin is already installed, and you can proceed to the next step. Otherwise, you must install the plugin manually; to do this, connect to the server using the *note 'mysql': mysql. client, and issue the SQL statement shown here:
mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
Important:
The 'mysql.session' user must exist before you can load Group Replication. 'mysql.session' was added in MySQL version 5.7.19. If your data dictionary was initialized using an earlier version you must perform the MySQL upgrade procedure (see *note upgrading::). If the upgrade is not run, Group Replication fails to start with the error message 'There was an error when trying to access the server with user: mysql.session@localhost. Make sure the user is present in the server and that mysql_upgrade was ran after a server update'.
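If you are unsure whether the account is present, a query such as the following can be used to check (a row is returned when the account exists):

mysql> SELECT User, Host FROM mysql.user WHERE User = 'mysql.session';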
To check that the plugin was installed successfully, issue 'SHOW PLUGINS;' and check the output. It should show something like this:
mysql> SHOW PLUGINS;
+----------------------------+----------+--------------------+----------------------+-------------+
| Name | Status | Type | Library | License |
+----------------------------+----------+--------------------+----------------------+-------------+
| binlog | ACTIVE | STORAGE ENGINE | NULL | PROPRIETARY |
(...)
| group_replication | ACTIVE | GROUP REPLICATION | group_replication.so | PROPRIETARY |
+----------------------------+----------+--------------------+----------------------+-------------+
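As an alternative to scanning the full 'SHOW PLUGINS;' output, you can query the Information Schema for just this plugin, for example:

mysql> SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS
       WHERE PLUGIN_NAME = 'group_replication';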
File: manual.info.tmp, Node: group-replication-bootstrap, Next: group-replication-adding-instances, Prev: group-replication-launching, Up: group-replication-deploying-in-single-primary-mode
17.2.1.5 Bootstrapping the Group ................................
The process of starting a group for the first time is called bootstrapping. You use the 'group_replication_bootstrap_group' system variable to bootstrap a group. The bootstrap should only be done by a single server, the one that starts the group, and only once. This is why the value of the 'group_replication_bootstrap_group' option was not stored in the instance's option file. If it is saved in the option file, upon restart the server automatically bootstraps a second group with the same name. This would result in two distinct groups with the same name. The same reasoning applies to stopping and restarting the plugin with this option set to 'ON'. Therefore to safely bootstrap the group, connect to s1 and issue the following:
mysql> SET GLOBAL group_replication_bootstrap_group=ON;
mysql> START GROUP_REPLICATION;
mysql> SET GLOBAL group_replication_bootstrap_group=OFF;
Once the *note 'START GROUP_REPLICATION': start-group-replication. statement returns, the group has been started. You can check that the group is now created and that there is one member in it:
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+---------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+---------------+
| group_replication_applier | ce9be252-2b71-11e6-b8f4-00212844f856 | s1 | 3306 | ONLINE |
+---------------------------+--------------------------------------+-------------+-------------+---------------+
The information in this table confirms that there is a member in the group with the unique identifier 'ce9be252-2b71-11e6-b8f4-00212844f856', that it is 'ONLINE' and is at 's1' listening for client connections on port '3306'.
For the purpose of demonstrating that the server is indeed in a group and that it is able to handle load, create a table and add some content to it.
mysql> CREATE DATABASE test;
mysql> USE test;
mysql> CREATE TABLE t1 (c1 INT PRIMARY KEY, c2 TEXT NOT NULL);
mysql> INSERT INTO t1 VALUES (1, 'Luis');
Check the content of table 't1' and the binary log.
mysql> SELECT * FROM t1;
+----+------+
| c1 | c2 |
+----+------+
| 1 | Luis |
+----+------+
mysql> SHOW BINLOG EVENTS;
+---------------+-----+----------------+-----------+-------------+--------------------------------------------------------------------+
| Log_name | Pos | Event_type | Server_id | End_log_pos | Info |
+---------------+-----+----------------+-----------+-------------+--------------------------------------------------------------------+
| binlog.000001 | 4 | Format_desc | 1 | 123 | Server ver: 5.7.44-log, Binlog ver: 4 |
| binlog.000001 | 123 | Previous_gtids | 1 | 150 | |
| binlog.000001 | 150 | Gtid | 1 | 211 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1' |
| binlog.000001 | 211 | Query | 1 | 270 | BEGIN |
| binlog.000001 | 270 | View_change | 1 | 369 | view_id=14724817264259180:1 |
| binlog.000001 | 369 | Query | 1 | 434 | COMMIT |
| binlog.000001 | 434 | Gtid | 1 | 495 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:2' |
| binlog.000001 | 495 | Query | 1 | 585 | CREATE DATABASE test |
| binlog.000001 | 585 | Gtid | 1 | 646 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:3' |
| binlog.000001 | 646 | Query | 1 | 770 | use `test`; CREATE TABLE t1 (c1 INT PRIMARY KEY, c2 TEXT NOT NULL) |
| binlog.000001 | 770 | Gtid | 1 | 831 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:4' |
| binlog.000001 | 831 | Query | 1 | 899 | BEGIN |
| binlog.000001 | 899 | Table_map | 1 | 942 | table_id: 108 (test.t1) |
| binlog.000001 | 942 | Write_rows | 1 | 984 | table_id: 108 flags: STMT_END_F |
| binlog.000001 | 984 | Xid | 1 | 1011 | COMMIT /* xid=38 */ |
+---------------+-----+----------------+-----------+-------------+--------------------------------------------------------------------+
As seen above, the database and the table objects were created and their corresponding DDL statements were written to the binary log. Also, the data was inserted into the table and written to the binary log. The importance of the binary log entries is illustrated in the following section when the group grows and distributed recovery is executed as new members try to catch up and become online.
File: manual.info.tmp, Node: group-replication-adding-instances, Prev: group-replication-bootstrap, Up: group-replication-deploying-in-single-primary-mode
17.2.1.6 Adding Instances to the Group ......................................
Menu:
group-replication-adding-additional-instances:: Adding Additional Instances
At this point, the group has one member in it, server s1, which has some data in it. It is now time to expand the group by adding the other two servers configured previously.
File: manual.info.tmp, Node: group-replication-adding-a-second-instance, Next: group-replication-adding-additional-instances, Prev: group-replication-adding-instances, Up: group-replication-adding-instances
17.2.1.7 Adding a Second Instance .................................
In order to add a second instance, server s2, first create the configuration file for it. The configuration is similar to the one used for server s1, except for things such as the 'server_id'.
[mysqld]
#
# Disable other storage engines
#
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
#
# Replication configuration parameters
#
server_id=2
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
log_slave_updates=ON
log_bin=binlog
binlog_format=ROW
#
# Group Replication configuration
#
transaction_write_set_extraction=XXHASH64
group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
group_replication_start_on_boot=off
group_replication_local_address= "s2:33061"
group_replication_group_seeds= "s1:33061,s2:33061,s3:33061"
group_replication_bootstrap_group= off
Similar to the procedure for server s1, with the option file in place you launch the server. Then configure the recovery credentials as follows. The commands are the same as those used when setting up server s1, as the user is shared within the group. This member needs to have the same replication user configured in *note group-replication-user-credentials::. If you are relying on distributed recovery to configure the user on all members, when s2 connects to the seed s1 the replication user is replicated to s2. If you did not have binary logging enabled when you configured the user credentials on s1, you must create the replication user on s2. In this case, connect to s2 and issue:
SET SQL_LOG_BIN=0;
CREATE USER RPL_USER@'%' IDENTIFIED BY 'PASSWORD';
GRANT REPLICATION SLAVE ON *.* TO RPL_USER@'%';
SET SQL_LOG_BIN=1;
CHANGE MASTER TO MASTER_USER='RPL_USER', MASTER_PASSWORD='PASSWORD' \\
FOR CHANNEL 'group_replication_recovery';
If necessary, install the Group Replication plugin, see *note group-replication-launching::.
Start Group Replication and s2 starts the process of joining the group.
mysql> START GROUP_REPLICATION;
Unlike the previous steps, which were the same as those executed on s1, here there is a difference: you do not need to bootstrap the group, because the group already exists. In other words, on s2 'group_replication_bootstrap_group' is set to off, and you do not issue 'SET GLOBAL group_replication_bootstrap_group=ON;' before starting Group Replication, because the group has already been created and bootstrapped by server s1. At this point server s2 only needs to be added to the already existing group.
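You can verify the value of this variable on s2, which should report 0 (off), for example:

mysql> SELECT @@GLOBAL.group_replication_bootstrap_group;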
Tip:
When Group Replication starts successfully and the server joins the group, it checks the 'super_read_only' variable. By setting 'super_read_only' to ON in the member's configuration file, you can ensure that servers which fail when starting Group Replication for any reason do not accept transactions. If the server should join the group as a read-write instance, for example as the primary in a single-primary group or as a member of a multi-primary group, then when the 'super_read_only' variable is set to ON it is set to OFF upon joining the group.
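For example, to apply this recommendation you could add the following line to the member's option file, so that the server starts in a protected state and is switched to read-write only if it joins the group in a role that requires it:

super_read_only=ON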
Checking the *note 'performance_schema.replication_group_members': performance-schema-replication-group-members-table. table again shows that there are now two ONLINE servers in the group.
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+---------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+---------------+
| group_replication_applier | 395409e1-6dfa-11e6-970b-00212844f856 | s1 | 3306 | ONLINE |
| group_replication_applier | ac39f1e6-6dfa-11e6-a69d-00212844f856 | s2 | 3306 | ONLINE |
+---------------------------+--------------------------------------+-------------+-------------+---------------+
When s2 attempted to join the group, *note group-replication-distributed-recovery:: ensured that s2 applied the same transactions which s1 had applied. Once this process completed, s2 could join the group as a member, and at this point it is marked as ONLINE. In other words it must have already caught up with server s1 automatically. Once s2 is ONLINE, it then begins to process transactions with the group. Verify that s2 has indeed synchronized with server s1 as follows.
mysql> SHOW DATABASES LIKE 'test';
+-----------------+
| Database (test) |
+-----------------+
| test |
+-----------------+
mysql> SELECT * FROM test.t1;
+----+------+
| c1 | c2 |
+----+------+
| 1 | Luis |
+----+------+
mysql> SHOW BINLOG EVENTS;
+---------------+------+----------------+-----------+-------------+--------------------------------------------------------------------+
| Log_name | Pos | Event_type | Server_id | End_log_pos | Info |
+---------------+------+----------------+-----------+-------------+--------------------------------------------------------------------+
| binlog.000001 | 4 | Format_desc | 2 | 123 | Server ver: 5.7.44-log, Binlog ver: 4 |
| binlog.000001 | 123 | Previous_gtids | 2 | 150 | |
| binlog.000001 | 150 | Gtid | 1 | 211 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1' |
| binlog.000001 | 211 | Query | 1 | 270 | BEGIN |
| binlog.000001 | 270 | View_change | 1 | 369 | view_id=14724832985483517:1 |
| binlog.000001 | 369 | Query | 1 | 434 | COMMIT |
| binlog.000001 | 434 | Gtid | 1 | 495 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:2' |
| binlog.000001 | 495 | Query | 1 | 585 | CREATE DATABASE test |
| binlog.000001 | 585 | Gtid | 1 | 646 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:3' |
| binlog.000001 | 646 | Query | 1 | 770 | use `test`; CREATE TABLE t1 (c1 INT PRIMARY KEY, c2 TEXT NOT NULL) |
| binlog.000001 | 770 | Gtid | 1 | 831 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:4' |
| binlog.000001 | 831 | Query | 1 | 890 | BEGIN |
| binlog.000001 | 890 | Table_map | 1 | 933 | table_id: 108 (test.t1) |
| binlog.000001 | 933 | Write_rows | 1 | 975 | table_id: 108 flags: STMT_END_F |
| binlog.000001 | 975 | Xid | 1 | 1002 | COMMIT /* xid=30 */ |
| binlog.000001 | 1002 | Gtid | 1 | 1063 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:5' |
| binlog.000001 | 1063 | Query | 1 | 1122 | BEGIN |
| binlog.000001 | 1122 | View_change | 1 | 1261 | view_id=14724832985483517:2 |
| binlog.000001 | 1261 | Query | 1 | 1326 | COMMIT |
+---------------+------+----------------+-----------+-------------+--------------------------------------------------------------------+
As seen above, the second server has been added to the group and it has replicated the changes from server s1 automatically using distributed recovery. In other words, the transactions applied on s1 up to the point in time that s2 joined the group have been replicated to s2.
File: manual.info.tmp, Node: group-replication-adding-additional-instances, Prev: group-replication-adding-a-second-instance, Up: group-replication-adding-instances
17.2.1.8 Adding Additional Instances ....................................
Adding additional instances to the group is essentially the same sequence of steps as adding the second server, except that the configuration has to be adjusted in the same way as it was for server s2. To summarize the required commands:
1. Create the configuration file
[mysqld]
#
# Disable other storage engines
#
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
#
# Replication configuration parameters
#
server_id=3
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
log_slave_updates=ON
log_bin=binlog
binlog_format=ROW
#
# Group Replication configuration
#
group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
group_replication_start_on_boot=off
group_replication_local_address= "s3:33061"
group_replication_group_seeds= "s1:33061,s2:33061,s3:33061"
group_replication_bootstrap_group= off
2. Start the server and connect to it. Configure the recovery credentials for the group_replication_recovery channel.
SET SQL_LOG_BIN=0;
CREATE USER RPL_USER@'%' IDENTIFIED BY 'PASSWORD';
GRANT REPLICATION SLAVE ON *.* TO RPL_USER@'%';
FLUSH PRIVILEGES;
SET SQL_LOG_BIN=1;
CHANGE MASTER TO MASTER_USER='RPL_USER', MASTER_PASSWORD='PASSWORD' \\
FOR CHANNEL 'group_replication_recovery';
3. Install the Group Replication plugin and start it.
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
START GROUP_REPLICATION;
At this point server s3 is booted and running, has joined the group and caught up with the other servers in the group. Consulting the *note 'performance_schema.replication_group_members': performance-schema-replication-group-members-table. table again confirms this is the case.
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+---------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+---------------+
| group_replication_applier | 395409e1-6dfa-11e6-970b-00212844f856 | s1 | 3306 | ONLINE |
| group_replication_applier | 7eb217ff-6df3-11e6-966c-00212844f856 | s3 | 3306 | ONLINE |
| group_replication_applier | ac39f1e6-6dfa-11e6-a69d-00212844f856 | s2 | 3306 | ONLINE |
+---------------------------+--------------------------------------+-------------+-------------+---------------+
Issuing this same query on server s2 or server s1 yields the same result. Also, you can verify that server s3 has caught up:
mysql> SHOW DATABASES LIKE 'test';
+-----------------+
| Database (test) |
+-----------------+
| test |
+-----------------+
mysql> SELECT * FROM test.t1;
+----+------+
| c1 | c2 |
+----+------+
| 1 | Luis |
+----+------+
mysql> SHOW BINLOG EVENTS;
+---------------+------+----------------+-----------+-------------+--------------------------------------------------------------------+
| Log_name | Pos | Event_type | Server_id | End_log_pos | Info |
+---------------+------+----------------+-----------+-------------+--------------------------------------------------------------------+
| binlog.000001 | 4 | Format_desc | 3 | 123 | Server ver: 5.7.44-log, Binlog ver: 4 |
| binlog.000001 | 123 | Previous_gtids | 3 | 150 | |
| binlog.000001 | 150 | Gtid | 1 | 211 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1' |
| binlog.000001 | 211 | Query | 1 | 270 | BEGIN |
| binlog.000001 | 270 | View_change | 1 | 369 | view_id=14724832985483517:1 |
| binlog.000001 | 369 | Query | 1 | 434 | COMMIT |
| binlog.000001 | 434 | Gtid | 1 | 495 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:2' |
| binlog.000001 | 495 | Query | 1 | 585 | CREATE DATABASE test |
| binlog.000001 | 585 | Gtid | 1 | 646 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:3' |
| binlog.000001 | 646 | Query | 1 | 770 | use `test`; CREATE TABLE t1 (c1 INT PRIMARY KEY, c2 TEXT NOT NULL) |
| binlog.000001 | 770 | Gtid | 1 | 831 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:4' |
| binlog.000001 | 831 | Query | 1 | 890 | BEGIN |
| binlog.000001 | 890 | Table_map | 1 | 933 | table_id: 108 (test.t1) |
| binlog.000001 | 933 | Write_rows | 1 | 975 | table_id: 108 flags: STMT_END_F |
| binlog.000001 | 975 | Xid | 1 | 1002 | COMMIT /* xid=29 */ |
| binlog.000001 | 1002 | Gtid | 1 | 1063 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:5' |
| binlog.000001 | 1063 | Query | 1 | 1122 | BEGIN |
| binlog.000001 | 1122 | View_change | 1 | 1261 | view_id=14724832985483517:2 |
| binlog.000001 | 1261 | Query | 1 | 1326 | COMMIT |
| binlog.000001 | 1326 | Gtid | 1 | 1387 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:6' |
| binlog.000001 | 1387 | Query | 1 | 1446 | BEGIN |
| binlog.000001 | 1446 | View_change | 1 | 1585 | view_id=14724832985483517:3 |
| binlog.000001 | 1585 | Query | 1 | 1650 | COMMIT |
+---------------+------+----------------+-----------+-------------+--------------------------------------------------------------------+
File: manual.info.tmp, Node: group-replication-deploying-locally, Prev: group-replication-deploying-in-single-primary-mode, Up: group-replication-getting-started
The most common way to deploy Group Replication is using multiple server instances, to provide high availability. It is also possible to deploy Group Replication locally, for example for testing purposes. This section explains how you can deploy Group Replication locally.
Important:
Group Replication is usually deployed on multiple hosts because this ensures that high availability is provided. The instructions in this section are not suitable for production deployments because all MySQL server instances run on the same single host. In the event of failure of this host, the whole group fails. Therefore this information should be used for testing purposes only, and it should not be used in a production environment.
This section explains how to create a replication group with three MySQL Server instances on one physical machine. This means that three data directories are needed, one per server instance, and that you need to configure each instance independently. This procedure assumes that MySQL Server was downloaded and unpacked into the directory named 'mysql-5.7'. Each MySQL server instance requires a specific data directory. Create a directory named 'data', then in that directory create a subdirectory for each server instance, for example s1, s2 and s3, and initialize each one.
mysql-5.7/bin/mysqld --initialize-insecure --basedir=$PWD/mysql-5.7 --datadir=$PWD/data/s1
mysql-5.7/bin/mysqld --initialize-insecure --basedir=$PWD/mysql-5.7 --datadir=$PWD/data/s2
mysql-5.7/bin/mysqld --initialize-insecure --basedir=$PWD/mysql-5.7 --datadir=$PWD/data/s3
Inside 'data/s1', 'data/s2', 'data/s3' is an initialized data directory, containing the mysql system database and related tables and much more. To learn more about the initialization procedure, see *note data-directory-initialization::.
Warning:
Do not use '--initialize-insecure' in production environments; it is only used here to simplify the tutorial. For more information on security settings, see *note group-replication-security::.
Configuration of Local Group Replication Members
When you are following *note group-replication-configuring-instances::, you need to add configuration for the data directories added in the previous section. For example:
[mysqld]
# server configuration
datadir=<full_path_to_data>/data/s1
basedir=<full_path_to_bin>/mysql-5.7/
port=24801
socket=<full_path_to_sock_dir>/s1.sock
These settings configure the MySQL server to use the data directory created earlier, and specify which port the server should open and start listening on for incoming connections.
Note:
The non-default port of 24801 is used because in this tutorial the three server instances use the same hostname. In a setup with three different machines this would not be required.
Group Replication requires a network connection between the members, which means that each member must be able to resolve the network address of all of the other members. For example in this tutorial all three instances run on one machine, so to ensure that the members can contact each other you could add a line to the option file such as 'report_host=127.0.0.1'.
Then each member needs to be able to connect to the other members on their 'group_replication_local_address'. For example in the option file of member s1 add:
group_replication_local_address= "127.0.0.1:24901"
group_replication_group_seeds= "127.0.0.1:24901,127.0.0.1:24902,127.0.0.1:24903"
This configures s1 to use port 24901 for internal group communication with seed members. For each server instance you want to add to the group, make these changes in the option file of the member. For each member you must ensure a unique address is specified, so use a unique port per instance for 'group_replication_local_address'. Usually you want all members to be able to serve as seeds for members that are joining the group and do not yet have the transactions processed by the group. In this case, add all of the ports to 'group_replication_group_seeds' as shown above.
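For example, under these assumptions the corresponding part of the option file for a second local instance might look like this, in addition to the common replication and Group Replication settings shown earlier (the ports and paths are illustrative and follow the numbering used above):

[mysqld]
# server configuration
server_id=2
datadir=<full_path_to_data>/data/s2
basedir=<full_path_to_bin>/mysql-5.7/
port=24802
socket=<full_path_to_sock_dir>/s2.sock
report_host=127.0.0.1
group_replication_local_address= "127.0.0.1:24902"
group_replication_group_seeds= "127.0.0.1:24901,127.0.0.1:24902,127.0.0.1:24903"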
The remaining steps of *note group-replication-deploying-in-single-primary-mode:: apply equally to a group which you have deployed locally in this way.
File: manual.info.tmp, Node: group-replication-requirements-and-limitations, Next: group-replication-monitoring, Prev: group-replication-getting-started, Up: group-replication