From 08cad954a5033cfc12ad76210d6923c58e68d1d6 Mon Sep 17 00:00:00 2001 From: Patrick Birch <48594400+patrickbirch@users.noreply.github.com> Date: Fri, 6 Sep 2024 05:00:10 -0500 Subject: [PATCH] PXC-4488 Update topics for deprecated mysql_native_password method - 8.4 modified: docs/load-balance-proxysql.md modified: docs/upgrade-guide.md modified: docs/virtual-sandbox.md --- docs/load-balance-proxysql.md | 159 +++++++++-------- docs/upgrade-guide.md | 108 ++++++------ docs/virtual-sandbox.md | 324 ++++++++++++++++++---------------- 3 files changed, 315 insertions(+), 276 deletions(-) diff --git a/docs/load-balance-proxysql.md b/docs/load-balance-proxysql.md index a0f82326..d3e0f781 100644 --- a/docs/load-balance-proxysql.md +++ b/docs/load-balance-proxysql.md @@ -7,9 +7,9 @@ of a crash to minimize downtime. The daemon accepts incoming traffic from MySQL clients and forwards it to backend MySQL servers. -The proxy is designed to run continuously without needing to be restarted. Most +The proxy is designed to run continuously without needing to be restarted. Most configuration can be done at runtime using queries similar to SQL statements in -the ProxySQL admin interface. These include runtime parameters, server +the ProxySQL admin interface. These include runtime parameters, server grouping, and traffic-related settings. !!! admonition "See also" @@ -21,18 +21,18 @@ grouping, and traffic-related settings. !!! important - In version {{vers}}, Percona XtraDB Cluster does not support ProxySQL v1. + In version {{vers}}, Percona XtraDB Cluster does not support ProxySQL v1. ## Manual configuration This section describes how to configure ProxySQL with three Percona XtraDB Cluster nodes. 
-| Node| Host Name| IP address| -| ---- | -------- | --------- | -| Node 1| pxc1| 192.168.70.71| -| Node 2| pxc2| 192.168.70.72| -| Node 3| pxc3 | 192.168.70.73| -| Node 4| proxysql| 192.168.70.74| +| Node | Host Name | IP address | +| ------ | --------- | ------------- | +| Node 1 | pxc1 | 192.168.70.71 | +| Node 2 | pxc2 | 192.168.70.72 | +| Node 3 | pxc3 | 192.168.70.73 | +| Node 4 | proxysql | 192.168.70.74 | ProxySQL can be configured either using the `/etc/proxysql.cnf` file or through the admin interface. The admin interface is recommended because this interface can dynamically change the configuration without restarting the proxy. @@ -45,20 +45,20 @@ For this tutorial, install Percona XtraDB Cluster on Node 4: **Changes in the installation procedure** -In Percona XtraDB Cluster {{vers}}, ProxySQL is not installed automatically as a dependency of the ``percona-xtradb-cluster-client-8.0`` package. You should install the ``proxysql`` package separately. +In Percona XtraDB Cluster {{vers}}, ProxySQL is not installed automatically as a dependency of the `percona-xtradb-cluster-client-8.4` package. You should install the `proxysql` package separately. -!!! note +!!! note ProxySQL has multiple versions in the version 2 series. -* On Debian or Ubuntu for ProxySQL 2.x: +- On Debian or Ubuntu for ProxySQL 2.x: ```{.bash data-prompt="root@proxysql:~#"} root@proxysql:~# apt install percona-xtradb-cluster-client root@proxysql:~# apt install proxysql2 ``` -* On Red Hat Enterprise Linux or CentOS for ProxySQL 2.x: +- On Red Hat Enterprise Linux or CentOS for ProxySQL 2.x: ```{.bash data-prompt="$"} $ sudo yum install Percona-XtraDB-Cluster-client-80 @@ -150,8 +150,8 @@ The following output shows the ProxySQL tables: For more information about admin databases and tables, see [Admin Tables](https://github.com/sysown/proxysql/blob/master/doc/admin_tables.md) -!!! note - +!!! 
note + The ProxySQL configuration can reside in the following areas: @@ -172,8 +172,8 @@ see [Admin Tables](https://github.com/sysown/proxysql/blob/master/doc/admin_tables.md) To configure the backend Percona XtraDB Cluster nodes in ProxySQL, insert corresponding records into the `mysql_servers` table. -!!! note - +!!! note + ProxySQL uses the concept of *hostgroups* to group cluster nodes. This enables you to balance the load in a cluster by routing different types of traffic to different groups. @@ -217,10 +217,21 @@ To enable monitoring of Percona XtraDB Cluster nodes in ProxySQL, create a user with `USAGE` privilege on any node in the cluster and configure the user in ProxySQL. -The following example shows how to add a monitoring user on Node 2: +The following example shows how to add a monitoring user on Node 2 if you are using the deprecated `mysql_native_password` authentication method: ```{.bash data-prompt="mysql@pxc2>"} mysql@pxc2> CREATE USER 'proxysql'@'%' IDENTIFIED WITH mysql_native_password by '$3Kr$t'; +``` + +The following example adds a monitoring user on Node 2 if you are using the `caching_sha2_password` authentication method: + +```{.bash data-prompt="mysql@pxc2>"} +mysql@pxc2> CREATE USER 'proxysql'@'%' IDENTIFIED WITH caching_sha2_password by '$3Kr$t'; +``` + +Grant the user account privileges: + +```{.bash data-prompt="mysql@pxc2>"} mysql@pxc2> GRANT USAGE ON *.* TO 'proxysql'@'%'; ``` @@ -373,13 +384,13 @@ mysql@pxc3> GRANT ALL ON *.* TO 'sbuser'@'192.168.70.74'; You can install `sysbench` from Percona software repositories: -* For Debian or Ubuntu: +- For Debian or Ubuntu: ```{.bash data-prompt="root@proxysql:~#"} root@proxysql:~# apt install sysbench ``` -* For Red Hat Enterprise Linux or CentOS +- For Red Hat Enterprise Linux or CentOS ```{.bash data-prompt="root@proxysql:~#"} root@proxysql:~# yum install sysbench ``` @@ -391,31 +402,31 @@ root@proxysql:~# yum install sysbench 1.
Create the database that will be used for testing on one of the Percona XtraDB Cluster nodes: - ```{.bash data-prompt="mysql@pxc1>"} - mysql@pxc1> CREATE DATABASE sbtest; - ``` + ```{.bash data-prompt="mysql@pxc1>"} + mysql@pxc1> CREATE DATABASE sbtest; + ``` 2. Populate the table with data for the benchmark on the ProxySQL node: - ```{.bash data-prompt="root@proxysql:~#"} - root@proxysql:~# sysbench --report-interval=5 --num-threads=4 \ - --num-requests=0 --max-time=20 \ - --test=/usr/share/doc/sysbench/tests/db/oltp.lua \ - --mysql-user='sbuser' --mysql-password='sbpass' \ - --oltp-table-size=10000 --mysql-host=127.0.0.1 --mysql-port=6033 \ - prepare - ``` + ```{.bash data-prompt="root@proxysql:~#"} + root@proxysql:~# sysbench --report-interval=5 --num-threads=4 \ + --num-requests=0 --max-time=20 \ + --test=/usr/share/doc/sysbench/tests/db/oltp.lua \ + --mysql-user='sbuser' --mysql-password='sbpass' \ + --oltp-table-size=10000 --mysql-host=127.0.0.1 --mysql-port=6033 \ + prepare + ``` 3. 
Run the benchmark on the ProxySQL node: - ```{.bash data-prompt="root@proxysql:~#"} - root@proxysql:~# sysbench --report-interval=5 --num-threads=4 \ - --num-requests=0 --max-time=20 \ - --test=/usr/share/doc/sysbench/tests/db/oltp.lua \ - --mysql-user='sbuser' --mysql-password='sbpass' \ - --oltp-table-size=10000 --mysql-host=127.0.0.1 --mysql-port=6033 \ - run - ``` + ```{.bash data-prompt="root@proxysql:~#"} + root@proxysql:~# sysbench --report-interval=5 --num-threads=4 \ + --num-requests=0 --max-time=20 \ + --test=/usr/share/doc/sysbench/tests/db/oltp.lua \ + --mysql-user='sbuser' --mysql-password='sbpass' \ + --oltp-table-size=10000 --mysql-host=127.0.0.1 --mysql-port=6033 \ + run + ``` ProxySQL stores collected data in the `stats` schema: @@ -567,51 +578,51 @@ Initiating `pxc_maint_mode=MAINTENANCE` does not disconnect existing connections Assisted maintenance mode is controlled via the `pxc_maint_mode` variable, which is monitored by ProxySQL and can be set to one of the following values: -* `DISABLED`: This value is the default state -that tells ProxySQL to route traffic to the node as usual. +- `DISABLED`: This value is the default state + that tells ProxySQL to route traffic to the node as usual. + +- `SHUTDOWN`: This state is set automatically when you initiate node shutdown. -* `SHUTDOWN`: This state is set automatically when you initiate node shutdown. + You may need to shut down a node when upgrading the OS, adding resources, + changing hardware parts, relocating the server, etc. - You may need to shut down a node when upgrading the OS, adding resources, - changing hardware parts, relocating the server, etc. + When you initiate node shutdown, Percona XtraDB Cluster does not initiate the server shutdown process immediately. + Instead, it changes the state to `pxc_maint_mode=SHUTDOWN` + and waits for a predefined period (10 seconds by default). 
+ When ProxySQL detects that the mode is set to `SHUTDOWN`, + it changes the status of this node to `OFFLINE_SOFT`. This status stops creating new node connections. + After the transition period, long-running active transactions are aborted. - When you initiate node shutdown, Percona XtraDB Cluster does not send the signal immediately. - Intead, it changes the state to `pxc_maint_mode=SHUTDOWN` - and waits for a predefined period (10 seconds by default). - When ProxySQL detects that the mode is set to `SHUTDOWN`, - it changes the status of this node to `OFFLINE_SOFT`. This status stops creating new node connections. - After the transition period, long-running active transactions are aborted. +- `MAINTENANCE`: You can change to this state + if you need to perform maintenance on a node without shutting it down. -* `MAINTENANCE`: You can change to this state -if you need to perform maintenance on a node without shutting it down. + You may need to isolate the node for a specific time + so that it does not receive traffic from ProxySQL + while you resize the buffer pool, truncate the undo log, + defragment, or check disks, etc. - You may need to isolate the node for a specific time - so that it does not receive traffic from ProxySQL - while you resize the buffer pool, truncate the undo log, - defragment, or check disks, etc. + To do this, manually set `pxc_maint_mode=MAINTENANCE`. + Control is not returned to the user for a predefined period + (10 seconds by default). You can increase the transition period + using the `pxc_maint_transition_period` variable + to accommodate long-running transactions. + If the period is long enough for all transactions to finish, + there should be little disruption in the cluster workload. If you increase + the transition period, the packaging script may determine the wait as a server stall. - To do this, manually set `pxc_maint_mode=MAINTENANCE`. - Control is not returned to the user for a predefined period - (10 seconds by default). 
You can increase the transition period - using the `pxc_maint_transition_period` variable - to accommodate long-running transactions. - If the period is long enough for all transactions to finish, - there should be little disruption in the cluster workload. If you increase - the transition period, the packaging script may determine the wait as a server stall. + When ProxySQL detects that the mode is set to `MAINTENANCE`, + it stops routing traffic to the node. During the transition period, + any existing connections continue, but ProxySQL avoids opening new connections and starting transactions. + Still, the user can open connections to monitor status. - When ProxySQL detects that the mode is set to `MAINTENANCE`, - it stops routing traffic to the node. During the transition period, - any existing connections continue, but ProxySQL avoids opening new connections and starting transactions. - Still, the user can open connections to monitor status. - - Once control is returned, you can perform maintenance activity. + Once control is returned, you can perform maintenance activity. - !!! note + !!! note - Data changes continue to be replicated across the cluster. + Data changes continue to be replicated across the cluster. - After you finish maintenance, set the mode back to `DISABLED`. - When ProxySQL detects this, it starts routing traffic to the node again. + After you finish maintenance, set the mode back to `DISABLED`. + When ProxySQL detects this, it starts routing traffic to the node again. **Related sections** diff --git a/docs/upgrade-guide.md b/docs/upgrade-guide.md index eecd1ff4..0ce5f8f8 100644 --- a/docs/upgrade-guide.md +++ b/docs/upgrade-guide.md @@ -6,13 +6,14 @@ (*rolling upgrade*) to the Percona XtraDB Cluster 8.0. --> + The following documents contain details about relevant changes in the 8.0 series of MySQL and Percona Server for MySQL. 
Make sure you deal with any incompatible features and variables mentioned in these documents when upgrading to Percona XtraDB Cluster 8.0. -* [Upgrading MySQL](http://dev.mysql.com/doc/refman/8.0/en/upgrading.html) +- [Upgrading MySQL](http://dev.mysql.com/doc/refman/8.0/en/upgrading.html) -* [Upgrading from MySQL 5.7 to 8.0](https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html) +- [Upgrading from MySQL 5.7 to 8.0](https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html) ## Important changes in Percona XtraDB Cluster 8.0 @@ -22,8 +23,8 @@ and variables mentioned in these documents when upgrading to Percona XtraDB Clus - [Not recommended to mix PXC 5.7 nodes with PXC 8.0 nodes](#not-recommended-to-mix-pxc-57-nodes-with-pxc-80-nodes) - [PXC strict mode is enabled by default](#strict-mode-is-enabled-by-default) - [The configuration file layout has changed in PXC 8.0](#the-configuration-file-layout-has-changed-in-pxc-80) - - [caching\_sha2\_password is the default authentication plugin](#caching_sha2_password-is-the-default-authentication-plugin) - - [mysql\_upgrade is part of SST](#mysql_upgrade-is-part-of-sst) + - [caching_sha2_password is the default authentication plugin](#caching_sha2_password-is-the-default-authentication-plugin) + - [mysql_upgrade is part of SST](#mysql_upgrade-is-part-of-sst) - [Major upgrade scenarios](#major-upgrade-scenarios) - [Scenario: No active parallel workload or with read-only workload](#scenario-no-active-parallel-workload-or-with-read-only-workload) - [Scenario: Upgrade from PXC 5.6 to PXC 8.0](#scenario-upgrade-from-pxc-56-to-pxc-80) @@ -50,13 +51,13 @@ error. sections [Encrypting PXC Traffic](encrypt-traffic.md#encrypt-traffic), [Configuring Nodes for Write-Set Replication](configure-nodes.md#configure) - + ### Not recommended to mix PXC 5.7 nodes with PXC 8.0 nodes Shut down the cluster and upgrade each node to PXC 8.0. 
It is @@ -69,6 +70,7 @@ Shut down the cluster and upgrade all nodes to PXC The rolling upgrade is supported but ensure the traffic is controlled during the upgrade and writes are directed only to 5.7 nodes until all nodes are upgraded to 8.0. --> + ### PXC strict mode is enabled by default Percona XtraDB Cluster in 8.0 runs with [PXC Strict Mode](strict-mode.md#pxc-strict-mode) enabled by default. This will deny any unsupported operations and may halt the server if [a strict mode validation fails](strict-mode.md#validations). It is recommended to first start the node with @@ -77,9 +79,9 @@ configuration file. All configuration settings are stored in the default MySQL configuration file: -* Path on Debian and Ubuntu: `/etc/mysql/mysql.conf.d/mysqld.cnf` +- Path on Debian and Ubuntu: `/etc/mysql/mysql.conf.d/mysqld.cnf` -* Path on Red Hat and CentOS: `/etc/my.cnf` +- Path on Red Hat and CentOS: `/etc/my.cnf` After you check the log for any tech preview features or unsupported features and you have fixed any of the encountered incompatibilities, set the variable @@ -95,9 +97,9 @@ Restarting the node with the updated configuration file also sets variable to `E All configuration settings are stored in the default MySQL configuration file: -* Path on Debian and Ubuntu: `/etc/mysql/mysql.conf.d/mysqld.cnf` +- Path on Debian and Ubuntu: `/etc/mysql/mysql.conf.d/mysqld.cnf` -* Path on Red Hat and CentOS: /etc/my.cnf +- Path on Red Hat and CentOS: /etc/my.cnf Before you start the upgrade, move your custom settings from `/etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf` (on Debian and @@ -110,21 +112,21 @@ and CentOS) to the new location accordingly. ### caching_sha2_password is the default authentication plugin -In Percona XtraDB Cluster 8.0, the default authentication plugin is -`caching_sha2_password`. 
The ProxySQL option -[–syncusers](proxysql-v2.md#pxc-proxysql-v2-admin-tool-syncusers) will not work if the Percona XtraDB Cluster user is +In Percona XtraDB Cluster 8.4, the default authentication plugin is +`caching_sha2_password`. In ProxySQL 2.6.2 or later, use the `caching_sha2_password` authentication method. + +If you are using a version before ProxySQL 2.6.2, the option [–syncusers](proxysql-v2.md#pxc-proxysql-v2-admin-tool-syncusers) does not work if the Percona XtraDB Cluster user is created using `caching_sha2_password`. Use the `mysql_native_password` -authentication plugin in these cases. +authentication plugin in these cases. You must enable the `mysql_native_password` plugin explicitly because it is disabled by default in MySQL 8.4. -Be sure you are running on the latest 5.7 version before you upgrade to 8.0. +Be sure you are running on the latest 8.0 version before you upgrade to 8.4. ### mysql_upgrade is part of [SST](glossary.md#sst) **mysql_upgrade** is now run automatically as part of [SST](glossary.md#sst). You do not have to run it manually when upgrading your system from an older version. - +#. Stop the `mysql` service: --> + @@ -179,13 +182,14 @@ each node accordingly and avoid joining a cluster with unencrypted cluster traffic: all nodes in your cluster must have traffic encryption enabled. For more information, see :ref:`upgrade-guide-changed-traffic-encryption`. --> + ## Major upgrade scenarios Upgrading PXC from 5.7 to 8.0 may have slightly different strategies depending on the configuration and workload on your PXC cluster. -Note that the new default value of `pxc-encrypt-cluster-traffic` (set to *ON* -versus *OFF* in PXC 5.7) requires additional care. You cannot join a 5.7 node +Note that the new default value of `pxc-encrypt-cluster-traffic` (set to _ON_ +versus _OFF_ in PXC 5.7) requires additional care. 
You cannot join a 5.7 node to a PXC 8.0 cluster unless the node has traffic encryption enabled as the cluster may not have some nodes with traffic encryption enabled and some nodes with traffic encryption disabled. For more information, see @@ -289,8 +293,8 @@ of each replica node (via `RESET SLAVE ALL - - + + + ### Scenario: Upgrade from PXC 5.6 to PXC 8.0 First, upgrade PXC from 5.6 to the latest version of PXC 5.7. Then proceed with the upgrade using the procedure described in To upgrade the cluster, follow these steps for each node: -1. Make sure that all nodes are synchronized. +1. Make sure that all nodes are synchronized. -2. Stop the `mysql` service: +2. Stop the `mysql` service: ```{.bash data-prompt="$"} $ sudo service mysql stop ``` -3. Upgrade Percona XtraDB Cluster and Percona XtraBackup packages. -For more information, see [Installing Percona XtraDB Cluster](index.md#install). +3. Upgrade Percona XtraDB Cluster and Percona XtraBackup packages. + For more information, see [Installing Percona XtraDB Cluster](index.md#install). -1. Back up `grastate.dat`, so that you can restore it -if it is corrupted or zeroed out due to network issue. +4. Back up `grastate.dat`, so that you can restore it + if it is corrupted or zeroed out due to a network issue. -1. Now, start the cluster node with 8.0 packages installed, PXC will upgrade -the data directory as needed - either as part of the startup process or a -state transfer (IST/SST). +5. Now, start the cluster node with 8.0 packages installed; PXC will upgrade + the data directory as needed - either as part of the startup process or a + state transfer (IST/SST). - In most cases, starting the `mysql` service should run the node with your - previous configuration. For more information, see [Adding Nodes to Cluster](add-node.md#add-node). 
- - ```{.bash data-prompt="$"} - $ sudo service mysql start - ``` + In most cases, starting the `mysql` service should run the node with your + previous configuration. For more information, see [Adding Nodes to Cluster](add-node.md#add-node). - !!! note + ```{.bash data-prompt="$"} + $ sudo service mysql start + ``` - On CentOS, the /etc/my.cnf configuration file is renamed to `my.cnf.rpmsave`. Make sure to rename it back before joining the upgraded node back to the cluster. + !!! note + On CentOS, the /etc/my.cnf configuration file is renamed to `my.cnf.rpmsave`. Make sure to rename it back before joining the upgraded node back to the cluster. - [PXC Strict Mode](strict-mode.md#pxc-strict-mode) is enabled by default, which may result in denying any - unsupported operations and may halt the server. For more information, see - [pxc-strict-mode is enabled by default](#pxc-strict-mode-is-enabled-by-default). - `pxc-encrypt-cluster-traffic` is enabled by default. You need to configure - each node accordingly and avoid joining a cluster with unencrypted cluster - traffic. For more information, see - [Traffic encryption is enabled by default](#traffic-encryption-is-enabled-by-default). + [PXC Strict Mode](strict-mode.md#pxc-strict-mode) is enabled by default, which may result in denying any + unsupported operations and may halt the server. For more information, see + [pxc-strict-mode is enabled by default](#pxc-strict-mode-is-enabled-by-default). -1. Repeat this procedure for the next node in the cluster -until you upgrade all nodes. + `pxc-encrypt-cluster-traffic` is enabled by default. You need to configure + each node accordingly and avoid joining a cluster with unencrypted cluster + traffic. For more information, see + [Traffic encryption is enabled by default](#traffic-encryption-is-enabled-by-default). +6. Repeat this procedure for the next node in the cluster + until you upgrade all nodes. 
diff --git a/docs/virtual-sandbox.md b/docs/virtual-sandbox.md index 82a95bae..1da15414 100644 --- a/docs/virtual-sandbox.md +++ b/docs/virtual-sandbox.md @@ -5,7 +5,7 @@ based on ProxySQL. To test the cluster, we will use the sysbench benchmark tool. It is assumed that each PXC node is installed on Amazon EC2 micro instances -running CentOS 7. However, the information in this section should apply if you +running CentOS 7. However, the information in this section should apply if you used another virtualization technology (for example, VirtualBox) with any Linux distribution. @@ -16,144 +16,144 @@ more virtual machine has ProxySQL, which redirects requests to the nodes. Running ProxySQL on an application server, instead of having it as a dedicated entity, removes the unnecessary extra network roundtrip, because the load balancing layer in Percona XtraDB Cluster scales well with application servers. -1. Install Percona XtraDB Cluster on three cluster nodes, as described in [Configuring Percona XtraDB Cluster on CentOS](configure-cluster-rhel.md#centos-howto). +1. Install Percona XtraDB Cluster on three cluster nodes, as described in [Configuring Percona XtraDB Cluster on CentOS](configure-cluster-rhel.md#centos-howto). -2. On the client node, install [ProxySQL](load-balance-proxysql.md#load-balancing-with-proxysql) and `sysbench`: +2. On the client node, install [ProxySQL](load-balance-proxysql.md#load-balancing-with-proxysql) and `sysbench`: ```{.bash data-prompt="$"} $ yum -y install proxysql2 sysbench ``` -3. When all cluster nodes are started, configure ProxySQL using the admin -interface. +3. When all cluster nodes are started, configure ProxySQL using the admin + interface. - !!! admonition "Tip" + !!! admonition "Tip" - To connect to the ProxySQL admin interface, you need a ``mysql`` client. 
- You can either connect to the admin interface from Percona XtraDB Cluster nodes - that already have the ``mysql`` client installed (Node 1, Node 2, Node 3) - or install the client on Node 4 and connect locally. + To connect to the ProxySQL admin interface, you need a ``mysql`` client. + You can either connect to the admin interface from Percona XtraDB Cluster nodes + that already have the ``mysql`` client installed (Node 1, Node 2, Node 3) + or install the client on Node 4 and connect locally. - To connect to the admin interface, use the credentials, host name and port - specified in the [global variables](https://github.com/sysown/proxysql/blob/master/doc/global_variables.md). + To connect to the admin interface, use the credentials, host name and port + specified in the [global variables](https://github.com/sysown/proxysql/blob/master/doc/global_variables.md). - !!! warning + !!! warning - Do not use default credentials in production! + Do not use default credentials in production! - The following example shows how to connect to the ProxySQL admin interface - with default credentials (assuming that ProxySQL IP is 192.168.70.74): + The following example shows how to connect to the ProxySQL admin interface + with default credentials (assuming that ProxySQL IP is 192.168.70.74): - ```{.bash data-prompt="root@proxysql:~#"} - root@proxysql:~# mysql -u admin -padmin -h 127.0.0.1 -P 6032 - ``` + ```{.bash data-prompt="root@proxysql:~#"} + root@proxysql:~# mysql -u admin -padmin -h 127.0.0.1 -P 6032 + ``` - ??? example "Expected output" + ??? example "Expected output" - ```{.text .no-copy} - Welcome to the MySQL monitor. Commands end with ; or \g. - Your MySQL connection id is 2 - Server version: 5.5.30 (ProxySQL Admin Module) + ```{.text .no-copy} + Welcome to the MySQL monitor. Commands end with ; or \g. 
+ Your MySQL connection id is 2 + Server version: 5.5.30 (ProxySQL Admin Module) - Copyright (c) 2009-2020 Percona LLC and/or its affiliates - Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved. + Copyright (c) 2009-2020 Percona LLC and/or its affiliates + Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved. - Oracle is a registered trademark of Oracle Corporation and/or its - affiliates. Other names may be trademarks of their respective - owners. + Oracle is a registered trademark of Oracle Corporation and/or its + affiliates. Other names may be trademarks of their respective + owners. - Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. - mysql> - ``` + mysql> + ``` - To see the ProxySQL databases and tables use the `SHOW DATABASES` and - `SHOW TABLES` commands: - - ```{.bash data-prompt="mysql>"} - mysql> SHOW DATABASES; - ``` + To see the ProxySQL databases and tables use the `SHOW DATABASES` and + `SHOW TABLES` commands: - The following output shows the list of the ProxySQL databases: - - ??? example "Expected output" - - ```{.text .no-copy} - +-----+---------------+-------------------------------------+ - | seq | name | file | - +-----+---------------+-------------------------------------+ - | 0 | main | | - | 2 | disk | /var/lib/proxysql/proxysql.db | - | 3 | stats | | - | 4 | monitor | | - | 5 | stats_monitor | /var/lib/proxysql/proxysql_stats.db | - +-----+---------------+-------------------------------------+ - 5 rows in set (0.00 sec) + ```{.bash data-prompt="mysql>"} + mysql> SHOW DATABASES; ``` - ```{.bash data-prompt="mysql>"} - mysql> SHOW TABLES; - ``` - - The following output shows the list of tables: - - ??? 
example "Expected output" - - ```{.text .no-copy} - +----------------------------------------------------+ - | tables | - +----------------------------------------------------+ - | global_variables | - | mysql_aws_aurora_hostgroups | - | mysql_collations | - | mysql_firewall_whitelist_rules | - | mysql_firewall_whitelist_sqli_fingerprints | - | mysql_firewall_whitelist_users | - | mysql_galera_hostgroups | - | mysql_group_replication_hostgroups | - | mysql_query_rules | - | mysql_query_rules_fast_routing | - | mysql_replication_hostgroups | - | mysql_servers | - | mysql_users | - | proxysql_servers | - | restapi_routes | - | runtime_checksums_values | - | runtime_global_variables | - | runtime_mysql_aws_aurora_hostgroups | - | runtime_mysql_firewall_whitelist_rules | - | runtime_mysql_firewall_whitelist_sqli_fingerprints | - | runtime_mysql_firewall_whitelist_users | - | runtime_mysql_galera_hostgroups | - | runtime_mysql_group_replication_hostgroups | - | runtime_mysql_query_rules | - | runtime_mysql_query_rules_fast_routing | - | runtime_mysql_replication_hostgroups | - | runtime_mysql_servers | - | runtime_mysql_users | - | runtime_proxysql_servers | - | runtime_restapi_routes | - | runtime_scheduler | - | scheduler | - +----------------------------------------------------+ - 32 rows in set (0.00 sec) + The following output shows the list of the ProxySQL databases: + + ??? 
example "Expected output" + + ```{.text .no-copy} + +-----+---------------+-------------------------------------+ + | seq | name | file | + +-----+---------------+-------------------------------------+ + | 0 | main | | + | 2 | disk | /var/lib/proxysql/proxysql.db | + | 3 | stats | | + | 4 | monitor | | + | 5 | stats_monitor | /var/lib/proxysql/proxysql_stats.db | + +-----+---------------+-------------------------------------+ + 5 rows in set (0.00 sec) + ``` + + ```{.bash data-prompt="mysql>"} + mysql> SHOW TABLES; ``` - For more information about admin databases and tables, see [Admin Tables](https://github.com/sysown/proxysql/blob/master/doc/admin_tables.md) - - !!! note - - ProxySQL has 3 areas where the configuration can reside: - - * MEMORY (your current working place) - - * RUNTIME (the production settings) - - * DISK (durable configuration, saved inside an SQLITE database) - - When you change a parameter, you change it in MEMORY area. - That is done by design to allow you to test the changes - before pushing to production (RUNTIME), or saving them to disk. + The following output shows the list of tables: + + ??? 
example "Expected output" + + ```{.text .no-copy} + +----------------------------------------------------+ + | tables | + +----------------------------------------------------+ + | global_variables | + | mysql_aws_aurora_hostgroups | + | mysql_collations | + | mysql_firewall_whitelist_rules | + | mysql_firewall_whitelist_sqli_fingerprints | + | mysql_firewall_whitelist_users | + | mysql_galera_hostgroups | + | mysql_group_replication_hostgroups | + | mysql_query_rules | + | mysql_query_rules_fast_routing | + | mysql_replication_hostgroups | + | mysql_servers | + | mysql_users | + | proxysql_servers | + | restapi_routes | + | runtime_checksums_values | + | runtime_global_variables | + | runtime_mysql_aws_aurora_hostgroups | + | runtime_mysql_firewall_whitelist_rules | + | runtime_mysql_firewall_whitelist_sqli_fingerprints | + | runtime_mysql_firewall_whitelist_users | + | runtime_mysql_galera_hostgroups | + | runtime_mysql_group_replication_hostgroups | + | runtime_mysql_query_rules | + | runtime_mysql_query_rules_fast_routing | + | runtime_mysql_replication_hostgroups | + | runtime_mysql_servers | + | runtime_mysql_users | + | runtime_proxysql_servers | + | runtime_restapi_routes | + | runtime_scheduler | + | scheduler | + +----------------------------------------------------+ + 32 rows in set (0.00 sec) + ``` + + For more information about admin databases and tables, see [Admin Tables](https://github.com/sysown/proxysql/blob/master/doc/admin_tables.md) + + !!! note + + ProxySQL has 3 areas where the configuration can reside: + + * MEMORY (your current working place) + + * RUNTIME (the production settings) + + * DISK (durable configuration, saved inside an SQLITE database) + + When you change a parameter, you change it in MEMORY area. + That is done by design to allow you to test the changes + before pushing to production (RUNTIME), or saving them to disk. 
### Adding cluster nodes to ProxySQL @@ -166,7 +166,7 @@ INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.7 INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.70.73',10,3306,1000); ``` -ProxySQL v2.0 supports PXC natlively. It uses the concept of *hostgroups* +ProxySQL v2.0 supports PXC natively. It uses the concept of _hostgroups_ (see the value of hostgroup_id in the mysql_servers table) to group cluster nodes to balance the load in a cluster by routing different types of traffic to different groups. This information is stored in the [runtime_]mysql_galera_hostgroups table. **Columns of the `[runtime_]mysql_galera_hostgroups` table** -|Column name|Description| -| ---------- | ----------- | -|writer_hostgroup:|The ID of the hostgroup that refers to the WRITER node| -|backup_writer_hostgroup|The ID of the hostgroup that contains candidate WRITER servers| -|reader_hostgroup|The ID of the hostgroup that contains candidate READER servers| -|offline_hostgroup|The ID of the hostgroup that will eventually contain the WRITER node that will be put OFFLINE| -|active|`1` (Yes) to inidicate that this configuration should be used; `0` (No) - otherwise| -|max_writers|The maximum number of WRITER nodes that must operate simultaneously. For most cases, a reasonable value is `1`. The value in this column may not exceed the total number of nodes.| -|writer_is_also_reader|`1` (Yes) to keep the given node in both `reader_hostgroup` and `writer_hostgroup`. `0` (No) to remove the given node from `reader_hostgroup` if it already belongs to `writer_hostgroup`.| -|max_transactions_behind|As soon as the value of :variable:`wsrep_local_recv_queue` exceeds the number stored in this column the given node is set to `OFFLINE`. 
Set the value carefully based on the behaviour of the node.| -|comment|Helpful extra information about the given node| +| Column name | Description | +| ----------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| writer_hostgroup: | The ID of the hostgroup that refers to the WRITER node | +| backup_writer_hostgroup | The ID of the hostgroup that contains candidate WRITER servers | +| reader_hostgroup | The ID of the hostgroup that contains candidate READER servers | +| offline_hostgroup | The ID of the hostgroup that will eventually contain the WRITER node that will be put OFFLINE | +| active | `1` (Yes) to inidicate that this configuration should be used; `0` (No) - otherwise | +| max_writers | The maximum number of WRITER nodes that must operate simultaneously. For most cases, a reasonable value is `1`. The value in this column may not exceed the total number of nodes. | +| writer_is_also_reader | `1` (Yes) to keep the given node in both `reader_hostgroup` and `writer_hostgroup`. `0` (No) to remove the given node from `reader_hostgroup` if it already belongs to `writer_hostgroup`. | +| max_transactions_behind | As soon as the value of :variable:`wsrep_local_recv_queue` exceeds the number stored in this column the given node is set to `OFFLINE`. Set the value carefully based on the behaviour of the node. | +| comment | Helpful extra information about the given node | Make sure that the variable mysql-server_version refers to the correct version. 
For Percona XtraDB Cluster {{vers}}, set it to {{vers}} accordingly:

@@ -272,7 +272,6 @@ mysql> select hostgroup_id,hostname,port,status,weight from runtime_mysql_server
 
     ProxySQL Documentation: `mysql_query_rules` table
     https://github.com/sysown/proxysql/wiki/Main-(runtime)#mysql_query_rules
-
 ### ProxySQL failover behavior
 
 Notice that all servers were inserted into the mysql_servers table with the
@@ -304,15 +303,29 @@ node is put back online), the node with the highest weight is automatically
 elected for write requests.
 
+
 ### Creating a ProxySQL monitoring user
 
 To enable monitoring of Percona XtraDB Cluster nodes in ProxySQL, create a user
 with `USAGE` privilege on any node in the cluster and configure the user in
 ProxySQL.
 
-The following example shows how to add a monitoring user on Node 2:
+The following example shows how to add a monitoring user on Node 2 if you are using the deprecated `mysql_native_password` authentication method:
 
 ```{.bash data-prompt="mysql>"}
 mysql> CREATE USER 'proxysql'@'%' IDENTIFIED WITH mysql_native_password BY 'ProxySQLPa55';
+```
+
+The following example adds a monitoring user on Node 2 if you are using the `caching_sha2_password` authentication method:
+
+```{.bash data-prompt="mysql>"}
+mysql> CREATE USER 'proxysql'@'%' \
+    IDENTIFIED WITH caching_sha2_password \
+    BY 'ProxySQLPa55';
+```
+
+For either authentication method, run the following command to grant the `USAGE` privilege to the 'proxysql' account. This privilege lets the account connect to the server without granting any other rights, which is sufficient for tools that only need to monitor server state, such as checking whether a node is read-only.
+
+```{.bash data-prompt="mysql>"}
 mysql> GRANT USAGE ON *.* TO 'proxysql'@'%';
 ```
 
@@ -324,11 +337,11 @@ WHERE variable_name='mysql-monitor_username';
 
 mysql> UPDATE global_variables SET variable_value='ProxySQLPa55'
 WHERE variable_name='mysql-monitor_password';
-```
+```
 
 ### Saving and loading the configuration
 
-To load this configuration at runtime, issue the `LOAD` command. To save these
+To load this configuration at runtime, issue the `LOAD` command. To save these
 changes to disk (ensuring that they persist after ProxySQL shuts down), issue
 the `SAVE` command.
 
@@ -406,14 +419,14 @@ The example of the output is the following:
 
 Query OK, 1 row affected (0.00 sec)
 ```
 
-!!! note
+!!! note
 
     ProxySQL currently doesn’t encrypt passwords.
 
 !!! admonition "See also"
 
     [More information about password encryption in ProxySQL](https://github.com/sysown/proxysql/wiki/MySQL-{{vers}})
-
+
 Load the user into runtime space and save these changes to disk (ensuring
 that they persist after ProxySQL shuts down):
 
@@ -445,13 +458,24 @@ root@proxysql:~# mysql -u appuser -p$3kRetp@$sW0rd -h 127.0.0.1 -P 6033
 
 Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
 ```
 
- To provide read/write access to the cluster for ProxySQL, add this user on one
- of the Percona XtraDB Cluster nodes:
+The following example adds an `appuser` user account if you are using the deprecated `mysql_native_password` authentication method:
 
 ```{.bash data-prompt="mysql>"}
 mysql> CREATE USER 'appuser'@'192.168.70.74' IDENTIFIED WITH mysql_native_password by '$3kRetp@$sW0rd';
+```
+
+The following example adds an `appuser` user account if you are using the `caching_sha2_password` authentication method:
+
+```{.bash data-prompt="mysql>"}
+mysql> CREATE USER 'appuser'@'192.168.70.74' \
+    IDENTIFIED WITH caching_sha2_password \
+    BY '$3kRetp@$sW0rd';
+```
+
+The following command grants the `appuser` account all privileges on all databases and tables:
+ +```{.bash data-prompt="mysql>"} mysql> GRANT ALL ON *.* TO 'appuser'@'192.168.70.74'; ``` @@ -460,22 +484,22 @@ mysql> GRANT ALL ON *.* TO 'appuser'@'192.168.70.74'; After you set up Percona XtraDB Cluster in your testing environment, you can test it using the `sysbench` benchmarking tool. -1. Create a database (sysbenchdb in this example; you can use a -different name): +1. Create a database (sysbenchdb in this example; you can use a + different name): - ```{.bash data-prompt="mysql>"} - mysql> CREATE DATABASE sysbenchdb; - ``` + ```{.bash data-prompt="mysql>"} + mysql> CREATE DATABASE sysbenchdb; + ``` - The following output confirms that a new database has been created: + The following output confirms that a new database has been created: - ??? example "Expected output" + ??? example "Expected output" - ```{.text .no-copy} - Query OK, 1 row affected (0.01 sec) - ``` + ```{.text .no-copy} + Query OK, 1 row affected (0.01 sec) + ``` -2. Populate the table with data for the benchmark. Note that you +2. Populate the table with data for the benchmark. Note that you should pass the database you have created as the value of the `--mysql-db` parameter, and the name of the user who has full access to this database as the value of the `--mysql-user` @@ -488,7 +512,7 @@ different name): --table-size=1000 prepare ``` -3. Run the benchmark on port 6033: +3. 
Run the benchmark on port 6033: ```{.bash data-prompt="$"} $ sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-db=sysbenchdb \ @@ -501,7 +525,7 @@ different name): **Related sections and additional reading** -* [Load balancing with ProxySQL](load-balance-proxysql.md#load-balancing-with-proxysql) -* [Configuring Percona XtraDB Cluster on CentOS](configure-cluster-rhel.md#centos-howto) -* [Percona Blog post: ProxySQL Native Support for Percona XtraDB Cluster (PXC)](https://www.percona.com/blog/2019/02/20/proxysql-native-support-for-percona-xtradb-cluster-pxc/) -* [GitHub repository for the sysbench benchmarking tool](https://github.com/akopytov/sysbench/) \ No newline at end of file +- [Load balancing with ProxySQL](load-balance-proxysql.md#load-balancing-with-proxysql) +- [Configuring Percona XtraDB Cluster on CentOS](configure-cluster-rhel.md#centos-howto) +- [Percona Blog post: ProxySQL Native Support for Percona XtraDB Cluster (PXC)](https://www.percona.com/blog/2019/02/20/proxysql-native-support-for-percona-xtradb-cluster-pxc/) +- [GitHub repository for the sysbench benchmarking tool](https://github.com/akopytov/sysbench/)
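+
+After the benchmark completes, you can check how ProxySQL distributed the
+queries across the cluster nodes (a sketch; `stats_mysql_connection_pool`
+is part of ProxySQL's standard stats schema, queried through the admin
+interface):
+
+```{.bash data-prompt="mysql>"}
+mysql> SELECT hostgroup, srv_host, status, Queries
+       FROM stats_mysql_connection_pool;
+```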