News feeds
Planet MySQL
Planet MySQL - https://planet.mysql.com

  • Azure VM Application Consistent MySQL DB Disk Snapshots
    Backups are the backbone of any database system: they are what lets us recover data after a crash, provision a new machine, and handle many other scenarios listed here. As part of the backup process, a snapshot is taken, and the data is transferred to the Recovery Services vault with no impact on production workloads. The snapshot provides different levels of consistency, as described below:

    1. Application-consistent: App-consistent backups capture memory content and pending I/O operations. App-consistent snapshots use a VSS writer (or pre/post scripts for Linux) to ensure the consistency of the application data before a backup occurs. When you recover a VM from an app-consistent snapshot, the VM boots up with no data corruption or loss, and the apps start in a consistent state.

    2. File-system consistent: File-system consistent backups provide consistency by taking a snapshot of all files at the same time. When you recover a VM from a file-system consistent snapshot, the VM boots up with no data corruption or loss, but apps need to implement their own "fix-up" mechanism to make sure that the restored data is consistent.

    3. Crash-consistent: Crash-consistent snapshots typically occur if an Azure VM shuts down at the time of backup. Only the data that already exists on the disk at the time of backup is captured. Recovery starts with the VM boot process, followed by a disk check to fix corruption errors. Any in-memory data or write operations that had not reached the disk before the crash are lost, and apps must implement their own data verification. For example, a database app can use its transaction log for verification: if the transaction log has entries that are not in the data files, the database software rolls transactions back until the data is consistent.

    Having reviewed the snapshot consistency levels, we should choose application-consistent snapshots, which guarantee no data loss.
    So below are the steps for setting up application-consistent snapshots for a MySQL VM. Schedule backup on the VM using the procedure listed here, then set up the pre and post backup hooks using the following procedure.

    Step 1: Create the necessary directories and files:

        mkdir -p /etc/azure/logs

    Step 2: Create a MySQL user with the SUPER privilege, so the hooks can stop and start the slave:

        GRANT SUPER ON *.* TO 'bkpuser'@'localhost' IDENTIFIED BY 'secretpass';

    Step 3: Create the files below, which configure the pre and post backup hooks.

    /etc/azure/VMSnapshotScriptPluginConfig.json:

        {
            "pluginName" : "ScriptRunner",
            "preScriptLocation" : "/etc/azure/pre-mysql-backup.sh",
            "postScriptLocation" : "/etc/azure/post-mysql-backup.sh",
            "preScriptParams" : ["", ""],
            "postScriptParams" : ["", ""],
            "preScriptNoOfRetries" : 0,
            "postScriptNoOfRetries" : 0,
            "timeoutInSeconds" : 30,
            "continueBackupOnFailure" : true,
            "fsFreezeEnabled" : true
        }

    /etc/azure/pre-mysql-backup.sh:

        #!/bin/bash
        # Make the pipeline below report mysql's exit status, not tee's
        set -o pipefail
        # variable declaration
        PROGNAME=$(basename $0)
        host=$(hostname)
        IP=$(hostname -I | xargs)
        USER="bkpuser"
        PASS="secretpass"
        SENDTO='recipients@company.com'
        # functions
        function error_exit {
            if [ $? -ne 0 ]; then
                echo "${PROGNAME}: Step $1:- Failed at $(date +"%Y_%m_%d")" | mail -s "${PROGNAME} failed for Host : $host" -r "dba-alerts <dba-alerts@company.com>" $SENDTO
                exit 1
            fi
        }
        touch /etc/azure/logs/${PROGNAME}.log
        mysql -u$USER -p$PASS -e "stop slave;" 2>&1 | tee /etc/azure/logs/${PROGNAME}.log
        error_exit "stopping slave"

    /etc/azure/post-mysql-backup.sh:

        #!/bin/bash
        # Make the pipeline below report mysql's exit status, not tee's
        set -o pipefail
        # variable declaration
        PROGNAME=$(basename $0)
        host=$(hostname)
        IP=$(hostname -I | xargs)
        USER="bkpuser"
        PASS="secretpass"
        SENDTO='recipients@company.com'
        # functions
        function error_exit {
            if [ $? -ne 0 ]; then
                echo "${PROGNAME}: Step $1:- Failed at $(date +"%Y_%m_%d")" | mail -s "${PROGNAME} failed for Host : $host" -r "dba-alerts <dba-alerts@company.com>" $SENDTO
                exit 1
            fi
        }
        touch /etc/azure/logs/${PROGNAME}.log
        mysql -u$USER -p$PASS -e "start slave;" 2>&1 | tee /etc/azure/logs/${PROGNAME}.log
        error_exit "starting slave"

    Step 4: Restrict the file permissions, as required by Azure Backup:

        cd /etc/azure
        chmod 700 *.sh
        chmod 600 VMSnapshotScriptPluginConfig.json

    Ref:
    [1]. https://docs.microsoft.com/en-us/azure/backup/backup-azure-vms-introduction#snapshot-creation
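    Before the first scheduled backup runs, it can be worth checking that the hook files are in place with the permissions Azure Backup expects. A small illustrative sketch (the check_hooks helper and the scratch-directory demo are my own, not part of the Azure tooling):

```shell
#!/bin/sh
# Illustrative helper: verify the hook files created in Steps 1-4.
# check_hooks DIR prints one line per problem and returns non-zero if any found.
check_hooks() {
  dir="$1"; rc=0
  # The plugin config must parse as JSON, or the backup extension ignores the hooks.
  python3 -m json.tool "$dir/VMSnapshotScriptPluginConfig.json" >/dev/null 2>&1 \
    || { echo "invalid JSON: $dir/VMSnapshotScriptPluginConfig.json"; rc=1; }
  # The hook scripts should be readable and executable by root only (mode 700).
  for f in "$dir/pre-mysql-backup.sh" "$dir/post-mysql-backup.sh"; do
    [ "$(stat -c '%a' "$f" 2>/dev/null)" = "700" ] \
      || { echo "expected mode 700: $f"; rc=1; }
  done
  return $rc
}

# Demo against a scratch copy; on the real VM, point it at /etc/azure instead.
demo=$(mktemp -d)
printf '{"pluginName": "ScriptRunner"}\n' > "$demo/VMSnapshotScriptPluginConfig.json"
touch "$demo/pre-mysql-backup.sh" "$demo/post-mysql-backup.sh"
chmod 700 "$demo/pre-mysql-backup.sh" "$demo/post-mysql-backup.sh"
check_hooks "$demo" && echo "hooks look sane"
```

    Running check_hooks /etc/azure after Step 4 should print "hooks look sane"; any complaint it prints points at a file the backup extension would reject or ignore.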

  • Orchestrator RAFT Leader Check with Proxy Pass and Basic Auth Using Nginx
    Recently we set up Orchestrator in high-availability mode using RAFT. We run a 3-node setup in which one node is the leader and the other two are healthy raft members. To access the Orchestrator service we must talk only to the leader node, so we use /api/leader-check as the HTTP health check for our proxy. This URL returns HTTP 200 on the leader and 404 on the other members. Using the code below we set up an HTTP health check with basic auth in nginx.

    Prerequisite: Lua support must be enabled in nginx (for example via OpenResty, whose lua-resty-upstream-healthcheck library is used below).

    The following config defines the upstreams with a health check:

        upstream orchestrator {
            server 10.xx.xx.35:3000 max_fails=2;
            server 10.xx.xx.37:3000 max_fails=2;
            server 10.xx.xx.40:3000 max_fails=2;
        }
        lua_shared_dict myhealthcheck 1m;
        lua_socket_log_errors off;
        include /etc/nginx/lua/active_health_checks.lua;

    Before creating the health-check script we need the credentials base64-encoded; below is the command to create that value:

        echo -n 'user:password' | base64

    The Lua health check, defined in /etc/nginx/lua/active_health_checks.lua:

        init_worker_by_lua_block {
            local hc = require "resty.upstream.healthcheck"
            local ok, err = hc.spawn_checker{
                shm = "myhealthcheck",      -- defined by "lua_shared_dict"
                upstream = "orchestrator",  -- defined by "upstream"
                type = "http",
                http_req = "GET /api/leader-check HTTP/1.0\r\nHost: orchestrator.domain.com\r\nAuthorization: Basic {output of command}\r\n\r\n",
                interval = 3000,  -- run the check cycle every 3 sec
                timeout = 3000,   -- 3 sec timeout for network operations
                fall = 3,         -- # of successive failures before turning a peer down
                rise = 2,         -- # of successive successes before turning a peer up
                valid_statuses = {200, 302, 301},  -- list of valid HTTP status codes
                concurrency = 10, -- concurrency level for test requests
            }
            if not ok then
                ngx.log(ngx.ERR, "failed to spawn health checker: ", err)
                return
            end
        }

    Note: I am quite new to nginx, so I would love to see comments on improving this.
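    To sanity-check the setup by hand, the Authorization value used in http_req above can be built and the endpoint probed per node. A small sketch (the user:password credentials are the placeholder from the base64 command above; node IPs come from the upstream block):

```shell
# Build the value for the "Authorization:" line in http_req above.
# CREDS is a placeholder; use the real basic-auth user and password.
CREDS='user:password'
AUTH="Basic $(printf '%s' "$CREDS" | base64)"
echo "$AUTH"   # prints: Basic dXNlcjpwYXNzd29yZA==

# Manual probe, to be run from a host that can reach the nodes: the current
# leader should answer 200 and the other raft members 404.
#   for node in 10.xx.xx.35 10.xx.xx.37 10.xx.xx.40; do
#     curl -s -o /dev/null -w "$node %{http_code}\n" \
#       -H "Authorization: $AUTH" "http://$node:3000/api/leader-check"
#   done
```

    If the probe shows more than one node returning 200, the raft cluster has a split and the proxy health check would route to multiple "leaders".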

  • Q & A on Webinar “How Safe is Asynchronous Master-Master Setup in MySQL?”
    First I want to thank everyone who attended my May 21, 2020 webinar "How Safe is Asynchronous Master-Master Setup in MySQL?". Recording and slides are available on the webinar page. Here are answers to the questions from participants which I was not able to provide during the webinar.

    Q: What do you generally think of hosting relational databases on VMs as opposed to bare metal?

    A: With modern hardware and modern virtual machines this is absolutely possible. I know of many successful, highly loaded applications that run MySQL on VMs. Just note that running a few VMs on a single physical machine may lead to resource loss rather than savings. For example, MySQL uses the InnoDB buffer pool to optimize operations on InnoDB tables. This option is essential for performance and should be big enough, ideally large enough to store the whole active working set in memory. Still, MySQL uses memory for other operations as well. By installing a few MySQL instances on the same physical machine you will have a smaller amount of data in each individual buffer pool than if you ran a single server. The same reasoning applies to the CPU cores which serve active threads.

    Q: The Oracle RDBMS currently has an active/active synchronous tool, called Active Data Guard, to ensure deduplication amongst other functions. Are there any such tools in production for MySQL?

    A: Active Data Guard is a physical replication solution which uses the redo log to transfer data. As the Oracle (Active) Data Guard 19c white paper explains in "How Data Guard Synchronizes Standby Database(s)", a Data Guard configuration includes a production database referred to as the primary database, and up to 30 directly connected replicas referred to as standby databases. Primary and standby databases connect over TCP/IP using Oracle Net Services. There are no restrictions on where the databases are physically located provided they can communicate with each other.
    A standby database is created from a backup of the primary database without requiring any downtime of the production application or database. Once a standby database has been created and configured, Data Guard automatically synchronizes the primary and standby databases by transmitting the primary database redo (the change vector information used by every Oracle Database to protect transactions) as it is generated at the primary database and applying it to the standby database (Oracle (Active) Data Guard 19c white paper).

    Open source versions of MySQL do not support physical replication. Rather, they support only logical replication, in which updates, stored in special log files called binary logs, are replayed on the slave server. Physical replication is by nature synchronous and will simply stop working if data mismatches. With logical replication, a data mismatch does not immediately interrupt replication. However, if a data mismatch is undesirable, there are a few things that you can do. First, think about switching to synchronous replication, such as Galera, InnoDB Cluster, or Percona XtraDB Cluster (PXC). If synchronous replication cannot be used, consider the following safety measures for the built-in, asynchronous, replication:

    - Start the slave server with the option [super-]read-only (available in all supported versions of Percona Server)
    - Use row-based replication to avoid data mismatch in case of unsafe statements
    - Use GTIDs to prevent applying the same transaction twice
    - Periodically run pt-table-checksum to check whether a data mismatch happened
    - Fix errors with the help of pt-table-sync

    Note that physical replication does not support an active-active setup. Active Data Guard redirects DML statements to the master node to imitate active-active behavior. For MySQL, when you do not have to write directly to two nodes, you can use ProxySQL for the same purpose.

    Q: Also, there is no equivalent to Active Data Guard in MySQL.
    Oracle has MySQL utilities called mysqlfailover and mysqlreplicate that allow for automatic failover and fast promotion of a former master to slave.

    A: mysqlfailover and mysqlreplicate are part of the MySQL Utilities package, which is currently deprecated. For failover capabilities, we currently recommend using ProxySQL and Orchestrator. Check also this blog post with details on how to set up both products.

    Q: What is the best architecture for master-master replication in active-active mode?

    A: It depends on your needs. If you can install your nodes close enough to use synchronous replication, do it. If the nodes must be geographically distributed, the only option is asynchronous replication. In this case, consider the safety recommendations that I discussed in the webinar.
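    The asynchronous-replication safety measures discussed above can be sketched as commands and config. In this sketch, master.example.com and the percona.checksums table are placeholders; pt-table-checksum and pt-table-sync come from Percona Toolkit:

```shell
# Sketch of the safety measures for built-in asynchronous replication.
# Host names and the checksum table location are placeholders.

# Row-based replication plus GTIDs are server options; this prints the
# my.cnf fragment that would go on every node:
CNF='[mysqld]
binlog_format            = ROW
gtid_mode                = ON
enforce_gtid_consistency = ON'
printf '%s\n' "$CNF"

# On each replica, reject writes from ordinary users:
#   mysql -e "SET GLOBAL super_read_only = ON;"

# Periodically detect drift from a host that can reach the master:
#   pt-table-checksum --replicate=percona.checksums h=master.example.com

# Review and repair any drift that the checksum run reported:
#   pt-table-sync --replicate=percona.checksums h=master.example.com --print
```

    The checksum run writes per-chunk checksums into percona.checksums on the master; they replicate to the slaves, where a mismatch against the locally computed value pinpoints drifted tables for pt-table-sync to fix.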

  • Percona Projects for Google Summer of Code – 2020
    We are proud to announce that Percona was selected as a participating organization for the Google Summer of Code (GSoC) 2020 program. This is our second year as a participating org with the GSoC program. GSoC is a great program for involving young student developers in open source projects. We participated in the program in 2019 for the first time and were really happy and satisfied with the results. The Percona Platform Engineering team decided to participate again for the 2020 program, and we are glad to inform you that we were selected, and we welcome the student who will work with our team during the summer of 2020 on their GSoC project.

    Preparations

    We started planning for GSoC around November-December 2019. With help from our Product Management team, we were able to shortlist a few ideas which we thought were really the right fit for our students. With Google Summer of Code, we realized it is very important to select projects which fit the timeline of the program and justify the purpose of the project for both the student and the organization. With the help of our Marketing and HR departments, we were able to prepare a landing page for our potential GSoC students with all the relevant information about projects and communication platforms. From our past year's experience and observation of other organizations, we realized most students start their preparations right from mid-January. Since this is just our second year as a participating organization, we are really happy with the response we got from students. Let's look at the numbers and compare them with 2019; these numbers are based on org data exported from https://summerofcode.withgoogle.com/

    Student and Projects

    The student intern who will be working with us is Meet Patel. This is the first time Meet has been selected as a student intern with the GSoC program.
    We had selected two students for the program, but unfortunately one of them failed to meet the eligibility criteria and was dropped later.

    Meet Patel

    Meet is a 2nd-year undergraduate at DAIICT, Gandhinagar, India, pursuing a bachelor's degree in Information and Communication Technology with a minor in Computational Science. Meet is an open-source enthusiast and an avid developer who is always excited to learn about new technologies. Meet will work on the GSoC project for the refactoring of the PMM Framework. The PMM Framework is an automated testing framework that is used to set up PMM with various databases and their multiple instances, perform load tests, and wipe everything after the tests are done. One of the major objectives of this project is a well-documented script that helps new users easily set up PMM, as well as refactoring the framework to make it more usable for internal testing. To track the progress of the project, please follow the GSoC project branch.

    The Percona mentors for the project are Puneet Kala, Frontend/Web QA Automation Engineer, and Nailya Kutlubaeva, QA Engineer.

    The GSoC team at Percona is thankful to everyone involved in this year's application and selection process. We are excited to have a team of mentors helping students learn about our products and working in open source. We're looking forward to enjoying the two-way dialogue and guiding the students to hone their skills as they experience working on these valuable PMM developments. If you have any questions about the GSoC program, please feel free to write to us at gsoc@percona.com.

    The post Percona Projects for Google Summer of Code – 2020 appeared first on Percona Community Blog.

  • Removing Constraint Checks
    Constraint checks that actually check the constraints were introduced last year with MySQL 8.0.16, and they really do help you keep bad data out of your database. I have found them very handy in the past year, but today I spotted a question on a website about how to remove a constraint check.

    What is a Constraint Check?

    It is an integrity check. In the following example a constraint is set up to make sure the values of the column 'id' are greater than one hundred.

        CREATE TABLE c1 (id INT NOT NULL,
            CONSTRAINT id_gt_100 CHECK ((id > 100)));

    A simple test with a value in range runs perfectly fine.

        INSERT INTO c1 (id) VALUES (10000);
        Query OK, 1 row affected (0.0037 sec)

    But you will receive an error if the value of 'id' is less than 100.

        INSERT INTO c1 (id) VALUES (10);
        ERROR: 3819: Check constraint 'id_gt_100' is violated.

    Remember, it is cheaper and easier to keep bad data out of your databases than it is to correct it later.

    Stopping the Check

    If you like the check but have some reason to turn it off, say to bulk load some data in a situation where you can clean up the upload later, then use ALTER TABLE to turn off enforcement.

        ALTER TABLE c1 ALTER CHECK id_gt_100 NOT ENFORCED;
        Query OK, 0 rows affected (0.0150 sec)
        Records: 0  Duplicates: 0  Warnings: 0

        INSERT INTO c1 (id) VALUES (10);
        Query OK, 1 row affected (0.0038 sec)

    Or you can remove the constraint check entirely.

        ALTER TABLE c1 DROP CHECK id_gt_100;
        Query OK, 0 rows affected (0.0091 sec)
        Records: 0  Duplicates: 0  Warnings: 0

        SHOW CREATE TABLE c1\G
        *************************** 1. row ***************************
               Table: c1
        Create Table: CREATE TABLE `c1` (
          `id` int NOT NULL
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
        1 row in set (0.0005 sec)

    Further reading:
    MySQL 8.0.16 Introducing CHECK constraint
    MySQL Manual CHECK Constraints

    All opinions expressed in this blog are those of Dave Stokes, who is actually amazed to find anyone else agreeing with him.
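    One follow-up worth noting: if you disabled the check with NOT ENFORCED rather than dropping it, you can turn it back on once the bulk load is cleaned up. A sketch (the DELETE predicate is an illustrative guess at the cleanup; "your_schema" is a placeholder for the schema that holds c1):

```shell
# Clean up the bulk-loaded rows, then re-enforce the check from the example.
# MySQL validates existing rows when a check is re-enforced, so any row still
# violating the constraint makes the ALTER fail with error 3819.
SQL='DELETE FROM c1 WHERE id <= 100;
ALTER TABLE c1 ALTER CHECK id_gt_100 ENFORCED;'
printf '%s\n' "$SQL"
# Run against the schema holding c1, e.g.:
#   mysql your_schema <<< "$SQL"
```

    Re-enforcing is usually preferable to DROP CHECK followed by re-adding the constraint, since the table definition keeps the constraint name and expression throughout.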