Mar 14, 2012

How to Set Up MySQL Replication in 12 Easy Steps

A few weeks ago I provided a link to a great, complete guide to MySQL replication. Yet, since then I have received many requests for a short and easy guide to a new MySQL replication setup.

Well, Fasten Your Seat-belts
  1. Install a MySQL instance that will serve as a Master
  2. Install a MySQL instance that will serve as a Slave
  3. Configure the Master my.cnf file (located in /etc/ on CentOS) with the server id and the log file name: 
    1. [mysqld]
    2. server-id = 1
    3. log-bin   = master-bin.log
  4. Configure the Slave my.cnf with the server id, the relay logs and the system databases to ignore:
    1. server-id = 2 
    2. relay-log-index = slave-relay-bin.index
    3. relay-log = slave-relay-bin
    4. replicate-wild-ignore-table=mysql.%
    5. replicate-wild-ignore-table=information_schema.%
    6. replicate-wild-ignore-table=performance_schema.%
  5. For MySQL 5.6: delete auto.cnf from the slave's MySQL data directory, so the slave generates a new server UUID: slave> rm -f /var/lib/mysql/auto.cnf
  6. Restart the MySQL daemons on both servers to apply the my.cnf changes.
  7. If both servers were just installed, there is no need to sync their data files (since they should have the same data files). Otherwise, you should either stop the slave and master and copy the data files using SCP, or just perform a mysqldump. Notice! Before copying the files, get the Master location as described below.
  8. Get the master location, so we can sync the slave to it:
    1. master> FLUSH TABLES WITH READ LOCK;
    2. master> SHOW MASTER STATUS;
    3. +-------------------+----------+--------------+------------------+
    4. | File              | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    5. +-------------------+----------+--------------+------------------+
    6. | master-bin.000013 |  1233453 |              |                  |
    7. +-------------------+----------+--------------+------------------+
    8. master> UNLOCK TABLES;
  9. Provide the slave with replication permissions:
    1. master> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'slavepass';
    2. master> FLUSH PRIVILEGES;
  10. Setup the slave using the chosen user/password and the master location we found before:
    1. slave> CHANGE MASTER TO
    2.     ->     MASTER_HOST='10.17.16.253',
    3.     ->     MASTER_PORT=3306,
    4.     ->     MASTER_USER='repl',
    5.     ->     MASTER_PASSWORD='slavepass',
    6.     ->     MASTER_LOG_FILE='master-bin.000013',
    7.     ->     MASTER_LOG_POS=1233453;
  11. Now, start the slave and verify it's running: 
    1. slave> START SLAVE; SHOW SLAVE STATUS\G
  12. If everything is Okay, verify the replication really works by:
    1. Creating a table and inserting values to it in the master:
      1. master> CREATE DATABASE a;
      2. master> CREATE TABLE a.b (c int);
      3. master> INSERT INTO a.b (c) VALUES (1);
    2. Verifying that slave/master> SELECT * FROM a.b; returns the same values in the master and the slave;
    3. Dropping the database in the master: master> DROP DATABASE IF EXISTS a;
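For a scripted setup, the coordinates from step 8 can be turned into the step 10 statement automatically. The sketch below only builds the CHANGE MASTER TO statement as a string; the host, user, password and binlog coordinates are the example values from the steps above, so adjust them to your environment before feeding the result to the slave.

```shell
# Build the CHANGE MASTER TO statement from the SHOW MASTER STATUS output.
# All values below are the example values from the guide, not real ones.
MASTER_HOST=10.17.16.253           # master's address
MASTER_LOG_FILE=master-bin.000013  # "File" column of SHOW MASTER STATUS
MASTER_LOG_POS=1233453             # "Position" column of SHOW MASTER STATUS

CHANGE_MASTER_SQL="CHANGE MASTER TO"
CHANGE_MASTER_SQL="$CHANGE_MASTER_SQL MASTER_HOST='$MASTER_HOST', MASTER_PORT=3306,"
CHANGE_MASTER_SQL="$CHANGE_MASTER_SQL MASTER_USER='repl', MASTER_PASSWORD='slavepass',"
CHANGE_MASTER_SQL="$CHANGE_MASTER_SQL MASTER_LOG_FILE='$MASTER_LOG_FILE', MASTER_LOG_POS=$MASTER_LOG_POS;"

echo "$CHANGE_MASTER_SQL"
# Then run it on the slave, e.g.:
# mysql -h slave-host -u root -p -e "$CHANGE_MASTER_SQL START SLAVE;"
```

This keeps the coordinates in one place, so a re-sync only requires updating the two variables.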
P.S. If you are interested in implementing the replication using SSL, follow these instructions.

Bottom Line
Only a few minutes and you have a running MySQL replication setup. Even these kinds of tasks can be simple and smooth, leaving you time to drink a cup of coffee!

Keep Performing,

Mar 6, 2012

Amazon Latest Price Reduction: News for Large Accounts and Linux based Users

Amazon announced today a price reduction on EC2 instance prices. The price reduction does not include storage or traffic, so you may expect the effective cost reduction on your monthly bill to be about half the headline numbers.


On Demand Pricing Effects
Minor price reduction on the on demand instances: 5% on small instances, 10% on larger instances. Price Reduction Target: Traditional hosting offers.


Reserved Instances Pricing Effects
Major price reduction on the reserved instances: 20% effective reduction. Price Reduction Target: Traditional hosting offers.


Volume discounts
Finally, Amazon offers volume discounts. The volume discounts are focused on large accounts that spend at least $1M annually on AWS services. The volume discounts are focused on the reserved instances (the equivalent of traditional hosting). You can expect a 10% reduction (5% off the bill's bottom line) for a $250K upfront bill ($1M annual AWS bill) and a 20% reduction (10% off the bill's bottom line) for a $2M upfront bill ($8M annual AWS bill). Price Reduction Target: Traditional hosting offers.


Linux or Windows?
The price reduction increases the price differences between Windows based instances and Linux based. Price Reduction Target: Avoid price wars with Microsoft.


Why Amazon Made These Changes?
The price reduction signals that Amazon faces greater price pressure from Linux cloud providers than from Microsoft. It also signals that traditional hosting providers' offers focus on large customers and larger server accounts. These price reductions will help Amazon reduce larger customers' migration out of its services.

Keep Performing,

Feb 13, 2012

MySQL Replication Complete Guide

This is probably my shortest post.

Mats Kindahl's MySQL Replication guide is short, concise and includes (almost) everything you need.


Keep Performing,
Moshe Kaplan

Feb 11, 2012

Do You Need More Disk Space for the MySQL Server at Amazon Web Services (AWS/EC2)?

If you deal with large databases, you will probably need more than the provisioned storage of the default Amazon AMI (8GB).


UPDATE: Since in many cases you may need to swap the current disk for a new, larger one, I've added comments regarding this case.


The following 11-step guide will help you expand your disk to meet your business needs:
  1. Create a new storage volume at the Amazon console (or using AWS API). Make sure it is big enough to accommodate storage, log files, bin files and future growth. Notice! verify the new drive is in the same availability zone as the target instance.
  2. Write down the device name you chose. For example: /dev/sdf.
  3. Format the new volume as described in Chris Dean's post:
    1. sudo mkfs.ext4 /dev/sdf, where /dev/sdf is the device name you chose.
  4. Shutdown the MySQL daemon: sudo /etc/init.d/mysql stop
  5. If you are swapping an existing data disk, unmount it: sudo umount /dev/sdf
  6. Move your current MySQL data directory to a temporary directory: sudo mv /var/lib/mysql /var/lib/mysql.old, or mount the previous disk as mysql.old by creating a new folder and changing /etc/fstab as described below.
  7. Recreate the MySQL data directory and provide the needed permissions:
    1. sudo mkdir /var/lib/mysql
    2. sudo chown mysql:mysql  /var/lib/mysql
  8. Mount the new disk to the new directory:
    1. Add another line to /etc/fstab (sudo vi /etc/fstab): /dev/sdf /var/lib/mysql ext4 defaults 1 2
    2. Mount the disk: sudo mount /var/lib/mysql
    3. Verify everything is Okay with df
  9. Move the files from the temporary directory back to the new data directory: sudo mv /var/lib/mysql.old/* /var/lib/mysql
  10. Remove the old directory: sudo rmdir /var/lib/mysql.old and, in the swap case, unmount the old disk: sudo umount /dev/sdf
  11. Restart your MySQL: sudo /etc/init.d/mysql start
Note:
xfs is today the new kid on the block, with better performance on SSD disks, and it is already stable.
Use the following commands if you choose it:
sudo apt-get install -y xfsprogs
sudo mkfs.xfs -f /dev/xvdf
vi /etc/fstab
/dev/xvdf /var/lib/mysql/ xfs defaults 0 2
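Because a mistake here can destroy the data directory, it may help to rehearse the ext4 flow as a dry run first. The sketch below just prints each command from the steps above (the device name and paths match the examples in this post), so you can review the plan before executing it for real.

```shell
# Dry run of the disk-swap steps: every command is printed, not executed.
# Swap the run() body for real execution once the plan looks right.
DEV=/dev/sdf            # device name from step 2
DATADIR=/var/lib/mysql  # MySQL data directory

run() { echo "+ $*"; }

run sudo mkfs.ext4 "$DEV"                    # step 3: format the new volume
run sudo /etc/init.d/mysql stop              # step 4: stop MySQL
run sudo mv "$DATADIR" "$DATADIR.old"        # step 6: keep the old data
run sudo mkdir "$DATADIR"                    # step 7: recreate the directory
run sudo chown mysql:mysql "$DATADIR"
run sudo mount "$DEV" "$DATADIR"             # step 8: after editing /etc/fstab
run sudo sh -c "mv $DATADIR.old/* $DATADIR"  # step 9: move the data back
run sudo rmdir "$DATADIR.old"                # step 10: clean up
run sudo /etc/init.d/mysql start             # step 11: restart MySQL
```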

Bottom Line
A combination of cloud knowledge, Linux sysadmin, MySQL expertise and careful work will help you go through the scale and growth challenges.

Keep Performing,

Feb 5, 2012

MySQL Replication Deadlock Detection and Recovery (The code inside)

The best practice for handling deadlocks (after eliminating them from the code) is recovering from them. Unfortunately, if you have ever come across the "SHOW SLAVE STATUS\G" error 'Deadlock found when trying to get lock; try restarting transaction', you probably know that MySQL replication does not recover automatically from deadlocks.


Therefore, I implemented a Nagios plugin. This Nagios plugin detects a MySQL replication deadlock and recovers from it by restarting the slave ("STOP SLAVE; START SLAVE;"). 


Technical
Brief: This is a shell script (/bin/sh) that connects to a specified MySQL slave database, checks the replication status for deadlock errors and automatically recovers from this state by performing "STOP SLAVE; START SLAVE;".
Required parameters: host, user, password.
If no deadlock is detected, "OK" is returned.
Otherwise, a message is printed, the slave is restarted and a Warning state is returned.
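The detection logic can be sketched in a few lines of /bin/sh. This is an illustrative sketch, not the actual plugin from Nagios Exchange: the argument order and the exit codes (0 = OK, 1 = Warning, per Nagios conventions) are assumptions.

```shell
#!/bin/sh
# Illustrative sketch of a replication deadlock check, not the published
# plugin. Usage: mysql_deadlock_check.sh <host> <user> <password>

# Return success (0) if the slave status text contains the deadlock error.
check_deadlock() {
    echo "$1" | grep -q "Deadlock found when trying to get lock"
}

if [ $# -ge 3 ]; then
    STATUS=$(mysql -h "$1" -u "$2" -p"$3" -e 'SHOW SLAVE STATUS\G' 2>/dev/null)
    if check_deadlock "$STATUS"; then
        echo "WARNING: replication deadlock detected, restarting slave"
        mysql -h "$1" -u "$2" -p"$3" -e 'STOP SLAVE; START SLAVE;'
        exit 1
    fi
    echo "OK"
    exit 0
fi
```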


Bottom Line
Fast, simple and very effective, and now available from Nagios Exchange.


Download the MySQL Replication Deadlock Recovery Nagios Plugin.

Suggestions are welcome,


Keep Performing,
Moshe Kaplan

Jan 5, 2012

MySQL and Lucene: Server Side Cursors or Client Side Cursors?

What is the Default?
Client Side Cursors: this method returns the whole result set to the client upon a SQL request.


What is Faster?
Client Side Cursors: once you get the result set back to the client, you can manipulate it and read through it using minimal communication with the server.


So Why Should I Choose Server Side Cursors?
If you have a very large result set (say 100GB and millions of records), it might not be wise to fetch the whole result set and only then start manipulating it. Lucene indexing is one of the most common use cases for this. 
Server Side Cursors are the solution in this case. Instead of fetching the whole result set to the client, the result set is stored in an in-memory table on the server. Upon request, the server returns a defined number of rows to the client. That way the client can start processing the results while further rows are fetched over the network.


What are the Limitations?

  1. Server Side Cursors support forward-only result sets (in JDBC, for example, you must set fetchSize="NUMBER" and resultSetType="FORWARD_ONLY").
  2. MySQL server memory. This method requires a lot of memory. Fortunately, if you don't have enough memory, MySQL will automatically use a MyISAM table to store the temporary table.

How should I Configure It?
Client Side Cursors need no configuration, since this is the original behavior of MySQL.
The Server Side Cursors feature was added in MySQL 5.0 and is nowadays considered stable. Yet, you will have to enable it in your connection definition and set a fetch size when fetching.


Can I Use Them Both on the Same Connection Pool?
Yes. Server Side Cursors are used only if two conditions are met: 1) Server Side Cursors are enabled (useCursorFetch) and 2) setFetchSize is positive (>0). Therefore, you can enable Server Side Cursors by setting setFetchSize to a positive value, and keep using Client Side Cursors by setting the fetch size to 0. If you are tired of setting setFetchSize to 0 every time, just set defaultFetchSize to 0.
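With the Connector/J JDBC driver, for example, the two conditions might be wired up as in the sketch below. The host, schema, credentials and fetch size are placeholders, and the snippet assumes a reachable MySQL server, so treat it as an illustration of the two conditions rather than production code.

```java
import java.sql.*;

public class CursorDemo {
    public static void main(String[] args) throws SQLException {
        // Condition 1: enable Server Side Cursors on the connection
        // (db.example.com and the credentials are placeholders).
        String url = "jdbc:mysql://db.example.com:3306/test?useCursorFetch=true";
        try (Connection conn = DriverManager.getConnection(url, "user", "pass");
             Statement st = conn.createStatement(
                     ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
            // Condition 2: a positive fetch size; rows now stream from the
            // server in batches instead of arriving all at once.
            st.setFetchSize(1000);
            try (ResultSet rs = st.executeQuery("SELECT c FROM a.b")) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1));
                }
            }
            // st.setFetchSize(0) on the same pool falls back to
            // Client Side Cursors for the next statement.
        }
    }
}
```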


Bottom Line
Now, you have the tools and you can better use MySQL for your cloud highly scalable system.


Keep Performing,
Moshe Kaplan

Dec 29, 2011

5 Options to Backup Your MySQL and Keep Your Service Online

An old customer was facing a major issue. They had just expanded their service from a local shop to a global "follow the sun" service, providing SaaS to worldwide customers.
Their MySQL servers were the finest, and their procedures were the same, including backup procedures. Yet, concepts that matched a local shop may not fit a global firm.

Going from Local to Global
Their current procedures include both a binary backup and a mysqldump of the server. Although both procedures are meant to keep the service available while performing the backup, the bottom line is that the CPU is highly utilized and the service is poor.

Backup Methods
There are two major backup methods (you can read more about them and others at Lenz Grimmer's presentation):

  1. mysqldump: This method extracts the database content into a flat file. This file includes all the DDL and DML SQL statements. Although this method is considered a "hot backup", it tends to reduce the SLA and leave users unsatisfied. Moreover, if the database is not locked during the process, you may end up with an inconsistent backup.
  2. Binary files backup: This method requires stopping the MySQL server. On the other hand, if you copy the binary files to a disk on the same physical server, the downtime can be very short. In order to keep the files in a safe place, you should later back up these copied files over the network. Please notice that this method has another disadvantage: one of the binary files may be corrupted and you may not notice it until the money time.
Where the Backup Can Be Performed?
There are three possible locations:

  1. The Master Server: The safest, since we know that data did not "disappear" during replication. The significant drawback is service downtime...
  2. The Existing Slave Server: At the risk of losing information that was not replicated correctly, we get a service with no downtime. Yet, notice that if you use this server for other tasks, you may affect those tasks' SLA.
  3. Dedicated Slave: This can be a lightweight machine (yes, even a VPS) that can perform backups once or more times a day without causing downtime and without affecting the service at all.
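On a dedicated slave, a nightly dump can be as simple as the sketch below. The backup path and credentials are placeholders, and the mysql/mysqldump invocations are shown as comments since they require a live slave; stopping only the SQL thread freezes the data at a consistent point while the dump runs, without touching the master.

```shell
# Sketch of a nightly backup on a dedicated slave (placeholder paths and
# credentials). Only the filename logic runs here; the actual backup
# commands are shown as comments.
BACKUP_FILE="/backup/mysql-$(date +%F).sql"  # e.g. /backup/mysql-2012-01-05.sql

# mysql -u root -p -e 'STOP SLAVE SQL_THREAD;'
# mysqldump -u root -p --all-databases > "$BACKUP_FILE"
# mysql -u root -p -e 'START SLAVE SQL_THREAD;'
echo "$BACKUP_FILE"
```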
The Bottom Line or How to Keep Their Service Available?
Well, in this case, I suggested that the customer choose the Dedicated Slave option, which at a very small price helped them reduce the service downtime and provide a great SLA to their global customers.

Keep Performing,
Moshe Kaplan
