Jul 24, 2014

9 Steps to Implement Offline Backup on Azure IaaS: Back Up MySQL to Azure Storage

A good backup is probably something you will be thankful for when s%$t hits the fan.
If you chose MySQL as your data infrastructure and Microsoft Azure as your cloud infrastructure, you will probably thank this procedure (which I actually turned into a script).

Chosen Products
Our first task is choosing the backup and automation products. I selected the following two:
  1. Percona XtraBackup: a leading backup product by Percona. The product creates a hot backup of the database that is equivalent to a disk copy. This method is much faster to back up and recover than mysqldump. It also supports incremental backups.
  2. Azure SDK Tools Xplat: a Node.js based SDK that provides a command line interface to the various Azure services, including Azure Storage.
Backup Implementation Guide
  1. Install Percona XtraBackup
    sudo wget http://www.percona.com/redir/downloads/XtraBackup/XtraBackup-2.2.3/binary/debian/precise/x86_64/percona-xtrabackup_2.2.3-4982-1.precise_amd64.deb
    sudo dpkg -i percona-xtrabackup_2.2.3-4982-1.precise_amd64.deb
    sudo apt-get update
    sudo apt-get install -f
    sudo dpkg -i percona-xtrabackup_2.2.3-4982-1.precise_amd64.deb
  2. Install Azure SDK Tools Xplat
    sudo apt-get update
    sudo apt-get -y install nodejs python-software-properties
    sudo add-apt-repository ppa:chris-lea/node.js
    sudo wget -qO- https://npmjs.org/install.sh --no-check-certificate | sudo sh
    sudo apt-get install npm
    sudo npm config set registry http://registry.npmjs.org/
    sudo npm install -g azure-cli 
  3. Install a backup procedure
    1. Get a publish settings file from Azure (can be done from the console).
    2. Get account name and the matching base64 key from the Azure console.
    3. Import the publish settings file
      sudo azure account import /opt/mysqlbackup/mysqlbackup.publishsettings
    4. Create a Storage Container
      sudo azure storage container create --container container_name -a account_name -k base64_account_key
  4. Run a full backup and prepare it twice to make it ready for recovery
    sudo xtrabackup --backup
    sudo xtrabackup --prepare
    sudo xtrabackup --prepare
  5. Add the .frm files and the mysql database to your backup
    sudo chmod -R +r /var/lib/mysql/
    sudo cp -R /var/lib/mysql/mysql/* /mnt/backup/mysql/
    sudo cp -R /var/lib/mysql/yourdb/*.frm /mnt/backup/yourdb/
  6. Tar the files into a unique daily name
    _now=$(date +"%Y_%m_%d")
    _file="/mnt/backup/mysqlbackup$_now.tar.gz"
    tar cvzf "$_file" /mnt/datadrive/mysqlbackup/
  7. Copy the file to Azure Storage using:
  8. azure storage blob upload -f "$_file" -q --container container_name -a account_name -k base64_account_key
  9. Create a cron job that will run it daily (a sketch of the full backup script appears right after this list):
    > sudo crontab -e
    0 0 * * * /opt/mysqlbackup/daily.backup.sh >/dev/null 2>&1
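For reference, here is a minimal sketch of what /opt/mysqlbackup/daily.backup.sh could look like; it simply consolidates steps 4-8 above, so the paths, container name and account placeholders are the same assumptions used there and should be adjusted to your environment:
    #!/bin/bash
    # Hypothetical daily backup script (steps 4-8 above); runs as root from cron, so no sudo
    _now=$(date +"%Y_%m_%d")
    _file="/mnt/backup/mysqlbackup$_now.tar.gz"
    # Full backup, prepared twice so it is ready for recovery
    xtrabackup --backup
    xtrabackup --prepare
    xtrabackup --prepare
    # Add the mysql database and the .frm files to the backup
    chmod -R +r /var/lib/mysql/
    cp -R /var/lib/mysql/mysql/* /mnt/backup/mysql/
    cp -R /var/lib/mysql/yourdb/*.frm /mnt/backup/yourdb/
    # Tar the backup and upload it to Azure Storage
    tar cvzf "$_file" /mnt/datadrive/mysqlbackup/
    azure storage blob upload -f "$_file" -q --container container_name -a account_name -k base64_account_key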
Recovery Guide
  1. Bring back the files from the Azure storage to /mnt/backup/
    cd /mnt/backup
    sudo azure storage blob download --container container_name -a account_name -k base64_account_key -b $file_name
  2. Uncompress the files
    sudo tar xvfz $file_name
  3. Copy the files to your data folder (/var/lib/mysql) after shutting down MySQL
    sudo service mysql stop
    sudo rsync -avrP /mnt/backup/ /var/lib/mysql/
  4. Verify the folder permissions
    sudo chown -R mysql:mysql /var/lib/mysql
  5. Restart MySQL and verify that everything is working (a quick sanity check follows below).
    sudo service mysql start
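A hypothetical sanity check after the restart, assuming local root access to the instance:
    mysql -u root -p -e "SHOW DATABASES;"
If your databases are listed and the application can read and write as before, the recovery is complete.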
Bottom Line 
It may be tough. It may be resource and time consuming. However, you must have a good recovery process to keep your a$$...

Keep Performing,

Jul 17, 2014

How to Disable MySQL Binlog

@ MySQL, clean the old binary log files
SQL> SHOW MASTER STATUS\G
Take the file name from the output
SQL> PURGE BINARY LOGS TO 'file';

@ The /etc/my.cnf
Comment out log-bin
> #log-bin = ....

Restart MySQL
> sudo service mysql restart
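
To verify that binary logging is indeed off after the restart, a quick check could look like this (add your credentials as needed):
> mysql -e "SHOW VARIABLES LIKE 'log_bin';"
The Value column should show OFF.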

Keep Performing,

Jul 14, 2014

Keep Your Website Up and Running During an Upgrade

All of us want to have our website up and running.
However, upgrading a software version requires downtime (even if it is minimal).
What should we do?

The answer is simple: follow the process below and you will gain a highly available system based on load balancing and session offloading, as well as a zero-downtime procedure to upgrade your system.

Keep Your Site Up and Running @ AWS

  1. Place more than a single web instance behind a load balancer. The Elastic Load Balancer (ELB) will be fine for that.
  2. Make sure you are using session offloading in your system implementation.
  3. Install elbcli on your machine
    sudo apt-get -y install elbcli
  4. Create a credentials file /var/keys/aws_credential_file and secure it
    sudo chmod 600 /var/keys/aws_credential_file
    AWSAccessKeyId=AKIAblablabla
    AWSSecretKey=blablabla
  5. Upgrade the servers one after the other.
  6. Before upgrading a server, take it out of the ELB
    sudo elb-deregister-instances-from-lb MyLoadBalancer --instances i-4e05f721  --aws-credential-file=/var/keys/aws_credential_file
  7. And after completing the server upgrade, take it back in (a rolling-upgrade sketch follows right after this list).
    sudo elb-register-instances-with-lb MyLoadBalancer --instances i-4e05f721  --aws-credential-file=/var/keys/aws_credential_file
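For illustration, a minimal rolling-upgrade sketch that wraps steps 5-7 in a loop; the second instance ID and the upgrade step itself are assumptions, so adapt them to your environment:
    #!/bin/bash
    # Hypothetical rolling upgrade: take one instance out of the ELB at a time
    INSTANCES="i-4e05f721 i-22222222"          # assumed instance IDs
    CREDS=/var/keys/aws_credential_file
    for i in $INSTANCES; do
      elb-deregister-instances-from-lb MyLoadBalancer --instances $i --aws-credential-file=$CREDS
      # ... perform the actual software upgrade on instance $i here (assumed step) ...
      elb-register-instances-with-lb MyLoadBalancer --instances $i --aws-credential-file=$CREDS
    done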
Bottom Line
A simple process and careful design will keep your system up and running.

Keep Performing,

Jul 10, 2014

Scale Out Patterns for OpenStack (and other Cloud) based Systems

A common question these days is how one should design their next system to support elasticity and growth requirements in the cloud era.

As I got this specific query today, I would like to share my answer with you, based on various materials I have created over the last few years:


1. Avoid File Storage SPOF: Use AWS S3 or its open source equivalent, OpenStack Swift, as a central file repository when needed

How to use OpenStack Swift for your business case

2. Avoid Data Store SPOF: Use a clustered data store that recovers automatically without DBA or operator intervention. Fine examples are AWS MySQL RDS, MongoDB and Cassandra

MongoDB HA concepts

3. Avoid Static Servers: Leverage Auto Scaling features or customize them for your needs

How to implement your application logic to auto scale your app

4. Avoid Messing with Cache: Use a central sharded cache to avoid cache revocation and to reduce data store load, using MongoDB, CouchBase or Redis.




5. Offload Your Servers' Sessions: to avoid user logoffs and lost transactions:

How to implement a session offloading using a central store

6. Avoid Service Downtime due to Server Downtime: more tips can be found in my extensive presentation:

 - Use DNS Load Balancing for Geo LB and DRP (Slide 10)
 - Use CDN to offload network traffic (Slides 16-19)
 - Perform Session Offloading by cookies or a central store (Slides 62-65)



Bottom Line
You can scale your app! Just follow the right recommendations.

Keep Performing,

Moshe Kaplan

Jun 15, 2014

Auto Scaling your Workers based on Queue Length

Having an image or video processing engine?
Having a web crawling solution?
Doing OCR on media?
If so, this post is probably for you.

The Producer-Consumer Auto Scaling Problem
Back at school, you probably learned how to manage the queue of waiting tasks in a producer-consumer problem. You may have learned how to avoid expired tasks and double processing of the same task.
However, back then we all assumed the number of workers and producers was fixed.

Well, it is not true any more...
If your system includes a backend processor whose load varies based on client demand, you may need to auto scale it. Back then, you needed to purchase new servers. These days it is just about launching new instances in your favorite cloud.

Adjusting AWS Auto Scaling to Support this Pattern
The AWS Auto Scaling solution is known as the best of its kind in the industry. It supports:
  1. Dynamic scaling based on instance load (e.g. CPU).
  2. Automatically launching and terminating instances.
  3. Supporting both on demand (expensive) instances and Spot (marginal cost) instances.
However, it does not support complex decisions, such as a queue length to running instances ratio, out of the box.

Fortunately, AWS provides a complete set of APIs that lets us define a solution whose general flow is described here and here:
  1. Create a base AMI for a worker/consumer machine
    1. If your code tends to change, you may define a method that updates to the latest code when the machine is launched.
    2. A better solution may be using a DevOps platform such as Puppet or Chef.
  2. Define an initial auto scaling configuration with minimum and maximum number of instances (N = 1).
  3. Define a Spot instance price: make it reasonable (not too high, to avoid extra cost, and not too low, so that instances are actually launched when needed).
  4. Create a cron job that runs every minute and checks the queue length. This cron job will terminate or launch instances by adjusting the auto scaling configuration's minimum and maximum number of instances (N); a sketch is shown after the algorithm examples below.
The Algorithm
You need to take several decisions in order to determine how many instances you want running:
  1. What is the desired ratio of tasks in the queue (Q) to running instances (for example, R = 2)?
  2. How quickly do you want to reach this ratio? The quicker you do, the more it will cost (instances that are billed hourly may be terminated and relaunched again only a few minutes later).
Two examples of possible algorithms:
  1. Keeping the ratio of queue length and running instances constant: for example: N = max[1, Q/R]
  2. Softening the ratio by using the previously calculated number N': N = (N' + max[1, Q/R])/2
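A minimal sketch of such a cron-driven scaler, assuming an SQS queue, the unified AWS CLI and an Auto Scaling group named workers (all the names, the region and the ratio R below are assumptions):
    #!/bin/bash
    # Hypothetical queue-based scaler implementing the first algorithm above: N = max[1, Q/R]
    QUEUE_URL="https://sqs.us-east-1.amazonaws.com/123456789012/tasks"   # assumed queue URL
    R=2                                                                  # desired tasks per instance
    Q=$(aws sqs get-queue-attributes --queue-url "$QUEUE_URL" \
          --attribute-names ApproximateNumberOfMessages \
          --query 'Attributes.ApproximateNumberOfMessages' --output text)
    N=$(( Q / R )); [ "$N" -lt 1 ] && N=1
    # Adjust the group size; Auto Scaling launches or terminates instances to match
    aws autoscaling update-auto-scaling-group --auto-scaling-group-name workers \
          --min-size "$N" --max-size "$N" --desired-capacity "$N"
Scheduled from cron every minute (* * * * * /opt/scaler.sh), this keeps the ratio constant; the second algorithm would also persist the previous N' between runs.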
Bottom Line
Auto Scaling is an amazing tool, and by adjusting it we can easily solve complex issues.

Keep Performing,

May 21, 2014

Introduction to MongoDB: The Complete Presentation

In the past week I gave two great lectures about MongoDB and how to best utilize it. Therefore, I decided to share the presentation with you.

Key Presentation Topics
  1. MongoDB Background: Company, Customers and the roots of NoSQL
  2. Why are more people choosing MongoDB?
  3. Data Design for NoSQL
  4. MongoDB Installation
  5. Basic DDL and DML syntax
  6. MEAN (MongoDB, Express, Angular, node.js)
  7. Best Practices for MongoDB migration




P.S. Don't miss my Hebrew Big Data webinar on July 7th, 2014

Keep Performing,
Moshe Kaplan

May 13, 2014

6 Easy Steps to Configure MongoDB Replication Set

In this tutorial we'll create a 3-node cluster, where the first node serves as the primary, the second as a failover node and the third as an arbiter.



1. Setup Mongo and Set a Configuration File
On all 3 servers, adjust the configuration file /etc/mongod.conf:
#Select your replication set name
replSet=[replication_set_name]
#Select the replication log size
oplogSize=1024
Comment out the bind_ip parameter to avoid binding only to the 127.0.0.1 interface
#bind_ip

2. Restart All 3 mongod Daemons
> sudo service mongod restart

3. Create an Initial Configuration on the Primary
Log in to the primary mongo instance and create an initial configuration. Make sure to use the private IP and not the loopback address (127.0.0.1):
> mongo
Primary> cfg = {"_id" : "[replication_set_name]", "members" : [{"_id" : 0,"host" : "[Primary_Host_IP]:27017"}]}
Primary> rs.initiate(cfg);

4. Add the Failover Instance to the Replication Set
Primary> rs.add("[Failover_Host_IP]:27017")

5. Add the Arbiter Instance to the Replication Set
Primary> rs.addArb("[Arbiter_Host_IP]:27017")

6. Verify the Replication Set Status
Primary> rs.status()

Bottom Line
I wish every data cluster setup was as easy as setting up a MongoDB replication set.

Keep Performing,
Moshe Kaplan
