Dec 23, 2013

Can OpenStack Object Store be a Base for a Video CDN?

Video CDNs are fascinating from a technical perspective, as we are talking about high scale systems with some unique business cases.
I would like to share with you some design aspects regarding these systems.

The Video CDN Case Studies
A video CDN includes two main case studies:
  1. VOD Case Study: This is a long tail/high throughput scenario where you need high capacity disks, while only a small portion of the content is accessed extensively. In order to create a cost effective solution you should have:
    1. A high capacity storage system with low IOPS needs. Servers with 24-36 2-3TB SATA disks will provide up to 100TB of raw storage at a price tag of about $15K.
    2. A replication and auto failover mechanism that distributes content between several servers and saves us from using expensive RAID and cluster solutions.
    3. A caching/proxy mechanism that serves the head of the long tail from memory.
  2. Live Broadcast Case Study: This is a no storage/high throughput scenario where you actually don't need any persistent storage (if a server fails, by the time it is up again its data is no longer relevant). In order to create a cost effective solution you should have:
    1. No significant storage.
    2. High capacity RAM that should be sized according to:
      1. The number of channels you are going to serve.
      2. The number of resolutions you're going to support (most relevant when you plan to support handhelds and not just widescreens).
      3. The amount of time you are going to store (no more than 5 minutes are needed in the live case, and no more than 4 hours in the start over case).
    3. A rapid in-memory (or ramdisk based) replication mechanism that replicates the incoming video to several machines (see the tmpfs sketch after this list).
    4. An HTTP interface to serve video chunks to end users.
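As an illustration of the ramdisk approach (a minimal sketch only; the mount point and size below are assumptions you should derive from your channels x resolutions x time window x bitrate math):

# Assumed mount point and size for the live chunks
sudo mkdir -p /var/www/live
sudo mount -t tmpfs -o size=32g tmpfs /var/www/live
# Persist the mount across reboots (the chunks themselves are disposable, the mount is not)
echo 'tmpfs /var/www/live tmpfs size=32g 0 0' | sudo tee -a /etc/fstab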
Serving Static Content rather than Dynamic
Modern video DRM systems (such as Google's Widevine) support "Encrypt Once, Use Many", where the content is encrypted once and decryption keys are distributed to secured clients on a need to know basis. This means the same encrypted file can be served to every user as plain static content, which is exactly the kind of workload an object store handles well.

Why OpenStack Swift/Object Store?

In short, OpenStack Swift is the OSS equivalent of Amazon's proprietary AWS S3: "Simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web."

OpenStack Swift/Object Store Benefits
  1. OpenStack is OSS and therefore easy to evaluate.
  2. Active large scale deployments, including Rackspace and Comcast.
  3. A built in content distribution mechanism, based on rsync, that lets you distribute the load between multiple servers.
  4. High availability based on data replication between multiple instances. On a server failure, you just need to take it out of the array and replace it with a new one, while the other servers keep serving users.
  5. This mechanism helps you avoid premium hardware such as IO controllers and RAID arrays.
  6. Built in HA and DRP based on 5 independent zones.
  7. A built in web server that enables you to serve static content as well as HTTP based video streams from the server itself, rather than implementing a high end SAN.
  8. A built in reverse proxy service, based on Python and Memcached, that minimizes IO and maximizes throughput.
  9. A built in authentication service.
  10. Target pricing of $0.4 per 1M requests and $0.055/GB per month if we take AWS as a benchmark.
OpenStack Object Store Architecture
The OpenStack Object Store architecture is well described in two layers:
  1. The logical: Accounts (paying customers), Containers (folders) and Objects (blobs); see how these map to URLs in the example below.
  2. The physical: Zones (Independent clusters), Partitions (of data items), Rings (mapping between URLs and partitions and locations on disks) and Proxies.
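To make the logical layer concrete, here is how accounts, containers and objects map to Swift URLs (a sketch; the host, account and token values reuse the ones from the examples later in this post):

# Account level: list the containers owned by AUTH_test
curl -H "X-Auth-Token: AUTH_tk551e69a150f4439abf6789409f98a047" http://test.example.com:8080/v1/AUTH_test
# Container level: list the objects in the videos container
curl -H "X-Auth-Token: AUTH_tk551e69a150f4439abf6789409f98a047" http://test.example.com:8080/v1/AUTH_test/videos
# Object level: fetch a single object (blob)
curl -H "X-Auth-Token: AUTH_tk551e69a150f4439abf6789409f98a047" http://test.example.com:8080/v1/AUTH_test/videos/demo.wvm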
Key Concepts
  1. An account is actually an independent tenant, as it has its own data store (implemented with SQLite).
  2. Replication is done based on large blocks, quorum writes and MD5 checksums.
  3. Write to disk before exposing to users: when files are uploaded, they are first committed to disk in at least two zones, and only then is the database updated for availability (so don't expect sub second response for a write).
System Sizing 
Before testing the system you should be familiar with some key sizing metrics obtained by Korea Telecom:

  1. Object Store Server Sizing: High capacity storage (36-48 2-3TB SATA drives that will provide up to 100TB per server), memory to cache the head of the long tail (24-48GB RAM), 2x1Gbps Eth to support ~500 concurrent long tail requests. A single high end CPU should do the job.
  2. Proxy Server Sizing: Little storage (a 500GB SATA disk will do the job), memory to cache the head of the long tail (24GB RAM), 2x10Gbps Eth to support ~5000 concurrent requests for the head of the long tail.
  3. Switches: You will need a solid backbone for this system. In order to avoid a backbone that is too large, splitting the system into several clusters is recommended.
  4. Load Balancing: In order to avoid a high end load balancer, you should use DNS load balancing, where the frequent DNS calls are negligible relative to the media streaming traffic (see the dig example below).
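A quick way to sanity check a DNS round robin setup (a sketch; cdn.example.com is a placeholder for your own service name, where each proxy is published as an A record):

# Each query should return the full set of proxy IPs, typically in rotating order
dig +short cdn.example.com A
# Repeat and verify the first answer rotates between the proxies
for i in 1 2 3 4 5; do dig +short cdn.example.com A | head -1; done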
The Fast Lane: How to Start?
OpenStack Object Store is probably cost effective only for large installations, as you may need at least 5 physical servers for the object and container stores and another 2 for proxies.
However, you can evaluate the solution based on a single server installation (SAIO).

If you take the fast lane and run your POC on AWS, feel free to use the following tips:

  1. In the initial installation, some packages will be missing from yum, so:
    1. sudo easy_install eventlet
    2. sudo easy_install dnspython
    3. sudo easy_install netifaces
    4. sudo easy_install pastedeploy
  2. No need to start the rsync service (just reboot the machine).
  3. Start the service using sudo ./bin/startmain
  4. Test the service using the supload bash script to simulate a client (a quick sanity check example follows).
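As a quick sanity check of the SAIO proxy (assuming the default SAIO setup, where the healthcheck middleware is enabled and the proxy listens on port 8080):

# Should return HTTP/1.1 200 OK
curl -i http://127.0.0.1:8080/healthcheck
# A full auth round trip with the default SAIO test user
curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' http://127.0.0.1:8080/auth/v1.0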
Working with the Web Services
There are 3 main ways to work with Swift web services:
  1. AWS tools, as Swift is compliant with AWS S3.
  2. HTTP calls as it is based on HTTP.
  3. The Swift client, which covers most of your needs.
Working with Swift Client
In the following examples we assume a user test:tester (account test, user tester) was defined with the password testing, and that data is served by the proxy on port 8080.

Get statistics:
swift -A http://test.example.com:8080/auth/v1.0 -U test:tester -K testing stat

Upload a file to the videos container:
sudo swift -A http://test.example.com:8080/auth/v1.0 -U test:tester -K testing upload videos ./demo.wvm

Provide read only permissions (please notice that .r: stands for the referrer domain, so specifying a specific domain instead of * can help you save bandwidth and minimize content stealing):
sudo swift -A http://test.example.com:8080/auth/v1.0 -U test:tester -K testing post videos  -r '.r:*'

Download the file (where AUTH_test is the user account, videos is the container and anonymous access was provided as detailed below):
curl http://test.example.com:8080/v1.0/AUTH_test/videos/demo.wvm

Public Access
In order to implement read only public access you will need to take care of the following items:
  1. User Management
  2. Define anonymous access.
  3. Configure the container (folder) ACL (see the curl example after the configuration snippet below).
  4. Enable delayed authentication at the proxy configuration (/etc/swift/proxy-server.conf):
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
# Delaying the auth decision is required to support token-less
# usage for anonymous referrers (‘.r:*’).
delay_auth_decision = 1
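For item 3, the container read ACL itself can also be set with a direct HTTP call (a sketch, equivalent to the swift post command shown earlier; the token is obtained as shown in the next section):

curl -X POST -i \
    -H "X-Auth-Token: AUTH_tk551e69a150f4439abf6789409f98a047" \
    -H "X-Container-Read: .r:*" \
    http://test.example.com:8080/v1/AUTH_test/videos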

Working with Direct HTTP Calls
Authenticate (get the storage URL and auth token using your user/password):
curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' http://test.example.com:8080/auth/v1.0

< X-Storage-Url: http://127.0.0.1:8080/v1/AUTH_test
< X-Auth-Token: AUTH_tk551e69a150f4439abf6789409f98a047
< Content-Type: text/html; charset=UTF-8
< X-Storage-Token: AUTH_tk551e69a150f4439abf6789409f98a047

Upload a file (reusing the token and storage URL returned by the authentication call above):
curl -X PUT -i \
    -H "X-Auth-Token: AUTH_tk551e69a150f4439abf6789409f98a047" \
    -T ./demo.wvm \
    http://test.example.com:8080/v1/AUTH_test/videos/demo.wvm
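If the videos container does not exist yet, it can be created with a direct PUT as well (a sketch, reusing the same token):

curl -X PUT -i \
    -H "X-Auth-Token: AUTH_tk551e69a150f4439abf6789409f98a047" \
    http://test.example.com:8080/v1/AUTH_test/videos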

Bottom Line
OpenStack Object Store (Swift) is an exciting tool for anyone who works with large scale systems, and especially for CDNs.

Keep Performing,
Moshe Kaplan

Dec 14, 2013

Do You Really Need NoSQL and Big Data Solutions?

Big Data and NoSQL are the biggest buzz around...
Yet, are they the right solutions for your project?

Are You Eligible for Big Data?
As a rule of thumb, if your database is smaller than 100GB and your biggest table has less than 100M rows, you should avoid seeking Big Data solutions. In that case, make sure you utilize your current RDBMS investments well.

When Should You Choose RDBMS?
There are several other reasons to stick with an RDBMS (yes, we are talking about MySQL, SQL Server, Oracle and others):
  1. You must meet compliance and security procedures (cases: PCI compliance).
  2. You need complex reporting based on joins between several tables.
  3. You need transactions (cases: financial transactions).
  4. You have an established data analyst group that cannot be retrained to a different syntax.
When Do NoSQL Solutions Fit?
There are several good reasons to select a NoSQL solution. Check if your case is one of them:

  1. You are a full stack developer and are just looking for persistent storage (cases: blogging systems, multiple choice exams).
  2. You need a quick response from your storage solution based on a key value store (cases: algo trading and online real time bidding: DSP).
  3. You must always return an answer, even if it's not the most up to date one (cases: social networks, content management).
  4. You need to provide a good enough answer rather than the most accurate one (cases: search engines).
  5. Your data size is too big to be transferred over the network (even over an 80Gbps InfiniBand link). In these cases a better approach is to distribute the computation (cases: analytics, statistics).

Bottom Line
NoSQL solutions have expanded your toolbox. Now, you need to focus on selecting the right tool for your business case.

Keep Performing,

Moshe Kaplan

Dec 9, 2013

JAVA Production Systems Profiling Done Right!

If you are facing a Java system performance issue in production and JProfiler is not the right tool for it, JMX monitoring using VisualVM will probably do the job for you.

Technical
JMX usage from a remote machine can be frustrating. Therefore, please make sure that:
  1. Your hostname is included in /etc/hosts:
    1. Get the host name using the hostname command.
    2. Add the host name after 127.0.0.1 in /etc/hosts.
  2. JMX is bound to the external IP:
    1. Verify that port 1099 is not bound to 127.0.0.1: netstat -na | grep 1099
    2. If it is, add to your java command: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=1099 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Djava.rmi.server.hostname=<your external hostname or IP> (see the full command example below).
If everything is okay, you will be able to run VisualVM on a remote machine and connect to the server.
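For reference, a full launch command might look like the following (a sketch; myapp.jar and the 203.0.113.10 address are placeholders for your own application and external IP):

java -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=1099 \
     -Dcom.sun.management.jmxremote.ssl=false \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Djava.rmi.server.hostname=203.0.113.10 \
     -jar myapp.jar

In VisualVM, add the server as a remote host and open a JMX connection to port 1099.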

VisualVM
Now that you have VisualVM up and running, there are some items you should take a look at:
  1. The general CPU and memory graphs.
  2. The sampler, which enables you to take snapshots.
  3. Snapshot analysis, which gives you a hotspot view as well as a deeper drill down into the call tree.
Bottom Line
My recommendation is to take a snapshot of the process and then look at the hotspots tab for major calls with long actual CPU time. You should focus on these items.

Keep Performing,

Dec 3, 2013

Is There a (Good) Solution for SQL Server HA @ Azure?

Comment: I do not see Windows Azure SQL Database as a feasible solution for a firm that expects its business to scale. The reason is simple: you cannot use a component in your system whose replacement will require a long downtime (yes, we are talking about hours if you have a significant database size). The only way to migrate away from Windows Azure SQL Database is to export its data and import it into a regular instance, which is not acceptable when you have significant traffic.

High Availability
The requirement for high availability is common: you don't want downtime, as downtime means less business and hurts your business image.

The Azure Catch
Azure SQL Server VM is just like having a SQL Server on a regular VM. 
VM maintenance includes two layers: 1) maintaining the VM (installing patches, hardening...) and 2) doing the same to the host underneath.  
Large private and public cloud operations usually include an auto failover mechanism, so when a host undergoes maintenance or unfortunately fails, the system automatically migrates the running VMs to other hosts without stopping them. You can find this behavior in VMware vMotion and in Amazon EC2, which runs on Xen.
Well... this is not the case with Microsoft Azure. When Microsoft updates its hosts, don't expect your instances to be available (and yes, the downtime may take dozens of minutes and is not controlled by you). This is an acceptable practice when dealing with web and application servers (place several instances behind a load balancer and use a queue mechanism to deal with it). However, it is not a good one when you deal with databases, where it can be a major issue.

The Solution: Have a Master-Master Configuration
As you may have understood, a master-slave solution is not acceptable in this case, and therefore you will need to avoid Log Shipping (although it can be used in various other scenarios).
That leaves us with two candidate solutions:

Mirroring:
This has been the traditional solution for HA architectures.
However "This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature. Use AlwaysOn Availability Groups instead."

http://technet.microsoft.com/en-us/library/ms189852.aspx

AlwaysOn Availability Groups
This solution is described by MS as the "enterprise-level alternative to database mirroring. Introduced in SQL Server 2012".
However, "The non-RFC-compliant DHCP service in Windows Azure can cause the creation of certain WSFC cluster configurations to fail, due to the cluster network name being assigned a duplicate IP address (the same IP address as one of the cluster nodes). This is an issue when you implement AlwaysOn Availability Groups, which depends on the WSFC feature."
http://technet.microsoft.com/en-us/library/hh510230.aspx

The Second Catch
According to our analysis it seems that both Mirroring (end of life) and AlwaysOn (severe bugs due to the DHCP issue) are not recommended, so we are actually left without a good HA solution, and therefore without a good MS data store solution for the Azure environment.
We tried to get answers from Microsoft staff, but we did not get good ones.

Bottom Line
When evaluating Azure as a cloud platform, you should consider your data solution and how it fits your needs. In this case you may need to consider open source solutions such as MySQL, Cassandra and MongoDB on a Linux VM, instead of going with the default MS stack.

Keep Performing,
Moshe Kaplan

Nov 9, 2013

Azure Monitoring. What to Choose?

You are going live with your Azure system and need to know what is happening there?
Cannot sleep at night as your system crashes from time to time?
Need to know whether your fast growing system will run out of resources in the near future?
If so, this post is for you!

Decisions

In general you have 3 main options:
  1. Open Source Software that is installed by you and managed by you.
  2. Azure built in monitoring, which may require some extra charges and tailoring to your business case.
  3. A 3rd party SaaS offering that provides you with an end to end solution.
The Open Source Way
  1. Major OSS monitoring tools are available and widely used in the industry. Some of the leading solutions are Zabbix, Nagios and Ganglia.
  2. The software is available with no additional cost.
  3. You will need to install the servers by yourself (and of course pay for the instances and manage the data).
  4. Implementing end to end monitoring (inc. user experience) may require installing extra servers in another data center or using a 3rd party service.
  5. Other downsides are well described by CopperEgg.
The Azure Monitoring Way
  1. If you are already invested in Azure, it's built in, so why not use it?
  2. The service provides nice utilization graphs and a minimal alerting system (limited to 10 rules). However, there are no pre configured templates, and some metrics incur additional charges.
  3. Basic website availability is provided using web site monitoring.
  4. Yet, there are several gaps, such as VM monitoring, SQL Azure and Azure Storage.
  5. The overall impression is that many features are available, but it's far from being a monitoring product, and you will have to do a lot of integration work by yourself.
3rd Party Monitoring as a Service
  1. CopperEgg: A nice company, though their website could look better. More important, their feature set looks pretty similar to the OSS solutions, with an integrated website response time and availability service. Pricing: $7-9/server|service/month.
  2. AzureWatch: A very similar product to CopperEgg both in features and pricing (although their landing page looks much better). Pricing: $5-8/server|service/month.
  3. New Relic: Probably the most interesting product around; not only does it provide end to end monitoring, it also analyzes your performance bottlenecks. Their product is integrated into your code, provides you with production profiling and alerts you on any exception. Pricing: the standard version, which covers what the other providers offer, is free. Code level diagnostics and deeper profiling will cost you $25-200/server|service/month.
Bottom Line
If you are ready to invest the time to build your own solution, the OSS products with Azure integration will probably be your preferred choice. Otherwise, I recommend choosing New Relic, which seems to provide the best value in the market for these needs.

Keep Performing

Nov 5, 2013

Apache Session Persistence Using MongoDB

Your app is booming, you need more web servers, and you need to keep serving users without hurting their user experience. When you had a single server you simply used the session for that, but now, how do you keep sessions across multiple web servers?
Using a session stickiness load balancer is a good solution, but what happens when a web server needs a restart?

Session Off Loading
Many of you are familiar with the concept of session off loading. The common configuration for it is using Memcached to store the session object, so it is accessible from all web servers. However, this solution lacks high availability features such as replication and persistence.

The Next Step: High Availability
Offloading web server sessions to MongoDB looks like a great solution: a key-value store, lazy persistence and built in replication with automatic master failover.

Step by Step Solution for LAMP Session Off Loading using MongoDB
  1. Install MongoDB (preferably in a replica set with at least 3 nodes)
  2. Install the MongoDB driver for PHP (for the legacy mongo extension this is typically sudo pecl install mongo).
  3. Configure the php.ini: extension=mongo.so
  4. Download and integrate Apache session handler implementation using MongoDB.
  5. Configure your MongoDB setting in the module:
    1. Set the list of nodes in the connectionString setting: 'connectionString' => 'mongodb://SERVER1:27017, mongodb://SERVER2:27017',
    2. Do the same to the list of servers (see at the bottom of this post)
    3. Configure the cookie_domain (or just place a null string there to support all): 'cookie_domain' => ''
    4. Don't forget to enable replica set if you are using one: 'replicaSet' => true,
  6. Add the MongoSession.php to your server and require it: require_once('MongoSession.php');
  7. Replace the session_start() function with $session = new MongoSession();
And that's all. Now you can continue using the session object as you are used to, in order to create great new features:
if (!isset($_SESSION['views'])) $_SESSION['views'] = 0;
$_SESSION['views'] = $_SESSION['views'] + 1;
echo $_SESSION['views'];


Bottom Line
Smart work can tackle every scale issue you are facing...

Keep Performing,

Appendix: 
'servers' => array(
    array(
        'host' => 'SERVER1',
        'port' => Mongo::DEFAULT_PORT,
        'username' => null,
        'password' => null,
        'persistent' => false
    ),
    array(
        'host' => 'SERVER2',
        'port' => Mongo::DEFAULT_PORT,
        'username' => null,
        'password' => null,
        'persistent' => false
    )
),

Oct 13, 2013

MongoDB Index Tuning and Dex

One of the first things to do when your system is rolled into production (or better, when you run your acceptance tests) is to check the system performance and tune it.
When we talk about databases, we usually talk about index tuning, and MongoDB is no different.

How to Detect the Slow Queries?
First enable your MongoDB slow queries log. It's easy: just run db.setProfilingLevel(1, 1) from the mongo shell.

How to Analyze the Slow Queries?
Well, you can query the profiling data yourself (after all, it is saved in a MongoDB collection, system.profile), as shown below. However, Dex, a new tool from MongoLab, can help you shorten the time to index...
It's a Python based product that can be easily installed and run, and it provides a list of recommended indexes based on MongoLab best practices.
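For the do-it-yourself route, a minimal sketch of pulling the slowest profiled queries from the shell (assuming your database is named mydb; adjust the 100ms threshold to your needs):

mongo mydb --eval 'printjson(db.system.profile.find({millis: {$gt: 100}}).sort({millis: -1}).limit(5).toArray())'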

How to Install and Run?
  1. Install python: yum -y install python
  2. Install pip
  3. Install Dex using pip: pip install dex
  4. Create a log file: touch mongodb.log
  5. Run it: sudo dex -p -v -f mongodb.log mongodb://user:password@serverDNS:serverPort/database. Don't forget the verbose flag so you can see errors in case of a wrong configuration.

Should We Just Copy and Paste the New Indexes?
As you may know, every added index results in performance degradation when INSERT, DELETE and UPDATE statements are performed. Therefore, we should carefully select the indexes to be added:

  1. We should check that the new indexes do not overlap existing indexes. If there is an overlap, we should remove/unify the index definitions (use the db.collection.getIndexes() method to retrieve the existing definitions, as shown below).
  2. We should check that the queries are frequently used, in order to avoid saving a few seconds on a monthly SELECT while paying for it on every peak time INSERT.
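A quick way to dump the existing index definitions for the overlap check (a sketch; mydb and mycollection are placeholders for your own database and collection):

mongo mydb --eval 'printjson(db.mycollection.getIndexes())'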
Bottom Line
MongoDB query tuning is easier than ever: just tune it!

Keep Performing,
Moshe Kaplan
