Jan 2, 2013

Scaling Your Web Farm While Supporting User-Generated Content Using File Sharing (NFS)


Have you ever wanted to scale your web operations to multiple web servers while still needing a simple, common place to share data between them?

rsync
A very common tool for this is rsync, which is widely used to distribute a web server's static content (HTML, PHP, CSS, and JavaScript files) between different servers.
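For illustration, a minimal rsync push from a build machine to each web server might look like this (the host names and paths are placeholders, not from any real setup):

```shell
# Sync static content to each web server:
# -a preserves permissions and timestamps, -v is verbose,
# --delete removes files on the target that no longer exist on the source.
for host in web1 web2 web3; do
    rsync -av --delete /var/www/html/ "$host":/var/www/html/
done
```

This works well for content you deploy, but not for content your users create at runtime, which brings us to the next question.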

What to do with User Generated Content?
When we want to store user-generated content such as images, documents, or videos that users upload from time to time, we have three options:
  1. Push them as BLOBs into the database (or your NoSQL system such as MongoDB).
  2. Store them on a disk that is accessible to several servers.
  3. Store them on a "File Server as a Service" like Amazon S3.
This time we'll focus on the second option:

DIY with Network Services
The basic DIY solution is to install a file sharing daemon on one of your servers and export one of its folders to the other web servers.
There are three common file sharing options that may be used:
  1. SMB via Samba (common, and supported by Windows as well).
  2. NFS (supported only on *NIX machines).
  3. CIFS, which is mostly used on Windows, but can be mounted on Linux machines as well.
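For comparison, mounting a CIFS share on a Linux machine is also just a couple of commands (the server name, share name, and credentials below are placeholders):

```shell
# Requires the cifs-utils package on most distributions.
mkdir -p /mnt/winshare
# Mount the Windows/Samba share; replace fileserver, share, and the
# credentials with your own values.
mount -t cifs //fileserver/share /mnt/winshare -o username=webuser,password=secret
```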

In this case I decided to focus on the NFS implementation, so stay tuned:

NFS Configuration
First, configure the NFS server:
  1. Install the NFS RPM:  yum -y install nfs-utils
    1. Start service: /etc/init.d/nfs start
  2. Open the relevant ports in the iptables FW (2049 and 111):
    1. iptables -I INPUT -p tcp -s 192.168.85.0/24 -m state --state NEW,RELATED,ESTABLISHED --dport 2049 -j ACCEPT
    2. iptables -I INPUT -p udp -s 192.168.85.0/24 -m state --state NEW,RELATED,ESTABLISHED --dport 2049 -j ACCEPT
    3. iptables -I INPUT -p tcp -s 192.168.85.0/24 -m state --state NEW,RELATED,ESTABLISHED --dport 111 -j ACCEPT
    4. iptables -I INPUT -p udp -s 192.168.85.0/24 -m state --state NEW,RELATED,ESTABLISHED --dport 111 -j ACCEPT
  3. Configure exported locations by editing /etc/exports according to the following examples:
    1. Provide every server behind the firewall R/W to this folder: /path/to/directory *(rw)
      1. Please note that in this case you should grant the relevant permissions on disk (since by default the client machines will access this share as the nobody user). For example: chmod -R 777 /path/to/directory
    2. Provide a single server read only permission to this folder: /path/to/directory 192.168.2.21(ro)
  4. Finally load these exported locations: /usr/sbin/exportfs -a
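Putting the examples above together, /etc/exports might look like this (192.168.2.20 is a hypothetical second client added for illustration):

```shell
# /etc/exports -- one line per exported directory, one clause per client.
# Read/write for one app server, read-only for another.
/path/to/directory 192.168.2.20(rw) 192.168.2.21(ro)
```

After editing, reload the export table with /usr/sbin/exportfs -ra, and confirm what the server exposes with showmount -e localhost.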

Then mount this export on all the other servers:
  1. Create local directory: mkdir /local/directory
  2. Add a line to /etc/fstab:
    1. SOURCE_SERVER:/path/to/directory /local/directory nfs defaults 0 0
  3. Mount the folder: mount /local/directory
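A quick way to verify the setup (assuming a read/write export) is to write a file from one client and read it from another:

```shell
# On client A: create a file on the shared mount.
echo "hello from A" > /local/directory/test.txt

# On client B: the same file should be visible with the same content.
cat /local/directory/test.txt
```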
Bottom Line
Scaling your system is possible with a few simple steps; this was one of them.

Keep Performing,

P.S. In the DevOps world, scripts are everything, so you may use the following:
Server:

#!/bin/sh
yum -y install nfs-utils
/etc/init.d/nfs start
mkdir /path/to/directory
chmod -R 777 /path/to/directory
echo '/path/to/directory *(rw)' >> /etc/exports
/usr/sbin/exportfs -a


Client:

#!/bin/sh
mkdir /path/to/directory
echo "$1:/path/to/directory /path/to/directory nfs" >> /etc/fstab
mount /path/to/directory
