Dec 21, 2009

Lessons from Facebook:

FB may not be the best example of quality design (each of us probably sees at least one error message a day).

However, since it's the 2nd largest site in the world, with constant exponential growth, there are good lessons to learn from it. Most interesting is that their tech team is only about 250 people running some 30K servers, which is pretty amazing.

Not long ago Jeff Rothschild, the Vice President of Technology at Facebook, gave a presentation at UC San Diego. You can find a detailed summary by Prof. Amin Vahdat, but I'll highlight several points that I found useful.
  1. FB software development: mostly PHP (compiled), but other languages are used as well.
  2. A common interface between services using an internal development that was turned open source: Thrift. This interface enables easy connections between the different languages.
  3. A logging framework that does not depend on a central repository or its availability. FB is using Hadoop as well as Hive (which was developed there as well). The log volume is growing by 25TB a day.
  4. Operational monitoring: separated from the logging mechanism.
  5. The LAN can be a bottleneck as well: expect packet loss and packet drops in the LAN if you stress it too much.
  6. CDN: Facebook uses an external CDN for image distribution.
  7. A dedicated file system named Haystack that combines simple storage with a cached directory: the file system is accessed only once to get an image, while the directory structure is retrieved from the cache.
  8. Most data is served from Memcached. The database is used mostly for persistency and data replication between sites (Memcached is warmed by MySQL itself):
    1. Top challenge: keeping data consistent, since Memcached can easily get out of sync (no search for keys is available).
    2. Mixing information of different sizes and types is better, making sure that the load on CPU, memory, etc. is distributed equally.
  9. Shared nothing: keep your components independent and avoid a single bottleneck. Therefore, data has been saved in a sharded MySQL from day 1. However, MySQL is used mostly for data persistency and not in the conservative database usage pattern:
    1. No joins in MySQL.
    2. Chosen due to good data consistency + management software.
    3. 4K servers.
    4. Data replication between sites is based on MySQL replication.
    5. Memcached is warmed based on the MySQL replication using a custom API.
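
The caching pattern behind points 8 and 9 can be sketched in a few lines. This is a minimal illustration, not Facebook's actual code: a plain Python dict stands in for a memcached client, and the warming step that MySQL replication performs is simulated inline.

```python
# Sketch of the cache-aside pattern described above: reads hit the cache
# first, and the cache is warmed whenever the database changes.
# A plain dict stands in for a real memcached client here.

class CacheAsideStore:
    def __init__(self):
        self.cache = {}      # stand-in for memcached
        self.db = {}         # stand-in for MySQL

    def read(self, key):
        # Serve from cache when possible; fall back to the database
        # and populate the cache on a miss.
        if key in self.cache:
            return self.cache[key]
        value = self.db.get(key)
        if value is not None:
            self.cache[key] = value
        return value

    def write(self, key, value):
        # Write to the database; a replication hook would warm the
        # cache for each replicated row (simulated inline here).
        self.db[key] = value
        self.cache[key] = value   # the "warming" step

store = CacheAsideStore()
store.write("user:1", {"name": "alice"})
print(store.read("user:1"))
```

In the real setup the warming is driven by the MySQL replication stream rather than by the application's write path, which keeps remote sites' caches consistent as well.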
One last thing: if you are interested in Facebook financials, as well as the storage machines (low-end NetApp 3070), sizing, traffic, servers, storage procurement and data center costs, take a look at TechCrunch's Michael Arrington post.

Keep Performing,
Moshe Kaplan. Performance Expert.

Dec 20, 2009

Expect the Best. Be Prepared for the Worst.

A major issue in preparing for disaster is getting clear visibility into your system, especially if we are talking about a distributed, multi-server, multi-data-center system.
Paul Venezia from InfoWorld provided a clear and concise description of the major open source tools (yes, free ones) that can give you clear visibility into your system. These tools can help you better know:
  1. When does your CPU utilization get close to 100%?
  2. When should you add more hardware or make an effort to boost your system performance?
  3. What are your high traffic sources?
  4. What is the root cause analysis for system poor performance?
  5. What are the long term trend lines?
 Among the described tools you can find:
  1. Cacti - a system monitoring web GUI that provides you graphs of every system metric that can be exposed via SNMP. The most recommended tool.
  2. Nagios - a network monitoring tool.
  3. NeDi - a physical network mapping tool (to which port is your server connected?). Useful mostly if you are in the on-premise business rather than the cloud business.
  4. Ntop - a traffic analysis tool that can help find what the traffic sources are, and which one keeps your bill so high.
  5. Pancho - configures and backs up Cisco routers. Again, for the on-premise guys.
Using at least some of these tools, you can better prepare for a disaster, and many times avoid it.

Keep Performing,
Moshe Kaplan. Performance Expert.

Dec 19, 2009

Load Balancing Support in Dynamic Environments


The Mission
These days we face a challenging task: designing a very large system of scalable instances. Each of these instances may be in a different geographic location, and many of them are on-demand instances that are started and shut down instantly.
Another requirement in this system is that a given client will be directed to a defined instance, due to a system restriction (round robin is not an option in this case).

One Step Further
Since the number of IP addresses on the internet is limited, we would like to use as few public IP addresses as possible. This can be done using a load balancer or a proxy.
At the current stage we would like to avoid hardware load balancers in order to keep initial fixed costs minimal, but we may consider using them in the future.

Is the Amazon Cloud Load Balancer Service (AWS) an Option?

AWS EC2 instances are a feasible option for hosting on-demand instances. However, AWS charges $0.025 per load balancing rule per hour (+traffic). Therefore, it can be used, but for a large number of rules (>7) or high traffic, better solutions can be found in the market.

So What Can Be Done?
We are left with software load balancers. The major ones are Apache mod_proxy and HAProxy. Supporting a large number of instances behind the load balancer can be done in one of the following two ways:
  1. Pre-register a large number of DNS addresses (sub-domains) and associate them with the load balancer IP. The load balancer will simply redirect each request to the defined instance, based on a simple rule in the load balancer. Pros: simple. Cons: not fully dynamic; requires additional DNS registrations once in a while to keep up with the application's growth.
  2. Performing ProxyPass in the load balancer: every request will include an instance identification in its path. This method does not require mass DNS declarations, but it requires specific definitions in the load balancers that may be more CPU consuming. In Apache the definition is pretty trivial; however, this product is less scalable than HAProxy. In HAProxy the task is done in two phases: switching to the server and rewriting the URI:
    In order to switch to the server, you have to use ACLs to match the path,
    then a use_backend directive to select a server farm ("backend"). Your
    farm may very well support only one server if you want.

    Then in this "backend", you can use a rewrite rule ("reqrep") to replace
    the request line.

    This would basically look like this :

    frontend xxx
           acl path_mirror_foo path_beg /mirror/foo/
           use_backend bk_66 if path_mirror_foo

    backend bk_66
           reqrep ^([^:\ ]*\ )/instN/(.*)  \1/\2
           balance roundrobin
           server srv66

However, Willy Tarreau, the HAProxy author who kindly provided this hint to me, recommends that you avoid the second part (rewriting) because:

 1) it requires good regex skills which sometimes makes the configs hard
    to maintain for other people

 2) rewriting URIs in applications is the worst ever thing to do, because
    they never know where they are mapped, and regularly emit wrong links
    and wrong Location headers during redirections.

Willy Tarreau also advises that the best thing to do is clearly to configure your application correctly so it can respond with the real, original URI. Remapping can be used
as a transitional setup in order to ease a graceful switchover, though. Bottom line: Pros: no DNS configuration and a fully scalable solution, with no dependence on DNS replication. Cons: CPU consuming and error-prone declarations.
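
For reference, the second option can also be sketched in Apache with mod_rewrite's proxy flag. This is an illustrative configuration only: the /instN/ path convention, the IP scheme and the port are hypothetical, and it suffers from the same rewriting caveats described above.

```apache
# Hypothetical sketch of option 2 in Apache (mod_rewrite + mod_proxy):
# a request such as /inst42/some/page is proxied to instance 42 and the
# instance id is stripped from the path. Addresses are illustrative.
<VirtualHost *:80>
    RewriteEngine On
    # /inst42/some/page -> http://10.0.0.42:8080/some/page
    RewriteRule ^/inst([0-9]+)/(.*)$ http://10.0.0.$1:8080/$2 [P,L]
    ProxyPreserveHost On
</VirtualHost>
```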

So, What to Choose?
The answer depends on your needs, and on your belief in your team's regex capabilities. We made our choice.

Keep Performing,
Moshe Kaplan. Performance Expert.

Nov 26, 2009

What if I had been in NYC, London and SF on the same day?

When you analyze a website's performance, the root cause of the performance issue is always a mystery (well, at least till we dive in and reveal it :-).

A common issue in this analysis is how much network round trip time (RTT) is in play, and who is to blame for it. We usually start with the following measures:
  1. Analyze the website from our offices. This test includes both a Wireshark analysis of the network RTT and FireBug to understand the website's behavior.
  2. Analyze the website's performance in the hosting environment by running a terminal session on a server (or the appropriate measure in a Linux or UNIX based environment), where RTT should be zero.
  3. Finally, we usually connect to the site from another site around the world. This is usually done by connecting from a server in another hosting environment (booting a server for an hour using Amazon AWS is a great solution for that).
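
The first measure, isolating the network RTT, can be approximated even without Wireshark by timing a plain TCP connect. A rough sketch follows; the local listener only stands in for a remote site so the example is self-contained, so point it at a real host and port to measure an actual RTT.

```python
# Rough sketch of measuring connection round-trip time (RTT) with a
# plain TCP connect. A local listener stands in for the remote site,
# so the measured time here is near zero.
import socket
import time

def measure_connect_rtt(host, port):
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return time.perf_counter() - start

# Local stand-in server for demonstration purposes.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

rtt = measure_connect_rtt("127.0.0.1", port)
print("connect RTT: %.1f ms" % (rtt * 1000))
listener.close()
```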

    These tests are usually a good method to evaluate the communication layer's part in the performance issue. However, testing from remote sites is not always accurate and sufficient. In these cases we recommend using a service that provides you a FireBug-like analysis from several sites around the world.

    Keep Performing!
    Moshe Kaplan.

    Nov 18, 2009

    Boost Your Website Performance (Front End)

    This great Yahoo! article is (almost) everything you need in this field.

    Keep Performing,
    Moshe Kaplan

    Nov 9, 2009

    Should MySQL backup be equal to system downtime?

    I had a short conversation with my mate, Romi Kuntsman regarding a common issue in current systems: the database backup.
    Since database systems are very active, the database backup process takes a relatively long time, and the whole system's performance goes down (or in some cases the whole system stops responding and goes down...)

    Should I back up or shouldn't I?
    Well, first of all, keep backing up your system. You never know when your system will get corrupted due to a hardware failure, when a hacker will decide to "check" your system, or when the regulator will visit your offices.

    So how do I keep my system responding to users?
    Well... let's take a look at storage systems: when you have a large storage machine (SAN) you usually do not back up the primary site machine, but rather back up the secondary storage machine by splitting it from the main site (or by using a snapshot of the secondary site). This way your primary machine keeps serving clients without interference, while the secondary machine takes care of the backup. When the backup is finished, the sync between the machines is restored.

    So it works for storage systems; how can it work for databases?
    Well, let's implement a similar design in your database system:
    1. Install another MySQL instance.
    2. Configure this instance to be a slave of your master database.
    3. Schedule a job to bring down the sync, back up the slave and bring back the sync.
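
Step 3 boils down to three commands in a fixed order. A sketch of that ordering, where run() is a stub standing in for a real MySQL client and mysqldump invocation (the command strings are illustrative):

```python
# Sketch of the backup job in step 3: pause replication on the slave,
# take the backup, then resume. run() is a stub standing in for a real
# MySQL client / mysqldump call; it only records the commands so the
# ordering is visible.
executed = []

def run(command):
    executed.append(command)

def backup_slave():
    run("STOP SLAVE SQL_THREAD")      # bring down the sync
    run("mysqldump --all-databases")  # back up the (now frozen) slave
    run("START SLAVE SQL_THREAD")     # bring back the sync

backup_slave()
print(executed)
```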

    And what about the hot backup?
    Well, if you still want to maintain high availability during the backup (and you should), implement a two-slave configuration, where the first slave is used for high availability and the second for backup.

    Keep Performing,
    Moshe Kaplan. RockeTier. The Performance Experts.

    Nov 5, 2009

    SQL Server NOLOCK: Should I use or should I not?

    Well, the simple answer is NO!

    What is NOLOCK?
    NOLOCK enables you to make a SELECT statement while ignoring current locks on the tables held by other statements such as DELETE and UPDATE.

    Why should you use NOLOCK?
    Well, the answer is simple: you have locks in the database, users on your website receive exceptions and errors instead of answers, and your boss is getting nervous. The simple way out is to just place an extra WITH(NOLOCK) and things seem to be OK:

    SELECT field_name FROM table_name WITH(NOLOCK)

    Why should you avoid NOLOCK?
    If your database suffers from locks, dodging these performance issues now will result in larger problems in the future. Your database is a key component of your architecture, and you should take care of it and not avoid its problems.
    Moreover, using NOLOCK does not guarantee that your users will receive updated and consistent data, which can be risky when financial or other sensitive data is involved.
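
One commonly suggested alternative (my addition here, not part of the original advice) is row versioning, available since SQL Server 2005, which lets readers avoid writers' locks without reading uncommitted data:

```sql
-- Row versioning as an alternative to NOLOCK: readers see the last
-- committed version instead of blocking on (or dirty-reading past)
-- writers. Switching it on requires exclusive access to the database.
ALTER DATABASE your_database_name SET READ_COMMITTED_SNAPSHOT ON;
```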

    Keep Performing,
    Moshe Kaplan.

    Oct 25, 2009

    The 2009 IGT annual event - "The World Summit of Cloud Computing" is almost here

    Only 5 weeks are left until the greatest cloud event of the year, with lectures from NYSE, Microsoft, IBM, Amazon, eBay, Mellanox, CloudCamp, Carmel VC and many more.

    The event is taking place at Kibbutz Shefayim in Israel on Dec 2nd and 3rd.

    Take a look at the event website.

    Keep Performing,
    Moshe Kaplan. RockeTier. The Performance Experts.

    * Please notice that the event is organized by the IGT, of which I'm a board member, and that I have inside info that it's going to be at least as great as last year's event.

    Oct 23, 2009

    Service: Hoopoe - GPU Cloud

    I posted several notes in the last year regarding the niche market of GPU cloud services and why current cloud computing providers such as Amazon AWS that are based on the XEN and VMware virtual instances (hypervisor based) cannot fit them.

    Hoopoe is a new service that is focused on this niche market and provides GPU services in the cloud based on NVIDIA TESLA CUDA devices. This service is integrated with Amazon S3.

    Hoopoe will be presented by Mordechai Butrashvily at the next IGT GPGPU meeting on Nov 10, 2009, 14:30-17:00.

    To reserve your place, send your contact details to info at grid dot org dot il

    Keep Performing,
    Moshe Kaplan. RockeTier. The Performance Experts.

    Sep 26, 2009

    Load Balancer: Pay less Do More

    When we face the build-up of a new mega system that needs to handle dozens of Gb/s of traffic, support both HTTPS and HTTP and perform advanced logic such as Comet, our main obstacle is how to design it to keep performance high and price low.

    Well, if you are dealing with such a system (I've seen several of these in the last few months) you are probably already familiar with hardware load balancers, software load balancers, acceleration servers (SSL proxies), caching servers, DNS round robin and CDNs. If you are not sure yet what to do with these components and why, stay with us...
    However, please notice that the following is a short list of various technologies available in the market, each with a bottom line; this is not a complete list and each case should be analyzed on its own merits (I think I should hire a lawyer next time...). If you want to learn more, feel free to read Willy Tarreau's article "Making Applications Scalable with Load Balancing", which covers the overall aspects of these technologies.

    DNS Round Robin: 
    Major Pros: Cheap and simple.
    Major Cons: Not dynamic (unless you monitor and manually change it), and if a server fails, clients will have major problems acquiring a new server address.
    Bottom Line: Use it only when servers are for sure up and running (meaning that every DNS node is HA). Very useful to balance several data centers worldwide.

    Hardware Load Balancers:

    Major Pros: High throughput (10Gb/s and counting), low latency and built-in HA using VRRP.
    Major Cons: Price ($$$).
    Bottom Line: Use it for high throughput low latency load balancing or in other words for layer 3/4 load balancing in the data center gateway to balance the traffic between the servers. Cisco, Radware and F5 are good examples.

    Accelerators (SSL Proxies and Compression):

    Major Pros: Removes overhead from the application servers and keeps traffic as small as possible.
    Major Cons: Another layer in the system.
    Bottom Line: Use a stack of Apache+ModSSL servers to encrypt/decrypt and compress/decompress behind the HW load balancer keeping cost low and performance high.

    Software Load Balancers:

    Major Pros: Low cost.
    Major Cons: Slow and low throughput.
    Bottom Line: Use it when load balancing is almost in the application level (Layer 7) like HTTP Redirect. HAProxy and Apache mod_proxy are good examples.

    CDN (Content Delivery Network or Static Files):

    Major Pros: Reduces the number of servers in the system.
    Major Cons: Another layer in the system.
    Bottom Line: Use commercial CDN (Amazon S3 and CloudFront are good examples) or dedicated lightweight HTTP like lighttpd to serve static content. It is also recommended to convert dynamic content to static one if possible.

    Application Server:

    Major Pros: Well we hope at least part of your system is dynamic...
    Major Cons: You know, performance boosting is not a joke after all...
    Bottom Line: Use a system optimized from the system level to the application level, including caching and in-memory databases, to serve the dynamic part of the application.


    Database:

    Major Pros: Well, we hope at least part of your system is dynamic... (X2).
    Major Cons: You know, performance boosting is not a joke after all... (X2).
    Bottom Line: If you plan a very large database, we recommend planning for HA and Sharding from day 1. It's not so difficult and it'll save you a lot of work and sleepless nights in the future. And yes, you can use commodity databases such as MySQL.
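
Static horizontal sharding from day 1 can be as simple as a modulo rule over the user id. A minimal sketch; the connection strings are placeholders, not real servers:

```python
# Minimal sketch of static horizontal sharding: each user id is mapped
# to one of N MySQL shards by a modulo rule, so every lookup for the
# same user always lands on the same shard.
SHARDS = [
    "mysql://db0.internal/app",
    "mysql://db1.internal/app",
    "mysql://db2.internal/app",
    "mysql://db3.internal/app",
]

def shard_for(user_id):
    return SHARDS[user_id % len(SHARDS)]

print(shard_for(7))
```

The trade-off of the static rule is that growing the shard count later requires resharding, which is why planning the layout from day 1 matters.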

    The bottom line: wise use of each component can lead you to a highly available system while keeping your budget low.

    Keep Performing,
    Moshe Kaplan. RockeTier. The Performance Experts.

    Sep 23, 2009

    OpSource Cloud: New Cloud Provider in the Hood

    Just a few weeks after Amazon exposed its Virtual Private Cloud, OpSource, a respected hosting provider that targets the SaaS market, is exposing its own solution aimed at the same market. OpSource will reveal its product in a webinar on Wednesday, October 7, 2009, 9:00am PDT / Noon EDT.
    Stay tuned,

    Keep Performing,
    Moshe Kaplan. RockeTier. The Performance Experts.

    Sep 17, 2009

    The Basics of SQL Server Performance

    When you first hit a SQL Server that cannot serve even a single additional hit, you should start with the following basic methods:
    1. Analyze the application and understand which business processes are the most massive, and analyze their performance. Odds are high that the problem lies there.
    2. Detect open connections using the Activity Monitor.
    3. Check object execution times using the SQL Server reports (of course, assuming you are lucky enough that all SQL Server access is done using stored procedures).
    4. Implement profiling and check which queries are the most frequent and the most time consuming.
    5. Detect deadlocks (a good sign that this is the case is low CPU utilization).
    6. Detect the slowdown source by using the following query (thanks to Henk van der Valk):
    -- Uncomment the reset or capture the wait stats before you start the batch to investigate:
    -- DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR)
    -- DBCC SQLPERF('sys.dm_os_latch_stats', CLEAR)

    -- show the sql waitstatistics:
    SELECT wait_type
    , SUM(waiting_tasks_count) AS waiting_tasks_count
    , SUM(wait_time_ms) AS wait_time_ms
    , SUM(max_wait_time_ms) AS max_wait_time_ms
    , SUM(signal_wait_time_ms) AS signal_wait_time_ms
    , SUM(wait_time_ms/NULLIF(waiting_tasks_count,0)) as 'Avg_wait_time_ms per waittype req.'

    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN ('CLR_SEMAPHORE', 'LAZYWRITER_SLEEP', 'SQLTRACE_BUFFER_FLUSH',
        'WAITFOR', 'BROKER_TASK_STOP') -- benign wait types; the original list is longer
    AND wait_type NOT LIKE 'XE%'
    AND wait_type NOT LIKE 'PREEMPT%'
    AND wait_type NOT LIKE 'SLEEP%'
    AND wait_type NOT LIKE '%REQ%DEAD%'
    AND waiting_tasks_count > 0
    GROUP BY wait_type
    ORDER BY wait_time_ms DESC

    After taking these initial steps, you have finally gathered enough data to solve the issue...

    Keep Performing,
    Moshe Kaplan. RockeTier. The Performance Experts.
    moshe at

    Infinispan: Open Source is getting into the Data Grid Market

    I came across Pawel Plaszczak's blog post describing a new product from JBoss named Infinispan (still in beta), which aims at the data grid market (seems like things are getting warmer). Pawel also provided a deep comparison between Infinispan and leading commercial products such as Oracle Coherence and GigaSpaces XAP.

    This product is still in beta and is missing features that exist in the commercial products, but it's worth taking a look at it.

    Keep Performing!
    Moshe Kaplan. RockeTier. The Performance Experts.

    Sep 15, 2009

    MySQL Slow Connections

    One of our clients had a performance issue with slow connections to their MySQL database.
    The system configuration is MySQL 5.1 on Red Hat; the application is Java/Tomcat based, and ORM is done using iBatis.

    Several things can be done to solve this issue (making the connection open in a few ms instead of 9 seconds):
    1. Use skip-name-resolve in the my.cnf file in order to avoid the DNS queries that slow down the connections (a my.cnf example can be found in,28690). Another workaround is writing the host's resolution in the server's hosts file.
    2. Use MySQL Connector/J 5.1.17, which makes iBatis run twice as fast as 5.1.16.
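
For reference, the relevant my.cnf fragment for point 1 looks like this. Note that with this setting, MySQL grants must be defined by IP address (or localhost) rather than by host name:

```ini
# my.cnf sketch: skip reverse-DNS lookups on every new connection.
[mysqld]
skip-name-resolve
```
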
    That's all for now, feel free to add your recommendations

    Keep Performing,
    Moshe Kaplan. RockeTier. The Performance Experts.

    Aug 22, 2009

    ITIL, Performance Boosting and Presentations


    In the past few months RockeTier has worked (and keeps working) with several large organizations on several aspects: 1) designing new systems that are capable of doing more based on commodity hardware; 2) boosting existing software performance (lean projects); and 3) establishing processes in large organizations that support the performance life cycle, from event management and problem management to establishing continuous performance boosting of the organization's systems from RFI to production (ITIL oriented projects).

    We decided to share with you a presentation given to a large telecom oriented company that is considering these days getting into performance boosting and lean projects for its main product lines.

    Keep Performing,
    Moshe Kaplan. RockeTier. The performance and cloud experts.

    Aug 18, 2009

    Shared Cache: The Windows and .Net memcached

    Hi again,

    In-memory databases and caches are a must in any large scale system these days (you can ask the Facebook architects about it). The Linux guys have memcached, and there are many other products (with many more features, of course) such as GigaSpaces XAP, Oracle Coherence and ScaleOut. However, since Microsoft has not yet released the long expected Velocity product (will it ever reach production?), it seems there is a clear winner for the Windows environment: SharedCache. This product is a kind of memcached port to native managed .Net code, and it is licensed under the LGPL!
    SharedCache is already in production in several major sites and it is fully documented. The product supports all major requirements of a caching product, including:
    1. Partitioning - just like sharding, every server takes care of part of the data, and the application servers talk to every cache server (unless, of course, you were wise enough to shard the application servers as well).
    2. Replicated caching - keeping data in multiple instances, making sure data will always be available and that lots of reads will be handled correctly.
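
The replicated mode in point 2 can be sketched as "write to all nodes, read from any surviving node". A toy illustration, with plain dicts standing in for the cache servers:

```python
# Sketch of replicated caching: every write goes to all cache nodes, so
# a read can be served by any node that is still up. Dicts stand in for
# the real cache servers.
nodes = [{}, {}, {}]

def replicated_set(key, value):
    for node in nodes:
        node[key] = value

def replicated_get(key, live_nodes):
    # Any surviving replica can answer.
    for node in live_nodes:
        if key in node:
            return node[key]
    return None

replicated_set("session:9", "data")
# Even when the first nodes are down, the value is still served:
print(replicated_get("session:9", nodes[1:]))
```
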
    I recommend downloading and testing this product to see if it fits your product/service needs.

    Keep Performing,
    Moshe Kaplan. RockeTier. The Performance Experts.

    Aug 13, 2009

    Cloud Computing and Windows 2008


    It has been a long time since the last post... more working, less talking...

    This time we'll talk about a real pain in the Microsoft camp: why is Windows 2008 rarely supported by cloud providers such as Amazon AWS/EC2, AppNexus and Rackspace Cloud?

    One of the few exceptions is Microsoft Azure which is still in CTP and is using a modified Windows 2008 version.

    New features, New challenges...
    Windows 2008, which has a lot of new features such as safer service shutdown and a kernel transaction manager, also includes a new activation method (which is probably familiar to those of you who use Vista).

    According to my conversation with a leading cloud provider, this new activation method prevents them from generating images with predefined keys. Therefore, they cannot provide on-demand Windows 2008 instances. The cloud providers are currently in advanced negotiations with Microsoft, and meanwhile provide only Windows 2003 instances.

    The Question
    Are there hidden causes behind this Microsoft restriction? Or will a solution be found before Azure RTM? Only time will tell...

    Keep Performing,
    Moshe Kaplan. RockeTier - The Performance and Cloud Experts.

    Jul 3, 2009

    Alphageeks Event and Video


    Two weeks ago, just a few days before the Java Technology Day, I gave a presentation at a lovely new geeks meetup group named "AlphaGeeks". I presented the key issues in developing high load systems based on the Internet giants' architectures, including sharding, in-memory databases and cloud computing, and the key concepts that helped us design and implement a 1 billion events per day ad network system for one of our clients.
    This presentation was recorded and is available here. Please notice that the lecture is in Hebrew.

    Keep Performing,
    Moshe Kaplan. RockeTier. The Performance Experts.

    Jul 1, 2009

    Getting out more connections from your Windows server


    One of our clients does a lot of processing on his servers, managing to count a large number of impressions every second. Still, each of these servers showed minor CPU utilization (not crossing the 15% line).
    He wanted to do more! So did we...
    The obvious suspect was the network: the servers were pushing a lot of traffic (X0Mb/s per server) and in peak hours were even dropping connections.

    The solution was updating the MaxUserPort registry value in the Windows Registry to a more reasonable value that fits the client's banner network system behavior. This value controls the number of concurrent TCP sessions the server can handle. This can be very relevant for application servers that work with back-end servers such as databases. See more on David Wang's blog.
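
For reference, raising MaxUserPort can be done from the command line. The value below is just an example (65534 is the allowed maximum), and a reboot is required for it to take effect:

```bat
REM Raise the dynamic port ceiling (Windows 2003 and earlier).
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters ^
    /v MaxUserPort /t REG_DWORD /d 65534 /f
```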

    Update 1: If you are using IIS, you are probably using ASP.NET. In that case, take a look at this Technet article to better configure the web.config.

    The MS recommended configuration values are:
    • "Set the values of the maxWorkerThreads parameter and the maxIoThreads parameter to 100.
    • Set the value of the maxconnection parameter to 12*N (where N is the number of CPU cores that you have).
    • Set the values of the minFreeThreads parameter to 88*N and the minLocalRequestFreeThreads parameter to 76*N.
    • Set the value of minWorkerThreads to 50. Remember, minWorkerThreads is not in the configuration file by default. You must add it."
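
Plugged into web.config for a 2-core server (N = 2), those recommendations look roughly like this; scale the three N-dependent values with your own core count:

```xml
<!-- Sketch for N = 2 CPU cores. -->
<configuration>
  <system.net>
    <connectionManagement>
      <!-- maxconnection = 12 * N -->
      <add address="*" maxconnection="24" />
    </connectionManagement>
  </system.net>
  <system.web>
    <processModel maxWorkerThreads="100" maxIoThreads="100"
                  minWorkerThreads="50" />
    <!-- minFreeThreads = 88 * N, minLocalRequestFreeThreads = 76 * N -->
    <httpRuntime minFreeThreads="176" minLocalRequestFreeThreads="152" />
  </system.web>
</configuration>
```
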
    UPDATE: if you are using Windows 2008, you should use the updated netsh API:
    1. netsh int ipv4 show dynamicport tcp
    2. netsh int ipv4 show dynamicport udp
    3. netsh int ipv6 show dynamicport tcp
    4. netsh int ipv6 show dynamicport udp
    5. netsh int ipv4 set dynamicport tcp start=10000 num=1000
    6. netsh int ipv4 set dynamicport udp start=10000 num=1000
    7. netsh int ipv6 set dynamicport tcp start=10000 num=1000
    8. netsh int ipv6 set dynamicport udp start=10000 num=1000

    Keep Performing,

    Jun 22, 2009

    Billion Events per Day, Israel 3rd Java Technology Day, June 22, 2009

    We presented today at Israel's 3rd Java Technology Day, the largest Sun Microsystems/MySQL event in Israel. We presented the essential parts of building a real life web/enterprise system that needs to handle 1 billion events per day (a case study from ad network billing systems). We covered internet adoption rates, load balancers (HAProxy, Apache, Radware, F5, Cisco), web servers, in-memory databases (IMDB, inc. Memcached, GigaSpaces, Terracotta and Oracle Coherence) and finally sharding (inc. vertical, static horizontal and dynamic). A great example of a performance boosting architecture.

    Feel free to take a look at the presentation:

    Jun 16, 2009

    Make Your Web Server Do More

    Hi All,
    This time we at RockeTier want to reveal a small secret: you should choose lighttpd.
    Lighttpd is a web server optimized for high performance environments. It has CPU-load effective management and a small memory footprint compared to other web servers, and it was chosen by many Web 2.0 leaders such as meebo, Wikipedia and YouTube. It is well optimized to handle AJAX applications and a large number of sessions.
    Its license is liberal (the revised BSD license) and the product is gaining significant acceptance in the market.

    We also recommend you take a look at their benchmarks. Somehow, most of the players in the market usually do not expose theirs...
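
To give a feel for how little configuration it takes, here is a minimal lighttpd.conf sketch for serving compressed static content; the paths are illustrative:

```
# Minimal lighttpd sketch: static files with gzip compression.
server.modules       = ( "mod_compress" )
server.document-root = "/var/www/static"
server.port          = 80
compress.cache-dir   = "/var/cache/lighttpd/"
compress.filetype    = ( "text/html", "text/css", "application/javascript" )
```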

    Keep Performing,
    Moshe Kaplan. RockeTier. The Cloud and Performance Experts.

    Jun 2, 2009

    Tuning LAMP Architecture

    It seems that more and more firms are moving to the LAMP architecture these days. Therefore, there is a buzz around Apache, Linux, PHP and MySQL tuning.
    However, since every technology has its own limitations, the need for extreme architectures that overcome these limitations is on the rise as well (I have written and presented quite a lot regarding sharding, cloud computing and in-memory databases in the last few days).

    However, it is always good to get back to basics and remember the small advices from smart people on how to tune the products themselves. Therefore, I gathered several recommendations to help you get a little bit more from your own application:
    LAMP tuning: I would add to this set of articles that you should make sure you place all your rich content on a CDN, Amazon S3 or any other alternative, and not on your own server.
    Some information about Apache tuning, and even more about HAProxy tuning (RockeTier's favorite software load balancer).

    Keep Performing
    Moshe Kaplan. RockeTier. The Cloud and Performance Experts.

    May 13, 2009

    Cloud Slam 09': The Pareto Illusion

    A few weeks ago we presented at Cloud Slam 09' a cost analysis of cloud services, and how costs can be minimized using performance boosting. Based on participants' questions, we also discussed databases in the cloud and sharding in this presentation.
    Feel free to take a look at the recorded presentation as well as the abstract, both of which are available at the Cloud Slam site.

    The Pareto Illusion - Why we end up paying too much for cloud services and what can we do about it? Moshe Kaplan and Ayal Baron

    Video of the session:

    Presentation Abstract:
    Cloud computing is a whole new game: we are no longer talking about equity and CapEx, but about OpEx.

    So why are you still paying so much every month for your cloud services? How can it be reduced? What are the industry's state of the art methodologies for gaining more from your cloud service provider? How do you do more with the same, and gain a better ROI for your project? And last but not least, how do you help save our world?

    In this presentation we'll discuss how you can make your software more efficient, and how to focus on the core components of the system, achieving an 80% boost with 20% of the effort.

    Keep Performing,
    Moshe Kaplan. RockeTier. The Cloud and Performance Experts

    Apr 30, 2009

    Can you monitor Java performance w/o affecting the system itself?

    The Conservative Way
    Well, the popular way to monitor code performance and detect bottlenecks is using profilers. However, profilers insert small portions of code into the code base in order to measure each function's processing time. For functions that take relatively long to complete, this overhead can be neglected. In other cases, when a function completes in a very short time and is called many times, this behavior can lead to wrong conclusions.

    Usually, to overcome this issue, you should carefully tune what and when you measure, to keep the measurement overhead from skewing the results.
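To make the overhead concrete, here is a minimal Java sketch (not the code of any particular profiler; all names are illustrative) of what instrumentation effectively adds around every call: two clock reads plus bookkeeping, which for a very short, very hot function can rival the cost of the function itself.

```java
// Minimal sketch of instrumenting-profiler overhead (illustrative only).
public class ProfilerOverhead {
    static long totalNanos = 0;
    static long callCount = 0;

    // The function under measurement: very cheap, called very often.
    static int tinyFunction(int x) {
        return x + 1;
    }

    // What instrumentation adds around every call: two clock reads plus
    // bookkeeping. For a function this small, the measurement cost can
    // exceed the cost of the function itself.
    static int instrumentedCall(int x) {
        long start = System.nanoTime();
        int result = tinyFunction(x);
        totalNanos += System.nanoTime() - start;
        callCount++;
        return result;
    }

    public static void main(String[] args) {
        int acc = 0;
        for (int i = 0; i < 1_000_000; i++) {
            acc = instrumentedCall(acc);
        }
        // The reported time includes the clock-read overhead on every call,
        // which is why short, hot functions look more expensive than they are.
        System.out.println("calls=" + callCount + " measured nanos=" + totalNanos);
    }
}
```

This is why sampling or thread-dump based approaches, like the IBM tool below, can be less distorting for such workloads.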

    The Alternative Way
    We at RockeTier are discussing several business opportunities with IBM these days. During these discussions, we came across a great tool by Jinwoo Hwang of IBM that offers an alternative way: it analyzes Windows performance logs (yes, they support only Windows) and Java thread dumps, and automatically detects the Java threads that consume the majority of system resources.
    Even more interesting, this tool supports both system monitoring and Java thread monitoring, and each can be done separately. Therefore, you gain flexibility: detect the problematic code area and focus on it in your debug environment without affecting the production system at all.

    You can download this tool from the IBM site.

    Keep Performing,
    Moshe Kaplan. RockeTier. The Performance and Cloud Computing Experts.

    Apr 28, 2009

    Managing Your Resources in the Cloud

    One of the major issues while working in the cloud is managing your resources.

    When you have your servers in your own data center, you can touch them, see them and count them. And of course you pay a lot of equity for them.
    When migrating to the cloud your CapEx is finally zero; however, if you do not manage your resources wisely, your OpEx will soon rise and become larger than your old CapEx budget. Moreover, when working with a contractor (yes, your cloud computing provider is now your contractor), you should manage it as a contractor, meaning that you should monitor your resources and verify the service level.
    Moreover, many cloud clients will soon find themselves running dozens of servers, which in enterprises usually requires a significant effort, including command and control systems, a NOC, helpdesk and so on.

    So what can I do?
    Well, you have several options:
    A. Develop your own management console based on your cloud computing provider's API, monitoring each server's resources.
    B. Use enterprise-world command and control systems such as CA Unicenter, the IBM Tivoli suite, the HP OpenView suite, BMC or Microsoft.
    C. Use niche cloud monitoring systems. The largest and most significant player in this field is RightScale, which provides both monitoring and an auto-scaling service. Another new player in this market is cloudkick. This player, backed by the VC boutique Y Combinator, provides its monitoring, graphs and alerting system free of charge.

    Bottom line:
    Option A (develop your own solution) is too tedious and will walk you away from your main course of business. Option B is nice, but most of these players are focused on the on-premise market, and it will take them time to get into the cloud market. Moreover, most cloud computing clients are startups and SaaS providers, whose needs these massive command-and-control systems will not fit.
    Option C currently seems to be the right one if you have established your operations in the cloud. From our analysis, cloudkick is a nice start for a free service, but there is a way to go. For example, it is still missing the auto-scaling feature (if my CPU is over 80%, provision a new EC2 instance ASAP). So our current recommendation is RightScale.
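The auto-scaling rule mentioned above ("if my CPU is over 80%, provision a new EC2 instance") boils down to a very small piece of logic. A hypothetical sketch, with illustrative names and thresholds, not any vendor's API:

```java
// Hypothetical auto-scaling rule: scale out when average CPU crosses a
// threshold. Names and thresholds are illustrative only.
public class AutoScaleRule {
    static final double CPU_THRESHOLD = 0.80;

    // Decide how many instances we want, given the current count and the
    // average CPU utilization across the pool (0.0 - 1.0).
    static int desiredInstances(int current, double avgCpu) {
        if (avgCpu > CPU_THRESHOLD) {
            return current + 1;   // scale out by one instance
        }
        return current;           // steady state
    }

    public static void main(String[] args) {
        System.out.println(desiredInstances(4, 0.85)); // over threshold
        System.out.println(desiredInstances(4, 0.50)); // under threshold
    }
}
```

A real service would also handle scale-down, cool-down periods and provisioning latency, which is exactly the machinery RightScale sells.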

    Keep Performing
    Moshe Kaplan. RockeTier. The Performance and Cloud Experts.

    Apr 22, 2009

    Very large databases on the cloud: The Presentation

    This time I include the presentation that I presented on Monday: "How your very large databases can work in the cloud computing world?". It was a great presentation and the place was loaded with industry experts from various companies such as Amdocs, Panaya, SAP, Superfish and SUN.

    The presentation content can be found below, as well as the slide show:

    Cloud computing is famous for its flexibility, dynamic nature and ability to grow infinitely. However, infinite growth means very large databases with billions of records. This leads us to a paradox: "How can weak servers support very large databases, which usually require several CPUs and dedicated hardware?"
    The Internet industry proved it can be done. These days many of the Internet giants, processing billions of events every day, are based on cloud computing architectures such as sharding. What is Sharding? What kinds of Sharding can you implement? What are the best practices?

    Keep Performing,
    Moshe Kaplan. RockeTier. The Performance and Cloud Experts.

    Apr 19, 2009

    UPDATE: Very large databases on the cloud

    Due to massive registration, the lecture will be held in SUN Offices, HaManofim 9, 8th floor, Hertzelia Ind. Zone, Israel:

    Keep Performing,
    Moshe Kaplan. RockeTier. The Performance and Cloud Experts

    Apr 18, 2009

    Very large databases on the cloud

    This Monday, we'll present in the IGT cloud computing workgroup: "How your very large databases can work in the cloud computing world?". The presentation will be held alongside other presentations by Nati Shalom and Haim Yadid, market experts in the fields of performance and cloud computing, so it should be interesting to be there.

    How your very large databases can work in the cloud computing world?
    Moshe Kaplan, RockeTier, a performance expert and scale out architect
    Cloud computing is famous for its flexibility, dynamic nature and ability to grow infinitely. However, infinite growth means very large databases with billions of records. This leads us to a paradox: "How can weak servers support very large databases, which usually require several CPUs and dedicated hardware?"
    The Internet industry proved it can be done. These days many of the Internet giants, processing billions of events every day, are based on cloud computing architectures such as sharding. What is Sharding? What kinds of Sharding can you implement? What are the best practices?

    Date: Apr 20, 2009 14:00-17:00
    Location: IGT Offices, Maskit 4, Hertzelia Ind. Zone, Israel.
    Confirmation at:

    Keep Performing,
    Moshe Kaplan. RockeTier. The Performance and Cloud Experts.

    Apr 14, 2009

    MySQL Sharding

    A few weeks ago we had a presentation in the Israeli MySQL User Group, where we presented "How Sharding turned MySQL into the Internet de-facto database standard?"
    This presentation dealt with the common belief in the enterprise software world that MySQL cannot scale to large database sizes. The Internet industry proved it can be done. These days many of the Internet giants, processing billions of events every day, are based on MySQL. Most of these giants were able to turn MySQL into a mighty database machine by implementing Sharding.
    In the attached presentation from SlideShare we answer the following questions: What is Sharding? What kinds of Sharding can you implement? What are the best practices?

    Keep Performing,
    Moshe Kaplan. RockeTier. The Performance Experts.

    Apr 12, 2009

    Migrating to the Cloud and Staying Connected to the Enterprise...

    Integrating SaaS software with enterprise infrastructure software
    One of the major issues enterprise software companies face when migrating their product from an on-premise approach to a SaaS model is the tight integration required between the software and the enterprise infrastructure software.

    For example, most software systems require some kind of integration with the LDAP directory. HR software usually needs to sync its organizational tree with the LDAP, and knowledge management systems require authentication and authorization based on the LDAP directory as well.

    In most Microsoft oriented organizations the LDAP directory is implemented using Active Directory. This tight integration prevented large organizations from consuming web based software services and forced them to keep their old habits.

    Microsoft Federation Gateway
    However, things are changing as SaaS goes mainstream. Microsoft, with its Azure initiative, provided a solution for part of this challenge: Microsoft Federation Gateway. This service exposes several Active Directory functionalities through a stack of web services. These web services were built on the web service (WS-*) specifications, such as WS-Trust and WS-Security, so they are fully compliant with the major standards and can be used easily in a web environment.
    The Federation Gateway enables authentication against a 3rd-party directory, as well as retrieving and updating the user profile. Therefore, it can be used as a first step to provide SaaS customers with full integration with their existing infrastructure.

    I would like to thank Idan Plotnik, who is heading an exciting new startup these days, for bringing this solution to my attention.

    Keep Performing,
    Moshe Kaplan. RockeTier. The Performance Experts.

    Apr 4, 2009

    Recent Cloud News


    This time I enclose a short update regarding the cloud:

    1. IGT Cloud Investment Summit for Virtualization and Cloud/SaaS Based startups
    This one should draw some interest from companies seeking funding these days. The IGT is organizing an investment conference for startups in the cloud/SaaS niche market. It will take place in Tel Aviv on June 1st, 2009.

    2. Are games as a Service getting mainstream?
    We got used to interactive casual gaming instead of the good old solitaire. However, this time several startups (OnLive, Galkai) and larger firms (Sony, AMD) claim that even the most resource-demanding games can be run from the cloud, meaning we no longer need to upgrade to the latest graphics cards and CPUs, but can consume the best games through the web. They are using new compression methods and taking advantage of NGN networks (50Mbit to the home is almost here). Read more here (Hebrew) and in every blog covering CES 2009.

    3. The standards battle has just begun
    As in every newly emerging market, those who lag behind or feel threatened by the new trend band together into a standards group. This time players such as SUN, IBM, Cisco and HP have joined forces to close the gap using the Cloud Computing Manifesto. This group is doing its best to keep the cloud open and interoperable between cloud providers.
    The leading cloud computing providers, such as Amazon (the leader), Microsoft (Open... Linux... usually not), Google (a closed PaaS environment) and SalesForce (same), oppose this move.

    Keep performing,
    Moshe Kaplan. RockeTier. The Performance Experts.

    Mar 18, 2009

    The eBay way


    I received from Shachar Zehavi, our new director of R&D, a link to Chris Kasten's presentation regarding eBay

    In this presentation Chris presents a simple but very clever solution that supports 4 billion events per day on 25 commodity servers. Why use complex in-memory database solutions instead of MySQL as the ultimate grid solution?

    What are the key components in the eBay solution?
    1. In-memory database: implemented using the MySQL in-memory engine
    2. HA: implemented using MySQL replication between two different MySQL in-memory databases
    3. Persistence: implemented as a batch process once every 5 minutes; in this case the persistent store uses InnoDB
    4. Scalability: can be achieved using horizontal sharding
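Component 3 above (persistence as a 5-minute batch) can be sketched as follows. This is an illustrative outline, not eBay's code; the actual InnoDB write is stubbed out, and you would call flush() from a 5-minute timer:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of batch persistence: events accumulate in memory
// and are persisted in one bulk write every N minutes, instead of one
// disk write per event.
public class BatchPersister {
    private final List<String> buffer = new ArrayList<>();
    int flushes = 0;

    // Hot path: record the event in memory only (cheap and fast).
    synchronized void record(String event) {
        buffer.add(event);
    }

    // Called periodically, e.g. every 5 minutes: persist the whole batch
    // at once and return how many events were written.
    synchronized int flush() {
        int n = buffer.size();
        // persistToInnoDB(buffer);  // stand-in for the real batch INSERT
        buffer.clear();
        flushes++;
        return n;
    }
}
```

The trade-off is the one discussed in the HA questions further down this blog: a crash can lose up to 5 minutes of unflushed events.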

    The numbers described in this presentation (4 billion requests per day on 25 machines) remind me of another presentation, by Paul Strong, Distinguished Research Scientist at eBay. That presentation, at IGT2008 - The World Summit of Cloud Computing, described eBay's numbers and challenges, including 150 billion requests per day, which averages out to well over a million requests per second, a remarkable number.

    Keep Performing,
    Moshe Kaplan. RockeTier. The Performance Experts.

    Mar 14, 2009

    Interview: Crictor and the RockeTier 5 Steps Performance Boosting Methodology


    A week ago Crictor published an interview with me regarding the RockeTier 5-step performance boosting methodology (Hebrew). The interview covers our team, the methodology and our success stories.

    Crictor is an online channel focused on the Israeli Hi-Tech industry.

    Keep Performing
    Moshe Kaplan. RockeTier. The Performance Experts

    Mar 9, 2009

    Amazon Session Highlights

    As you may have read, we participated last week in the AWS session hosted by the IGT and SUN.

    Simone Brunozzi, Amazon Web Services Evangelist, was great, and he made several interesting points, including:
    1. Amazon will provide load balancing capability in the near future (you can work around this today using HAProxy, but it is always great to get things out of the box).
    2. Amazon supports availability zones: Amazon has several DCs in the US, and each availability zone is in a different DC, so you can have a multi-site architecture from day 1.
    3. Amazon suffered 12 hours of downtime in the last 3 years; the longest outage was about 2 hours. These numbers cover both intended and unintended downtime. That gives us about 99.95% availability, which is a very remarkable number for such scale at the given cost.

    Simone presented the Animoto case study: "from 80 servers to 3500 in 3 days" as well.

    Keep Performing,
    Moshe Kaplan. RockeTier. The Performance Experts.

    Feb 25, 2009

    Updated Knol: Microsoft Velocity


    I just updated my knol about Microsoft Velocity, Microsoft's in-memory database, cache and "grid" product.
    Have fun reading and commenting,

    Moshe Kaplan. RockeTier. The Performance Experts.

    Feb 23, 2009

    SQL Server Partitioning: The bad, the good and the evil

    Horizontal Sharding is used to separate rows between several tables based on applicative logic. In MS SQL Server 2005, Microsoft introduced a built-in mechanism named Partitioning to support this need without extra code in the business logic itself. This mechanism lets you decide in which filegroup each row will be placed, while still supporting regular queries to retrieve and update the data and boosting performance.
    Some syntax and code examples are available if you want to master this feature.

    What can be done with this great feature?
    1. You can break large tables into smaller chunks that fit the application logic (e.g., partitioning a client's data according to its id, or data according to date).
    2. You can put heavily accessed parts of the table onto fast storage, and less-accessed data onto slower, cheaper storage.
    3. You can speed up backups when partitioning static data by date.
    4. You can speed up many queries, including DELETE, SELECT, UPDATE and so on, given the right design.

    Pros (Why should I use SQL Server Partitioning rather than Horizontal Sharding?)
    1. Horizontal Sharding out of the box.
    2. A lot of thought and effort were invested in this feature to make it work in just a few lines of code.
    Cons
    1. Relatively new (well, not so new, since it was introduced in SQL Server 2005), and many issues were fixed only in SQL Server 2008.
    2. Requires Enterprise Edition (10X the licensing cost of Standard Edition, or in other words, $25K per CPU).
    3. Relatively complex (no support in Enterprise Manager, and few DBAs will be able to support it), so you should probably master the relevant white paper before taking it into production.
    4. Will bind you to SQL Server (Enterprise Edition).
    Industry Opinions:
    Brent Ozar: "outside of data warehouses, I like to think of partitioning as the nuclear bomb option. When things are going out of control way faster than you can handle with any other strategy, then partitioning works really well. However, it’s expensive to implement (Enterprise Edition plus a SAN) and you don’t want to see it in the hands of people you don’t trust."

    Bottom line:
    Ask your DBA, and your clients' DBAs, whether they feel safe with this feature. If the answer is no, consider choosing another solution.

    I hope now you have all the information to make the decision by yourself. Otherwise, post your comments and we'll be glad to help you,

    Keep Performing,
    Moshe Kaplan. RockeTier. The Performance Experts.

    UPDATE 2: IGT Hosting Amazon AWS Hands-on workshop will start at 18:00


    Great News,
    After all the Amazon meetups, the IGT is going to host Amazon AWS Hands-on workshop,
    It's a great opportunity to meet Simone Brunozzi, Amazon Web Services Evangelist - Europe, and have real life hands on experience as well as asking questions.

    Date: Mar 3, 2009 18:00 - 21:00
    Location: SUN Offices, Manofim 9, 8th Floor, Hertzelia, Israel
    Organizer: IGT

    The preliminary agenda:
    18:00- 18:30 Reception
    18:30 - 19:00 Intro to Amazon AWS
    19:00 - 20:30 Hands-on AWS Workshop
    Account management
    S3 – details and examples, using Firefox S3 organizer
    EC2 – details and examples, Linux and Windows, using the AWS Console
    Cloudfront – details and examples, using Firefox S3 organizer and/or other tools
    20:30 - 21:00 AWS Q&A

    Moshe Kaplan. RockeTier. The Performance Experts.

    Feb 21, 2009

    Case Study: Handle 1 Billion Events Per Day Using a Memory Grid


    We just published our case study on performance-boosting an affiliate marketing billing system, and we got a great post from Todd Hoff about it.

    The case study main highlights are:
    1. How to grow from a 1-million-events-per-day system to 1 billion events per day
    2. How to keep costs low and avoid millions of USD in equity investments
    3. How to grow fast while keeping the road map close to the business objectives

    Please feel free to read our performance boosting case study and comment regarding it in this blog.

    Moshe Kaplan
    RockeTier. The Performance Experts.

    Update #1: Answers to questions I received through the email:

    How do we provide HA?

    We usually deploy the systems in an active/active configuration.

    What about crash recovery? If counters are kept in memory only, there is a time window where a crash will lose the updated counters. Is the client OK with losing some updates, or do you address it somehow?
    First of all, many of our clients prefer to risk the loss of the data held in memory. This is based on simple arithmetic: if you lose 1 minute of business operation on a single server handling about 400 events/second, that is roughly 24,000 events; if every 1,000 events generate a revenue of a few dozen cents, you would rather lose those few bucks than replicate every part of the system.

    How do you make sure that other requests to the failed server still get answered?
    The load balancer is smart enough to detect the server failure, change the rotation algorithm, and make sure an alternative server takes care of the processing.
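As an illustration (not HAProxy's actual implementation, and all names here are invented), a rotation that skips servers marked down by the health check might look like this:

```java
// Illustrative round-robin rotation that skips unhealthy servers, so
// requests aimed at a failed server land on an alternative one.
public class Rotation {
    private final boolean[] healthy;
    private int next = 0;

    Rotation(int servers) {
        healthy = new boolean[servers];
        java.util.Arrays.fill(healthy, true);
    }

    // Called by the health check when a server stops responding.
    void markDown(int server) {
        healthy[server] = false;
    }

    // Return the next healthy server in rotation, or -1 if none remain.
    int pick() {
        for (int i = 0; i < healthy.length; i++) {
            int candidate = (next + i) % healthy.length;
            if (healthy[candidate]) {
                next = (candidate + 1) % healthy.length;
                return candidate;
            }
        }
        return -1;
    }
}
```

With server 1 of 3 marked down, successive picks alternate between servers 0 and 2, which is the "change the rotation algorithm" behavior described above.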

    How do we support Multi datacenter HA?
    Multi-datacenter HA can be achieved using geo-clustering.

    What about customers that require zero loss of data?
    Customers that require zero data loss are served by 1) GigaSpaces XAP, which supports data synchronization on the fly between two servers, keeping the two servers synchronized to the last operation, and 2) RDMA.

    Feb 19, 2009

    Lecture: MySQL Sharding


    In the next Israel MySQL User Group, RockeTier will present:

    "How Sharding turned MySQL into the Internet de-facto Database Standard?"
    A common belief in the enterprise software world is that MySQL cannot scale to large database sizes. The Internet industry proved it can be done. These days many of the Internet giants, processing billions of events every day, are based on MySQL. Most of these giants were able to turn MySQL into a mighty database machine by implementing Sharding.
    What is Sharding? What kinds of Sharding can you implement? What are the best practices? All these issues will be addressed in this lecture by Moshe Kaplan from RockeTier, a performance expert and scale-out architect.

    When: Wed, March 4th
    Where: InterBit, 6 Ha`chilazon St., Ramat Gan, Israel, 03-7529922

    Feb 3, 2009

    Google Performance Lags Behind


    We always tend to think that the mighty Internet giants are free of bottlenecks. Well, these guys spend a lot of money and have the best people out there (except for us, of course :-).

    However, if you use Google Analytics on your site (I admit, we at RockeTier measure everything as anonymous profilers) or just happen to browse a site that uses it, you have probably noticed that it takes a pretty long time to connect and download the JS files that measure your stay on the site.

    Well, now it's scientific: Blogoscoped found that the Google Analytics scripts load 27% slower at peak hours in North America, and 97% slower at peak hours in Europe!
    This behavior is a real headache if you base your online marketing decisions on Google Analytics data.

    A nice solution is keeping a copy of the script on your own servers, making sure you do not depend on Google's performance.

    Moshe. RockeTier. The Performance Experts.

    IGT Hosting Amazon AWS Hands-on workshop


    Great News,
    After all the Amazon meetups, the IGT is going to host Amazon AWS Hands-on workshop,
    It's a great opportunity to meet Simone Brunozzi, Amazon Web Services Evangelist - Europe, and have real life hands on experience as well as asking questions.

    Date: Mar 3, 2009 10:00 - 13:00
    Location: IGT Office, Maskit 4, 5th Floor, Hertzelia
    Organizer: IGT

    The preliminary agenda:
    10:00 - 10:30 Intro to Amazon AWS
    10:30 - 12:00 Hands-on AWS Workshop
    12.00 - 13:00 AWS Q&A

    Moshe Kaplan. RockeTier. The Performance Experts.

    Feb 2, 2009

    The Mystery of System Calls


    It's always a pleasure to have real-life contributions from colleagues in the industry. This time, Rubi Dagan, a system architect and senior team leader at Metacafe, one of the world's largest video sites, shares with us "the mystery of system calls".

    Metacafe's software system made many calls to time(), and under stress this was felt much more strongly. For example, in figure 1 you can see that 35% of the syscall time was wasted on time().

    However, when taking a look at several other servers, it was found that all requests were being processed without any calls to time() at all! (see figure 2). Hint: to gather this information, run strace in summary mode against all the httpd processes, e.g.: strace -f -c $(ps ax | grep '[h]ttpd' | awk '{print "-p", $1}')

    Solving the mystery...

    The solution lies in the BIOS, in an option named HPET (High Precision Event Timer). When it is enabled, the kernel can satisfy time lookups quickly without the cost of the time() syscall: the hardware timer tracks the time instead of the kernel doing it. Note that HPET support must be enabled in the kernel as well.

    That's it: instead of Apache or another program spending time in time syscalls, the HPET mechanism handles it. The bottom line is fewer time() system calls. See also the thread on StackOverflow.

    Bottom Line
    This new configuration reduced syscall time by ~30% or more, which translates into a great performance impact on Metacafe's servers.

    P.S. We'll be glad to feature other cases from the industry here. Don't be shy about submitting your case and contributing to the community.

    Best Regards,
    Moshe Kaplan. RockeTier. The Performance Experts

    Jan 27, 2009

    Hibernate Composite Keys Bug


    While working over the last few weeks with the latest GigaSpaces XAP version, we were disappointed to find that it does not support composite keys (2 or more fields that together serve as the table's primary key) when using the default Hibernate connector.

    When we investigated the issue, we found that since GigaSpaces relies heavily on Hibernate, and since Hibernate does not support this, GigaSpaces does not support it either. UPDATE: GigaSpaces does not rely heavily on Hibernate; rather, it provides Hibernate as its default out-of-the-box ORM mapping solution.

    It seems that we are not the only ones who suffer from this issue in Hibernate (see the error code: Expected positional parameter count: 1, actual parameters: [Parent@bec357b] [from Child this where = ?])
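For context, a composite key in Hibernate is usually modeled as a separate serializable id class whose equals() and hashCode() compare by value; the class and field names below are illustrative, not taken from the project above:

```java
import java.io.Serializable;
import java.util.Objects;

// Illustrative composite-key id class in the shape Hibernate expects:
// serializable, with value-based equals() and hashCode().
public class ChildId implements Serializable {
    private final long parentId;
    private final long seq;

    public ChildId(long parentId, long seq) {
        this.parentId = parentId;
        this.seq = seq;
    }

    // Value equality is mandatory for composite keys: two id objects
    // holding the same field values must represent the same key.
    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ChildId)) return false;
        ChildId other = (ChildId) o;
        return parentId == other.parentId && seq == other.seq;
    }

    @Override
    public int hashCode() {
        return Objects.hash(parentId, seq);
    }
}
```

Even with a correctly shaped id class, the positional-parameter handling above can still fail, which is what the error message in the linked thread shows.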

    We are working on an application-level workaround, but due to time pressure we have not yet found the time to dig into the Hibernate source code and fix the issue there.

    Hopefully somebody will solve it before we get to it again :-)

    UPDATE: Please note the thread regarding this post, which discusses GigaSpaces' other external data sources, virtual fields, how to overcome this Hibernate composite-keys limitation, and their drawbacks.

    Best Regards,
    Moshe. RockeTier. The Performance Experts.

    Jan 26, 2009

    How much can you get out of your MySQL


    As you probably understand, our team is, as always, taking MySQL to its limits these days. It is our pleasure to share the insights with you.

    First, let's state a small assumption about the process: we were dealing with an InnoDB configuration (which MySQL engine should I choose?).

    MySQL Performance Benchmarks:
    I would like to refer you to several case studies and benchmarks of MySQL:
    1. In our tests while boosting an OLAP mechanism, we reached 12 GROUP BY queries/second on a 1-million-record table.
    2. 17 transactions/second were reached on a basic machine (PIII, 256MB RAM, on a 100K-record table). However, the benchmark performer never revealed his exact benchmark.
    3. A MySQL performance benchmark paper from 2005 reached 500 reads/second on a 10-million-record table on an 8-CPU machine (8 MySQL instances), meaning about 60 reads/second per MySQL instance in a well-optimized benchmark.
    4. A Sun benchmark on a Solaris machine with 4 dual-core AMD Opteron 875 CPUs and 16GB RAM (8 MySQL instances) reached 1800 (RW) and 2900 (RO) transactions/second overall, or about 220 (RW) and 350 (RO) transactions/second per instance, on a 1-million-record table.

    Bottom Line:
    As a rule of thumb, we recommend not using tables with more than a few dozen million records, and not expecting more than a few dozen reads per second per MySQL instance.

    Other useful issues for building your scalable software system:
    1. MySQL supports triggers. However, we recommend avoiding this feature due to performance issues. If you feel any need for triggers, please implement the logic at the BLL level instead.
    2. MySQL supports identity columns via @@Identity. However, please note that it does not work well with triggers (note our warning above).
    3. MySQL supports XML, and both SQL Server "FOR XML" and "OPEN XML" can be implemented using various methods (with critiques). However, we do NOT recommend these methods in either MySQL or SQL Server. The database is usually the bottleneck of any system; therefore, you want to avoid any unnecessary operation in the database.
    4. INSERT DELAYED: very useful when you would like to insert into a table (e.g., log tables and queue-like tables) and want to avoid waiting until the INSERT is processed. DELAYED inserts are performed when the server has a free window, not immediately.
    5. Multiple inserts: a multi-valued INSERT performs best in MySQL.
    6. Transactions are supported in MySQL; use them when needed.
    7. Try/catch error handling is supported as well.
    8. MySQL row sizes:
    - Regular fields take up to 8KB per row.
    - You can use VARBINARY, VARCHAR, BLOB and TEXT columns to get more.
    - 1000 columns is the limit per table.
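    Tip 5 above (multi-valued INSERT) means sending one statement for many rows instead of one statement per row. A sketch of building such a statement (the table and column names are illustrative; in production, bind the values through a PreparedStatement instead of concatenating them):

```java
import java.util.List;

// Illustrative builder for a multi-valued INSERT: one round trip and one
// statement parse for N rows, instead of N single-row INSERTs.
public class MultiInsert {
    // rows: each long[2] holds the values for columns (a, b).
    static String build(String table, List<long[]> rows) {
        StringBuilder sql =
            new StringBuilder("INSERT INTO " + table + " (a, b) VALUES ");
        for (int i = 0; i < rows.size(); i++) {
            long[] r = rows.get(i);
            if (i > 0) sql.append(", ");
            sql.append("(").append(r[0]).append(", ").append(r[1]).append(")");
        }
        return sql.toString();
    }
}
```

Batching like this cuts per-statement overhead (parsing, network round trips), which is why it performs best in MySQL.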

    Moshe. RockeTier. The Performance Experts.

    Jan 25, 2009



    It was only a matter of time, after the MapReduce concept was presented by Google and Hadoop (yes, I still owe you a post about it) was taken public by Yahoo! and Apache, before a solution fitting online queries, and not only batch processing, would be presented to the market.

    Greenplum, a Silicon Valley based startup, provides a data warehouse and analytical database that can scale to petabytes while processing a single query in parallel (or in other words, doing SQL the MapReduce way).

    Greenplum already has several interesting clients including NYSE, Reliance communications (1 billion CDRs per day) and FOX.

    Take a look at this company,

    Moshe. RockeTier, The Performance Experts.

    Jan 24, 2009

    Java, MySQL and Large Datasets Retrieval


    As told before, it was a MySQL week,

    We had major work this week solving a performance issue in a reporting component for one of our clients. Since the existing component worked directly against the raw database, it faced degraded performance as the database and the business grew.

    Therefore, we designed an OLAP solution that extracts information from the raw tables, groups and summarizes the data, and then creates a compact table from which the data can be easily read.

    However, the database is MySQL, and we used Java to implement this mechanism. Unfortunately, it seems that Java and MySQL don't really like each other, or at least don't like large tables: when you try to extract records out of a large MySQL table, you receive an out-of-memory error in the execute and executeQuery methods, because the MySQL JDBC driver buffers the entire result set in memory by default.

    How to overcome this?
    1. As suggested by databases&life, set the fetchSize to Integer.MIN_VALUE to make the driver stream results row by row. Yes, I know it looks like a bug rather than a feature, but it solves the issue:

    The reason is this code in the MySQL JDBC driver:
    protected boolean createStreamingResultSet() {
        return ((resultSetType == ResultSet.TYPE_FORWARD_ONLY)
            && (resultSetConcurrency == ResultSet.CONCUR_READ_ONLY)
            && (fetchSize == Integer.MIN_VALUE));
    }

    And the solution is:

    public void processBigTable() throws SQLException {
        PreparedStatement stat = c.prepareStatement(
            "SELECT * FROM big_table",
            ResultSet.TYPE_FORWARD_ONLY,
            ResultSet.CONCUR_READ_ONLY);
        stat.setFetchSize(Integer.MIN_VALUE);  // switch the driver to streaming mode
        ResultSet results = stat.executeQuery();
        while (results.next()) {
            // process each row as it streams in
        }
        results.close();
        stat.close();
    }
    2. The other option is doing the fetch at the application level: each time, setMaxRows is set to N, and records are extracted only if their id is larger than the last one previously extracted.

    public void processBigTable() throws SQLException {
        long nRowsNumber = 1;
        long nId = 0;
        while (nRowsNumber > 0) {
            nRowsNumber = 0;
            PreparedStatement stat = c.prepareStatement(
                "SELECT * FROM big_table WHERE big_table_id > " + nId
                + " ORDER BY big_table_id");
            stat.setMaxRows(10000);  // N: the chunk size per round trip
            ResultSet results = stat.executeQuery();
            while (results.next()) {
                nId = results.getLong(1);  // remember the last id we saw
                nRowsNumber++;
                // process the row here
            }
            results.close();
            stat.close();
        }
    }
    Hope you find it useful as we found it, and thanks again to databases&life,

    Moshe. RockeTier. The Performance Experts.

    Jan 20, 2009

    Does MySQL 5.0 work with multi-core processors? yes???


    I think this week can truly be named: "The MySQL week". So many issues regarding this product. You know, sometimes it is the cost of getting things for free...

    Well, first of all, MySQL is a great product. Many start ups started with this product and many giants are using it for their billions events per day systems.

    However, MySQL has several limitations. One of them is that it does not really take advantage of multi-core processors. Yes, I know, MySQL definitely states that it supports multithreading. However, our analysis found that this is not the whole story. As you can see in the attached graph, the MySQL machine really works hard; however, since it is a quad-core machine, it reaches only 25% CPU utilization.

    To be more accurate, many people around the globe, such as Jeremy Kusnetz, Peter (with whom Sun examined performance on 256 cores) and starnixhacks, reached the same conclusion: MySQL does use multithreading in its peripheral components, but when it gets to the real work, it does not really exploit multiple cores.

    This can be really defined as a major issue, since modern CPUs are based on slower clocks and more cores... So what can we do?

    Well, the answer is Sharding...
    In a few words, sharding is the Internet companies' way of installing many weak databases, each dealing with a vertical or horizontal partition of the data. This way you can install, on a machine with two quad-core CPUs, 8 MySQL instances, each dealing with a single partition of your application (for example, the first stores customers whose names start with 'A', while the second stores those that start with 'B', 'C' and 'D', and so on). A broader review of this method will be provided in the coming weeks.
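The routing logic described above (customers partitioned by the first letter of their name) can be sketched in a few lines; the names and the shard count are illustrative only:

```java
// Illustrative shard router: map each customer to one of N MySQL
// instances by the first letter of the name.
public class ShardRouter {
    // Map 'A'..'Z' onto shards 0..shards-1 in contiguous ranges,
    // e.g. with 8 shards: A-D -> 0, E-G -> 1, and so on. (Hash-based
    // routing distributes load more evenly, at the cost of losing
    // cheap range queries within a shard.)
    static int shardFor(String customerName, int shards) {
        char first = Character.toUpperCase(customerName.charAt(0));
        if (first < 'A' || first > 'Z') return 0;  // non-letters go to shard 0
        return (first - 'A') * shards / 26;
    }
}
```

Each shard is then just an ordinary, small MySQL instance, which is how weak servers end up supporting a very large total dataset.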

    Anyhow, I recommend the MySQL Optimization guide as a must-read for any would-be MySQL tuning professional.

    RockeTier, The Performance Experts

    Jan 19, 2009

    RockeTier Agile Development Methodology


    We at RockeTier believe that almost any software system can do better by changing and modifying only a small portion of its existing code base.
    We have proven in several cases that with this methodology you can gain major business value in a short time, or in other words, be Agile.

    Therefore, it was only a matter of time before we migrated our development team to an Agile methodology (you guessed right, our software team develops high-load software systems which process hundreds of millions of events per day for our clients). This change enabled us to provide business value in shorter time frames.

    Our development methodology is based on the following basics:

    1. System-wide design - the product owner is responsible for providing a long-term road map for the system, including software architecture, database architecture, middleware, non-functional requirements, timeline and so on, based on inputs from the client or an internal product manager. This is an important component which Agile evangelists sometimes tend to neglect.

    2. Product backlog - the product owner is responsible for breaking the long-term architecture into business processes. Each business process is analysed, confirmed by the client and used as an input for the sprint backlog. Many times business processes are not complete, but describe current needs in order to gain business value, knowing that these requirements will change in the future.

    3. Sprint backlog - each business process is broken into a list of tasks (usually by the programmer). This list is documented in a central repository and written on the team whiteboard.

    4. Daily Cycle Status Review - each day a peer meeting is held at 10AM, synchronizing open issues with closed ones and adding high-priority tasks (usually bugs).

    5. Daily Version - each day at 4PM we upload a new "for review" version which includes all the committed new features added to the software in the last 24 hours.

    6. Weekly Version - based on the last week's "for review" versions, we upload a new version on Wednesday at 12:00, enabling our team and the client to examine the system before the weekend.

    7. Quality - We believe that people achieve best results when they are responsible for their products. Therefore, most QA is done by the programmers themselves and the team leader:
    - Peer review and pair programming - we often use these methods when developing sensitive components and when time is short, to achieve the best quality quickly and to reduce risk.
    - Code review - done by the team leader before a new feature is committed to SVN.
    - Code management - we use SVN as the company standard, but are also considering Git, which lets you commit every change locally on your own computer. That way you can return to any point in time without breaking everybody's code while your changes are incomplete; only when the code is ready do you commit your changes to SVN.
    - TDD - we use test-driven development to make sure that code is not broken between cycles and that new changes will not harm current business processes.
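    As an illustration of the TDD point, here is a minimal sketch in plain Java (no test framework) of guarding a business rule between cycles. The discount rule and all names are hypothetical examples for this post, not client code.

    ```java
    // Minimal TDD-style sketch: a tiny business rule plus the checks that
    // guard it between development cycles. The rule is a hypothetical example.
    public class DiscountTest {

        // Rule under test: orders of 100 units or more get a 10% discount.
        // Prices are kept in integer cents to avoid floating-point rounding.
        static long priceCents(int units, long unitPriceCents) {
            long total = (long) units * unitPriceCents;
            return units >= 100 ? total * 9 / 10 : total;
        }

        public static void main(String[] args) {
            // These checks run on every build, so a change that breaks the
            // rule fails fast instead of reaching the daily "for review" version.
            if (priceCents(10, 200) != 2_000) throw new AssertionError("no discount expected");
            if (priceCents(100, 200) != 18_000) throw new AssertionError("10% discount expected");
            System.out.println("all tests passed");
        }
    }
    ```

    A change committed during the day that silently alters the rule is caught here, before the 4PM "for review" upload.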

    8. Knowledge management - knowing that sharing knowledge will help our people achieve the best results, a few weeks ago we opened a new code snippets and best methodologies blog. This is an open blog by nature, so the community can enjoy our insights and experience.

    We have seen a great improvement in our development products since implementing this method, including: reducing the time needed to bring new people to productivity, reducing errors and misunderstandings, reducing time to market and reducing bugs.

    Keep agile,
    Moshe, RockeTier. The performance experts.

