Nov 28, 2008

Garage Cloud Computing Event - Short Summary

It was a great evening last night at the GarageGeeks event, along with industry fellows: Avner Algom (IGT), Guy Nirpaz and Nati Shalom (GigaSpaces), and Shlomo Swidler. Each of us gave a presentation related to Cloud Computing. I presented how a startup can take advantage of Cloud Computing. Please find attached the video.


Nov 25, 2008

How to start up using 10 bucks of NRE and Cloud Computing

I'm very excited to announce that RockeTier, the performance experts, will present at the coming GarageGeeks event. We will show how startups can take advantage of Cloud Computing during development, beta sites, presentations and simulations, and yes, even in production.

We have found that this model enables startups to cut their costs, reduce the equity they need to raise and shorten their time to market (TTM).

Don't forget to register

See you there

Nov 24, 2008

Online advertising: how do you handle a billion events per day?


It was an interesting day today. I'm taking part in a two-day conference named Affilicon, the first affiliate networks and affiliates conference in Israel.

Why do I care?
Well, we have several clients in this field.

Why are these clients interested in our services?
Well, it seems that online advertising systems are a major source of high-end systems that must handle thousands of events per second, or billions of events per day!

Are there so many?
Yes: these systems count every banner, text ad or video impression. Think for a second about Google AdWords. Have you taken a look at your conversion rates? Have you thought about how many text ads they serve per day, and how they count all those impressions?
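To put the "billions per day" figure in perspective, here is a quick back-of-the-envelope calculation (the one-billion figure is illustrative, matching the rates mentioned above):

```python
# Back-of-the-envelope: convert a daily event count into a per-second rate.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

events_per_day = 1_000_000_000  # one billion impressions per day
events_per_second = events_per_day / SECONDS_PER_DAY

print(f"{events_per_second:,.0f} events/second on average")  # ~11,574
```

And that is only the average; peak traffic hours can be several times higher.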

How do they handle these rates?
Well, these players have very large farms (Google, for example, runs on the order of a million servers in its data centers). They also implement complex solutions to handle this stress, doing it all with commodity servers.

So how does RockeTier help them?
Well, in various ways: 1) boosting their performance, reducing their number of servers by a factor of up to 20; 2) implementing sharding, load balancers and grid solutions in order to split the processing between several servers and reduce the stress on each one; and 3) using better algorithms to improve their systems.
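As an illustration of the splitting idea in point 2, here is a minimal hash-based sharding sketch in Python (the event keys and the four-server farm are hypothetical, not taken from any client system):

```python
import hashlib

NUM_SERVERS = 4  # hypothetical farm size


def shard_for(event_key: str) -> int:
    """Map an event key (e.g. a campaign id) to a server index.

    Hashing keeps the mapping stable and spreads keys evenly,
    so each server only sees its own slice of the stream.
    """
    digest = hashlib.md5(event_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SERVERS


# Each impression is routed by its campaign id, so counters for the
# same campaign always land on the same server and can be summed locally.
events = ["campaign-17", "campaign-42", "campaign-17", "campaign-99"]
routes = [shard_for(e) for e in events]
assert routes[0] == routes[2]  # same key -> same server
```

The same principle scales to any number of servers; the load balancer only needs to compute the hash, not understand the events.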


Nov 21, 2008

How to minimize testing costs using cloud computing

Hi All,

Development of high-quality software requires testing. A lot of testing. Usually it requires verifying the new software version on a large number of operating systems. Moreover, if your software system has some miles on the road, you usually need to support several previous versions of it.

How do you handle this large number of servers, each required to cover a specific version, test scenario and operating system? If you are using test-driven development it is probably even harder: every time, you have to get back to the base version...

A few years ago, the only option for supporting such a scenario was a large server farm, where servers are restored from backup at the end of each test. The bottom line: this old-fashioned solution required significant equity and a lot of manual work.

Virtualization changed the market. On a single physical server you can load several logical machines and run your tests. Moreover, if you don't need to verify some of the tested versions, you can avoid turning on those logical machines. You thought it could not get better? Since virtualization platforms enable getting back to defined points in time, a.k.a. checkpoints, you can restore the base version with a single click! The bottom line: virtualization enables software developers and ISVs to cut their costs and manual work and get much more out of a few machines.

These days you can do even better. Using a CLI you can automate this process: bring up your logical machines just before starting the build and tests, and restore to the relevant checkpoint automatically. More information can be found in Jani Järvinen's article.
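As a sketch of what such automation can look like, here is a small Python wrapper around VMware's `vmrun` command-line tool (the VM path and snapshot name below are hypothetical; other virtualization platforms have equivalent CLIs):

```python
import subprocess

VMX = "/vms/win2003-build.vmx"  # hypothetical VM configuration path
SNAPSHOT = "base-version"       # hypothetical checkpoint name


def revert_cmd(vmx: str, snapshot: str) -> list:
    # vmrun reverts a VM to a named snapshot (checkpoint).
    return ["vmrun", "revertToSnapshot", vmx, snapshot]


def start_cmd(vmx: str) -> list:
    # Boot the VM headless, ready for the next test run.
    return ["vmrun", "start", vmx, "nogui"]


def reset_test_machine(vmx: str, snapshot: str) -> None:
    # Revert to the clean base version, then boot it for the next cycle.
    subprocess.check_call(revert_cmd(vmx, snapshot))
    subprocess.check_call(start_cmd(vmx))
```

Wired into the build script, this gives you a fresh base version for every test cycle without any manual clicks.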

What about running all these tests automatically without a single penny of NRE? Yes we can! Using cloud computing you can do all of this by paying only for the CPU hours you use, plus a negligible payment for ongoing storage. Cloud computing providers such as Amazon EC2, Flexiscale and AppNexus enable you to allocate servers on demand, attach a snapshot to them and start running your tests in a short time. Did you know that for just 10 USD you can run a one-hour test cycle on 30 different machines?
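The 10-USD figure is easy to check; as a rough illustration (the hourly rate is an assumption in the ballpark of Amazon's small-instance pricing at the time of writing, and the storage overhead is a placeholder):

```python
# Rough cost of a one-hour test cycle in the cloud.
# Both rates below are assumptions -- adjust for your provider.
machines = 30
hours = 1
rate_per_hour = 0.10     # USD per machine-hour, assumed small-instance price
storage_overhead = 1.0   # USD, assumed ongoing snapshot storage

compute = machines * hours * rate_per_hour
total = compute + storage_overhead
print(f"compute: ${compute:.2f}, total: ${total:.2f}")  # well under $10
```

Even with generous overhead, a full 30-machine test cycle stays within a 10 USD budget.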

During our development we need to simulate software systems which handle hundreds of millions of events per day. Since several software engineers work on the same project, and sometimes on the same file, we make several builds every day in order to make sure that the end-of-day version is stable. We found these methods useful. I hope they help you as well.

Best Regards,

Nov 11, 2008

The world summit of cloud computing and the cloud computing directory

Hi again,

As you may know, I'm a board member at the IGT (the Israeli Association of Grid Technologies). The IGT annual conference is taking place in a few weeks. This year, IGT2008 is focused on cloud computing, as you may conclude from its formal name, "The World Summit of Cloud Computing". You will find at this conference every major player in the field: Amazon EC2, eBay, Google, Microsoft, Sun, HP, GigaSpaces, Intel and many more. You will even find us, RockeTier, speaking about our field experience and our strategic perspective on cloud computing. We will have a booth as well, so we'll be glad if you stop by and visit us.

While arranging the conference we noticed that there is no formal directory mapping the players in this emerging field. Therefore, the IGT created an interactive cloud computing players directory that you can find here. The directory includes an interactive map so you can easily drill down and find the relevant players. We hope you'll find this tool useful, and we'll be glad if you update us about any player we missed.

Waiting for your feedback,


Nov 7, 2008

Green IT (Green Computing) - IT as an environmental issue

The environment is a major global issue of the 21st century, due to growing air and water pollution, toxic waste and the consumption of non-renewable natural resources, causing global warming.

The IT industry's contribution to the global effort of protecting the environment is becoming prominent, as IT systems are the fastest growing segment of electrical power consumption worldwide.

Building IT systems infrastructure requires expensive non-renewable natural resources for electrical power distribution systems, backup power systems, cooling systems and fire suppression systems. In addition, there is the large electricity consumption to consider.

As technology evolves, server costs drop, and many find it easier to buy another server for a new application instead of using their computing resources efficiently. However, the industry is changing, and preference for "green" products over non-"green" products is a growing trend.

Green Computing requires new thinking in order to reduce the use of hazardous materials, to maximize energy efficiency and to promote recycling.

The IT industry made numerous changes to improve the systems' overall energy efficiency:

1st Generation Green Computing

The first generation of Green IT is consolidation. Using this methodology, major enterprises gathered servers from different departments into the enterprise data center. Consolidation enabled these organizations to merge several clients from various departments, using the same application or software architecture, onto a single server, reducing the number of servers and the environmental costs.

2nd Generation Green Computing

These days we are in the middle of the second generation of Green Computing, better known as virtualization. Server virtualization enables enterprises to gather several different servers, which do not share the same architecture or vendor, and place them on a single physical server. Virtualization reduces the number of physical servers needed, which usually raises CPU and memory usage from a few percent to high utilization. This technology lets you do more with every dollar you spend on server hardware and data center floor space. Therefore, virtualization has a proven five-month ROI and leads to an average 30% data center cost reduction.
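The ROI claim is simple payback arithmetic; with hypothetical numbers (a 100,000 USD monthly data center bill and an assumed project cost; only the 30% saving is from the text above):

```python
# Simple payback calculation behind the virtualization ROI claim.
# The monthly bill and project cost are hypothetical illustration values.
monthly_datacenter_cost = 100_000  # USD, assumed
savings_rate = 0.30                # 30% average reduction (from the text)
project_cost = 150_000             # USD, assumed virtualization project cost

monthly_savings = monthly_datacenter_cost * savings_rate  # $30,000/month
payback_months = project_cost / monthly_savings

print(f"payback in {payback_months:.1f} months")
```

With these illustrative numbers the project pays for itself in five months, matching the ROI quoted above.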

3rd Generation Green Computing

Virtualization made a great change in enterprises, turning low utilization production servers into high utilization servers.

Today, every new virtual machine you place on an existing physical server, leads to immediate savings, but the savings are limited by the server resources utilization.

The obstructing components for economical and environmental savings are the CPU, memory and network usage efficiency of every logical machine.

The components' efficiency is directly connected to software performance. It is common to see that an efficient implementation of a key business process can lead to a 50% reduction in resource utilization. Software performance boosting is considered the 3rd Generation of Green Computing, and will enable enterprises to reduce both economic and environmental costs.

We at RockeTier are leading the 3rd generation of Green Computing by providing novel methods and methodologies to boost software performance. Our methodology improves hardware utilization and reduces the number of servers, floor space usage and electricity consumption.

I highly recommend going over the Gartner review, which identified the top 10 strategic technologies for 2009. Performance is a key component of Green Computing, one of the top 10 strategic technologies in both 2008 and 2009.

The latest Green Computing news is available at CNET and ZDNet.

Save Earth,



.NET Web Application Boosting


We are involved these days in a large ASP.NET project for a new company in the online advertising field (a great company with a great, innovative product, which I'll surely present when the time comes). The product includes both a large back office system and an impressive real-time/black-box server system which is designed to support 200 million events per day (and counting).

I wanted to share several concepts we are using in order to boost their system and reach these numbers:
1. Using a grid infrastructure - we are using GigaSpaces XAP. This is a great product, which enables us to reach 20 million events per day on a single commodity server and achieve linear growth. Since our customer is a startup, it is registered in the GigaSpaces startup program, meaning it gets the product free of charge.
2. Using custom HTTP handlers - this technique enables creating a class library assembly that can be linked to directly, without going through a standard .aspx page. This method yields a 5-10% boost over the standard ASPX method.
3. Using async pages - this method turns regular synchronous ASPX web pages into asynchronous ones. Why is it so important? Because even interactive web pages may include calls to relatively time-consuming methods, like reading/storing files on disk or consuming web services. In these cases the request occupies a thread-pool thread until the time-consuming method finishes its job. The result is clear: other web requests are prevented from being handled. Async pages release the bottleneck by processing these time-consuming methods on another thread pool. By doing that, they enable us to serve more requests on a single server. Read some more at: Pluralsight, ASP.NET Resources blog and this one.
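The async-pages idea in point 3 is not .NET-specific. As a minimal illustration in Python (asyncio's event loop stands in for ASP.NET's thread pools here, and the slow backend call is simulated), overlapping the waits lets one worker serve many requests:

```python
import asyncio
import time


async def slow_backend_call() -> str:
    # Simulates a time-consuming call (disk I/O, a web service, ...).
    await asyncio.sleep(0.1)
    return "done"


async def handle_request(i: int) -> str:
    # While one request awaits the slow call, the event loop is free to
    # serve the others -- the analogue of releasing the request thread
    # back to the pool instead of blocking it.
    result = await slow_backend_call()
    return f"request {i}: {result}"


async def main() -> float:
    start = time.perf_counter()
    await asyncio.gather(*(handle_request(i) for i in range(10)))
    return time.perf_counter() - start


elapsed = asyncio.run(main())
# Ten overlapping 0.1s calls finish in roughly 0.1s, not 1s.
print(f"{elapsed:.2f}s")
```

The same effect is what async ASPX pages buy you: the slow calls overlap instead of each one holding a request thread hostage.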

Have a great weekend,

Nov 3, 2008

The Cloud Computing Triple Play

I just read a great article by Ted Dziuba at The Register. In it, Ted covers the three different attitudes of Amazon, Google and Microsoft to Cloud Computing. These three large players each bring their own approach to the topic: EC2 with cheap books... oops... servers, App Engine with lightweight web applications, and Azure with heavyweight Windows/SQL Server/.NET based applications.
Ted also described the three different marketing attitudes of these companies.
It is clear that we are getting into a battle between cloud providers who offer open, root-access servers (Amazon, Flexiscale and AppNexus) and providers who give you a closed environment (a walled garden?) which depends heavily on using the provider's web services and API (yup, Microsoft and Google definitely seem to share the same perspective...)

Have a bright and shiny day,

