Pat Helland from Microsoft gave a presentation last year at TechEd in EMEA called “The Irresistible Forces Meet the Movable Objects”. I received a copy from Kala at work. Not long after reading it, I wanted to find out more, which led me to the video version. Click on the link with the videoid=706 tag. You will also need to sign in with a Microsoft Passport ID (a Hotmail ID works).
This presentation is compelling not only because it includes a lot of trend analysis but also because it addresses problems that are only beginning to appear. Pat worked for years at Microsoft before going to Amazon, and only recently has he returned to Microsoft. I believe his experiences at both Microsoft and Amazon have helped him reach some unique insights.
I highly recommend either watching the PowerPoint presentation or the video. The video has the extra bonus of including an internal Microsoft video about devices and the cloud. It is almost ad quality, and it reveals how heavily Microsoft has invested in its perceived future in the cloud computing space.
Ever wonder why CPU clock rates aren’t going much higher? That question is answered towards the beginning. The speed curve is expected to top out around 3.8 GHz in 2009, with the slope getting flatter. The reason is that CPU performance is limited by heat and by the amount of power needed to reach that speed. The hotter the chip, the more power it takes to operate, which leads to still more heat. The smaller the transistors, the more power is needed. The faster the frequency, the more power is needed. The smaller and faster you get, the more power you need and the hotter you run. It is a good example of diminishing returns: as you push further, the heat and power grow exponentially without much to show for it.
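The heat-and-power spiral Pat describes is usually captured by the standard CMOS dynamic-power relation (my gloss, not a slide from the talk):

```latex
P_{\text{dynamic}} \approx \alpha \, C \, V^{2} \, f
```

where α is the switching activity, C the capacitance, V the supply voltage, and f the clock frequency. Since V generally has to rise along with f, power grows faster than linearly in frequency, and nearly all of that power ends up as heat.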
Pat predicts a maximum speed gain of 10% per year over the next several years.
Memory retrieval latency remains around 60ns, and there does not appear to be anything that will improve this in the near term. The processors always want more data, but the memory stays relatively slow.
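To put that latency in perspective, here is a rough calculation (my own, using the clock figure above) of how many cycles a top-speed core could burn waiting on a single memory access:

```python
# How many CPU cycles fit into one DRAM access?
clock_hz = 3.8e9        # ~2009 top clock rate from the talk
mem_latency_s = 60e-9   # ~60 ns memory retrieval latency

cycles_waited = mem_latency_s * clock_hz
print(round(cycles_waited))  # 228
```

Roughly 228 cycles of potential work stalled per memory access, which is why the processors "always want more data" than the memory can feed them.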
The size of transistors will continue to shrink. Currently we are at 45nm, with a projected size of 8nm in 2018. This translates to many more cores per chip: currently we can have 8 cores per chip, but it is projected that by 2018 we will have upwards of 256 cores. Having multiple cores is a way around the frequency limitations. The idea is to give applications multiple engines to drive them, so they run faster as if they were on a faster single processor. Another possible trend is to put memory onto the CPU to allow faster access and have it be shared between the cores. It is much cheaper to buy and support multiple-core chips than a much faster single-core CPU.
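One caveat worth keeping in mind: many cores only act like a faster single processor to the extent the work can be split up. The standard back-of-the-envelope for this is Amdahl's law (my addition, not from the presentation):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of a
    program can be spread across multiple cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even a 95%-parallel program tops out far below 256x on 256 cores.
for cores in (8, 64, 256):
    print(cores, round(amdahl_speedup(0.95, cores), 1))
# 8 -> 5.9x, 64 -> 15.4x, 256 -> 18.6x
```

So the 256-core future rewards software that keeps its serial portion tiny, which is part of the shift Pat is pointing at.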
With data centers, 40% of the cost is power. The building around the data center is only around 15% of the cost. Saving power also translates to saving on air conditioning. Backup power supplies take around 20% of the budget.
In storage, it is projected that there will be 10 terabyte disks in 2010 for around $100, and that flash will reach price parity with SATA storage around 2012. It is projected that a 128GB flash disk will cost $40 in 2010. Flash runs much cooler than standard disk storage, draws relatively little power, performs better than disk, and can also have a much wider pipe.
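A quick bit of arithmetic on the talk's own 2010 projections (my calculation) shows why the price crossover is pegged out at 2012 rather than sooner:

```python
# Per-gigabyte cost implied by the 2010 projections.
disk_cost_per_gb = 100 / (10 * 1000)   # $100 for a 10 TB disk
flash_cost_per_gb = 40 / 128           # $40 for a 128 GB flash disk

print(disk_cost_per_gb)   # $0.01/GB
print(flash_cost_per_gb)  # ~$0.31/GB, roughly 31x disk
```

Flash still costs about 31 times more per gigabyte in 2010 under these projections, so its advantages in 2010 are power, heat, and performance rather than price.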
In communications, 100Gbit/s LAN speeds will be viable in 2011. Total bandwidth is projected to triple every 12 months. Latency will continue to be a problem. Wireless will continue to grow but will not cover everyone; signal loss will still be possible even in populated areas, so being offline will still be important.
Given the wide-ranging topics, it would be better for you to see this information for yourself. What I have summarized is really just a taste. Hopefully you will be inspired to find out more.
I’ve been interested in cloud computing over the last year, but it wasn’t clear how certain problems were going to be solved in this space. With Pat’s presentation, many aspects are now much clearer. It does indicate some pretty big shifts that are about to happen to both the producers and consumers of this kind of technology. Strangely enough, I see what Pat suggests as making computer systems behave a bit more like biological systems. By this, I mean that it is new to allow a computer system to act more autonomously, with the possibility of making a mistake. When things become more decentralized, computer systems are going to have to make educated guesses without some central server telling them what is right and wrong. In fact, there will be no central server. It becomes more of a living system with divergent results which ultimately come together in the end. It might seem like science fiction, but in reality this way of working is not far off from becoming real.
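To make "divergent results which ultimately come together" a bit more concrete, here is a toy illustration of my own (not from Pat's talk): two replicas that each accept writes independently and later reconcile by merging, with no central server deciding who is right:

```python
class GrowOnlySetReplica:
    """A toy replica: each node accepts writes locally, and any
    two replicas reconcile by set union. Because union is
    commutative, associative, and idempotent, replicas converge
    to the same state no matter what order merges happen in."""

    def __init__(self):
        self.items = set()

    def add(self, item):
        # A local, autonomous write -- no coordination needed.
        self.items.add(item)

    def merge(self, other):
        # Reconciliation: absorb everything the peer has seen.
        self.items |= other.items

# Two nodes diverge while out of contact...
a, b = GrowOnlySetReplica(), GrowOnlySetReplica()
a.add("order-1")
b.add("order-2")

# ...then gossip with each other, and both end up agreeing.
a.merge(b)
b.merge(a)
print(a.items == b.items)  # True
```

This only covers the easy case (data that only grows), but it captures the flavor: temporary divergence, educated local decisions, and eventual agreement without anyone acting as the central authority.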