I was just clearing out my Google Reader, and came across this fascinating post on David Linthicum's Cloud Computing
blog at InfoWorld: "Solved! How to make Google's cloud 20 percent more efficient".
Seriously interesting stuff. I've done work on optimising operating-system queuing processes and large batch
mainframe processing streams, and even on tweaking optimising compilers, but, not having been involved, I hadn't
realised that huge cloud IT operations were open to such large performance improvements. I mean, 20% is a heck of a
lot. How did that escape notice before now?