Sounds... convoluted. And one thing is "crashes"; your standard run-of-the-mill logic bugs are quite another - nothing can really do anything about those.
The executive overview doesn't really give much info, anyway. But it sounds like something that's going to be hard to get up to the speeds we're currently seeing - lots of duplicated units (which might sit idle when not selected by the RNG, and thus waste expensive silicon? see the sketch below), how the data is transferred back and forth, etc.
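Back-of-the-envelope version of that worry - this is purely my guess at the architecture (N redundant units, RNG picks one per cycle), since the article doesn't actually describe the selection scheme:

```python
import random

# Toy model of the *guessed* architecture: N redundant units, and each
# "cycle" the RNG picks exactly one unit to do the work while the rest
# sit idle. N_UNITS and the one-unit-per-cycle rule are assumptions.
N_UNITS = 16
CYCLES = 100_000

busy = [0] * N_UNITS
for _ in range(CYCLES):
    busy[random.randrange(N_UNITS)] += 1  # RNG selects the active unit

# If only one of N units works per cycle, average utilization is ~1/N.
utilization = sum(busy) / (N_UNITS * CYCLES)
print(f"average utilization per unit: {utilization:.1%}")  # ~6% for N=16
```

In other words, unless several units can be selected at once, you're paying for 16 units' worth of silicon to get roughly one unit's worth of throughput.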
Also:
"Even when it feels like your computer is running all your software at the same time, it is just pretending to do that, flicking its attention very quickly between each program," Bentley says.
...has Bentley been living under a rock for the last, dunno, 10 years? We've had multicore machines for quite a while. Sure, each core runs one step at a time, but the cores do run in parallel :-)
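For what it's worth, you can see that on your own box with a throwaway sketch (nothing to do with Bentley's system, just CPU-bound work run serially vs. across cores):

```python
import multiprocessing as mp
import time

def burn(n: int) -> int:
    # CPU-bound busy loop, so the OS can't fake parallelism by
    # interleaving around I/O waits.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    work = [5_000_000] * mp.cpu_count()

    start = time.perf_counter()
    for n in work:
        burn(n)
    serial = time.perf_counter() - start

    start = time.perf_counter()
    with mp.Pool() as pool:
        pool.map(burn, work)  # one task per core
    parallel = time.perf_counter() - start

    # On a multicore machine the parallel run finishes several times
    # faster -- the cores really execute simultaneously, they don't
    # just "flick attention" between programs.
    print(f"serial: {serial:.2f}s, parallel: {parallel:.2f}s")
```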
Might be an interesting idea, but even in the New Scientist story that Gizmodo links to, there's not a lot of information. How is the system different from clock-synchronized failover systems, which have also been around for quite a while (albeit at the pretty high end of computing)?