From the specs and interviews with the designer, it's a 64-core Epiphany co-processor sitting next to a dual-core ARM CPU, so I wonder how much is managed by the system, and how much is bare metal. It's got expected virtual speeds of up to 50GHz (that's the 64 cores' clocks taken together, not any single core running that fast). I know, in modern computing terms, Gigahertz is a trivial benchmark, but for a chip that consumes less than 2 watts, it's impressive. -Edvard
I'm not saying it's not a nice chip, and it does sound like it packs quite a punch - and I love that they're saying they want to be open about it all. But you're not going to reach that top speed unless you have something that's massively parallel - lots of things are hard to split across threads (and the single-core performance of Parallella is low compared to x86). Other things are hard to do without synchronization, which, apart from being hard to program correctly, can mean massive performance drops (I hope the shared memory / inter-core communication is very efficient!). Then there's also the thing about GHz by itself being mostly meaningless: you also need to know how many cycles the various instructions take, and a number of other factors.
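
Just to put rough numbers on that, here's a quick back-of-the-envelope sketch of Amdahl's law in plain C (nothing Epiphany-specific; the 64-core count comes from the specs above, and the parallel fractions are made-up examples):

/* Back-of-the-envelope Amdahl's law: speedup on n cores when only a
 * fraction p of the work can actually run in parallel. */
#include <stdio.h>

static double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    const double fractions[] = { 0.50, 0.90, 0.99 };  /* made-up example workloads */
    const int cores = 64;                             /* core count from the specs above */

    for (size_t i = 0; i < sizeof fractions / sizeof fractions[0]; i++)
        printf("parallel fraction %2.0f%% -> speedup on %d cores: %5.1fx\n",
               fractions[i] * 100.0, cores, amdahl(fractions[i], cores));

    return 0;
}

Even a workload that's 99% parallelizable tops out around 39x on 64 cores, and that's before counting any synchronization or communication overhead.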
Apparently, though, the goal isn't exactly speed but functionality: an inexpensive platform for learning how to program for parallel computing, almost like a hardware emulator of more serious iron, making it easier for students to get into parallel and multi-threading concepts now, just as the field is starting to grow. -Edvard
And that's what I wish people would focus on, instead of the silly "supercomputer" claims - your quote from "Supercomputing for the masses" is spot on. Parallel computing is important, and reducing what used to take a cluster down to a single chip is awesome! Heck, even if the chip didn't deliver more performance than an octa-core x86, it would still be more usable for teaching scale-out parallelism.
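
For what it's worth, here's the kind of program shape I mean by "scale-out": a toy sum split across worker processes that only talk through pipes. It's plain POSIX C with fork(), nothing to do with the actual Epiphany SDK, and the worker count and array size are arbitrary.

/* Toy "scale-out" sum: each worker process gets a chunk of the array
 * and sends its partial sum back through a pipe.  No shared memory,
 * only messages; the same program shape you would use on a cluster. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

#define WORKERS 4
#define N 1000000L

static long data[N];   /* input, filled in by the parent before forking */

int main(void)
{
    for (long i = 0; i < N; i++)
        data[i] = i;

    int pipes[WORKERS][2];
    for (int w = 0; w < WORKERS; w++) {
        if (pipe(pipes[w]) != 0) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }

        if (pid == 0) {
            /* Worker: sum one contiguous chunk, report it, and exit. */
            long lo = w * N / WORKERS;
            long hi = (w + 1) * N / WORKERS;
            long long partial = 0;
            for (long i = lo; i < hi; i++)
                partial += data[i];
            if (write(pipes[w][1], &partial, sizeof partial) != (ssize_t)sizeof partial)
                _exit(1);
            _exit(0);
        }
    }

    /* "Head node": gather the partial sums from every worker. */
    long long total = 0;
    for (int w = 0; w < WORKERS; w++) {
        long long partial = 0;
        if (read(pipes[w][0], &partial, sizeof partial) != (ssize_t)sizeof partial) {
            perror("read");
            return 1;
        }
        total += partial;
        wait(NULL);
    }

    printf("sum = %lld (expected %lld)\n", total, (long long)N * (N - 1) / 2);
    return 0;
}

The structure (partition the data, farm it out, gather the partial results) stays the same whether the workers are processes on one box, cores on one chip, or nodes in a cluster, which is exactly why a cheap many-core board is handy for teaching it.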