UGH. The Intel CPU naming system is pretty much the most f****d it's ever been. If it were just i3, i5, i7 in that order it'd be great. Theoretically it is, actually. But in reality Intel appears to contradict the rules of their own naming schemes constantly. There are i5s that are well faster than i7s, for example. Or dual core i7s when they were all supposed to be quad. It really must be deliberate, but I'm not entirely sure why. But then I'm not a multi-billion dollar company, so I guess I just wouldn't understand. I try though, I do want to understand why it makes economic sense, quite aside from my frustration about it not making *consumer* sense. I don't like that reality, but I feel like it'd be easier to stomach if I understood the business case for it. But inciting consumer confusion doesn't really seem like a solid business plan to me.
Another thing that bugs me: the whole "GHz doesn't mean anything anymore" line. Yeah, sorry, that's kind of BS. It still means a lot; you just can't compare across CPU architectures. Fortunately there are only 2-4 (maybe 6 at most) fundamentally different architectures at any given time, and usually only 2 in whatever price/performance bracket you're actually looking at anyway. Not coincidentally, these are almost always the architectures of the major CPU manufacturers in competition: Intel and AMD. So yes, Intel's GHz is not equivalent to AMD's GHz as far as *work done per GHz*, but within the same family GHz is a pretty clear and consistent measure of *performance*, with a few modifications.
For the most part, all you need to do is multiply the speed of the CPU by the number of physical cores. That gives you a pretty darn good idea of actual performance. So an Intel i7-860 at 2.8 GHz with 4 physical cores can be thought of as a "4 × 2.8 GHz = 11.2 GHz" CPU. Accurate in a technical sense? No, not at all. But it does communicate the *amount* of work it can do at once, relative to other chips with the same architecture.
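That back-of-the-envelope math is simple enough to sketch in a couple of lines of Python (the function name is mine, not any standard API):

```python
def naive_score(physical_cores, ghz):
    """Rough same-architecture throughput estimate: physical cores times clock speed."""
    return physical_cores * ghz

# The i7-860 example: 4 physical cores at 2.8 GHz.
print(naive_score(4, 2.8))  # 11.2
```

Again, the number is meaningless across architectures; it's only useful for ranking chips within the same family.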
Now two things can complicate this, but neither has such a huge impact on performance that it throws off the scale enough to invalidate it. One is Hyper-Threading, which effectively allows instructions to be submitted to additional "virtual" cores and does increase performance by roughly 5-15% on average. Some i-series CPUs have it, some don't; those that do are faster, at least on multithreaded tasks. But again, CPUs with more actual cores and higher actual GHz will be faster anyway. The other is Turbo Boost, the CPU's ability to clock itself up under single-threaded loads. This again does not necessarily have a huge impact, though it depends on the user's app mix. Granted, many things are still single-threaded, so A: faster single-core speeds will matter more than more cores, and B: CPUs with Turbo Boost could actually make a difference. But fortunately, as a general rule, CPUs with Turbo Boost are already higher on the performance scale, and the straight GHz measure should suffice. There are plenty of exceptions to this, but it's still worthwhile paying attention to GHz, and the idea that processor clock speeds should be totally ignored is rather silly; they're still the main factor in performance.
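If you want to fold Hyper-Threading into the rough score, a minimal sketch might look like this. The 10% bonus is just an assumed midpoint of the 5-15% range mentioned above, not a measured figure, and the function and parameter names are mine:

```python
def rough_score(physical_cores, ghz, hyperthreading=False, ht_bonus=0.10):
    """Cores times GHz, with an optional bump for Hyper-Threading.

    ht_bonus is an assumed knob: roughly the midpoint of the 5-15%
    multithreaded gain estimated above, not a benchmarked value.
    """
    score = physical_cores * ghz
    if hyperthreading:
        score *= 1 + ht_bonus
    return score

# i7-860 (4 x 2.8 GHz, with Hyper-Threading) vs a hypothetical
# 4-core 3.0 GHz chip of the same architecture without it.
print(round(rough_score(4, 2.8, hyperthreading=True), 2))  # 12.32
print(rough_score(4, 3.0))                                 # 12.0
```

Turbo Boost is deliberately left out: as the paragraph above says, chips that have it tend to already sit higher on the plain cores × GHz scale anyway.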
As I said, this doesn't work for comparisons between processor architectures, particularly across Intel and AMD. For that you need actual benchmarks, because there's no way you'll ever figure it out just by looking at spec sheets. For the most part, all you need to know these days is that Intel's architectures are more efficient, so generally speaking a given GHz on an Intel CPU will be more powerful than the same on an AMD CPU. The opposite used to be true back in the days of the original Athlon and the P4, but the tables have turned (and may turn again, but the rule of thumb will probably still be easy to deduce).
So long story short: don't bother trying to understand every model number and feature. Just look at CPU speed (GHz) and number of cores and multiply. That's *if* you do multithreaded work (any graphics app, more and more games, and many other apps; even web browsers are increasingly multithreaded). Otherwise, the max speed of a single core is the most important figure. And, as I said, Intel is higher performance per GHz.
Anyway, wasn't the Windows Experience Index supposed to help with all this BS? Why don't we see those scores advertised with new PCs at this point? I always figured that was the eventual goal...