Sure, tablets and smartphones may be the future, but hell no, desktops haven't sailed away and won't for a very loooooong time.
Any time I hear predictions about the demise of the desktop, I just want to strangle someone. Those predictions are based on a fundamental misunderstanding of human-machine interaction, one where ONLY the most basic consumption is taken into account.
Form factor is the big thing. Phones, tablets and laptops simply do not offer the massive display possibilities of a full monitor, or an array of monitors. And those monitors are not "portable" (yet).
Input is another big thing. Tapping a screen is NOT a replacement for a tactile keyboard. You simply CANNOT become as fast on a virtual keyboard as you can on a physical one. How do you even find F or J without the ridges or the tactile sensation? A minor shift off home row is easily corrected by touch on a normal keyboard; you can't do that on a flat surface unless you look down or start making errors as you type.
The fact is, tablet and phone input methods are still extremely primitive, with almost no thought put into them. "Let's touch the screen" is an overly simplistic approach to input. Contrast that with the keyboard and the difference is stark: the keyboard is logically laid out and incredibly well put together, with actual thought given to human physiology and ergonomics.
Ok. Someone's going to call me on that... Here's why...
Get a tablet. Or get a book about the size of a tablet and pretend that it's a tablet. Hold it. Don't turn it on, or if you do, read a book on it. Sit down with it or walk around with it.
Next, notice where your hands are.
The Apple UI dogma is to put the "Back" button at the top left of the screen. If you didn't notice: while you were reading, that was the furthest point from your hands, the hardest spot on the screen to actually press.
i.e. The optimal user input portion of a tablet is the pair of quarter-circles at the lower-left and lower-right corners, with an effective radius slightly less than the length of your thumb. That's where your easiest input is.
Examine where the ACTUAL input interfaces are in most applications... Not there, that's for sure.
Some applications make sense to not have the inputs there, though it's a case-by-case thing.
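To make the thumb-zone idea concrete, here's a minimal sketch of a hit test for those corner quarter-circles. The function name, the coordinate convention (origin at top-left, y increasing downward), and the 72 mm reach radius are all my own assumptions for illustration, not measured values or anyone's actual API:

```python
import math

def in_thumb_zone(x, y, screen_w, screen_h, reach=72.0):
    """Return True if point (x, y) lies within `reach` of either
    bottom corner of a screen of size screen_w x screen_h.
    All values share one unit (e.g. millimetres); origin is top-left.
    The 72.0 default reach is an assumed thumb length, not a standard."""
    bottom_left = (0.0, screen_h)
    bottom_right = (screen_w, screen_h)
    for cx, cy in (bottom_left, bottom_right):
        # Euclidean distance from the touch point to the corner
        if math.hypot(x - cx, y - cy) <= reach:
            return True
    return False

# A "Back" button near the top-left corner is far outside both zones,
# while a control near the bottom-left corner is easily reachable:
print(in_thumb_zone(10, 10, 160, 240))    # -> False (top-left corner)
print(in_thumb_zone(20, 230, 160, 240))   # -> True  (bottom-left area)
```

Run against a rough 7-inch tablet (about 160 x 240 mm), this shows the same point the text makes: the conventional top-left "Back" placement fails the reachability test that bottom-corner controls pass.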
Things will change when STT (speech-to-text) becomes truly viable. Eye tracking will also change how devices work. However, we are a long way away from that.
In the meantime, current speech recognition engine revenue models make it impractical for most software. For some really good information on speech recognition, listen to our interview with Adam Smith on The Doc Report. I don't mean that to be self-promotional -- I mean it quite literally: we talk at length about speech recognition.
Anyways, I think I'm wandering off-topic here somewhat.
I've not tried the new Windows developer release, so I can't really comment on it other than to say... it's a developer preview, as is already noted above.