Imho ILP64 is the better choice, since "int" ought to be the native integer size, for performance reasons (okay, using 32-bit ints isn't that bad on x86-64 because of how that architecture was hacked together, but still...)
Kernel and drivers need to be recompiled for 64-bit. It would be possible to construct a thunking layer, but you don't want that for performance reasons. And it would take quite some work, since 32-bit drivers obviously can't use 64-bit pointers.
And some usermode apps on a 64-bit OS do need to be 64-bit, or at least be 64-bit aware - as soon as memory addresses or sizes are involved.
nontroppo: when you download a "single binary that supports all platforms", it's probably 32-bit. And that's doable on Windows as well, since x86-64 natively supports running 32-bit code (at the expense of not running 16-bit code in long mode).
Lots of code, both on Windows and lunix, doesn't port cleanly to 64-bit mode, because of st00pid programmers (NO, you CANNOT always fit a void* in an int).
You use an "int" when you want the native integer size, "size_t" when you want address-space size, "ptrdiff_t" when dealing arithmetically with pointers, etc. If you
specifically need 32- or 64-bit integers (for file formats etc.), you use sint32/uint32/sint64/uint64 typedefs,
specifically. It's not as complicated as some people want you to think, but you have to do this from the ground up, not as an afterthought.
For usermode code, there's a 32<->64 thunking layer; that's the sanest way to handle things.
Here's a bunch of links:
http://www.gamedev.n...cles/article2419.asp
http://www.viva64.co...it_Applications.html
http://www.viva64.co...our_egg_is_laid.html
http://www.viva64.co..._Windows_64-bit.html