32bit vs 64bit question

eleman:
...
No other practical difference.
-eleman (November 11, 2014, 05:58 PM)
--- End quote ---

Forgive a Turbo-Newb butting in where he doesn't belong, but I think it makes a TON of "practical" difference!!

When you increase the addressing width by a "bit" level, you unlock things that weren't possible before. Lots of examples. Discuss.
-TaoPhoenix (November 11, 2014, 06:36 PM)
--- End quote ---

I say none, you say lots. I believe the onus lies with you rather than me, for non-existence is a tad harder to prove than existence. See the debate on agnosticism.

I really think the only practical difference for 4.99 percent of the population is the address space limit. There may be a few others for the 0.01 percent that I am not aware of, and that you will hopefully name. For 95 percent, no difference whatsoever between 32 and 64 bits.
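
To put a number on that limit: an N-bit pointer can distinguish at most 2^N byte addresses, so 32 bits caps a process at 4 GiB while 64 bits raises the ceiling to 16 EiB. A minimal C sketch of just that arithmetic (mine, not from anyone in this thread):

--- Code: ---
#include <stdio.h>

/* The address-space limit in numbers: an N-bit pointer can
 * distinguish at most 2^N byte addresses. */
int main(void) {
    printf("this build's pointers: %zu bits\n", sizeof(void *) * 8);
    printf("32-bit limit: %llu bytes (4 GiB)\n", 1ULL << 32);
    /* 2^64 itself doesn't fit in any standard integer type: */
    printf("64-bit limit: 2^64 bytes (16 EiB)\n");
    return 0;
}
--- End code ---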

ed.: removed a "the". I will never master these things.

TaoPhoenix:

Complex post, Eleman.

I say, review the history of 8-, 16-, 32-, and 64-bit systems and see how much improved at each step. No doubt, for each addressing change, the supporting hardware had to scream along at Moore's Law and more, but somewhere in there the addressing matters. I can (barely) cite the case of the C128, which had to do hard bank switching because its 8-bit CPU could only address 64 KB at a time, and if you just compare 8-, 16-, and 32-bit games, you can see the difference.

64-bit hit a new realm. For the least of examples, it gave chess a stratospheric boost, because a chess board has exactly 64 squares: a whole occupancy set fits in a single 64-bit word, which no smaller word size could manage.
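
To make that concrete: the board's 64 squares map one-to-one onto the bits of a 64-bit word, so one piece-type's occupancy becomes a single integer (a "bitboard") and move generation collapses into a few shifts and masks. A rough C sketch, using one common square-numbering convention (the names and masks here are illustrative, not from any particular engine):

--- Code: ---
#include <stdint.h>
#include <stdio.h>

/* One bit per square: bit 0 = a1, bit 7 = h1, ..., bit 63 = h8. */
typedef uint64_t Bitboard;

/* Every square a knight on 'sq' attacks, via shifts and file masks
 * that stop moves from wrapping around the board edges. */
static Bitboard knight_attacks(int sq) {
    Bitboard b = 1ULL << sq;
    Bitboard notA  = 0xFEFEFEFEFEFEFEFEULL;  /* excludes file a   */
    Bitboard notAB = 0xFCFCFCFCFCFCFCFCULL;  /* excludes files ab */
    Bitboard notH  = 0x7F7F7F7F7F7F7F7FULL;  /* excludes file h   */
    Bitboard notGH = 0x3F3F3F3F3F3F3F3FULL;  /* excludes files gh */
    return ((b << 17) & notA)  | ((b << 15) & notH)
         | ((b << 10) & notAB) | ((b <<  6) & notGH)
         | ((b >> 17) & notH)  | ((b >> 15) & notA)
         | ((b >> 10) & notGH) | ((b >>  6) & notAB);
}

int main(void) {
    Bitboard a = knight_attacks(27);            /* knight on d4 */
    printf("knight on d4 attacks %d squares\n",
           __builtin_popcountll(a));            /* GCC/Clang builtin */
    return 0;
}
--- End code ---

On a 64-bit machine each of those masks and shifts is a single register operation; a 32-bit machine has to emulate every one of them in two halves.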

Then you have today's games, whatever they are. A little work is still needed in modeling art, and we're still "only" at 64-bit. Like it matters. If CS has taught us anything, it's that when we jump a bit-level, the game changes. Who's gonna post the first 7 bits of 128-bit news?

Shades:
Compiling software for 64-bit takes, in practically all cases, a whole lot longer (at least with C++ under Embarcadero's tooling).

And not that many programmers have much experience with (efficiently) writing multi-threaded applications, which is something modern 64-bit processors are cut out for. When that changes you will see some progress again. Not as much as you might think, though: multi-threading carries extra computational overhead, the efficiency gain differs between applications, and all of it hardly matters if the OS these applications run on doesn't assign the available computational horsepower appropriately.
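
For what it's worth, the overhead I mean is visible even in a toy case: a correct multi-threaded counter pays for every lock and unlock that a single-threaded loop would skip. A bare-bones POSIX-threads sketch of mine (compile with -pthread):

--- Code: ---
#include <pthread.h>
#include <stdio.h>

/* Two threads bump a shared counter; the mutex keeps the result
 * correct, but every lock/unlock is pure coordination overhead. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);      /* serialisation point */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter); /* always 2000000 */
    return 0;
}
--- End code ---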

In my experience Windows isn't that good at automatically assigning different cores to different processes/applications. When I look at the i7 processor with Task Manager on my database server, I always see one core under huge load while the others are more or less idling, even when I run several different Oracle databases and a VM on it at the same time. All installed software is 64-bit. Using several instances of 7-Zip at once (to archive database dump files) does put a load on every core, though.

Going out on a limb here: presumably Linux/BSD operating systems are better at this, as those OSes are more commonly used in academic fields and supercomputing. I don't think that smartphones or tablets make efficient use of their multi-core processors either. With the financial risks and returns as they currently stand, you won't see this happening (yet) in consumer devices such as PCs, mobile devices, or consoles.

It used to be the case that the "techies" were allowed to show the best they could do with the hardware they made; hence the big jumps with each increase in bits. However, with the vested interests of today, financial and marketing departments won't let them anymore. Trading innovation for evolution...

MilesAhead:
I agree with Shades that it will take a while to catch up. One thing we may see is a return to multiprocessing rather than multithreading. When resources were relatively scarce it was a good idea to spawn another lightweight thread rather than another process. But with 64-bit one could let the OS handle more of the address-space integrity chores and just fork another process rather than doing all the semaphore/mutex stuff by hand.
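
Something like this is what I mean: each forked worker gets its own address space from the OS, so there is no shared state to guard by hand. A minimal POSIX sketch (Windows would need CreateProcess instead of fork):

--- Code: ---
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* One process per job: the OS enforces memory isolation, so no
 * semaphores or mutexes are needed between the workers. */
int main(void) {
    for (int job = 0; job < 4; job++) {
        pid_t pid = fork();
        if (pid == 0) {                       /* child process */
            printf("worker %d: pid %d\n", job, (int)getpid());
            _exit(0);                         /* do the work, then exit */
        }
    }
    while (wait(NULL) > 0)                    /* parent reaps children */
        ;
    return 0;
}
--- End code ---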

Also, with huge data space one could do things like map an entire 16 GB or larger DB file into RAM instead of relying on the caching algorithms. The jump from 32-bit to 64-bit is vast compared to, say, the jump from the 24-bit address space of 286-type systems to 32-bit. So it stands to reason it may take a bit longer (is there any plausible deniability if I say no pun intended?) to utilize all the resources.
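
As a sketch of the mapping idea (POSIX mmap; "data.db" is a made-up file name): the 64-bit address space swallows the whole file, and the kernel pages it in on demand instead of the application managing a cache.

--- Code: ---
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map an entire file into the process's address space; in a 64-bit
 * process even a 16 GB file fits with room to spare. */
int main(void) {
    int fd = open("data.db", O_RDONLY);        /* hypothetical file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    const unsigned char *p =
        mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* The file now reads like an ordinary in-memory array. */
    printf("first byte: 0x%02x, size: %lld bytes\n",
           p[0], (long long)st.st_size);

    munmap((void *)p, st.st_size);
    close(fd);
    return 0;
}
--- End code ---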

As for the holding back by executive decision, that may be a factor also. I don't have the experience to make an educated guess as to how much of one, though.

bit:
It's just that I'm running an aging machine in an endless game of catch-up with ever newer, more complex stuff.
Especially after adding Malwarebytes, everything slowed down (but I wouldn't be without it now, after I saw it in action blocking a few nasty little PUPs).
Aside from getting a whole new machine, the best I've come up with is getting a faster HD.
But I've seen reports that SSDs have a fatal flaw: their flash memory slowly wears out, with individual cells either failing or working more slowly over time.
So I chose a 10K rpm Western Digital last time I got an HD, rather than an SSD.
But one is never sure what to decide, except when you have enough cash; then you can just ignorantly throw money at the problem until something works.
