Author Topic: Using GPUs to accelerate computations  (Read 4227 times)

oBFusC8r

  • Charter Honorary Member
  • Joined in 2005
  • Posts: 24
Using GPUs to accelerate computations
« on: November 10, 2006, 01:09 PM »
An interesting feature of the current (and future) generations of graphics processing units (GPUs) is that they can be programmed to process massive amounts of data in parallel... and the application using the GPU to accelerate its data processing does not even have to be graphics related.

Stanford University recently upgraded their Folding@Home software (works pretty much like SETI@Home, but is used to gain greater knowledge about diseases like cancer, Alzheimer's disease, etc.) to make use of the power in ATI GPUs, resulting in a massive performance increase.


http://folding.stanford.edu/FAQ-ATI.html


By writing highly optimized, hand-tuned code to run on ATI X1900 class GPUs, the science of Folding@home will see another 20x to 30x speed increase over its previous software (Gromacs) for certain applications. This great speed increase is achieved by running essentially the complete molecular dynamics calculation on the GPU.

NVidia recently released their latest generation of graphics cards, the 8800 series, built on the G80 GPU and supporting general-purpose processing. What is especially cool about NVidia's latest gadget is that they have developed a C compiler to make it easier for a programmer to use the GPU for general-purpose computation. Check out this page on AnandTech.


http://www.anandtech....aspx?i=2870&p=8
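
That C compiler is part of what NVidia announced as CUDA. Here's a minimal sketch of the programming model; it's my own example, using the modern CUDA runtime API rather than anything from the article: each GPU thread computes one element of a vector sum.

    // vadd.cu -- build with: nvcc vadd.cu -o vadd
    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread handles exactly one array element.
    __global__ void vadd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;                     // about a million elements
        float *a, *b, *c;
        cudaMallocManaged(&a, n * sizeof(float));  // unified memory keeps the sketch short
        cudaMallocManaged(&b, n * sizeof(float));
        cudaMallocManaged(&c, n * sizeof(float));
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        vadd<<<(n + 255) / 256, 256>>>(a, b, c, n);  // ~4096 blocks of 256 threads
        cudaDeviceSynchronize();

        printf("c[0] = %.1f\n", c[0]);             // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }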


It sure will be interesting to see when ordinary CPU-intensive apps start to use GPUs to speed up their data processing. The apps that benefit the most are of course those that gain the most from running multiple threads, i.e. dual-core friendly apps, and the algorithm must also process data in ways that the GPU is especially good at.
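
To make that last point concrete, a hedged sketch (my example, not from the post): an element-wise operation has no dependencies between iterations, so thousands of GPU threads can each take one; a loop-carried recurrence stays serial no matter how many cores you throw at it.

    // Good fit for a GPU: every element is independent, so the loop
    // becomes one thread per element.
    __global__ void scale(float *x, float k, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            x[i] *= k;
    }

    // Poor fit: step i needs the result of step i-1, so the work is
    // inherently sequential and stays on the CPU.
    void recurrence(float *x, int n)
    {
        for (int i = 1; i < n; ++i)
            x[i] = 0.5f * x[i] + x[i - 1];   // loop-carried dependence
    }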

Media encoders would benefit a lot. I sure wouldn't mind a 10-times-faster MPEG-4 or MPEG-2 encoder. ATI actually made a video converter that used the GPU in their X1xxx series of graphics cards, but according to people who know about video quality, it was fast while the quality was poor compared to the good free encoders available...
« Last Edit: November 27, 2006, 05:08 PM by brotherS »

masu

  • Member
  • Joined in 2006
  • Posts: 401
Re: Using GPUs to accelerate computations
« Reply #1 on: November 10, 2006, 01:27 PM »
I hope this technique will be used in media encoders soon!
That would speed things up a lot.

mouser

  • First Author
  • Administrator
  • Joined in 2005
  • Posts: 40,914
Re: Using GPUs to accelerate computations
« Reply #2 on: November 27, 2006, 02:19 AM »
Jeff Atwood had a post about this today:
http://www.codinghor...archives/000732.html

f0dder

  • Charter Honorary Member
  • Joined in 2005
  • Posts: 9,153
Re: Using GPUs to accelerate computations
« Reply #3 on: November 27, 2006, 05:00 PM »
The 8800 is a pretty damn beefy GPU! But just how fast is it compared to how much it costs and how much power it draws? (Remember, the big version takes power from the PCI-e slot as well as from TWO 6-pin PCI-e power connectors!) Compare that to the Core 2 Duo processors... The interesting thing, however, is that the GPU has a different instruction set and can do some operations insanely fast.
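
As one illustration of those "insanely fast" operations: the G80 has hardware special-function units that evaluate approximate transcendentals in a few cycles, exposed in NVidia's C dialect as intrinsics. A sketch of mine, not f0dder's:

    // The __sinf/__expf intrinsics hit the special-function units:
    // much faster than the full-precision sinf/expf, but less accurate.
    __global__ void fast_math(const float *in, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = __sinf(in[i]) * __expf(-in[i]);  // fast, reduced precision
            // out[i] = sinf(in[i]) * expf(-in[i]);   // slower, full float precision
    }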

I wonder if the different floating-point formats could cause precision problems, though? Nvidia has been known to trade a bit of image quality for speed; I hope that isn't so general that it affects the FP quality.
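
That worry is easy to demonstrate on the host side. The G80 generation does all its floating-point in 32-bit single precision (doubles only arrived in later chips), and a 24-bit mantissa drifts visibly when you accumulate a lot of values. A quick sketch of mine, plain C that any compiler will build:

    #include <stdio.h>

    int main(void)
    {
        float  f = 0.0f;
        double d = 0.0;
        for (int i = 0; i < 10000000; ++i) {
            f += 0.1f;   // rounding error compounds in the 24-bit mantissa
            d += 0.1;    // the 53-bit mantissa stays close to the true sum
        }
        printf("float:  %f\n", f);   // noticeably off from 1000000
        printf("double: %f\n", d);   // approximately 1000000.0
        return 0;
    }
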
- carpe noctem