Using GPU for math operations

jtr1962

Storage? I am Storage!
Joined
Jan 25, 2002
Messages
4,174
Location
Flushing, New York
I've read about some programs greatly increasing calculation speed by using the GPU for floating-point calculations. Is there any software or driver that could make a computer do that with any program? I'm just curious because lately I've been doing a lot of circuit simulation, and I'm wondering if there's a way to speed it up by taking advantage of the GPU, which is sitting idle. I would imagine even a slow GPU like mine (GeForce 6200) might be several times faster at floating-point operations than my processor (XP 3200). No, I'm not looking to do a major computer upgrade at this time, as the machine still serves me well. Rather, I'm just looking to take maximum advantage of my existing hardware. My next major upgrade will probably come in a couple of years, when SSDs reach a reasonable price point.
 

Stereodude

Not really a
Joined
Jan 22, 2002
Messages
10,865
Location
Michigan
I wouldn't be so certain about that. GPUs do well at SIMD-style computing, where the same operation is applied across a large amount of data at once. I'm not sure circuit simulation is something that can be sped up by a GPU.

On a somewhat related note, someone wrote a FLAC compressor for CUDA. You can read about it here.
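
To make the "SIMD computing" point concrete: a GPU runs the same short routine across thousands of data elements at once. A minimal, hypothetical CUDA kernel (all names made up for illustration; only the device-side part is shown) looks something like this:

Code:
#include <cuda_runtime.h>

// Hypothetical kernel: one thread per array element, each thread executing the
// same instructions on different data -- the kind of work GPUs are built for.
__global__ void scale_and_add(const float *a, const float *b, float *out, float k, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element index
    if (i < n)
        out[i] = k * a[i] + b[i];                   // independent, data-parallel work
}

A SPICE-style circuit simulator, by contrast, spends most of its time factoring a sparse matrix and stepping a transient solution where each step depends on the previous one, which doesn't break down into independent per-element work nearly as neatly.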
 

udaman

Wannabe Storage Freak
Joined
Sep 20, 2006
Messages
1,209
Pretty sure the software in question has to be updated to use GPU acceleration; I can't think of any way a 'general' implementation could be provided by third-party software.

Perhaps contact the sim creator to see if there's an update?

http://www.vizworld.com/2009/11/mathworks-accepting-volunteers-gpu-acceleration/

If you're a big user of MathWorks products like Matlab and Simulink, you should head on over to their site, where you can register as a beta tester for their new GPU acceleration (only for NVIDIA GPUs in the first release). It's an invite-only beta, but it can't hurt to get your name in the hat and hope you get lucky!


http://en.wikipedia.org/wiki/Video_Acceleration_API
 

Handruin

Administrator
Joined
Jan 13, 2002
Messages
13,741
Location
USA
Stereodude's link intrigued me enough to dig through it, and I noticed that the FLAC compressor uses a C# library called CUDA.NET. I don't really know what your circuit simulations consist of, but there may be ways to dispatch threads of work using that library. Can you give me some examples of what you're doing and how the calculations are currently being run?

Unfortunately, I don't see your GeForce 6200 listed as supporting CUDA.
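
For anyone here with a CUDA-capable card, though, this is roughly what "dispatching threads of work" to the GPU involves at the lowest level; as I understand it, CUDA.NET exposes the same steps to C#. Everything below is a generic sketch with made-up names, not code from the FLAC project:

Code:
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Illustrative kernel: y = a*x + y, one thread per element.
__global__ void axpy(const float *x, float *y, float a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;                     // one million elements
    size_t bytes = n * sizeof(float);
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);  // copy inputs CPU -> GPU
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    axpy<<<blocks, threads>>>(dx, dy, 3.0f, n);         // launch a grid of threads
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);  // copy result GPU -> CPU

    printf("y[0] = %f\n", hy[0]);                       // expect 5.0
    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}

The transfers over the bus are the usual catch: the arithmetic per copy has to be heavy enough to be worth the round trip, or the GPU ends up slower than just doing the math on the CPU.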
 

jtr1962

Storage? I am Storage!
Joined
Jan 25, 2002
Messages
4,174
Location
Flushing, New York
I'm using Circuitmaker 6.0 student version. It's no longer supported or updated by the original programmer, so there's no chance of the code being updated to take advantage of GPUs. Like I said in my first post, I'm more curious for academic reasons whether I can get this and a few other computation-intensive CAD programs to use the GPU. If I can't, that's fine. It's not like I'm waiting hours for a simulation to end; worst case it's a couple of minutes.
 

Mercutio

Fatwah on Western Digital
Joined
Jan 17, 2002
Messages
21,593
Location
I am omnipresent
There are some Linux-based projects for using GPUs for general-purpose computation. Most of them focus on CUDA or OpenCL, but there are people doing work along the lines you're describing.
 

jtr1962

Storage? I am Storage!
Joined
Jan 25, 2002
Messages
4,174
Location
Flushing, New York
I found a whole site dedicated to using GPUs for general-purpose computing:

http://gpgpu.org/

Unfortunately, it doesn't look like anyone has developed a driver of some sort that can transparently reroute floating-point operations from an arbitrary application to the GPU.
 

LunarMist

I can't believe I'm a Fixture
Joined
Feb 1, 2003
Messages
16,664
Location
USA
jtr1962 said: I'm using Circuitmaker 6.0 student version. It's no longer supported or updated by the original programmer, so there's no chance of the code being updated to take advantage of GPUs. Like I said in my first post, I'm more curious for academic reasons whether I can get this and a few other computation-intensive CAD programs to use the GPU. If I can't, that's fine. It's not like I'm waiting hours for a simulation to end; worst case it's a couple of minutes.

Any modern CPU should provide dramatic performance increases over what you have. I tried to install the program in XP but it failed, so I cannot comment on the speed.
 

Pradeep

Storage? I am Storage!
Joined
Jan 21, 2002
Messages
3,845
Location
Runny glass
jtr, there are two ways it can be done. One is for the application to be written explicitly to support such operations; OpenCL would be one option. The other is for libraries to provide certain functions that can then be run on the GPU. There are libraries available for Excel acceleration, etc.
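
To illustrate the library route: the application keeps its existing structure and simply calls a GPU-backed routine in place of a CPU one. Here's a hedged sketch using NVIDIA's CUBLAS for a dense matrix multiply (sizes and data are made up for illustration):

Code:
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int n = 512;                              // multiply two n x n matrices
    size_t bytes = (size_t)n * n * sizeof(float);
    float *hA = (float *)malloc(bytes);
    float *hB = (float *)malloc(bytes);
    float *hC = (float *)malloc(bytes);
    for (int i = 0; i < n * n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes); cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    // C = alpha * A * B + beta * C, computed on the GPU by the library call
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, dA, n, dB, n, &beta, dC, n);

    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("C[0] = %f\n", hC[0]);                   // expect 1024.0 for these inputs
    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}

The limitation, as noted earlier in the thread, is that the application (or a library it links against) has to make that call explicitly; nothing reroutes an arbitrary program's floating-point math to the GPU behind its back.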
 