13

Is it possible to use a shader to calculate some values and then return them to the CPU for further use?

For example, I send a mesh down to the GPU with some parameters describing how it should be modified (changing the positions of its vertices), and get the resulting mesh back. This seems rather impossible to me, because I haven't seen any variable for communication from shaders back to the CPU. I'm using GLSL, so there are just uniforms, attributes and varyings. Should I use an attribute or a uniform? Would they still be valid after rendering? Can I change the values of those variables and read them back on the CPU? There are methods for mapping data on the GPU, but would those values be changed and still valid?

This is how I'm thinking about it, though there could be another way that is unknown to me. I would be glad if someone could explain this to me, as I've just read some books about GLSL and now I would like to program more complex shaders, but I wouldn't like to rely on methods that are impossible at this time.

Thanks

Raven
  • 4,403
  • 7
  • 41
  • 71

4 Answers

7

Great question! Welcome to the brave new world of General-Purpose Computing on Graphics Processing Units (GPGPU).

What you want to do is possible with pixel shaders. You load a texture (that is, your data), apply a shader (to do the desired computation), and then use render-to-texture to pass the resulting data from the GPU back to main memory (RAM).
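As a rough sketch of that texture trick, here is a hypothetical GLSL fragment shader (the names `inputData` and `scale` are made up for illustration): each texel of the input texture holds one data element, and the computed result is written into the texture attached to the framebuffer.

```glsl
// Hypothetical GPGPU fragment shader: one texel = one data element.
uniform sampler2D inputData;   // your data, uploaded as a texture
uniform float scale;           // an example computation parameter

void main()
{
    vec4 value = texture2D(inputData, gl_TexCoord[0].st);
    // The "result" of the computation ends up in the render target,
    // i.e. the texture bound to the FBO.
    gl_FragColor = value * scale;
}
```

On the host side you would then read the render target back, e.g. with glReadPixels or glGetTexImage, to get the results into CPU memory.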

There are tools created for this purpose, most notably OpenCL and CUDA. They greatly aid GPGPU, so that this sort of programming looks almost like CPU programming.

They do not require any 3D graphics experience (although it is still preferred :) ). You don't need to do tricks with textures; you just load arrays into GPU memory. Processing algorithms are written in a slightly modified version of C. The latest version of CUDA supports C++.
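To give an idea of that "slightly modified C", here is a hypothetical CUDA kernel (names invented for illustration) doing exactly the kind of per-vertex modification the question asks about:

```cuda
// Hypothetical CUDA kernel: displace each vertex along a direction.
__global__ void displace(float3 *verts, float3 dir, float amount, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        verts[i].x += dir.x * amount;
        verts[i].y += dir.y * amount;
        verts[i].z += dir.z * amount;
    }
}
// Host side (sketch): cudaMemcpy the vertex array to the GPU, launch
// displace<<<blocks, threads>>>(...), then cudaMemcpy the result back.
```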

I recommend starting with CUDA, since it is the most mature one: http://www.nvidia.com/object/cuda_home_new.html

caiosm1005
  • 1,533
  • 1
  • 18
  • 31
Andrey
  • 56,384
  • 10
  • 111
  • 154
  • As testalino wrote, CUDA is nVidia-specific. As I am running a Radeon, wouldn't I suffer from a slow-down, or is it even possible to run CUDA on a Radeon? Is OpenCL less effective, or why don't you recommend using it first? – Raven Oct 12 '10 at 13:39
  • @Raven Yes, CUDA is completely nVidia-specific. If I were to choose, I would prefer CUDA + nVidia. Since you have ATI, you have to pick OpenCL. As far as I know CUDA is more mature and the tools are better, etc. In terms of performance I don't think there is a significant difference. – Andrey Oct 12 '10 at 13:47
  • @Raven, I would choose Direct Compute if I were using DirectX already, and OpenCL if I were using OpenGL. I would never use CUDA, for the reason stated: you can't make sure that everyone who uses your application has an NVIDIA card. – testalino Oct 12 '10 at 14:27
  • Thanks for the opinions. For the reason you stated (I hate those PhysX games I can't fully play), plus the fact that I was learning OpenGL rather than Cg or DirectX, I think OpenCL will suit my needs best. – Raven Oct 12 '10 at 16:10
  • @Raven If you have ATI then this is the only option. – Andrey Oct 12 '10 at 16:22
  • The question is specifically about GLSL. Many devices do not have GPGPU or OpenCL support, such as the millions of iPhones out there! –  Dec 01 '12 at 02:31
2

This is easily possible on modern graphics cards using either OpenCL, Microsoft Direct Compute (part of DirectX 11) or CUDA. The normal shader languages are utilized (GLSL and HLSL, for example). The first two work on both Nvidia and ATI graphics cards; CUDA is Nvidia-exclusive.

These are special libraries for computing on the graphics card. I wouldn't use a normal 3D API for this, although it is possible with some workarounds.

testalino
  • 5,274
  • 6
  • 31
  • 45
1

Nowadays you can use shader storage buffer objects (SSBOs) in OpenGL (4.3+) to write values from shaders that can then be read back on the host.
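A minimal sketch of the idea (the block name `Results` and the write pattern are made up for illustration): the shader writes into a buffer-backed interface block, and the host reads that buffer afterwards.

```glsl
#version 430
// Shader storage buffer: the GPU writes here, the CPU reads it back later.
layout(std430, binding = 0) buffer Results {
    vec4 results[];
};

void main()
{
    // Example: store one value per invocation.
    results[gl_VertexID] = vec4(float(gl_VertexID));
}
```

On the host, after the draw/dispatch and a glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT), you can read the values back with glGetBufferSubData or glMapBufferRange.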

viktorzeid
  • 1,361
  • 1
  • 16
  • 30
0

My best guess would be to point you to BehaveRT, a library created to harness GPUs for behavioral models. I think that if you can formulate your modifications within the library, you could benefit from its abstraction.

As for the data passing back and forth between your CPU and GPU, I'll let you browse the documentation; I'm not sure about it.

samy
  • 14,306
  • 2
  • 49
  • 80