As far as I know, we can't read the Z (depth) buffer value in OpenGL ES 2.0. So I am wondering: how can we get the 3D world coordinates of a point on the 2D screen?

Actually, I have a rough idea that might work. Since we can read RGBA values using glReadPixels, how about duplicating the depth buffer and storing it in a color buffer (say, ColorforDepth)? Of course there would need to be some nice encoding convention so that we don't lose any of the depth buffer's information. Then, whenever we need a point's world coordinates, we attach this ColorforDepth buffer to the framebuffer, render it, and use glReadPixels to read back the depth information for that frame.

However, this will cause a one-frame flash, since the color buffer shown is a weird buffer translated from the depth buffer. I am still wondering whether there is a standard way to get the depth in OpenGL ES 2.0?
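
For reference, once I can read a window-space depth value back, I imagine the last step would look roughly like this (an untested GLSL-style sketch; invViewProj and viewport are placeholder names for my inverse view-projection matrix and screen size):

// Sketch only: reconstruct a world-space position from a pixel position
// plus the window-space depth value read back via glReadPixels.
vec3 unproject(vec2 pixel, float winZ, mat4 invViewProj, vec2 viewport) {
    // Window coordinates -> normalized device coordinates in [-1, 1].
    vec4 ndc = vec4(2.0 * pixel / viewport - 1.0, 2.0 * winZ - 1.0, 1.0);
    vec4 world = invViewProj * ndc;
    return world.xyz / world.w; // undo the perspective divide
}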

Thanks in advance! :)

Brian

3 Answers


Using an FBO, you can render without displaying the results. If you're in ES 2.0, your fragment shader can access the current fragment's depth (in window coordinates) as part of gl_FragCoord, so you can write that to the colour buffer, use glReadPixels to get the result back, and proceed. Alternatively, you can pass world-space z as a varying and write that from your fragment shader, in case that's an easier way around.

To convince yourself, try writing a quick shader that puts gl_FragCoord.z out hastily in low precision, e.g. just

gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0);

You should get a greyscale image, with the intensity of the colour representing depth. Because you're in window coordinates, intensity will range from 0.0 (the closest possible unclipped fragment) to 1.0 (the farthest possible unclipped fragment). To avoid losing quite a lot of precision, it's probably more helpful to split the value between components, as your vendor almost certainly doesn't support floating-point target buffers.
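
As an illustrative (and untested) sketch of that sort of split, assuming an 8-bit RGBA target, something along these lines is commonly used:

// Sketch only: pack a [0, 1) depth value into four 8-bit channels.
vec4 packDepth(float depth) {
    vec4 enc = fract(depth * vec4(1.0, 255.0, 65025.0, 16581375.0));
    // Subtract each finer channel's contribution so channels don't overlap.
    enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
    return enc;
}

// The reverse, applied after reading the pixels back:
float unpackDepth(vec4 rgba) {
    return dot(rgba, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}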

Tommy
  • Tommy, could you please say a bit more about splitting gl_FragCoord.z between components? Not following. Cheers. – dugla Apr 30 '12 at 01:23
  • @dugla If depth were a 32-bit integer, you might put 8 bits of it in each of R, G, B and A. So in practice what you'd do is multiply by the vector `(1, 256, 65536, 16777216)`, then store each component of the result mod 1.0 into the relevant channel. You can recombine later by dividing each channel by the relevant component and adding the results together. – Tommy May 01 '12 at 00:37
  • @Tommy It is very interesting!! Could you elaborate on the reverse of the vec4-to-float conversion? Could you show an example, please? – Chego Apr 13 '21 at 05:15

I use basic ray casting to pick a 3D object from a screen touch. In practice, I calculate the intersection between the ray through the touch point (along the screen normal) and a sphere containing my object. For very precise picking or complicated shapes, you have to use several spheres.
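
A minimal sketch of the ray/sphere test (untested; ro and rd are the ray origin and normalized direction unprojected from the touch point, c and r describe the bounding sphere):

// Returns the distance along the ray to the nearest hit, or -1.0 on a miss.
float raySphere(vec3 ro, vec3 rd, vec3 c, float r) {
    vec3 oc = ro - c;
    float b = dot(oc, rd);
    float q = dot(oc, oc) - r * r;
    float disc = b * b - q;
    if (disc < 0.0) return -1.0; // the ray misses the sphere
    return -b - sqrt(disc); // nearest of the two intersection distances
}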

You can also project some key points of your object into the 2D space of your screen (by multiplying your 3D points by your transformation matrix) and then do some 2D comparisons (distance) against your touch point.
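
The projection variant is roughly this (again just a sketch; mvp and viewport stand in for your own model-view-projection matrix and screen size in pixels):

// Project a 3D point to pixel coordinates for 2D distance comparisons.
vec2 projectToScreen(vec3 p, mat4 mvp, vec2 viewport) {
    vec4 clip = mvp * vec4(p, 1.0);
    vec2 ndc = clip.xy / clip.w; // perspective divide
    return (ndc * 0.5 + 0.5) * viewport; // NDC -> pixel coordinates
}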

Vincent Zgueb

I would also like to be able to read values from the depth buffer, but my research indicates it can't be done.

As Vincent suggests, if you have simple shapes like spheres, ray casting is probably better.

For more complex shapes, though, I'm thinking of rendering the object to a (potentially smaller) offscreen buffer, manually assigning one of the color components of each vertex to be the depth of that vertex, and then reading the color values. This is somewhat inelegant and annoying, though, and requires you to be able to convert object space to screen space (I'm using my own quaternions to drive the matrices, so that takes care of that). There may be a way with shaders to write the depth information into the color or stencil buffer (does GL ES even have a stencil buffer?).
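
Something like this fragment shader is what I have in mind (untested; vDepth would be eye-space depth divided by a hypothetical farPlane in the vertex shader, so it lands in [0, 1]):

precision mediump float;

varying float vDepth; // set in the vertex shader: -eyePos.z / farPlane

void main() {
    // Write depth as greyscale so glReadPixels can recover it later.
    gl_FragColor = vec4(vec3(vDepth), 1.0);
}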

If anybody has a cleaner approach, I'd love to hear it.

orion elenzil