
So, I've been reading about this, and I still haven't reached a conclusion. Some examples use textures as their render targets, some people use renderbuffers, and some use both!

For example, using just textures:

// Create the gbuffer textures
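// (assumes a framebuffer object has already been generated and bound to GL_FRAMEBUFFER)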
glGenTextures(ARRAY_SIZE_IN_ELEMENTS(m_textures), m_textures);
glGenTextures(1, &m_depthTexture);

for (unsigned int i = 0 ; i < ARRAY_SIZE_IN_ELEMENTS(m_textures) ; i++) {
    glBindTexture(GL_TEXTURE_2D, m_textures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, WindowWidth, WindowHeight, 0, GL_RGB, GL_FLOAT, NULL);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, m_textures[i], 0);
}

both:

glGenRenderbuffersEXT ( 1, &m_diffuseRT );
glBindRenderbufferEXT ( GL_RENDERBUFFER_EXT, m_diffuseRT );
glRenderbufferStorageEXT ( GL_RENDERBUFFER_EXT, GL_RGBA, m_width, m_height );
glFramebufferRenderbufferEXT ( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, m_diffuseRT );
glGenTextures ( 1, &m_diffuseTexture );
glBindTexture ( GL_TEXTURE_2D, m_diffuseTexture );
glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGBA, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
// Attach the texture to the FBO
glFramebufferTexture2DEXT ( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, m_diffuseTexture, 0 );

What's the difference? What's the point of creating a texture, a renderbuffer, and then assigning one to the other? After you successfully supply a texture with an image, its memory is already allocated, so why would one then need to bind it to a renderbuffer? When would one use textures versus renderbuffers, and what are the advantages of each?

I've read that you cannot read from a renderbuffer, only from a texture. What's the use of it, then?

EDIT: So, my current code for a GBuffer is this:

    enum class GBufferTextureType
        {
        Depth = 0,
        Position,
        Diffuse,
        Normal,
        TexCoord
        };

. . .

glGenFramebuffers ( 1, &OpenGLID );
if ( Graphics::GraphicsBackend->CheckError() == false )
    {
    Delete();
    return false;
    }

glBindFramebuffer ( GL_FRAMEBUFFER, OpenGLID );
if ( Graphics::GraphicsBackend->CheckError() == false )
    {
    Delete();
    return false;
    }

uint32_t TextureGLIDs[5];
glGenTextures ( 5, TextureGLIDs );
if ( Graphics::GraphicsBackend->CheckError() == false )
    {
    Delete();
    return false;
    }

// Create the depth texture
glBindTexture ( GL_TEXTURE_2D, TextureGLIDs[ ( int ) GBufferTextureType::Depth] );
glTexImage2D ( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, In_Dimensions.x, In_Dimensions.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL );
glFramebufferTexture2D ( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, TextureGLIDs[ ( int ) GBufferTextureType::Depth], 0 );

// Create the color textures
for ( unsigned cont = 1; cont < 5; ++cont )
    {
    glBindTexture ( GL_TEXTURE_2D, TextureGLIDs[cont] );
    glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGB32F, In_Dimensions.x, In_Dimensions.y, 0, GL_RGB, GL_FLOAT, NULL );
    glFramebufferTexture2D ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + ( cont - 1 ), GL_TEXTURE_2D, TextureGLIDs[cont], 0 ); // Attachments 0-3, to match the glDrawBuffers call below
    }

// Specify draw buffers
GLenum DrawBuffers[4];
for ( unsigned cont = 0; cont < 4; ++cont )
    DrawBuffers[cont] = GL_COLOR_ATTACHMENT0 + cont;

glDrawBuffers ( 4, DrawBuffers );

if ( Graphics::GraphicsBackend->CheckError() == false )
    {
    Delete();
    return false;
    }

GLenum Status = glCheckFramebufferStatus ( GL_FRAMEBUFFER );
if ( Status != GL_FRAMEBUFFER_COMPLETE )
    {
    Delete();
    return false;
    }

Dimensions = In_Dimensions;

// Unbind
glBindFramebuffer ( GL_FRAMEBUFFER, 0 );

Is this the way to go? I still have to write the corresponding shaders...

Joao Pincho
  • The second version does not make any sense at all since it overrides the binding of the renderbuffer. – BDL Dec 16 '16 at 10:36

2 Answers


What's the point of creating a texture, a renderbuffer, and then assigning one to the other?

That's not what's happening. But that's OK, because that second example code is errant nonsense: the glFramebufferTexture2DEXT call overrides the binding established by glFramebufferRenderbufferEXT, so the renderbuffer is never actually used after it is created.
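
For reference, a texture-only rewrite of that snippet is a minimal sketch of what it should have been: it reuses the snippet's own m_diffuseTexture, m_width and m_height, drops the dead renderbuffer, uses core GL entry points instead of the EXT suffixes, and swaps in a sized internal format:

// The renderbuffer contributed nothing, so only the texture remains
glGenTextures ( 1, &m_diffuseTexture );
glBindTexture ( GL_TEXTURE_2D, m_diffuseTexture );
glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGBA8, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
// Attach the texture to the currently bound FBO
glFramebufferTexture2D ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_diffuseTexture, 0 );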

If you found that code online somewhere, I strongly advise you to disregard anything that source told you about OpenGL development. Though I would advise that anyway, since it's using the "EXT" extension functions in 2016, almost a decade since core FBOs became available.

I've read that you cannot read from a renderbuffer, only from a texture. What's the use of it, then?

That is entirely the point of them: you use a renderbuffer for images that you don't want to read from. That's not useful for deferred rendering, since you really do want to read from them.

But imagine you're generating a reflection image of a scene, which you will later use as a texture in your main scene. To render the reflection scene you need a depth buffer, but you're not going to read from that depth buffer (not as a texture, at any rate); you need it only for depth testing. The only image you're going to read from afterwards is the color image.

So you would make the depth buffer a renderbuffer. That tells the implementation that the image can be put into whatever storage is most efficient for use as a depth buffer, without having to worry about read-back performance. This may or may not have a performance impact. But at the very least, it won't be any slower than using a texture.
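
A minimal sketch of that reflection setup, assuming core GL 3.0+ entry points and placeholder width/height variables: the color image is a texture because it will be sampled later, while the depth buffer is a write-only renderbuffer.

GLuint reflectionFbo, reflectionColor, reflectionDepth;
glGenFramebuffers(1, &reflectionFbo);
glBindFramebuffer(GL_FRAMEBUFFER, reflectionFbo);

// Color: a texture, because the reflection image will be sampled in the main pass
glGenTextures(1, &reflectionColor);
glBindTexture(GL_TEXTURE_2D, reflectionColor);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, reflectionColor, 0);

// Depth: a renderbuffer, because it is only ever written by the depth test
glGenRenderbuffers(1, &reflectionDepth);
glBindRenderbuffer(GL_RENDERBUFFER, reflectionDepth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, reflectionDepth);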

Nicol Bolas
  • Ok. I'm going to try it out. Those code samples were found in several different places, so I'm getting really confused about all this. – Joao Pincho Dec 19 '16 at 15:28

Most rendering scenarios need a depth and/or stencil buffer, though it is rare that you would ever need to sample the data stored in the stencil buffer from a shader.

It would be impossible to perform depth/stencil tests if your framebuffer had no place to store this data, and any render pass that uses these fragment tests requires a framebuffer with the appropriate images attached.

If you are not going to use the depth/stencil buffer data in a shader, a renderbuffer will happily satisfy the storage requirements for fixed-function fragment tests. Renderbuffers have fewer format restrictions than textures do, particularly where multisampling is concerned.


D3D10 introduced support for multisampled color textures but omitted multisampled depth textures; D3D10.1 later fixed that problem, and GL 3.0 was finalized after D3D10's initial design oversight had been corrected.

The pre-GL3 / pre-D3D10.1 design manifests in GL as a multisampled framebuffer object that allows either texture or renderbuffer color attachments, but forces you to use a renderbuffer for the depth attachment.
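
A minimal sketch of that kind of attachment (the sample count and the width/height variables are illustrative), using the core glRenderbufferStorageMultisample entry point:

// 4x multisampled depth/stencil storage as a renderbuffer,
// attached to the currently bound framebuffer
GLuint msDepthStencil;
glGenRenderbuffers(1, &msDepthStencil);
glBindRenderbuffer(GL_RENDERBUFFER, msDepthStencil);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH24_STENCIL8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, msDepthStencil);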


Renderbuffers are ultimately the lowest common denominator for storage; they will get you through tough jams on feature-limited hardware. You can actually blit the data stored in a renderbuffer into a texture in some situations where you could not draw directly into the texture.

To that end, you can resolve a multisampled renderbuffer into a single-sampled texture by blitting from one framebuffer to another. This is an implicit multisample resolve, and it (would) allow you to use the anti-aliased results of a previous render pass with a standard texture lookup. Unfortunately, it is thoroughly useless for anti-aliasing in deferred shading; you need explicit multisample resolve for that.
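
A sketch of that resolve blit, assuming msaaFbo carries the multisampled renderbuffer attachments and resolveFbo has a single-sampled texture at GL_COLOR_ATTACHMENT0 (both names are placeholders):

glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
// Source and destination rectangles must match for a multisample resolve,
// and the filter must be GL_NEAREST
glBlitFramebuffer(0, 0, width, height,
                  0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
// The texture attached to resolveFbo can now be sampled with an ordinary lookup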

Nonetheless, it is incorrect to say that a renderbuffer is not readable; it is, in every sense of the word. But since your goal is deferred shading, reading one would require additional GL commands to copy the data into a texture first.
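
As a sketch of one such copy path, assuming fbo has the renderbuffer attached at GL_COLOR_ATTACHMENT0 and destTexture is an already-allocated texture of matching size (all placeholder names):

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);   // FBO with the renderbuffer attached
glReadBuffer(GL_COLOR_ATTACHMENT0);            // select the renderbuffer's attachment point
glBindTexture(GL_TEXTURE_2D, destTexture);     // destination texture
// Copy from the current read buffer into the bound texture's level 0
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);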

Andon M. Coleman
  • I strongly suspect that what you were attempting to do/show in the second example is actually blitting between framebuffers. Simply attaching a different type of image in place of the original won't do anything. – Andon M. Coleman Dec 19 '16 at 08:03