
In both the OpenGL and Direct3D rendering pipelines, the geometry shader is processed after the vertex shader and before the fragment/pixel shader. Now obviously processing the geometry shader after the fragment/pixel shader makes no sense, but what I'm wondering is why not put it before the vertex shader?

From a software/high-level perspective, at least, it seems to make more sense that way: first you run the geometry shader to create all the vertices you want (and discard any data only relevant to the geometry shader), then you run the vertex shader on all the vertices thus created. There's an obvious drawback in that the vertex shader now has to run on each of the newly created vertices, but presumably any per-vertex logic needed there would, in the current pipelines, have to run for each vertex inside the geometry shader anyway; so there's not much of a performance hit.

I'm assuming, since the geometry shader occupies this position in both pipelines, that there's either a hardware reason or a non-obvious pipeline reason why it makes more sense there.

(I am aware that polygon linking needs to take place before running a geometry shader (though possibly not if it takes single points as input?), but I also know it needs to happen again after the geometry shader, so wouldn't it still make sense to run the vertex shader between those two stages?)

1 Answer


It is basically because "geometry shader" was a pretty stupid choice of words on Microsoft's part. It should have been called "primitive shader."

Geometry shaders make the primitive assembly stage programmable, and you cannot assemble primitives before you have an input stream of vertices computed. There is some overlap in functionality since you can take one input primitive type and spit out a completely different type (often requiring the calculation of extra vertices).
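For example, here is a minimal GLSL sketch of that idea: a geometry shader that takes points as input and assembles a completely different primitive type (a triangle strip forming a screen-aligned quad), calculating the four extra corner vertices itself. The uniform and varying names (`u_Projection`, `u_HalfSize`, `v_ViewPos`) are made up for illustration:

```glsl
#version 330 core

// One input point in, four output vertices out, as a triangle strip.
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform mat4  u_Projection;  // hypothetical names, purely illustrative
uniform float u_HalfSize;

in  vec4 v_ViewPos[];        // view-space position from the vertex shader
out vec2 g_TexCoord;

void main() {
    // The four corners are computed entirely here, in the geometry
    // shader; they never pass through the vertex shader stage.
    const vec2 corners[4] = vec2[4](vec2(-1.0, -1.0), vec2(1.0, -1.0),
                                    vec2(-1.0,  1.0), vec2(1.0,  1.0));
    for (int i = 0; i < 4; ++i) {
        vec4 pos = v_ViewPos[0];
        pos.xy  += corners[i] * u_HalfSize;
        gl_Position = u_Projection * pos;  // projection applied here, not in the VS
        g_TexCoord  = corners[i] * 0.5 + 0.5;
        EmitVertex();
    }
    EndPrimitive();
}
```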

These extra emitted vertices do not require a trip backwards in the pipeline to the vertex shader stage; they are calculated entirely during an invocation of the geometry shader. This concept should not be too foreign, because tessellation control and evaluation shaders also look very much like vertex shaders in form and function.
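To make that concrete, the vertex shader feeding the sketch above could be a bare passthrough (again with made-up names); it runs exactly once per input point and never sees the four corner vertices the geometry shader emits:

```glsl
#version 330 core

// Runs once per input point; the quad corners emitted by the geometry
// shader never come back through this stage.
uniform mat4 u_ModelView;

in  vec4 a_Position;
out vec4 v_ViewPos;

void main() {
    v_ViewPos = u_ModelView * a_Position;
    // No gl_Position needed here: the geometry shader writes its own.
}
```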

There are a lot of stages of vertex transform, and what we call vertex shaders are just the tip of the iceberg. In a modern application you can expect the output of a vertex shader to go through multiple additional stages before you have a finalized vertex for rasterization and pixel shading (which is also poorly named).

Andon M. Coleman
  • Whether you like the name or not, the problem with _primitive shader_ is that the abbreviation `PS` would be ambiguous with _pixel shader_. `GS` (geometry shader) is at least distinct from `PS` (pixel shader), `VS` (vertex shader), `HS` (hull shader), `DS` (domain shader), and `CS` (compute shader). – Chuck Walbourn Sep 06 '15 at 19:46
  • @ChuckWalbourn: Yeah, which brings me back to my conclusion: if they'd called pixel shaders fragment shaders we'd be in a lot better shape. OpenGL has a really nice property that each shader stage is named after the data it outputs (with the exception of the geometry shader, but there they were following Microsoft's lead). D3D names some shaders after their output, some after their input, and others non-descriptively. – Andon M. Coleman Sep 07 '15 at 00:06