
So, I've gotten to basic lighting in my OpenGL learning quest.

Consider the simplest lighting model. Each vertex has a position, color, and normal. The shader receives the model-view-projection matrix (MVP), the modelview matrix (MV), and the normal matrix (N), which is calculated as (MV⁻¹)ᵀ, as well as LightColor and LightDirection as uniforms. The vertex shader performs the lighting calculations; the fragment shader just outputs the interpolated colors.

Now, in every tutorial on this subject I have come across I see two things that puzzle me. First, the LightDirection is already assumed to be in eye coordinates. Second, the output color is calculated as

max(0, dot(LightDirection, N * normal))*LightColor*Color;

I would expect that the LightDirection should be negated first; that is, the formula I would think correct is

max(0, dot(-LightDirection, N * normal))*LightColor*Color;

It seems that LightDirection is assumed to be the reverse of the direction in which the light actually travels.
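The sign question can be checked numerically. Below is a plain-Python sketch (not GLSL; all names are illustrative) showing that the two conventions agree once the sign is accounted for: a LightDirection that points *toward* the source needs no negation, while the actual light-flow vector must be negated first.

```python
# Illustrative sketch of the diffuse term; names are hypothetical, not GLSL.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def diffuse(light_dir_to_source, normal):
    # Tutorial convention: LightDirection points TOWARD the (infinitely
    # far away) light source, so no negation is needed.
    return max(0.0, dot(light_dir_to_source, normal))

# Light shining straight down the -y axis:
to_source = (0.0, 1.0, 0.0)    # vector toward the light source
flow      = (0.0, -1.0, 0.0)   # direction the light actually travels

normal = (0.0, 1.0, 0.0)       # surface facing up

a = diffuse(to_source, normal)                             # 1.0
b = max(0.0, dot((-flow[0], -flow[1], -flow[2]), normal))  # also 1.0
```

Both conventions yield the same diffuse term; the only difference is where the negation happens.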

Q1: Is this some sort of established convention that the LightDirection in this model is assumed to be the vector to the infinitely far light source rather than the vector of the light direction or is this not a principal matter and it just so happened that in the tutorials I came across it was so assumed?

Q2: If LightDirection is in world coordinates rather than in eye coordinates, should it be transformed with the normal matrix or the modelview matrix to eye coordinates?

Thanks for clarifying these things!

Armen Tsirunyan

2 Answers


Q1: Is this some sort of established convention that the LightDirection in this model is assumed to be the vector to the infinitely far light source rather than the vector of the light direction or is this not a principal matter and it just so happened that in the tutorials I came across it was so assumed?

In fixed-function OpenGL, the fourth component (w) of the position supplied for a light determined whether the light was directional (w = 0) or positional (w = 1). In the case of a directional light, the supplied vector points toward the (infinitely far away) light source.

In the case of a positional light source, LightDirection is computed per vertex as LightDirection = LightPosition - VertexPosition; for the sake of simplicity this calculation is done in eye coordinates.
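The per-vertex calculation above can be sketched in plain Python (values and names are hypothetical; both inputs are assumed to already be in eye coordinates):

```python
# Per-vertex light direction for a POSITIONAL light, in eye coordinates.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

light_pos_eye  = (2.0, 2.0, 0.0)   # light position, eye space
vertex_pos_eye = (2.0, 0.0, 0.0)   # vertex position, eye space

# LightDirection = LightPosition - VertexPosition, then normalized
light_dir = normalize(sub(light_pos_eye, vertex_pos_eye))  # (0.0, 1.0, 0.0)
```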

Q2: If LightDirection is in world coordinates rather than in eye coordinates, should it be transformed with the normal matrix or the modelview matrix to eye coordinates?

In fixed-function OpenGL, the light position supplied through glLightfv was transformed by the modelview matrix that was current at call time. If the light was positional, it was transformed by the usual modelview matrix. In the case of a directional light, the normal matrix was used.
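The role of the fourth component can be illustrated with a small homogeneous-coordinate sketch (plain Python, hypothetical values): with w = 1 the translation part of the matrix applies, while with w = 0 it drops out, leaving only the rotational/scaling part.

```python
# A 4x4 modelview with identity rotation and a translation of (5, 0, 0),
# applied to a homogeneous vector (row-major, matrix times column vector).

def mat_vec4(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

modelview = [
    [1.0, 0.0, 0.0, 5.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]

positional  = mat_vec4(modelview, (1.0, 2.0, 3.0, 1.0))  # translation applies
directional = mat_vec4(modelview, (1.0, 2.0, 3.0, 0.0))  # translation ignored
```

Here `positional` comes out as (6.0, 2.0, 3.0, 1.0) while `directional` stays (1.0, 2.0, 3.0, 0.0).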

datenwolf
  • Thanks very much, datenwolf. Regarding your answer to Q2: was the directional light multiplied with the normal matrix because there was no ***need*** to multiply with modelview or because multiplying with modelview would yield incorrect results? – Armen Tsirunyan Sep 05 '11 at 17:17
  • The name "normal matrix" is a bit misleading. It would be better to call it a "direction transformation matrix". This is what makes (M^-1)^T special: it is applicable to directions instead of positions. Lighthouse3D has an excellent article about it: http://www.lighthouse3d.com/tutorials/glsl-tutorial/the-normal-matrix/ – datenwolf Sep 05 '11 at 20:44
  • @datenwolf When implementing a ray tracer, I remember implementing ray-ellipsoid intersection using ray-sphere intersection. If sphere `s` is transformed into an ellipsoid `e` by `M`, instead of doing `r-e` intersection, I transformed the ray, `r' = rM⁻¹` and did `r'-s` intersection. But the `transformRay` function gave the right output only when I applied just `M⁻¹` to both `ray.origin` and `ray.direction` and not `M⁻¹` to origin and `((M⁻¹)⁻¹)ᵀ = Mᵀ` to the direction. Why? Is `(M⁻¹)ᵀ` only applicable to normal vectors and not to all direction vectors as stated in your comment? – legends2k Nov 01 '14 at 15:12
  • It may be that I'm misunderstanding something completely too, hence the comment to learn the matter correctly. However, [my prof's lecture](https://www.youtube.com/watch?v=ARB3e0kjVoY) states the same too. On which instances do we apply the inverse-transpose? Can't I blindly do it as a rule i.e. for all direction vectors apply inverse-transpose? – legends2k Nov 01 '14 at 15:15
  • 1
    @legends2k: You apply the inverse transpose when you want to know how the tangent space transforms in relation to a regular transformation. The tangent space is the space of directions in relation to a surface (space). So it does not apply immediately to any kind of spatial vector, but to a certain class of vectors. So when I mention "direction" vectors, then I actually mean these kind of directions. – datenwolf Nov 01 '14 at 15:29
  • Thanks, I think I kind of got it, still would be great if you can point me to some literature. Did you mean to write normal space (instead of tangent space) and tangent space (instead of surface space) in your comment? – legends2k Nov 01 '14 at 15:40
  • @legends2k: normal space == tangent space. To be precise, the surface normal is actually one basis vector of the tangent space. The surface tangent, the normal, and *some* vector not collinear with both of them (usually called the binormal vector) form the tangent space of a 2-surface in 3D space. When it comes to literature, the only books I could recommend (because I've read them) seem to be out of print. But any text on the basic principles of computer graphics should do. There's a new edition of "Computer Graphics – Principles and Practice" out, which may cover it. – datenwolf Nov 01 '14 at 17:46
  • @legends2k: "Computer Graphics – Principles and Practice" should be a recommended read anyway. At least it belongs into the bookshelf of every person who considers him-/herself a graphics coder/hacker. – datenwolf Nov 01 '14 at 17:49
  • Oh, now I remember tangent, normal and binormal vectors for curves of vector-valued function from Calculus III. As for _Computer Graphics — Principles and Practice_, I've it but this isn't discussed in this detail. Thanks for the clarification :) – legends2k Nov 03 '14 at 15:21

Is this some sort of established convention that the LightDirection in this model is assumed to be the vector to the infinitely far light source rather than the vector of the light direction or is this not a principal matter and it just so happened that in the tutorials I came across it was so assumed?

Neither. There are simply multiple kinds of lights.

A directional light represents a light source that, from the perspective of the scene, is infinitely far away. It is represented by a vector direction.

A positional light represents a light source that has a position in the space of the scene. It is represented by a vector position.

It's up to your shader's lighting model as to which it uses. Indeed, you could have a directional light and several point lights, all affecting the same model. It's up to you.

The tutorials you saw simply used directional lights. Though they probably should have at least mentioned the directional light approximation.

If LightDirection is in world coordinates rather than in eye coordinates, should it be transformed with the normal matrix or the modelview matrix to eye coordinates?

Neither. If the light's direction is in world coordinates, then you need to get the normal into world coordinates as well. It doesn't matter what space you do lighting in (though doing it in clip-space or other non-linear post-projective spaces is rather hard); what matters is that everything is in the same space.

The default OpenGL modelview matrix does what it says: it goes from model space to view space (eye space). It passes through world space, but it doesn't stop there. And the default OpenGL normal matrix is just the inverse-transpose of the modelview matrix. So neither of them will get you to world space.
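Why the inverse-transpose is needed at all can be seen with a small numeric sketch (plain Python, hypothetical values): under a non-uniform scale, transforming a normal with the plain matrix breaks its perpendicularity to the surface, while the inverse-transpose preserves it.

```python
# 3x3 case: non-uniform scale (x stretched by 2) applied to a tangent and
# a normal. For a diagonal matrix, the inverse-transpose just has
# reciprocal diagonal entries.

def mat_vec3(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

M = [[2.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]

M_inv_T = [[0.5, 0.0, 0.0],
           [0.0, 1.0, 0.0],
           [0.0, 0.0, 1.0]]

tangent = (1.0, 1.0, 0.0)   # lies in the surface
normal  = (1.0, -1.0, 0.0)  # perpendicular to the tangent

t2      = mat_vec3(M, tangent)        # transformed tangent: (2.0, 1.0, 0.0)
n_wrong = mat_vec3(M, normal)         # plain M: no longer perpendicular
n_right = mat_vec3(M_inv_T, normal)   # inverse-transpose: still perpendicular

broken = dot(t2, n_wrong)   # 3.0 -> perpendicularity lost
kept   = dot(t2, n_right)   # 0.0 -> perpendicularity preserved
```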

In general, you should not do lighting (or anything else on the GPU) in world space, for reasons best explained elsewhere. In order to do lighting in world space, you need to have matrices that transform into world space. My suggestion would be to do it right: put the light direction in eye space, and leave world space to CPU code.

Nicol Bolas