
I'm new to WebGL and for an assignment I'm trying to write a function which takes an object as its argument, let's say "objectA". ObjectA will not be rendered, but if it overlaps with another object in the scene, let's say "objectB", the part of objectB that is inside objectA will disappear. The effect is that there is a hole in objectB without modifying its mesh structure. I've managed to get this working in my own render engine, which is based on ray tracing, and it gives the following effect:

[Image: initial scene]

[Image: scene with objectA removed]

In the first image, the green sphere is "objectA" and the blue cube is "objectB". So now I'm trying to program it in WebGL, but I'm a bit stuck. Because WebGL is based on rasterization rather than ray tracing, the effect has to be computed in another way. One possibility would be to modify the Z-buffer algorithm so that fragments whose z-value lies inside objectA are ignored.

The algorithm I have in mind works as follows: normally, only the fragment with the smallest z-value is stored at a particular pixel, together with its colour. The first modification is that at each pixel a list of all fragments covering that pixel is maintained; no fragments are discarded. Secondly, each fragment stores an extra parameter identifying the object it belongs to. Next, the fragments are sorted in increasing order of their z-value.

Then, if the first fragment belongs to objectA, it is ignored. If the next one belongs to objectB, it is ignored as well. If the third belongs to objectA and the fourth to objectB, the fourth one is chosen, because it lies outside objectA. So the first fragment belonging to objectB is chosen, under the constraint that the number of preceding fragments belonging to objectA is even. If that number is odd, the fragment lies inside objectA and is ignored (see the sketch below).
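To make the selection rule concrete, here is a minimal sketch of it in plain JavaScript; the fragment fields (z, owner) are hypothetical names for illustration, not WebGL API:

```javascript
// Pick the visible fragment at one pixel from all fragments covering it.
// Each objectA fragment toggles an inside/outside parity; the first
// objectB fragment encountered while the parity is even lies outside
// objectA and is the one to display.
function pickVisibleFragment(fragments) {
  const sorted = [...fragments].sort((p, q) => p.z - q.z); // front to back
  let aCrossings = 0; // number of objectA surfaces passed so far
  for (const frag of sorted) {
    if (frag.owner === 'A') {
      aCrossings++; // entering or leaving objectA
    } else if (frag.owner === 'B' && aCrossings % 2 === 0) {
      return frag; // outside objectA: visible
    }
    // objectB fragments with odd parity are inside objectA: skipped
  }
  return null; // only background remains
}

// Example: B's front face lies inside A, its back face outside.
const frags = [
  { z: 1.0, owner: 'A' },
  { z: 2.0, owner: 'B' }, // inside A -> skipped
  { z: 3.0, owner: 'A' },
  { z: 4.0, owner: 'B' }, // outside A -> chosen
];
console.log(pickVisibleFragment(frags)); // { z: 4.0, owner: 'B' }
```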

Is this somehow possible in WebGL? I've also tried to implement it via a stencil buffer, based on this blog post: WebGL : How do make part of an object transparent? However, that post is written for OpenGL. I translated its instructions to WebGL calls, but it didn't work at all. I'm also not sure whether it will work with a 3D object instead of a 2D triangle.

Thanks a lot in advance!


1 Answer


Why wouldn't you write a ray tracer inside the fragment shader (a.k.a. pixel shader)?

So you would need to render a fullscreen quad (two triangles), and the fragment shader would then be responsible for the ray tracing. There are plenty of resources to read and learn from.
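A minimal sketch of what such a fragment shader looks like, shown here intersecting a single hard-coded sphere (the camera setup and the uResolution uniform are illustrative assumptions):

```glsl
precision mediump float;
uniform vec2 uResolution; // canvas size in pixels (assumed uniform)

// Analytic ray/sphere intersection; returns hit distance or -1.0 on miss.
float intersectSphere(vec3 ro, vec3 rd, vec3 center, float radius) {
  vec3 oc = ro - center;
  float b = dot(oc, rd);
  float c = dot(oc, oc) - radius * radius;
  float h = b * b - c;
  if (h < 0.0) return -1.0;
  return -b - sqrt(h);
}

void main() {
  // Build a primary ray for this pixel (simple pinhole camera).
  vec2 uv = (2.0 * gl_FragCoord.xy - uResolution) / uResolution.y;
  vec3 ro = vec3(0.0, 0.0, 3.0);       // camera origin
  vec3 rd = normalize(vec3(uv, -2.0)); // ray direction

  float t = intersectSphere(ro, rd, vec3(0.0), 1.0);
  if (t > 0.0) {
    vec3 n = normalize(ro + t * rd); // surface normal on the sphere
    float diff = max(dot(n, normalize(vec3(1.0))), 0.0);
    gl_FragColor = vec4(vec3(0.2, 0.4, 1.0) * diff, 1.0);
  } else {
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); // background
  }
}
```

Once the whole scene is traced this way, carving objectA out of objectB becomes interval logic along each ray, just like in your own engine.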

EDIT:

Ray tracing and SDFs (signed distance functions, the usual basis for constructive solid geometry (CSG)) are a good way to handle what you need, and they are how intersecting objects is generally achieved in that setting. Intersections, and boolean operations in general, on mesh geometry (i.e. geometry made of polygons) are not done during rendering; rather, special algorithms do all the processing ahead of rendering, so that the resulting mesh actually exists in memory, its topology is explicitly computed, and it is then simply rendered.
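For illustration, here is how the subtraction in your example (cube minus sphere) looks with SDFs; the distance functions are the standard ones and the names are illustrative:

```glsl
// Signed distance to a sphere of radius r centered at the origin.
float sdSphere(vec3 p, float r) {
  return length(p) - r;
}

// Signed distance to an axis-aligned box with half-extents b.
float sdBox(vec3 p, vec3 b) {
  vec3 q = abs(p) - b;
  return length(max(q, 0.0)) + min(max(q.x, max(q.y, q.z)), 0.0);
}

// CSG subtraction: keep the box (objectB), carve out the sphere (objectA).
// max(-dA, dB) is "B minus A" in SDF form.
float sdScene(vec3 p) {
  float dSphere = sdSphere(p - vec3(0.5, 0.0, 0.0), 0.6); // objectA
  float dBox    = sdBox(p, vec3(0.5));                    // objectB
  return max(-dSphere, dBox);
}
```

Ray-marching sdScene in the fragment shader renders the carved cube directly, with no mesh modification.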

Depending on the specific scenario that you have, you might be able to achieve the effect under some requirements and restrictions.

There are a few important things to take into account: depth peeling (i.e. storing depth values of multiple fragments per single pixel), triangle winding order (CW or CCW), and polygon face orientation (front-facing or back-facing).

Say, for example, that both of your objects are convex. Then rendering the back-facing polygons of objectA, then of objectB, then the front-facing polygons of A, then of B, might achieve the desired effect (I'm not including the full calculations for all the cases of overlap that can exist). A stencil-based sketch of this family of tricks follows below.
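As one concrete variant of this idea, here is a sketch of stencil-based carving for a convex objectA, assuming a context created with { stencil: true } and hypothetical drawObjectA()/drawObjectB() helpers that issue the actual draw calls:

```javascript
// Pass 1: lay down objectB's depth only, no color yet.
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.LESS);
gl.colorMask(false, false, false, false);
drawObjectB();

// Pass 2: count objectA surfaces in front of B's surface.
// Front faces of A that pass the depth test increment the stencil,
// back faces decrement; for convex A this leaves stencil == 1 exactly
// where B's surface lies inside A.
gl.depthMask(false); // keep B's depth intact
gl.enable(gl.STENCIL_TEST);
gl.stencilFunc(gl.ALWAYS, 0, 0xFF);
gl.enable(gl.CULL_FACE);

gl.cullFace(gl.BACK); // draw A's front faces
gl.stencilOp(gl.KEEP, gl.KEEP, gl.INCR);
drawObjectA();

gl.cullFace(gl.FRONT); // draw A's back faces
gl.stencilOp(gl.KEEP, gl.KEEP, gl.DECR);
drawObjectA();

// Pass 3: shade objectB only where it is NOT inside A.
gl.disable(gl.CULL_FACE);
gl.colorMask(true, true, true, true);
gl.depthMask(true);
gl.depthFunc(gl.LEQUAL); // re-draw B at the stored depths
gl.stencilFunc(gl.EQUAL, 0, 0xFF);
gl.stencilOp(gl.KEEP, gl.KEEP, gl.KEEP);
drawObjectB();
```

Note that this only removes B's nearest surface; showing B's back face through the hole, as in your second image, needs a further pass over the remaining depth layers, which is where depth peeling comes in.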

Under some other sets of restrictions you might be able to achieve the effect.

In the specific example in your question, the first image shows the front-facing faces of the cube, and in the second image you can see the back face of the cube through the hole. That already implies that you have at least two depth values per pixel stored somehow.

There is also a distinction between intersecting in screen space, with volumes, or with faces. Your example works with faces, which is the hardest case. It splits into two sub-cases: the one you've shown, where mesh B's fragments that are inside mesh A are simply discarded (i.e. you drill a hole through its surface), and the true boolean operation, where you never open the surface but instead cut the volume; the latter is usually done with an algorithm that computes an output mesh. SDFs are great for volumes. Screen-space intersection is achieved simply by using the depth test to discard some fragments, as sketched below.
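For the screen-space route, here is a sketch of the discard in objectB's fragment shader, assuming you have already rendered objectA's front-face and back-face depths into two textures (uFrontDepthA and uBackDepthA are assumed names, and A must be convex for a single depth pair per pixel to suffice):

```glsl
precision mediump float;
uniform sampler2D uFrontDepthA; // depths of A's front faces
uniform sampler2D uBackDepthA;  // depths of A's back faces
uniform vec2 uResolution;

void main() {
  vec2 uv = gl_FragCoord.xy / uResolution;
  float dFront = texture2D(uFrontDepthA, uv).r;
  float dBack  = texture2D(uBackDepthA, uv).r;

  // This fragment of B lies inside A if its window-space depth falls
  // between A's front and back surfaces at this pixel: carve it away.
  float d = gl_FragCoord.z;
  if (d > dFront && d < dBack) discard;

  gl_FragColor = vec4(0.2, 0.4, 1.0, 1.0); // objectB's color
}
```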

Again, there are too many scenarios; it all depends on what you're trying to achieve and on the constraints you're working with.

– Abstract Algorithm
  • "Why wouldn't you write raytracer inside the fragment shader" because it's way way waaaaaaaaaaaaaaaay toooooooooo ssssssslllllllllooooooooowwwwwwwwww – gman Mar 21 '20 at 05:24