Is there a way to render one part of a scene at normal resolution and the other part of the screen at a lower resolution in OpenGL ES 2.0 on Android?

If I use GLES20.glViewport() and reduce the viewport size, the result isn't scaled up to full screen size, but I'm getting the desired result, only smaller.

I would like a solution without having to render to a texture and then render a quad on the screen.

dragostis

1 Answer


If what you want to achieve is a lower-resolution scene rendered into a higher-resolution viewport (and thus getting some kind of "pixelization" effect), then OpenGL cannot do that easily. Look here for a similar question.

Basically, you won't get around rendering the whole thing into a low-res texture (best done using FBOs) and displaying a screen-sized quad in the high-res viewport, sampling from the low-res texture using nearest filtering. OpenGL cannot just enlarge your pixels; a single fragment results in exactly one (or no) pixel.
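A minimal sketch of that FBO approach in Android Java, assuming a GL context is already current; `lowResWidth`/`lowResHeight`, `screenWidth`/`screenHeight`, `drawScene()`, and `drawFullScreenQuad()` are placeholders, not real API:

```java
int[] fbo = new int[1];
int[] tex = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glGenTextures(1, tex, 0);

// Allocate the low-res color texture the scene will be rendered into.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
        lowResWidth, lowResHeight, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
// GL_NEAREST keeps the enlarged pixels sharp (the "pixelization" look).
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
        GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
        GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);

GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,
        GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, tex[0], 0);

// Pass 1: render the scene at low resolution into the FBO.
GLES20.glViewport(0, 0, lowResWidth, lowResHeight);
drawScene();

// Pass 2: draw a screen-sized quad sampling the low-res texture.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
GLES20.glViewport(0, 0, screenWidth, screenHeight);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
drawFullScreenQuad();
```

A real implementation would also attach a depth renderbuffer and check `glCheckFramebufferStatus` before rendering.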

But maybe that's not what you're after and I misunderstood your question.

Community
Christian Rau
  • So that's impossible, but how about making the fragment shader render fewer pixels? Basically, I have a lighting program and a shadows program because of the instruction set limitation. Using shadow maps like this, I have to render the scene twice (apart from rendering from the light's point of view); the second time I render the scene with alpha 0, and I change the alpha value where the part is shadowed. I wonder if my second rendering, the one with the shadows, can go easier on the fragment shader. – dragostis Aug 22 '12 at 10:19
  • @user1454653 What you can do instead of making a fragment completely transparent is to actually drop it using the `discard` keyword. This causes no fragment to be emitted. But the fragment shader will still be executed for the fragment (since you yourself decide to discard it), so this won't buy you anything in regard to performance; it's just conceptually more correct to remove the fragment instead of making it completely transparent (and you can disable blending, which might actually buy you some small profit, not sure). – Christian Rau Aug 22 '12 at 11:09
  • @user1454653 To actually keep OpenGL from generating fragments, you need to employ other techniques, like using the stencil buffer and relying on early-z hardware (if ES hardware has that), but this won't work in your case, since you don't know if a fragment can be omitted until you actually have the fragment and can perform tests on it in the fragment shader. – Christian Rau Aug 22 '12 at 11:12
  • @user1454653 But I'm not sure why you're doing two separate passes anyway. I cannot really believe the standard approach of doing a normal lighting shader and performing the shadow map test inside this lighting shader (scaling down the color when in shadow) won't work for you. OpenGL ES 2 cannot be that limited. But all this is rather a separate question (or more) on its own, and to your original question of rendering low-res content into a high-res viewport the answer is definitely "no, at least not automagically". – Christian Rau Aug 22 '12 at 11:15
  • By using 3 lights in the fragment shader, I have no more instruction space to deal with the shadows in the same shader, because I need quite a few instructions: the MVP matrix from the light source, an if block to determine whether the light is directional or not, a few instructions for smoothing... Anyway, my framework for Android works pretty well; with a few tens of thousands of polygons it runs at 60 FPS. The problem is this: I wanted to use MSAA, which drops the frame rate to 24 FPS when shadows are on, because the fragment shader executes twice (for the shadows) and thus has to render many more pixels. – dragostis Aug 22 '12 at 13:05
  • Now, if I had a way to render the scene at a lower resolution, as I said earlier, for the shadows, it wouldn't be such a burden for the MSAA. Right now I'm thinking of a way to automatically turn off MSAA if the device is not capable enough, but I also thought about doing motion AA (drawing every second frame a pixel off). – dragostis Aug 22 '12 at 13:05
  • @user1454653 You might also consider putting the lighting and shadow mapping into a single shader and instead use multiple passes with that shader for the different lights (putting only 1 or 2 lights into a single shader). This way the code fits together in a more conceptually clean way, and it would also scale better (if one day you add a 4th, 5th, or 17th light, you need additional passes anyway). And this way you might also profit better from larger-scale optimizations based on each light's influence radius (only rendering those objects actually reached by the light). – Christian Rau Aug 22 '12 at 13:31
  • You have a point. I've thought about it, but the problem is that rendering more than 2 lights, thus going 2 times through the whole thing, would be overkill for a phone... I think I'll split my code in two, one going this way, the other going the way you said. – dragostis Aug 22 '12 at 14:25
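The `discard` alternative mentioned in the comments above can be sketched as a GLSL ES fragment shader; the varyings, uniforms, and the shadow-map comparison below are hypothetical stand-ins for the actual shadow test:

```glsl
precision mediump float;

varying vec4 v_shadowCoord;   // assumed: clip-space coord from the light's MVP
uniform sampler2D u_shadowMap; // assumed: depth rendered from the light

void main() {
    // Hypothetical shadow-map depth comparison with a small bias.
    vec3 proj = v_shadowCoord.xyz / v_shadowCoord.w;
    float storedDepth = texture2D(u_shadowMap, proj.xy).r;
    bool inShadow = storedDepth < proj.z - 0.005;

    if (!inShadow) {
        // Instead of emitting an alpha-0 fragment, drop it entirely;
        // no fragment is written and blending can stay disabled.
        discard;
    }
    gl_FragColor = vec4(0.0, 0.0, 0.0, 0.5); // darkening shadow color
}
```

As noted in the comments, the shader still executes for discarded fragments, so this is a cleanliness win rather than a performance one.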