
I noticed that my animation suffers from artifacts that look like missed vblanks: there is no visible tearing, but sometimes the animation freezes for a split second and then visibly jumps ahead. I decided to measure the time between buffer swaps:

    #include <GL/freeglut.h>
    #include <chrono>
    #include <iostream>

    void draw_cb() {
      glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
      glutSwapBuffers();  // blocks until the swap happens (vsync)
      // Print the interval between consecutive swaps, in milliseconds.
      static auto last = std::chrono::high_resolution_clock::now();
      auto now = std::chrono::high_resolution_clock::now();
      std::cout
        << std::chrono::duration_cast<std::chrono::microseconds>(now - last).count()/1000.
        << '\n';
      last = now;
    }

To my surprise, I see times varying by as much as 1.5 milliseconds, even in this completely undemanding routine. The measured inter-frame times are in the vicinity of 16.6 ms, but quite consistently on the higher side:

[Histogram of measured inter-frame times: clustered around 16.6 ms, skewed toward higher values]

Nothing changes if I add a usleep of a few milliseconds in the draw callback (unless it exceeds the 16 ms frame time, obviously), confirming that it is not the drawing commands that cause the delayed response but the wait for vsync. Why, then, don't I see values very close to 16.666 ms? What other measures could I take to make the animation smooth? I am certain my computer is fast enough.
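For reference, the sleep experiment looked roughly like this (a sketch; the function name and the 5 ms value here are arbitrary, and any sleep well under one refresh period behaves the same):

    #include <unistd.h>  // usleep

    // Variant of draw_cb with an artificial delay before the swap: the
    // swap-to-swap interval is unchanged as long as the sleep stays well
    // under one refresh period, since glutSwapBuffers still waits for vsync.
    void draw_cb_sleepy() {
      glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
      usleep(5000);       // burn ~5 ms
      glutSwapBuffers();  // still blocks until the next vsync
      // ... same timing code as in draw_cb ...
    }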

Here are the relevant parts of how I set up freeglut:

    glutInitDisplayMode(GLUT_DOUBLE | GLUT_DEPTH | GLUT_RGBA);
    glutDisplayFunc(draw_cb);
    glutIdleFunc(glutPostRedisplay);
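For completeness, a minimal self-contained version of the whole test would look something like this (the window size and title are placeholders, not from my actual code):

    #include <GL/freeglut.h>
    #include <chrono>
    #include <iostream>

    // draw_cb as defined above

    int main(int argc, char** argv) {
      glutInit(&argc, argv);
      glutInitDisplayMode(GLUT_DOUBLE | GLUT_DEPTH | GLUT_RGBA);
      glutInitWindowSize(640, 480);     // placeholder size
      glutCreateWindow("vsync test");   // placeholder title
      glutDisplayFunc(draw_cb);
      glutIdleFunc(glutPostRedisplay);  // request a redraw as soon as possible
      glutMainLoop();
    }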

I also tried passing my draw callback to glutIdleFunc instead; it made no difference.

The environment is Linux + Gnome 3 on Wayland, with integrated graphics. The load average is well below 1. Looking closely, glxgears shows similar behaviour, reporting about 291 frames in 5 seconds at the default settings.

Update

With the great help of the commenters, I can now say this is due to the compositor. Running on X without the Wayland middle layer, I get a much sharper and well-centered distribution of frame times.

So the problem is specific to Wayland (or perhaps Gnome 3 on Wayland). The question remains unchanged, though: how do I get a smooth animation in this setting with minimal changes? I'm OK with letting go of freeglut if it's somehow not appropriate, but I'd appreciate something equally simple, and I would like to keep a decorated, managed window if possible. I updated the title.

  • Sounds like this may be some general issue with your system!? Do you have a chance to test this on another machine? – Michael Kenzel Apr 05 '19 at 14:07
  • Also, just another thought: the standard doesn't actually put any requirements concerning accuracy on `std::chrono::high_resolution_clock`, so you may wanna verify that the one provided by your implementation is actually accurate to within what you expect. Note: talking about accuracy here, not just resolution. Also, note that `std::chrono::high_resolution_clock` is not required to be stable. What time source are you using to base your animations on? – Michael Kenzel Apr 05 '19 at 14:18
  • @MichaelKenzel Not at the moment, but in a few hours I can try on a similar configuration but with X11, to see if Wayland is involved. – The Vee Apr 05 '19 at 14:18
  • Oh, sorry, I thought of that but didn't include it in the question. If I wait longer than the frame time, the results are very close, perhaps within 1% of the waiting time. Also, the reported resolution of my `high_resolution_clock` is one nanosecond; a quick check of the clock's advertised properties is sketched after the comments. (I can't guess about the accuracy, as you point out.) – The Vee Apr 05 '19 at 14:20
  • Can you temporarily disconnect from your network and run the test? There are operations within TCP/IP that can take time away from one core (but probably not more than one). Perhaps you might share what type of system you have (multi-core? plenty of RAM?) – 2785528 Apr 05 '19 at 14:25
  • @2785528 i5-4570, 4 cores, 12 GB RAM – The Vee Apr 05 '19 at 14:39
  • Might also try Weston instead of Gnome's compositor to rule out any compositor wonkiness/overhead. – genpfault Apr 05 '19 at 14:54
  • @genpfault I'm not sure how to do that. I tried choosing a Weston session when logging in, but then I do get a pure Weston environment without even X-on-Wayland, and GLUT needs an X server running. – The Vee Apr 05 '19 at 14:56
  • As far as GLUT replacements go look into [GLFW](https://www.glfw.org/), it's much more actively maintained. – genpfault Apr 05 '19 at 18:12
  • "So the problem is specific to Wayland (or perhaps Gnome 3 on Wayland)." No, actually it is not. Compositors just added another layer to the overall mess. The underlying problem is that taking some timestamp on the CPU as an indicative measure of when a frame was actually _presented_ to the user is just broken on a much more fundamental level. See for example Alen Ladavac's article [_The Elusive Frame Timing_](https://medium.com/@alen.ladavac/the-elusive-frame-timing-168f899aec92) (and [slides](https://twvideo01.ubm-us.net/o1/vault/gdc2018/presentations/Ladavac_Alen_ElusiveFrameTiming.pdf)). – derhass Apr 06 '19 at 17:09
