
I'm struggling to make a timer in C that counts minutes and seconds. I'm trying to test it by printing the time to the console, but it doesn't seem to show anything. Does anything look wrong in my code?

#include <stdio.h>
#include <stdlib.h>
#include <windows.h>

#define TRUE 1

int main( void )
{

    int min = 0;
    int sec = 0;

    while (TRUE)
    {
        sec++;
        Sleep(1000);
        printf("%2d:%2d", min, sec);
        if (sec == 59)
        {
            min++;
            sec = 0;
        }
    }
    return 0;
}
Lee Taylor
Billbert
    There must be hundreds of duplicate posts on SO related to line buffering, but I can't seem to find a good one. It's possible that so many of the questions have inappropriate titles, as this one does. The problem here is your program is not showing any output, yet the title is "how to make minutes and seconds timer". This is not useful in any search. – paddy Aug 21 '19 at 01:15
  • @paddy. That's because the OP didn't have any inkling of line buffering... – Lee Taylor Aug 21 '19 at 01:18
  • I only mean to point out that the issue here is that a program produces no output. That is the actual symptom, which would make a much better title. – paddy Aug 21 '19 at 01:22
  • Possible duplicate of [Why does printf not flush after the call unless a newline is in the format string?](https://stackoverflow.com/questions/1716296/why-does-printf-not-flush-after-the-call-unless-a-newline-is-in-the-format-strin) – phuclv Aug 21 '19 at 01:45
  • @phuclv, the question is not about trying to get the `printf()` output unbuffered, but about how to get some kind of clock (or chronometer) working. As the OP asks at the end, the objective is to know what is wrong in this code, so it cannot be a duplicate of the question posted. – Luis Colorado Aug 22 '19 at 13:01

4 Answers


For performance reasons, printf's output is buffered. That is, it won't be displayed until the buffer is full. The first thing to try is adding a newline character to the end:

    printf("%2d:%2d\n", min, sec);

If that doesn't work, you can force the output buffer to flush by calling `fflush(stdout);`.
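
For reference, here is a minimal sketch of the question's loop with both suggestions applied (an illustration only; it keeps the question's variables and leaves the rest of the program as posted):

    while (TRUE)
    {
        sec++;
        Sleep(1000);
        printf("%2d:%2d\n", min, sec);   /* the newline flushes line-buffered output */
        fflush(stdout);                  /* or force a flush explicitly */
        if (sec == 59)
        {
            min++;
            sec = 0;
        }
    }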

eduffy
  • What does it mean that it's buffered? – Billbert Aug 21 '19 at 01:13
  • It keeps the contents of what you want to display in memory until it's ready to be displayed. Writing to the display is expensive, so the I/O routines try to be smart and wait until the optimal time to write the output. Usually, when you begin a new line is a good time. – eduffy Aug 21 '19 at 01:16

I would just check the system time rather than keeping track of seconds/minutes. This is because your Sleep may not be exactly 1000 ms, so over time your counter will drift and become inaccurate.

Since you're using Windows, here's a slightly modified version of your code:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <windows.h>

int main()
{
    for (;;)
    {
        time_t now = time(NULL);              /* current calendar time */
        struct tm* ptm = localtime(&now);     /* broken down into local time fields */
        printf("%02d:%02d\n", ptm->tm_min, ptm->tm_sec);
        Sleep(1000);                          /* wait roughly one second */
    }

    return 0;
}

I hope that helps.

Lloyd Macrohon

As you ask, there are several things wrong in your code; I'll point them out as I read through it. See below.

#include <stdio.h>
#include <stdlib.h>
#include <windows.h>

#define TRUE 1

int main( void )
{

    int min = 0;
    int sec = 0;

    while (TRUE)
    {
        sec++;
        Sleep(1000);
  1. There's a problem with Sleep(). You ask the kernel to pause your program for 1000 milliseconds, which means the kernel will wake your program up one second after you ask it to pause. That doesn't take into account the fact that your process is then only put back on the scheduler's queue and will, in general, not get the CPU immediately, but after some delay. And even if you did get the CPU immediately, your code needs some time to execute, making the total loop longer than 1000 ms, so your clock will run slow. It is better to get the system time and show it on the screen, or to take a timestamp when you start and then show the difference between that start time and the timestamp you get at each display. The system time is maintained by an interrupt that happens at regular intervals (driven by a precise clock oscillator), so you get a good clock that way instead of your slow one (how slow it is will depend on things like how many other processes are running on the system).
            printf("%2d:%2d", min, sec);
  2. This has already been stated in other answers, but let me explain how buffering works so you can understand it. A buffer is a block of memory (commonly 512 bytes or more) that printf() fills, so that write() is only called once a complete buffer of data has accumulated. This allows stdio to save system calls on the actual writing and so be more efficient when transferring large amounts of data.

    For interactive applications this doesn't work, as no output would be shown unless you force the buffers out with fflush() before any input is done. So when the output is a tty device (something stdio can detect from the file descriptor associated with standard output), it switches to line buffering, which means printf() will flush the buffer when: 1) it fills up, or 2) a newline \n character appears in the output. So one way to solve your problem is to put a \n at the end of the string, or to call fflush(stdout); after calling printf().

            if (sec == 59)
  3. Since you increment your seconds before comparing, you have to check against 60 and not 59: the comparison converts the counter back to 0 only after the seconds have already been incremented.
            {
                min++;
  4. You have to do the same with the minutes when they reach 60. As you have not included that code, I assume you are not keeping an hours counter. (A sketch of the loop with the fixes from points 2-4 applied follows this walk-through.)
                sec = 0;
            }
        }
        return 0;
    }
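
Here is a minimal sketch of the original loop with just the fixes from points 2-4 applied (my illustration; it still runs slow for the reason given in point 1, which the timestamp-based version below avoids). Note that the rollover check is done before printing, so a value of 60 is never displayed:

#include <stdio.h>
#include <windows.h>

#define TRUE 1

int main(void)
{
    int min = 0;
    int sec = 0;

    while (TRUE)
    {
        Sleep(1000);
        sec++;
        if (sec == 60)        /* compare after incrementing, so check 60, not 59 */
        {
            min++;
            sec = 0;
        }
        if (min == 60)        /* roll the minutes over as well (no hours counter) */
            min = 0;
        printf("%2d:%2d\n", min, sec);   /* the \n flushes line-buffered output */
        fflush(stdout);                  /* or flush explicitly */
    }
    return 0;
}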

A complete solution showing what I mean could be:

#include <stdlib.h>
#include <stdio.h>
#include <time.h>
#include <windows.h>

int main()
{
    time_t start_timestamp;
    time_t display_timestamp;

    start_timestamp = time(NULL);

    for(;;) { /* this is the same as while(TRUE) */
            display_timestamp = time(NULL);
            int elapsed = (int)difftime(display_timestamp, start_timestamp);
            int min = elapsed / 60;
            int sec = elapsed % 60;
            /* the \r in the next printf prints a carriage return with no
             * newline, so the time is redrawn on the same line, updating
             * on top of itself */
            printf("\r%02d:%02d", min, sec); fflush(stdout);
            Sleep(100); /* as you print the time elapsed, you don't
                         * mind if you shorten the time between 
                         * displays. */
    }
}
Luis Colorado

Does anything look wrong in my code?

Several things look wrong. The first is stdout line buffering - see eduffy's answer for that.

The second problem is that you're doing sec++; sleep(1000);, which means that sec will be incremented once every 1000 seconds or more.

The third problem is that if(sec == 59) is wrong and needs to be if(sec == 60). Otherwise you'll have 59 seconds per minute.

The fourth problem is that sleep(1) will sleep for at least 1 second, but may sleep for 2 seconds, or 10 seconds, or 1234 seconds. To guard against this you want something more like this:

expiry = now() + delay;
while (true) {
    sleep(expiry - now());
    expiry += delay;
}

The basic idea is that if one sleep takes too long, then the next sleep will sleep less, and it'll end up being "correct on average".

The last problem is that sleep() doesn't really have enough precision. For 1-second delays you want to be able to sleep for fractions of a second (e.g. maybe 9/10ths of a second). Depending on which compiler for which OS, there's probably something better you can use (e.g. maybe nanosleep()). Sadly, there may be nothing that's actually good (e.g. some sort of "nanosleep_until(expiry_time)" that prevents the jitter caused by IRQs and/or task switches occurring after you determine now but before you call something like "nanosleep()").
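
As a rough, concrete sketch of the expiry-based loop above on Windows (my illustration, not code from the answer; it assumes the millisecond Sleep() from <windows.h> and GetTickCount64() as the time source, and it guards the unsigned subtraction discussed in the comments below):

#include <stdio.h>
#include <windows.h>

int main(void)
{
    const ULONGLONG delay_ms = 1000;                    /* desired tick interval */
    ULONGLONG expiry = GetTickCount64() + delay_ms;     /* first deadline */
    int elapsed = 0;                                    /* whole seconds elapsed */

    for (;;)
    {
        ULONGLONG now = GetTickCount64();
        if (expiry > now)                     /* guard: Sleep() takes an unsigned DWORD */
            Sleep((DWORD)(expiry - now));
        expiry += delay_ms;                   /* next deadline; a late wakeup shortens the next sleep */

        elapsed++;
        printf("\r%02d:%02d", elapsed / 60, elapsed % 60);
        fflush(stdout);
    }
    return 0;
}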

Brendan
  • Hmmmm I'm pretty sure that `Sleep(1000)` sleeps for 1000ms = 1s, *not* 1000s. – Marco Bonelli Aug 21 '19 at 02:36
  • @MarcoBonelli: It's easy to find out (there's plenty of descriptions of the function online, like this one: http://man7.org/linux/man-pages/man3/sleep.3.html ) – Brendan Aug 21 '19 at 02:37
  • Dude, it's Windows API `Sleep()` (also notice the uppercase `S`). See [here](https://msdn.microsoft.com/en-us/library/windows/desktop/ms686298(v=vs.85).aspx). 90% of your answer is assuming Linux and therefore wrong. Your suggested code snippet is also wrong. – Marco Bonelli Aug 21 '19 at 02:38
  • @MarcoBonelli: Ah - you're right - I missed the fifth problem (using a non-standard, non-portable/vendor specific function that has nothing to do with C). Note that the 1st, 3rd, 4th and "part but not all" of the last problem are still relevant even with "Microsoft's vendor lock-in malware" function. It's not "90% wrong because I assumed Linux", it's "20% wrong because I assumed C". – Brendan Aug 21 '19 at 02:44
  • W-what? 1. Windows has no `nanosleep()`; 2. The code snippet is wrong, you should not do `expiry += delay`, but `expiry = now() + delay` each time; 3. `if(sec == 59` is not wrong, it's the fact that `sec++` should be in the `else` branch of that `if`... doing `if( sec==60` is still wrong since it would display `XX:60`. 4. What is even `now()`..? It's not that simple, `GetSystemTime()` should be used. This answer does not make any sense. – Marco Bonelli Aug 21 '19 at 02:52
  • @MarcoBonelli: 1. When I said "depending on which compiler for which OS, there's probably something better" what I meant is that depending on which compiler for which OS there's probably something better. 2. The code snippet is correct, and it's deliberately trying to avoid jitter and drift caused by reasons given. 3. In the code provided `if(sec == 59)` is wrong, there are 2 ways to fix it (my way and your way) but that doesn't mean it wasn't wrong and didn't need fixing. 4. `now()` is abstract pseudo-code with obvious but context dependent meaning (possibly `clock()` or `time()` or..) – Brendan Aug 21 '19 at 03:05
  • **1**. Ok.. granted, I would still have corrected that with the appropriate function though. **2**. It's not correct. Doing `expiry += delay;` completely defies the purpose of using `now()` to be more "accurate", since you are trusting that `sleep()` just slept for the appropriate amount of time, and you just add the expected `delay` to `expiry` when you should really re-calculate it doing `expiry = now() + delay` again. **3**. Your fix is wrong, as I said, it prints `0:60` and then `1:1`; **4**. Granted that too, but we're talking to a beginner here, same reasoning as point 1 applies IMHO. – Marco Bonelli Aug 21 '19 at 03:14
  • 1
    @MarcoBonelli: 2. Um, dude. Stop and think about what happens if the OS decides to do a task switch (and run some other program for who knows how long) in the middle of the `printf()`, and/or what happens if the `sleep()` (or `Sleep()` or `nanosleep()` or `usleep()`) sleeps for longer than requested. These things will cause drift and will make the program wrong (e.g. the program saying that 10 minutes passed after 12 minutes have passed). My example is necessary to fix/avoid that drift. – Brendan Aug 21 '19 at 03:26
  • Okay, I see the point, you're right, sorry about that. Would probably need an overflow check then since `sleep` takes an unsigned value so that `expiry - now()` does not end up doing something like `sleep(999999)`, right? – Marco Bonelli Aug 21 '19 at 03:30
  • @MarcoBonelli: Yes, if the delay is unsigned (which is sadly common across all the variations) there's a risk of overflow. I mostly bury it all (including compiler/OS specific `#ifdef` to select the least awful functions for portability) under my own wrappers so I can end up with a portable custom time format (partly because C ends up being a horrible mixture of different time formats when you start trying to mix `time()` with `select` with `pthread_cond_timedwait()` and... where some has nasty leap second problems and some has "year 2038" bugs and most is non-portable). – Brendan Aug 21 '19 at 03:47
  • @MarcoBonelli, `Sleep()` is a Windows function, misnamed after an ancient Unix function, `sleep()`, which sleeps a number of seconds. While it is true that `Sleep()` (the Windows version) sleeps a number of milliseconds, it can fail to do so, depending on the Windows (or MS-DOS, as it is also defined in Microsoft compilers for MS-DOS) version you call, mainly because of the kernel time tick spacing those systems have. `Sleep()` (the Windows version) is not in any standard, only in the Microsoft documentation. Nowhere else. – Luis Colorado Aug 22 '19 at 12:56