Hello there! Here comes the second installment of my power programming article. This time around, I'll be teaching you the many ancient and arcane ways of implementing timing in your programs!
What the hell is timing? Let me answer by giving you some applications. For instance, you can insert precise delays while running cut scenes. You can display your program's frame rate to see if it's competing on a level with snail races. You can also pull off a synchronization technique popularly used in low-level music programming.
QBasic commands that do time
The ON TIMER statement and its sister statements, TIMER ON, TIMER OFF, and TIMER STOP, certainly make timing very easy for newbies. But if you're advanced enough to actually bother with timing, then you should leave these statements alone. They slow down your program, and when you compile it, they bloat up the size of the executable.
Another command that QBasic has is the SLEEP statement. SLEEP is essentially a delaying statement, but it delays only in 1-second increments, and when a person presses a key, the delay is cut short. Also, the keystroke that ends a SLEEP isn't flushed from the keyboard buffer. So if you use SLEEP as a "Press any key to continue" command, you'll eventually get keyboard beeps, especially if you don't use INPUT$ or INKEY$ to clear out the keystrokes.
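For example, here's one way to use SLEEP as a "press any key" prompt without leaving the keystroke behind (a small sketch):

```basic
PRINT "Press any key to continue..."
SLEEP                          ' waits until a key is pressed
DO WHILE INKEY$ <> "": LOOP    ' flush whatever is left in the keyboard buffer
```

The flushing loop also swallows the two-byte codes that extended keys (arrows, function keys) leave in the buffer.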
Don't bother anymore with TIME$. Its only useful purpose is to display and change the time. In order to do calculations with time using TIME$, you still have to mess around with string functions like MID$ and VAL (which can be very slow). Besides, you can easily compute the hour, minute, and second using TIMER.
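Since TIMER returns the number of seconds elapsed since midnight, the current hour, minute, and second fall out of simple integer arithmetic (a quick sketch):

```basic
Secs& = INT(TIMER)                 ' whole seconds since midnight
Hour% = Secs& \ 3600
Minute% = (Secs& MOD 3600) \ 60
Second% = Secs& MOD 60
PRINT USING "##:##:##"; Hour%; Minute%; Second%
```

No string slicing, no VAL, and integer division is fast.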
My verdict? TIMER is possibly the best command to use for all your timing needs. One major problem with TIMER, though, is that it only has a frequency of 18.2 Hz (ticks per second). This means its resolution is only around five-hundredths of a second; anything shorter than that can't be measured reliably. This may seriously affect your timing, especially if you don't know about it.
Basic timing principle
The principle behind timing in QBasic is simple. You just record the time when the thing you want to time started, and record the time when it ended. The time elapsed is then the difference between the two recorded times.
Start! = TIMER
'(...here goes the thing you want to time...)
Finish! = TIMER
PRINT "Elapsed time:"; Finish! - Start!; "seconds"
It's that simple. This basic principle is used in almost all applications that involve the TIMER command. You can use this principle to compare similar routines and see which is faster. This is called benchmarking.
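As a quick illustration of benchmarking, here's a sketch that races integer addition against single-precision addition (the loop count is arbitrary):

```basic
' Time 100,000 integer additions
Start! = TIMER
FOR i& = 1 TO 100000
  a% = a% + 1: a% = a% - 1     ' keep the value in range
NEXT
PRINT "Integer math:"; TIMER - Start!; "seconds"

' Time 100,000 single-precision additions
Start! = TIMER
FOR i& = 1 TO 100000
  b! = b! + 1
NEXT
PRINT "Single-precision math:"; TIMER - Start!; "seconds"
```

Repeating the operation many times is important: it stretches the measurement well past TIMER's coarse resolution.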
Degree of error
I'll illustrate this point with an extreme example. Let's suppose that your digital watch only tells you the time in minutes (no seconds). And you wish to see how long a TV commercial lasted. When the commercial starts, you record the time on your watch: It's 8:25. When the commercial ends, you find out that the time is 8:27. So the commercial lasted 2 minutes, right? Wrong. The commercial lasted for 2 minutes, plus or minus 1 minute. Why plus or minus a whole minute? Well, the commercial could have actually started at 8:25:59 and ended at 8:27:00, in which case, the commercial actually lasted for 1 minute and 1 second. You could go the other way around. If the commercial started at 8:25:00 and ended at 8:27:59, then the commercial lasted for 2 minutes and 59 seconds—almost three minutes! So with your weird watch, you could say that the commercial lasted from around 1 minute to 3 minutes. See my point?
Your watch has a frequency of 1 tick per minute, or a resolution of one minute, while the TIMER function has a frequency of around 18.2 Hz (ticks per second), or a resolution of around five-hundredths of a second. You might say that since the frequency of the TIMER function is much higher, your timing will be precise. Not quite. If the event you're timing lasts on the order of milliseconds, then TIMER isn't precise enough for it. Just remember that the time results you obtain will have a possible error of around 0.1 seconds.
Midnight Rollover Issues
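TIMER returns the number of seconds elapsed since midnight, so at midnight it wraps back to zero. If the event you're timing straddles midnight, a plain subtraction gives a negative result. A rollover-safe version of the elapsed-time computation looks like this (a sketch):

```basic
Elapsed! = TIMER - Start!
IF Elapsed! < 0 THEN Elapsed! = Elapsed! + 86400  ' crossed midnight: add a day's worth of seconds
```

There are 86,400 seconds in a day, so adding that constant restores the true difference.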
Using TIMER to delay
SUB Delay (Secs!)
  ' Busy-waits until Secs! seconds have passed
  Start! = TIMER
  DO
  LOOP UNTIL TIMER - Start! >= Secs!
END SUB
You should avoid using an empty FOR-NEXT loop to delay the program, since the length of the delay created by the loop varies with the speed of the computer. What would be a one-second delay on a 286 might not be a delay at all on a high-end Pentium. I really wouldn't recommend it, even if you perform a speed test at the start of the program to find out the computer's speed.
A frame is essentially a single loop of code (usually one that includes drawing to the screen). By computing the frame rate, you are, in effect, computing how many loops the computer can perform in a given amount of time.
The first method of getting the FPS is very simple. You just time your whole program (while counting the number of frames executed), then divide the number of frames by the elapsed time. That's your FPS.
Start! = TIMER
DO
  '(...draw a frame here...)
  Frames& = Frames& + 1
LOOP UNTIL INKEY$ <> ""
Finish! = TIMER
PRINT "Average FPS:"; Frames& / (Finish! - Start!)
This method can be very accurate, especially if you run the program for a while. The problem with this method is that it only prints the FPS when the program is through. You have no way of knowing where your program bogs down.
Obviously the answer to the first method's inadequacies is to display the frame rate while the program is running. One method would be to time each frame and derive the frame rate from that. This isn't very nice, since you'd only get a maximum of 18.2 FPS for slow programs, while faster programs would give a "Division by zero" error. Why is that? Recall that TIMER has a resolution of around five-hundredths of a second, which caps the reading at 18.2 FPS. If a frame finishes in less than one tick, its elapsed time comes out as 0 seconds, and dividing by that gives the error.
To get around this, instead of timing every frame, you can time a specified number of frames and compute the frame rate from that. It would be vastly better, however, to display the FPS on a per-second basis. That is, instead of timing frames, you just count the frames that have been performed every second.
TimeNow! = TIMER
DO
  '(...draw a frame here...)
  Frames% = Frames% + 1
  IF TIMER - TimeNow! >= 1 THEN       ' one second has passed
    LOCATE 1, 1: PRINT "FPS:"; Frames%
    Frames% = 0                       ' restart the count
    TimeNow! = TIMER
  END IF
LOOP UNTIL INKEY$ <> ""
I hope that you can follow the logic of the program code above and see how it manages to display the frame rate every second.
An alternative to TIMER
Fortunately, I have found a workaround. I have devised a routine called CLOCK which can have a frequency greater than 18.2 Hz. In my game, The Labyrinth, I use a variant of the routine to give me a frequency of 291.2 Hz. This has a resolution of 3 milliseconds (very adequate for my purposes). This routine does not mess up the system time and is very fast, since it uses integer math. The routine, however, does not return the number of seconds that have elapsed since midnight, but instead returns the number of "ticks" that have elapsed since midnight. The frequency of the ticks corresponds to the frequency of the routine. (In my game, the value increases by 291 every second.) Here's the routine (which has a frequency of around 4660 Hz):
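A sketch of such a routine, using the usual approach of latching the PIT countdown and combining it with the BIOS tick count at 0040:006C — this is my own reconstruction for illustration, not necessarily the author's exact listing:

```basic
FUNCTION Clock&
  ' Latch channel 0 of the PIT so its countdown can be read safely
  OUT &H43, 0
  LSB% = INP(&H40)              ' low byte of the 16-bit countdown
  MSB% = INP(&H40)              ' high byte of the 16-bit countdown
  ' The BIOS tick count (incremented 18.2 times per second) lives at 0040:006C
  DEF SEG = &H40
  Ticks& = PEEK(&H6C) + 256& * PEEK(&H6D) + 65536 * PEEK(&H6E)
  DEF SEG
  ' The PIT counts DOWN, so invert the high byte to get 256 rising
  ' subdivisions per tick: 18.2065 * 256 = about 4660.9 Hz
  Clock& = Ticks& * 256 + (255 - MSB%)
END FUNCTION
```

Using only the high byte of the countdown is what gives the 256 subdivisions per tick; reading the low byte as well would push the frequency to over a megahertz, at the cost of messier arithmetic.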
I can't explain in detail how the routine works. Suffice it to say that it "peeks" at the PIT chip and system timer status.
To verify that the routine works, try the program below with the function above. The program divides the CLOCK value by its frequency, 4660.859, to see that it indeed matches the value provided by TIMER. The actual difference varies from 1 to 16 milliseconds; that's because CLOCK updates faster than TIMER. Notice that the difference doesn't exceed the resolution of TIMER, which is around 55 milliseconds.
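A verification program along these lines might look like this (a sketch, assuming a Clock& function as described above is in the same module):

```basic
' Compare CLOCK (converted to seconds) against TIMER until a key is pressed
DO
  Secs! = Clock& / 4660.859     ' convert ticks to seconds since midnight
  Diff! = ABS(TIMER - Secs!)
  LOCATE 1, 1
  PRINT USING "CLOCK: ######.### TIMER: ######.### Diff: #.###"; Secs!; TIMER; Diff!
LOOP UNTIL INKEY$ <> ""
```

The displayed difference should stay small and jitter as the two clocks update at different rates.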
It would be a good idea to convert the function to assembly and compile it into a library, or turn it into a string to be executed by CALL ABSOLUTE.
It's the end of the article as we know it =}