Re: need your advice on clock()
clock() does not return unsigned longs, so the rollover point is fixed at LONG_MAX and can't be changed. It's the RETURN TYPE of the clock() function that determines the rollover, not the capacity of the variable I store it in.
And in this function the type "double" was used only to provide greater floating-point accuracy during the computation; that extra precision is not needed in the return type, since converting the result to float does not truncate any data at all. The precision was only needed during the computations.
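For reference, here is a minimal sketch of what such a clock()-based wrapper could look like (the actual ProcessTime() implementation isn't quoted in this thread, so the body below is an assumption; only the compute-in-double, return-float idea comes from the discussion above):
Code:
#include <time.h>

/* Hypothetical sketch of a clock()-based ProcessTime(); not the thread's
   actual code. The division is done in double for accuracy, and the result
   is returned as float because the narrowing loses nothing meaningful. */
float ProcessTime(void)
{
    clock_t ticks = clock();  /* ticks elapsed (Windows CRT: since process start) */
    double seconds = (double) ticks / (double) CLOCKS_PER_SEC;
    return (float) seconds;
}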
Re: need your advice on clock()
gotcha. it's too bad clock() doesn't take advantage of the full unsigned range of longs.
Re: need your advice on clock()
Quote:
In any case, I think the function we now have (once float is replaced with double and all optimizations are put in place) is about as good as it can get. *edit* I guess we should have the function check for the -1 result in case clock() ever returns it.
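A minimal illustration of that -1 check might look like this (just a sketch, relying on the standard behaviour that clock() returns (clock_t)-1 when the processor time is unavailable; the error handling shown is an example, not the thread's code):
Code:
#include <stdio.h>
#include <time.h>

/* Example of guarding against the -1 return value before converting
   the tick count to seconds. */
int main(void)
{
    clock_t ticks = clock();
    if (ticks == (clock_t) -1) {
        fprintf(stderr, "clock() failed: processor time not available\n");
        return 1;
    }
    printf("elapsed: %f s\n", (double) ticks / (double) CLOCKS_PER_SEC);
    return 0;
}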
Re: need your advice on clock()
I'm having concerns about the accuracy of clock(). It may not have enough resolution to do adequate timings in some cases even for a bot. I'll report what I find here once I'm done testing.
BTW, to test the ProcessTime function you don't have to wait 28 days for a rollover. Instead, create a function called clock_test() and temporarily replace clock() with it, as shown here:
Code:
long clock_test(void)
Code:
start = ProcessTime(); // clock_test() = LONG_MAX-1
On my computer, tests show that clock() increments in units of 10 ticks. This means the actual resolution is 0.01 second instead of 0.001 second. You can still time events which take more than 1/100th of a second, but there are many timing applications where you'll need more resolution. For example, you cannot use clock() to accurately time the duration of frames (100 FPS = 0.01 seconds per frame, which is right at the resolution limit).
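Something like the following can be used to reproduce that kind of resolution test (a sketch, not the code actually used for the measurements above): spin until clock() changes and print the size of the jump.
Code:
#include <stdio.h>
#include <time.h>

/* Spin until clock() changes value and report the size of each jump.
   On a system where clock() advances in 10-tick steps this prints 10. */
int main(void)
{
    int i;
    for (i = 0; i < 5; i++) {
        clock_t start = clock();
        clock_t now;
        do {
            now = clock();
        } while (now == start);
        printf("clock() jumped by %ld ticks (CLOCKS_PER_SEC = %ld)\n",
               (long) (now - start), (long) CLOCKS_PER_SEC);
    }
    return 0;
}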
Re: need your advice on clock()
Have you already tried using this rdtsc stuff instead? OK, you'll get the absolute time and not the time of your process, but maybe one could do some filtering...
Why do you need a better timebase than the one HL provides? Do the bots run better if you calculate your own time? I always thought that if the bot and the engine are based on the same time, that shouldn't cause problems... but it looks like I'm wrong here :) HL is getting stranger and stranger... but hey, that was always the case, right?
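For anyone curious, reading the time-stamp counter boils down to something like this (a sketch for x86 with GCC-style inline assembly; it is not from the thread, and converting the raw cycle count to seconds still requires knowing the CPU frequency):
Code:
#include <stdio.h>

/* Read the CPU's time-stamp counter (x86, GCC-style inline asm).
   RDTSC counts raw cycles since reset and keeps running while other
   processes use the CPU, so it measures real time, not process time. */
static unsigned long long read_tsc(void)
{
    unsigned int lo, hi;
    __asm__ volatile ("rdtsc" : "=a" (lo), "=d" (hi));
    return ((unsigned long long) hi << 32) | lo;
}

int main(void)
{
    unsigned long long start = read_tsc();
    /* ... work to be timed ... */
    unsigned long long end = read_tsc();
    printf("elapsed cycles: %llu\n", end - start);
    return 0;
}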
Re: need your advice on clock()
Quote:
I have not used rdtsc because I am looking for a generic method of timing events. Perhaps I'm wrong, but the rdtsc instruction appears to be CPU-specific and not generic, from what I read here: http://www.scl.ameslab.gov/Projects/Rabbit/menu4.html I suppose you could write code to take every processor into account, but that's a lot of effort, and testing it requires access to each CPU type. Note that I'm not actually interested in process time, but in real time, so the performance counter is actually better than the process clock - for me anyway.
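For completeness, the Windows performance counter mentioned above is used roughly like this (a sketch assuming a Win32 build; QueryPerformanceFrequency() reports the counter's rate so tick differences can be converted to seconds):
Code:
#include <stdio.h>
#include <windows.h>

/* Time an operation with the high-resolution performance counter. */
int main(void)
{
    LARGE_INTEGER freq, start, end;

    if (!QueryPerformanceFrequency(&freq)) {
        fprintf(stderr, "no high-resolution performance counter available\n");
        return 1;
    }

    QueryPerformanceCounter(&start);
    Sleep(100);  /* stand-in for the work being timed */
    QueryPerformanceCounter(&end);

    printf("elapsed: %f s\n",
           (double) (end.QuadPart - start.QuadPart) / (double) freq.QuadPart);
    return 0;
}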
Re: need your advice on clock()
During my port of mEAn to linux g++ 3.x I discovered that clock() under linux works differently than under windows and returns the processor time (in ticks) used by the calling process. Under windows, clock() instead returns the time (in ticks) that has elapsed since the calling process was invoked.
So, under linux, clock() will tell you how much cpu time the process has used, but under windows clock() will tell you how much time has gone by since the process was first executed. These are two very different numbers, and under linux clock() is no good for replacing pfnTime().
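The difference is easy to see with a small test like this (a sketch, assuming a POSIX build for sleep(); on Linux, clock() barely moves while the process sleeps because it measures CPU time, whereas the Windows CRT version would advance by roughly one second because it measures elapsed time):
Code:
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Sleep for one second of wall time and see how far clock() advances.
   Linux: near zero (CPU time). Windows CRT: roughly one second (elapsed time). */
int main(void)
{
    clock_t before = clock();
    sleep(1);
    clock_t after = clock();

    printf("clock() advanced by %f s during a 1 s sleep\n",
           (double) (after - before) / (double) CLOCKS_PER_SEC);
    return 0;
}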
Re: need your advice on clock()
clock() under windows versions older than ME is also only accurate to approx. 100ms I think, 'cause that's the timebase it's working with. xp / 2k / nt use 10ms, though they are still based on the timer on your mainboard (the system timer runs at 1193182 Hz) and not on the cpu clock.