Why does time go so slowly? Because you are calling the real time clock

So woke up at 6:20, which was fine, got Dillon to daycare. I got a text a little later to say there was blood in his poo. Considering what he eats, I'm surprised there wasn't a television in his poo.

Now work was quite interesting. We have timers, lots of timers, and boy do they show up high in profiles, so I decided to take a look at them. For every timer we are basically calling a Linux-style real time clock (RTC). This does indeed store very accurate time, counted from the Linux epoch. But every call goes via the kernel and then reads the clock, which is a hardware chip outside the CPU, so there's loads of bus activity and loads of hanging about. So instead I changed it to the CPU tick clock. This is a handy hardware register that stores the current cycle number as a 64-bit unsigned int. There's also a handy call that returns the clocks per second, or hertz as it were. Simple: you just divide the ticks by the clock rate and that gives you the time in seconds. But what if you want the time in microseconds? Well, then you have to multiply the current tick count by one million before doing the divide. Ah, but then you run into a bit of a problem: if you start shifting the ticks up like that, you quickly run out of room in your unsigned int. So what you do is sacrifice a little accuracy: you only multiply the ticks by a thousand, but you divide the clock rate by a thousand as well. This still gives you a pretty accurate microsecond clock, and it won't wrap around for almost four years.

Took Jamie to the bank. Picked up Dillon. Went to the gym and did Pump. Finally got all the latest video footage imported and synced up. So now I just have to edit the whole lot.
