When I used to program embedded systems and early 8/16-bit PCs (6502, 68K, 8086), I had a pretty good handle on exactly how long (in nanoseconds or microseconds) each instruction took to execute. Depending on the family, one (or four) cycles equated to one "memory fetch", and without caches to worry about, you could guess timings based on the number of memory accesses involved.

But with modern CPUs, I'm confused. I know they're a lot faster, but I also know that the headline gigahertz speed isn't helpful without knowing how many cycles of that clock are needed for each instruction.

So, can anyone provide some timings for two sample instructions, on (let's say) a 2 GHz Core 2 Duo? Best and worst cases (assuming nothing in cache / everything in cache) would be useful.

Instruction #1: Add one 32-bit register to a second.
Instruction #2: Move a 32-bit value from register to memory.

Edit: The reason I ask is to try to develop a "rule of thumb" that would let me look at simple code and roughly gauge the time taken to the nearest order of magnitude.

Edit #2: Lots of answers with interesting points, but nobody (yet) has put down a figure measured in time. I appreciate there are "complications" to the question, but c'mon: if we can estimate the number of piano tuners in NYC, we should be able to estimate code runtimes. Take the following (dumb) code: int32 sum = frigged_value() How can we estimate how long it will take to run?