Wednesday, July 05, 2006

Hyperthreading

This morning I was thinking about hyperthreading.

I generally think of process threads as more than one program counter executing in a shared address space. If you have two CPUs, for example, then two threads can execute at the same time, one on each CPU.
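
Here is a minimal pthreads sketch of the idea (compile with -pthread; the variable and thread names are just made up for illustration): two threads, each with its own program counter and stack, reading and writing the same global variable in one shared address space.

    #include <pthread.h>
    #include <stdio.h>

    static int shared_counter = 0;               /* one copy, visible to both threads */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        const char *name = arg;                  /* each thread has its own stack and PC */
        int i;
        for (i = 0; i < 3; i++) {
            pthread_mutex_lock(&lock);
            shared_counter++;                    /* same memory location for both threads */
            printf("%s sees counter = %d\n", name, shared_counter);
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, "thread A");
        pthread_create(&t2, NULL, worker, "thread B");
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("final counter = %d\n", shared_counter);   /* 6: both threads updated it */
        return 0;
    }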

With only one CPU, the operating system typically runs instructions from one thread for a while, then switches to the other thread at some appropriate point.
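
One way to see that in action, at least on Linux, is to pin the process to a single CPU and run two compute-bound threads; the OS has to interleave them, so they take roughly twice as long as they would on two CPUs. A rough sketch (uses the GNU sched_setaffinity extension; error checking omitted, CPU number and loop count arbitrary):

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <time.h>

    static void *spin(void *arg)
    {
        /* Busy work standing in for a compute-bound thread. */
        volatile unsigned long x = 0;
        unsigned long i;
        (void)arg;
        for (i = 0; i < 500000000UL; i++)
            x += i;
        return NULL;
    }

    int main(void)
    {
        cpu_set_t set;
        pthread_t a, b;
        time_t start;

        CPU_ZERO(&set);
        CPU_SET(0, &set);                         /* allow CPU 0 only */
        sched_setaffinity(0, sizeof(set), &set);  /* 0 = this process; new threads inherit it */

        start = time(NULL);
        pthread_create(&a, NULL, spin, NULL);
        pthread_create(&b, NULL, spin, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("elapsed: ~%ld s\n", (long)(time(NULL) - start));
        return 0;
    }

Comment out the affinity calls on a machine with two CPUs (or two logical CPUs) and the elapsed time should drop noticeably, since the kernel can then run the two threads in parallel.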

Hyperthreading is a similar idea handled inside the CPU. Instructions from two streams are decoded into micro-operations (uops). Where possible, uops from the two streams are run “simultaneously” on different execution units of the processor. When this simultaneous scheduling works, you get something like the performance of two CPUs, minus the overhead of scheduling and managing two logical CPUs. When one stream's uops have to wait on the other's, you are back down to single-CPU performance, still minus the management overhead, which is the penalty of hyperthreading.
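
You can at least see the "two logical CPUs" half of this from software: the OS reports two processors for one hyperthreaded chip and schedules threads onto them as if they were separate CPUs. A quick sketch (assumes Linux/glibc, where the _SC_NPROCESSORS_ONLN extension is available):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Counts logical processors: a single hyperthreaded Pentium 4
           shows up here as 2, even though there is only one set of
           execution units underneath. */
        long logical = sysconf(_SC_NPROCESSORS_ONLN);
        printf("logical CPUs online: %ld\n", logical);
        return 0;
    }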

I found a nice article that explains hyperthreading at a usable level of detail.