The CLL produces the timing of the GPS signal WRT (with respect to) the receiver's internal clock -- this is normally expressed in distance units and is called a Pseudo-Range (PR); it includes the error in the receiver's internal clock added to the real geometric range of the satellite. Similarly, the PLL measures the carrier phase rate (i.e. the apparent satellite frequency with respect to the receiver's local oscillator, usually derived from the same crystal oscillator that is used as the timing clock) -- this is called the Pseudo-Range Rate (PRR) and includes the frequency error of the receiver's LO plus the contribution from the Doppler shifts associated with all the motions. The Doppler shift includes the vector sum of the satellite's ~3.9 km/sec orbital velocity plus the ~465 m/sec (at the equator) rotational velocity of the earth plus your receiver's motions (in a moving car, ~10-50 m/sec).
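If you prefer to see that in code, here is a minimal Python sketch of how a PR and PRR relate to the geometry and the receiver clock. All the quantities are inputs you would have to supply; none of the names or numbers come from any real receiver:

    import numpy as np

    C = 299_792_458.0     # speed of light, m/s
    F_L1 = 1_575.42e6     # GPS L1 carrier frequency, Hz

    def pr_and_prr(sat_pos, sat_vel, rx_pos, rx_vel, clk_bias, clk_drift):
        """Pseudo-range (m) and pseudo-range rate (m/s) for one satellite.
        clk_bias is the receiver clock error in seconds; clk_drift is the
        fractional frequency error of the receiver's LO."""
        los = sat_pos - rx_pos
        rho = np.linalg.norm(los)              # true geometric range
        u = los / rho                          # unit line-of-sight vector
        pr = rho + C * clk_bias                # range plus clock error
        prr = np.dot(sat_vel - rx_vel, u) + C * clk_drift
        doppler = -prr * F_L1 / C              # apparent carrier offset, Hz
        return pr, prr, doppler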
In early GPS receivers, the four PRs from 4 satellites were converted into a 3-D (XYZ, Lat/Lon/Hgt or whatever) position plus a calibration of the timing bias of your receiver, and the 4 PRRs were converted into a 3-D velocity plus a measurement of the frequency error of the oscillator. More modern receivers take all the PR+PRR data from all N satellites in view for the past T seconds and feed the 2*N*T PR+PRR samples into a single mathematical "black box" (BB), usually a Kalman filter, to produce an over-determined estimate of the same 8 parameters. In modern receivers, then, the BB uses the combination of past & present PRs and PRRs from many satellites to improve the Position, Velocity & Time (PVT) estimate. So Paul's statement about velocities being determined by changes in position is sorta, partially correct, but (when you look at the equations inside the BB) the measured "apparent Doppler" frequencies are even more important.
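For the curious, here is what the "old style" 4-satellite PR solution looks like as a few lines of Python -- an iterated linear least-squares fit for XYZ plus the clock bias. A Kalman filter linearizes the measurements the same way, but also carries the past epochs and the PRRs. The satellite positions and receiver state in the little test at the bottom are pure inventions (the sky geometry is not realistic; it's just a math demo):

    import numpy as np

    def solve_position(sat_positions, pseudoranges, iters=8):
        """Receiver XYZ + clock bias (in meters) from N >= 4 pseudoranges."""
        x = np.zeros(4)                         # start at the Earth's center
        for _ in range(iters):
            los = sat_positions - x[:3]
            rho = np.linalg.norm(los, axis=1)
            predicted = rho + x[3]
            # Jacobian: d(PR)/d(XYZ) = -unit LOS; d(PR)/d(c*dt) = 1
            H = np.hstack([-los / rho[:, None], np.ones((len(rho), 1))])
            dx, *_ = np.linalg.lstsq(H, pseudoranges - predicted, rcond=None)
            x += dx
        return x

    rng = np.random.default_rng(0)
    truth = np.array([1.2e6, -4.7e6, 4.1e6, 150.0])   # made-up receiver state
    sats = rng.normal(size=(6, 3))
    sats *= 26_560e3 / np.linalg.norm(sats, axis=1)[:, None]  # GPS orbit radius
    prs = np.linalg.norm(sats - truth[:3], axis=1) + truth[3]
    print(solve_position(sats, prs))                  # recovers `truth`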
Aside #1: Part of this thread asked about the speed capabilities of the various Garmin receivers. Garmin has had two generic receiver types. The first, including the GPS-20, GPS-38, GPS-45, GPS-45XL and the original GPS-II, used a slow CPU to do the DSP+BB+display functions. The DSP supported only two channels of receiver, which were sequentially multiplexed amongst the various satellites. The "Brain Dead" CPU apparently did not have enough horsepower to handle a PRR "bandwidth" greater than that associated with ~100 MPH speeds, so these receivers have a software "clamp" that makes them useless in airplanes.
The newer Garmins (GPS-25, GPS-48, GPS-12, GPS-12XL, GPS-II+, GPS-III etc.) have a LOT more CPU horsepower; my GPS-III uses an i386EX! As a result, Garmin has been able to handle up to 12 satellites simultaneously in the DSP, the BB can handle airliner speeds, and the GPS-III even supports the neat highways+land-masses mapping.
Now returning to the tutorial -- let's answer the speed/velocity error question. The errors you get in speed/velocity arise from several sources. The two main ones are the inherent accuracy of the GPS system (your receiver plus the GPS satellites), and the added noise from the DoD's aggravation called Selective Availability (SA).
To look at the first of these, let's turn SA off and assume that the GPS satellites are perfect. The GPS signal carrier wavelength is ~20 cm, so with a modest SNR the PLL in the receiver can see (measure) the carrier phase to about 1 cm (1/20th of a cycle, or ~18 degrees of electrical phase). For simplicity, we assume that we measure for 1 second, so I can see ~1 cm/sec of velocity for a given satellite. Since we don't normally think in these units, 1 cm/sec = 36 meters/hour = ~120 feet/hr = ~0.022 miles/hour. To factor in the geometry and the fact that we see multiple satellites, we multiply this by HDOP (for horizontal speed) or VDOP (for vertical speed); since HDOP is rarely > 3, this means that the "system" velocity error is rarely > ~0.07 miles/hour. And note that because we are relying on the carrier phase rate to determine speed, we are measuring the relative L-band carrier frequency (i.e. Doppler offset) to 1/20th of a Hertz!
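You can check those numbers yourself -- a few lines of Python doing nothing more profound than unit conversions:

    C = 299_792_458.0              # m/s
    F_L1 = 1_575.42e6              # Hz
    wavelength = C / F_L1          # ~0.19 m -- the "~20 cm" carrier

    range_res = wavelength / 20    # 1/20 cycle of phase -> ~1 cm
    v_err = range_res / 1.0        # over a 1-second measurement, m/s

    MPS_TO_MPH = 2.23694
    print(v_err * MPS_TO_MPH)      # ~0.021 MPH for one satellite
    print(3 * v_err * MPS_TO_MPH)  # ~0.06 MPH with HDOP = 3
    print(v_err / wavelength)      # Doppler resolution: ~0.05 Hz = 1/20 Hz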
Now let's look at SA. In essence, the DoD degrades the performance of the orbiting atomic clocks on the GPS satellites by putting a programmable line-stretcher in the output of the (10.23 MHz) clock and diddling it with a magic pseudo-random sequence that only they know. Our amateur measurements have shown that they jerk the line stretcher over a wide spectrum of time scales, ranging from a few seconds to ~1/2 hour, but that the long-term average is zero. Over longer time scales, the variations affect the timing of the clock signals that GPS transmits (like the 1.023 Mb/s C/A code), so our PR measurements are noisy and we see our apparent position wander. The DoD has "guaranteed" that the positional errors due to SA will not exceed ~100 meters (3-sigma, horizontal) for the range of variations they add, when seen with a reasonable number of satellites (like with HDOP < 3). We have verified that the SA "dither" is independent from satellite to satellite, so viewing more than the minimum of 4 satellites not only improves the geometry, but also reduces SA's effects on position. Since the slowest SA component is ~1/2 hour in duration, averaging your position over a few hours beats SA's effects down.
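To convince yourself that the averaging works, here is a toy Python simulation. I am standing in for SA with a simple first-order Gauss-Markov process -- the real SA spectrum is richer, and the ~300-second correlation time and ~50-meter amplitude below are assumptions for illustration, not measured SA parameters:

    import numpy as np

    rng = np.random.default_rng(1)
    dt, hours = 1.0, 3
    n = int(hours * 3600 / dt)
    tau, sigma = 300.0, 50.0       # assumed correlation time (s) and size (m)

    a = np.exp(-dt / tau)          # Gauss-Markov stand-in for the SA dither
    sa = np.empty(n)
    sa[0] = sigma * rng.normal()
    for i in range(1, n):
        sa[i] = a * sa[i - 1] + sigma * np.sqrt(1 - a**2) * rng.normal()

    print(np.std(sa))              # instantaneous wander: ~sigma meters
    for k in range(1, hours + 1):  # the running average beats it down
        print(k, "hr:", np.mean(sa[: k * 3600]))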
This thread really was about speed/velocity, so now let's examine those effects. It is the rapid variations of the SA "clock dither" that do bad things. In measuring the power spectrum of SA, we have seen that the largest frequency offset we see on a given satellite is ~1 Hz at L-band, i.e. a velocity error for that satellite of ~20 cm/sec = 720 meters/hr = 0.45 MPH. Again multiplying by HDOP translates this into a ~1 MPH peak velocity error when several satellites are used to determine your velocity.
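Again the arithmetic, in Python form (the HDOP value at the end is just a typical assumption):

    C = 299_792_458.0
    F_L1 = 1_575.42e6
    v_sat = 1.0 * C / F_L1          # 1 Hz of SA dither -> ~0.19 m/s = ~20 cm/s
    MPS_TO_MPH = 2.23694
    print(v_sat * MPS_TO_MPH)       # ~0.43 MPH for one satellite
    print(2.2 * v_sat * MPS_TO_MPH) # ~1 MPH with a typical HDOP ~ 2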
The final point I wish to make concerns speed versus velocity. Here is the dilemma -- K9DOG might say "W3IWI stated that SA averages to zero, and yet my GPS receiver always says I'm moving a few tenths of a MPH when I'm stopped at a traffic light."
These two statements are not in conflict! My assertion was that the VECTOR velocity averages to zero. It may be north for a while, then east, then west, then south, but I come back "home". K9DOG's statement applies to the indicated SPEED, ignoring the direction. The speed on your receiver is always positive, so it can't average out the east-vs-west motions.
Now let's consider the case where we are driving west at a steady 60 MPH. When SA is north/south, the 1 MPH max error changes the apparent direction we are moving by 1 part in 60 (i.e. about one degree), but has little effect on the indicated speed. When SA is contributing in the east-west direction, the indicated speed varies between 59 & 61 MPH, again adding a noise of one part in 60. And the average error is zero! So Jon was right -- the effect of SA on apparent SPEED is different when moving than when stopped. Paul didn't say whether he was discussing VECTOR VELOCITY or SPEED, and he is right ONLY if he is discussing 2-D vector velocities.
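A quick Monte-Carlo in Python makes the stopped-vs-moving distinction concrete. The 0.5 MPH per-axis noise level is an assumption picked to be in the right ballpark:

    import numpy as np

    rng = np.random.default_rng(2)
    noise = 0.5 * rng.normal(size=(100_000, 2))     # east & north errors, MPH

    # Stopped: the VECTOR averages to ~0, but the SPEED does not
    print(noise.mean(axis=0))                       # ~[0, 0]
    print(np.linalg.norm(noise, axis=1).mean())     # ~0.63 MPH, never zero

    # Driving west at 60 MPH: the speed errors now DO average out
    v = noise + [-60.0, 0.0]
    print(np.linalg.norm(v, axis=1).mean())         # ~60.0 MPH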
[For simplicity, the previous discussion assumed the simple 2-D case, and it assumed that the satellite geometry results in a circular error "footprint". In point of fact, at mid-latitude locations in the northern hemisphere, you see GPS satellites (on average) nearly equally in the east and the west. But the 55 degree inclination of the GPS orbits means that you have no satellites in a large region of sky to the north. Hence "HDOP" (which makes no NS vs EW distinction) is an imprecise description of the geometrical effects. If we broke it down into NSDOP vs EWDOP, we would see that EWDOP < NSDOP, with HDOP lumping the two together. I would also note that VDOP is always bigger than HDOP because you only see satellites in the hemisphere above you.]
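If you want to play with the geometry, the DOPs fall right out of the normal-equation matrix of the linearized fit. A small Python sketch, with a sky view invented to mimic the northern hole:

    import numpy as np

    def dops(az_deg, el_deg):
        """EWDOP, NSDOP, HDOP, VDOP from satellite azimuths & elevations."""
        az, el = np.radians(az_deg), np.radians(el_deg)
        e = np.cos(el) * np.sin(az)        # line-of-sight unit vectors
        n = np.cos(el) * np.cos(az)        # in local East/North/Up coords
        u = np.sin(el)
        G = np.column_stack([e, n, u, np.ones_like(u)])
        Q = np.linalg.inv(G.T @ G)
        ew, ns, up = np.sqrt(np.diag(Q)[:3])
        return ew, ns, np.hypot(ew, ns), up

    az = [70, 110, 150, 200, 250, 290]     # nothing in the northern sky
    el = [15, 40, 70, 30, 55, 20]
    print(dops(az, el))                    # EWDOP < NSDOP, and VDOP largest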
Aside #2: For those of you who are mathematically inclined, I have just noted that when your speed is zero, we have a 2-D Gaussian error. When viewed as a 1-D scalar, the resulting errors are Rayleigh distributed at zero speed, and they become Gaussian as the speed gets significantly bigger than the error radius. When the two are comparable, a Rice distribution results.
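And in Python, for the doubters (pure simulation, no GPS required):

    import numpy as np

    rng = np.random.default_rng(3)
    noise = rng.normal(size=(200_000, 2))        # 2-D Gaussian, sigma = 1

    for true_speed in (0.0, 1.0, 10.0):
        speed = np.linalg.norm(noise + [true_speed, 0.0], axis=1)
        print(true_speed, speed.mean(), speed.std())
    # 0.0  -> Rayleigh: mean ~1.25*sigma even though the vector mean is ~0
    # 10.0 -> ~Gaussian centered on 10 (speed >> error radius)
    # 1.0  -> the in-between case: a Rice distribution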