They are not truncated because truncation would lose information that is valuable for data reduction, error analysis, and for spotting when other (astrophysical) effects are at play. It is somewhat of an inconvenience for someone wanting to use parallaxes to determine distances; in that case the usual approach is to discard the non-positive parallaxes or to use alternative methods (e.g. spectroscopic parallax rather than geometric).
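As a minimal sketch of that "discard the non-positive parallaxes" step (the array values here are made up for illustration):

```python
import numpy as np

# Hypothetical parallax measurements in milliarcseconds (mas);
# negative values can occur purely from measurement noise.
parallax_mas = np.array([12.5, 0.8, -0.3, 4.1, -1.2])

# Distance in parsecs is 1000 / parallax[mas]; naively inverting
# a negative parallax would give a meaningless negative "distance".
# One crude approach is to keep only the positive parallaxes:
positive = parallax_mas > 0
distance_pc = 1000.0 / parallax_mas[positive]
```

Note that simply cutting the negative values biases any statistics you compute afterwards, which is one more reason the catalogs leave them in.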
Negative stellar parallaxes are quite common, and the small ones (a few milliarcseconds) are almost entirely due to measurement uncertainty. This is true even when the stated parallax is a few times more negative than its quoted +/- uncertainty.
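You can see how easily this happens with a quick simulation (the true parallax and formal error here are made-up but typical values for a distant star):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical distant star: true parallax 0.5 mas,
# formal measurement error 1.0 mas.
true_parallax_mas = 0.5
sigma_mas = 1.0

# Simulate many independent measurements with Gaussian errors.
measured = rng.normal(true_parallax_mas, sigma_mas, size=100_000)

# A large fraction come out negative even though the true
# parallax (and therefore the distance) is perfectly sensible.
frac_negative = np.mean(measured < 0)
```

For these numbers roughly 30% of the measurements land below zero, even though the star genuinely has a positive parallax.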
When we report measurements, we may say the result is "x +/- y", but what that actually means is that 68% of the measurements fall within y of x. The 68% might seem mysterious, but it arises from a statistical analysis of how the measurements are distributed. If the errors are small, random, and uncorrelated, the measurements tend to follow a normal distribution. In that case the best estimate is the mean, and the uncertainty is the standard deviation, which characterizes the width of the distribution: 68% of measurements lie within one standard deviation of the mean. This is also the "1 sigma" range.
So if I state a result of 10.0 +/- 0.1, which came from performing 1000 measurements, then if they are normally distributed you would expect about 680 of those measurements to be between 9.9 and 10.1 (the 1 sigma range). But you would also expect to see about 45 measurements below 9.8 or above 10.2 (outside the 2 sigma range), and 2 or 3 measurements less than 9.7 or greater than 10.3 (outside the 3 sigma range).
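The counts in that worked example are easy to check by simulation (a sketch, using the same made-up result of 10.0 +/- 0.1 from 1000 measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 1000 measurements with mean 10.0 and
# standard deviation 0.1, as in the example above.
samples = rng.normal(10.0, 0.1, size=1000)

within_1sigma = np.sum(np.abs(samples - 10.0) <= 0.1)  # expect ~680
outside_2sigma = np.sum(np.abs(samples - 10.0) > 0.2)  # expect ~45
outside_3sigma = np.sum(np.abs(samples - 10.0) > 0.3)  # expect ~2-3
```

The exact counts vary from run to run (they are themselves random), but they scatter around 683, 45, and 3 respectively.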
With the stellar parallaxes from Hipparcos, the distribution of the small parallaxes of single stars is consistent with this form of error:
(From "Validation of the new Hipparcos reduction")

"Figure 16: Top: the histogram, for all single stars with five-parameter solutions, of the formal error on the parallaxes. Bottom: the distribution of parallaxes less than 1 mas as a function of formal error on the parallax determination for the new solution. The diagonal lines show the one, two and three sigma levels as based on the formal errors."
In the case of very negative parallaxes, other causes are usually at play. For example, the orbital motion of stars in binary or multi-star systems can be important, and for relatively nearby or fast-moving stars, so can their proper motion.
By the way, I heard that it could have to do with what reference for infinity you have placed.
This is mainly an issue for older catalogs, where parallaxes were measured with respect to a "background" of other stars in the field. If the star being measured is actually in the background behind a field of nearer cluster stars, the measurement can come out as roughly the negative of the valid (positive) parallax you would expect. These days the parallax is measured with respect to an extragalactic reference frame, so this typically isn't important.
What midtskogen found was a paper showing that negative parallaxes also arise at the microarcsecond scale, caused by weak lensing from the small variations in the gravitational field of our galaxy. As the precision of parallax measurements approaches that scale, this becomes a fundamental limitation.