I use Registax for lunar/planetary imaging. It can be a little quirky, but when it works it works very well, and it shines at processing video. As an example, here's a raw frame from a video I took through a 32" scope:

After processing the video in Registax:

You can easily get similar improvement for the Moon -- I just don't have a before/after offhand.
That's right. Every frame contains the same light pollution, and stacking simply averages the frames together. The average of something constant is just that constant, so the light-pollution signal survives the stack unchanged.
What stacking does improve is the shot noise (from the discreteness of light -- at low light levels the statistics of individual photon strikes on each pixel of the sensor become important, hence the graininess in images). The SNR for this type of noise improves with the square root of the number of frames.
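A minimal numpy sketch of both points (the photon counts and the pollution level are made-up numbers, not from any real camera): averaging frames leaves a constant light-pollution pedestal untouched, while the shot noise shrinks with the square root of the number of frames.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0    # mean object/sky photons per pixel per frame (illustrative)
pollution = 50.0  # constant light-pollution pedestal per frame (illustrative)

# Each frame: constant pollution plus Poisson-distributed photon counts.
frames = pollution + rng.poisson(signal, size=(100, 100_000)).astype(float)

single = frames[0]
stack = frames.mean(axis=0)  # stacking = averaging 100 frames

# The pollution pedestal is unchanged by averaging...
print(single.mean(), stack.mean())  # both ~150
# ...but the shot noise shrinks by sqrt(100) = 10x.
print(single.std(), stack.std())    # ~10 vs ~1
```

The pedestal can still be subtracted afterward; the point is that stacking itself only attacks the random part of the signal.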
For lunar-planetary imaging (LPI), this is true. For deep sky object (DSO) or Milky Way imaging, "it depends", and in your particular situation it probably doesn't matter much. (Also with Milky Way photography, do not fall for the misconception that lower ISO is always better. It often isn't.)
The SNR improves with the square root of the number of frames, but in low light (such as when capturing faint details in galaxies and nebulae) it also improves with the square root of exposure time, for the exact same reason. Longer exposure = more photons striking the sensor = better statistics! So what matters most is the total integration time. If light from the objects you are imaging were the only factor, it wouldn't matter whether you use many short exposures or fewer long ones.
However, there's a subtlety for DSO imaging: the camera itself adds noise. Read noise is added once per frame, while thermal noise ("dark current") accumulates with exposure time. For long exposures of dim targets you may want to use calibration frames; the more calibration frames you take, the better they average together, and the more cleanly they subtract out of the final image. So more, shorter exposures can be better, but in practice this matters most under very dark (not light-polluted) skies and when exposure times are very long anyway (several minutes or more). Otherwise it probably doesn't matter, and I would simply take the longest exposures you can to get the most signal in each frame. I would also pay close attention to your camera's ISO performance (is it ISO invariant or not?) and run some tests to find the best settings for your equipment and your sky.
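The per-frame read-noise penalty is one reason longer subexposures help when the signal is faint: at fixed total integration time, many short frames pay the read-noise cost many times. A rough numpy sketch (the flux, read-noise, and timing numbers are hypothetical, not from any particular camera):

```python
import numpy as np

rng = np.random.default_rng(1)
flux = 2.0        # photons per pixel per second (faint target, illustrative)
read_noise = 5.0  # electrons RMS added once per frame (illustrative camera)
total_time = 600  # seconds of total integration, held fixed
n_pixels = 200_000

def snr(exposure_s):
    # Split the same total time into frames of the given length,
    # add Poisson shot noise per frame plus Gaussian read noise per frame.
    n_frames = total_time // exposure_s
    photons = rng.poisson(flux * exposure_s, size=(n_frames, n_pixels))
    read = rng.normal(0.0, read_noise, size=(n_frames, n_pixels))
    stack = (photons + read).sum(axis=0)
    return stack.mean() / stack.std()

# Longer subs pay the read-noise cost fewer times, so SNR is higher.
print(snr(10), snr(120))
```

With a bright, light-polluted sky the shot noise from the sky background swamps the read noise, which is why the sub length matters much less in that situation.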
-------------------------------------------------------------------------------------------------------------------------------------------------------------
For lunar and planetary imaging, the answer is very strongly that many short exposures are better. There is a lot of light available, so there is no trouble getting proper exposures in much less than a second. But the objects you want to capture are very small, so atmospheric turbulence becomes important. It blurs out the details, as if you were looking through water or heat haze. The distortions fluctuate very rapidly, but if you take a large number of very fast frames, then by random chance a few will be distorted less than others. With programs like Registax, the computer can then analyze the frames and sort them by how much they are distorted, allowing you to stack the highest-quality ones and reject the rest.
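The frame-selection step can be sketched in a few lines: rank each frame by a sharpness metric and keep only the best few. Here the metric is the variance of a discrete Laplacian (a common choice, though Registax's own quality estimator differs), and the "planet" and per-frame blur amounts are entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "planet": a crisp disc, blurred by a variable amount per frame
# to mimic fluctuating seeing. All parameters are illustrative.
x = np.linspace(-1, 1, 64)
disc = ((x[:, None] ** 2 + x[None, :] ** 2) < 0.5).astype(float)

def blur(img, n):
    # Crude box blur applied n times via shifted averages.
    for _ in range(n):
        img = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
                   + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5
    return img

def sharpness(img):
    # Variance of the discrete Laplacian: higher = sharper frame.
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return lap.var()

blurs = rng.integers(0, 10, size=50)              # per-frame seeing quality
frames = [blur(disc.copy(), int(b)) for b in blurs]
scores = np.array([sharpness(f) for f in frames])
best = np.argsort(scores)[::-1][:10]              # keep the 10 sharpest
print(blurs[best])  # the selected frames are the least-blurred ones
```

In real use you would then align and average only the selected frames, which is exactly what the stacking stage of these programs does.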
This makes video capture a great method for LPI, since individual frames can be very short and you capture a large number of them very quickly. But the most important thing by far is to do your imaging on nights when the turbulence is weakest. The above image of Saturn was from about 12 seconds' worth of video at 29fps, but I also took it when the seeing was as good as I've ever had. Sites like cleardarksky.com show predictions for the atmospheric seeing, and you can also tell when seeing is good or bad by how steady the stars are. Twinkling stars mean bad views of the Moon and planets!








