Capturing crystal-clear astro images involves a delicate balance of having just enough pixels for the object you're imaging. But you don't need a PhD to understand the sampling theory that's involved.

Single Pixel Details
A one-pixel Moon would hardly be satisfying.

In last month's blog I demonstrated that larger pixel scales make just about every aspect of imaging more forgiving. From that, it sure sounds like the larger the pixel scale the better. In truth, there's a battle between the benefits of a large pixel scale and a small one, because the smaller your pixel scale, the finer the detail you can resolve in your images. Thus, there is an opposing desire for as small a pixel scale as possible.
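As a quick refresher, pixel scale falls out of your camera's pixel size and your telescope's focal length via the standard small-angle formula. Here's a minimal Python sketch (the camera and scope numbers are just hypothetical examples):

```python
def pixel_scale(pixel_size_um: float, focal_length_mm: float) -> float:
    """Image scale in arcseconds per pixel.

    The constant 206.265 combines the radians-to-arcseconds conversion
    (1 radian = 206,265 arcseconds) with the micron-to-millimeter units.
    """
    return 206.265 * pixel_size_um / focal_length_mm

# Hypothetical example: 3.76-micron pixels behind a 530-mm refractor
print(pixel_scale(3.76, 530.0))   # ~1.46 arcseconds per pixel
```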

Let’s take an extreme example for illustration's sake. What if we had a humongous pixel scale, say, half a degree per pixel, and took a photo of the Moon? Since the Moon itself is only about half a degree across, its entire disk would fit within a single pixel. You wouldn't be especially satisfied with the results!

All of the light coming from the details you might want to capture — craters, rilles, maria — would be collected in a single pixel (assuming the Moon was centered on that pixel), resulting in nothing but a big white square on the background sky. We also tend to like zooming into our photos to see more details, but enlarging a single pixel doesn’t really give us a more detailed square, does it? When there aren't enough pixels to represent the image we want to reproduce, we call this undersampling. A one-pixel Moon is about as extreme an example of undersampling as you could ask for.

Better sampling
A better-sampled Moon shows us some surface features.
[credit, Richard S. Wright Jr.]
Now, let’s take a look at a more highly sampled Moon image, one with a smaller pixel scale. We can see the familiar features we want. We can even zoom in a bit to see more details. The sampling is better because the number of pixels used to represent the image is sufficient to display all the details we want.

Or is it? Zoom in a little more and the details start to get pixelated, and we start seeing blocky features where we want crisp image details. Obviously, we still need more pixels!

It would seem that if our equipment and skill are up to the task, we would want as small a pixel scale as we can possibly get. Alas, there are two very significant obstacles in our path: first is the physics of light and optical design, and second is . . . you guessed it, atmospheric seeing.

Our first limit to pixel scale is related to the smallest detail your optics can capture, which is a big, complicated topic I can’t squeeze into this blog. I’m simply going to say that diffraction-limited optics are now more or less the norm for telescopes, and if your optical design is diffraction limited, you can be assured the real limit to image sharpness is the atmosphere, not your equipment . . . unless you're in orbit!

Still too few pixels.
We need more pixels if we want more detail. But how far can we go?
[credit, Richard S. Wright Jr.]
Remember, seeing is a measure of how much the stars move around due to the atmosphere's turbulence. Stars are point sources and make the ultimate test of how fine a detail we can capture (they literally are the finest detail available). If the seeing is 3 arcseconds, then not only stars but also other fine details, say, in a nebula or in lunar craters, are going to be spread out and blurred by that much. Thus, the seeing is the absolute limit on how fine a detail you can possibly capture.

Now the question is, how many pixels do you want to use to try and capture that detail optimally? In astrophotography this optimal sampling is typically referred to as critical sampling.

Usually when this topic comes up, poor Harry Nyquist gets dragged into the conversation, and someone attempts to explain signal processing (and the Nyquist-Shannon sampling theorem) in one or two paragraphs. I took a whole course on signal analysis in college (well, at least one . . .), and I really don’t think I can do it justice in a single blog post. Instead, I’m going to distill its spirit all the way down to what really matters to us when trying to photograph star fields. So here it is, Richard’s Super-Simplified Sampling Theory for Astrophotography, which I think anyone can understand quite easily:

What is the minimum number of pixels required to make a star look round?

Yep, that’s it. How many pixels do you need to adequately sample a round star that has been distorted and enlarged by atmospheric seeing? The image below shows the first three possible solutions, with a circle superimposed that represents the theoretical star.

Various sampling values
Which of these sampling values looks more round?
[credit, Richard S. Wright Jr.]
We already know from the Moon example that one pixel represents undersampling. Many understand Nyquist as saying that a signal must be sampled at least twice, that is, with at least two pixels. However, sinusoidal signals (such as round things, like stars) need to be sampled at least three times. So you really need at least three pixels across (and down; this is a two-dimensional signal, remember) before a star image starts to look roundish. In other words, three pixels spread across the seeing limit would be the minimum, or critical sampling.
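If you'd like to see that numerically, here's a small Python sketch. It assumes an idealized Gaussian star profile and samples it at pixel centers (rather than integrating over each pixel, which is good enough for illustration) at 1, 2, and 3 pixels across the FWHM:

```python
import numpy as np

def sample_star(fwhm_arcsec: float, pixels_across_fwhm: int) -> np.ndarray:
    """Render an idealized Gaussian star on a pixel grid.

    The grid spans roughly +/- 1 FWHM around the star's center.
    """
    sigma = fwhm_arcsec / 2.355               # FWHM = 2*sqrt(2 ln 2) * sigma
    scale = fwhm_arcsec / pixels_across_fwhm  # arcseconds per pixel
    n = 2 * pixels_across_fwhm + 1            # odd count keeps the star centered
    c = (np.arange(n) - n // 2) * scale       # pixel-center coordinates
    x, y = np.meshgrid(c, c)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2))

for n in (1, 2, 3):
    print(f"{n} pixel(s) across the seeing disk:")
    print(np.round(sample_star(2.0, n), 2), "\n")
```

At one pixel across, essentially all the light lands in a single bright pixel; at three across, a recognizably round profile starts to emerge.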

If your seeing is 2 arcseconds (a pretty still night for most of us), then a critically sampled image would have a pixel scale of 2 / 3, or about 0.67 arcseconds per pixel. In fact, sampling theory says this is the minimum. If your cell-phone signal were minimally critically sampled in this way, you would be pretty unhappy with the sound quality. For astrophotography, though, 3 pixels across your seeing limit isn't really the minimum but the maximum sampling you should attempt. Why? Because until now, all of this has been academic . . . and the real world is a much harsher place than the classroom.
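Before we go there, here's that arithmetic in a short Python sketch. It also turns the question around to ask how long a focal length a given camera can usefully support on a given night (the pixel size is again a hypothetical example):

```python
def critical_pixel_scale(seeing_fwhm_arcsec: float) -> float:
    """Pixel scale (arcsec/pixel) that puts three pixels across the seeing disk."""
    return seeing_fwhm_arcsec / 3.0

def max_useful_focal_length(seeing_fwhm_arcsec: float, pixel_size_um: float) -> float:
    """Longest focal length (mm) before you oversample the night's seeing."""
    return 206.265 * pixel_size_um / critical_pixel_scale(seeing_fwhm_arcsec)

print(critical_pixel_scale(2.0))            # ~0.67 arcseconds per pixel
print(max_useful_focal_length(2.0, 3.76))   # ~1,160 mm with 3.76-micron pixels
```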

Small and large blobs
A high-resolution blurry blob is just a larger blurry blob!
[credit, Richard S. Wright Jr.]
First, unlike your cell-phone signal, there is much more noise in an astronomical image, and oversampling makes the noise more prominent on a per-pixel level. Also, and more importantly, details smeared by atmospheric turbulence are soft (read: blurry). Increasing the resolution of a soft image simply makes a bigger soft image. Unlike the initial Moon example, once you reach the seeing limit, there are no more details for you to acquire with additional sampling. Moreover, most of your stars and other details are going to straddle multiple pixels rather than being exactly centered; with dithering and stacking of multiple images, even a slightly undersampled image can be quite satisfactory.
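To put a rough number on the noise penalty, here's a toy Python model. The flux and read-noise figures are made up, and sky background is ignored; the point is only that a star's fixed photon budget gets divided among more pixels as you oversample, while each pixel contributes its own read noise:

```python
import math

star_electrons = 9000.0   # total light collected from the star (hypothetical)
read_noise = 8.0          # electrons RMS per pixel (hypothetical)

for pixels_across in (2, 3, 6):             # pixels spanning the seeing disk
    n_pixels = pixels_across ** 2           # rough star footprint in pixels
    signal = star_electrons / n_pixels      # flux shared among more pixels
    noise = math.sqrt(signal + read_noise**2)   # shot noise plus read noise
    print(f"{pixels_across} px across: per-pixel SNR ~ {signal / noise:.1f}")
```

In this toy case, going from 3 to 6 pixels across the seeing disk costs you more than half the per-pixel signal-to-noise, for no additional detail.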

So, how do you know what your seeing is? A quick-and-dirty way to determine your seeing is to come to a good focus on a star and measure the FWHM (full width at half maximum) diameter of the star. Most camera-control programs have a readout that will tell you this, and many autofocusing programs report the best FWHM achieved once you are in focus. Remember, too, that seeing conditions vary with the weather and can change throughout an evening. Seeing can even depend on which direction you are pointing: a warm house, for example, will create air currents that significantly degrade seeing conditions. There are times I’ve set up only to find that, due to poor seeing, I’m so oversampled that I simply can't get any good data with my system. Those are the nights to practice something else, or call it a night and get some sleep!
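If your software reports nothing of the sort, a crude estimate isn't hard to roll yourself. Here's a minimal Python sketch, assuming a small cutout containing one reasonably centered star over a flat background:

```python
import numpy as np

def estimate_fwhm(cutout: np.ndarray, pixel_scale_arcsec: float) -> float:
    """Rough FWHM (arcseconds) of a star in a small image cutout.

    Counts the pixels at or above half the peak (after subtracting the
    median as a background estimate), then converts that area into the
    diameter of a circle of equal area. Crude, but good enough to gauge
    the night's seeing.
    """
    img = cutout - np.median(cutout)            # remove the sky background
    area = np.count_nonzero(img >= 0.5 * img.max())
    fwhm_pixels = 2.0 * np.sqrt(area / np.pi)   # equal-area circle diameter
    return fwhm_pixels * pixel_scale_arcsec

# Hypothetical use, with a star cutout imaged at 0.66 arcsec/pixel:
# print(estimate_fwhm(star_cutout, 0.66))
```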

There are a lot of things I’ve glossed over here, and there are some clever ways to “beat the seeing” in some circumstances (lucky imaging). You can be sure these ideas will come up again in future blogs, including the topic of proper sampling for quality results!

Comments



dpuche

February 17, 2018 at 10:13 am

Nice and simple! Thank you for this.

It would be interesting if you could write something similar for spectroscopy using a transmission diffraction grating such as the ones used by amateur astronomers, explaining how the choice of lines/mm, the distance to the CCD, and the sampling of the spectrum relate to the seeing.
Cheers!


Richard S. Wright Jr.

February 17, 2018 at 6:09 pm

Thanks, glad you liked it. An interesting suggestion, I'll keep it in mind!


Farzad_k

September 14, 2020 at 6:47 pm

Hello. Thank you for writing this up. I have been reading a lot of blogs on the subject of sampling and yours is the easiest to understand. I have many questions though. One of them is about FWHM. Not every capture program provides that, I'm afraid. Mine reports HFR (half-flux radius). Some literature says that HFD (half-flux diameter) is comparable to, or as good as, FWHM. What are your thoughts on that?

Do you know of any other software that can be used to actually monitor FWHM as you image?

Thanks a lot.


Richard S. Wright Jr.

September 15, 2020 at 10:29 am

They are indeed similar quantities. The calculation of the HFD or radius is slightly more robust in the presence of poor seeing conditions.
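For an idealized Gaussian star profile, the two measures actually coincide: the half-flux radius works out to sigma*sqrt(2 ln 2), so HFD = 2.355*sigma = FWHM. A quick numerical check in Python, assuming a noiseless Gaussian:

```python
import numpy as np

sigma = 1.0
r = np.linspace(0, 10, 100_001)                  # radius samples
enclosed = 1.0 - np.exp(-r**2 / (2 * sigma**2))  # enclosed-flux fraction
hfr = r[np.searchsorted(enclosed, 0.5)]          # half-flux radius
print(2 * hfr, 2.355 * sigma)                    # both ~2.355, i.e., the FWHM
```

With real seeing and noise the two will differ a bit, which is where the robustness of HFD comes in.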

I don't know of one that reports the values automatically as you image. I use TheSky Imaging Edition (of course, I also do some programming for them), and when you do an image link (plate solve), it will report the average FWHM of all the stars in the image. Many focusing packages will also report this, so you know how good your seeing is once focused.
