Image Processing

The galaxy UGC 1382: from an initial three-colour image to an enhanced image
showing that what appeared to be an elliptical galaxy is actually a spiral.
Credit: NASA/JPL/Caltech/SDSS/NRAO/L. Hagen and M. Seibert

When we use telescopes to take images of objects in space, we are actually collecting data, and this data comes back in a raw form. To see what is going on in the observations, we need to process and analyse the images.

Our digital cameras do the same thing, but they do the processing for us. With telescope data we need to do it ourselves. Processing a raw image involves several stages, which must be completed before we can analyse it. Some of these stages are:

  • Debiasing: The instruments on a telescope often have a signal deliberately added to them, which we call a bias, so that there is always a baseline signal in the detector. This offset needs to be removed from the image before we can measure the signal coming from the object we are observing. 
  • Flat-Fielding: In astronomy we are always taking images against the background of the sky, and the sky does not have a flat background colour or brightness. Not only does the brightness change as the Sun sets, but it is not constant across the sky, and it also differs depending on which colour (wavelength) you are observing in. We therefore map the sky at every twilight and remove these variations from our 'science' image. 
  • Calibration: When we look at objects in space we are usually trying to measure how bright they are. To do this we need to compare what we're looking at with objects whose brightness we already know. Likewise, if we are taking a spectrum, we need to know which wavelengths we are looking at, so we take extra observations of objects whose wavelength range is already known. This could be a standard star (one that doesn't change in brightness), or a lamp containing a particular element. We can then calibrate our observations so that our own measurements are accurate.
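
The stages above can be sketched in a few lines of NumPy. This is a minimal, illustrative reduction, not a full pipeline: the frames are simulated placeholders (real ones would come from the telescope), and the standard-star counts and magnitude are assumed values chosen for the example.

```python
import numpy as np

# Simulated frames standing in for real telescope data (assumed values).
rng = np.random.default_rng(0)
bias = np.full((64, 64), 500.0)                          # master bias frame
raw = bias + rng.normal(1000.0, 10.0, size=(64, 64))     # raw science frame (counts)
flat = bias + rng.normal(1.0, 0.02, size=(64, 64)) * 20000.0  # twilight flat frame

# Debiasing: subtract the baseline electronic offset.
debiased = raw - bias

# Flat-fielding: divide by the bias-subtracted flat, normalised to its
# median, to remove sensitivity variations across the image.
norm_flat = (flat - bias) / np.median(flat - bias)
science = debiased / norm_flat

# Calibration: use a standard star of known brightness to set the
# magnitude zero point, then apply it to our target's measured counts.
std_counts, std_mag = 50000.0, 12.0          # assumed standard-star values
zero_point = std_mag + 2.5 * np.log10(std_counts)
target_counts = 20000.0                       # assumed counts from our target
target_mag = zero_point - 2.5 * np.log10(target_counts)
```

The order matters: the flat must itself be debiased before it is used, and flat-fielding is a division (correcting multiplicative sensitivity differences) while debiasing is a subtraction (removing an additive offset).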