Intro to Narrowband and Hubble Palette, Part 1

Jan 3, 2020 11:00 · 3052 words · 15 minute read

Hello, my name is Nico Carver and I'm an astrophotographer. My website is at nebulaphotos.com, and today we're going to look at what astrophotographers call narrowband imaging. The most famous application of this is the Hubble palette, popularized by the Hubble Space Telescope. In the videos that follow this one I'll show you how I process narrowband images to make pretty full-color pictures, and you can find the links to those videos below. I have one for Photoshop, one for GIMP (which is a free, open-source application), and one for PixInsight, which is what a lot of more advanced astrophotographers are using.

Before we get into all that, though, I thought it would be a good idea to really understand narrowband first. That means understanding a bit about the science of color and of color vision in the human eye, and then how we try to replicate that with cameras and monitors, and basically anywhere in the modern age where we're trying to make a full-color image look like what we see with our actual eyes.

We're going to start at the beginning with this thing called the electromagnetic spectrum. I'm sure a lot of you have heard of this; it's also called the EM spectrum for short. It's a large range of radiation, from low-frequency radio waves and microwaves all the way up to super-energetic, high-frequency gamma rays. There's a very small section of the spectrum that our eyes can directly detect, and we call this section visible light, meaning the light that is visible to our human detectors, our eyes. With different sensors we can now directly image other parts of the spectrum, so you may have heard of scientific missions doing X-ray or ultraviolet detection: parts of the spectrum we don't directly see, but sensors can.

An interesting thing is that different animals are sensitive to parts of the spectrum just outside of what we think of as the visible spectrum. For instance, the lenses in our eyes block ultraviolet light, but recent research has found that in other mammals, like my cat here, the lenses don't block UV. We're still not sure if that means Bobby here can really see in UV. Why does the lens matter? Because the actual sensor of the eye is the retina, in the back of the eye, where the rods and the cones live. The cones are cells that can only detect certain wavelengths, and our brain (or the cat's brain) then interprets those wavelengths as colors. Cats are like most mammals in that they are dichromats, meaning their color vision comes from the interaction of two different types of cones in the eye. Humans and many other primates are trichromats, which means we have three different types of cones. We can characterize these cones as short-wavelength cones (S cones), medium-wavelength cones (M cones), and long-wavelength cones (L cones). You can see from this chart that each type of cone is most responsive to certain wavelengths, and a simple way to think of this is that we have blue cones, green cones, and red cones: RGB. We don't have cones that see this particular color, for instance, but through the combination of the response in the green and red cones, our brains interpret it as yellow. Makes sense? Okay. Note that the biggest area of overlap for the cones in our eyes is in the green part of the spectrum. That's going to be important later when we get to color cameras.
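If you want to see that additive idea written down, here is a tiny sketch in Python with NumPy and Matplotlib (just an illustration of the principle, not anything from my capture or processing workflow): a patch with equal red and green intensity and no blue reads as yellow on an RGB display.

```python
import numpy as np
import matplotlib.pyplot as plt

# Three 50x50 RGB patches: pure red, pure green, and red + green.
red    = np.zeros((50, 50, 3)); red[..., 0]   = 1.0
green  = np.zeros((50, 50, 3)); green[..., 1] = 1.0
yellow = red + green  # equal red and green response is perceived as yellow

fig, axes = plt.subplots(1, 3, figsize=(6, 2))
for ax, patch, title in zip(axes, [red, green, yellow], ["red", "green", "red + green"]):
    ax.imshow(patch)
    ax.set_title(title)
    ax.axis("off")
plt.show()
```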
But we have to start with the black-and-white camera, or mono camera, which is short for monochromatic. The earliest photography was all black and white, mono, meaning we could measure the intensity of the light across the scene, but we didn't yet have a way of reproducing color. The earliest technology for reproducing color was very interesting, and it came before color film was invented. Here is how it worked: you would ask your subject to stay very still and then take three photographs, each time placing a different colored glass filter in front of the lens. I'm sure many of you can guess the colors of these filters: yes, it was red, green, and blue. By combining the relative intensities of the light captured through those three filters, we can fairly precisely mimic what the human eye sees. Back then, the only way to display the result was to use three slide projectors, projecting three distinct images on top of one another onto the same screen.

How do we do it today? Well, an LCD monitor or a flat-screen TV is basically just a light panel. The panel itself is just white light, and in front of it are pixels. The pixels are little polarizing filters, and those polarizing filters control the intensity of the light at each pixel site. In front of that, again, are red, green, and blue filters, which divide each pixel into sub-pixels. So one sub-pixel is lighting up a sort of pale green, the one next to it a slightly darker green, and when you combine it all together it looks like a color image. If we put a microscope on a monitor showing all white, we would actually see these sub-pixels: each pixel is clearly made up of a red, a green, and a blue sub-pixel.

A digital color camera works the same way, but instead of a light panel in the back we have the sensor. A sensor is just a piece of light-sensitive silicon with millions of little pixel wells attached to electronics that turn the analog source (the light coming through the lens and hitting the sensor) into a digital readout. At each pixel site, or photosite, it records that this intensity of light hit right here, based on the light coming through the lens. With astrophotography, those signals are often very small, which is why minimizing noise is so important. If you have a mono camera, there is nothing between the front of the sensor and the optics other than maybe a protective glass window. If you have a color camera, like most DSLRs today, then in front of the sensor is what we call a color filter array, or CFA. The most common color filter array is the Bayer array, named after Dr. Bryce Bayer, who worked at Eastman Kodak and invented it. The Bayer array is arranged like this, and if you look closely you'll notice that there are two green pixels for every one red and one blue. If we remember back to what I was saying about the cones in our eyes, the Bayer filter array is arranged with that oversensitivity to green light in mind. The camera, through a computer on board (or software later, if you shoot raw), demosaics the image: in effect, it interpolates the colors. For instance, if light hit this green pixel, the camera would interpolate what color actually goes there, not just from the single pixel the light hit, but from all the pixels nearby. This is how we end up with all the different colors, not just red, green, and blue pixels: it's based on this interpolation of where the light is hitting and the interaction of neighboring pixels.
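To make the interpolation idea concrete, here is a minimal sketch of bilinear demosaicing with NumPy and SciPy, assuming an RGGB Bayer layout. It's an illustration of the principle only, not what any particular camera or raw converter actually does.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic):
    """Very simple bilinear demosaic of a single-channel Bayer mosaic.
    Assumes an RGGB layout: R at (even, even), G at (even, odd) and
    (odd, even), B at (odd, odd)."""
    h, w = mosaic.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    g_mask = np.zeros((h, w), bool); g_mask[0::2, 1::2] = True; g_mask[1::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True

    # Bilinear interpolation kernels: each missing value becomes the
    # average of its nearest sampled neighbours of that colour.
    k_rb = np.array([[0.25, 0.5, 0.25],
                     [0.50, 1.0, 0.50],
                     [0.25, 0.5, 0.25]])
    k_g  = np.array([[0.00, 0.25, 0.00],
                     [0.25, 1.00, 0.25],
                     [0.00, 0.25, 0.00]])

    rgb = np.zeros((h, w, 3), float)
    for channel, (mask, kernel) in enumerate([(r_mask, k_rb), (g_mask, k_g), (b_mask, k_rb)]):
        rgb[..., channel] = convolve(mosaic * mask, kernel, mode="mirror")
    return rgb
```

Each missing value is just the average of its nearest sampled neighbors; real demosaic algorithms are much more sophisticated, but the basic idea of filling in colors from the surrounding pixels is the same.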
Okay, so to review: all sensors are actually mono sensors, but in a DSLR or a one-shot color camera there is a filter array in front of the sensor. The Bayer array, or color filter array, is not the only way to filter the light hitting the sensor to make a full-color image, though. Another type of imaging, often called mono imaging, uses a monochromatic camera. In this case I'm using a ZWO astronomy camera called the ASI1600MM Cool; it's this cooled mono camera, this part right here, the red, can-like thing. To capture different colors, I put different colored filters in front of the sensor, and I do this with an electronic filter wheel. It's a completely closed-in design: this part attaches to the telescope, and then through a computer I can move different filters in front of the sensor. This is an eight-position filter wheel, so I have L, R, G, B, Ha, SII, and OIII — I'll get into what those different things mean — and I also have near-IR, but I'm not going to talk about that.

The most common filters to put in a filter wheel like this, if you're doing mono imaging, are red, green, and blue, because if you just choose red, green, and blue and shoot those, it's like shooting with a DSLR. It's slightly more efficient, since while you're shooting red you're using all of the pixels in the sensor at once, then all green, then all blue, but you end up with a similar image to what you would get with a DSLR, because you're just shooting red, green, and blue. Another option is to shoot LRGB: you shoot luminance, where you're getting all the information, and then you mix that with RGB. The nice thing about shooting RGB is that you get really accurate star color, which you're often missing when you use narrowband filters, which are what I use a lot.

Red, green, and blue are what we call broadband filters, meaning there is no single wavelength that equals red. What the filter does is let in a pretty broad range of reds, and so we call it broadband. It lets in wavelengths from a wave that measures 590 nanometers peak to peak all the way up to a wave that measures 700 nanometers peak to peak. (A wavelength is just the length of a wave measured from one peak to the next, so literally 590 nanometers, or 700 nanometers, or somewhere in between, and our eyes respond to all of those wavelengths as red.) We can say that this red filter has a bandpass of 110 nanometers, meaning that anything from a red measuring 590 nanometers to one measuring 700 nanometers will come through the filter; everything outside of that bandpass is rejected. A narrowband filter blocks much more light — they look sort of more reflective, like this — and a lot of the time a narrowband filter lets in just a ten, five, or even three nanometer wide bandpass. You're blocking almost all of the visible spectrum except for the one small slice that you want.
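If it helps to see the bandpass idea written down, here is a tiny Python sketch that models each filter as a simple wavelength interval and checks which emission lines (which I'll get to in a moment) make it through. The numbers are the example values from above, not any specific manufacturer's specifications.

```python
# Toy model of a filter as a wavelength interval, in nanometers.

def passes(band, wavelength_nm):
    """Return True if a wavelength falls inside the filter's bandpass."""
    low, high = band
    return low <= wavelength_nm <= high

broadband_red = (590.0, 700.0)               # 110 nm bandpass
narrow_ha     = (656.3 - 2.5, 656.3 + 2.5)   # 5 nm bandpass centered on H-alpha

for name, line in [("H-alpha", 656.3), ("OIII", 500.7)]:
    print(f"{name}: broadband red -> {passes(broadband_red, line)}, "
          f"5 nm Ha filter -> {passes(narrow_ha, line)}")
```

H-alpha falls inside both the broad red filter and the 5 nm Ha filter, while OIII at 500.7 nm is rejected by both; the narrow filter simply throws away far more of everything else.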
But why do we do that? Well, my very favorite things to photograph are nebulae, hence my website, nebulaphotos.com, and there are different kinds of nebulae. I'm not going to explain all of them here, but a basic division is: reflection nebulae, where swaths of dust reflect the light from nearby bright stars; dark nebulae, which block the light from stars and other nebulae behind them; and lastly emission nebulae, which actually emit their own light, either because they are clouds of excited gases where stars are being formed, or because they are part of the death of a star — planetary nebulae (the deaths of stars) or supernova remnants, which are one of my favorite kinds of objects. It's this last class, emission nebulae, where we typically use narrowband filters. The reason is that these nebulae emit light at very particular, known wavelengths. For instance, singly ionized sulfur, or SII, emits at 672.4 nanometers — that's it. Hydrogen-alpha, also called H-alpha or Ha, emits at 656.3, and doubly ionized oxygen, or OIII, emits at 500.7. So I have narrowband filters with bandpasses just a few nanometers wide — three to five nanometers — centered on these key emission lines. They essentially block out 99% of the visible spectrum, and blocking out that unwanted light is often a really good strategy because of terrestrial light pollution. It increases the contrast you get: the sky appears pretty dark, almost black, and the nebulae really come out — really high contrast. If I instead used a more broadband filter, like a blue filter, rather than an OIII, then I'm also capturing all the LED streetlights around and all these other things that I don't really want in my picture.

A question I often get is: should I use narrowband filters with my DSLR? This is a big source of debate. What you're essentially doing is taking one filter on a filter array of red, green, and blue and putting another filter on top of it, so it's not as efficient, but I know people can get good results doing so — I've seen it online. I've never personally tried it, so I'm not going to comment from personal experience; I've only done narrowband imaging with my mono camera. I use my DSLRs mostly without filters, and I try to travel to darker sites to use those.

The last thing I want to mention — I'll talk about this more in the processing videos too — is that once you have captured narrowband data, as opposed to red, green, and blue data, it's a little less clear how to actually make an image out of it. The reason is that OIII is actually somewhere in between green and blue: 500.7 nanometers is right in between, a greenish blue, a teal. H-alpha and SII are both very deep reds: H-alpha is already a deep red, and SII goes even deeper, probably beyond what our eyes can actually detect. So you have two basically red colors and one green-blue color. What's cool about this is that if you're doing what we call bicolor imaging, where you shoot only H-alpha and OIII, you can put the Ha in the red channel and the OIII in both the green and the blue channels, and you get a fairly natural-looking result, which is really nice. But when you shoot three or more narrowband channels, it gets a little more creative, and a lot of people find that really fun. If you're a stickler for what the object should "actually" look like, it might not be for you, but what we can still see from these more creative images is how the gases are interacting, and the actual positions of all the gases are still accurate; it's just that the colors are what we call false color when you change the mappings. One of the most common mappings — meaning you're taking some narrowband data and putting it into the red, green, or blue channel — is to put the SII data in the red channel, the H-alpha data in the green channel, and the OIII data in the blue channel.
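To make those channel mappings concrete, here is a minimal NumPy sketch of both combinations: the bicolor Ha/OIII mapping and the SII/Ha/OIII mapping just described, with a crude green reduction standing in for the "remove a little bit of the green" step I talk about next. This is my own illustration of the idea, not the actual workflow from the processing videos; the function names and the 0-to-1 scaling are assumptions.

```python
import numpy as np

# ha, oiii, sii are assumed to be stacked, stretched narrowband frames,
# already loaded as 2-D float arrays scaled to the 0..1 range
# (how you load them — FITS, TIFF, etc. — depends on your own workflow).

def hoo(ha, oiii):
    """Bicolor mapping: Ha -> red, OIII -> both green and blue."""
    return np.dstack([ha, oiii, oiii])

def sho(sii, ha, oiii, green_reduction=0.3):
    """Hubble-palette-style mapping: SII -> red, Ha -> green, OIII -> blue,
    followed by a crude green reduction.  Real tools (for example SCNR in
    PixInsight) handle this step more carefully."""
    rgb = np.dstack([sii, ha, oiii]).astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Pull green toward the average of red and blue by a fixed fraction.
    neutral = 0.5 * (r + b)
    rgb[..., 1] = np.minimum(g, (1 - green_reduction) * g + green_reduction * neutral)
    return np.clip(rgb, 0.0, 1.0)
```

Calling hoo(ha, oiii) gives the fairly natural-looking bicolor result described above, while sho(sii, ha, oiii) gives the false-color look described next.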
If you think about that, it's SII, Ha, OIII — or SHO imaging for short, from the order you put them in. The order makes some sense, because sulfur is the furthest into the red, H-alpha is a little less so, and OIII is even less so. When you do that and then remove a little bit of the green, you get this really cool look of golden oranges and yellows and blues, sort of reminiscent of Hollywood film toning — that orange and blue look. This style of imaging, where you do the SHO mapping and then remove a little bit of the green, is known as the Hubble palette, because when NASA sent up the Hubble Space Telescope, they got back all these beautiful images shot with those filters, and people — including people who work for NASA — often processed them in this way. So it became known as the Hubble palette; you'll also see it called the SHO palette or something like that.

Okay, that's really it for this intro, but if you're interested in how to process narrowband images from a mono setup, I encourage you to keep watching, because I have videos for GIMP, Photoshop, and PixInsight where I go through stacking, registering, and putting together a full-color image using some sample data I shot with this setup of the Seagull or Parrot Nebula (it goes by both names), IC 2177. I'll explain a little more about palette choices and color in those videos, but I hope this was a good introduction to the science behind color filters, what we mean by broadband versus narrowband and why we use those terms, and what we're actually doing when we shoot with these kinds of filters. Thanks for watching. Again, my website is at nebulaphotos.com, and if you're not already subscribed to my YouTube channel, I encourage you to subscribe. Thanks very much.