!!Con West 2020 - Phil Warren: Beyond the Black Rainbow: Using Booleans to Photograph IR Rainbows!
Mar 20, 2020 18:52 · 1675 words · 8 minute read
Hi. I’m Phil Warren. I’m a research engineer who works in the field of imagery, both capture and display, and when I’m not in the lab, I do the exact same thing, but with my own cameras, and I would like to share with you a novel exploration I’ve been working on. Imagine a rainbow. Actually, you don’t need to imagine it. I’ll show you. This is a pretty good representation of the electromagnetic spectrum that is visible light. Light has a wavelength, like everything else on the electromagnetic spectrum, and we can perceive wavelengths ranging from violet, around 380 nanometers, to deep red, around 720 nanometers. Now, as we look up at a rainbow, photons rain down upon us like so many droplets of splishy splashy color. However, our human eyes can only capture a certain range of those photons.
01:24 - Below the blue wavelengths, ultraviolet photons go unperceived. But that’s not what we’re going to talk about today. Above the red, infrared photons are likewise not seen. This means the rainbow is hiding something. There are unseen stripes to this rainbow. And not only do I need to see them. I need you to see them too. That’s the problem we’re gonna solve today.
01:51 - Going back to the rain of photons, our awesome human eyes perceive color by catching those photons, those droplets of color energy, in cone cells that behave like… buckets. Most people have exactly three buckets, and only three buckets, in which to catch those photons. These buckets, which are actually three kinds of cone cells, are long cones, which capture red light, medium cones, which capture green light, and short cones, which capture blue light. At this point, our brain is given color information as these triplet sets of data, a paradigm we call the tristimulus model. We never directly perceive the spectrum, because we can never dump out the buckets and sort the droplets to see what specific wavelength each photon was.
02:42 - Instead, we can understand intermediary shades of color by observing where the buckets overlap. If photons tickle both our red and green cones, we identify yellow or orange. If photons tickle both our green and blue cones, we see cyan or indigo. If we wanted to see the hidden stripes of the rainbow, we would have to see the world in cones that perceived longer wavelengths, say 760 to 1100 nanometers. Let’s figure out how to do that. Cameras, unsurprisingly, are set up to mimic human eyes.
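To make the bucket metaphor concrete, here’s a minimal Python sketch (not from the talk; the Gaussian curve centers and widths are made-up stand-ins for real cone sensitivities) that fills three buckets from a spectrum and reads out a tristimulus triplet:

```python
import numpy as np

# Wavelength axis covering visible light, in nanometers.
wavelengths = np.arange(380, 721, dtype=float)

def bucket(center_nm, width_nm):
    """A hypothetical bell-shaped sensitivity curve: one 'bucket'."""
    return np.exp(-0.5 * ((wavelengths - center_nm) / width_nm) ** 2)

# Illustrative stand-ins for the three kinds of cone cells.
long_cone = bucket(565, 50)    # catches "red" light
medium_cone = bucket(540, 45)  # catches "green" light
short_cone = bucket(445, 30)   # catches "blue" light

# A hypothetical narrow spectrum around 580 nm (yellow-ish light).
spectrum = bucket(580, 10)

# The tristimulus triplet: how full each bucket gets. Yellow photons
# tickle both the long and medium cones, and barely the short one.
triplet = [(cone * spectrum).sum()
           for cone in (long_cone, medium_cone, short_cone)]
print(triplet)
```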
03:17 - However, the silicon sensor in the back of a digital camera, by itself, before the camera is fully put together, is actually sensitive to a much wider spectrum: 300 nanometers to almost 1100 nanometers. Unfortunately, if cameras actually photographed in that range, the photos wouldn’t accurately resemble the world the photographer was trying to capture. See the image on the left? It doesn’t look right at all. To ensure that cameras do accurately capture color, the manufacturer installs a hot mirror to bounce back photons invisible to the human eye. This is the camera that gets sold on the market, which accurately captures visible light and nothing else.
04:05 - In order to divide color from this single sensor, we actually coat it with a very fine array of translucent tiles, a color filter array. This lets only certain wavelengths into each pixel. If we think of each tile as a bucket, again, we end up with a group of three buckets. Again, the digital camera has taken light from the real world spectrum and sorted it into a tristimulus model, with a red channel, a green channel, and a blue channel.
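As a rough sketch of how that sorting works (mine, not the talk’s; it assumes a common Bayer-style 2×2 tile layout, though real arrays vary), each pixel records only the one bucket its tile passes:

```python
import numpy as np

# A hypothetical full-color scene, height x width x 3 (R, G, B).
scene = np.random.rand(4, 6, 3)
h, w, _ = scene.shape

# A common Bayer tile: row 0 alternates R,G; row 1 alternates G,B.
# Channel indices: 0=R, 1=G, 2=B.
channel_of_pixel = np.empty((h, w), dtype=int)
channel_of_pixel[0::2, 0::2] = 0  # red tiles
channel_of_pixel[0::2, 1::2] = 1  # green tiles
channel_of_pixel[1::2, 0::2] = 1  # green tiles
channel_of_pixel[1::2, 1::2] = 2  # blue tiles

# What the sensor records: one bucket per pixel, interleaved spatially.
rows, cols = np.indices((h, w))
mosaic = scene[rows, cols, channel_of_pixel]
print(mosaic.shape)  # (4, 6)
```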
04:34 - Now, you can tear apart a camera and rip out the hot mirror that blocks infrared photons, thus returning your sensor to a state where it can catch all the photons, visible and otherwise. In order to photograph our unseen rainbow, we have to do this. I wouldn’t recommend you do this yourself. There’s a shocking number of screws in that densely packed nightmare of electronics, and moreover, there’s a solid chance your camera will never focus again. But there are several shops online that will convert your camera for you, to see in the full spectrum. Now, if we want to photograph in infrared, we can order fairly inexpensive filters called IR pass filters, which block out all the photons that are visible to the human eye. However, if you didn’t take apart and modify your camera, all photons are going to be blocked: the IR pass filter stops the visible light, and the hot mirror stops the infrared. This is all well and good, but once we allow those photons outside the visible realm onto the color filter array, all bets are off. The translucent tiles simply weren’t designed for this, so they don’t know what to do.
05:43 - They all kind of fail the same way, allowing 50% of all infrared light in. This means our buckets overlap completely, with no differentiation in the infrared spectrum. There’s no triplet set of data, no tristimulus model here. This has a weird effect: when all the buckets are equally full, it doesn’t matter if there’s only a little infrared or a whole lot of infrared, because the ratio between channels never changes. The result is a traditional infrared photo, with pixels that are grayscale. Monochromatic. This is a white-balanced version of a conventional infrared photo.
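Here’s a tiny numeric illustration (my own, with made-up numbers) of why that equal leakage collapses to gray: if every tile transmits the same fraction of infrared, the three channel values at any pixel come out identical, and identical R, G, and B is, by definition, gray:

```python
import numpy as np

# Hypothetical infrared energy arriving at three different pixels.
ir_energy = np.array([0.2, 0.5, 0.9])

# Every CFA tile "fails the same way": ~50% IR transmission each.
transmission = np.array([0.5, 0.5, 0.5])  # R, G, B tiles

# Recorded pixel values: each pixel's energy times each tile's pass rate.
pixels = ir_energy[:, None] * transmission[None, :]
print(pixels)  # each row has equal R, G, B -> every pixel is a gray
```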
06:17 - We can’t photograph the secret beyond-the-black-rainbow stripes this way. We would only see a white band of light. See? I tried, using a camera with the hot mirror ripped out. Now, there is an infrared photography community, myself included, that leans into this limitation in style, augmenting the monochromatic infrared with a little visible spectrum and then swapping channels to create a cotton candy color, which is artistically beautiful but does not actually offer a color concept of the invisible spectrum. Here’s where we diverge and truly introduce a novel format.
06:56 - Maybe we could reconstruct an infrared rainbow with three different infrared photographs of our rainbow, shown here in the visible spectrum. Except no filters exist on the consumer market to create the desired buckets on the spectrum. We can use the consumer filters I mentioned before, which start allowing light in at different points on the infrared spectrum, but they all have no upper terminus. They all let in as high a wavelength as the sensor will allow, meaning they all cut off around 1100 nanometers. I used three of these filters, mounting a different one in front of my lens for each shot, to take these photographs.
07:35 - One allows 760 to 1100 nanometers to pass; we’ll call the result image A. One allows 850 to 1100 nanometers to pass; we’ll call the result image B. One allows 960 to 1100 nanometers to pass; we’ll call this, unsurprisingly, image C. And maybe there’s a creative way to computationally derive three separate buckets? I took those three photos without moving the camera, the light, or the subject, and without changing any of the settings on the camera. It was all just the rainbow that I created using a xenon bulb and a simple prism. Now, let’s not forget that we’re dealing with a digital image here.
08:19 - So now we can crack open the code and let our true nerd shine. I used Python to write my code for this, and remember, any time we’re dealing with a digital image, we’re really looking at a multidimensional array of integers, what we might call an m × n × 3 array, meaning the height and width of the image and then three stacks to represent the red, green, and blue channels. Given that these images are really monochromatic, we can simply average those channels to reduce noise and increase sharpness, then deal with each image as a 2D array, m × n. Since we have a wider spectrum than we want, but also know the energy gathered by the spectrum we don’t want, we can simply subtract one from the other to get the spectrum we do want.
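As a sketch of that reduction step (the talk doesn’t show its code, so the imageio library and the file names here are my assumptions):

```python
import numpy as np
import imageio.v3 as iio

def load_monochrome(path):
    """Load an IR shot and collapse its redundant channels to m x n."""
    img = iio.imread(path).astype(np.float64)  # m x n x 3 array
    return img.mean(axis=2)                    # average channels: m x n

# Placeholder file names for the three filtered shots.
image_a = load_monochrome("ir_760nm.tif")  # 760-1100 nm pass filter
image_b = load_monochrome("ir_850nm.tif")  # 850-1100 nm pass filter
image_c = load_monochrome("ir_960nm.tif")  # 960-1100 nm pass filter
```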
09:03 - We can look at this operation as a Boolean operation on two matrices, or a unary cookie-cutter operation. We can do basic subtraction between images to reveal the light energy of a specific spectrum. If you only take one fact away from this talk: it turns out you can isolate a spectrum beyond what optical filtering will allow, by using math. To get the lowest stripe in this infrared rainbow, we can subtract image B from image A, leaving us with the light energy from the low bucket, roughly 760 to 850 nanometers. To get the next stripe in the infrared rainbow, we subtract image C from image B and get the midband bucket, roughly 850 to 960 nanometers. We already have the high-band bucket in image C, so suddenly, possibly for the first time ever, we have a tristimulus model entirely in infrared! Because those optical filters weren’t perfect and had a little rolloff, there’s even going to be enough overlap to have intermediary shades.
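Continuing the sketch above, the band arithmetic is plain elementwise subtraction; the clipping is my addition, on the assumption that sensor noise shouldn’t be allowed to push a band negative:

```python
import numpy as np

# Cookie-cutter subtraction on the m x n arrays from the previous
# sketch: remove the energy we don't want from the energy we have.
low_band = np.clip(image_a - image_b, 0, None)   # ~760-850 nm
mid_band = np.clip(image_b - image_c, 0, None)   # ~850-960 nm
high_band = image_c                              # ~960-1100 nm
```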
10:01 - If we just drop these into a TIFF image file, like a red, green, and blue channel… We’ve imaged an infrared rainbow! Compared to the visible spectrum rainbow, this rainbow is further out, with narrower bands. Now, because this is a novel, nameless method, I’m proposing it be named after the three-cup game: Thimblerig. And using the same technique on a Cracker Jack-loving taxidermied raccoon, we see the domino mask disappear and the cuddly trash panda has less to hide.
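Rounding out the sketch, that final TIFF step might look like this (the per-band normalization and the low-band-to-red channel order are my choices; the talk doesn’t specify either):

```python
import numpy as np
import imageio.v3 as iio

def normalize(band):
    """Scale a band to the 8-bit display range (one simple choice)."""
    return (255 * band / band.max()).astype(np.uint8)

# Stack the three infrared buckets as if they were R, G, and B,
# using the low_band / mid_band / high_band arrays from above.
rainbow = np.dstack([normalize(low_band),
                     normalize(mid_band),
                     normalize(high_band)])
iio.imwrite("infrared_rainbow.tif", rainbow)
```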
10:37 - A pineapple growing in an arboretum becomes a ghostly beautiful thing. A rainforest orchid loses all of its attractors for pollinating insects. Thanks. If you want to hear more about this, I would love to answer questions and chat about image technology. I’m Phil Warren. Grab me after the talk or visit my website, PhilWarrenPhotography.com, or find me on social media. Thanks for listening!