Photography and light, the dynamic duo.
You can’t have a photograph without light. The concept of photography has been credited to the Chinese around the fourth century BC. They used a device later called a “camera obscura”, a Latin term meaning “dark chamber”. Basically, it was a box with a small hole pierced in one side, which let light into the box. The light shone on the wall opposite the opening, projecting an inverted image of the scene outside.
When did photography start?
It wasn’t until the 1800s that actually capturing an image on light-sensitive material became possible. Joseph Nicephore Niepce is credited with creating the first permanent image, called a heliograph, around 1826. A heliograph was basically a sheet of highly polished pewter coated with a concoction of bitumen and lavender oil. Bitumen, which is a kind of asphalt, happens to be light-sensitive: it hardens where light strikes it.
Credit: Wellcome Collection
Niepce’s first heliograph, the view from his window.
Heliograph means “writing with the sun”: helios = sun, graphein = writing.
Today we have something similar: a box with an opening in the lens that lets light in when the shutter curtain inside the camera body opens. The light strikes a light-sensitive surface, such as film or a digital sensor, and leaves a lasting image. Like the original camera obscura, the image is still rendered upside down.
How are images saved?
Film contains light-sensitive silver-halide crystals whose silver ions clump together when light reaches them; these clumps of silver form the image on a photographic negative. As more light strikes the film, more silver ions clump together, and the image on the negative becomes darker.
Instead of film, a digital camera has an array of image sensors that capture incoming light and save it as electrical signals. Each signal is simply an electrical charge, much like the static electricity that builds up on your body as you shuffle across a carpet on a dry day. As light comes in and hits the sensors, the computer in your camera measures both the colour of the incoming light and its brightness.
These sensor elements are typically called pixels. A pixel is only given size and shape by the device you use to display or print it. The information from these millions of sensors is then stored as a long string of numbers. Naturally, the more sensors you have, the more detail your image can record.
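To make that “long string of numbers” concrete, here is a tiny sketch in Python using made-up pixel values. The 2x2 image and the 6000x4000 sensor size are purely illustrative, not any particular camera.

```python
# A digital image is just a grid of numbers: one red, green, and blue
# value per pixel. This toy 2x2 image shows how the camera's millions
# of readings become a long string of numbers in a file.

image = [
    [(255, 0, 0), (0, 255, 0)],      # row 1: a red pixel, a green pixel
    [(0, 0, 255), (255, 255, 255)],  # row 2: a blue pixel, a white pixel
]

# Flatten the grid into the string of numbers that actually gets stored.
flat = [value for row in image for pixel in row for value in pixel]
print(flat)  # [255, 0, 0, 0, 255, 0, 0, 0, 255, 255, 255, 255]

# Resolution in megapixels for an illustrative sensor size:
width, height = 6000, 4000
print(f"{width * height / 1_000_000:.0f} megapixels")  # 24 megapixels
```

More pixels means more numbers, which is why higher-resolution cameras produce larger files.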
There are three main types of sensors: the CCD (charge-coupled device), the CMOS (complementary metal-oxide-semiconductor) sensor, and the Foveon X3 image sensor. Each has its advantages, but suffice it to say that millions of individual sensor elements are used to create a digital image.
The advantage of the Foveon system is that, unlike a conventional sensor whose elements each record only one colour at a time (red, green, or blue), it has three stacked layers that directly capture all the colours and densities at every point, giving a richer, more detailed image.
But here is the big rub.
Our eyes process a scene very differently from how a camera does.
In fact, that is the number one reason people are disappointed with their pictures: the photo doesn’t look like what they remember seeing.
What is light?
We all know that this magical entity called “light” comes from the sun. It lets you see an object’s shape because it bounces off the object and everything around it, and it lets you see the object’s colour because the object absorbs every colour except the one it reflects back to you.
We know from our early science classes that in the 17th century the English scientist Sir Isaac Newton argued that light was made of particles, while the Dutch physicist Christiaan Huygens argued it was made of waves. The conundrum of which it really is still hasn’t been fully resolved.
In the 19th century, the Danish scientist Orsted observed that an electric current creates a magnetic field. Later, the English scientist Faraday found that a changing magnetic field creates an electric field. The Scottish physicist Maxwell brought the two concepts together as “electromagnetism”. But Maxwell took it a step further and used these ideas to explain light itself.
Light is an oscillating electric field combined with a magnetic field, travelling through space; it can also be described as photons, which have no mass and move at tremendous speed. That is why we think of a photon as a “unit of light” and of an electromagnetic wave as a “light wave”, and why light can be reflected and refracted.
How do we see colour?
The first question we need to answer is: how do we see? In very simple terms, under low-light conditions the rod-shaped receptors in our eyes register black, white, and grey tones. That is why our night vision works in dim light but shows very little colour. The cones in our eyes (there are three types of cones) are the primary receptors; under normal light they give us sharp sight and the sensation of colour. These electrical signals are sent to the brain and interpreted as an image in the colours that are reflected. The three cone types are most sensitive to red, green, and blue wavelengths, which is why cameras and screens are built around those three colours.
We see only a small slice of all the wavelengths that reach us. The length of the “wavelength” determines the colour that we see.
All the visible wavelengths combined give us “white light”.
Violet is the shortest of the visible wavelengths, while red is the longest. As I mentioned earlier, when white light hits an object, all of the wavelengths are absorbed except the one reflected back to us, and that reflected wavelength is the colour of the item.
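The visible spectrum runs from roughly 380 to 700 nanometres. As a rough sketch, here is that spectrum as a lookup table; the band edges are common rounded values and vary a little between sources.

```python
# Approximate wavelength bands (in nanometres) for the visible spectrum,
# from the shortest (violet) to the longest (red).

bands = [
    (380, 450, "violet"),
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 620, "orange"),
    (620, 700, "red"),
]

def colour_of(wavelength_nm):
    """Return the colour name for a visible wavelength, or None."""
    for low, high, name in bands:
        if low <= wavelength_nm < high:
            return name
    return None  # outside the visible range (infrared, ultraviolet, ...)

print(colour_of(400))  # violet, the short end
print(colour_of(650))  # red, the long end
```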
The sky is blue because blue light has one of the shortest wavelengths, so it is scattered all around the sky by the molecules in the Earth’s atmosphere. Early in the morning and late at sunset, the light has to travel a much longer path through the atmosphere; by the time it reaches us, most of that short-wavelength blue has been scattered out and away, leaving the sky with warm red and yellow tones.
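This scattering (called Rayleigh scattering) grows very quickly as the wavelength shrinks: it is proportional to 1 divided by the wavelength to the fourth power. A quick back-of-the-envelope calculation, using typical wavelengths for blue and red light, shows just how lopsided the effect is.

```python
# Rayleigh scattering strength is proportional to 1 / wavelength**4,
# so short (blue) wavelengths scatter across the sky far more than
# long (red) ones. Typical representative wavelengths:

blue_nm = 450  # blue light
red_nm = 650   # red light

ratio = (red_nm / blue_nm) ** 4
print(f"Blue light scatters about {ratio:.1f}x more than red light")
# Blue light scatters about 4.4x more than red light
```

That factor of roughly four is why the daytime sky is blue, and why the long sunset path strips the blue out and leaves the reds and yellows behind.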
Think about these things as you go take pictures. We will take you to the next level in our next post.
To your great shots
Bob and Chuck
We would love to hear from you. If you have any questions or comments let us know.