Shedding Light and Unleashing the Power of the Photon Part 3

Part 3: Interpreting Light

May 16, 2024

In the previous installment of this series, I gave a brief overview of how we can mathematically quantify light and assign quantitative measurements to it. Light enables us to create an understanding and perception of the world around us, a sensory output we refer to as sight or vision. Our understanding of light in a quantitative manner sets the stage for us to look closer at how we, as humans, interact with light and use it to interpret our surroundings. The process of taking in and decoding the information provided by the environment around us is very complex, but my goal is to provide a basic breakdown that serves as a foundation for understanding how light impacts our perception of our environment and how we take in and process this stimulus. The workflow for this process is actually quite simple and can be broken down into four steps:


Source → Modifier → Sensor → Interpreter


The workflow demonstrates the flow of information as light is emitted from a source, interacts with the surrounding environment, and produces various forms of data/inputs that are then received and interpreted by humans. In this article, I am primarily going to focus on the source and the sensor, but we will still define all four:


  1. Source: any device/object that serves as a source of illumination by emitting electromagnetic energy that contains wavelengths associated with the visible spectrum

  2. Modifier: object/environment that interacts with the light as it reflects and refracts

  3. Sensor: the human eye

  4. Interpreter: the human brain and visual pathway


Let’s start with the source. As stated above, the source is defined as any device/object that serves as a source of illumination by emitting energy that contains wavelengths associated with the visible spectrum. Basically, this is just a fancy way of saying anything that emits light. The most common sources are sunlight, candle/fire light, and artificial sources (LED, fluorescent, halogen, incandescent, etc.), but what distinguishes one source from another? Aside from the obvious “the sun is not a light bulb” kind of answer, each source has a specific associated spectral power distribution (SPD) that functions as a “fingerprint” for that light source. The SPD plots relative power output against wavelength (nm) to produce a curve that depicts a source’s power output at each respective wavelength on the visible spectrum. All of these wavelengths “combine” to produce what we refer to as “white light,” a colloquial term because no light is actually “white.” In fact, most white light will still appear to have a general hue. For example, if a source’s SPD is shifted more to the right, with peaks in the 600-700nm range, the light will have a slightly more orange/warm hue; whereas, if it’s shifted more to the left, with peaks in the 400-480nm range, the light will have a slightly more blue/cool hue. The SPD for each source is unique, and rather than going through the general trends associated with the common light sources I listed above, we will go over the two most common sources in modern design: daylight and LED light.

  1. Daylight SPD: generally even/uniform power distribution across wavelengths with a moderate peak around wavelengths associated with blue light (450nm)

  2. LED Light SPD: spikes/peaks in power distribution usually around wavelengths associated with red (700nm), green (550nm), and blue (450nm) light depending on the specific source


It is important to note these general trends because they begin to demonstrate how artificial light sources can impact our visual perception. The invention of the blue diode functioned as the catalyst for using LED technology to create energy-efficient white light. This is great for energy reduction in design, but it can potentially impede color rendering because rather than a smooth power distribution across all wavelengths, as with sunlight, there are spikes in specific wavelength ranges.
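
To make the SPD idea concrete, here is a minimal sketch in Python. All of the numbers are illustrative placeholders I made up for this example, not measured data, and the LED model here is a blue-pump-plus-phosphor design; an RGB LED would instead show the three separate peaks described in the list above.

```python
import numpy as np

# Coarse wavelength grid spanning the visible spectrum (nm).
wavelengths = np.arange(400, 701, 10)

# Toy daylight SPD: roughly uniform power with a gentle bump near 450nm.
daylight = 0.8 + 0.2 * np.exp(-((wavelengths - 450) ** 2) / (2 * 40.0 ** 2))

# Toy white-LED SPD: a sharp blue pump at 450nm plus a broad phosphor
# hump near 580nm. Purely illustrative values, not a measured source.
led = (1.0 * np.exp(-((wavelengths - 450) ** 2) / (2 * 10.0 ** 2))
       + 0.6 * np.exp(-((wavelengths - 580) ** 2) / (2 * 60.0 ** 2)))

for name, spd in [("daylight", daylight), ("LED", led)]:
    print(name, "peaks at", wavelengths[np.argmax(spd)], "nm")
```

Plotting these two arrays against `wavelengths` would reproduce the smooth-versus-spiky contrast described above: the daylight curve stays relatively flat, while the LED curve rises and falls sharply.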


Once the light is emitted from the source, it enters the environment and reflects and refracts off modifiers. The specific modifier is simply whatever object is currently your focus and within your visual field. Similar to the source, the object also has a power distribution associated with it that plots relative power output against wavelength (nm); however, rather than measuring the electromagnetic power being emitted, it measures the electromagnetic power being reflected from the object’s surface. This plot is defined as the spectral reflectance distribution (SRD) and is determined by intrinsic properties of the object itself. For example, the SRD for a Red Delicious apple is going to be shifted to the right, with the majority of the power output in the 600-700nm range. The SRD helps define which wavelengths, and at what intensities, the source’s light is reflected back into the environment and toward the sensor to be further interpreted.
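
Numerically, this source-modifier interaction is just an elementwise product: at each wavelength, the power reflected toward the sensor is the source’s emitted power multiplied by the object’s reflectance at that wavelength. A minimal sketch, again with made-up illustrative curves:

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)  # visible spectrum, 10nm steps

# Toy, flat daylight-like SPD and a toy Red Delicious SRD that reflects
# long (600-700nm) wavelengths strongly; both are illustrative guesses.
spd = np.ones_like(wavelengths, dtype=float)
srd = 0.05 + 0.9 / (1.0 + np.exp(-(wavelengths - 600) / 15.0))

# At each wavelength, reflected power = emitted power x reflectance.
reflected = spd * srd
print("reflected spectrum peaks at", wavelengths[np.argmax(reflected)], "nm")
```

Swapping in the spiky LED SPD from the earlier sketch instead of the flat one changes the reflected spectrum even though the apple’s SRD stays the same, which is exactly why the same object can look different under different sources.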


Now that we have made it halfway through the workflow, let’s summarize what we have discussed so far. Light is emitted from a source and can be characterized by an SPD that graphically demonstrates the relative power of each wavelength on the visible spectrum that is being emitted by the source. The light enters into the environment and interacts with a modifier that has an associated SRD that determines which wavelengths and corresponding intensities from the source are reflected to the sensor. The source and modifier characteristics determine what stimulus information reaches the sensor, at which point the sensor then takes in the information and sends it to be decoded by the interpreter in the second half of the workflow.


As defined in the beginning of this article, the sensor is the human eye. Before we look at how light enters and interacts with the eye, it’s important to understand the anatomy of the eye. I have provided a simple anatomical breakdown below that briefly describes the function of each structure:


  1. Pupil: the opening that lets light into the eye, through the lens and back to the retina; it adjusts size based on environmental light levels

  2. Lens: focuses light onto the retina (more specifically, the macula); its curvature adjusts to bring objects into focus

  3. Retina: a thin sheet of interconnected nerve cells that converts light energy into electrical impulses to be transmitted along the optic nerve; contains the macula and the fovea

    1. Macula: where the focused image falls

    2. Fovea: a depression in the center of the macula associated with the sharpest vision; the majority of cones sit within about 2 degrees of the fovea, and as you move outward the retina becomes mostly rods with a few scattered cones

  4. Photoreceptors: cells in the retina that respond to light

    1. Rods: responsible for black-and-white vision

    2. Cones: responsible for color vision

    3. ipRGCs (intrinsically photosensitive retinal ganglion cells): non-visual photoreceptors tied to circadian rhythm

  5. Optic Nerve: also referred to as the second cranial nerve; it attaches to the back of the eye and sends information to the brain via electrical impulses


In addition to an anatomical understanding of the eye, we must also establish a baseline understanding of the relationship between the stimulus (light/radiant energy) and the sensor (the human eye). The stimulus is radiant energy carrying wavelength (color) and intensity (amount of energy) information, and its wavelengths can be subdivided into short/S (blue), medium/M (green), and long/L (red) bands. Once the stimulus reaches the sensor, the anatomical structures within the eye take in the information and code it into signals that can be further understood by the interpreter. The sensor requires a stimulus, and how the sensor responds is predicated on the specific characteristics of that stimulus; therefore, the information taken in by the sensor is dependent on the source and modifier characteristics.
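
One simple way to model how a receptor turns that stimulus into a “number” (my own illustration, heavily simplified from real colorimetry) is as a weighted sum: the incoming spectrum multiplied by that receptor type’s sensitivity curve and summed across wavelengths. The sketch below uses Gaussian stand-ins for the actual S/M/L cone sensitivity curves, which peak near roughly 440nm, 535nm, and 565nm respectively:

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)

def sensitivity(center, width):
    # Gaussian stand-in; real cone fundamentals are broader and asymmetric.
    return np.exp(-((wavelengths - center) ** 2) / (2 * width ** 2))

# Approximate peak sensitivities: S ~440nm, M ~535nm, L ~565nm.
cones = {"S": sensitivity(440, 30), "M": sensitivity(535, 40),
         "L": sensitivity(565, 40)}

stimulus = sensitivity(610, 25)  # a toy reddish stimulus spectrum

for name, curve in cones.items():
    # Each cone type's response: stimulus weighted by sensitivity, summed.
    print(f"{name}-cone response: {np.sum(stimulus * curve):.2f}")
```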


I know it may seem like a lot of extraneous information, but understanding these basics makes it easier to follow the flow of information within the sensor itself. The pupil allows light to enter the eye, at which point the lens focuses the light onto the retina, more specifically the macula. The retina contains rods and cones that respond to the wavelength and intensity of the light, with the majority of cones located at the center in the fovea. The cones and rods absorb light through chemical pigments called photopsin and rhodopsin, respectively. It is important to note that these photoreceptors respond to both intensity (how much energy) and wavelength (color) and effectively produce a “number” as output: rods produce a single number, while cone outputs are averaged. I will touch more on this as we look closer at cones. This information is then coded into electrical signals that are sent along the optic nerve to the brain to be further interpreted.


But how do the photoreceptors make sense of the radiant energy entering the eye? Rods are pretty simple: they are characterized by high sensitivity and low acuity and are responsible for black-and-white vision, so let’s focus on cones. Cones are responsible for color vision and are characterized by lower sensitivity and higher acuity. There are three types of cones, each responsive to one of the three wavelength bands: S (blue), M (green), and L (red). Even though these bands have colors attached to them, the cones themselves don’t determine the color, because wavelength itself is colorless. Rather, they respond to wavelength and intensity information and work together to communicate. A single cone cannot distinguish one wavelength from another, so cones work in color opponency channels (blue-yellow, red-green, achromatic) and pass the information from these channels along the rest of the visual pathway, all the way to the visual cortex, to be interpreted. This information is reinforced through chromatic opponency (context affects the way that we see). Chromatic opponency occurs in receptive fields that contain groups of cones working together to encode information to be transmitted via ganglion cells back to the brain. Each receptive field contains a central disk and a concentric ring that respond in opposition to one another, increasing or decreasing the firing of a particular ganglion cell.
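
To give a feel for the opponency channels, here is a toy calculation (a common textbook simplification, not the literal retinal circuit) that combines S/M/L cone responses into the three channels by differencing and summing:

```python
# Toy cone responses (e.g., the S/M/L outputs from the previous sketch).
S, M, L = 0.1, 1.8, 3.2

# Textbook-style opponent combinations; the real retinal wiring is more
# intricate, but the differencing idea is the same.
red_green   = L - M              # red vs green channel
blue_yellow = S - (L + M) / 2    # blue vs yellow channel
achromatic  = L + M              # luminance-like channel

print(f"R-G {red_green:+.2f}, B-Y {blue_yellow:+.2f}, A {achromatic:.2f}")
```

For this reddish stimulus the red-green channel comes out positive and the blue-yellow channel strongly negative, which is the kind of comparative signal, rather than any single cone’s value, that travels down the visual pathway.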


Once the rods and cones have taken in the stimulus information and translated it into coded impulses, the information is sent along the optic nerve to the rest of the visual pathway in your brain, the interpreter. The information goes from the optic nerve to the optic chiasm, optic tracts, lateral geniculate bodies, optic radiations, and finally the visual cortex. In the context of this workflow, the visual cortex functions as the primary interpreter in your brain as it works to receive, decode, process, and interpret the information gathered in the eye and transmitted along the visual pathway. It is responsible for producing the image, color, and overall impression of the environment around us and is the final step in interpreting light. 


That concludes the workflow for how we interpret and understand light, starting at the source and ending with the interpreter. However, there are two additional concepts that I want to touch on because they also influence our perception and interpretation of our environment:


  1. Human Spectral Sensitivity

  2. Luminance vs Brightness


Human spectral sensitivity is represented by a luminous efficiency curve that relates wavelength to perceived brightness. Basically, it demonstrates that our eye does not respond to all wavelengths equally, and once you reach the full “potential” for a specific wavelength, adding more won’t make a difference. In design, understanding this concept can help minimize wasted energy, because adding more light doesn’t translate into a greater desired effect. Additionally, understanding the difference between luminance and brightness furthers our understanding of how we perceive light by taking into account the entire environment and not just a single object/modifier in isolation. Luminance is defined as the amount of light that reaches the eye; brightness is defined as the visual perception/interpretation of that light in the context of the surrounding environment. Contrast and finishes within the environment will impact how “bright” something appears, which is important to note in design, as we can achieve vastly different visual looks with the same light levels just by changing other components in the environment.
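
A quick sketch makes the “not all wavelengths are equal” point numerically: weight two equal-power stimuli by an approximation of the eye’s sensitivity curve (here a Gaussian stand-in peaking near 555nm, where photopic sensitivity actually peaks; the true curve is tabulated by the CIE) and compare the visually effective output:

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)

# Gaussian stand-in for the photopic sensitivity curve, which peaks
# near 555nm. Illustrative shape only, not the official CIE table.
v_lambda = np.exp(-((wavelengths - 555) ** 2) / (2 * 45.0 ** 2))

def effective_output(spd):
    # Visually effective power: SPD weighted by spectral sensitivity.
    return np.sum(spd * v_lambda)

# Equal radiant power concentrated at 550nm vs at 690nm:
green    = np.where(wavelengths == 550, 1.0, 0.0)
deep_red = np.where(wavelengths == 690, 1.0, 0.0)
print(effective_output(green), effective_output(deep_red))  # green wins big
```

The same watt of radiant power “counts” far more near the middle of the visible spectrum than at its edges, which is exactly why piling on power at a wavelength the eye barely responds to is wasted energy.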

Per usual, I have presented a lot of information so let’s summarize the workflow:


  1. We interpret light via a four-step workflow (light source → object → eye → brain/visual cortex) that takes electromagnetic energy emitted from a source and creates a visual output of our surrounding environment, shaping how we view and understand the world around us.

  2. A light source can be characterized by a spectral power distribution (SPD) that graphically describes the relative power of each wavelength on the visible spectrum being emitted by the source.

  3. An object can be characterized by a spectral reflectance distribution (SRD) that graphically describes the relative power of each wavelength on the visible spectrum being reflected by the object.

  4. Together, the SPD and the SRD determine the electromagnetic energy/light information that reaches and enters the eye.

  5. Photoreceptors (rods and cones) in the eye take in the wavelength and intensity information from the light entering the eye and translate it into electrical signals that are transmitted via the optic nerve.

  6. The signals travel along the visual pathway of the brain until they reach the visual cortex, where they are decoded to create the colors, shapes, textures, and overall images of the world around us.

Based on the workflow and supporting information, we can derive big takeaways and answer the question, “why does this matter?”

  1. Our perception of something is dependent on the source (SPD) and object (SRD) characteristics; if you change one of these, it will change how we view something. The most common example I use to illustrate this is when you go to a fancy grocery store. How many times have you bought bananas that looked perfectly ripe and yellow only to get to the car to see they have miraculously turned an off shade of brown? Upscale grocery stores tune their lighting, shifting the SPD toward wavelengths that correspond with the food they are lighting (red for apples or meat, green for leafy vegetables, etc.). Your bananas didn’t go brown in 30 seconds, but your source changed and was no longer shifted towards “yellow” light.

  2. Know your source and the quality of light associated with it. Daylight provides the most consistent relative power output for all wavelengths of light, while artificial sources have peaks and valleys as you go along the spectrum because we artificially combine different wavelengths of light to create “white light.”

  3. Your eyes don’t actually “see” anything; they take in external sensory information to be processed and decoded by your brain via photoreceptors (rods and cones) located in the retina.

  4. Human spectral sensitivity dictates that “more” doesn’t mean better. Our eye does not respond to all wavelengths equally, and once we’ve reached full potential for a wavelength, adding more won’t make a difference. Stop over-lighting everything; understand how to light with less, and you can achieve a more visually appealing and comfortable space.

  5. You can achieve different visual effects with the same source and object by changing the surroundings: luminance (the amount of light reaching the eye) remains the same, but brightness (the visual perception and interpretation of the light) will change. You can play with contrast and finishes to make something appear lighter, darker, more prominent, and so forth without changing the source characteristics or light levels, as shown in the sketch after this list.
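
As a rough illustration of point 5 (my own toy example, using Weber contrast as a crude stand-in for perceived brightness rather than any formal brightness model), the same luminance can read as prominent or muted depending on the surround:

```python
def weber_contrast(target, background):
    # Weber contrast: how strongly a target stands out from its surround.
    return (target - background) / background

# The same target luminance (say 100 cd/m^2) against two surrounds:
print(weber_contrast(100, 20))  # dark surround  -> contrast 4.0, reads bright
print(weber_contrast(100, 90))  # light surround -> contrast ~0.11, reads dim
```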


At this point in the series, we have a baseline understanding of the math and science behind light: we can both mathematically quantify light and understand how humans interpret it to create an image of the world around us. In the next part of this series, we will start to take a closer look at how we can use this science-based foundation to manipulate light in a space to influence how we feel, operate, and respond to specific environments.


with love,
