An Intuitive Metric for Lumen Maintenance

By Eric Bretschneider

For better or for worse, the lighting industry commonly associates the lifetime of LEDs and LED-based lighting products with L70 – the time required for the luminous flux of an LED-based device to depreciate to 70% of its initial value. Admittedly, the failure of other components, particularly those that provide power to the LEDs, is more likely to determine the overall lifetime of an LED-based component or luminaire. However, only lumen maintenance is considered here.

That being said, I firmly believe that simple, intuitive metrics tend to get used more often simply because they are in fact intuitive. Unfortunately, all we have for TM-21 and TM-28 is L70. While L70 is intuitive and easy to understand, the details of how it is reported under the standards mean that we may often be stuck comparing products with the exact same reported L70.

Both TM-21 and TM-28 model luminous flux behavior using:

\Phi = Be^{-\alpha t},       Eq (1)

where:
Φ = luminous flux
B = initial constant
α = decay rate constant [hr⁻¹]
t = time [hr]

From Eq (1), L70 can be calculated:

L_{70} = \frac{\ln(B/0.7)}{\alpha}       Eq (2)

Because of uncertainties in the parameters, the maximum extrapolation limit for L70 is 6 times the total test duration of the dataset used to calculate the parameters in Eq (1). Given that most datasets range from 6,000 to 10,000 hours, this in turn limits reported L70 values to the maximum extrapolation limit, or 36,000 to 60,000 hours.
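
Expressed as code, the reporting rule is just a cap on Eq (2). Below is a minimal Python sketch; it is a simplification, since the actual standards include details (such as sample-size dependence of the limit) that are not modeled here.

    import math

    def reported_l70(alpha: float, b: float, test_hours: float) -> float:
        """Reported L70 [hr]: the Eq (2) value, capped at the 6x extrapolation limit."""
        l70 = math.log(b / 0.7) / alpha      # Eq (2)
        return min(l70, 6.0 * test_hours)    # maximum extrapolation limit

    # Two hypothetical products with different decay rates, same reported L70:
    print(reported_l70(5.236e-6, 1.0, 8_000))  # 48,000 (capped; Eq (2) gives ~68,000)
    print(reported_l70(7.120e-6, 1.0, 8_000))  # 48,000 (capped; Eq (2) gives ~50,000)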

LED-based products may be evaluated by comparing L70 values, but when products have the same reported L70 (i.e., L70 > 36,000 hours) the only thing left is looking at α values.

Logically, a smaller value of α represents a product with a slower lumen depreciation rate, but how exactly do we compare different values of α? As a first step, we can calculate lumen maintenance for different time intervals, create data tables, and then compare the entries. That makes sense, but it seems like a lot of work, and it definitely doesn’t sound like it will allow a simple, intuitive comparison. Perhaps a bit of mathematical analysis can help us figure out a better approach.

Instead of calculating a table, let’s plot the data and see if it gives us any hints. To make things easy, we will assume B = 1.0. This isn’t a terrible assumption, since in most cases 0.98 ≤ B ≤ 1.02. Let’s take a look at α = 1.189 × 10⁻⁵. I would expect this value to mean nothing to most readers, but we’ll come back to that later.

Plotting the data gives us the figure shown below.

[Figure: lumen maintenance calculated from Eq (1) with B = 1.0 and α = 1.189 × 10⁻⁵ over the initial test period]

On cursory inspection, this looks an awful lot like a straight line. In fact, a least-squares fit of the data would give a line with a correlation coefficient of r² = 0.9999 and would match the data points to within 0.02%. That’s what I call a good fit!

For those who are mathematically inclined, this shouldn’t be a surprise. The Taylor series expansion for an exponential function is given as:

e^{-\alpha t} = 1 - \alpha t + (\alpha t)^{2}/2! - (\alpha t)^{3}/3! + \cdots

When αt << 1, (αt)ⁿ ≈ 0 for n ≥ 2, so the exponential is very nearly the straight line 1 − αt. At first glance, all we need to do is keep extending the line to L70 or our extrapolation limit, whichever comes first. Trying this, it looks like we reach L70 at about 26,250 hours (see next figure). At least we don’t have to worry about the extrapolation limit.

[Figure: the straight-line extrapolation reaching L70 at about 26,250 hours]

At this point, you may be wondering why the working groups for TM-21 and TM-28 didn’t just use a linear equation. Extending our data table gives the answer. In the plot below, the filled diamonds represent the initial data we used to calculate the straight line, and the open diamonds represent the data extended all the way out to L70. Doing this, we find that the “true” value of L70 for this example is 30,000 hours. Wasn’t that completely obvious from the fact that α = 1.189 × 10⁻⁵?

[Figure: initial data (filled diamonds) and data extended to L70 (open diamonds), with the true L70 at 30,000 hours]

So, while a linear fit of the initial data might seem a reasonable approach, it represents an error in L70 of almost 12.5%. Close, but I’m not sure people would want to “discount” the performance of their LED products by that much.
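
A short numpy sketch reproduces the comparison. The 1,000-hour measurement spacing is an assumption, and the fitted values shift slightly with the spacing chosen:

    import numpy as np

    alpha, b = 1.189e-5, 1.0
    t = np.arange(0.0, 6001.0, 1000.0)         # an assumed 6,000-hour dataset
    phi = b * np.exp(-alpha * t)               # Eq (1) lumen maintenance

    slope, intercept = np.polyfit(t, phi, 1)   # least-squares straight line
    r2 = np.corrcoef(t, phi)[0, 1] ** 2        # ~0.9999 over the initial data

    l70_linear = (0.7 - intercept) / slope     # line crosses 0.70 near 26,000 hr
    l70_true = np.log(b / 0.7) / alpha         # exponential answer: ~30,000 hr
    print(r2, l70_linear, l70_true)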

Looking at all the data points out to L70, it still looks like a great approximation of a straight line. In fact, you would have a correlation coefficient of r² = 0.9979 and match the lumen maintenance to within 0.8%. As the value of α decreases, L70 increases and the curve becomes an even better approximation of a straight line. If only we could sort out how to convert α into the slope of a straight line.

This is actually pretty easy if you’re not afraid of a little math. We know that at t = 0 hours lumen maintenance = 1.0. We also know by definition at L70 lumen maintenance = 0.70. From TM-21 and TM-28 we also know how to calculate L70 from Eq (2) above. That means we should have everything we need to calculate the average slope. Let’s call this new metric δ: the average decay rate. We can calculate δ using:

\delta = \frac{0.3}{L_{70}} = \frac{0.3\,\alpha}{-\ln(0.7)} \approx 0.8411\,\alpha \;[\mathrm{hr^{-1}}] = 841.1\,\alpha \;[\mathrm{kh^{-1}}],

where:
α = decay rate constant from TM-21 or TM-28 [hr⁻¹]
δ = average decay rate [%/1,000 hrs, %/kh]

It turns out that all we have to do is multiply the decay rate constant (α) by a single number to convert it to an average decay rate. Best of all, the new number is simple and easy to understand. With units of %/kh, estimating the change in luminous flux over, say, 10,000 hours or 20,000 hours now becomes a math problem that most of us can do in our heads.
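
As a minimal sketch of the conversion (the factor 841.1 yields a fraction per 1,000 hours; multiplying by 100 expresses it in percent):

    import math

    def average_decay_rate(alpha: float) -> float:
        """Average decay rate, delta, in %/kh from the TM-21/TM-28 alpha [1/hr]."""
        # 0.3 / (-ln 0.7) = 0.8411 per hr; x1000 -> per kh; x100 -> percent
        return 0.3 / (-math.log(0.7)) * alpha * 1000 * 100

    def maintenance_estimate(delta_pct_per_kh: float, hours: float) -> float:
        """Approximate lumen maintenance [%] after `hours`, using the linear model."""
        return 100.0 - delta_pct_per_kh * hours / 1000.0

    delta = average_decay_rate(1.189e-5)        # -> 1.0 %/kh
    print(maintenance_estimate(delta, 20_000))  # -> ~98% after 20,000 hours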

Keep in mind that the difference between this average linear model and the exponential model of Eq (1) is less than 1% everywhere from t = 0 hours out to L70 or the extrapolation limit, whichever comes first. I would argue that this is remarkably accurate for such a simple model and that it should be more than adequate for quick comparisons.

In practice, the initial constant (B) does have an effect, but in most cases 0.98 < B < 1.02, which means it would change the value of the average decay rate by about 5% or less. That amounts to a worst-case error in estimating lumen maintenance of about 1% over a time span of 20,000 hours. I wouldn’t expect this to be a significant issue for quick comparisons; if product performance is that close, then you may just want to flip a coin. When using the average decay rate, it is still important to follow the rules for extrapolation limits.

The intent of this paper is to give the industry an option for a simple, intuitive metric that allows rapid comparisons of different products. Would you rather compare two products with α1 = 5.236 × 10⁻⁶ and α2 = 7.120 × 10⁻⁶ (LM-80 duration for both = 8,000 hours), both of which have L70 > 48,000 hours, or two products with δ1 = 0.44%/kh and δ2 = 0.60%/kh? In 10,000 hours the difference in lumen maintenance between the products will be about 1.6%, and in 20,000 hours it will be about 3.2%.

Going back to our initial example, α = 1.189 × 10⁻⁵, we find that δ = 1.189 × 10⁻⁵ × 841.1 ≈ 0.010 per kh, or 1.0%/kh. In hindsight, it might now be obvious how and why I selected this value for the example.

Concerns in the Age of the LED: Temporal Light Artifacts

By Dr. James M. Gaines

Flicker and stroboscopic effect are presently hot topics in lighting, along with other subjects like blue light (subject of a recent FIRES article). A National Electrical Manufacturers Association (NEMA) standard, NEMA 77[1], addresses measures for temporal light artifacts (TLA), which is an umbrella term covering both flicker and stroboscopic effect (as well as phantom arrays; see Comment at the end). The NEMA metrics for flicker (short-term flicker indicator, Pst) and stroboscopic effect (Stroboscopic Visibility Measure, SVM) are both based on experiments done with many human observers, to measure average human sensitivity to flicker and stroboscopic effects.

The strict definitions of flicker, stroboscopic effect, and temporal light artifact are given in CIE TN 006[2]. Flicker is the perception that a light source is varying in intensity with time, when neither the viewer’s eyes, nor the light source, nor the objects in the lit space are moving. It may be observed at light modulation frequencies up to about 80 Hz. Stroboscopic effect is the perception that a moving object changes in intensity as it moves through the lit space (Figure 1), when the eye is not moving. If the light source is modulated by a square wave, for instance, then a moving object will appear as an array of objects, instead of a streak, because the object is in different locations when the light is on than when it is off. Stroboscopic effect plays a role from frequencies near 80 Hz up to about 2,000 Hz in typical office environments.

The word flicker has often been used imprecisely to mean both flicker, as described above, and stroboscopic effect. These are two separate physical effects and deserve precise definitions. (Flicker creates an image in one spot on the retina, and the change in light must be slow enough that this spot can react to the change sufficiently for flicker to be detected. Stroboscopic effect creates a series of images, each in a different spot on the retina, which makes it detectable at much higher frequencies.) Another confusing term sometimes used is invisible flicker, which is apparently used to describe stroboscopic effect, even though it is not invisible (although many people are unable to describe it without demonstrations and instruction).

Figure 1: An example of the stroboscopic effect, measured with a short duty cycle and 100% modulation.

At the detectability threshold, where Pst = 1 or SVM = 1, a hypothetical average observer* can detect flicker or stroboscopic effect, respectively, with a probability of 50%. For these metrics, a lower value means the effect is less likely to be noticed, and a higher value means it is more likely to be noticed. So isn’t it a weak requirement, then, to set limits of Pst = 1 or SVM = 1.6 (these are the guidelines given in NEMA 77[1]), if people will experience flicker or stroboscopic effect 50% (or more) of the time?

To explain Pst = 1 in simple terms: Imagine that an observer is given a dial that adjusts the amount of flicker from a light source. The observer is then asked to adjust the dial so that the flicker is just not visible, then asked to turn the knob a tiny bit back to make the flicker just visible. The detectability threshold for a hypothetical average observer, therefore, is the flicker condition where 50% of observers will say that it flickers and 50% will say that it doesn’t.

Flicker – Pst

The short-term flicker indicator, Pst, is defined in the IEC 61000-4-15[3] specification of a flickermeter, which simulates the behavior of an incandescent lamp, as perceived by an average observer, when the mains voltage is distorted. IEC 61000-3-3[4] defines the maximum levels of voltage disturbances that electrical equipment (e.g., washing machines, cookers, air conditioners) may generate on the mains voltage. Pst = 1 when the maximum allowed level of distortion is applied to the mains voltage waveform. IEC TR 61547-1[5] adapts IEC 61000-4-15[3] for use with light sources of any technology. The document provides a way to evaluate the immunity of a light source to the effects of disturbances on the mains voltage (which are the most important root cause of flicker).

Any lamp with a full-wave diode bridge at the input will have low flicker (Pst approximately 0) when undistorted mains current drives the lamp. No frequencies below 100 or 120 Hz (2 times mains frequency) are present on the mains. Because flicker is only visible to the human eye below approximately 80 Hz, it is impossible for such a lamp to flicker when operated with undistorted mains current.
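
The point is easy to demonstrate numerically. Below is a minimal numpy sketch, assuming an idealized lamp whose light output simply follows the full-wave-rectified mains waveform:

    import numpy as np

    fs, f_mains = 10_000, 50                   # sample rate [Hz], mains frequency [Hz]
    t = np.arange(0, 1.0, 1 / fs)              # one second of signal
    light = np.abs(np.sin(2 * np.pi * f_mains * t))  # rectified-sine light output

    spectrum = np.abs(np.fft.rfft(light)) / len(t)
    freqs = np.fft.rfftfreq(len(t), 1 / fs)

    # AC content in the flicker-sensitive band (above 0, below ~80 Hz):
    flicker_band = spectrum[(freqs > 0) & (freqs < 80)].sum()
    print(flicker_band)  # ~0: the first AC component sits at 100 Hz (2 x mains)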

In most actual conditions, mains distortions will not be present. IEC TR 61547-1[5] gives examples of flicker for three typical lamps (an incandescent, a CFL and an LED), measured with no voltage fluctuations applied, showing that Pst is very small in each case.

When the five voltage fluctuation test conditions defined in IEC TR 61547-1[5] are applied, Pst increases for these same lamps to a worst-case Pst of 1.0, 0.54 and 0.47, respectively. The incandescent lamp gives the expected behavior (Pst = 1). The particular CFL and LED lamps chosen for this study have built-in immunity that gives Pst less than 1, and lower flicker than an incandescent lamp, when the specified voltage fluctuations are present. (This is not the case for every CFL and LED lamp! Some have low Pst when tested with undisturbed mains voltage, but Pst higher than 1 when the voltage fluctuations are applied.) Unlike IEC TR 61547-1[5], NEMA 77[1] does not define specific voltage fluctuations for testing the immunity of light sources, but it does warn the designer that voltage fluctuations should be considered.

The important point is that Pst may become large only when there is both 1) a disturbance on the mains voltage, and 2) a light source with insufficient immunity to prevent the disturbance from appearing as a visible light disturbance (flicker). If the mains voltage is not disturbed by lower frequency signals, then Pst will be very low. Thus, people will not observe flicker 50% of the time, but only in those rare (and generally temporary)† conditions where a fluctuation caused by other electrical equipment is present on the mains voltage at a level sufficient to cause the light waveform to have Pst ≥ 1.

Stroboscopic effect – SVM

Similarly, SVM = 1 means that 50% of people will be just able to detect stroboscopic effect under certain circumstances. But these circumstances include motion of objects in the lit space at sufficient speed and with sufficient contrast to make the stroboscopic effect visible. In a typical indoor office environment (machine shops and the like excluded), there is not continuous motion at rates sufficient to make the stroboscopic effect visible. Therefore, stroboscopic effect will not be experienced 50% of the time, but only occasionally, when, for example, motion and contrast are sufficient to make it visible and the observer is directly viewing that motion. For documentation of SVM, see IEC TR 63158[6] and references therein.

Summary

The presence of daylight, other light sources, lower contrast, limitations on viewing angle, and other factors will reduce the likelihood of visibility of either flicker or stroboscopic effect even further.

The conclusion is that, although a visibility threshold of 50% seems high at first glance, there is actually a much lower probability that TLA will be observable in real applications at any specific time.

Comment

There is a third source of TLA, which is referred to as “phantom array effect” or “ghosting”[2]. This effect is primarily visible in outdoor nighttime situations where high contrast is present, such as with brake lights on an automobile, but may also be visible indoors if a deeply modulated light source with a high-contrast background (such as a lit troffer in a ceiling) is in direct view of an observer who rapidly moves his or her eyes. A visible array of images of the edges or corners of the luminaire may be formed on the retina. Research is underway to determine a measure to quantify this effect.

An example of phantom array is given in Figure 2. An array of images of the taillights of the car is formed when the camera (or an observer’s eyes) is suddenly moved. If the taillights were continuously illuminated (as with older incandescent brake lights) the array of images would be a single streak, with uniform intensity along its length. An example of this can be seen in Figure 3. Some of the lights appear as streaks with uniform intensity along their length. Some appear as dashed lines (phantom arrays). The lamps creating the dashed lines are strobing on and off. The lamps creating the uniform streaks are continuously lit. The modulation depth of the light waveform determines how clearly separated the dashes are from each other. The frequency of the modulation determines how far apart the dashes are.

Figure 2. An example of a phantom array seen in taillights at night.
Figure 3. Automobile lights at night, with the white ones that are further away demonstrating phantom arrays.

Footnotes
* A person having sensitivity to flicker that matches the sensitivity curve determined by averaging the responses of many individuals.
† Consider, for example, the condition when a refrigerator turns on and a momentary disturbance is present on the mains voltage from the initial inrush of current. The lights may flicker briefly while that disturbance is present.

References
1. NEMA 77-2017 Standard for Temporal Light Artifacts: Test Methods and Guidance for Acceptance Criteria
2. CIE TN 006:2016, Visual Aspects of Time-Modulated Lighting Systems – Definitions and Measurement Models.
3. IEC 61000-4-15: 2010, Electromagnetic compatibility (EMC). Part 4-15: Testing and measurement techniques. Flickermeter. Functional and design specifications.
4. IEC 61000-3-3:2013, Electromagnetic compatibility (EMC). Part 3-3: Limits. Limitation of voltage changes, voltage fluctuations and flicker in public low-voltage supply systems, for equipment with rated current smaller than or equal to 16 A per phase and not subject to conditional connection.
5. IEC TR 61547-1:2017, Equipment for general lighting purposes. EMC immunity requirements. Part 1: An objective light flickermeter and voltage fluctuation immunity test method.
6. IEC TR 63158:2018, Equipment for general lighting purposes – Objective test method for stroboscopic effects of lighting equipment.

The Lighting Design Objectives (LiDOs) Procedure

By Christopher Cuttle, MA, PhD, FCIBSE, FIESANZ, FIESNA, FSLL

Abstract
This procedure is based on the concept that there is real advantage to be gained from changing the illumination metrics used for specifying, measuring and predicting lighting applications so that they relate to people’s responses to visible effects of lighting in indoor applications. The currently used illumination metrics are directed towards providing for visibility, or more specifically, enabling people to perform visual tasks efficiently and accurately. Proposals are made herein for lighting metrics that relate to how the quantity and distribution of illumination may influence the appearance of people’s surroundings. While the procedure includes work environments, it encompasses all types of indoor activities and locations. The procedure enables lighting design objectives (LiDOs) that relate to people’s responses to the visible effects of lighting upon their surroundings to be specified quantitatively. This in turn leads to the development of a technical specification of lighting installation performance that enables the selection of luminaires to provide the illumination quantity and distribution that will achieve the LiDOs. The implications for both general lighting practice and professional lighting design are discussed.

The Practice of Lighting
Lighting practitioners are poorly served by the illumination metrics that are currently used to specify, measure and predict lighting in buildings. Almost a century has passed since the Lumen Method was introduced, providing a simple tool for enabling a prescribed average illuminance to be provided over the horizontal working plane (HWP). What is truly remarkable is that, to this day, the concepts upon which it is based persist as the basis for specifying illumination levels in lighting standards.

Probably the world’s most often referenced indoor lighting standard is the European Standard EN 12464-1 Indoor Work Places, which defines the purpose of lighting as “to enable people to perform visual tasks efficiently and accurately,” for which the prime criteria are: “a maintained illuminance over the task area on the reference surface, which may be horizontal, vertical or inclined,” and “the task area shall be illuminated as uniformly as possible.” This definition requires the designer to specify a reference plane for each task area. However, the schedule of activities for which maintained illuminance values are specified in this “work place” standard includes locations such as restaurants, hotels, theatres, concert halls, cinemas and so forth, making the notion of identifying a visual task to be performed efficiently and accurately quite meaningless and giving users little option but to fall back on the default position of a HWP that needs to be uniformly illuminated to the specified level. As an example of the dominant role that the HWP has within lighting practice, “a 400-lux installation” is universally recognised as one that provides that average illuminance over the HWP with acceptable wall-to-wall uniformity, irrespective of whether the human activity associated with the space is in any way work-related.

The mid-years of the last century saw the emergence of architectural lighting design as a professional practice with objectives that reject virtually everything that lighting standards such as EN 12464-1 aim to achieve, including the use of illumination metrics to specify lighting objectives. Since then lighting design software has been developed that enables designers not only to visualise onscreen how lighting may affect the appearance of a chosen architectural space, but more generally, how it influences the appearance of people’s surroundings. The lighting profession is now divided between practitioners who use illumination metrics to achieve reliable and efficient compliance with lighting standards, and those who apply lighting to influence the appearance of people’s surroundings and who shun the use of illumination metrics, which they see as inhibiting their creativity and their scope to “think outside the box.”

It is proposed here that there is scope for an innovative procedure that combines components from both sides of this division. Lighting’s role in influencing the appearance of people’s surroundings provides a sensible basis for determining the overall illumination quantity to be provided, where surroundings is taken to include all visible surfaces and objects within the space. The appearance of details, which may include anything that deserves attention (including visual tasks), may be crucially affected by illumination distribution within the space, and managing illumination quantity and distribution within an enclosed space calls for competent application of illumination metrics. Application of such a procedure should support the achievement of any set of lighting design objectives without inhibiting innovative design options – as the imposition of the uniformity criterion does.

The LiDOs Procedure
The Lighting Design Objectives (LiDOs) Procedure (henceforth referred to as ‘the procedure’) is based on the proposition that the prime purpose of indoor lighting is to satisfy (or better, to exceed) people’s expectations for how lighting may influence the appearance of their surroundings, where surroundings are taken to include room surfaces, furnishings, objects of interest, visual tasks and other people – in fact, all the things that people respond to visually in their environment. This definition of purpose places emphasis upon providing illumination for its influence on appearance rather than on performance, but this should not be seen as denigrating lighting’s role in providing for visibility. While ambient illumination is crucial for creating settings that engage people in the activities associated with the spaces they occupy, it is the identification of target surfaces for selective lighting that expresses the design intention. If task visibility is the intention, then visual tasks become the target surfaces. This different notion of the purpose of lighting requires a novel procedure with newly defined lighting objectives.

Howard Brandston has described lighting design as comprising two phases: creation and implementation (Brandston, 2008). The ‘creation’ phase involves identifying lighting design objectives (LiDOs) that define an envisaged outcome, which may range in creativity from enabling safe movement to providing unique architectural or artistic experiences. For the procedure, the output of this phase for a given project is expressed as a listing of LiDOs. The ‘implementation’ phase concerns devising an installation specification to achieve the LiDOs, and this demands a distinctly different set of skills. The aim is that, regardless of the status of the appointed installer, when the installation is switched on, the performance characteristics of the lighting installation have been specified with sufficient precision to ensure that the visual effect is as the designers had envisaged. This defines the role of the procedure – it is proposed as a useful tool that can be used by both lighting designers and illumination engineers for achieving the implementation phase.

The procedure extends the use of illumination metrics beyond specifying lighting conditions for workplaces to encompassing the full range of indoor human activities. It starts from a consideration of lighting factors that influence the overall appearance of the space before focussing upon details. The overall appearance factors concern lighting’s role in creating settings that relate to how people engage in the activities associated with the spaces they occupy. The decisions made to achieve these objectives can be strongly influential in determining how lighting is then directed within a space to impart visual emphasis to selected objects and surfaces in order to achieve objectives that may range from the artistic to the commercial or the functional. While some situations may include providing for performance of visual tasks, the procedure always progresses from lighting the space to attending to the details. The procedure is based on a lit indoor space in which:

  • A direct flux distribution (DFD) is created by the luminous flux emitted from the luminaires being directed onto selected target surfaces.
  • The first reflected flux (FRF) is the sum of direct flux that is reflected from all of the target surfaces.
  • The indirect flux field (IFF) is the diffusely inter-reflected flux contained within the volume of the space, comprising the FRF supplemented by the multiple reflections it undergoes until it is totally absorbed. The IFF is assumed to be uniformly distributed within the space, and for situations where this may be questionable the space is to be divided into separate zones.

The illumination within the space is characterised by two metrics:

  • Mean room surface exitance (MRSE) is the area-weighted average exitance (lm/m2) of the surrounding surfaces within the indoor space. It equals the average flux density of the indirect flux field within the space, and it serves as the measure of ambient illumination. (As illuminance is the density of flux incident on a surface, exitance is the density of flux exiting, or emerging from, a surface.)
  • Target/ambient illuminance ratio (TAIR) is the ratio of total illuminance (the sum due to both DFD and IFF) on a target surface to the ambient illumination level indicated by MRSE. (See Appendix 1 for more technically complete explanations.)

People’s assessments of how lighting influences the appearance of their surroundings are characterised by three human response relationships:

  • Perceived brightness of illumination (PBI) is the perception of overall brightness of illumination within a room, and assessments of PBI may be rated on a category scale ranging from ‘very dim’ to ‘very bright.’ These assessments relate to ambient illumination specified by MRSE and may be described as non-located illumination mode perceptions (Cuttle, 2004).
  • Perceived adequacy of illumination (PAI) specifies minimum ambient illumination levels (MRSE) rated as ‘adequate’ for categories of human activities. The PAI value for a given activity category is specified by a 95 percent ‘yes’ response to whether locations typical for that category appear to be adequately lit.
  • Visual emphasis refers to assessments of the visual impact of direct flux on target surfaces and may be rated on a category scale ranging from ‘absent’ to ‘emphatic.’ Visual emphasis ratings relate to target/ambient illuminance ratio (TAIR) values and may be used to specify a broad range of lighting design objectives.

The procedure has two distinct stages. Referring to Figure 1, the first stage involves providing ambient illumination, and the practitioner chooses between opting for illumination efficiency, for which the level of the indirect flux field (IFF) is determined by the perceived adequacy of illumination (PAI) prescribed for the relevant activity category, or alternatively, opting for illumination hierarchy, for which the practitioner chooses the IFF level to satisfy a lighting design objective (LiDO) that relates to the perceived brightness of illumination (PBI). Either way, the level of the IFF is specified by an MRSE value.

Figure 1. The Lighting Design Objectives (LiDOs) Procedure. The procedure guides the practitioner from having specified lighting design objectives (LiDOs) that relate to how illumination quantity and distribution may influence the appearance of a lit space, to developing a specification of a direct flux distribution (DFD) that would achieve the required balance of LiDOs. Throughout the process, the practitioner may give priority to achieving illumination efficiency or creating an illumination hierarchy.

The second stage concerns target illumination, or an arrangement of the direct flux distribution (DFD) that will generate the required MRSE. Again, the practitioner makes the priority choice. If opting for illumination efficiency, this involves directing flux from the luminaires onto selected target surfaces to generate MRSE with optimal efficiency, or if opting for illumination hierarchy, this involves directing flux onto selected target surfaces to achieve an ordered distribution of visual emphasis. Either way, the practitioner indicates the distribution of target illumination by a schedule of TAIR values, and the outcome of the procedure is a direct flux distribution (DFD) that, if provided in the space, would achieve the selected LiDOs.

Application of the procedure requires that the human response functions are reliably defined, but the current state of research falls short of enabling this. Nonetheless, Tables 1 and 2 suggest plausible relationships based on a review of recent research studies and other anecdotal information (Cuttle, 2017a).

Table 1. Tentatively Proposed PBI/MRSE Relationship.
Table 2. Tentatively Proposed Visual Emphasis/Target-Ambient Illuminance Ratio Relationship.


Illumination efficiency is the practical option wherever the lighting design objectives (LiDOs) do not include target surfaces selected for visual emphasis. A prescribed perceived adequacy of illumination (PAI) value is specified with the objective of ensuring that the ambient illumination will be assessed as adequate (but no more than adequate) by a substantial majority of people. The practitioner is guided to select target surfaces and to specify target/ambient illuminance ratio (TAIR) values that will achieve the PAI value with optimal utilization of luminous flux, which may be specified in terms of room surface flux utilance, U(rms) (see Appendix 1).

For locations where the choice of LiDOs does include target surfaces selected for visual emphasis, the practitioner opts for illumination hierarchy priority and is guided towards creating an ordered distribution of TAIR values to satisfy the relevant lighting design objectives. In this case, the procedure starts with the practitioner choosing a perceived brightness of illumination (PBI) level and specifying the overall brightness or dimness of illumination by an MRSE value, and then devising an illumination hierarchy comprising a distribution of target illumination specified in terms of TAIR to achieve those lighting design objectives (LiDOs) that relate to a distribution of visual emphasis.

Where the practitioner’s aim is to achieve an envisioned visual effect, the listing of LiDOs may include:

  • To reveal the detail of artworks or architectural features
  • To promote items of merchandise
  • To enhance the visibility of visual tasks or work planes
  • To draw attention to information displays or warning signs
  • To guide people in a required direction
  • To alert people to safety hazards

For situations where the lighting design objectives include providing a prescribed level of task illuminance onto a specific surface or measurement plane, the procedure enables this to be achieved by identifying the surface or plane as a target surface and specifying an appropriate TAIR value.

Regardless of which priority is chosen, the outcome of the procedure is a direct flux distribution (DFD) for which the practitioner may feel assured that if the specified levels of flux are applied to the corresponding target surfaces, the related lighting design objectives will be achieved. Devising a suitable luminaire layout is then achieved by application of conventional illumination engineering principles. Application of the procedure is facilitated by use of the illumination hierarchy spreadsheet.

Figure 2. Plan view of hotel reception area. Details of room surfaces and other features are indicated in the illumination hierarchy spreadsheet examples.

The Illumination Hierarchy Spreadsheet
Imagine that you are working on a lighting project for a new hotel. Figure 2 shows a plan of the reception area. You are at the stage of having analysed the project, and you have developed your listings of lighting design objectives (LiDOs) relating to daytime and night-time conditions and describing how lighting is to achieve the visual effects that you envisage. These will have been discussed with the client and may be as simple or as complex as the situation demands. You are ready to move on to selecting the luminaires, light sources, locations, aiming angles and control circuits that will comprise the lighting specification. You start by turning your attention to the night-time lighting of the reception area. How will you specify lighting equipment that will provide the quantity and distribution of illumination that will achieve your lighting design objectives?

The Illumination Hierarchy Spreadsheet is a useful tool for solving this problem. It acts as a guide towards selecting target surfaces to receive direct flux and developing a specification of the required direct flux distribution (DFD) to achieve the envisioned balance of direct and inter-reflected illumination within a space. The DFD specifies the quantities of flux (lumens) to be delivered onto each target surface in order to achieve the lighting design objectives that relate to illumination quantity and distribution.

Step 1 starts with downloading the Illumination Hierarchy Spreadsheet from https://1drv.ms/x/s!AteYXbEsDomRvR7K0F-mBqURyiA1. This is an Excel spreadsheet, but you do not have to be an experienced Excel user to operate it. You need only fill in the shaded cells, adding or deleting rows as necessary.

Spreadsheet 1.

As shown in Spreadsheet 1, you start by filling in the project name, and then comes the task of listing the room surfaces and objects that make up the space. For every significant room surface that forms part of the lit scene, the surface area and surface reflectance values are entered in their respective columns. The aim here is to achieve a realistic model without excessive detail. Note that because the reception counter and the water feature cover parts of the floor area, the circulation and lounge areas add up to less than the ceiling area. Also, although the circulation and lounge areas have the same floor covering, the lounge area is accorded a lower surface reflectance to allow for the effect of furniture. For three-dimensional objects, surface areas include the whole exposed surface area, so that the water feature is treated as a vertical cylinder with the lower endcap obscured.

As the data for each surface is entered, its surface absorption value appears in the adjacent column, this being the measure of the capacity of the surface to absorb light. Upon completion, the total room surface absorption is shown in the lower box to be 232.6 m², this being the area of a perfect light absorber that, at the same ambient illuminance, would absorb light at the same rate as all the surfaces in this space. (For the technically minded, refer to Appendix 1 for more explanation.) This completes the tedious part of the procedure, and you are now ready to make some decisions.

You stay with Spreadsheet 1 for Step 2 and now consider the level of perceived brightness of illumination for this space. From discussions with the client, you know that a distinctly ‘bright’ daytime appearance is wanted, but for night-time, when people are entering from a relatively dimly lit exterior, it has been agreed that illumination brightness is to reduce to ‘slightly bright.’ Reference to Table 1 indicates that an MRSE level of 120 lm/m² would be appropriate, and that value is entered. Immediately you see in the lower box that 27,917 lumens of first reflected flux (FRF) from all room surfaces is required to generate the chosen 120 lm/m² of MRSE. This FRF is the source of the ambient illumination to be provided. You now need to consider how you will distribute flux within the space to generate this level of FRF.
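
The arithmetic behind Steps 1 and 2 is compact enough to sketch in a few lines of Python. The surface list below is illustrative only, not the actual hotel schedule; with the full schedule (total absorption 232.6 m²), the same calculation reproduces the quoted 27,917 lm.

    # Steps 1-2 of the Illumination Hierarchy Spreadsheet logic (hypothetical surfaces).
    surfaces = {
        # name: (area [m^2], reflectance)
        "ceiling": (124.3, 0.80),
        "wall_1": (32.8, 0.45),
        "floor": (120.0, 0.30),
    }

    # Step 1: absorption A(1 - rho) per surface; the total is the area of a
    # perfect absorber with the same absorbing capacity as the whole room.
    total_absorption = sum(a * (1.0 - r) for a, r in surfaces.values())

    # Step 2: first reflected flux required to sustain the chosen ambient level.
    mrse = 120.0                             # chosen MRSE [lm/m^2]
    required_frf = mrse * total_absorption   # [lm]
    print(total_absorption, required_frf)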

Step 3 brings you to the creative bit. Your aim is to develop an illumination hierarchy, comprising an ordered distribution of visual emphasis. You do this by referring to Table 2 and filling in the TAIR column with target/ambient illuminance ratio (TAIR) values for each listed room surface, as shown in Spreadsheet 2. Keep in mind that it is assumed that all surfaces receive indirect illumination equal to the MRSE level, so that a surface that has a TAIR value of 1 receives no direct illumination.

Spreadsheet 2.

You start by ignoring the ceiling and attending to the walls. These are to have an attractive woodgrain finish, and you decide upon a ‘noticeable’ level of visual emphasis. You enter a 1.5 value in the TAIR column for Wall 1, and immediately the spreadsheet indicates that the direct surface illuminance needs to be 60 lux, and that 1,968 lumens of direct flux are required, which will produce 886 lumens of first reflected flux (FRF). Also (though this is not shown in Spreadsheet 2), the box below the spreadsheet would indicate that the target surface FRF contributes 3% of the total required FRF. Readers are encouraged to download the spreadsheet and follow the workings onscreen.
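
The per-surface arithmetic can be expressed as a small helper. This is a sketch, not the spreadsheet’s internals; the Wall 1 area (32.8 m²) and reflectance (0.45) are back-calculated from the quoted figures rather than read from the example.

    def target_surface(tair: float, mrse: float, area: float, reflectance: float):
        """Direct illuminance [lux], direct flux [lm] and first reflected flux [lm]
        for one target surface. Every surface is assumed to receive indirect
        illuminance equal to MRSE, so TAIR = 1 means no direct light."""
        e_direct = (tair - 1.0) * mrse
        direct_flux = e_direct * area
        frf = direct_flux * reflectance
        return e_direct, direct_flux, frf

    # Wall 1: TAIR 1.5 at MRSE 120 lm/m^2 -> (60.0, 1968.0, 885.6), matching
    # the quoted 60 lux, 1,968 lm and ~886 lm of FRF.
    print(target_surface(1.5, 120.0, 32.8, 0.45))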

You move on and add a TAIR value of 1.5 for Wall 2, and this doubles the target surfaces’ FRF contribution to 6%. Then you come to the mural. It is a colourful work, and the client wants it to be a welcoming and memorable feature of the hotel. You decide to give it a TAIR value of 5, between ‘noticeable’ and ‘strong,’ and at this point the spreadsheet would show that the target surfaces’ FRF contribution jumps to 22% of the required total FRF.

There is no point in directing flux onto Wall 3, as so much of it will be lost to the outdoors, but Wall 4 gets similar treatment to the two previous walls. Continuing down the column, the water feature is selected for ‘strong’ visual emphasis, and other objects are accorded values to complete an illumination hierarchy. As is shown in Spreadsheet 3, the target surfaces produce 15,985 lumens of FRF, which is 57% of the total required FRF. This leaves you with the task of making up the remaining 43%.

Spreadsheet 3.

For Step 4, you switch from illumination hierarchy mode to giving priority to illumination efficiency (see Figure 1). The key to optimising flux utilization for providing MRSE is to maximise first reflected flux by directing flux from the luminaires onto high reflectance target surfaces. For this, you turn your attention to the ceiling and try some trial-and-error experimentation. Entering a value of 1.5 in the ceiling TAIR cell causes the target surfaces’ FRF to rise to 21,907 lm, which is 78% of the required total FRF value (not shown), but as shown in Spreadsheet 3, increasing the TAIR value to 2.0 puts you right on target. This means that if each target surface receives the value of direct surface illuminance indicated in Spreadsheet 3, the resulting first reflected flux will generate the chosen value of MRSE.
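
Rather than trial and error, the ceiling TAIR that closes the FRF gap can also be solved directly by inverting the relation used for each target surface. In this sketch, the ceiling area and reflectance are hypothetical values chosen to be consistent with the quoted totals:

    def ceiling_tair(frf_gap: float, mrse: float, area: float, reflectance: float) -> float:
        """TAIR needed so the ceiling's first reflected flux supplies frf_gap lumens.
        Inverts frf = (tair - 1) * mrse * area * reflectance."""
        return 1.0 + frf_gap / (mrse * area * reflectance)

    # Remaining gap: 27,917 lm required minus 15,985 lm from the other targets.
    # An assumed 124.3 m^2 ceiling at 0.80 reflectance lands on TAIR ~2.0.
    print(ceiling_tair(27_917 - 15_985, 120.0, 124.3, 0.80))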

Careful attention needs to be given to this lighting distribution, and it should be talked through with the client. It might satisfy his or her expectations for how lighting would influence the appearance of this space, but are you entirely satisfied? The treatment of the principal selected features appears to work well, but the visual emphasis for the woodgrain walls was supposed to be ‘noticeable,’ and its TAIR value is less than that of the ceiling. Might the appearance of the ceiling be distracting?

You raise the TAIR values for Walls 1, 2 and 4 to 2.5, and this causes the target surface FRF value to increase to 17 percent more than the required total FRF value (not shown). So, it is back to the ceiling again for some more experimentation. As shown in Spreadsheet 4, you find that a ceiling TAIR value of 1.6 brings the illumination distribution back into balance.

Spreadsheet 4.

You now have the basis of a working specification. The crucial data is the direct flux distribution (DFD) column. It tells you that if the target surfaces receive the specified levels of direct flux, the reflected flux will generate the ambient illumination (MRSE) that corresponds to the chosen brightness of illumination, together with the balance of visual emphasis defined by the distribution of TAIR values. In this way, you will achieve your specified LiDOs relating to ambient and target illumination.

Providing the required DFD values involves selecting suitable luminaire locations and applying conventional illumination engineering principles to specify correct lamp wattages, but the design process does not stop there. The quality of the lighting solution will depend on your attention to detail. For example, how the lighting over the reception counter reveals the people behind the counter will influence the sense of welcome felt by new arrivals at the hotel, and at the same time it will influence how the receptionists are able to cope with the paperwork on the countertop. One LiDO calls for diffused side lighting, and the other for directional downlighting. Lighting quality inevitably depends upon the practitioner’s skill in identifying LiDOs and resolving their often-conflicting requirements. This requires thoughtful attention to the finer points of lighting’s power to influence not only the appearance of people’s surroundings, but of the people themselves.

Conclusions and Outcomes
The LiDOs procedure is proposed with the intention of demonstrating that changes in indoor lighting practice may be not only beneficial, but also practicable. However, it should not be thought of as a design method, as it requires the practitioner to have already selected and specified his or her lighting design objectives, and it is this that comprises the essence of the design. Furthermore, the procedure has nothing to say about the chromatic properties of the lighting; the availability of daylight; low levels of indoor ambient illumination (such as emergency lighting); outdoor lighting (where ambient illumination is effectively zero); or nonvisual aspects of light exposure, including circadian effects. However, any or all of these could be included once the profession elects to take the first step of acknowledging that there is a need for fundamental change in indoor lighting practice. In this context, the procedure may be seen as one plank of a bridge between design objectives and technical specification, and if adopted, it could be expected to stimulate more changes in lighting practice.

Figure 3 identifies some differences between the conventional procedures for lighting calculations that continue to dominate general lighting practice, and the LiDOs procedure. Of course, practitioners who are accustomed to the convenience of using lighting design software packages are unlikely to welcome the notion of switching to a spreadsheet, but they should think about the process. If design software based on the procedure were to become available, it could still be relied upon to provide high-resolution renderings of lighting proposals, but also it could serve to support the design process. The LiDOs procedure guides practitioners to give prime attention to distributing illumination according to how people respond to its effect upon the appearance of their surroundings, and this would lead to fundamental re-evaluation of both the effectiveness and the efficiency of lighting practice. In particular, it would free practitioners from universal imposition of ‘uniformity’ criteria.

 

Conventional Procedures vs. the LiDOs Procedure:

  • Purpose. Conventional: to enable visual tasks to be performed efficiently and accurately. LiDOs: to satisfy (or, better, to exceed) people’s expectations for how lighting may influence the appearance of their surroundings.
  • Adequacy. Conventional: illumination adequacy for a given activity is determined by the reference plane illuminance, typically measured on the horizontal working plane (HWP). LiDOs: perceived adequacy of illumination (PAI) for a given activity is determined by the density of the indirect flux field, for which the metric is mean room surface exitance (MRSE).
  • Efficiency. Conventional: efficient lighting directs flux onto the HWP, as only flux incident on the HWP adds to reference plane illuminance. LiDOs: efficient lighting directs flux onto high reflectance surfaces, as all reflected flux adds to the perceived brightness of illumination (PBI), indicated by MRSE.
  • Distribution. Conventional: illumination over the reference plane is to be as uniform as possible. LiDOs: the illumination distribution is chosen according to illumination efficiency or illumination hierarchy priorities and is specified in terms of the target/ambient illuminance ratio (TAIR).
  • Sequence. Conventional: light the task, then attend to the space. LiDOs: light the space, then attend to the details; visual tasks are treated as detail.
  • Workflow. Conventional: select the luminaire, work out the layout, then calculate the required lamp flux. LiDOs: specify the lighting design objectives (LiDOs), determine the direct flux distribution (DFD), then plan the layout and select the luminaires.

Figure 3. A comparison of determining factors involved in the conventional calculation procedures that form the basis of lighting standards and those factors involved in the LiDOs Procedure.

The procedure has the flexibility to suit projects ranging from such basic objectives as provision for security and safe movement to creative architectural and display lighting projects. Should regulators opt to base indoor lighting standards on perceived adequacy of illumination (PAI) specified in terms of MRSE, the effect would be limited to exclusion of ambient illumination levels likely to be assessed as inadequate for the activity, or as appearing gloomy. Otherwise, practitioners would be free to specify MRSE levels for chosen degrees of brightness or dimness of illumination, and to specify TAIR distributions to arouse people’s awareness of and interest in their surroundings. Where lighting for adequacy is the prime objective, priority should be given to illumination efficiency and the selection of direct flux distributions that optimise MRSE flux utilization. On the other hand, where the aim is to generate responses specific to people’s activities and their surroundings, priority should be given to illumination hierarchy and the development of ordered distributions of visual emphasis.

Confidence in application of the procedure is inevitably dependent upon the reliability with which the human response functions are defined. Research by Duff et al. (2017a, 2017b) has demonstrated evidence of a relationship between perceived brightness of illumination (PBI) and mean room surface exitance (MRSE), and between perceived adequacy of illumination (PAI) and MRSE, for just one situation – a small office. This research and other anecdotal evidence have been reviewed (Cuttle, 2017a), and this has led to the tentatively proposed relationships indicated in Tables 1 and 2. However, while these enable plausible demonstrations of the procedure, the need for research-based relationships over a wider range of spaces and MRSE levels remains.

The obstacles to achieving these changes in lighting practice are substantial. As well as requiring further research effort to establish the underlying human response functions, it would require lighting standards and recommended practice documents that specify minimum lighting levels to be redrafted in terms of minimum MRSE values (or some similar metric). It would require the lighting design software that forms the basis of current lighting practice to be reprogrammed based on the LiDOs (or some similar) procedure, which would have the advantage of enabling simulations of outcomes. It would require the lighting industry to replace its standard ranges of ‘efficient’ lighting products designed to illuminate prescribed reference planes with luminaires that enable flux to be distributed onto selected target surfaces and objects. Moreover, it would require leadership within the lighting profession to achieve general acceptance of a changed understanding of the purpose of indoor illumination. While general acceptance of this concept would affect practitioners in general lighting practice differently from those in professional lighting design, these changes are proposed with the aim that they may form a step towards bridging the gap between those practitioners.

Acknowledgements
The concepts described in this paper started to evolve through the author’s interaction with staff and students at the Rensselaer Polytechnic Institute (RPI) Lighting Research Center, NY, USA, while he worked there between 1990 and 1999. Since then, the author has maintained communication with three colleagues from that period: Prof. Emeritus Peter Boyce, Prof. Howard Brandston and Dr. Mark Rea. These interactions have been crucial in the development of the proposed procedure. Thanks are expressed also to Dr. Kevin Houser for his comments.

In 2011, the author was contacted by Dr. Kevin Kelly of the Dublin Institute of Technology (DIT) with a proposal that a PhD research programme be initiated to examine the author’s recently published concepts. Since then, Dr. James Duff has conducted research under Kelly’s supervision and with the author as adviser that has provided proof of concept for a selection of the author’s proposals. Research is continuing at DIT to establish functional relationships to enable specification for design applications. The continuing support of these colleagues is gratefully acknowledged.

Sincere thanks are expressed also to those individuals and organisations that have provided the author with opportunities to address audiences around the world during the past decade, and in particular to Kevan Shaw who was first to adopt the procedure for use in his lighting design practice.

Appendix 1: The Technical Basis of the MRSE and TAIR Concepts

Mean Room Surface Exitance, MRSE (Cuttle, 2015, 2010, 2008):

\mathit{MRSE} = \frac{\sum M_{rs}A_{rs}}{\sum A_{rs}} = \frac{FRF}{A_{\alpha}},

where:

Mrs = exitance of room surface [lm/m²]
Ars = area of room surface [m²]
Ers(d) = direct component of illuminance on room surface
ρrs = reflectance of room surface
(1 − ρrs) = absorptance of room surface (assuming transmittance = 0)
Aα = room absorption, Aα = Σ Ars(1 − ρrs)
FRF = first reflected flux, FRF = Σ Ers(d) ρrs Ars

MRSE is measured at a point (or points) within the volume of a space, rather than on a surface or plane. While it is a reasonably straightforward metric to calculate, its measurement is complicated by the need to exclude direct flux. No suitable meters are currently available, but Duff et al. (2016) have demonstrated proof of concept for an MRSE measurement tool that employs high dynamic range technology to enable sources of direct light to be identified and their effects discounted.

Target/Ambient Illuminance Ratio (TAIR) (Cuttle, 2015, 2013):
For a given target surface,

\mathit{TAIR} = \frac{E_{tg}}{MRSE} = \frac{E_{tg(d)} + MRSE}{MRSE},

where:
Etg = target illuminance (the total of the direct and indirect illuminance on the target surface)
Etg(d) = direct component of target illuminance

For a three-dimensional object, the direct target illuminance Etg(d) is the quotient of total direct flux and total surface area.

MRSE Flux Utilance (Cuttle, 2017a):

U_{MRSE} = \frac{MRSE \cdot A_{rms}}{F_{lum}},
where:
UMRSE = MRSE flux utilance, the measure of how efficiently luminaire flux is utilized for providing MRSE
Arms = Area of all room surfaces
Flum = Luminaire flux (total flux emitted by all luminaires)
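
A compact numeric reading of these definitions, with all input values hypothetical:

    import numpy as np

    areas = np.array([124.3, 32.8, 120.0])      # A_rs per room surface [m^2]
    exitances = np.array([100.0, 120.0, 40.0])  # M_rs per room surface [lm/m^2]

    mrse = (exitances * areas).sum() / areas.sum()  # area-weighted mean exitance

    e_tg_direct = 60.0                          # direct illuminance on a target [lux]
    tair = (e_tg_direct + mrse) / mrse          # target/ambient illuminance ratio

    f_lum = 30_000.0                            # total luminaire flux [lm]
    u_mrse = mrse * areas.sum() / f_lum         # MRSE flux utilance
    print(mrse, tair, u_mrse)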

These metrics are drawn together for application in the LiDOs Procedure, which may be facilitated by use of the Illumination Hierarchy Spreadsheet. This spreadsheet follows the format of the worksheet that Waldram developed for his Designed Appearance Method (Waldram, 1954), although there are distinct differences in the technical basis of Waldram’s procedure (Cuttle, 2004).

Appendix 2: Bibliography
Books
These sources describe an overall design approach that incorporates the LiDOs Procedure.
Cuttle, Christopher (2015). Lighting Design: A perception-based approach. Abingdon: Routledge, 132pp.
Cuttle Christopher (2008). Lighting by Design, Second Edition. Oxford: Architectural Press, 247pp.

Lighting design articles
These sources relate to practical application of the LiDOs Procedure.
Cuttle, Christopher (2018a,b). Bridging the gap. Part 1, arc Jun/Jul 2018; 104: 147-152. Part 2, arc Aug/Sep 2018; 105: (in press).
Cuttle, Kit (2018c). Lighting practice based on how lighting influences the appearance of people’s surroundings. Lighting Design + Application (in press).
Cuttle, Christopher (2017b). Integrating useful lighting metrics into the design process. In: PLDC 6th Global Lighting Design Convention Proceedings, 1-4 November, in Paris/FR, pp200-201.
Cuttle, Christopher (2013a). Introduction to a novel perception-based approach to lighting design. In: PLDC 4th Global Lighting Design Convention Proceedings, 30 October – 2 November, in Copenhagen/DK, pp152-154.
Cuttle, Christopher (2013b). Ridding ourselves of the barriers to darkness in lighting design. Professional Lighting Design, Jul/Aug 2013; No. 89: 54-55.
Cuttle, C (2012). A Shared Purpose for the Lighting Profession. Mondo*arc, Aug/Sept 2012; 68: 125-128.
Cuttle, Christopher (2011a). Perceived Adequacy of Illumination: a new basis for lighting practice. In: PLDC 3rd Global Lighting Design Convention Proceedings, in Madrid/SP, pp81-83.
Cuttle, K. (2011b). The Art of Lighting. Mondo*arc, Aug/Sept 2011; 62: 35-36.
Brandston, Howard M (2008). Learning to See: A matter of light. New York: IES, p49.

Lighting technology articles
These sources relate to the basis of the LiDOs Procedure.
Cuttle, Christopher (2017a). A fresh approach to interior lighting design: The design objectives to flux distribution procedure. Lighting Research & Technology. First published October 10, 2017. DOI: 10.1177/1477153517734401
Kelly, K. and Durante A (2017). An examination of a new interior lighting design methodology using mean room surface exitance. SDAR (Journal of Sustainable Design & Applied Research); 5(1): Article 6.
Duff, J., Kelly, K. and Cuttle, C. (2017a). Spatial brightness, horizontal illuminance and mean room surface exitance in a lighting booth. Lighting Research & Technology; 49(1): 5-15.
Duff, J., Kelly, K. and Cuttle, C. (2017b). Perceived adequacy of illumination, spatial brightness, horizontal illuminance and mean room surface exitance in a small office. Lighting Research & Technology; 49(2): 133-146.
Cuttle, Christopher (2016). A reassessment of general lighting practice based on the MRSE concept. SDAR (Sustainable Design & Applied Research) Journal; 6: 49-56.
Duff, J., Antonutto, G. and Torres, S. (2016). On the calculation and measurement of mean room surface exitance. Lighting Research & Technology; 48(3): 384-388.
Duff, J. (2016). Research Note: On the magnitude of error in the calculation of mean room surface exitance. Lighting Research & Technology; 48(6): 780-782.
Cuttle, C. (2013c). A New Direction for General Lighting Practice. Lighting Research & Technology; 45(1): 22-39.
Cuttle, C. (2010). Towards the Third Stage of the Lighting Profession. Lighting Research & Technology; 42(1):73-93.
Cuttle, C. (2004). Brightness, Lightness, and Providing ‘A Preconceived Appearance to the Interior.’ Lighting Research & Technology; 36(3): 201-216.
Waldram, J.M. (1954). Studies in interior lighting. Trans Illum Eng Soc (London); 19: 95-133.

A Reality Check on Blue Light Exposure

By Eric Bretschneider, Ph.D.

How often do we hear about the dangers of blue light from LEDs? Such discussions inevitably include statements about “the intense blue peak” in LED lighting and the potential for damage from the massive amounts of blue light it supposedly delivers.

The whole argument sounds plausible enough when we look at the spectrum of a typical white LED. The spectrum below is for a typical white LED with a CCT of 4,000 K at levels that approximate a commercial or retail environment (400 lux). The isolated peak in the blue clearly stands out, but does it really represent a massive dose of blue light?

In an effort to answer this question, let’s look at fluorescent lighting. Below is the spectrum of a typical 4,000 K fluorescent lamp, equivalent to 400 lux. A quick look suggests that fluorescent light has only a fraction of the blue content of the LED.

But wait a minute – recall that I said both of these sources represent the same light level: 400 lux. Have we missed something?

Indeed we have. When the data are plotted on the same graph, it appears our concerns about the “intense blue peak” of LED lighting were misplaced (or at least mis-scaled). Suddenly we begin to question why we are only hearing about “intense peaks” for LED lighting. Where are the concerns about an alternative lighting technology that we have been exposed to for decades?
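
For readers who want to reproduce this comparison, the scaling is straightforward. Here is a minimal Python sketch (mine, not from the original analysis); it assumes each spectrum is sampled as a numpy array of spectral irradiance on a common wavelength grid, together with the CIE photopic function V(λ):

    import numpy as np

    def scale_to_lux(wavelengths, spd, v_lambda, target_lux):
        """Scale an SPD so that it corresponds to a given illuminance.
        Illuminance [lx] = 683 lm/W * integral of SPD * V(lambda).

        wavelengths -- wavelength grid [nm]
        spd         -- spectral irradiance [W/(m^2 nm)]
        v_lambda    -- CIE V(lambda) sampled on the same grid
        """
        # np.trapezoid is np.trapz in NumPy versions before 2.0.
        current_lux = 683.0 * np.trapezoid(spd * v_lambda, wavelengths)
        return spd * (target_lux / current_lux)

Scaling both the LED and the fluorescent spectra to the same 400 lux is what puts them on the equal footing plotted above.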

Although the spectrum of an HID lamp is significantly different from that of a fluorescent lamp, it doesn’t change the situation. Below is a comparison of the emission spectra of an LED vs MH (metal halide). Light levels correspond to 150 lux, which approximates the lighting in a warehouse. Again, I have to question concerns about the “blue peak” when it comes to LEDs while nothing is said about “blue peaks” in relation to MH lighting.

What is often missed is that the total energy in a wavelength band is more critical than the height of the spectral peak. Specifically, it is the area under the curve (width x height) that we should be more concerned about. For example, if you define “blue content” as the fraction of light between 400 nm and 490 nm compared to light between 400 nm and 700 nm, then for the sources above, the blue content of the LED is 16.6%, the fluorescent is 18.4%, and the HID is 24.0%.
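
The area-under-the-curve comparison is just as easy to sketch, under the same numpy-array assumptions as before; the band limits are the definitions given above:

    import numpy as np

    def band_fraction(wavelengths, spd, band=(400.0, 490.0), total=(400.0, 700.0)):
        """Fraction of radiant power in `band` relative to `total`:
        a ratio of two areas under the SPD curve."""
        def band_power(lo, hi):
            mask = (wavelengths >= lo) & (wavelengths <= hi)
            # np.trapezoid is np.trapz in NumPy versions before 2.0.
            return np.trapezoid(spd[mask], wavelengths[mask])
        return band_power(*band) / band_power(*total)

This is the calculation behind the 16.6%, 18.4%, and 24.0% figures quoted above.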

Now let’s try to tackle the real reason for this post – the total amount of blue light we are exposed to from LEDs. Our visual system is adapted to withstand exposure to sunlight for about 10 hours/day, every day for roughly 70-80 years.

Below is the comparison between indirect sunlight (10,000 lux) and commercial/retail lighting (400 lux). Notice the blue squiggle at the bottom? In the range of 400 nm to 490 nm, indirect sunlight exposes us to 27 times as much blue light as LEDs at typical indoor lighting levels.

Direct sunlight can be up to about 100,000 lux, a full order of magnitude greater than what is shown above. At this level of lighting, the LED spectrum would be squashed to a line at the bottom of the plot.

Given our typical exposures to daylight and electric light, our exposure to blue light has not increased with the adoption of LED lighting. Discussions of circadian impact are a different matter entirely, being related to the wavelength of exposure, not the flux. I contend that those who insist that the “massive doses of blue light” present in LED lighting are harming our health are misinformed.

There are also issues with respect to the wavelength dependence of certain effects, including the melanopsin action spectrum, the retinal thermal hazard function, and the blue light hazard function. In general, the potential for damage increases as wavelength decreases. Further discussion of these topics is beyond the scope and intent of this article.

Melanopic Green: The Other Side of Blue

By Ian Ashdown, P. Eng. (Ret.), FIES
Senior Scientist, SunTracker Technologies Ltd.

Numerous medical studies have shown that exposure to blue light at night suppresses the production of melatonin by the pineal gland in our brains and so disrupts our circadian rhythms. As a result, we may have difficulty sleeping. It is therefore only common sense that we should specify warm white (3000 K) light sources wherever possible, especially for street lighting.

True or false?

To answer this question, we first need to define what we mean by “blue light.” Neither the Illuminating Engineering Society (IES) nor the Commission Internationale d’Eclairage (CIE) defines the term in its online vocabulary. However, UL, LLC (formerly Underwriters Laboratories Inc.) has recently introduced its UL Verified Mark, a “third-party product claims verification program.” One such Verified Mark is shown in Figure 1:

FIG. 1 – UL Verified Mark example.

The verification process for this mark is described thus:

“In accordance with LM-79-08, Section 9.1, measure the radiation emitted by the product across the visible spectrum of 380 – 780 nm. From the visible spectrum radiation measurement, determine the amount of ‘blue light’ radiation emitted between 440 – 490 nm. To calculate the percent of blue light emitted, divide the amount of blue light radiation by the amount of radiation measured across the complete visible spectrum.”
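
In code form, the quoted procedure is another ratio of areas under the SPD curve. The sketch below is my reading of the quoted text, not UL’s implementation, and it assumes the measured spectrum is available as numpy arrays:

    import numpy as np

    def ul_blue_light_percent(wavelengths, spd):
        """Percent 'blue light' per the verification method quoted above:
        radiant power in 440-490 nm divided by radiant power across the
        full visible range of 380-780 nm, times 100."""
        def power(lo, hi):
            mask = (wavelengths >= lo) & (wavelengths <= hi)
            # np.trapezoid is np.trapz in NumPy versions before 2.0.
            return np.trapezoid(spd[mask], wavelengths[mask])
        return 100.0 * power(440.0, 490.0) / power(380.0, 780.0)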

The lower wavelength limit of 440 nm seems somewhat arbitrary unless you also define “violet light,” but the upper wavelength limit of 490 nm makes sense; wavelengths in the region of 490 to 570 nm appear to be varying hues of green. This makes it easy – if we eliminate light of all wavelengths below 490 nm, we should not have any concerns about suppressing the production of melatonin and possible sleep disruption.

True or false?

To answer this question, we need to take a closer look at those medical studies. The human retina has a smattering of intrinsically photosensitive retinal ganglion cells, or ipRGCs. Similar to the more familiar rods and cones, these ipRGCs contain a photosensitive protein called melanopsin. The sensitivity of melanopsin varies with wavelength, as shown in Figure 2.

FIG. 2 – Relative melanopic sensitivity (from CIE 2015).

It is these ipRGCs that sense “blue light” and send signals to the suprachiasmatic nucleus (SCN), a tiny region of some 20,000 neurons in the brain that is responsible for instructing the pineal gland when to produce melatonin.

A closer look at Figure 2, however, shows that the ipRGCs’ spectral sensitivity peaks at 490 nm and extends to the ultraviolet edge of the visible spectrum at 380 nm. Most important, fully half the spectral sensitivity of melanopsin is to green light.

Common sense is starting to look rather nervous …

The spectral sensitivity shown in Figure 2 is interesting enough, but it becomes even more so when we consider what it means for how we respond to the radiation emitted by white light LEDs.

Figure 3 shows the relative spectral power distributions (SPDs) of typical white light LEDs with correlated color temperatures (CCTs) of 3000 K and 4000 K, scaled such that both LEDs produce equal amounts of luminous flux.

FIG. 3 – White light LED spectral power distributions

Determining the relative response of ipRGCs to these LEDs is easy – we simply multiply their SPDs by the melanopic sensitivity function on a per-wavelength basis, as shown in Figure 4.

FIG. 4 – Examples of LED melanopic lumens. Left: 3000 K; right: 4000 K.

Common sense, it would seem, has good reason to be nervous. Yes, 3000-K LEDs produce less melanopic flux than 4000-K LEDs when they produce equal luminous flux. However, the difference is only ten percent. This is within the tolerance of architectural and roadway lighting design practices. As such, it should not be argued that 3000-K LEDs are required for nighttime lighting in order to minimize circadian rhythm disruption – the difference in melanopic flux does not support this. Rather, it is simply one of several factors that must be considered when designing and specifying lighting systems.
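
The per-wavelength weighting behind Figure 4 and the ten percent figure can be sketched as follows (my own illustration; it assumes the SPDs, the CIE melanopic sensitivity curve, and V(λ) are numpy arrays on a common wavelength grid):

    import numpy as np

    def melanopic_per_lumen(wl, spd, melanopic, v_lambda):
        """Melanopic content per unit luminous flux: weight the SPD by
        the melanopic sensitivity function, then normalize by the
        photopic V(lambda) integral, which is proportional to lumens."""
        # np.trapezoid is np.trapz in NumPy versions before 2.0.
        return np.trapezoid(spd * melanopic, wl) / np.trapezoid(spd * v_lambda, wl)

    # At equal lumens, the comparison reduces to a ratio; per the
    # discussion above, 3000 K versus 4000 K gives a ratio near 0.9:
    # ratio = melanopic_per_lumen(wl, spd_3000k, mel, v) / \
    #         melanopic_per_lumen(wl, spd_4000k, mel, v)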

Blue-Blocking Glasses

Figure 4 highlights another issue: the efficacy of blue-blocking glasses, which are often marketed as promoting better sleep (Figure 5).

FIG. 5 – Blue-blocking glasses. (Source: www.swanwicksleep.com)

If we assume that the yellow filters provide a perfect cutoff at 490 nm, they are only 33% effective in blocking melanopic flux from 3000-K (warm white) LEDs and 43% effective with 4000-K (neutral white) LEDs. In reality, the filters likely let through some amount of blue light in the region of 470 nm to 490 nm, and so they may be even less effective.
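
The 33% and 43% figures follow from the same weighting with an idealized step filter; here is a minimal sketch under the same numpy-array assumptions:

    import numpy as np

    def melanopic_fraction_blocked(wl, spd, melanopic, cutoff_nm=490.0):
        """Fraction of a source's melanopic flux removed by an idealized
        filter that blocks everything below cutoff_nm and passes
        everything above it (the perfect-cutoff assumption above)."""
        weighted = spd * melanopic
        # np.trapezoid is np.trapz in NumPy versions before 2.0.
        total = np.trapezoid(weighted, wl)
        passed = np.trapezoid(np.where(wl >= cutoff_nm, weighted, 0.0), wl)
        return 1.0 - passed / total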

Simply put, we cannot prevent melanopic flux emitted by white light sources from impacting our circadian rhythms unless we use deep-red filters. This is not to say that blue-blocking filters on eyeglasses or light sources do not work – they inarguably block blue light. However, melanopic flux includes both blue and green light.

From a marketing perspective, it is fair to say that blocking blue light may alleviate circadian rhythm disruption and loss-of-sleep issues, even if it is due to the placebo effect. (There are many other psychophysiological and environmental parameters involved in circadian rhythm entrainment that are not discussed here.) However, it is incorrect to claim that blocking blue light will eliminate melatonin suppression and so prevent circadian rhythm disruption. The facts state otherwise.

Electronic Devices

Finally, what about those evil electronic devices that threaten our sleep? Figure 6 shows the spectral power distribution of an Apple iPad™ and the resultant melanopic flux when the display is set to full white (which has a CCT of 6700 K, somewhat higher than the 6500-K white point of most computer monitors).

FIG. 6 – Apple iPad melanopic flux

As shown by Figure 6, the best any optical filter or software-based change in the device white point (that is, a change in color temperature) can hope to achieve is a 50 percent reduction in melanopic flux.

What is more important, however, is that iPad screen luminance is approximately 400 cd/m² (nits). This is on the order of 50 to 100 times the light levels recommended for residential street lighting. If we are to complain about light trespass from residential street lighting into our bedrooms causing sleep deprivation, we cannot ignore the influence of the televisions, computer monitors, and tablets that we often stare at for hours before going to bed, and in much closer proximity.

Conclusions

Research into the influence of spectral content and retinal illuminance on circadian rhythms is ongoing (e.g., Nagare et al., 2017). As such, this article should not be taken as evidence (or lack thereof) for the effect of “blue light” on our sleep patterns. Rather, it is a reminder to look beyond the marketing claims of “blue-light blocking” products and ask what this really means.

To answer the question of whether we should specify warm white (3000 K) light sources for street lighting, the answer is, “it depends.” All things being equal, the difference in melanopic flux between 3000-K and 4000-K LEDs is only ten percent. This is within the uncertainty of lighting design practices, and so more weight should be given to residents’ concerns, aesthetics, color discrimination, and energy savings when making design and specification decisions.

To answer the question of whether eliminating light of all wavelengths below 490 nm (that is, “blue light”) will eliminate any concerns about melatonin suppression and possible sleep disruption, the answer is clear: FALSE!

References

CIE. 2015. Report on the First International Workshop on Circadian and Neurophysiological Photometry, 2013. CIE Technical Note TN 003:2015.

Nagare, R., et al. 2017. “Does the iPad Night Shift Mode Reduce Melatonin Suppression?” Lighting Research and Technology. (http://journals.sagepub.com/doi/10.1177/1477153517748189; accessed 2018 May 6).

The Science of Light and Health: How to Interpret the Claims That Underlie Medical and Wellness Effects

By Douglas Steel, PhD
Founder and Chief Scientific Officer of NeuroSense

These are transformational times for the lighting industry. The cost of LED-based products has dropped dramatically. At the same time, increased sophistication and capabilities of tunable LED arrays, controls, and sensors now enable the commissioning of platforms that can precisely control light intensity, correlated color temperature, and relative spectral content. With this capability, we have entered a stage at which lighting can now be used not just for illumination, but to provide beneficial health effects. Supporting this is a new vocabulary of terms such as “human-centric lighting,” “bio-centric lighting,” “lighting for people” and others. However, few standards exist that provide guidance as to how lights should be controlled so as to confer benefits.1

The academic research community is actively working on expanding our knowledge of the health effects of light, and there is a large and continuously growing body of published research papers. Reading, understanding, and integrating new results into our larger body of knowledge is a never-ending process. Taking this knowledge and applying it in practical applications is referred to as “translational science,”2 and in the interests of proper disclosure, this is the domain in which I practice. I will not be endorsing any specific products, technologies, or uses of light exposure.

Translational science requires reading a large number of research publications from many different journals and then citing relevant papers. It is essential that anyone who is making a scientific claim support that claim by citing previously published scientific data in the literature. One of the significant challenges in the emergence of human-centric lighting as a practice is that significant claims are being made either without supporting evidence, or by citing evidence that has been misinterpreted, is flawed, or is not relevant to the claims being made.3 Persons trained in the sciences have (hopefully) received instruction in how to read and evaluate scientific papers, so within the field, scientists tend to “police” the shared understanding of a field of knowledge. In fact, within the scientific community there is a reputational “common knowledge” of who is and is not a credible source of new information. However, this understanding often doesn’t make it outside the circle of scientists, with the result that the casual reader has no idea how to read and judge a scientific paper. It isn’t sufficient to simply read an abstract or the conclusions and take them as fact; in many cases a paper will be irrelevant to a claim being made, or be “fatally flawed” by a methodological error that negates all or part of the study.4

My recommendation is that anyone reading a scientific paper have a “scientist buddy” with whom to share a paper and ask them for a “second opinion.” Far from appearing uncertain, this is a common practice that reinforces a commitment to best available knowledge.

I’d like to share some of the most common insights that scientists utilize when evaluating the quality and relevance of a scientific paper. I’d like to invite readers to submit comments about these, as well as their own suggestions for other criteria.

  1. Is the research hypothesis-driven or phenomenological?5 Most scientific experiments are structured so as to answer a specific question, called a hypothesis (“I have an idea that X causes Y”). The study is then conducted, the results are collected and analyzed, and the conclusions then state whether the results supported the hypothesis, refuted it, were inconclusive, or provided something altogether different. The phenomenological format of study is more like a survey: If we do X, what happens? This is then examined under many different conditions. A phenomenological study doesn’t prove anything; it is a set of observations without a preconceived idea of what the outcome will be. Such studies are often disparagingly referred to as “fishing expeditions.” When reading a paper, ask yourself “Did the investigator have a starting hypothesis, and did the experiment definitively prove or disprove the hypothesis?” Generally, phenomenological papers are not cited as evidence of a scientific claim because they didn’t test anything.
  2. Was the study done in a test tube or a living, intact organism?6 Every experiment has variability, and statistical tests tell us whether an effect was significant or not. Test tube studies tend to have much less variability in results than those done in an organism because there are fewer variables and “things going on.” However, it is hard to say with certainty that what happens in a test tube is an accurate predictor of what happens in an animal. So test tube studies are strongly indicative, but may or may not be relevant. In the case of circadian-related studies done in animals, it is important to note whether the animals are nocturnal or diurnal; since their response to light can be the opposite of humans’, results may not be applicable.
    1. A special comment about optogenetic studies.7 In recent years a new methodology has been developed in which modified genes are inserted into an organism. In optogenetic studies, these modifications make the gene sensitive to light, and exposure will selectively turn the gene off or on. These are brilliant experiments, but it is not valid to claim that they prove that light has an effect on any organism that was not given the genetic modification. This is a special category of experiment in which light is being used as a research tool.
  3. What was the sample size in the experiment?8 A biological response must occur to a sufficient magnitude and in a sufficient number of test subjects in order to provide a statistically significant result. If the effect being studied is small, or if there is a lot of variability between subjects, then the sample size (total number of subjects) must be higher (e.g., pharmaceutical human trials typically involve tens of thousands of patients). Despite this fact, some researchers persist in using a small number of subjects relative to the thing being studied. The result is statistically weak findings – which leaves open the question of whether the effect being studied is in itself weak, or whether the inclusion of more subjects would have made the results more statistically significant.
  4. Did the experiment include both positive and negative control groups?9 A control group is a set of subjects or conditions that tests whether an experimental effect is unique and specific to one particular manipulation – or not. This can be illustrated through an example. Let’s say I make a claim that red light is capable of growing hair. I test red light exposure and see hair growth. As a positive control I instead apply Rogaine®. If I see hair grow, then I know that in my experiment, hair growth is capable of being stimulated. I next test blue light exposure and I see no hair growth; this is a negative control, in that under the same conditions there is a circumstance in which hair does not grow. Therefore red light is confirmed as having a specific effect. In all good experiments – including those using light exposure – there should be both positive and negative control groups. Without them, there is a risk that an observed response is due to a general light effect, and not to a particular wavelength specifically. Determining what constitutes a proper control, and whether the proper controls were included in a study, may take the skills of a trained scientist.
  5. Was the design within-subject or cross-subject?10 In a cross-subject experiment, if there are two distinct treatment groups (one control and one receiving the treatment), each group receives just the single treatment. This increases the risk that an effect is due to something else that is unique to one group or the other. It also increases the variability of the experiment. A preferred design is within-subject, in which each person or animal receives the treatment condition at one point in time and the control condition at another time. Statistical analysis can then use each subject as their own control (thus, within-subject).
  6. Was it a placebo-controlled and/or double-blind study?11 Although it is not always possible, in human studies in particular it is preferable to use a placebo (inactive) treatment as a control for an experimental treatment. In this way, we can be sure that a person’s knowledge that they are receiving a particular treatment isn’t changing their response. While this is easy to do with pills, it is very difficult to do in experiments with light since the subject can see what they are exposed to. I’d be interested to hear how this is addressed by those involved in clinical studies involving light exposure. In double-blind experiments, neither the subject nor the investigator knows whether a subject is receiving a control or experimental treatment. Again, this can be tough to do in lighting studies.
  7. Do the statistics support the conclusions?12 We’ve all heard the jokes about how numbers can be manipulated to tell a story – or distort one. In scientific papers, if a hypothesis has been tested, read the Conclusions section to see if the statistics support the prediction being tested. This is conventionally denoted by a confidence level and a corresponding asterisk. A single asterisk denotes significance at the 95% confidence level (p < 0.05), meaning that if there were no real effect, a result at least this extreme would be expected by chance fewer than 5 times in 100. Two asterisks indicate the 99% level (p < 0.01). Both of these indicate a strong likelihood of an effect being real, but due to the inherent variability of living systems, many important biological effects do not manage to achieve high statistical significance. Beyond the confidence level, another important question is whether the researchers have applied the correct statistical test for the kind of data and experimental design they are using. It is beyond the scope of this article, but a scientist or statistician will be able to evaluate the method by which statistical tests have been applied to experimental results.
  1. A special case of statistical misuse is called p-hacking.13 Also known as “data fishing” or “data dredging,” it is the research practice of testing many different combinations of experimental conditions without having formulated a hypothesis beforehand. Under these circumstances, as the number of tested conditions increases, the odds that some comparison will show a random correlation or high confidence level also increase. It is not uncommon for a comparison of nonsensical or meaningless datasets to reach statistical significance in large experiments with many different groups (a minimal simulation of this effect follows this list). Compounding this, if this practice is conducted repeatedly across multiple published articles with different sets of data, then the authors can and will often construct a “story” or contrived explanation to account for a nonexistent relationship between, within, and across data. Such occurrences can have a profound effect on scientific progress in a field of study because they represent “false leads” in uncovering the actual mechanism of action in fields such as circadian research. Good scientists have the ability to identify such meaningless correlations as they occur in published papers.
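
To see how easily this happens, here is a minimal simulation (my own, not from the cited paper). Twenty groups are drawn from the same distribution, so there is no real effect anywhere, and every pair of groups is then tested without a prior hypothesis:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Twenty groups of 15 "subjects", all drawn from the SAME distribution:
    # by construction there is no real effect to find.
    groups = [rng.normal(loc=0.0, scale=1.0, size=15) for _ in range(20)]

    # Test every pair of groups (190 comparisons) at the p < 0.05 level.
    false_positives = 0
    n_tests = 0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            _, p = stats.ttest_ind(groups[i], groups[j])
            n_tests += 1
            if p < 0.05:
                false_positives += 1

    # Roughly 5% of the comparisons (about 9 or 10 here) will come out
    # "significant" purely by chance.
    print(f"{false_positives} of {n_tests} comparisons were 'significant'")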

There are a number of other factors that informed readers use to assess the quality of a published research paper, and thereby infer the likelihood that the study results are believable. These include:

  1. The quality of the investigator, lab and academic institution that is publishing their results.14 In any given field of study, as with buying a new car, each researcher has a reputation based on their ability to conduct good experiments and publish thoughtful results and insightful conclusions that advance the field by addressing and answering the “big” questions. Such investigators publish papers regularly (but not too frequently) in high quality journals (see next bullet point), have the ability to place their results in a meaningful context relative to the rest of the field (rather than struggling to account for multiple findings that go against established knowledge), and support their conclusions with references that cite the prior research of a number of other research groups in the field. Over time and with increased experience at reading papers, one can develop a sense of who the quality researchers and groups are.
  2. The quality of the journal in which results are published.15 Scientific journals vary in their quality and standing, and trained scientists have learned how journals rank relative to one another in terms of reputation and quality – and thus the believability of the results. There are many different scientific journals; some cater to a particular scientific topic, and some take a wider survey; in either case, journals tend to partition into tiers of quality and the respect they command among researchers. The best journals carry the most prestige but are more difficult to get published in, and require a longer time for review, edits, and general publisher processing. Sometimes it is preferable to publish in a second-tier journal in order to get one’s results published more quickly. Journals are ranked with a scoring system called the Impact Factor,16 which measures how often other scientists cite papers published in a particular journal; heavily cited papers are presumed to be influential in the field. This can be used as a measure of how good a journal is, although it is not a perfect system. One should also consider the ranking of a journal within the field specified by the publisher. For example, the journal Lighting Research and Technology has an Impact Factor of 1.921 (ref: http://journals.sagepub.com/impact-factor/lrt), but it is ranked by Sage Journals in the Construction and Building Technology area – not exactly relevant for biomedical research studies.
  3. The references at the end of a paper.17 Authors of published papers make assertions throughout their manuscript and construct a meaningful context for the experiments they are conducting. Each assertion is based on existing knowledge and should be supported by a citation of a previously published paper. These citations are numbered throughout a paper and can be found in a list at the end of the paper. It is important to read the citations to determine whether they are relevant and valid for the assertion they are being used to support. Unfortunately, in some cases authors cite irrelevant, invalid, or very weak existing papers that may not support the hypothesis or rationale for an experiment. As an example, many papers or discussions of the “hazard” posed by blue LED light exposure cite papers that aren’t directly relevant to what a normal person would experience on any given day.18 Such citations often refer to papers in which the subjects were albino rats exposed to 16 or more hours of continuous “forced” blue LED light with no shelter available. Or the papers might rigorously document effects of blue light exposure on rodent eyes kept in a petri dish for several hours. While such studies might have been conducted perfectly, with good research methodology and strong, statistically significant effects, the results would have little relevance to the light exposure of a person in normal daily life. The failure to cite relevant prior research studies is an insidious form of misdirection that occurs all too frequently.
  4. The peer review process.19 When a researcher sends a manuscript to a journal for consideration for publication, the editor strips the manuscript of identifiers and sends it out to 2 – 3 other scientists to review, ask questions, suggest changes, and ultimately recommend or reject the manuscript for publication. Such a process works well when a manuscript is relevant to the field covered by a journal, but it is problematic if the journal is in an unrelated field, because the reviewers might have no training in the field covered in the manuscript and thus fail to detect and point out errors. The intention of the process is to raise the quality of papers, but in recent years the peer review process has come under fire, and reforms are likely coming (a good general discussion can be found here: https://www.npr.org/sections/health-shots/2018/02/24/586184355/scientists-aim-to-pull-peer-review-out-of-the-17th-century).

At present, the prevailing attitude among scientists is generally to assume that peer review may have been insufficient to ensure that a published paper is reputable, and to use one’s own skills and insights along with communications with professional colleagues to determine whether a paper is likely to be valid.

We are currently in the midst of great uncertainty about whether any of the research results coming from studies of light effects upon health are ready to be translated into applied practices at healthcare facilities and in bedrooms. Lighting, controls, and IoT companies are eager to enter this potentially profitable field, while many scientists are urging restraint and caution.

We are also seeing the emergence of unqualified, self-proclaimed “experts” who profess to understand the state of knowledge in the field. It is likely now apparent that part of this uncertainty arises from the complexity of reading and understanding the scientific literature in the domains that are relevant to the development of standards and guidelines for health-conferring light exposure protocols. Those of us involved in the process of technology translation hope to provide education and clarity, and to bring together a diverse group of lighting subject matter experts and practitioners to develop a pathway to implementation.

References:
1Halper, M., (2017), European lighting regulations could help usher in human centric lighting. LEDs Magazine, March 2017, pp. 39-42.
2Cohrs, R.J. et al., (2014). Translational Medicine definition by the European Society for Translational Medicine. New Horizons in Translational Medicine. 2(3), pp.86–88. DOI:http://doi.org/10.1016/j.nhtm.2014.12.002
3Willmorth, K., June (2017), The Pseudo Science Marketing of Human Centricity, commentary, Lumenique blog, https://lumeniquessl.com/2017/06/10/the-pseudo-science-marketing-of-human-centrcity/
4Pain, E., (2016), How to (seriously) read a scientific paper, Science 361(6397), doi:10.1126/science.caredit.a1600047
5Jalil, Mohammad Muaz, Practical Guidelines for Conducting Research – Summarising Good Research Practice in Line with the DCED Standard (February 2013). Available at SSRN: https://ssrn.com/abstract=2591803 or http://dx.doi.org/10.2139/ssrn.2591803
6The Marshall Protocol Knowledge Base, (2012), Differences between in vitro, in vivo, and in silico studies, https://mpkb.org/home/patients/assessing_literature/in_vitro_studies
7Deisseroth, K.; Feng, G.; Majewska, A. K.; Miesenbock, G.; Ting, A.; Schnitzer, M. J. (2006). “Next-Generation Optical Technologies for Illuminating Genetically Targeted Brain Circuits”. Journal of Neuroscience. 26 (41): 10380–6. doi:10.1523/JNEUROSCI.3863-06.2006. PMC 2820367. PMID 17035522
8Biau, D., S. Kerneis, and R. Porcher, (2008), Statistics in Brief: The importance of sample size in the planning and interpretation of medical research, Clin Orthop Relat Res, 466(9): 2282-2288.
9Gross, A. and N. Mantel, (1967), The effective use of both positive and negative controls in screening experiments, Biometrics 23(2): 285-295.
10Charness, G., U. Gneezy, and M. Kuhn, (2012) Experimental methods: Between-subject and within-subject design, Journal of Economic Behavior and Organization, 81:1-8.
11Kaptchuk TJ. (2001) The double-blind, randomized, placebo-controlled trial. Gold standard or golden calf? J Clin Epidemiol. 54:541-549.
12Kinney, J., (2001) Statistics for Science and Engineering, London, Pearson, ISBN-10:0201437201
13Head, M., L. Holman, R. Lanfear, A. Kahn, and M. Jennions, (2015) The Extent and Consequences of p-Hacking in Science, PLoS Biology 13(3): e1002106. https://doi.org/10.1371/journal.pbio.1002106
14Petersen, A., S. Fortunato, R. Pan, K. Kaski, O. Penner, A. Rungi, M. Riccoboni, H. Stanley, and F. Pammolli, (2014), Reputation and Impact in Scientific Careers, PNAS 111(43): 15316-21.
15Van Harten, J., 2014, Measuring Journal and Research Prestige, Elsevier, http://www.inaf.ulaval.ca/fileadmin/fichiers/fichiersINAF/FAST/Atelier_ecriture/Laval_–_Session_3_–_Bibliometrics_DEF.pdf
16https://www.elsevier.com/authors/journal-authors/measuring-a-journals-impact
17http://www.psychology.ucsd.edu/undergraduate-program/academic-writing-resources/writing-research-papers/evaluating-references.html#Results
18Ashdown, I., (2014) Blue Light Hazard…or Not? WordPress Blog https://www.researchgate.net/publication/273763540_Blue_Light_Hazard_or_Not
19https://www.elsevier.com/reviewers/what-is-peer-review

Lighting and the Internet of Things

By Robert F. Karlicek, Jr., Ph.D.
Professor of Electrical, Computer and Systems Engineering
Director, Center for Lighting Enabled Systems & Applications
Rensselaer Polytechnic Institute

Figure 1: The projected global growth rate of IoT-connected devices. (Source: IHS, Statista 2018)

The Internet of Things (IoT) is a hot topic these days, driven by the explosion of low-cost sensors, microprocessors, and wireless communications that provide new types of services for consumers and businesses. When these IoT platforms are dispersed in any environment, huge amounts of data about energy use, environmental conditions, and human activity can be generated. These data are perceived to be extremely valuable and can be used to provide new functionality, ranging from simple voice-activated commands such as Alexa answering a parent’s question, “Are the lights still on in Jimmy’s room?” to complex building management systems that control lighting; heating, ventilation, and air conditioning (HVAC); security; and space utilization using algorithms that process whole-building occupancy, temperature, CO2 level, and humidity data, among others. Statistically speaking, there are three times more IoT sensors than humans (see Fig. 1), and deployment of new IoT devices is projected to grow rapidly.

IoT platforms are becoming a disruptive force in the lighting industry, as lighting systems have three properties coveted by key segments of the developing IoT market: ubiquity, vantage point, and access to power.1 In IoT speak, a light fixture could be called an enabling powered system with a view containing one or more “things” (typically sensors) that use either wired or wireless connectivity to feed data to a control system. Sensor data can be processed through cloud computing but will more likely be processed locally (in the light fixture itself, in the room, or elsewhere in the building) to reduce the time between sensing and system response (latency) and to address cybersecurity issues. The value of IoT for lighting companies comes from data generation using sensors in light fixtures, creating intelligent services that consumers and building managers find indispensable. This is increasingly attractive to lighting companies as solid state lighting (SSL), a disruptive technology in its own right, gets increasingly commoditized and lighting company investors and shareholders look for new non-lighting paths to revenue growth.2

As a result of these evolving market opportunities, IoT-enabled lighting is slowly entering commodity lighting markets with product offerings from almost all of the main lighting fixture manufacturers as well as many smaller startup companies. IoT offerings in lighting are usually referred to as connected, intelligent, or smart lighting systems, and most IoT-enabled lighting systems contain simple passive infrared (PIR) occupancy and daylight sensors; in the future, however, they may contain more advanced integrated sensors3 (e.g., CO2 sensors, IR imagers, radar sensors). Currently, different connected lighting product offerings are not fully “interoperable,” so products from different vendors cannot typically work together on the same IoT communications protocol even if they use an “open” (nonproprietary) communications platform. The issue of interoperability is widely recognized, and IoT industry groups are working to address it.

As IoT gradually infiltrates the lighting market (and almost all other markets), significant progress will need to be made in analyzing all of the data generated by the sensors. Analytics that digest the information to provide value-added services will increasingly depend on machine learning (ML) and artificial intelligence (AI) to maximize the benefit from IoT platforms. Though ML/AI systems themselves are a disruptive technology in general, their practical use is only just beginning to be realized. They are rapidly being developed in other markets like autonomous vehicles, healthcare analytics, and voice activated smart home technologies and will be applied to adaptive control of lighting and building management systems.

To summarize the current situation, the old curse “may you live in interesting times” certainly applies to the whole lighting industry, which is in the throes of major disruptions from three convolved technology platforms: 1) the continuing introduction of new SSL technologies, 2) the gradual introduction of fixture based IoT concepts, and soon, 3) the use of increasingly sophisticated ML/AI embedded systems for lighting (and building management) control. While it will be challenging for those in the lighting markets to navigate these coming changes, many already recognize that the long-term trends are clear:

Lighting cannot escape an IoT future

Figure 2: Internet companies are in a financial position to control connected lighting as well as IoT related services and value creation.

IoT-based services are the province of the networking, telecommunications, and information technology companies that will transmit sensor data to end users who provide value-added, data-based services. These companies will ultimately address broad interoperability issues, networking platforms, privacy/cybersecurity, and data ownership management. Non-lighting companies have the deep pockets4 (see Fig. 2), are already spending significant amounts of capital on IoT and ML/AI technologies, and will be best positioned to monetize a connected future – including lighting. Broad IoT applications are being addressed by large consortia within the IoT industry, where few, if any, lighting companies are represented.

What’s lighting got to do with IoT?

It is highly probable that the answer to this question is “not much,” unless lighting companies can maintain control of the sensors and the socket. Of course, lighting system design will always be important, no longer so much for energy savings, but increasingly for human factors considerations and possibly Li-Fi.5 Energy considerations will still be important but only at the controls level, ultimately moving to automatic lighting control systems that will rely on sophisticated occupancy sensors, IoT connectivity, and ML/AI data analytics to squeeze energy savings out of responsive light utilization concepts and provide color-tunable lighting designed to improve human health and wellness (in response to both occupancy sensing/tracking and daylighting).6

Is there a bright side to lighting and IoT?

There is still significant room for innovation and an upside in the lighting industry. It can come from embracing the IoT future, including the continued development of new lighting form factors and new optics capable of efficient dynamic color mixing and light pattern shaping (e.g., Lensvector,7 others). By working closely with IoT, IT, and lighting design software tool developers, the lighting community (designers and manufacturers) can help shape the future of the connected lighting industry.

Perhaps one of the best opportunities for lighting companies to benefit from an IoT future is to own the sensors and explore ways to make the light emission from the fixture an integral part of a luminaire’s sensory capabilities. Building off of daylight sensors incorporated into luminaires, there are new ways to use reflected-light sensing of digitized illumination to perform highly accurate occupancy tracking and even generate pose-detection (e.g., standing, sitting, fallen) data. With improved lighting software design tools that take into account the spectral reflectance of a space’s surfaces, compensation for glare, and ocular light dose for human health and wellbeing, better lighting spectral power distributions could be calculated more accurately than with other non-light-based techniques. When the lighting system can cost-effectively sense its environment (ideally using privacy-preserving low-cost color and time-of-flight measurements8, 9), lighting becomes an integral and indispensable part of any lighting IoT solution.

Besides ubiquity, vantage point, and power, the lighting industry can generate invaluable lighting-based data and information not only for lighting control systems but also for other IoT-connected systems in building management, healthcare operations, communications, and even horticulture. By making lighting-enabled sensing a requirement for IoT system operation, IoT solutions will need to embrace lighting system design and connectivity for maximum societal benefit. However, if the lighting industry cedes the sockets (and poles) and sensors (and data) to other enterprises outside of the lighting industry, a significant market opportunity will have been missed, and lighting will increasingly become a commodity non-smart plug-in to someone else’s IoT-connected business future.

FOOTNOTES:
1 Clear from digital ceiling concepts promoted by Cisco, for example, see https://industrial-iot.com/2017/04/smart_buildings_digital_ceiling/, Accessed 6/21/2018
2 See Philips Lighting (now called Signify) in http://www.usa.lighting.philips.com/content/B2B_LI/en_US/internet-of-things.html, Accessed 6/18/2018
3 There are many now available for integration in lighting systems, for example, see https://gooee.com/products/sensors/, Accessed 6/20/2018
4 Market capitalizations for major internet and telecomm companies are ~100 times those of lighting companies
5 The IEEE 802.11 working group has now formed a light communications study group (see http://www.ieee802.org/11/Reports/lcsg_update.htm) looking at standards for mobile wireless communications using light.
6 The most recent DOE funding opportunity seeks research on light utilization efficiency and lighting and health topics. See https://www.energy.gov/eere/ssl/articles/energy-department-announces-15-million-early-stage-solid-state-lighting-research, Accessed 6/1/2018
7 See, for example, http://lensvector.com, Accessed 6/21/2018
8 S. Afshari, T.-K. Woodstock, M.H.T. Imam, S. Mishra, A.C. Sanderson, and R.J. Radke, The Smart Conference Room: An Integrated System Testbed for Efficient, Occupancy-Aware Lighting Control. ACM International Conference on Embedded Systems for Energy-Efficient Built Environments (BuildSys), November 2015
9 L. Jia, S. Afshari, S. Mishra and R.J. Radke, Simulation for Pre-Visualizing and Tuning Lighting Controller Behavior. Energy and Buildings, Vol. 70, pp. 287-302, February 2014

The Science of Near-Infrared Lighting: Fact or Fiction

By Ian Ashdown, P. Eng. (Ret.), FIES, Senior Scientist, SunTracker Technologies Ltd.

There is a common-sense argument being presented in the popular media that since humans evolved under sunlight, our bodies must surely make use of all the solar energy available to us. Given that more than 50 percent of this energy is due to near-infrared radiation, we are clearly risking our health and well-being by using LED lighting that emits no near-infrared radiation whatsoever.

Fact or fiction?

To examine this issue, we begin with a few definitions. There are several schemes used to partition the infrared spectrum. ISO 20473, for example, defines near-infrared radiation as electromagnetic radiation with wavelengths ranging from 780 nm to 3.0 μm (ISO 2007). Meanwhile, the CIE divides this range into IR-A (780 nm to 1.4 μm) and IR-B (1.4 μm to 3.0 μm), while noting that the borders of near-infrared “necessarily vary with the application (e.g., including meteorology, photochemistry, optical design, thermal physics, etc.)” (CIE 2016).

The terrestrial solar spectrum that we are exposed to on a clear day is shown in Figure 1. This varies somewhat depending on the solar elevation, which is in turn dependent on the latitude, time of day, and date. However, Figure 1 is sufficient for discussion purposes.

Figure 1 – Terrestrial solar spectrum. (ASTM G173-03)

Compared to sunlight, modern-day electric lighting, and in particular LED lighting, is sorely deficient in near-infrared radiation. Figure 2 illustrates the problem, where the terrestrial solar spectrum has been approximated by a blackbody radiator with a color temperature of 5500 K. Look at the spectrum of incandescent lights – they clearly provide the near-infrared radiation that we need. By comparison, 3000-K LEDs (and indeed, any white light LEDs) provide no near-infrared radiation whatsoever.

Figure 2 – Spectrum comparisons.

The same is true, of course, for fluorescent lamps. Given how much time most people spend indoors, we have been depriving ourselves of near-infrared radiation since the introduction of fluorescent lamps in the 1950s!

This is only common sense, but it was also common sense that led Werner von Siemens to proclaim, “Electric light will never take the place of gas!” Common sense notwithstanding, the above two paragraphs are patent nonsense.

A Sense of Scale

Figure 2 is deliberately misleading, even though it was recently published in a trade journal elsewhere without comment. The problem is one of scale. If we go back to the early 1950s, with predominantly incandescent lighting in homes and offices, illuminance levels were on the order of 50 to 200 lux. Meanwhile, outdoor illuminance levels are on the order of 1,000 lux for overcast days, and 10,000 to 100,000 lux for clear days.

Even on an overcast day, we would have received roughly five to ten times as much near-infrared radiation outdoors as we would have indoors. On a clear day, it would have been five hundred to one thousand times. Given this, properly scaled incandescent and 3000-K LED plots in Figure 2 would both be no more than smudges on the abscissa.

Figure 3 – Infrared “smudge” (see text for explanation).

Common sense should also tell us that we have survived quite nicely without near-infrared radiation in our daily lives ever since we began spending our time in offices and factories rather than working in the fields during the day. It does not matter whether the electric light sources are incandescent, fluorescent, or LED – the amount of near-infrared radiation they produce compared to solar radiation is inconsequential.

This does not mean, however, that near-infrared radiation has no effect on our bodies. There are hundreds, if not thousands, of medical studies that indicate otherwise. For lighting professionals, it is therefore important to understand these effects and how they relate to lighting design.

Low Level Light Therapy

Many of the medical studies involving near-infrared radiation concern low level light therapy (LLLT), also known as “low level laser therapy,” “cold laser therapy,” “laser biostimulation,” and most generally, “photobiomodulation.” Using devices with lasers or LEDs that emit visible light or near-infrared radiation, these therapies promise to reduce pain, inflammation, and edema; promote healing of wounds, deeper tissues, and nerves; and prevent tissue damage.

Laser therapy is often referred to as a form of “alternative medicine,” mostly because it is difficult to quantify its beneficial effects in medical studies. Unfortunately, the popular literature, including magazine articles, personal blogs, product testimonials, and self-help medical websites, often references these studies as evidence that near-infrared radiation is essential to our health and well-being. In doing so, it overlooks two key points: irradiance and dosage.

Irradiance

The adjective “low level” is somewhat of a misnomer; it is used to distinguish LLLT medical devices from high-power medical lasers used for tissue ablation, cutting, and cauterization. The radiation level (that is, irradiance) is less than that needed to heat the tissue, which is about 100 mW/cm². By comparison, the average solar IR-A irradiance is around 20 mW/cm² during the day, with a peak irradiance reaching 40 mW/cm² (Piazena and Kelleher 2010).

This is not a fair comparison, however. In designing studies to test LLLT hypotheses, there are many parameters that must be considered, including whether to use coherent (laser) or incoherent (LED) radiation, the laser wavelength (or peak wavelength for LEDs), whether to use continuous or pulsed radiation, and the irradiance, target area, and pulse shape. Tsai and Hamblin (2017) correctly noted that if any of these parameters are changed, it may not be possible to compare otherwise similar studies.

Solar near-infrared radiation has its own complexities. In Figure 1, the valleys in the spectral distribution are mostly due to atmospheric absorption by water and carbon dioxide. Further, the spectral distribution itself varies over the course of the day, with relatively more of the visible light being absorbed near sunrise and sunset. Simply saying “solar near-infrared” is not enough when comparing daylight exposure to LLLT study results.

Complicating matters even further is the fact that near-infrared radiation can penetrate from a few millimeters to a centimeter or so through the epidermis and into the dermis, where it is both absorbed and scattered. The radiation is strongly absorbed by water at wavelengths longer than 1150 nm, so there is an “optical window” between approximately 600 nm and 1200 nm where low level light therapy devices operate.

The biochemical details of how near-infrared radiation interacts with the human body are fascinating, with the primary chromophores hemoglobin and melanin absorbing the photons and then undergoing radiationless de-excitation, fluorescence, and other photochemical processes. For our purposes, however, these details are well beyond the focus of this article.

What is of interest, however, is that whatever positive results a given medical study may report, the differences elucidated above make it difficult to compare them with exposure to solar near-infrared radiation. Further, the irradiance levels are considerably higher than what we would experience outdoors on a clear day, and much higher than we would experience indoors, even with incandescent light sources. Given this, it is generally inappropriate to consider LLLT studies as evidence that near-infrared radiation is essential to our health and well-being.

Dosage

The Bunsen-Roscoe law, also known as the “law of reciprocity,” is one of the fundamental laws of photobiology and photochemistry. It states that the biological effect of electromagnetic radiation is dependent only on the radiant energy (stated in joules), and so is independent of the duration over which the exposure occurs. That is, one short pulse at high irradiance is equivalent to one or more long pulses at low irradiance, as long as the total energy (irradiance times duration) is the same.

Unfortunately, human tissue does not obey this law. Instead, it exhibits a “biphasic dose response,” where larger doses (i.e., greater irradiance) are often less effective than smaller doses. At higher levels (greater than approximately 100 mW/cm²), the radiant power induces skin hyperthermia (that is, overheating), while at lower levels, there is a threshold below which no beneficial effects are observed (Huang et al. 2009). This is presumably due to various repair mechanisms in response to photo-induced cellular damage.
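
A quick numeric sketch makes the distinction concrete (my own illustration; the 100 mW/cm² ceiling is the figure cited above, and the equal-dose pairs are arbitrary):

    def radiant_exposure_j_cm2(irradiance_mw_cm2, seconds):
        """Radiant exposure (dose) = irradiance * time, in J/cm^2.
        Under Bunsen-Roscoe reciprocity, only this product would matter."""
        return irradiance_mw_cm2 * seconds / 1000.0

    # Two exposures with an identical 36 J/cm^2 dose, which reciprocity
    # would treat as biologically equivalent:
    high_short = radiant_exposure_j_cm2(100.0, 360.0)  # 100 mW/cm^2 for 6 min
    low_long = radiant_exposure_j_cm2(10.0, 3600.0)    # 10 mW/cm^2 for 1 hour
    assert high_short == low_long == 36.0

    # With a biphasic dose response, the irradiance itself matters: the
    # first exposure sits at the ~100 mW/cm^2 skin-heating ceiling, while
    # a sufficiently low irradiance falls below the no-effect threshold,
    # so equal doses need not produce equal biological effects.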

This is not to say that solar near-infrared radiation may not have a beneficial effect. As an example, a study of wound healing in mice using 670-nm red LEDs demonstrated significant increases in wound closure rates beginning at 8 mW/cm² irradiance (Lanzafame et al. 2007). This is comparable with an average 20 mW/cm² solar IR-A irradiance on a clear day. However, this is also orders of magnitude greater than the average irradiance that might be expected indoors from incandescent light sources.

As an aside, it should be noted that treatment of dermatological conditions with sunlight, or heliotherapy, was practiced by ancient Egyptian and Indian healers more than 3,500 years ago (Hönigsmann 2013). However, this involved the entire solar spectrum from 300 nm (UV-B) to 2500 nm (IR-B); it is impossible to relate the effects of such treatments to near-infrared radiation alone.

Near-Infrared Radiation Risks

Based on the evidence of low level light therapy studies, there appears to be scant evidence – if any – that a lack of near-infrared radiation in indoor environments is deleterious to our health and well-being. If anything, the minimum required irradiances and the biphasic dose response argue against it.

There are, in fact, known risks to near-infrared radiation exposure. Erythema ab igne, for example, is a disorder characterized by a patchy discoloration of the skin and other clinical symptoms. It is caused by prolonged exposure to hearth fires, and it is an occupational hazard of glass blowers and bakers exposed to furnaces and hot ovens (e.g., Tsai and Hamblin 2017). It is not a risk to the general population, however, as the irradiances involved are usually many times solar near-infrared irradiance.

More worryingly, IR-A radiation can penetrate deeply into the skin and cause tissue damage, resulting in photoaging of the skin (Schroeder et al. 2008, Robert et al. 2015), and at worst, possibly skin cancers (e.g., Schroeder et al. 2010, Tanaka 2012). Sunscreen lotions may block ultraviolet radiation that similarly causes photoaging and skin cancers, but they have no effect on near-infrared radiation.

Evolutionary Adaptation

Excess amounts of ultraviolet radiation can cause erythema (sunburn) in the short term, and photoaging and skin cancers in the long term. Curiously, pre-exposure to IR-A radiation preconditions the skin, making it less susceptible to UV-B radiation damage (Menezes et al. 1998). This is probably an evolutionary adaptation, as the atmosphere absorbs and scatters ultraviolet and blue light in the morning hours shortly after sunrise (Barolet et al. 2016). This morning exposure to IR-A radiation is likely taken as a cue to ready the skin for the coming mid-day exposure to more intense ultraviolet and near-infrared radiation. Late afternoon exposure to decreased amounts of ultraviolet radiation may further be taken as a cue to initiate cellular repair of the UV-damaged skin. In this sense then, solar near-infrared radiation is an identified benefit.

Conclusion

So, are we risking our health and well-being by using LED lighting that emits no near-infrared radiation, or is this patent nonsense as stated above? Perhaps surprisingly, the answer is that we do not know.

The above discussion has focused on the effects of near-infrared radiation on the skin and low level light therapy. Given that the irradiances and dosages of LLLT are much greater than those experienced from indoor lighting (including incandescent), it is inappropriate to cite LLLT medical studies in support of near-infrared lighting.

This does not mean, however, that there are no benefits to long-term exposure to near-infrared radiation, or risks from the lack thereof. The problem lies in identifying these possible benefits and risks. Without obvious medical consequences, epidemiological studies would need to be designed that eliminate a long list of confounding factors, from light and radiation exposure to diet and circadian rhythms. They would also need to be performed with laboratory animals, as human volunteers are unlikely to agree to completely avoid exposure to daylight for months or years at a time.

In the meantime, we as lighting professionals must work with the best available knowledge. Lacking any credible evidence that very low levels of near-infrared radiation are necessary for our health and well-being, there appears to be no reason not to continue with LED and fluorescent light sources.
 
References
Barolet D et al. 2016. Infrared and skin: Friend or foe. J Photochem Photobio B: Biology. 155:78-85.

International Commission on Illumination [CIE]. 2016. International Lighting Vocabulary, 2nd ed. CIE DIS 017/E:2016. Vienna, Austria: CIE Central Bureau.

Hönigsmann H. 2013. History of phototherapy in dermatology. Photochem Photobio Sci. 12:16-21.

Huang AC-H et al. 2009. Biphasic dose response in low level light therapy. Dose-Response. 7(4):358-383.

International Organization for Standardization [ISO]. 2007. ISO 20473:2007, Optics and Photonics – Spectral Bands. Geneva, Switzerland: ISO.

Lanzafame RJ et al. 2007. Reciprocity of exposure time and irradiance on energy density during photoradiation on wound healing in a murine pressure ulcer model. Lasers Surg Med. 39(6):534-542.

Menezes S et al. 1998. Non-coherent near infrared radiation protects normal human dermal fibroblasts from solar ultraviolet toxicity. J Investig Derm. 111(4):629-633.

Piazena H, Kelleher DK. 2010. Effects of infrared-A irradiation on skin: Discrepancies in published data highlight the need for exact consideration of physical and photobiological laws and appropriate experimental settings. Photochem Photobio. 86(3):687-705.

Robert C et al. 2015. Low to moderate doses of infrared A irradiation impair extracellular matrix homeostasis of the skin and contribute to skin photodamage. Skin Pharm Physiol. 28:196-204.

Schroeder P et al. 2008. The role of near infrared radiation in photoaging of the skin. Exp Geron. 43(7):629-632.

Schroeder P et al. 2010. Photoprotection beyond ultraviolet radiation – Effective sun protection has to include protection against infrared A radiation-induced skin damage. Skin Pharm Physiol. 23:15-17.

Tanaka Y. 2012. Impact of near-infrared radiation in dermatology. World J Derm. 1(3):30-37.

Tsai S-R, Hamblin MR. 2017. Biological effects and medical applications of infrared radiation. J Photochem Photobio B: Biology. 170:197-207.
