# Theory of Deep Samples

This document derives the techniques for splitting and combining two non-solid samples of equal depth and thickness. These should be used by deep image “flattening” algorithms to compute the combined colour of two samples. The formulas are defined in the document Interpreting OpenEXR Deep Pixels; this document derives those formulas and is for information only.

## Definitions

symbol | description
---|---
\(r\) | ratio of the length of a subsample to the original sample: \(0\le r \le 1\)
\(\mathbf{C_i}\) | colour of sample \(i\)
\(\mathbf{C'_i}\) | colour of a subsample of \(i\)
\(\alpha_i\) | alpha value of sample \(i\)
\(\alpha'_i\) | alpha value of a subsample of \(i\)
\(T_i\) | transparency/transmission of sample \(i\); \(T_a=1-\alpha_a\)
\(\mathbf{c_i}\) | colour per unit length (‘instantaneous colour’) of sample \(i\)
\(t_i\) | transparency per unit length (‘instantaneous transparency’) of sample \(i\)

Subscripts \(_a\) and \(_b\) refer to the input samples; subscript \(_c\) refers to the combined output sample.

## Sample Model

Many deep compositing operations require subdividing a sample into two subsamples, or else clipping a sample at a given point. When this happens, the alpha value of the subsample(s) must be recomputed, and the colour will also change. To derive an equation for computing the new alpha, the amount of light attenuated by each subsample must be considered:

Consider light passing through two objects, from left to right. Each object attenuates the light. Light passing through any light-absorbing object will be attenuated according to the **transparency** \(t\) of that object. Thus, if \(i\) is the amount of light entering the object, the light \(o\) leaving the object is given by

\[o = ti\]

Where light passes through two objects, the light is attenuated twice (the output from the first is fed into the second):

\[o = t_2 t_1 i\]

where \(t_1\) and \(t_2\) are the transmissions of the individual samples. In general, if the transmission \(t\) of all objects is the same, the light emerging from \(n\) adjacent samples is given by the *Beer-Lambert* law:

\[o = t^n i = Ti\]

where \(T = t^n\) is the total transmission. If \(t\) is considered the *transmission per unit length* and \(n\) the length, the equation is continuous, applying also for non-integer values of \(n\).

An OpenEXR deep sample can then be modelled as an object of a specified length \(n\) which absorbs light according to the Beer-Lambert law (3).
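As a quick numerical check of this model, the sketch below (Python, with a hypothetical per-unit transmission) verifies that chaining unit absorbers multiplies their transmissions, and that the power form extends to fractional lengths:

```python
# Beer-Lambert: light through n unit lengths of material with per-unit
# transmission t is attenuated by T = t ** n.
t = 0.8  # hypothetical transmission per unit length

# Chaining three unit-length absorbers multiplies their transmissions...
chained = t * t * t
# ...matching the closed form T = t ** n.
assert abs(chained - t ** 3) < 1e-12

# The power form is continuous: two half-length pieces behave like one unit.
half = t ** 0.5
assert abs(half * half - t) < 1e-12
```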

### Alpha vs Transparency

OpenEXR images use alpha \(\alpha\) instead of transparency, with \(T=1-\alpha\): if \(\alpha=1\) (so \(T=0\)) then all light is absorbed; if \(\alpha=0\) (so \(T=1\)), then all light is transmitted and the material is transparent. Substituting \(T\) into (1) and (2), the combined alpha of two objects is given by the *screen* compositing operation:

\[\alpha_c = \alpha_1 + \alpha_2 - \alpha_1\alpha_2\]

where \(\alpha_c\) is the combined alpha value, and \(\alpha_1\) and \(\alpha_2\) are the alpha values of the two samples.

Throughout this document, \(T\) is preferred to \(\alpha\) where it gives simpler equations.
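In code, the screen operation and its transmission form \(T_c=T_1 T_2\) can be sketched as (Python, hypothetical values):

```python
def screen(alpha1, alpha2):
    """Combined alpha of two overlapping absorbers (screen operation)."""
    return alpha1 + alpha2 - alpha1 * alpha2

# Screen is equivalent to multiplying transmissions:
# 1 - alpha_c == (1 - alpha_1) * (1 - alpha_2).
a1, a2 = 0.25, 0.5
assert abs((1.0 - screen(a1, a2)) - (1.0 - a1) * (1.0 - a2)) < 1e-12
```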

### Computing \(t\)

OpenEXR image samples store the \(\alpha\) of the entire sample. This gives us the total transmission of the sample, not the transmission per unit length \(t\). The total transmission \(T\) and the sample length \(n\) can be used to compute the transmission per unit length:

\[t = T^{1/n}\]

### Sample Properties

The following are assumptions made by the formulas derived here:

- Samples have **constant optical density and colour**: if a sample has length \(n\) and a subsample of length \(n'\) is extracted from it, the RGBA colour of the subsample will be the same regardless of where in the sample it is extracted. In particular, if a sample is split into \(k\) subsamples of equal length, each subsample will have the same RGBA colour.
- Sample attenuation is **non-scattering** and **pixel independent**: light travelling through the sample is either absorbed or transmitted; it is not reflected back down the sample or scattered into neighbouring samples. Scattering causes point lights to appear blurred when passing through fog, and also tends to make fog look more optically dense than it really is, since detail is lost very quickly even though light is being transmitted. This effect is not modelled with OpenEXR volumetric samples, and must be approximated by including the scattered light within the volume or applied as a post-process. Since the light attenuation profile throughout a pixel will then not follow the Beer-Lambert equation (3), extra samples must be used to model the pixel.
- Sample behaviour is **unit and scale independent**: if a sample is divided into \(k\) subsamples of equal length, then scaling the depth channels \(z_\textsf{front}\) and \(z_\textsf{back}\) of the deep image will not change the colour of the samples. When merging two deep images, prescaling the depth of each image by the same amount and then merging is identical to merging the original images and then scaling the depth of the result. This scaling property allows any unit to be used to store depth, and the unit can be changed by scaling the depth channels without modifying the RGBA channels.
- Sample behaviour is **position independent**: moving a sample in depth will not change the RGBA values obtained by subdividing it. If a sample is \(10\) units long and a subsample of \(2\) units is extracted from it, the subsample will have the same RGBA values regardless of the position in depth of the sample. Shifting two images by adding a constant \(c\) to the depth channels of each image and then merging them is identical to merging the original images and then shifting the result.
- Sample splitting is **non-destructive**: using the equations derived here, the samples of a pixel can be arbitrarily subdivided. Recombining the subsamples with the *over* compositing operation will yield the RGBA colour of the original sample (assuming sufficient numerical precision).
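The non-destructive property can be checked numerically. The sketch below (Python; the split and over formulas are those derived later in this document, and the sample values are hypothetical) splits a sample and recombines the pieces:

```python
def split(alpha, color, r):
    """Subsample covering fraction r of a sample (premultiplied colour)."""
    if alpha == 0.0:
        return 0.0, color * r
    sub_alpha = 1.0 - (1.0 - alpha) ** r
    return sub_alpha, color * sub_alpha / alpha

def over(front, back):
    """'over' compositing for premultiplied samples."""
    af, cf = front
    ab, cb = back
    return af + ab - af * ab, cf + (1.0 - af) * cb

# Splitting a sample at 25% of its length and recombining the two pieces
# with 'over' recovers the original RGBA values.
alpha, color = 0.6, 0.3
combined = over(split(alpha, color, 0.25), split(alpha, color, 0.75))
assert abs(combined[0] - alpha) < 1e-12
assert abs(combined[1] - color) < 1e-12
```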

## Sample Splitting

### Alpha of a Subsample

When a subsample of length \(n'\) is extracted from an original sample of length \(n\), the alpha value \(\alpha'\) of the subsample must be computed.

The total transmission of the subsample is \(T'=t^{n'}\). Substituting for \(t\) computed from the entire sample length using (5) gives:

\[T' = T^{n'/n}\]

Here, \(n\) refers to the *length* of the original sample, but samples are specified in OpenEXR with a front and back depth. Thus, if a sample with transmission \(T\) and front and back depths \(z_\textsf{front}\) and \(z_\textsf{back}\) is split at point \(z\) in space, the transmission \(T'\) of the front subsample is given by

\[T' = T^{({z-z_\textsf{front}})/({z_\textsf{back} - z_\textsf{front}})}\]

Since \(({z-z_\textsf{front}})/({z_\textsf{back} - z_\textsf{front}})\) is the *ratio of the extracted sample length to the original sample length*, we substitute the ratio \(r\) for it in the remainder. This is equivalent to considering subdivision of a sample of unit length.
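In terms of alpha, the front subsample has \(\alpha' = 1-(1-\alpha)^{r}\). A minimal numerical sketch (Python, hypothetical values):

```python
def subsample_alpha(alpha, r):
    """Alpha of a subsample covering fraction r of a unit-length sample."""
    return 1.0 - (1.0 - alpha) ** r

# Screening the two halves of a sample back together recovers its alpha.
alpha = 0.75
half = subsample_alpha(alpha, 0.5)
assert abs((half + half - half * half) - alpha) < 1e-12
```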

### Colour of a Subsample

A sample with RGB information is modelled as a cylinder. The alpha channel(s) of the sample are represented by some non-scattering, absorbent material within it, which attenuates the light passing through the sample. The colour channel(s) of the sample are represented by infinitely many light sources, which emit coloured light (but don’t absorb anything), with identical intensities \(\mathbf{c}\), evenly spaced through the sample.

A light source at distance \(x\) into the sample will be attenuated by the amount of absorber between it and the front of the sample. Taking the sample to have unit length, so that the transmission per unit length \(t\) equals the total transmission \(T\), the Beer-Lambert law gives the transmission of that front part of the sample as \(T^{x}\).

The light which reaches the front of the sample from the individual light source at distance \(x\) is \(\mathbf{c}\,T^{x}\). If there were a finite number \(N\) of sources, the total light reaching the front of the sample would be:

\[\mathbf{C} = \sum_{k=0}^{N-1} \frac{\mathbf{c}}{N}\, T^{k/N}\]

where \(k/N\) is the position of light \(k\), equal to \(x\). As \(N\) tends to infinity, this becomes

\[\mathbf{C} = \int_0^1 \mathbf{c}\, T^{x}\, dx = \mathbf{c}\,\frac{T-1}{\ln T}\]

The value \(\mathbf{C}\) is the sample’s RGB value, as stored in the OpenEXR image. Rearranging gives us the colour of each light (or perhaps the ‘instantaneous colour’ of the sample):

\[\mathbf{c} = \mathbf{C}\,\frac{\ln T}{T-1}\]

If the entire sample is reduced to a subsample of length \(r\), only a subsection of the lights is included, and we must compute the new RGB colour \(\mathbf{C'}\) for the subsample. Hence:

\[\mathbf{C'} = \int_0^r \mathbf{c}\, T^{x}\, dx = \mathbf{c}\,\frac{T^{r}-1}{\ln T}\]

Substituting for \(\mathbf{c}\) from (7) gives

\[\mathbf{C'} = \mathbf{C}\,\frac{T^{r}-1}{T-1}\]

Noting that \(T-1=-\alpha\) and \(T^{r}-1=T'-1=-\alpha'\), and multiplying top and bottom by \(-1\), gives:

\[\mathbf{C'} = \mathbf{C}\,\frac{\alpha'}{\alpha}\]

This is exactly equivalent to **unpremultiplying** the colour by the
original alpha value, computing the new alpha value, and
**premultiplying** by the new value.
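As a cross-check, a brief sketch (Python; values hypothetical) confirming that the unpremultiply/premultiply form agrees with the integral derivation above:

```python
from math import log

def subsample_color(C, alpha, r):
    """Premultiplied colour of a subsample of relative length r (alpha > 0)."""
    alpha_sub = 1.0 - (1.0 - alpha) ** r
    # Equivalent to unpremultiply by alpha, then premultiply by alpha_sub.
    return C * alpha_sub / alpha

# The integral form gives the same result: C' = c * (T**r - 1) / ln T,
# with instantaneous colour c = C * ln T / (T - 1).
C, alpha, r = 0.4, 0.5, 0.3
T = 1.0 - alpha
c = C * log(T) / (T - 1.0)
integral_form = c * (T ** r - 1.0) / log(T)
assert abs(subsample_color(C, alpha, r) - integral_form) < 1e-12
```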

### Splitting Transparent Samples

When \(\alpha=0\) (or alternatively, \(T=1\)), unpremultiplying in (8) would require a division by zero, so a special case is derived for \(\alpha=0\). The sample model of the colour provided by many light sources still applies, but there is now no attenuation. Therefore, each light contributes equally to the observed colour \(\mathbf{C}\), and the ‘instantaneous’ colour equals the final colour (assuming the sample length is normalised to \(1\)):

\[\mathbf{c} = \mathbf{C}\]

A subsection \(\mathbf{C'}\) of length \(r\) is given by

\[\mathbf{C'} = r\,\mathbf{C}\]

That is, if \(\alpha=0\), then scaling a sample’s length by \(r\) scales the colour by the same amount.

## Sample Merging

Now, consider combining two samples \(a\) and \(b\). This operation is required when “tidying” a deep sample list, which is essential before flattening a deep image into a regular one. We assume that \(z_{\textsf{front}_a}=z_{\textsf{front}_b}\) and \(z_{\textsf{back}_a}=z_{\textsf{back}_b}\), so the lengths of the two samples are the same and they fully overlap in space. If this is not the case, the samples should be subdivided and the subsamples merged separately, as described in the document “Interpreting OpenEXR Deep Pixels”.

The same sample model using discrete light sources is used. However, at each location there are now two light sources, \(\mathbf{c_a}\) and \(\mathbf{c_b}\), and the light is attenuated by both \(t_a\) and \(t_b\). We can treat this as being attenuated first by \(t_b\), then by \(t_a\), so the total light reaching the front of the sample is:

\[\mathbf{C_c} = \int_0^1 (\mathbf{c_a}+\mathbf{c_b})\,(T_a T_b)^{x}\, dx\]

Substituting into (6), and then substituting for \(\mathbf{c_a}\) and \(\mathbf{c_b}\) from (7), gives

\[\mathbf{C_c} = \left(\mathbf{C_a}\,\frac{\ln T_a}{T_a-1} + \mathbf{C_b}\,\frac{\ln T_b}{T_b-1}\right)\frac{T_a T_b - 1}{\ln (T_a T_b)}\]

Multiplying both parts of the top line by \(-1\) gives

\[\mathbf{C_c} = \left(\frac{-\mathbf{C_a}\ln T_a}{\alpha_a} + \frac{-\mathbf{C_b}\ln T_b}{\alpha_b}\right)\frac{T_a T_b - 1}{\ln (T_a T_b)}\]

Note that the combined transmission is given by \(T_c=T_{a}T_{b}\), implying that the combined alpha follows the screen equation (6). Substituting into both top and bottom of (10) gives the formula for the final colour:

\[\mathbf{C_c} = \alpha_c\,\frac{\dfrac{\mathbf{C_a}\ln T_a}{\alpha_a} + \dfrac{\mathbf{C_b}\ln T_b}{\alpha_b}}{\ln T_a + \ln T_b}\]

This is the **premultiplied** combined colour. For the unpremultiplied
colour, the \(\alpha_c\) term can be omitted. This gives the
transmission weighted average of the unpremultiplied input colours.
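For samples with \(0<\alpha<1\), this merge can be sketched as follows (Python; `log1p` is used for accuracy when alphas are small, and the sample values are hypothetical):

```python
from math import log1p

def merge_colors(Ca, alpha_a, Cb, alpha_b):
    """Premultiplied colour of two fully overlapping samples (0 < alpha < 1)."""
    alpha_c = alpha_a + alpha_b - alpha_a * alpha_b        # screen
    mu = -log1p(-alpha_a)                                  # -ln(1 - alpha_a)
    mv = -log1p(-alpha_b)
    return alpha_c * (Ca * mu / alpha_a + Cb * mv / alpha_b) / (mu + mv)

# Merging a sample with an identical copy keeps the unpremultiplied colour,
# while the alpha increases by the screen operation.
Ca, aa = 0.3, 0.5
Cc = merge_colors(Ca, aa, Ca, aa)
ac = aa + aa - aa * aa
assert abs(Cc / ac - Ca / aa) < 1e-12
```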

### Merging when One Sample is Transparent

Now suppose one of the samples (say, \(b\)) has no alpha, so \(\alpha_b=0\) and \(T_b=1\). From (9), \(\mathbf{c_b}=\mathbf{C_b}\). The combined light is attenuated according to \(T_a\), the transmission of \(a\) alone, and the combined colour is given by:

\[\mathbf{C_c} = \int_0^1 (\mathbf{c_a}+\mathbf{C_b})\,T_a^{x}\, dx\]

and substituting into (6) gives:

\[\mathbf{C_c} = \mathbf{C_a} - \frac{\alpha_a\,\mathbf{C_b}}{\ln(1-\alpha_a)}\]

### Merging Two Transparent Samples

Where both samples are transparent, the colours simply add together:

\[\mathbf{C_c} = \mathbf{C_a} + \mathbf{C_b}\]

### Merging Solid Samples

For numerical stability, we must give a sensible value for \(\mathbf{C_c}\) when \(\alpha_b=1\). Equation (11) is undefined, since \(\log(1-\alpha_b)=\log(0)=-\infty\). We follow the initial model of light sources in an absorbing material, but now the absorbing material absorbs all light. Thus, we observe only the closest light source (that at \(x=0\)) in the solid sample: the sample absorbs the light from all its other sources. We can therefore treat \(b\) as an infinitely thin ‘discrete’ sample at the front, which blocks all light behind it, and simply composite \(b\) over \(a\). Therefore, if \(\alpha_b=1\) and \(\alpha_a<1\), the total observed colour will be \(\mathbf{C_b}\). Transposing \(a\) and \(b\) gives \(\mathbf{C_c}=\mathbf{C_a}\) if \(\alpha_a=1\).

Following the definitions above, when \(\alpha_a=1\) and \(\alpha_b=1\), then \(\mathbf{C_c}=\mathbf{C_a}+\mathbf{C_b}\). However, for stability, it makes sense to define

\[\mathbf{C_c} = \frac{\mathbf{C_a}+\mathbf{C_b}}{2}\]

This approach gives more stable results when there is random sampling error in the depth channel. Assume that the front depth of sample \(a\) differs by some small amount from that of \(b\), so that \(z_a=\delta+z_b\). Only the front sample will be visible: if \(\delta<0\), then \(\mathbf{C_c}=\mathbf{C_a}\); if \(\delta>0\), then \(\mathbf{C_c}=\mathbf{C_b}\). Setting \(\mathbf{C_c}=\mathbf{C_a}+\mathbf{C_b}\) when \(\delta=0\) gives an image which is (potentially) twice as bright. If \(\delta\) is random noise due to sampling, we will get a random pattern of bright pixels wherever \(\delta=0\). Using the mean colour seems a safer option. However, this is *not* associative: when combining three solid samples \(d\), \(e\) and \(f\), merging \(d\) and \(e\) first and then merging the result with \(f\) does not give the same result as merging \(e\) and \(f\) first and then merging \(d\) with the result.
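The complete merge, with the transparent and solid special cases, might be sketched as follows (Python; illustrative only, not the OpenEXR library API):

```python
from math import log1p

def merge_samples(alpha_a, Ca, alpha_b, Cb):
    """Merge two fully overlapping samples; colours are premultiplied."""
    alpha_c = alpha_a + alpha_b - alpha_a * alpha_b        # screen
    if alpha_a == 1.0 and alpha_b == 1.0:
        return 1.0, (Ca + Cb) / 2.0                        # mean, for stability
    if alpha_a == 1.0:
        return 1.0, Ca                                     # a hides b
    if alpha_b == 1.0:
        return 1.0, Cb                                     # b hides a
    if alpha_a == 0.0 and alpha_b == 0.0:
        return 0.0, Ca + Cb                                # both transparent
    if alpha_a == 0.0:                                     # only a transparent
        return alpha_c, Cb - alpha_b * Ca / log1p(-alpha_b)
    if alpha_b == 0.0:                                     # only b transparent
        return alpha_c, Ca - alpha_a * Cb / log1p(-alpha_a)
    mu = -log1p(-alpha_a)                                  # -ln(1 - alpha_a)
    mv = -log1p(-alpha_b)
    return alpha_c, alpha_c * (Ca * mu / alpha_a + Cb * mv / alpha_b) / (mu + mv)

a, c = merge_samples(1.0, 0.9, 1.0, 0.5)   # two solid samples: mean colour
assert a == 1.0 and abs(c - 0.7) < 1e-12
a, c = merge_samples(0.0, 0.1, 0.0, 0.2)   # two transparent samples: sum
assert a == 0.0 and abs(c - 0.3) < 1e-12
```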

## Summary

To **clip a sample** covering the range \(z_\textsf{front}\) — \(z_\textsf{back}\) into a subsample \(z_\textsf{front}\) — \(z\), or **extract a subsample** of length \(l'\) from a sample of length \(l\):

\[r = \frac{z-z_\textsf{front}}{z_\textsf{back}-z_\textsf{front}} = \frac{l'}{l}\]

\[\alpha' = 1-(1-\alpha)^{r}\]

\[\mathbf{C'} = \begin{cases} \mathbf{C}\,\dfrac{\alpha'}{\alpha} & \text{if } \alpha\neq 0\\[1ex] r\,\mathbf{C} & \text{if } \alpha=0 \end{cases}\]

where \(\alpha\) and \(\alpha'\) are the alpha values of the original sample and the subsample respectively, and \(\mathbf{C}\) and \(\mathbf{C'}\) are the colours of the original sample and the subsample respectively.

To **merge two samples** \(a\) and \(b\) into a combined sample \(c\):

\[\alpha_c = \alpha_a + \alpha_b - \alpha_a\alpha_b\]

\[\mathbf{C_c} = \begin{cases} \dfrac{\mathbf{C_a}+\mathbf{C_b}}{2} & \text{if } \alpha_a=1 \text{ and } \alpha_b=1\\[1ex] \mathbf{C_a} & \text{if } \alpha_a=1\\[1ex] \mathbf{C_b} & \text{if } \alpha_b=1\\[1ex] \mathbf{C_a}+\mathbf{C_b} & \text{if } \alpha_a=0 \text{ and } \alpha_b=0\\[1ex] \mathbf{C_a} - \dfrac{\alpha_a\,\mathbf{C_b}}{\ln(1-\alpha_a)} & \text{if } \alpha_b=0\\[1ex] \mathbf{C_b} - \dfrac{\alpha_b\,\mathbf{C_a}}{\ln(1-\alpha_b)} & \text{if } \alpha_a=0\\[1ex] \alpha_c\,\dfrac{\dfrac{\mathbf{C_a}\ln(1-\alpha_a)}{\alpha_a} + \dfrac{\mathbf{C_b}\ln(1-\alpha_b)}{\alpha_b}}{\ln(1-\alpha_a)+\ln(1-\alpha_b)} & \text{otherwise} \end{cases}\]