- v1=f(k1 q);
- v2=f(k2 q);
- v3=f(k3 q);
- v4=f(k4 q);
- v5=f(k5 q);

Thus, without loss of generality, we can take k3=1 (using the middle exposure as the reference), and we have:

- v1=f(q/k^2);
- v2=f(q/k);
- v3=f(q);
- v4=f(k q);
- v5=f(k^2 q).

For example, with k=2 (successive exposures one stop apart):

- v1=f(q/4);
- v2=f(q/2);
- v3=f(q);
- v4=f(2 q);
- v5=f(4 q).
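To make the exposure relationship concrete, here is a minimal sketch in Python. The gamma-curve response f below is a hypothetical stand-in; the real f is whatever your comparametric fit recovers.

```python
import numpy as np

# Hypothetical camera response: a simple clipped gamma curve (an assumption
# for illustration only, not the response recovered by comparametric analysis).
def f(q):
    return np.clip(255.0 * np.power(np.clip(q, 0.0, 1.0), 1.0 / 2.2), 0, 255)

q = 0.3                       # true quantity of light at one pixel
k = 2.0                       # exposure ratio between successive pictures
ks = [1 / k**2, 1 / k, 1.0, k, k**2]
v = [f(ki * q) for ki in ks]  # v1..v5: five differently exposed pixel values
print(v)                      # monotonically increasing; v5 is clipped at 255
```

Note that the brightest exposure (k^2 q = 1.2) saturates and clips to 255, which is precisely the information loss that combining multiple exposures overcomes.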

Specifically, for 2 marks: determine the "a" and "c" values (1 mark), and plot the comparagraph corresponding to these "a" and "c" values directly on top of the comparagram (1 mark).

Lighten the dark lictures and darken the light lictures appropriately. Specifically, you'll end up with a plurality of lictures that each estimate the true quantity of light, q, falling on the image sensor:

- q~1=4 f^-1(v1);
- q~2=2 f^-1(v2);
- q~3=f^-1(v3);
- q~4=0.5 f^-1(v4);
- q~5=0.25 f^-1(v5).
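As a sanity check, the following sketch inverts the same hypothetical gamma response (f_inv below is its explicit inverse, an assumption for illustration) and recovers q from each simulated exposure. Note that the clipped, brightest exposure yields a wrong estimate, which is exactly why the certainty weighting described next is needed.

```python
import numpy as np

# Hypothetical gamma response and its inverse (assumptions for illustration;
# in the assignment, f comes from your comparametric "a" and "c" fit).
def f(q):
    return 255.0 * np.power(np.clip(q, 0.0, 1.0), 1.0 / 2.2)

def f_inv(v):
    return np.power(v / 255.0, 2.2)

q_true = 0.3
weights = [4.0, 2.0, 1.0, 0.5, 0.25]  # undo the exposure ratios q/4 .. 4q
vs = [f(ki * q_true) for ki in [0.25, 0.5, 1.0, 2.0, 4.0]]
q_est = [w * f_inv(v) for w, v in zip(weights, vs)]
# The four unclipped estimates agree with q_true; the saturated fifth
# exposure (4q = 1.2, clipped) gives a biased estimate of 0.25.
print(q_est)
```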

Begin by constructing "certainty" images from each of the input images. A certainty image is the input image passed through the pixel-space certainty function, C[0..255], which indicates, for each pixel value, the certainty (confidence) in that pixel value. Clearly, pixel values of 0 or 255 have zero certainty, whereas more moderate pixel values have greater certainty. C is the slope of the response function in pixel space. See Chapter 4 of the book, or Equation 13 of this paper: http://wearcam.org/comparam.pdf.

For an intuitive understanding of this, see Figures 7 and 9 of the above paper. For each of the five images in Figure 7, there is a corresponding certainty image in Figure 9. Intuitively speaking, the certainty images show us how moderate ("midsy") the input images are. Moderate areas of the input image give rise to high certainty values, where the certainty image is white or almost white. Extreme areas of the input image (very dark and very light) give rise to dark areas of the certainty image.

For 2 marks, show the certainty images.
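One way to build a certainty image can be sketched as a lookup table applied to the input. The hat-shaped table below is only a placeholder assumption; the real C[0..255] comes from the slope of your recovered response function, per Equation 13 of comparam.pdf.

```python
import numpy as np

# Stand-in certainty lookup table C[0..255]: zero at the extremes, maximal
# at mid-gray. Replace with the slope of your recovered response function.
v_levels = np.arange(256)
C = np.sin(np.pi * v_levels / 255.0) ** 2   # 0 at v=0 and v=255

def certainty_image(img):
    """Map an 8-bit image through the certainty lookup table."""
    return C[img]

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
print(certainty_image(img))   # extremes get ~0 certainty; mid values get more
```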

Compute a pixelwise weighted sum to get the best estimate of q, weighted by the certainty images:

q = sum(Ci q~i) / sum(Ci).
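Applied pixelwise, the weighted sum might be sketched as follows; the eps guard is an assumption added to avoid division by zero where every certainty vanishes.

```python
import numpy as np

# Fuse per-picture light estimates q~i with certainty weights Ci, pixelwise.
def fuse(q_estimates, certainties, eps=1e-12):
    q = np.stack(q_estimates)   # shape: (n_pictures, H, W)
    c = np.stack(certainties)
    return (c * q).sum(axis=0) / (c.sum(axis=0) + eps)

# Toy example: two estimates of a 1x2 image; the second pixel of the first
# picture is saturated (certainty 0), so only the second picture counts there.
q1 = np.array([[0.30, 0.25]]); c1 = np.array([[1.0, 0.0]])
q2 = np.array([[0.30, 0.30]]); c2 = np.array([[1.0, 1.0]])
print(fuse([q1, q2], [c1, c2]))   # approximately [[0.3 0.3]]
```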

For the remaining 2 marks, do something useful with the result, q.

You can feed it into TensorFlow, face recognition, or anything else directly, where you can show a better result with q than with any of the individual input images.

Alternatively, you can generate a picture, v = f(p(q)), where p is some processing. This will show the true power of quantimetric image processing on a licture (lightspace picture) comprised of multiple exposures. Simple examples of p are unsharp masking, deblurring, and image sharpening, producing a picture that is sharper and clearer than anything you could obtain from any single input image.
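As one possible p, here is a sketch of unsharp masking applied to the fused q before re-rendering through the hypothetical gamma response f used above; box_blur is a minimal stand-in for a proper Gaussian blur.

```python
import numpy as np

def box_blur(img):
    # 3x3 box blur with edge padding -- a minimal stand-in for a Gaussian.
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def unsharp(q, amount=1.0):
    # p(q): boost the difference between q and its blur, then clamp.
    return np.clip(q + amount * (q - box_blur(q)), 0.0, 1.0)

def f(q):  # hypothetical gamma response, as above (an assumption)
    return np.clip(255.0 * np.power(np.clip(q, 0.0, 1.0), 1.0 / 2.2), 0, 255)

q = np.random.default_rng(0).random((8, 8))  # stand-in for the fused q
v = f(unsharp(q))                            # final picture v = f(p(q))
print(v.shape)
```

The key point is that p operates on q (linear in light), not on the gamma-compressed pixel values, which is what makes the sharpening quantimetrically correct.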

Post your results to the following Instructable: https://www.instructables.com/id/HDR-EyeGlass-From-Cyborg-Welding-Helmets-to-Wearab/