darktable and research

you might have noticed our equalizer module and been confused by its many controls. that’s probably partly because you haven’t seen anything quite like it before: we had to develop it first.

very short history

behind the ui is a powerful frequency domain processing technique based on wavelets. the most commonly used wavelets are based on the lifting scheme [swe97], work in a data-independent way, and are decimated (i.e. the coarse coefficients are much sparser than the fine ones). while this allows for very fast implementations, data-independent wavelets can lead to blurring or ringing around edges (depending on what you do to the coefficients during image enhancement). raanan fattal had quite an inspiring paper at siggraph 2009 [fat09] introducing edge weights into the lifting scheme to overcome this. while that method is fast and produces okay results (the legacy equalizer, version 1, was based on it), it has some problems caused by the decimation. in particular, decimated wavelets are not shift-invariant, which means your results will change if you slightly crop the image, for example. the same author actually had a solution for that even earlier [far07], in a different context, though not quite as fast.

development of the new equalizer

starting from here, we tried to make it fast and applied wavelet shrinkage to achieve fast noise filtering for global illumination [dsh10]. but since the power of wavelets doesn’t stop there, we also wanted to use it for local contrast enhancement [hdl11]. this is especially sensitive to artifacts like ringing and halos, which can be avoided by careful user interaction or an automated search. for the details, i’d like to refer the interested reader to the papers, so i don’t bore the rest of you to death with them here.
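for those who prefer code over formulas, here is a very rough 1-d sketch of one level of an edge-avoiding, undecimated (a-trous) decomposition of the kind described in the papers. this is not darktable’s actual implementation, just an illustration of how the blur weights collapse across strong edges and how the detail coefficients fall out as the residual:

    /* minimal 1-d sketch of one a-trous decomposition level with edge
     * weights. illustration only, not darktable's code. */
    #include <math.h>
    #include <stdio.h>

    static void eaw_decompose_1d(const float *in, float *coarse, float *detail,
                                 int n, int scale, float sigma)
    {
      const int step = 1 << scale;              /* a-trous: spread kernel, no decimation */
      const float kernel[3] = { 0.25f, 0.5f, 0.25f };
      for (int i = 0; i < n; i++)
      {
        float sum = 0.0f, wsum = 0.0f;
        for (int k = -1; k <= 1; k++)
        {
          const int j = i + k * step;
          if (j < 0 || j >= n) continue;
          /* edge weight: large differences get small weights, so the blur
           * does not cross strong edges */
          const float d = in[j] - in[i];
          const float w = kernel[k + 1] * expf(-(d * d) / (2.0f * sigma * sigma));
          sum  += w * in[j];
          wsum += w;
        }
        coarse[i] = sum / wsum;
        detail[i] = in[i] - coarse[i];           /* the band the curves act on */
      }
    }

    int main(void)
    {
      float in[8] = { 0, 0, 0, 0, 1, 1, 1, 1 };  /* a hard edge */
      float coarse[8], detail[8];
      eaw_decompose_1d(in, coarse, detail, 8, 0, 0.1f);
      for (int i = 0; i < 8; i++)
        printf("%d: coarse %.3f detail %.3f\n", i, coarse[i], detail[i]);
      return 0;
    }

note that the coarse buffer keeps the full resolution at every level (the kernel just spreads out with the scale), which is what buys the shift invariance mentioned above.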

the ui

since darktable’s audience is mostly pro users, we aim for maximal user control and speed, so we don’t do any automated parameter searches.

you can manipulate the wavelet transform in LCh space based on five curves in three tabs:

  • L gain, L threshold (tab 1: luma)
  • C gain, C threshold (tab 2: chroma)
  • softness/edge weight (tab 3: sharpness)

all of which allow you to fine-tune parameters for each frequency band separately: on the left you adjust coarse/large structures, on the right the fine details (hence equalizer: it works just like an audio equalizer, on frequency bands, with bass on the left and treble on the right).

L and C gain are straightforward: they enhance local contrast for the given frequency band. pull the curve below the middle to remove contrast from that band.
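if it helps to see that spelled out: conceptually, the gain does nothing more than scale the detail coefficients of its band. a tiny sketch, with made-up names rather than darktable’s actual code:

    /* conceptual only: the gain from the curve scales the detail
     * (high-frequency) coefficients of one band in one channel (L or C).
     * gain > 1 boosts local contrast in that band, gain < 1 flattens it. */
    void apply_band_gain(float *detail, int npixels, float gain)
    {
      for (int i = 0; i < npixels; i++)
        detail[i] *= gain;
    }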

the L and C thresholds (bottom curve, defaults to zero everywhere) affect the wavelet shrinkage step and result in denoising. select one of the denoising presets to get a feel for how to use it.
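conceptually, the threshold applies soft shrinkage to the same detail coefficients, which is the classic wavelet denoising step. again a sketch with made-up names, not darktable’s actual implementation:

    #include <math.h>

    /* soft shrinkage: coefficients with magnitude below the threshold
     * (mostly noise) are zeroed, larger ones are pulled towards zero. */
    void shrink_band(float *detail, int npixels, float threshold)
    {
      for (int i = 0; i < npixels; i++)
        detail[i] = copysignf(fmaxf(fabsf(detail[i]) - threshold, 0.0f),
                              detail[i]);
    }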

the last tab gives you access to the edge-avoiding part of the wavelet transform. pull the curve all the way down to zero to reduce it to a standard, non-edge-aware wavelet. a double-click on any curve will reset it to its default. to get a feel for the effect of the edge weight, start from the sharpen (strong) preset and look at how high-contrast edges change when playing with the sharpness curve.
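as a rough illustration of that limiting behaviour, assuming the edge weight enters the blur as an exponential falloff with the local difference (as in the papers above), a tiny sketch with illustrative names:

    #include <math.h>

    /* illustrative only: sharpness == 0 gives weight == base_kernel for
     * every neighbour, i.e. an ordinary non-edge-aware blur; larger
     * sharpness makes the weight collapse across strong edges. */
    float filter_weight(float base_kernel, float diff, float sharpness)
    {
      return base_kernel * expf(-sharpness * diff * diff);
    }

with sharpness at zero every neighbour keeps its plain kernel weight, which is exactly the standard, non-edge-aware case described above.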

an example

consider this black and white image of a cloud over nelson (right click and open in new tab to view full-res image):

original

we want to remove coarse contrast to flatten the look a bit, but enhance small details to give it a more textured feel. both can be done at the same time by manipulating the respective frequency bands in the equalizer module, like so:

sharp

the vertical gray bars in the background show you the sample points where the curve you drew is actually evaluated for the given zoom. since wavelets work on a discrete number of bands, which depends on the actual pixel resolution, we use this step to translate your input (what you actually meant) into what the algorithm understands (the closest we can get at the current zoom). this makes sense in the context of darktable’s scale-invariant pixel pipeline, and given that our wavelets are shift-invariant and thus able to detect edges at every location and scale.
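to give a feeling for the numbers, here is a tiny sketch of how the number of bands and the sample positions could be derived from the pixel size. the exact formula is an assumption for illustration, not taken from darktable’s source:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
      /* e.g. a zoomed-out preview; a full-resolution export would give
       * more bands and thus more sample points on the curve */
      const int width = 1440, height = 900;
      const int size = width < height ? width : height;
      const int nbands = (int)floorf(log2f((float)size)); /* ~ one band per power of two */
      printf("%d bands, curve sampled at:\n", nbands);
      for (int i = 0; i < nbands; i++)
        printf("  band %d -> x = %.3f\n", i, (float)i / (float)(nbands - 1));
      return 0;
    }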

further possibilities

quite a few. this is the exercises-for-the-reader section:

  • increase saturation only for small things (e.g. berries)
  • achieve a bloom effect
  • chroma denoise your image

if you cheat by looking at the presets, try to understand why they do what they do! :)

another useful hint: the scrollwheel works like it does in blender’s proportional edit.

references

  • [swe97] wim sweldens, the lifting scheme: a construction of second generation wavelets. siam j. math. anal. 29, 2 (1997).
  • [far07] raanan fattal, maneesh agrawala, szymon rusinkiewicz, multiscale shape and detail enhancement from multi-light image collections. siggraph 2007.
  • [fat09] raanan fattal, edge-avoiding wavelets and their applications. siggraph 2009.
  • [dsh10] holger dammertz, daniel sewtz, johannes hanika, hendrik lensch, edge-avoiding a-trous wavelet transform for fast global illumination filtering. high performance graphics 2010.
  • [hdl11] johannes hanika, holger dammertz, hendrik lensch, edge-optimized a-trous wavelets for local contrast enhancement with robust denoising. pacific graphics 2011. (see attached pdfs below)
    • hdl11_talk.pdf – talk, september 17 2011
    • hdl11_paper.pdf – edge-optimized à-trous wavelets for local contrast enhancement with robust denoising
These are comments from the old website, archived as static HTML
  1. This comment is unrelated to this specific post, but I have a feature request.

    Darktable is coming together nicely with many compelling features and alternative toning methods (as mentioned on this post and others). But as a professional photographer one of my absolute most important needs is complete IPTC metadata editing. Darktable has a few of the most-used fields available, but I use nearly every editable field in the IPTC standard as I annotate images -- categories, source, credits, instructions and on down to the IPTC contact info I find critical to let those who find my images find me.

    Please continue developing this feature!

    Thanks.
  2. Job van der Zwan on Thu Jan 12 00:58:16 2012:
    This comment is related to this specific post ;)

    Thank you for the explanation! Now I have a bit more feeling for what the different sliders do and how to play with them.
  3. Kevin, I have posted your feature request into a ticket here: https://sourceforge.net/apps/trac/darktable/ticket/428#comment:1
    Feel free to add more information or questions or whatever. :-)