


Resharpening for printing

We know the problem: you want to send a (hopefully) cleanly edited image to print and are looking for a suitable resharpening method. Most people simply open a sharpening filter and turn up the radius until the picture looks sharp enough. And since we all know that you have to sharpen more strongly for print than for screen display, the slider gets pushed a little further than usual.

Was everything done right? Probably not.

If you deal with images and retouching a lot, you will immediately recognize an over-sharpened image. Usually it is the hair that looks too dry, but the skin also suffers noticeably from over-sharpening.

The formula of happiness

Fortunately, there is a formula for printing that provides optimal sharpening!

r = (d × res) / 2500

The variables for sharpening

  • r is the radius. Photoshop uses this value, for example, in Unsharp Mask (it can also be used for sharpening in Capture One). This is the value we want to determine.
  • For that, we first need the viewing distance (d), i.e., the distance from which the image will usually be viewed. The unit is inches (remember: 1 inch corresponds to 2.54 centimeters).
  • Furthermore, the resolution of the image is essential (res). The unit is dpi, i.e., dots per inch.

An Example

I want to upgrade my living-room wall with a beautiful picture of a girl in the hedge, and of course I want to achieve maximum sharpness. My couch is 4 meters away from this picture, and I sit there every day, so this is my optimal viewing distance (4 meters = 157.48 inches). My picture has a resolution of 240 dpi, which results in the following calculation:

r = (157.48 × 240) / 2500 = 15.11808

The result: my optimal sharpening radius is 15.11808, so 15.1, because Photoshop does not accept a more precise value.
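If you would rather not punch these numbers into a calculator every time, here is a minimal Python sketch of the formula above. The constant 2500 follows from the worked example; the function and variable names are my own:

```python
# Minimal sketch of the print-sharpening formula: r = (d * res) / 2500,
# with d in inches and res in dpi.

CM_PER_INCH = 2.54

def sharpening_radius(distance_cm: float, resolution_dpi: float) -> float:
    """Unsharp-mask radius for a given viewing distance (cm) and print resolution (dpi)."""
    distance_inch = distance_cm / CM_PER_INCH
    return distance_inch * resolution_dpi / 2500

# The living-room example: 4 m viewing distance, 240 dpi print.
print(round(sharpening_radius(400, 240), 1))  # -> 15.1
```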

This value is great, but enjoy it with caution: it assumes the optimal print medium, and that does not exist. On canvas or matte paper you need experience, or you talk to your print contact, because sometimes stronger or weaker sharpening is necessary. Photoshop therefore offers methods such as smart objects or sharpening on separate layers for later adjustments, meaning full control.

Nevertheless, one thing stands out here: the size of the image is entirely irrelevant; only the viewing distance and the resolution per inch determine the sharpening radius.

Background information: What does the viewing distance have to do with it?

Well, the distance determines which details our eyes can still perceive from that far away. The sharpening radius should, of course, correspond to these details.

By the way, if you don't feel like working through the formula manually or even remembering it, you will find a practical calculator on the German website www.fineartprint.pro. The page is worth a bookmark.

The essence of this text comes from Paul Santek, the operator of the site mentioned above. I had the pleasure of listening to one of his sessions at the BarCamp event, and I am pretty blown away by the topic of printing.

Did you miss last week’s blog article? Get more background information regarding sharpness here.

Do you have any suggestions, additions, is this post out of date, or have you found any mistakes? Then we look forward to your comment.
You are welcome to share this post. We are very grateful for every recommendation.

Photo by ROMBO from Pexels

The concept of sharpness

You see them again and again: totally over-sharpened images. Funny halos appear, hair looks super dry, and somehow everything ends up looking cheap.

What is sharpness?

Sharpness is contrast: contrast in the form of brightness differences at edges and details, but also color contrast or saturation contrast. Yes, even the image content can affect the perceived sharpness.

When sharpening, Adobe Photoshop looks for edges and makes them lighter on one side and darker on the other. Photoshop has no concept of color contrasts; it can only create luminance contrasts.
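As a rough illustration of this edge brightening and darkening, here is a small Python sketch of a luminance-only unsharp mask built with Pillow and NumPy. It approximates the idea rather than Photoshop's exact implementation, and the file name is only a placeholder:

```python
# Sketch: unsharp masking applied to the luma channel only. The high-pass
# (original minus blurred) is added back scaled, so edges become lighter on
# one side and darker on the other without touching the color channels.
import numpy as np
from PIL import Image, ImageFilter

def sharpen_luma(img: Image.Image, radius: float = 2.0, amount: float = 0.8) -> Image.Image:
    y, cb, cr = img.convert("YCbCr").split()          # split into luma and chroma
    y_arr = np.asarray(y, dtype=np.float32)
    blurred = np.asarray(y.filter(ImageFilter.GaussianBlur(radius)), dtype=np.float32)
    y_sharp = np.clip(y_arr + amount * (y_arr - blurred), 0, 255).astype(np.uint8)
    out = Image.merge("YCbCr", (Image.fromarray(y_sharp, mode="L"), cb, cr))
    return out.convert("RGB")

# sharpened = sharpen_luma(Image.open("portrait.jpg").convert("RGB"))  # "portrait.jpg" is hypothetical
```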


Let’s take a closer look and do the same thing again:


Pay attention to the eye: it is nice and sharp in this version. Unfortunately, the skin texture suffered, the hair is strawy, and the hands "glow".

But for luminance processing there is an excellent manual tool: Dodge & Burn. So here is the "manually sharpened" version:


In this example, too, the eye is sharp – and there are no side effects. Incidentally, this took almost 2 minutes.

Sharpness through color contrasts

Let’s take a look at this image here:


The two colors are very similar: the red and the orange differ only by a small offset in hue; saturation and luminance are the same. If we now change the hue of one of these color fields (but leave saturation and luminance the same), this is the result:


The separation between the two fields is so strong that even the JPG compression reaches its limits and shows artifacts in the middle. The contrast is extreme here, achieved only by changing the hue.

Transferred to an image, you could apply it as follows:

Here the hue was changed only minimally, and a sharper image emerged. A very subtle effect, again without side effects.
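If you want to experiment with this yourself outside of Photoshop, here is a small Python sketch (using the standard colorsys module) that nudges only the hue of a color while leaving its HLS lightness and saturation untouched. The RGB values and the 20° offset are made-up examples, not the values from the image above:

```python
# Sketch: shift only the hue of a color, keeping HLS lightness and saturation
# constant - the kind of minimal hue change described above.
import colorsys

def shift_hue(rgb: tuple[int, int, int], degrees: float) -> tuple[int, int, int]:
    r, g, b = (c / 255.0 for c in rgb)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    h = (h + degrees / 360.0) % 1.0
    return tuple(round(c * 255) for c in colorsys.hls_to_rgb(h, l, s))

red = (230, 40, 30)            # example color, not taken from the article
print(shift_hue(red, 20))      # nudged 20 degrees toward orange
```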

Conclusion

If you want sharp images, keep contrasts and contrast edges in mind while retouching; this usually makes subsequent sharpening obsolete. During Dodge & Burn, I always try to darken edges a few percent more and to lighten the other side of the edge slightly. When it comes to colors, I pay attention to color harmonies, so that you automatically achieve very harmonious, but at the same time crisp, contrasts.

A little hint at the end

If you sharpen in the raw converter (that is, before retouching) or pull the saturation up, you will get a sharper picture, but in the end you do the work twice: all the problems emphasized by this global sharpening have to be eliminated again later. It is better to use the raw conversion for a somewhat flatter but balanced image and to increase the sharpness deliberately during retouching.

Stay tuned for the next blog article regarding sharpness in the upcoming week.



Photo by cottonbro from Pexels

New tutorial – Limited Time Offer

There is excellent news for all our German fans! In the course of the recent coronavirus events, we took part in a great tutorial campaign to support other photography artists and recorded a 30-minute video on retouching hair.

The tutorial will be available from March 26th to March 28th, 2020. A whole three days! Limited offer! Click here to get to the shop.
We hope it helps you fill your free time sensibly, and we hope you have a lot of fun with it!

Participating artists who also created tutorials include:

We hope to have good news for our international fans soon. Of course, we didn’t forget you!

Attached is a link to a German podcast about the background of the die3tage.com campaign.

Stay healthy!


Luminosity vs Brightness

Conny Wallström posted a video on YouTube about “Brightness vs. Luminosity inside of Photoshop”. As always, a somewhat more technical topic, but fascinating:
It is about the possibilities of transforming an image into its black-and-white representation.
Especially when you use such a conversion as a helper layer while retouching, you have to be very careful, otherwise you quickly get massive color problems.

Conny shows the different interpretation possibilities in Photoshop:


 

The initial picture:


Here we have six areas in different colors. If we look at red and green in Photoshop, we get the following values:

In the RGB color model:

RED: R=255, G=0, B=0
GREEN: R=0, G=255, B=0

RGB = Red, Green, Blue.

In the HSB model, it looks like this:

RED: H=0°, S=100%, B=100%
GREEN: H=120°, S=100%, B=100%

HSB = Hue, Saturation, Brightness.

If the saturation of these colors is set to 0% in the HSB model, pure white is the result, because the brightness of 100% is retained.

Desaturate

If you convert the example shown above to black and white via Photoshop's "Desaturate" command, this is the result:


This indicates that this function is based on the HSL (Hue, Saturation, Lightness) color model: in the HSB model, all surfaces would be white; here they are neutral grey.
The "Hue/Saturation" adjustment layer also uses the HSL model.

Image Mode “Grayscale”

When our image is desaturated via the menu item "Image -> Mode -> Grayscale", yet another image is created: it is based neither on the HSL nor on the HSB model. What we see here is the perceived, or subjective, brightness (luminosity).


Blending mode color

One way to view an image as the luminosity equivalent of its colors is to place a black or white color layer above the image in the "Color" blending mode.
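To make the difference tangible, here is a small Python sketch that computes all three interpretations for pure red and pure green. The luminosity weights are the common Rec. 601 coefficients, which only approximate what Photoshop does internally:

```python
# Three ways to turn a color into "gray": HSB brightness (max), HSL lightness
# ((max + min) / 2), and a perceptual luminosity (Rec. 601 weights used here
# as an approximation of Photoshop's behaviour).
def gray_values(r: int, g: int, b: int) -> dict:
    mx, mn = max(r, g, b), min(r, g, b)
    return {
        "HSB brightness": mx,
        "HSL lightness": (mx + mn) / 2,
        "luminosity": round(0.299 * r + 0.587 * g + 0.114 * b),
    }

print(gray_values(255, 0, 0))  # red:   {'HSB brightness': 255, 'HSL lightness': 127.5, 'luminosity': 76}
print(gray_values(0, 255, 0))  # green: {'HSB brightness': 255, 'HSL lightness': 127.5, 'luminosity': 150}
```

Red and green land on identical values in HSB and HSL, but on clearly different luminosity values, which is exactly why only the luminosity-based views reflect what our eyes perceive.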


In practice

Why should the difference between luminosity and brightness matter?

Especially in Dodge & Burn work, it is helpful to temporarily hide the colors via a helper layer. However, it is important to choose a helper layer that reflects the subjective brightness, and that is not the case in the HSB mode.
If you work on an image through an HSB grayscale view, you can change the hue as much as you like and you won't see any difference in the image, at least not as long as the helper layer is active. That can destroy a few hours of work.

Conny has provided a lovely example:

Yellow is always perceived as brighter than blue.
The same image desaturated in the HSB model: both colors appear equally bright.
Desaturated in the HSL model, the subjective brightness is correct again.

Another example, this time one of my own:

A gradient across the whole color scale.
In the HSB model, this image also becomes plain neutral grey.
With the black layer in the "Color" blending mode, the perceived brightness is maintained.

 

Here the HUE value was increased by 180° in a hue adjustment layer.
HSB conversion: everything turns grey again.
HSL conversion: the correct luminance is preserved (here shown again via the layer in the "Color" blending mode).

Especially in portraits, where we take care to avoid color shifts and/or spend a lot of time correcting them, knowing about these helper layers is very important. Photoshop uses these three modes in different places, and you should always be aware of which one is used where.

This article is a partial translation of the video "Brightness vs. Luminosity inside of Photoshop" by Conny Wallström.
Some things have been translated very freely and expanded with my own examples.
I strongly recommend watching the video at some point; you can learn a lot there.



The “Solar Curve” (Solarization curve)

What is the “Solar Curve”?

The "Solar Curve" looks like a wave created from the normal linear curve in Photoshop. Usually 4 or 6 points are inserted. Mathematically, you divide the entire range (0-255) into 5 or 7 parts and set the points accordingly:

4-point Solar Curve

If you want to set 4 new points, simply divide 255 (the maximum) by five and get 51. These points result accordingly:

Input / Output
0/0
51/255
101/0
152/255
203/0
255/255

It will look like that:

6-point Solar Curve

If you want to set 6 new points, simply divide 255 (the maximum) by seven and get roughly 36. These points result accordingly:

Input / Output
0/0
36/255
72/0
109/255 (exactly 108, rounded up here)
145/0
181/255
218/0 (exactly 217, rounded up here)
255/255

It will look like that:

What does the solar curve do with the image?

Example:
As you can see in the picture, the Solar Curve maps even small contrasts to extreme changes. This creates a very alienated but also revealing view. It is ideal for spotting sensor spots or single hairs, and also for judging the gentle transitions between light and shadow much more easily.
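If you would rather build the curve programmatically than click in the points, here is a Python sketch that turns the 6-point Solar Curve into a lookup table and applies it to an image. It uses linear interpolation between the points, which only approximates Photoshop's smoother curve interpolation, and the file name is a placeholder:

```python
# Build the 6-point solar curve as a 256-entry lookup table and apply it to
# every channel of an 8-bit RGB image.
import numpy as np
from PIL import Image

POINTS_IN  = [0, 36, 72, 109, 145, 181, 218, 255]   # input values from the article
POINTS_OUT = [0, 255, 0, 255, 0, 255, 0, 255]        # alternating output values

LUT = np.interp(np.arange(256), POINTS_IN, POINTS_OUT).astype(np.uint8)

def apply_solar_curve(img: Image.Image) -> Image.Image:
    return Image.fromarray(LUT[np.asarray(img.convert("RGB"))])

# preview = apply_solar_curve(Image.open("retouch_check.tif"))  # hypothetical file name
```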

Here it reveals everything that could somehow be "dirty". So here is a round of criticism of my own work (from a time when the Solar Curve was not yet part of my standard workflow):


On the left, you can see a bit of banding; at the top, a sensor spot was overlooked; and there are still a lot of spots on the forehead. Yeah. Did you see any of that in the picture above?

Why do you need such precise editing?

This question often pops up (“No one sees that anyway”), and usually, that’s true.
One thing you must not forget: not every screen is the same. What your screen may not display can look completely different on another one (possibly a disastrous low-budget discount screen that spent years in the smoke-filled office of the pro-chain-smoker association and has long seen its best days, if you can even say that). Under such unnaturally extreme shifts, those editing errors are suddenly anything but invisible.

Another example is backlit displays: everything that is printed and then backlit has to be very smooth and clean. Mirror foil is also really mean and does not always bring out the best in a retouch.

When you work for a client, you never know what they will do with the files. Maybe only a brochure is planned today, but a bit later an exhibition or trade fair may call for a large-format display. You never know.

Working with a visible Solar Curve?

Doesn't that sound tempting?
Immediately seeing where these minimal changes still have to be made, seeing the little luminance issues glowing in the night during Dodge & Burn: that sounds great, doesn't it?
I use the Solar Curve for sensor spots, clone-stamp work, double-checking, and hair. In short: everything that you want to remove 100% cleanly and that has very low contrast.
Usefully, you can also view it as a negative or with additional contrast enhancement to unmask even more problem areas.

For everything else: leave it alone. You run the risk of retouching away all naturalness.



Banding – the whole truth in a nutshell

Presumably every one of you knows it: you retouch an image forever, retouch your fingers sore, and when you want to show the final image, strange rings appear in the color gradients.
These rings are called "banding" and are nothing more than breaks in the tonal values.

Before I explain where these tonal breaks come from and how to avoid them, I would like to show you a small example:

As you can see in the background, on the right side we have a problematic gradient, more a staircase in the brightness curve than a smooth ramp. This is banding.

How is banding created?

You can break it down very quickly: If there are more pixels than brightness levels, banding can occur.

Let's take a look at how a color image is put together. We have three channels: red, green, and blue. Each of these channels has up to 256 levels, so 256 * 256 * 256 different pixel values are possible. Different pixel values mean that they differ in luminance, hue, or saturation.

In our example, all three channels have the same value because it is a black-and-white picture, so only 256 values are available. The first point (the brighter one in the middle of the image) has an RGB value of 78, the second point an RGB value of 28, and the distance between them is 452 px. In purely mathematical terms, that means 50 steps over roughly 450 px, so the value should change by one step every 9 px. In theory, that should still be a clean gradient with invisible steps, but it is not.

The measured case is, of course, the ideal case. During editing, however, a lot was pushed back and forth, made a bit lighter here, a bit darker there. That changes the usable range, and holes appear in the histogram.

With a Levels adjustment, I stretched the used range and marked the holes so they are easier to see:

These holes are breaks in the tonal values, which show up as problematic gradients.

Problems caused by compression

To understand why banding can appear when uploading to the web, let's deal briefly and superficially with the JPEG algorithm. How are pictures compressed?

  1. Compression by conversion to the YCbCr color model

The image is not saved as RGB channels (red, green, blue) but as Y-Cb-Cr (luminance, blue-yellow chroma, red-green chroma). While the luminance channel is stored at full resolution, the other two are stored at a reduced size. Very simplified, that looks as follows:

The RGB image is decomposed into a 100% version of the luminance (Y) and a scaled-down version of each of the blue-yellow color information (Cb) and the red-green information (Cr). It has been found that this process saves a lot of data and, at the same time, still looks very good, except when it comes to banding.
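As a rough sketch of just this decomposition step (not a real JPEG encoder), the following Python snippet converts an image to YCbCr with Pillow, keeps Y at full resolution, and stores Cb and Cr at half size. The file name is a placeholder:

```python
# Decompose an RGB image into full-resolution luminance and half-resolution
# chroma, roughly mimicking the first step of JPEG compression.
from PIL import Image

def split_ycbcr(img: Image.Image):
    y, cb, cr = img.convert("YCbCr").split()
    half = (img.width // 2, img.height // 2)
    return y, cb.resize(half), cr.resize(half)    # Y stays full size, chroma is subsampled

# y, cb, cr = split_ycbcr(Image.open("portrait.jpg"))   # hypothetical file name
# print(y.size, cb.size)                                # the chroma channels keep only a quarter of the pixels
```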

* Note: the real process is much more complicated. If you would like to dig deeper, there is more information here; we sweep it under the carpet at this point in order not to bore you with subsampling, pixel rearrangements, and so on.

Next, these three channels are broken down into blocks, typically 8×8 pixels. These blocks are then smoothed a bit, which further reduces the tonal differences within each block. And since not the absolute value but the difference to the previous value is saved, that saves data yet again.

The image becomes much smaller; if you want to see how efficient this method is, just compare an uncompressed TIF against a JPG and look at the file sizes.

Of course, this block formation and smoothing lead to further holes in our color gradient – making the banding stronger. The higher the compression, the more banding.

I could keep writing for hours, but the channel decomposition with its different scaling, the block formation, and the pixel-to-tone-value ratio are the strongest causes of holes in the tonal values.

With this background knowledge, one can draw an interesting conclusion to the camera:

The higher the resolution and the better the noise performance, the greater the tendency for banding.

What can we do against banding?

If you edit an image, you can already see during raw conversion whether it might tend toward banding: whenever large, smooth gradients are present, it is better to edit in 16-bit. In 16-bit, instead of 256 values per channel, 65,536 values are available, so no holes should arise in the histogram during processing.

By the way, you can find a detailed article about 8-bit vs. 16-bit here.

Before saving as a JPG, you can counteract the block formation with a slight amount of noise. Noise means more difference between neighboring pixels. Let's compare an 8×8-pixel area before and after adding noise:

As you can see, the individual pixels now differ more from each other, and the smoothing will be less successful here: the JPG algorithm has been tricked.

The best way to create such noise is to create a new layer and fill it with 50% gray via "Edit -> Fill". Then generate the noise with "Filter -> Noise -> Add Noise". Set the new layer to the "Soft Light" blending mode and control the intensity via the layer opacity. It is important, of course, that this noise is generated at the very end of the workflow: first resize, then sharpen, then add noise. If you change the order, you either sharpen the noise (and amplify it) or the downscaling reduces the noise again. So: at the very end, just before saving.
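Outside of Photoshop, the same trick can be sketched in Python: a 50% grey layer with monochromatic noise, blended in soft light and faded via an opacity factor. The soft-light formula used below is the common W3C approximation, not necessarily Photoshop's exact math, and the file names are placeholders:

```python
# Sketch of the recipe above: a 50% grey noise layer blended in "soft light",
# with the strength controlled by an opacity factor, applied right before
# quantising to 8-bit and saving.
import numpy as np
from PIL import Image

def soft_light(base: np.ndarray, blend: np.ndarray) -> np.ndarray:
    """Soft-light blend of two float arrays in the 0..1 range (W3C approximation)."""
    return np.where(blend <= 0.5,
                    base - (1 - 2 * blend) * base * (1 - base),
                    base + (2 * blend - 1) * (np.sqrt(base) - base))

def add_grain(img: Image.Image, amount: float = 0.04, opacity: float = 0.5) -> Image.Image:
    base = np.asarray(img.convert("RGB"), dtype=np.float32) / 255.0
    noise = 0.5 + np.random.normal(0.0, amount, base.shape[:2])[..., None]   # monochromatic noise around 50% grey
    blended = soft_light(base, np.clip(noise, 0.0, 1.0))
    out = base * (1 - opacity) + blended * opacity                           # layer opacity
    return Image.fromarray((np.clip(out, 0, 1) * 255).round().astype(np.uint8))

# add_grain(Image.open("final.tif")).save("final.jpg", quality=90)  # hypothetical file names
```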

How to remove banding?

If you want to remove banding from an already saved image, the process is relatively simple:
First, convert the image to 16-bit to get more latitude in the tonal values. Then the banding can be removed with a Gaussian blur, which closes the holes in the histogram. Saving then works as described above. It is better, of course, to prepare the image correctly during raw conversion (correct use of the color depth) and to be very careful with large, soft brushes.
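Sketched in Python (with NumPy and SciPy standing in for the 16-bit step and the Gaussian Blur), the repair could look roughly like this; in practice you would restrict it to the affected area with a mask rather than blur the whole image:

```python
# Debanding sketch: work in floating point (standing in for the 16-bit step),
# blur so the histogram holes close, then add a touch of noise before going
# back to 8-bit.
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

def deband(img: Image.Image, radius: float = 8.0, noise: float = 1.0) -> Image.Image:
    arr = np.asarray(img.convert("RGB"), dtype=np.float32)
    arr = gaussian_filter(arr, sigma=(radius, radius, 0))   # blur spatially, not across channels
    arr += np.random.normal(0.0, noise, arr.shape[:2])[..., None]
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

# cleaned = deband(Image.open("banded_background.jpg"))  # hypothetical file name
```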

Further tips and tricks:

Converting to 16-bit and then flattening to the background layer can also help. Jan Wischermann shows these and a few more tips in the following video:

 


Photo by Lukas Gevaert from Pexels

The ethical responsibility in beauty retouching

I recently gave an interview on the ethical responsibility of retouching. I found the experience very moving and thought-provoking.

In this article, I would like to write down a few of these thoughts. What we see in advertisements and magazines every day feels like reality to people outside the retouching industry: impeccable models, even skin, great figures. If the retouching is done well, the processing is not visible, which creates a reality of its own. For many people, especially young people, this is a dangerous path. Aspiring to idols seems natural, at least until we eventually find our own way and follow it through.

For young people, this reality distorted by retouching can be dangerous.
"Liquify" is probably the best example: it takes a few clicks in Photoshop but is hard work in real life. And retouching that goes far beyond that ideal, toward unnatural anatomy outside the reasonable limits of the human body, is either impossible to reach in reality or only reachable in dangerous, unhealthy, or self-destructive ways.

We retouchers carry a significant ethical responsibility here, and I am very worried about future generations.

When we retouch within the limits of what is actually achievable, I think the world is fine. If someone has the will to follow this (photoshopped) ideal, it is possible to do so through hard work, through extremely hard work. It is possible, even without pathological, self-destructive methods...

In Israel, the use of retouching in advertising has been restricted by law. For many, that is the right approach. I, however, see a possible new danger in it:

Advertising is based on stereotypes; advertising WANTS super-thin models, preferably 1.80 m tall and 45 kg (about 90 pounds), dress size under 34. The retoucher supports this whole process by slimming a little here and there, making something else more prominent, and so on...

Cycle of ethical retouching responsibility

This is the usual cycle in which almost everything influences everything else. Advertising determines the ideal of beauty and thus, via photographers and clients, influences the requirements modeling agencies place on their models. These requirements in turn affect the people who have chosen "model" as a career or who want to be part of that industry.


If retouching is removed from this cycle, what will change?
In the end, nothing: only the requirements of the model agencies will become stricter, because the models with the unhealthy, self-destructive, super-thin anatomy will get the jobs. For young people, the argument "this is photoshopped!" can no longer be used, and the pursuit of an unhealthy perfection becomes the new reality.

In my opinion, greatly simplified (the interview ran for almost 3 hours): as a retoucher you carry a lot of responsibility and should be aware of it. Unnatural retouching is not only extremely disturbing to me; I think it is dangerous. Be careful and pay attention to human anatomy, learn as much as possible, and consider which shadows are essential, which body parts may be reshaped, and which wrinkles should be kept.

