Screen Space Gradient Shader with Dithering in Unity

[Image: office2-small]

On Friday I came across an interesting art-related problem with a really simple technical solution that I think is worth sharing. Let's talk gradients:

In Startup Freak the levels consist of an office and a city background. The background itself consists of two parallax layers: the "near" buildings that are fully colored, and the "far" buildings that are more of a silhouette with a subtle blue gradient. Rob Hayes did a fantastic job of creating the art style and colors for these. Each building was a separate sprite, so I could mix and match them and create varied backgrounds.

Once I sat down to actually build a level, I hit a hurdle with the far backgrounds: see, the gradients on the different buildings did not strictly match each other at a given height. More importantly, I wanted to put the buildings at varied heights so the mismatch would be a problem regardless. What I really wanted was one smooth, continuous background. Instead I got this:

[Image: background-original-small]

As you can see, the gradient mismatches are very jarring. Now, a more established team might just consider this an art problem and have the in-house artist rebuild the gradient for each background after it has been composed (and it probably wouldn't be too hard). But I'm outsourcing the art and don't have the luxury of recommissioning a lot of work. Enter the gradient shader:

A Linear Gradient Shader

As it turns out, creating a gradient shader in screen space, especially a two-color linear gradient, is fairly trivial. Note that I'm using a tile map, so the background consists of layer meshes with an even grid of quads. The regularity of the grid means that I can calculate the gradient color on a per-vertex basis; the color is then interpolated automatically between vertices. If you are drawing uneven meshes or entire sprites as one quad, there is no reason why you can't do the exact same calculations in the fragment shader instead. I doubt you would notice any performance hit from it on most hardware.
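
For example, a hypothetical per-fragment version could pass a screen position down from the vertex shader and compute the gradient factor there. A rough sketch (not the shader I actually use; it assumes the _BottomColor/_TopColor properties introduced below):

// in the vertex shader:
//     OUT.screenPos = ComputeScreenPos(OUT.position);
// in the fragment shader:
//     float factor = saturate(IN.screenPos.y / IN.screenPos.w);  // 0 = bottom, 1 = top
//     half4 gradientColor = lerp(_BottomColor, _TopColor, factor);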

First we declare a few properties to use in the shader:


Properties
{
    _MainTex("Texture", 2D) = "white" {}
    _BottomColor("Bottom Color", Color) = (1,1,1,1)
    _TopColor("Top Color", Color) = (1,1,1,1)
}

Note that we still want the original texture, but only for its alpha channel. There are probably more optimal ways of storing this data so we are not wasting three channels per texture… but my game is small and for PC, and who cares 🙂
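
As usual, the properties also need matching variable declarations inside the CGPROGRAM block for the code below to compile. A minimal sketch, assuming standard types:

sampler2D _MainTex;
float4 _MainTex_ST;   // required by TRANSFORM_TEX
half4 _BottomColor;
half4 _TopColor;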

The vertex shader looks like this:


v2f vert(appdata_t IN)
{
    v2f OUT;
    OUT.position = UnityObjectToClipPos(IN.position);
    OUT.uv = TRANSFORM_TEX(IN.uv, _MainTex);

    // Map the clip space y value from [-1, 1] to [0, 1] (bottom to top of the screen).
    // _ProjectionParams.x handles the projection being flipped on some setups (see note below).
    float factor = mad(OUT.position.y, _ProjectionParams.x*0.5, 0.5);
    factor = clamp(factor, 0, 1);

    // Blend between the two gradient colors at this vertex
    OUT.color = lerp(_BottomColor, _TopColor, factor);
    return OUT;
}

Very simple: we have to transform the vertex position to clip space anyway, so OUT.position already contains x and y values in the range [-1, -1] to [1, 1]. All we need to do is map the y value to the [0, 1] range such that 0 is the bottom of the screen and 1 is the top. We use clamp() to make sure we don't go outside this range (this becomes a little more useful later), and finally use lerp() to calculate the color at this vertex.

Just a note: in certain scenarios, even on the same video card, the projection that comes out of UnityObjectToClipPos is flipped vertically. I still haven't been able to determine why, but multiplying by _ProjectionParams.x takes care of this.
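
For reference, the vertex shader above assumes input and output structs roughly along these lines (the member names are taken from the code; the exact layout in my shader may differ):

struct appdata_t
{
    float4 position : POSITION;
    float2 uv       : TEXCOORD0;
};

struct v2f
{
    float4 position : SV_POSITION;
    float2 uv       : TEXCOORD0;
    half4  color    : COLOR;
};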

The fragment shader is even simpler:


half4 frag(v2f IN) : SV_Target
{
    half4 texCol = tex2D(_MainTex, IN.uv);

    half4 c;
    c.rgb = IN.color.rgb;   // gradient color computed in the vertex shader
    c.rgb *= texCol.a;      // premultiply by alpha (Blend One OneMinusSrcAlpha)
    c.a = texCol.a;         // the texture alpha defines the building outline
    return c;
}

We use the RGB values from the color we calculated in the vertex shader, and get the alpha from the texture, which dictates the outline of the building. As is typical, we need to multiply the output color by the alpha value so that alpha blending works as expected (I'm using Blend One OneMinusSrcAlpha mode).
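
For reference, that blend mode corresponds to render state along these lines in the shader's Pass (a sketch based on the standard Unity transparent setup, not necessarily my exact file):

Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
Blend One OneMinusSrcAlpha
ZWrite Off
Cull Off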

Here is how the far background looks at this point:

[Image: background-gradient-nooffset-small]

Gradient Offset

Ahhh, much better, but it still doesn't look quite right. It's a little too flat. When I compared it to the original sprites, I realized that Rob was using a much more "crunched" gradient: that is, the start and end points of the gradient were not at the extremities of the image, but rather somewhere in the middle. Guess what: we can easily do this in our shader with an Offset value:


// Map the clip space y value to [0, 1] as before
float factor = mad(OUT.position.y, -0.5, 0.5);

// Stretch the gradient so its start and end points move past the screen edges
factor *= 1 + _Offset*2;
factor -= _Offset;

// Debug: paint anything outside the gradient range red
// if (factor <= 0 || factor >= 1.0)
// {
//     OUT.color = half4(1, 0, 0, 1);
// }
// else
// {
    factor = clamp(factor, 0, 1);
    OUT.color = lerp(_BottomColor, _TopColor, factor);
// }

return OUT;
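
Note that this assumes an _Offset property exposed alongside the others, roughly:

// In Properties:
//     _Offset("Offset", Float) = 0
// In the CGPROGRAM block:
//     float _Offset;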

You can use other formulas if you want a non-symmetric offset. I also found the commented-out code above useful for debugging; it basically shows us where the gradient starts and ends:

[Image: testing to see where the gradient start and end points are (Offset = 1)]

[Image: screen space gradient with Offset = 1]

Banding Problems

This already looked pretty good, but I started noticing very subtle banding going on, a problem that plagues gradients all over the digital world. It might not be very noticeable here, but at full screen it's definitely visible. It so happened that I had recently watched a fantastic GDC lecture by the makers of Inside that touched on the subject of banding. The solution they talk about is adding tiny amounts of noise to the shader output in order to reduce the banding.

I implemented a really basic version by simply adding a small random value to the output color. The results were already surprisingly good, and really, there is no reason why you shouldn't be using this type of noise in a lot of your shaders where banding might become an issue. I decided to take it one step further and use a blue noise texture. Check out that GDC talk and this article for further details on why blue noise is better: Free Blue Noise Textures
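
The "really basic version" can be something along these lines (a sketch using the well-known sin/dot hash; not necessarily the exact code I used):

// Cheap per-fragment pseudo-random value in [0, 1)
float rand(float2 uv)
{
    return frac(sin(dot(uv, float2(12.9898, 78.233))) * 43758.5453);
}

// in frag():
//     c.rgb = IN.color.rgb + (rand(IN.uv) - 0.5) / 255.0;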

The modified fragment shader looks like this:


float3 getNoise(float2 uv)
{
    // Sample the blue noise texture; scrolling the UVs with _Time keeps the pattern from being static
    float3 noise = tex2D(_NoiseTex, uv * 100 + _Time * 50).rgb;

    // Remap from [0, 1] to [-0.5, 1.5]
    noise = mad(noise, 2.0, -0.5);

    // Scale down to roughly one color step (1/255)
    return noise/255;
}

half4 frag(v2f IN) : SV_Target
{
    half4 texCol = tex2D(_MainTex, IN.uv);

    half4 c;
    c.rgb = IN.color.rgb + getNoise(IN.uv);  // dither the gradient color
    c.rgb *= texCol.a;                       // premultiply by alpha as before
    c.a = texCol.a;

    return c;
}

We simply get a random value from the noise texture (using _Time so that the noise doesn't stay static), map it to the [-0.5, 1.5] range, divide by 255 (since the banding error is 1/255), and add it to our output color.
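
As with the other properties, this assumes the blue noise texture is declared something like the following:

// In Properties:
//     _NoiseTex("Noise Texture", 2D) = "gray" {}
// In the CGPROGRAM block:
//     sampler2D _NoiseTex;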

There are a couple of questions which I have not been able to answer for myself:

  • All the sources talk about a modulation error that you get if you just apply uniformly distributed noise. However, in my experiments I found that the results were actually worse when I applied a triangular distribution function (see the sketch after this list). I'm not sure how to explain this, but I ended up removing that code.
  • There is also a passing mention of needing to convert to sRGB color space when applying the noise in some scenarios. I haven't looked at the mathematics of this closely enough, but in my experiments I did not see any visible improvement when doing that, so I left it out.
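
For reference, one common way to reshape a uniform value u in [0, 1) into a triangular distribution on [-1, 1] is the inverse-CDF remap below (a sketch for illustration; the code I removed may have differed):

float triangular(float u)
{
    // Inverse CDF of a symmetric triangular distribution centered at 0
    return u < 0.5 ? sqrt(2.0 * u) - 1.0 : 1.0 - sqrt(2.0 - 2.0 * u);
}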

Results

[Image: dithering]

Here are a few more screenshots of some of the levels I have built.

[Images: office1-small, office2-small, office3-small, office4-small]

Source Code

Here is the complete shader code. Feel free to use it as you need.
