GL_SRC_COLOR and GL_DST_COLOR

Hello guys,

I’m trying to understand how GL_SRC_COLOR works.

The scale factors are defined in the docs as:
Parameter      (fR, fG, fB, fA)
GL_ZERO        (0, 0, 0, 0)
GL_ONE         (1, 1, 1, 1)
GL_SRC_COLOR   (Rs/kR, Gs/kG, Bs/kB, As/kA)
GL_DST_COLOR   (Rd/kR, Gd/kG, Bd/kB, Ad/kA)

The docs also say: “They are understood to have integer values between zero and (kR, kG, kB, kA), where
kR = 2^mR - 1
kG = 2^mG - 1
kB = 2^mB - 1
kA = 2^mA - 1
and (mR, mG, mB, mA) is the number of red, green, blue, and alpha bitplanes.”

As far as I understand, kR, kG, kB, and kA are in the range [0, 255]. What I don’t get is how the scale factors are calculated from that.

Let’s say the source color I have is (0.5, 0.5, 0.5, 0.5) and the dest color is (0.1, 0.2, 0.3, 0.4).

If I specify the blend func as (GL_SRC_COLOR, GL_DST_COLOR) and blend equation is ADD, what would be the final color?

Thank you!
Truong

The blending equation looks as follows:

(srcColor * <srcFactor>) <op> (dstColor * <dstFactor>)

Here srcColor is the fragment color, i.e. the color of what you are going to render, and dstColor is the framebuffer pixel color, i.e. the color of where you are going to render.
<srcFactor> and <dstFactor> are the factors you use to scale the source and destination colors (set via glBlendFunc) and <op> is the blend operator (set via glBlendEquation).
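
For reference, this maps onto GL state roughly as follows (a minimal sketch in C, assuming a current context with blending support):

glEnable(GL_BLEND);                        /* turn blending on */
glBlendFunc(GL_SRC_COLOR, GL_DST_COLOR);   /* <srcFactor>, <dstFactor> */
glBlendEquation(GL_FUNC_ADD);              /* <op> */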

Thus, in your example, if <srcFactor> is SRC_COLOR, <dstFactor> is DST_COLOR, and <op> is ADD, your equation ends up as follows:

(srcColor * srcColor) + (dstColor * dstColor)

i.e. you add the squares of the source and destination colors.
In your example this results in (0.26, 0.29, 0.34, 0.41).
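
Per channel that works out to:

R: 0.5 * 0.5 + 0.1 * 0.1 = 0.25 + 0.01 = 0.26
G: 0.5 * 0.5 + 0.2 * 0.2 = 0.25 + 0.04 = 0.29
B: 0.5 * 0.5 + 0.3 * 0.3 = 0.25 + 0.09 = 0.34
A: 0.5 * 0.5 + 0.4 * 0.4 = 0.25 + 0.16 = 0.41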

If you simply want additive blending, use a <srcFactor> and <dstFactor> of ONE, which results in:

srcColor + dstColor
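
In code that is (a sketch, assuming GL_BLEND is already enabled):

glBlendFunc(GL_ONE, GL_ONE);
glBlendEquation(GL_FUNC_ADD);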

If you want alpha blending, use a <srcFactor> of SRC_ALPHA and a <dstFactor> of ONE_MINUS_SRC_ALPHA:

(srcColor * srcAlpha) + (dstColor * (1-srcAlpha))
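
The corresponding calls (again assuming GL_BLEND is enabled):

glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBlendEquation(GL_FUNC_ADD);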

This is, of course, a simplification, as you can specify more complex blending configurations, including separate factors for color (RGB) and alpha, etc.
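
For example, glBlendFuncSeparate takes separate factor pairs for RGB and alpha; one common sketch is standard alpha blending for color while compositing alpha correctly in the destination:

glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,  /* RGB factors */
                    GL_ONE, GL_ONE_MINUS_SRC_ALPHA);       /* alpha factors */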

In the man page it actually says kX = 2^mX - 1, so if you have 8 bits per color channel, mR = mG = mB = mA = 8 and kR = kG = kB = kA = 255.

The color values Rc, Gc, Bc, and Ac (let’s call the generalization Xc) are understood as integer values between 0 and kX (e.g. kR for Rc). To get a normalized color value (in the range 0.0 to 1.0), you divide the integer value by the maximum value: Xc / kX.
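
In C that conversion is simply (a sketch assuming 8-bit channels, i.e. kX = 255):

float normalize_channel(unsigned char xc)
{
    /* Xc / kX: maps the integer range [0, 255] to [0.0, 1.0] */
    return (float)xc / 255.0f;
}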

Let’s say you have 8 bits per color channel and the values you gave are the normalized results (the original integer values, rounded to the nearest integer, would be (128, 128, 128, 128) and (26, 51, 77, 102)).

Since the source factor is GL_SRC_COLOR, the source color (0.5, 0.5, 0.5, 0.5) is multiplied by itself, resulting in (0.25, 0.25, 0.25, 0.25).
The destination factor is GL_DST_COLOR, so the destination color (0.1, 0.2, 0.3, 0.4) becomes (0.01, 0.04, 0.09, 0.16).

Then, the resulting values are added to give the final color value: (0.26, 0.29, 0.34, 0.41)
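
If you want to verify this on the CPU, here is a minimal standalone C sketch of the whole blend (plain arithmetic, not actual GL calls):

#include <stdio.h>

int main(void)
{
    /* (src * src) + (dst * dst) per channel, i.e. blend factors
       GL_SRC_COLOR / GL_DST_COLOR with equation GL_FUNC_ADD */
    float src[4] = {0.5f, 0.5f, 0.5f, 0.5f};
    float dst[4] = {0.1f, 0.2f, 0.3f, 0.4f};
    float out[4];
    int i;

    for (i = 0; i < 4; ++i)
        out[i] = src[i] * src[i] + dst[i] * dst[i];

    printf("(%.2f, %.2f, %.2f, %.2f)\n", out[0], out[1], out[2], out[3]);
    /* prints (0.26, 0.29, 0.34, 0.41) */
    return 0;
}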

edit: looks like I was a little bit too slow