
WIP Generate all Pixel Blender/Composer possible combinations to solve #535 #641

Merged
merged 19 commits into from
Aug 19, 2018

Conversation

vpenades
Contributor

Prerequisites

  • I have written a descriptive pull-request title
  • I have verified that there are no overlapping pull-requests open
  • I have verified that my contribution matches the existing coding patterns and practices as demonstrated in the repository. These follow strict Stylecop rules 👮.
  • I have provided test coverage for my change (where applicable)

Description

This is the first stage of trying to solve #535. As explained in the issue, pixel operations involve both color blending and alpha composing, while the current implementation assumes you can only do one of the two at a time.

The changes I've made theoretically allow having pixel blenders for every combination of Color Blenders and Alpha Composers (a sketch follows the list below).

What's left:

  • Testing all combinations!!
  • Right now, choosing a pixel blender is done with the PixelBlenderMode enumeration, which needs to be extended. I've left it untouched for now so we can discuss how to extend it.
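
To make the idea concrete, here's a minimal sketch (not the PR's generated code) of what combining the two operations amounts to, using System.Numerics.Vector4 like the snippets later in this thread; the delegate-based shape is illustrative only:

        // Sketch only: a pixel blender is the composition of a colour-blend function
        // (Normal, Multiply, Add, ...) and an alpha-composition function (SrcOver,
        // SrcAtop, SrcIn, ...). Vector4 holds non-premultiplied RGBA (W = alpha).
        public static Vector4 BlendAndCompose(
            Func<Vector4, Vector4, Vector4> colorBlender,           // blend(dst, src) -> blended colour
            Func<Vector4, Vector4, Vector4, Vector4> alphaComposer, // compose(dst, src, blended) -> result
            Vector4 dst,
            Vector4 src)
        {
            Vector4 blended = colorBlender(dst, src);
            return alphaComposer(dst, src, blended);
        }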

@JimBobSquarePants JimBobSquarePants changed the title Generate all Pixel Blender/Composer possible combinations to solve #535 WIP Generate all Pixel Blender/Composer possible combinations to solve #535 Jun 27, 2018
@vpenades
Contributor Author

Reviewing the failing tests, I've noticed that some of them, like the ones that draw an ellipse, are failing because the drawn ellipse has slightly different pixels than the reference image, probably due to changes in how paths are being drawn.

In order to prevent unrelated changes from affecting other tests, I would suggest:

  • Using loaded images instead of shapes.
  • Using a custom, very basic, shape renderer within the tests suite.

@antonfirsov
Member

@vpenades some of the test failures report relatively high difference, eg. _1DarkBlueRect_2BlendHotPinkRect_3BlendSemiTransparentRedEllipse<Rgba32>(provider: Blank250x250[Rgba32], mode: Src) has 1.14%. Are you sure it's not a bug/specification change?

@vpenades
Contributor Author

@antonfirsov it's a bug. Some of the blenders fail because of tiny differences in how the ellipse is currently being drawn, but others fail because of incorrect blending.

I'm trying to replicate some of the changes made by @dlemstra, but I'm doing the testing by hand, manually comparing the generated and expected images.

As a side note, if at some point you consider moving from XUnit to NUnit, you have my vote; I seriously miss TestContext.AttachFile(path);

@vpenades
Contributor Author

Okay, this is becoming much more complicated than I expected and it's way past the amount of time I had available for this issue... I'll leave some comments in case people have more ideas on how to solve this:

  • My original implementation of Alpha Composing was based on this; somehow I missed the last Lerp, which seems to be needed for the alpha compositions to work correctly (see the sketch after this list).
  • The current Alpha Composing tests only use Normal Color Blending, and the current Color Blending tests use SrcOver alpha composition. This, combined with using flat/opaque images, might lead tests to pass while the results are still incorrect when blending layers with translucent gradient content.
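
For reference, here's a minimal sketch of what a SrcOver composer with that final Lerp could look like, assuming the W3C compositing formula and non-premultiplied System.Numerics.Vector4 values (my reading of it, not necessarily the PR's exact code):

        public static Vector4 Over(Vector4 dst, Vector4 src, Vector4 blend)
        {
            // The "last" Lerp: where the backdrop is opaque use the blended colour,
            // where it is transparent fall back to the raw source colour.
            Vector4 source = Vector4.Lerp(src, blend, dst.W);
            source.W = src.W;

            float alpha = src.W + (dst.W * (1 - src.W));
            if (alpha == 0)
            {
                return Vector4.Zero;
            }

            // Weight source and destination by their alpha contributions,
            // then divide by the resulting alpha to un-premultiply.
            Vector4 color = ((source * source.W) + (dst * dst.W * (1 - source.W))) / alpha;
            color.W = alpha;
            return color;
        }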

Certainly, generating all possible combinations of Alpha Composing and Color Blending is tricky, but I would like to stress that it's worth the pain: some specific combinations are incredibly useful and are widely used by Photoshop for many special effects.

@vpenades
Contributor Author

vpenades commented Jul 2, 2018

Okay, I've been able to move this forward, and there are just a few combinations still failing:

  • Clear
  • (Src)In
  • (Src)Out
  • DestIn
  • DestOut

After reviewing the images produced, I found this:

Clear: the output and expected images differ by 4%, but the images look the same... I can't see where the difference is.

SrcIn, SrcOut, DestIn, DestOut:

I think the reference images are wrong; they look like "XOR"... the ones being produced seem to be more consistent with the specification:

For example, the current SrcIn output looks like this:
_1darkbluerect_2blendblackellipse_mode-in
While the reference looks like this:
_1darkbluerect_2blendblackellipse_mode-in
The specification says that (Src)In and (Src)Out must only draw the Source, thus the reference must be wrong.

@tocsoft @dlemstra what do you think?

@tocsoft
Member

tocsoft commented Jul 2, 2018

There is every possibility that the reference images are wrong... we should probably produce a bit of a crib sheet with a set of SVGs side by side with our equivalent output and manually compare until we are happy with the final output... once we are, we can update our reference images.

@antonfirsov
Member

Our current ref images should follow the definition in #493 and the article it references. If it's not the case, it means we have a bug here.

Excuse us for the super slow progress on this PR! Unfortunately, this is an area I'm really not familiar with. @JimBobSquarePants is fighting jpeg, the namespace refactor and his flu. @tocsoft works on telemetry. I think we need some extra patience with this, unless @dlemstra has some time to review it and help us understand the root cause of the differences.

@tocsoft
Member

tocsoft commented Jul 3, 2018

To be brutally honest, and to explain why I see you getting little support on this PR: this particular issue hasn't really got anyone on the core team championing it, and thus none of us has much incentive to actually help push it through to completion.

For me personally (I can't speak for the rest on this), for v1 I was perfectly happy with the working solution we had, thus all I see here is something distracting us from pushing out v1 while we potentially break more APIs.

@tocsoft
Member

tocsoft commented Jul 3, 2018

That's not to say we won't try to look it over and help where we can, just that we aren't likely to prioritise this over getting releases out.

@vpenades
Contributor Author

vpenades commented Jul 3, 2018

@tocsoft yes, I noticed, but it's something I expected; the core team has a lot to chew on and, as you say, this issue was more or less closed. For me it's enough that this PR was not rejected straight away.

Anyway, even if you think that the current solution is good enough, it still has bugs and some conceptual mistakes. I'm not blaming @dlemstra; there are several misconceptions about the composition modes, how they're used, and what to expect from them.

What I'm trying to do is to get results similar to what Photoshop produces, so my implementation is a mix of what I got from the theory and what I expect based on my experience using Photoshop.

@JimBobSquarePants
Member

@vpenades Are all these blend modes covered by the SVG spec? If so, perhaps we can use that as a definitive reference.

I must admit, I currently know very little on the subject, hence my lack of response.

@tzachshabtay

Certainly, generating all possible combinations of Alpha Composing and Color Blending is tricky, but I would like to stress that it's worth the pain: some specific combinations are incredibly useful and are widely used by Photoshop for many special effects.

@vpenades can you give a few examples of those specific combinations and what special effects can be achieved with them?

I always wondered if there are any combinations that can be particularly useful and for what.

Thanks.

@vpenades
Contributor Author

vpenades commented Jul 4, 2018

@tzachshabtay By far, the most useful combination is (Multiply|Add|Subtract) + SrcAtop.

These examples use a flat rendered font as Destination and a blurred + tinted copy of the same font as Source:

Multiply+SrcAtop (image)

Add+SrcAtop (image)

Multiply+SrcAtop using a pattern texture as Source (image)

Other combinations can be useful in certain cases, while some combinations are redundant and produce the same effect (for example, Xor & Clear produce the same result regardless of choosing Normal, Multiply, etc.), but all combinations are provided for completeness and because the code generator automatically generates all possible combinations.

The ultimate goal is to combine multiple image effects like blur, emboss, etc, with color/alpha composition to achieve things like these.
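
For illustration, here's a minimal Vector4 sketch of what the Multiply + SrcAtop combination boils down to; this is just the math on non-premultiplied values, not the generated code:

        public static Vector4 MultiplySrcAtop(Vector4 dst, Vector4 src)
        {
            // Colour blend (Multiply): component-wise product of the RGB parts,
            // applied only to the extent the destination is actually present.
            Vector4 blend = Vector4.Lerp(src, dst * src, dst.W);

            // SrcAtop: mix the blended source over the destination weighted by the
            // source alpha, and keep the destination's coverage (alpha).
            Vector4 color = Vector4.Lerp(dst, blend, src.W);
            color.W = dst.W;
            return color;
        }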

@vpenades
Contributor Author

vpenades commented Jul 4, 2018

@JimBobSquarePants It seems the SVG composition modes are explained here. I need to do a bit more research, but from a quick overview it seems all modes are covered.

Then there are extended modes, but we can really leave them for a future release:

  • Extended color blend modes: Hue, Saturation, Color, Luminosity
  • Extended alpha composition modes: all known modes using inverse alpha (InvSrcAtop, InvSrcIn, etc.)

Alpha blend modes can be done right now in two steps, but it's certainly slower.

@vpenades
Contributor Author

vpenades commented Jul 5, 2018

I've found out why the In, Out, DestIn and DestOut modes are still failing even though the images produced look the same at first sight.

Here's what the reference and output images look like, with and without alpha masking.

(Comparison images: reference vs. output, porterduffoutputiscorrect_rgba32_pd-dest_destin, shown first with full RGBA and then with RGB only.)

The difference in the result is because I've used a fast optimization that produces correct results, but leaves the RGB unmodified for fully transparent pixels.

So the image comparer fails because it determines that two fully transparent pixels are different when their RGB parts differ.

I remember this is something we discussed in the past... there are some cases in which transparent pixels should be treated as equal regardless of the values of their RGB parts... it's like making the comparison with premultiplied values.

@antonfirsov @JimBobSquarePants I am aware that in some other cases comparing the RGBA channels as independent values is necessary, but when the whole pipeline being tested essentially works with premultiplied values, as is the case for alpha composition, we need an image comparison mode that treats transparent values as equal, or, put another way, compares alpha-premultiplied values.
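
As a rough illustration of what such a comparison mode could look like (a sketch, not the test framework's actual API; the tolerance is an arbitrary placeholder):

        // Compare two pixels on their premultiplied values, so any fully transparent
        // pixel compares equal to any other regardless of its hidden RGB content.
        public static bool PremultipliedEquals(Vector4 a, Vector4 b, float tolerance = 1f / 255f)
        {
            Vector4 pa = new Vector4(a.X * a.W, a.Y * a.W, a.Z * a.W, a.W);
            Vector4 pb = new Vector4(b.X * b.W, b.Y * b.W, b.Z * b.W, b.W);
            return Vector4.Distance(pa, pb) <= tolerance;
        }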

@JimBobSquarePants
Member

@vpenades Wouldn't that be an issue with the implementation itself? I would consider that our equality operators are doing the correct thing.

@vpenades
Contributor Author

vpenades commented Jul 5, 2018

@JimBobSquarePants These are my In Out implementations:

        public static Vector4 In(Vector4 dst, Vector4 src, Vector4 blend)
        {
            // Fast path: only the alpha is updated; the RGB channels are left
            // untouched, even when the result is fully transparent.
            src.W = dst.W * src.W;
            return src;
        }

        public static Vector4 Out(Vector4 dst, Vector4 src)
        {
            src.W = (1 - dst.W) * src.W;
            return src;
        }

The complete implementation would look like this:

        public static Vector4 In(Vector4 dst, Vector4 src, Vector4 blend)
        {
            float alpha = dst.W * src.W;
          
            Vector4 color = src * alpha;                  // premultiply
            color /= MathF.Max(alpha, Constants.Epsilon); // unpremultiply
            color.W = alpha;

            return color;
        }

        public static Vector4 Out(Vector4 dst, Vector4 src)
        {
            float alpha = (1 - dst.W) * src.W;

            Vector4 color = src * alpha;                  // premultiply
            color /= MathF.Max(alpha, Constants.Epsilon); // unpremultiply
            color.W = alpha;

            return color;
        }

As you can see, the complete implementation applies a premultiply and an unpremultiply one right after the other, which is a bit nonsensical.

The trick here is that the MathF.Max prevents a division by an alpha of zero, and this construct is what produces RGBA = (0,0,0,0) for all fully transparent cases.

Edit: an alternative would be to just do if (alpha == 0) return Vector4.Zero;, but this makes it more obvious that when the function returns transparent, the RGB is also required to be (0,0,0).
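
Spelled out, that variant of In would look something like this (a sketch of the same logic):

        public static Vector4 In(Vector4 dst, Vector4 src, Vector4 blend)
        {
            float alpha = dst.W * src.W;
            if (alpha == 0)
            {
                // Fully transparent results explicitly carry RGB = (0,0,0) too.
                return Vector4.Zero;
            }

            // When alpha is non-zero the premultiply and unpremultiply cancel out,
            // so the RGB is simply the source's.
            Vector4 color = src;
            color.W = alpha;
            return color;
        }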

I've just updated the functions to use pre/un-premultiplication. I prefer the tests to pass for now; optimization can come later.

@JimBobSquarePants
Member

JimBobSquarePants commented Jul 5, 2018

        public static Vector4 In(Vector4 dst, Vector4 src, Vector4 blend)
        {
            float alpha = dst.W * src.W; 
          
            Vector4 color = src * alpha;                  // premultiply
            color /= MathF.Max(alpha, Constants.Epsilon); // unpremultiply
            color.W = alpha;

            return color;
        }

In the above sample, if dst.W * src.W is equal to 0 then the color is already <0,0,0,0> before you unpremultiply. The only thing you are avoiding by adding the MathF.Max is color becoming <NaN, NaN, NaN, 0>, which gets converted to <0,0,0,0> when packing back from the vector anyway.

It looks to me like the reference image is incorrect and the test output is actually expected, since the algorithm can only operate in the RGB space and the input image is masking RGB values with an opacity of 0. If the reference image actually used transparent pixels instead of masking then I would expect the images to match.

Not sure about the above now. The algorithm above doesn't seem to match the SVG spec.

Note I have no idea what Dca nor X, Y, Z mean since the spec writers helpfully decided not to define them.

f(Sc,Dc) = Dc
X        = 1
Y        = 0
Z        = 0

Dca' = Dca × Sa
Da'  = Sa × Da

https://www.w3.org/TR/SVGCompositing/#comp-op-dst-in
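
Reading those formulas literally (Dca/Sca are the colour channels already multiplied by their alpha, Da/Sa the alphas), dst-in collapses to a single multiply on premultiplied values. A sketch of that reading:

        // dst-in on premultiplied values:
        //   Dca' = Dca × Sa
        //   Da'  = Da  × Sa
        public static Vector4 DestInPremultiplied(Vector4 dstPremultiplied, Vector4 srcPremultiplied)
        {
            // Scaling the whole vector by the source alpha handles both lines at once,
            // because the W component is the destination alpha.
            return dstPremultiplied * srcPremultiplied.W;
        }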

@vpenades
Contributor Author

vpenades commented Jul 6, 2018

@JimBobSquarePants

In the above sample if dst.W * src.W is equal to 0 then the color is already <0,0,0,0>

So, you assume that if Alpha is 0, the RGB must be zero too? Both for the input and output of the functions?

Not sure about the above now. The algorithm above doesn't seem to match the SVG spec.

The X, Y, Z values are essentially a boolean mechanism to enable/disable parts of the complete formula. But the formulas I've seen in several places expect unpremultiplied input values yet deliver premultiplied values, or assume Dst is always opaque; that's why every attempt to implement these formulas straight from the documentation fails.

Instead, I've been trying to implement the formulas based on the expected outcome, for example:

SrcIn: "Draw Src ONLY when both Src and Dst are visible"
SrcOut: "Draw Src ONLY when Src is visible and Dst is invisible"
SrcAtop: "Draw blend(Src,Dst) when BOTH Src and DSt are visible, Draw Dst when ONLY Dst is visible"
SrcOver: "Draw blend(Src,Dst) when BOTH Src and Dst are visible, Draw Dst when ONLY Dst is visible, Draw Src when ONLY Src is visible"

Of course, "visible/invisible" is not boolean here but weighted by alpha.
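
For example, SrcAtop read that way reduces to a sketch like this (non-premultiplied values, with the colour blend already computed into blend; not necessarily the PR's exact code):

        public static Vector4 Atop(Vector4 dst, Vector4 src, Vector4 blend)
        {
            // Where the source is visible show the blended colour; where only the
            // destination is visible keep the destination.
            Vector4 color = Vector4.Lerp(dst, blend, src.W);

            // Coverage never extends beyond the destination's.
            color.W = dst.W;
            return color;
        }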

I agree that there are issues with the reference test images and with how pixel comparison is defined. Most of the testing I've been doing has been comparing against the output of Photoshop and Krita, but for the most part they operate only with SrcOver; SrcAtop is a small change from SrcOver, and the rest of the compositions are much simpler.

@JimBobSquarePants
Member

So, you assume that if Alpha is 0, the RGB must be zero too? Both for the input and output of the functions?

Not quite, I'll annotate your sample to explain what I mean. Apologies for the confusion.

        public static Vector4 In(Vector4 dst, Vector4 src, Vector4 blend)
        {
            float alpha = dst.W * src.W; 
          
           // If alpha here is 0 then color = Vector4.Zero since you are multiplying by 0;
            Vector4 color = src * alpha;                  // premultiply 
            color /= MathF.Max(alpha, Constants.Epsilon); // unpremultiply
            color.W = alpha;

            return color;
        }

@vpenades
Contributor Author

vpenades commented Jul 8, 2018

@JimBobSquarePants yes, the code you annotated is the full code, collapsing the sections of the full formula that always resolve to zero.

I'm going to do one or two more rounds of manually checking the formulas, but I believe they're right. So if the checks keep failing then it will be time to amend them.

One thing to consider is that all the Porter-Duff tests being done right now use fully opaque / fully transparent pixels, but we would also need images with 50% transparency, because blending a 50% source over a 50% destination gives 75% opacity, and that case needs to be tested.
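
Working that case through the SrcOver alpha formula (αs = source alpha, αd = destination alpha):

αo = αs + αd × (1 − αs)
   = 0.5 + 0.5 × (1 − 0.5)
   = 0.75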

@JimBobSquarePants
Member

@vpenades We've got failing tests in SolidFillBlendedShapesTests. Can you double-check your output and update SixLabors/Imagesharp.Tests.Images#1 accordingly?

https://ci.appveyor.com/project/six-labors/imagesharp/build/1.0.0-PullRequest00641001725/job/ep7txx8yph2spk81/tests

@vpenades
Contributor Author

vpenades commented Aug 6, 2018

We've got failing tests in SolidFillBlendedShapesTests. Can you double-check your output and update SixLabors/Imagesharp.Tests.Images#1 accordingly?

I've reviewed the reference and output images:

First of all, I noticed the reference images have been saved as 8-bit palettized PNGs, using one of the palette colors as transparent; this seems to be messing with XOR and CLEAR, even though the images look the same at first sight.

All the other reference images that fail are incorrect.

So I'm going to replace the reference images with the ones currently being generated.

I was a bit puzzled about the meaning of the "Clear" mode. At first I thought it was some sort of eraser mode that erases pixels wherever the source has some opacity, but that effect can be achieved with DestOut composition. So the Clear composition mode essentially makes every pixel in the rectangle being processed transparent, regardless of the content of Source and Destination. Not very useful, but it's there for completeness.

This is apparent with _1DarkBlueRect_2BlendHotPinkRect_3BlendSemiTransparentRedEllipse_mode-Clear

The reference looks like the image below, as if the rectangle is being masked with a circle.
_1darkbluerect_2blendhotpinkrect_3blendsemitransparentredellipse_mode-clear

But the actual correct output is shown below. Notice that what is being masked is the whole area being drawn, regardless of whether the content has transparent pixels or not.
_1darkbluerect_2blendhotpinkrect_3blendsemitransparentredellipse_mode-clear

Another issue to take care of when drawing with composition is which pixels get processed compared to what is being drawn. For example, I've noticed that when drawing an ellipse, the area being processed is the rectangular region containing the ellipse. That means pixels outside the shape are still affected by the operation, which is why some of the results look so weird.

@vpenades
Contributor Author

vpenades commented Aug 8, 2018

@JimBobSquarePants I guess Travis and AppVeyor still fail because the new tests run against the master Test-Samples repository, which has not been merged with the updated test images.

I am not sure if it is possible to reconfigure this to tell Travis to test against the test-images fork. Otherwise, it's a matter of merging everything together.

On a side note, I dislike having the sample images separated from the main repository, because it causes issues like this. I understand the reason for keeping the test images separate is to keep the main repository lightweight... but in the end, adding new tests that also require new images makes things very tricky.

@JimBobSquarePants
Member

@vpenades Yeah, it's simply due to the missing images. That's OK for now though; I don't want to add the complication of messing with the build scripts to get them working. Regarding the test images: yeah, we'll probably end up moving them back in.

Thanks for the explanation on the other images, btw. I found this IBM Developer Works documentation and example code, which we could theoretically use to generate comparison images.
https://www.ibm.com/developerworks/library/j-mer0918/index.html

Looking at the new output though, there's one image that strikes me as odd.

_1DarkBlueRect_2BlendHotPinkRect_3BlendSemiTransparentRedEllipse_mode-DestAtop
_1darkbluerect_2blendhotpinkrect_3blendsemitransparentredellipse_mode-destatop

I would expect only the part of the destination image that overlaps the source image to be drawn, with the part of the source image that doesn't overlap the destination image drawn as well. Maybe I'm missing something though?

The other example reflects that expected behaviour.

_1DarkBlueRect_2BlendBlackEllipse_mode-DestAtop
_1darkbluerect_2blendblackellipse_mode-destatop

@vpenades
Contributor Author

vpenades commented Aug 8, 2018

@JimBobSquarePants surprisingly, both images are correct. At first it did strike me too, but after careful thought, it's actually correct!

Let's talk about this one:

So, what's going on here? First of all, consider that we're using DEST, which essentially reverses what's being applied over what. If SOURCE-X applies the source over the background, DEST-X applies the background over the source.

The drawing order is this:

  1. Draw a blue rectangle
  2. Draw a pink rectangle
  3. Draw an ellipse

Notice that, in the 3rd step, the background, aka DEST, is an image that contains the two previously drawn rectangles, and it is that image that's being drawn ATOP the image of the ellipse.

Since we're using an ImageBrush to render the elements, all the pixels of the rectangular area are processed. The ATOP operator states that only the pixels that are not transparent in the target image (in this case, the ellipse) get drawn; it's as if the image below acts as a mask on the pixels on top, effectively clearing all the pixels outside the ellipse but within the ImageBrush.
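
Put into the same Vector4 terms as the earlier snippets (ignoring colour blending, i.e. Normal blend; a sketch, not the generated code), DestAtop is essentially:

        public static Vector4 DestAtop(Vector4 dst, Vector4 src)
        {
            // Keep the destination wherever it is visible, fall back to the source
            // elsewhere...
            Vector4 color = Vector4.Lerp(src, dst, dst.W);

            // ...but the coverage (alpha) becomes the source's, so everything outside
            // the ellipse's own alpha ends up cleared.
            color.W = src.W;
            return color;
        }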

@JimBobSquarePants
Member

@vpenades Ah yes! Great explanation. It makes perfect sense now.

It's really interesting to look at that with the explanation in mind. Once you focus inside the rectangular bounds of the ellipse it's exactly how I'd expect it.

I think I'm happy with this and will give it one last pass before merging it and the test images in.

Thanks for taking the effort to figure out what I consider to be a mind-boggling subject, and thanks for pushing us to include it in the library 👍

@vpenades
Contributor Author

@JimBobSquarePants thanks!

My hope is that this feature will be available before RC1.

Keep in mind that this PR is the preparatory work for a second one that would modify the BlenderMode enumeration. The idea is to be able to configure Alpha Composition and Color Blending separately. Personally, I would split it into two enumerations, which would be an API-breaking change.
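
For illustration only, such a split might look something like this (hypothetical names and members, not the final API):

        public enum ColorBlendingMode
        {
            Normal,
            Multiply,
            Add,
            Subtract,
            // ... the remaining colour blend modes
        }

        public enum AlphaCompositionMode
        {
            Clear,
            Xor,
            Src,
            SrcOver,
            SrcAtop,
            SrcIn,
            SrcOut,
            DestOver,
            DestAtop,
            DestIn,
            DestOut
        }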

@JimBobSquarePants
Member

Personally, I would split it into two enumerations, which would be an API-breaking change.

Then we would most definitely need to introduce this before RC1.

@CLAassistant

CLAassistant commented Aug 16, 2018

CLA assistant check
All committers have signed the CLA.

@codecov

codecov bot commented Aug 19, 2018

Codecov Report

Merging #641 into master will decrease coverage by 4.41%.
The diff coverage is 37.97%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #641      +/-   ##
==========================================
- Coverage    89.6%   85.19%   -4.42%     
==========================================
  Files         878      878              
  Lines       37244    39127    +1883     
  Branches     2452     2539      +87     
==========================================
- Hits        33373    33333      -40     
- Misses       3178     5101    +1923     
  Partials      693      693
Impacted Files Coverage Δ
...ts/PixelBlenders/DefaultPixelBlenders.Generated.cs 17.88% <ø> (-78.55%) ⬇️
...s/PixelBlenders/PorterDuffFunctionsTests_TPixel.cs 100% <100%> (ø) ⬆️
...rp.Drawing/Processing/GradientBrushBase{TPixel}.cs 95.34% <100%> (ø) ⬆️
...Sharp.Tests/Drawing/SolidFillBlendedShapesTests.cs 100% <100%> (ø) ⬆️
...Tests/PixelFormats/PixelOperationsTests.Blender.cs 100% <100%> (ø) ⬆️
...xelFormats/PixelOperations{TPixel}.PixelBenders.cs 95.45% <100%> (ø) ⬆️
...lFormats/PixelBlenders/PorterDuffFunctionsTests.cs 100% <100%> (ø) ⬆️
...ats/PixelBlenders/PorterDuffFunctions.Generated.cs 13.49% <13.49%> (-64.69%) ⬇️
.../PixelFormats/PixelBlenders/PorterDuffFunctions.cs 87.5% <87.5%> (-12.5%) ⬇️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c49b031...74a16f1.

@JimBobSquarePants JimBobSquarePants merged commit 485395d into SixLabors:master Aug 19, 2018
antonfirsov pushed a commit to antonfirsov/ImageSharp that referenced this pull request Nov 11, 2019
WIP Generate all Pixel Blender/Composer possible combinations to solve SixLabors#535