WIP Generate all Pixel Blender/Composer possible combinations to solve #535 #641
Conversation
…s of ColorBlenders and AlphaComposers
Reviewing the failing tests, I've noticed that some of them, like the ones that draw an Ellipse, are failing because the drawn ellipse has slightly different pixels than in the reference image, probably due to changes in how paths are being drawn. In order to prevent unrelated changes from affecting other tests I would suggest:
@vpenades some of the test failures report a relatively high difference, e.g.
@antonfirsov it's a bug. Some of the blenders fail because of tiny differences in how the ellipse is being drawn right now, but others fail because of incorrect blending. I'm trying to replicate some of the changes made by @dlemstra , but I'm doing the testing by hand, manually comparing the generated and expected images. As a side note, if at some point you consider moving from XUnit to NUnit, you have my vote; I seriously miss
Okay, this is becoming much more complicated than I expected and it's way past the amount of time I had available for this issue... I'll leave some comments in case people have more ideas on how to solve this:
Certainly, generating all possible combinations of alpha composing and color blending is tricky, but I would like to stress that it's worth the pain: some specific combinations are incredibly useful and are widely used by Photoshop for many special effects.
There is every possibility that the reference images are wrong... we should probably produce a bit of a crib sheet output with a set of SVGs side by side with our equivalent output and manually compare until we are happy with the final output... once we are, then we can update our reference images.
Our current ref images should follow the definition in #493 and the article it references. If that's not the case, it means we have a bug here. Excuse us for the super slow progress on this PR! Unfortunately, this is an area I'm really not familiar with. @JimBobSquarePants is fighting jpeg, namespace refactor and his flu. @tocsoft works on telemetry. I think we need some extra patience with this, unless @dlemstra has some time to review this and help understand the root cause of the differences.
To be brutally honest, the reason I see you getting little support on this PR is that this particular issue hasn't really got anyone on the core team championing it, and thus none of us has much incentive to actually help push it through to completion. For me personally (I can't speak for the rest on this), for v1 I was perfectly happy with the working solution we had, so all I see here is something distracting us from pushing out v1, as we potentially break more APIs.
That's not to say we won't try and look over it and aid where we can, just that we aren't likely to prioritise this over getting releases out.
@tocsoft yes, I noticed, but it is something I expected; the core team has a lot to chew on and, as you say, this issue was more or less closed. For me it's enough that this PR was not rejected straight away. Anyway, even if you think that the current solution is good enough, it still has bugs and some conceptual mistakes. I'm not blaming @dlemstra ; there are several misconceptions about the composition modes, how they're used, and what to expect from them. What I'm trying to do is to get results similar to what Photoshop produces, so my implementation is a mix of what I got from the theory and what I expect from my experience using Photoshop.
@vpenades Are all these blend modes covered by the SVG spec? If so, perhaps we can use that as a definitive reference. I must admit, I currently know very little on the subject, hence my lack of response.
@vpenades can you give a few examples of those specific combinations and what special effects can be achieved with them? I always wondered if there are any combinations that can be particularly useful and for what. Thanks.
@tzachshabtay By far the most useful combinations are (Multiply|Add|Subtract) + SrcAtop. These examples use a flat rendered font: Multiply+SrcAtop, Add+SrcAtop, and Multiply+SrcAtop using a pattern texture as Source. Other combinations can be useful in certain cases, while some combinations are redundant and produce the same effect (for example, Xor and Clear produce the same result regardless of choosing Normal, Multiply, etc.), but all combinations are provided for completeness and because the code generator automatically generates all possible combinations. The ultimate goal is to combine multiple image effects like blur, emboss, etc. with color/alpha composition to achieve things like these.
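To make the blend-plus-compose split concrete, a combined Multiply + SrcAtop step could be sketched like this. This is a hedged Python illustration of the W3C compositing model on straight-alpha RGBA tuples; the function name is made up and nothing here is ImageSharp API:

```python
def multiply_src_atop(src, dst):
    """Multiply colour blending combined with SrcAtop alpha composition.
    src/dst are straight (r, g, b, a) tuples with components in [0, 1]."""
    sa, da = src[3], dst[3]
    if da == 0:
        # SrcAtop keeps the destination's alpha, so nothing shows here.
        return (0.0, 0.0, 0.0, 0.0)

    def blend(cs, cb):
        # Multiply blend, mixed toward the plain source colour where the
        # backdrop is transparent (per the W3C compositing model).
        return (1 - da) * cs + da * (cs * cb)

    out = tuple(sa * blend(cs, cb) + (1 - sa) * cb
                for cs, cb in zip(src[:3], dst[:3]))
    return out + (da,)
```

Because SrcAtop keeps the destination's alpha, the multiplied colour only lands on visible destination pixels, which is what makes this combination behave like a texture or shading overlay on rendered text.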
@JimBobSquarePants It seems the SVG composition modes are explained here. I need a bit more research, but from a fast overview, it seems all modes are covered. Then there are extended modes, but we can really leave them for a future release:
Alpha blend modes can be done right now in two steps, but it's certainly slower.
I've found the cause. Here's how the reference and output images look, with and without alpha masking.
The difference in the result is because I've used a fast optimization that produces correct results but leaves the RGB unmodified for fully transparent pixels. So the image comparer is failing because it determines that two fully transparent pixels are different, because their RGB parts differ. I remember this is something we discussed in the past... there are some cases in which transparent pixels should be treated as equal, regardless of the values of their RGB parts... it's like making the comparison with premultiplied values. @antonfirsov @JimBobSquarePants I am aware that in some other cases comparing the RGBA channels as independent values is necessary, but when the whole pipeline being tested essentially works with premultiplied values, as is the case with alpha composition, we need an image comparison mode that treats transparent values as equal or, put another way, compares alpha-premultiplied values.
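The per-pixel rule being asked for could be sketched like this. This is a Python illustration of the idea only, not the actual C# image comparer, and the function name is hypothetical:

```python
def pixels_equal_premultiplied(a, b, tolerance=1e-6):
    """Compare two straight-alpha RGBA pixels as if premultiplied.

    Multiplying each colour channel by alpha before comparing means
    fully transparent pixels compare equal regardless of their RGB."""
    pa = (a[0] * a[3], a[1] * a[3], a[2] * a[3], a[3])
    pb = (b[0] * b[3], b[1] * b[3], b[2] * b[3], b[3])
    return all(abs(x - y) <= tolerance for x, y in zip(pa, pb))
```

With this rule, (1, 0, 0, 0) and (0, 1, 0, 0) compare equal, while opaque red and opaque green still differ, which is exactly the behaviour the optimized blenders need.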
@vpenades Wouldn't that be an issue with the implementation itself? I would consider our equality operators are doing the correct thing. |
@JimBobSquarePants These are my In/Out implementations:
The complete implementation would look like this:
As you can see, the complete implementation applies a premultiply and an unpremultiply one right after the other, which is a bit nonsensical. The trick here is that the MathF.Max is preventing a division by a zero alpha, which is what produces an RGBA of 0,0,0,0 for all fully transparent cases. Edit: I've just updated the functions to use pre/unpre multiplication. I prefer the tests to pass for now; optimization can come later.
In the above sample if
Not sure about the above now. The algorithm above doesn't seem to match the SVG spec. Note I have no idea what Dca nor X, Y, Z mean, since the spec writers helpfully decided not to define them.
So, you assume that if Alpha is 0, the RGB must be zero too? Both for the input and the output of the functions?
X, Y and Z are essentially a boolean mechanism to enable/disable parts of the complete formula. But the formulas I've seen in several places expect to receive unpremultiplied values yet deliver premultiplied values, or assume Dst is always opaque; that's why every attempt to implement all these formulas straight from the documentation always fails. Instead, I've been trying to implement the formulas based on the expected outcome, for example, SrcIn: "Draw Src ONLY where both Src and Dst are visible". Of course, the "visible/invisible" is not boolean, but alpha weights. I agree that there are issues with the reference test images and how pixel comparison is defined. Most of the testing I've been doing has been comparing against the output of Photoshop and Krita, but for the most part they operate only with SrcOver; then SrcAtop is a small change from SrcOver, and the rest of the compositions are much simpler.
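That X/Y/Z switch mechanism can be sketched as follows. This is my hedged Python reading of the SVG compositing notation over premultiplied RGBA, where f is the colour-blend function applied to straight colours; it is illustrative only, not ImageSharp code:

```python
EPS = 1e-9  # guards the division when recovering straight colours

def compose(f, src, dst, X, Y, Z):
    """General SVG-style compositing on premultiplied RGBA tuples.

    X, Y, Z are 0/1 switches enabling the three regions of the
    formula: both layers visible, source only, destination only."""
    sa, da = src[3], dst[3]
    out_a = X * sa * da + Y * sa * (1 - da) + Z * da * (1 - sa)
    out = []
    for s, d in zip(src[:3], dst[:3]):
        sc = s / max(sa, EPS)  # straight source colour
        dc = d / max(da, EPS)  # straight destination colour
        out.append(X * f(sc, dc) * sa * da
                   + Y * s * (1 - da)
                   + Z * d * (1 - sa))
    return tuple(out) + (out_a,)

# SrcOver with Normal blending: X = Y = Z = 1, f returns the source colour.
# Xor: X = 0, Y = Z = 1.  Clear: X = Y = Z = 0.
```

Note how Xor over two fully opaque layers yields a fully transparent result, and Clear ignores both inputs entirely, matching the redundancy between those two modes mentioned earlier in the thread.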
Not quite, I'll annotate your sample to explain what I mean. Apologies for the confusion.

public static Vector4 In(Vector4 dst, Vector4 src, Vector4 blend)
{
    float alpha = dst.W * src.W;

    // If alpha here is 0 then color = Vector4.Zero, since you are multiplying by 0.
    Vector4 color = src * alpha;                  // premultiply
    color /= MathF.Max(alpha, Constants.Epsilon); // unpremultiply
    color.W = alpha;
    return color;
}
@JimBobSquarePants yes, the code you annotated is the full code, collapsing the sections of the full formula that always resolve to zero. I'm going to do one or two more rounds of manually checking the formulas, but I believe they're right. So if the checks keep failing, then it will be time to amend them. One thing to consider is that all the Porter-Duff tests being done right now use fully opaque / fully transparent pixels, but we would also need images with 50% transparency, because blending a 50% source over a 50% destination gives 75% opacity, and that case needs to be tested.
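The 75% figure comes straight from the SrcOver alpha formula, which is easy to verify (a quick Python check, not library code):

```python
def src_over_alpha(a_src, a_dst):
    # Output alpha of SrcOver composition: ao = as + ad * (1 - as).
    return a_src + a_dst * (1 - a_src)

# A 50% opaque source over a 50% opaque destination yields 75% opacity.
print(src_over_alpha(0.5, 0.5))  # 0.75
```

That intermediate opacity never appears when test inputs are only fully opaque or fully transparent, which is why semi-transparent reference images would exercise code paths the current tests miss.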
@vpenades We've got failing tests in
I've reviewed the reference and output images:

First of all, I noticed the reference images have been saved as 8-bit PNGs, using one of the palette colors as transparent; this seems to be messing with XOR and CLEAR, even though the images look the same at first sight. All the other reference images that fail are incorrect, so I'm going to replace the reference images with the ones currently being generated.

I was a bit puzzled about the meaning of the "Clear" mode. At first I thought it was some sort of eraser mode that removes pixels wherever the source has some opacity, but that effect can be achieved with DestOut composition. So the Clear composition mode essentially makes every pixel in the rectangle being processed transparent, regardless of the content of Source and Destination. Not very useful, but it's there for completeness. This is apparent with The reference looks like below, as if the rectangle is being masked with a circle. But the actual correct output is shown below. Notice that what is being masked is the whole area being drawn, regardless of whether the content has transparent pixels or not.

Another issue to take care of when drawing with composition is which pixels are processed compared to what is being drawn. For example, I've noticed that when drawing an ellipse, the shape of the area being processed is the square region containing the ellipse. That means the pixels outside the ellipse are being affected by the filter, which is why some of the filter results look so weird.
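The Clear vs. DestOut distinction described above can be made concrete with two tiny straight-alpha sketches (illustrative Python only, not the C# blenders; the names are mine):

```python
def clear(src, dst):
    """Clear: every processed pixel becomes fully transparent,
    regardless of the source and destination content."""
    return (0.0, 0.0, 0.0, 0.0)

def dest_out(src, dst):
    """DestOut: keep the destination only where the source is
    transparent -- the 'eraser' effect Clear is often mistaken for.
    Inputs are straight (non-premultiplied) RGBA tuples in [0, 1]."""
    dr, dg, db, da = dst
    out_a = da * (1 - src[3])
    return (dr, dg, db, out_a)
```

Where the source is fully transparent, DestOut leaves the destination untouched, while Clear still wipes it; that difference is exactly why Clear blanks the whole processed rectangle rather than just the drawn shape.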
@JimBobSquarePants I guess Travis and AppVeyor still fail because the new tests are being run against the master Test-Samples repository, which has not been merged with the updated test images. I am not sure if it is possible to reconfigure this to tell Travis to test against the test images fork. Otherwise, it's a matter of merging everything together. On a side note, I dislike having the sample images separated from the main repository, because it causes issues like this. I understand the reason for having the test images separated is to keep the main repository lightweight... but in the end, adding new tests that also require new images in the repository makes things very tricky.
@vpenades Yeah, it's simply due to the missing images. That's ok just now though, and I don't want to add the complication of messing with the build scripts to get them working. Regarding the test images: yeah, we'll probably end up moving them back in. Thanks for the explanation on the other images btw. I found this from the IBM Developer Works documentation and example code, which we could theoretically use to generate comparison images. Looking at the new output though, there's one image that strikes me as odd: _1DarkBlueRect_2BlendHotPinkRect_3BlendSemiTransparentRedEllipse_mode-DestAtop I would expect only the part of the destination image that overlaps the source image to be drawn, with the part of the source image that doesn't overlap the destination image drawn also. Maybe I'm missing something though? The other example reflects that expected behaviour.
@JimBobSquarePants surprisingly, both images are correct. At first it struck me too, but after careful thinking, it's actually correct! Let's talk about this one: So, what's going on here? First of all, consider that we're using DEST, which essentially reverses what's being applied over what. If SOURCE-X applies the source over the background, DEST-X applies the background over the source. The drawing order is this:
Notice that, in the 3rd step, the background, aka DEST, is an image that contains the two previously drawn rectangles, and it is that image that's being drawn ATOP the image of the ellipse. Since we're using ImageBrush to render the elements, all the pixels of the square area are processed. The ATOP operator states that only the pixels that are not transparent in the target image (in this case, the ellipse) are drawn onto; it's as if the image below acts as a mask on the pixels on top, effectively clearing all the pixels outside the ellipse but within the ImageBrush.
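In formula terms, that DestAtop behaviour can be sketched like this (straight-alpha Python, my own illustrative names, not the library implementation):

```python
def dest_atop(src, dst):
    """DestAtop: the destination is drawn atop the source.

    The result takes the source's alpha: the destination's colour
    shows where the source is visible, and the source's own colour
    shows elsewhere within the processed region.
    Inputs are straight (non-premultiplied) RGBA tuples in [0, 1]."""
    sa, da = src[3], dst[3]
    if sa == 0:
        return (0.0, 0.0, 0.0, 0.0)
    out = tuple(dc * da + sc * (1 - da)
                for sc, dc in zip(src[:3], dst[:3]))
    return out + (sa,)
```

Reading the ellipse as src and the two-rectangle image as dst, the result alpha equals the ellipse's alpha, which is why everything outside the ellipse (but inside the ImageBrush rectangle) ends up cleared.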
@vpenades Ah yes! Great explanation. It makes perfect sense now. It's really interesting to look at that with the explanation in mind. Once you focus inside the rectangular bounds of the ellipse, it's exactly how I'd expect it. I think I'm happy with this and will give it one last pass before merging it and the test images in. Thanks for taking the effort to figure out what I consider to be a mind-boggling subject, and thanks for pushing us to include it in the library 👍
@JimBobSquarePants thanks! My hope is that this feature will be available before RC1. Keep in mind that this PR is the preparatory work for a second one that would modify the BlenderMode enumeration. The idea is to be able to configure alpha composition and color blending separately. Personally, I would split it into two enumerations, which would be an API-breaking change.
Then we would most definitely need to introduce this before RC1.
Codecov Report
@@ Coverage Diff @@
## master #641 +/- ##
==========================================
- Coverage 89.6% 85.19% -4.42%
==========================================
Files 878 878
Lines 37244 39127 +1883
Branches 2452 2539 +87
==========================================
- Hits 33373 33333 -40
- Misses 3178 5101 +1923
Partials 693 693
Continue to review full report at Codecov.
WIP Generate all Pixel Blender/Composer possible combinations to solve SixLabors#535
Prerequisites
Description
This is the first stage of trying to solve #535. As explained in the issue, pixel operations involve color blending and alpha composing; the current implementation assumes you can only do one of the two at a time.
The changes I've made theoretically make it possible to have pixel blenders for all combinations of Color Blenders and Alpha Composers.
What's left: the PixelBlenderMode enumeration needs to be extended. I've left it untouched so we can discuss how to extend it.