
ImageSharp.Formats.Jpg.DecodedBlock ArrayPool is using up WAY too much memory #151

Closed
DeCarabas opened this issue Mar 24, 2017 · 53 comments

@DeCarabas

Prerequisites

  • I have written a descriptive issue title
  • I have verified that I am running the latest version of ImageSharp
  • I have verified if the problem exists in both DEBUG and RELEASE mode
  • I have searched open and closed issues to ensure it has not already been reported

Description

I'm basically reporting issue #123 again, but with a slightly different spin. I'm running my little web app on a VM with 1GB of RAM, and the process is constantly being killed for running out of memory because of the ArrayPool used for ImageSharp.Formats.Jpg.DecodedBlock[].

My app concurrently fetches RSS feeds, then for each entry in each feed concurrently fetches all the images and evaluates them to decide which one makes the best thumbnail. It doesn't take many feeds before I'm decoding a lot of images in parallel.

After a single fetch session, I dumped the process and examined the heap; I discovered that ImageSharp.Formats.Jpg.DecodedBlockArray.ArrayPool was holding on to 337MB all on its own; the largest entry was 262,144 elements long, for a size of 70,254,616 bytes. (Obviously that's just the current state of the pool; as currently configured by ImageSharp it can grow way past that.)

I need some way to get a handle on this memory usage; I'm looking at other constraints but some kind of configuration to let me limit the pool would be really nice.

Steps to Reproduce

N/A

System Configuration

  • ImageSharp version: 1.0.0 alpha 4
  • Environment (Operating system, version and so on): Linux
  • .NET Framework version: .NET Core 1.1
@antonfirsov
Member

antonfirsov commented Mar 24, 2017

@DeCarabas Thank you for your detailed report!

@JimBobSquarePants based on these issues it seems obvious to me that we need to make our pooling configurable. My proposal:
https://gist.github.com/antonfirsov/25301496604015fbfe7f2b90d608a190

@tocsoft
Member

tocsoft commented Mar 24, 2017

Totally agree with @antonfirsov here. Let's let our users limit the max pool size... the difficulty might be around LimitInBytes, as it looks like it would limit the total size rather than set a limit per type... we might have to see if there's a way we can solve that and have it be a shared size (maybe internally we use a common byte[] pool and use Unsafe.As<> to cast the arrays to the correct type, so the array pools share a common memory limit).

@DeCarabas
Author

I'd take something as simple as the ability to reach in and tweak the individual pool settings. An overall memory usage policy sounds appealing but I think would be (a) hard to implement and (b) hard for me to really tune properly. Whereas I have a bunch of heap dumps and statistics around pool sizes that I can reason about for various pools...

@antonfirsov
Member

antonfirsov commented Mar 24, 2017

Individual pools could be customized by implementing custom IArrayPoolProvider -s. There will be no such thing as Configuration.DecodedJpegBlockArrayPoolLimit because it's an implementation detail we want to hide.

@DeCarabas why do you feel a byte-measured limit is hard to tune?

@tocsoft I would be much happier to operate on "typeless" generic byte[] buffers but I'm afraid Unsafe.As<> is just too unsafe, and it would lead to bugs like #146.
The problem is that Unsafe.As<int[]>(new byte[1]).GetType() == typeof(byte[]).
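
For reference, a small standalone snippet showing the pitfall (assumes the System.Runtime.CompilerServices.Unsafe package; not ImageSharp code):

    using System;
    using System.Runtime.CompilerServices;

    // Unsafe.As<> only reinterprets the reference; the heap object is still a byte[],
    // so the runtime type, element size and bounds no longer match the static type.
    byte[] bytes = new byte[1];
    int[] ints = Unsafe.As<int[]>(bytes);
    Console.WriteLine(ints.GetType() == typeof(byte[])); // True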

@tocsoft
Member

tocsoft commented Mar 24, 2017

ah well that's unfortunate 😞

@JimBobSquarePants
Member

We can certainly make our pool configurable but we have to look at a use case issue here also.

Using 1GB of RAM for any serious image processing is simply a no-go. You'll run out of contiguous memory in the LOH pretty quickly if you work with multiple images. We're pooling, yes, but the pool is smart and will not grossly over-deliver on array requests. Making the pool more configurable will still lead to very similar memory consumption, as it will end up allocating a new array anyway.

I would like to double check our codebase though to ensure we're not missing something in Jpeg. I'm sure we're good but maybe we can make it leaner than it is.

@DeCarabas
Author

@antonfirsov I appreciate the desire to hide the pools from the caller, but then you're left with several unappealing options.

  1. You can try to have a single capped pool that dynamically trades off against other pools.
  2. You can try to have a single limit, and somehow divide up that limit amongst other pools.
  3. You can try to have a single per-pool limit.

The problem with option 1 is that picking a good policy is hard-- how do you trade DecodedBlock off vs. Pixel? What if I consume the entire pool with DecodedBlock at the beginning of my program and then never use it again, since I spend the rest of the time doing image manipulation on uncompressed pixels?

The problem with option 2 is similar-- if you try to protect the pool for pixels by dividing the limit up "statically", then you waste memory by reserving it even when you don't know the pool will ever be used. Why is the array pool for pixels smaller even if I never end up decoding JPEGs?

The problem with option 3 is that you haven't really hidden the number of pools from me-- you've just made it harder for me to figure out how I should set the limit, since the real limit is now the per-pool-cap times the number of pools. I can do this; I have the memory statistics, I've dumped the heap, I know where to tune, but you haven't hidden anything from me, really, just made it harder to figure out.

@DeCarabas
Author

DeCarabas commented Mar 24, 2017

@JimBobSquarePants Yeah I'm not entirely sure what I was trying to load that allocated 70MB of DecodedBlock; I probably need to do something to protect myself from that anyway.

But the pool is not that smart-- it doesn't have any kind of idle shrinking capability, it doesn't shrink on memory pressure, and it has this annoying power-of-two thing that can end up wildly over-allocating in order to hit the bucket.

The power-of-two thing is annoying because I was not necessarily using that 300MB concurrently-- if I needed an array of 131,073 DecodedBlocks I would have ended up allocating 262,144 of them. In the dump I'm looking at now, I have one such array at that size, for 70MB, and three at 131,072, at 35MB apiece. A request for an array of only 65,537 blocks was enough to allocate one of those... at those sizes I would much rather stop using the array pool and ask the allocator for an exact fit instead.
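
For reference, the bucketing behaviour described above can be reproduced with the standard System.Buffers pool (a standalone sketch, not ImageSharp's internal pool):

    using System;
    using System.Buffers;

    // The default pool rounds requests up to the next power-of-two bucket, so asking
    // for one element past a boundary rents nearly double the requested length.
    var pool = ArrayPool<int>.Create(maxArrayLength: 1 << 20, maxArraysPerBucket: 4);
    int[] rented = pool.Rent(131073);    // one element past 2^17
    Console.WriteLine(rented.Length);    // 262144
    pool.Return(rented);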

@antonfirsov
Member

antonfirsov commented Mar 24, 2017

@DeCarabas our main difficulties:

  • The Jpeg decoder implementation is changing rapidly, so it's really dangerous to expose this kind of super-specific fine-tuning option on a public API.
  • Not only DecodedBlock-s are pooled. There are dozens of different places using array pools in different codec implementations. We have to think about all the use cases, not just yours :)

We can only solve these by thinking in a general way.

What could be a fair compromise for me: we can introduce IDecoderOptions.ArrayPoolProviderOverride, so you can tune the pooling for a specific decoding process. It would let you limit or disable pooling for DecodedBlock and for temporary jpeg image buffers (JpegPixelArea.Pixels) as well.

Also totally agree with @JimBobSquarePants: a fragmented LOH could also lead to OutOfMemoryException-s.

@DeCarabas
Author

@antonfirsov Totally understand on both points; I expect tuning parameters to be something weird and fiddly and likely to break at a moment's notice.

This is always the difficulty with adding caches to libraries like this-- you can't meaningfully add a cache without being in complete control of the resources you're using for it, and your library is not in control of the memory in my app. Alas.

Another couple of options:

  • A big boolean flag saying "DISABLE ALLOCATION POOLS"
  • An API I can call that says "FLUSH ALLOCATION POOLS NOW"

Either of these would fix my problems without exposing the internals. (I'd personally take the hit of just relying on the GC more heavily over accidentally permanently consuming large blocks of memory.)

@tocsoft
Member

tocsoft commented Mar 24, 2017

In that case, if/when we add the IArrayPoolProvider, you would be able to make one that fakes out the ArrayPool<T> abstract class with an implementation that just allocates arrays and ignores Return(...), which will simply allow the GC to do its work on the arrays.
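
A minimal sketch of such a pass-through pool (the class name is made up; only the ArrayPool<T> base type comes from System.Buffers):

    using System.Buffers;

    // Rent() always allocates a fresh array and Return() is a no-op, so nothing is
    // retained and the GC reclaims buffers as soon as they fall out of scope.
    internal sealed class NonPoolingArrayPool<T> : ArrayPool<T>
    {
        public override T[] Rent(int minimumLength) => new T[minimumLength];

        public override void Return(T[] array, bool clearArray = false)
        {
            // Intentionally empty.
        }
    }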

@antonfirsov
Member

I think we would provide an easy way to disable pooling by setting Configuration.ArrayPoolProvider to null or a similar construct. By implementing your own ArrayPool you can take even more control, e.g. add flushing. (There is no such built-in feature in the default System.Buffers implementation.)

@dlemstra
Member

If we are going to create an ArrayPoolProvider (sounds like a good idea) we should offer two implementations: one that reuses pooled arrays (our default) and one that simply allocates a new array every time. Otherwise we need null checks everywhere we use pools.

@BrianThomson

Same problem here. Using a 13MB JPEG image resulted in 2.4GB private memory allocation. The second upload was even higher. This brings any server down.

@antonfirsov
Member

antonfirsov commented Mar 30, 2017

Hmm ... maybe this isn't just normal ArrayPool behaviour; it could be an actual leak.

@BrianThomson I'm planning to reproduce your use case within a console application. Have you used the same 13MB JPEG multiple times? How many requests resulted in 2.4GB allocation? Is it growing linearly with the number of requests?
EDIT:
If this was the result of a single Image.Load() call, can you post that image?

@dlemstra re ArrayPoolProvider:
I think we need to finish this investigation before implementing anything. Also: if I understand you properly, an ArrayPoolProvider returning a new ArrayPool on every request would basically implement the non-pooling case in a complicated way: allocating one instance of ArrayPool<T> and one instance of T[] for each Rent() request. I think we need to find something better for this.

@JimBobSquarePants
Member

JimBobSquarePants commented Apr 6, 2017

@antonfirsov

The issue could be here. We're copying the block since it's a struct then not returning the block back to the pool by disposing of it.

https://github.com/JimBobSquarePants/ImageSharp/blob/be363c9a279385bef846fd042a7a70885b8f8431/src/ImageSharp/Formats/Jpeg/Components/Decoder/JpegBlockProcessor.cs#L50

@antonfirsov
Member

@JimBobSquarePants DecodedBlockArray is a struct, but the wrapped array inside it will be copied by reference, so we are not creating+copying an actual new array here.

Also, not returning to the pool will result in GC-ing the array, and the issue is with the arrays that are actually kept in the pool.

We need to benchmark + memory profile the ArrayPool behaviour with big images + different image sizes to investigate this properly.

@antonfirsov
Member

antonfirsov commented Apr 8, 2017

@BrianThomson your problem seems to be pretty different. Actually, it's the opposite of @DeCarabas's:
With our current hard-coded settings the DecodedBlock ArrayPool-s are switching to a non-pooling behaviour at these input sizes, so you must be running out of LOH memory.

Try executing this after you've finished your request:

// Requires the System.Runtime namespace for GCSettings.
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();

ImageSharp should not eat up 2.4GB decoding a single Jpeg, or at least I can't reproduce that case. With a 29MB Jpeg I was able to keep the memory consumption under 1.5GB using GCSettings.LargeObjectHeapCompactionMode.

If your experience is different, a sample image with simple reproduction steps is really welcome!

Also keep in mind that in our current Jpeg solution we have chosen an optimization which tries to reduce CPU load at the price of a relatively large memory footprint. I'm in favor of this approach, because memory seems to be much easier to scale than CPU.

@BrianJThomson

I deleted the old test set, but created a new one. This time I use a 15MB JPEG image (600dpi). The results are a bit different, but memory consumption is too high for our web-based use cases. In this test, I only load the file, no processing.
This is an excerpt from PerfView:

  • CommandLine: dotnet TestApp.dll
  • Runtime Version: V 4.0.22220.0
  • CLR Startup Flags: None
  • Total CPU Time: 111.649 msec
  • Total GC CPU Time: 208 msec
  • Total Allocs: 2.098,568 MB
  • GC CPU MSec/MB Alloc: 0,099 MSec/MB
  • Total GC Pause: 10,8 msec
  • % Time paused for Garbage Collection: 3,1%
  • % CPU Time spent Garbage Collecting: 0,2%
  • Max GC Heap Size: 1.938,932 MB
  • Peak Process Working Set: 1.785,287 MB
  • Peak Virtual Memory Usage: 3.728,183 MB

This is from Visual Studio Diagnostics Tool:

(Visual Studio Diagnostic Tools screenshots)

@antonfirsov
Member

@BrianJThomson can you share your test app (or a simplified version of your test app) together with the test image(s) you are using? It would be very helpful!

@JimBobSquarePants This DecodedBlock[]-specific memory issue can also be interpreted as a side effect of #90 (which fixed #18). It's possible to implement a decoder option which disables the two-step decoding, because most users do not need it. It would complicate the decoder's code though.

@BrianJThomson

Below is the code. It's a vanilla netcoreapp1.1 console application:

    using ImageSharp;

    class Program
    {
        static void Main(string[] args)
        {
            using (Image image = Image.Load(@"test_15mb.jpg"))
            {
                // image.Resize(image.Width / 2, image.Height / 2);
            }
        }
    }

Image:

https://github.com/BrianJThomson/ImageSharp/raw/jpeg/test_15mb.jpg

@antonfirsov
Member

antonfirsov commented Apr 8, 2017

I can try to fix this by introducing a switch IJpegDecoderOptions.SingleStep on the public API. Enabling it would entirely eliminate the large memory consumption by DecodedBlock[], but you won't be able to decode corrupt progressive Jpeg-s perfectly (see this comment on #18).

Making a fix that handles both corner cases (very large Jpegs + erroneous progressive Jpegs) seems to be really hard, and needs an unpredictable amount of time. I won't be able to do it in the next 6-8 weeks.

@JimBobSquarePants @BrianJThomson @DeCarabas thoughts?

@JimBobSquarePants
Member

@antonfirsov, @BrianJThomson @DeCarabas This is a worrying predicament and one of many reasons why I hate the way Jpegs have been dealt with in the past. In a perfect world there would be no history of broken images and we wouldn't have to hack our code to support them.

A switch, I think, would be a sticking plaster at best, and I don't think that should be the target outcome. I appreciate the difficulty here though, and the work you've done so far @antonfirsov is extraordinary.

Let's try to spread awareness of this issue before anyone makes a start on anything. If we are lucky we might catch the eye of someone who has had to deal with this before. It might even just need a fresh pair of eyes.

I'll share it on Twitter now.

@antonfirsov
Member

antonfirsov commented Apr 10, 2017

@JimBobSquarePants I'm taking back my previous comments, because I got lost in the details of the current implementation. We have to take a step back, because the problem is more general, and leads to hard CPU vs. memory design questions.

A few numbers on 10k x 10k images:

  • 10k*10k Pixels = 100MP
  • A 100MP image uses 400MB of memory even with the compact, uint packed in-memory representation!
  • With a Vector4-based (CPU-optimized!) pixel type it would be 1.6GB!
  • DecodedBlock arrays are basically holding a similar intermediate representation with the total size of Pixels * 4 * Channels bytes, which is ~1.2GB for a 10k×10k 3-channel image (a quick sanity check follows this list)!
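
A quick sanity check of these figures (a standalone sketch; assumes 4 bytes per packed uint/float, 16 bytes per Vector4 and 3 channels):

    long pixels = 10000L * 10000L;       // 100 MP
    long packedUInt32 = pixels * 4;      // ~400 MB: one packed uint per pixel
    long vector4Pixels = pixels * 16;    // ~1.6 GB: four floats per pixel
    long decodedBlocks = pixels * 4 * 3; // ~1.2 GB: one float per channel in the intermediate representation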

Here is the hard problem:

  • In our current plans the default pixel type would be based on Vector4 because it enables much faster execution paths utilizing SIMD, and removes/speeds up conversion operations.
    But: Image : Image<Vector4Color> would use 4-times more memory than the current Image : Image<UInt32Color> solution!
  • Following the design plan preferring Vector4, I'm planning a Vector4-all-the-way Jpeg refactor in order to speed up and simplify the decoder.
    But: it would consume Pixels * 4 * Channels bytes instead of Pixels * Channels bytes in all cases, even if we eliminate DecodedBlock arrays! Users with 100MP Jpegs will end up using 1.6GB for the decoder process instead of 400MB.

Personally I think memory is a cheaper and more scalable resource than CPU, so in the general case it is better to optimize for reduced CPU load, even at the price of a larger memory footprint. Maybe we should find a general way to provide alternative implementations and guidelines for users who have memory as their bottleneck.

@tocsoft
Member

tocsoft commented Apr 10, 2017

I agree CPU is probably the way to go, but I feel we need much better control over our ArrayPool usage and should limit as much as possible the set of types we allow to be requested from pools. We should limit our use of array pools to byte, float & TColor wherever possible.

@blackcity

blackcity commented Apr 10, 2017

@antonfirsov Ever thought about using Memory Mapped Files for large files (temp file) in addition to ArrayPool for smaller files? With MMF you can limit memory usage without giving up the other goals (CPU). It's easy to implement and really fast since it's just a wrapper. Windows itself uses it to load DLLs and resources, as you can see in a memory viewer. Maybe implement some kind of a simple MemoryController that allocates memory from ArrayPool for smaller files and MMF views for larger files. It's available for netstandard1.3.

https://msdn.microsoft.com/library/dd997372(v=vs.110).aspx
https://www.nuget.org/packages/System.IO.MemoryMappedFiles/
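
For illustration, a minimal standalone example of backing a large buffer with a temporary memory-mapped file (not ImageSharp code; the capacity is arbitrary):

    using System.IO.MemoryMappedFiles;

    // The OS pages the data in and out on demand, so the managed heap (and the LOH)
    // never has to hold the whole buffer at once.
    long capacityInBytes = 300L * 1024 * 1024;
    using (var mmf = MemoryMappedFile.CreateNew(null, capacityInBytes))
    using (MemoryMappedViewAccessor view = mmf.CreateViewAccessor(0, capacityInBytes))
    {
        view.Write(0, 123);            // write an int at byte offset 0
        int value = view.ReadInt32(0); // read it back
    }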

@DeCarabas
Author

Some comments:

  1. AWS backs you up on the "CPU is harder to get" front-- getting dedicated CPU appears to be relatively more expensive than getting dedicated RAM.

  2. Do you really need to keep the entire image decoded as a Vector4 all the way through? Aren't there naturally independent blocks in JPEG that you can work in?

  3. Are you sure it will be faster to just have Vector4 everywhere? On large images, will it be faster to work in cache-sized chunks, so that expensive main memory only really sees your packed data?

@DeCarabas
Author

Oh, and I still would like control over the buffer pool policy since its current maximum size and pool size policies are not what I actually want to run with. (Given the resources I have, allocating a 20MB buffer for an 11MB request is not practical.)

@blackcity

blackcity commented Apr 12, 2017

Did some proof-of-concept coding. My current experience is with the MemoryMappedFile class in the full Framework. The NuGet package for netstandard1.3 has a slightly different API and, as always, there is literally no documentation out there for the NuGet package version. Anyway!

I created a small MemoryManager class that gets arrays from ArrayPool<DecodedBlocks> for a configurable number of decoded blocks (e.g. 10k-100k) and switches to MMF if the image needs more. So, in this first shot only the DecodedBlockArray struct benefits from the memory manager. Later this should be a generic memory manager accessible for all types in the library, of course. Just a test.

Below is the memory profile for the image @BrianJThomson used above, now backed by memory-mapped files.

(Diagnostic Tools memory profile screenshot)

Look at DecodedBlockArray[]. In the test from @BrianJThomson this was 377MB per component. To be fair, this run can't be used in production since I limited things to 1 decoded block per component at a time, just to make sure this works (1 block is 268 bytes). In real life we would want to configure the block count to maybe 10,000, which results in about 2.5MB of memory allocation for each component.

Possible problems:

  • Does it work reliably on all devices and target platforms (Linux, MacOs)?
  • How does the massively parallel and unsafe code work with MMF? Is it reliable?
  • Performance: memory-mapped file code execution was 20-25% slower than the ArrayPool counterpart because of additional file I/O. In general: smaller views (MemoryMappedViewAccessor) lead to more file I/O but less memory consumption, and vice versa. No surprise ;-)

So, it's possible to implement Memory Mapped Files in this library, but this comes at a price (performance, reliability?). I think there are more important things to do now.

@antonfirsov
Member

@blackcity nice work!

Does it work reliably on all devices and target platforms (Linux, MacOs)?

I might be naive, but I think if it's part of the standard, then it should be OK.

How does the massively parallel and unsafe code work with MMF? Is it reliable?

We should avoid parallel processing when operating on MMF buffers. It's not hard to do so in Jpeg code.

Later this should be a generic memory manager accessible for all types in the library of course.

Do you think it's possible to integrate it with our core memory buffer class? (It will not be pinned, I'm currently removing all the pointers.)

@blackcity

blackcity commented Apr 13, 2017

might be naive, but I think if it's part of the standard, then it should be OK.

Yeah, it's just that I am always tense developing things for one platform and then running them for the first time on another. By the way, System.IO.MemoryMappedFiles is part of the Microsoft.NETCore.App SDK, as you can see when you search for this namespace in Solution Explorer. So I think CoreCLR and CoreFX themselves make heavy use of it, and it should run well on other platforms.

We should avoid parallel processing when operating on MMF buffers. It's not hard to do so in Jpeg code.

It just needs to be carefully designed and benchmarked. For example: if we have a fixed memory area in a view, multiple threads can work on it without problems as long as there are no concurrent read/write operations (mutexes should be avoided, of course). If, on the other hand, multiple threads need to walk through the whole memory or large parts of it, a different strategy is needed. But there are well-known design patterns for this.

Do you think it's possible to integrate it with our core memory buffer class?

Seems good to me.

@BrianJThomson

Great to see progress on this issue!
👍

@vinh84

vinh84 commented Apr 15, 2017

Version 1.0.0-alpha5-00071

WinDbg

(WinDbg !dumpheap screenshot)

Memory leak?

w3wp crashes sometimes; memory is not released.

.NET 4.6.2, IIS host

@JimBobSquarePants
Member

Can you show how you are using the library? There should be nothing that doesn't either get returned to the pool or handled by the GC.

@vinh84

vinh84 commented Apr 15, 2017

// width and height are presumably fields set elsewhere in the service.
Stream ResizeFunction(Stream stream)
{
    using (stream)
    using (var image = Image.Load(stream))
    {
        ResizeOptions opt = new ResizeOptions();
        opt.Mode = ResizeMode.Max;
        opt.Size = new Size(width, height);

        var outStream = new MemoryStream();

        image.Resize(opt)
             .Save(outStream);

        return outStream;
    }
}

My service resizes small image files (< 2MB).

@JimBobSquarePants
Member

Nothing wrong with the code as long as you clean up that outStream instance.

I know of no memory leaks within the library and I've tested it a lot to ensure that. Are you running in 32 or 64 bit mode?

@vinh84

vinh84 commented Apr 15, 2017

Hi @JimBobSquarePants
64-bit IIS, ASP.NET 4.6.2

I used procdump -ma <pid> (w3wp.exe), then
WinDbg with psscor4 and
!dumpheap -stat

@antonfirsov
Member

antonfirsov commented Apr 17, 2017

I've been thinking about this issue for weeks. It is a hard one, not only because of the tricky implementation details. Defining a public memory management API which allows the right customization options is a non-trivial design task, because the solution has to be maintainable and future-proof.
We need to keep the doors open for future standard memory management libraries (e.g. the corefx System.Memory / System.Buffers.Primitives work mentioned further below).

I think I managed to come up with a design which meets all the user requests described in this discussion, and also keeps things extensible. I'd like to introduce a MemoryManager class, which could address the following:

  • Allows enabling, disabling, configuring and flushing ArrayPool-s
  • Takes the first steps towards making our memory management pluggable, without exposing the internal details
    • This will allow using MMF for certain tasks as part of our core memory logic
    • In the future it will be possible to use MemoryManagers which allocate (or refer to) unmanaged memory

Here is my proposal

It doesn't deal with implementation details, only with the API. To integrate the MMF stuff, we need to change BufferSpan to make it work with unmanaged pointers (just like System.Span does).
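
The actual API is in the gist above; purely as an illustration of the shape being discussed (every member name below is hypothetical), such a MemoryManager surface could look roughly like:

    // Hypothetical sketch only; see the linked gist for the real proposal.
    public abstract class MemoryManager
    {
        // Rent/return typed buffers; implementations may pool, allocate plainly,
        // or back large requests with memory-mapped files.
        public abstract T[] Allocate<T>(int length) where T : struct;
        public abstract void Release<T>(T[] buffer) where T : struct;

        // Drop any memory retained by internal pools (the "flush" request above).
        public abstract void Reset();
    }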

@JimBobSquarePants @tocsoft @dlemstra @blackcity @DeCarabas can you have a look?

@tocsoft
Member

tocsoft commented Apr 17, 2017

Looking good to me.

The only thing I can see is that I think ResetAllPools() should be called ReleaseMemory().

@JimBobSquarePants
Member

JimBobSquarePants commented Apr 17, 2017

👍 Bravo. This looks very comprehensive. Agree on the naming change.

Quick Q on this comment. I'm assuming it has to do with .NET Core 2.0 making Span<T> available or the BufferPool?

To make MemoryManager extensible by users, we will need the new standard corefx (and maybe even corefxlab!) libraries.
corefx System.Memory will be released with .NET Core 2.0

@antonfirsov
Member

I think only Span<T> will be finished by that time. We can decide then to:

  • Wait for the System.Buffers.Primitives to be finalized
  • Go with our own buffer API-s. They might fit image processing needs better anyways.

I made a big mistake in the proposal ignoring the fact that ArrayPool<T> is generic. Gonna fix it now.

@JimBobSquarePants
Member

I'm gonna use it being nearly 1am here as an excuse for missing that! 😖

@dlemstra
Member

@antonfirsov In ImageMagick we swap to disk when we run out of memory. Maybe we could make the MemoryManager do that automatically?

@antonfirsov
Member

@tocsoft ReleaseMemory() might be misleading, especially for an unmanaged MemoryManager, because it does not release anything that is being used by the library at the moment of the ReleaseMemory() call. I think we need to find a better name. I've renamed it to Reset() for now.

@JimBobSquarePants I also have an excuse: I should never make gists before lunch! :) Updated it now. Things became a bit more complex around the pooling manager.

@dlemstra It might be possible to implement a MemoryManager which does swapping, but it seems really difficult at the moment. It might be good enough for our users if we allow them to switch to MMF over certain buffer sizes (see the JpegMMF example).

@blackcity

Looks good. I like that you provide extensibility, because the library itself cannot implement optimized code for all conceivable runtime environments. 👍

@antonfirsov
Member

@DeCarabas @BrianJThomson If you are still using ImageSharp, check out the beta (on NuGet!); the Jpeg decoder uses much less memory now! Let me know if the situation is still critical for you!

@rdcm

rdcm commented Dec 6, 2017

@antonfirsov did you mean SixLabors.ImageSharp 1.0.0-beta0002?

@JimBobSquarePants
Member

Beta 1 contained the relevant changes; beta 2 builds on those changes.
