
# docker-mono-owin-webapi

Proof of concept: a self-hosted OWIN Web API running on Mono inside a Docker container.

Run in the current terminal session and remove the container on exit:

```shell
docker run --rm -t -i -p 9100:9100 shaunol/mono-owin-webapi
```

Run as a daemon (requires manual container cleanup after use):

```shell
docker run -d -p 9100:9100 shaunol/mono-owin-webapi
```
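One way to make that manual cleanup easier is to give the detached container an explicit name. A sketch, where the name `webapi` is an example and not part of the image:

```shell
# Run detached with an explicit name so cleanup is easy later
docker run -d --name webapi -p 9100:9100 shaunol/mono-owin-webapi

# When finished, stop and remove the container by name
docker stop webapi
docker rm webapi
```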

Browse to `http://[docker_hostname]:9100/values` to confirm the service is responding.
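If no browser is handy, the same check can be done with curl (`[docker_hostname]` is the address of the Docker host):

```shell
# A healthy container should answer with HTTP 200 and the JSON
# payload served by the /values endpoint
curl -i http://[docker_hostname]:9100/values
```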

The executable is taken from `webapi.tar`.

See `./webapi-src` for the .NET source code.

Benchmark:

```shell
docker run --rm -t -i shaunol/docker-ab ab -n 1000 -c 2 http://[docker_hostname]:9100/values
```

`cpuThreads * 1.5` seems like a good starting formula for the maximum concurrency level.
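That rule of thumb can be computed on the benchmarking host itself; a sketch assuming a Linux box with coreutils' `nproc` (the `* 3 / 2` integer arithmetic approximates the 1.5 multiplier, rounding down):

```shell
#!/bin/sh
# Rule of thumb: starting max concurrency = CPU threads * 1.5
threads=$(nproc)
concurrency=$(( threads * 3 / 2 ))
echo "Suggested ab concurrency (-c): $concurrency"
```

The result would then be passed to ab's `-c` flag in the command above.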

Observed throughput:

- 1100 requests per second from a t1.micro to a t1.micro instance (c=2, no network placement group, 1 vCPU, unknown ECUs)
- 4600 requests per second from a c3.2xlarge to a c3.2xlarge instance (c=12, same network placement group, 8 vCPU, 28 ECUs, ixgbevf 2.11.3-k)
- 5600 requests per second from a c3.8xlarge to a c3.8xlarge instance (c=48, same network placement group, 32 vCPU, 108 ECUs, ixgbevf 2.11.3-k)

I'm going to have to experiment with different benchmarking methods; most likely the ab client needs to be distributed across several machines, and the .NET thread pool settings may need tweaking on the server side.

Using 3x c3.large instances to target a single c3.2xlarge instance, I was able to get 4500 combined requests per second. Peak performance came at a concurrency of 4 per source instance, where 95% of all requests were served within 5 ms. Latency rose with concurrency: at c=6 the 95th percentile was 7 ms; at c=12, 12 ms; at c=24, 20 ms; at c=48, 34 ms.
