Test Automation: Investigate, specify, prototype a performance test suite that can be run at build time. #4201
Comments
I'd recommend watching the video at https://github.com/BU-NU-CLOUD-SP18/Dataverse-Scaling#our-project-video created by BU students Ashwin Pillai, James Michael Clifford, Ryan Morano, and Patrick Dillon in April 2018. They're using JMeter, and there are screenshots of their results around 17:38 in the video.
The above tests were run on a Minishift deployment of the containerized Dataverse application used for our class group project, Dataverse Scaling. I have documented the steps for creating my simple JMeter test against the Dataverse deployment and added two JMeter ".jmx" files. "Dataverse Load testing template.jmx" can be used together with those steps to build a customized test plan as needed. "MOC Project Dataverse Scaling_Load testing.jmx" can be used to test the deployment with minor changes to the test plan, adding or reducing users/threads. Please access these files using the drive link below -
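For reference, a minimal sketch of running such a .jmx plan in non-GUI mode from a build script, assuming JMeter is installed and on the PATH and that the plan's Thread Group reads its thread count from a `users` property via `${__P(users,10)}` so it can be overridden per run (the file names here are just the ones mentioned above):

```python
import subprocess
from pathlib import Path

def run_jmeter_plan(plan: Path, results: Path, users: int = 10) -> None:
    """Run a JMeter test plan in non-GUI mode and write results to a .jtl file.

    Assumes the plan reads its thread count from a JMeter property,
    e.g. ${__P(users,10)} in the Thread Group, so it can be overridden here.
    """
    subprocess.run(
        [
            "jmeter",
            "-n",                 # non-GUI mode, suitable for CI
            "-t", str(plan),      # test plan (.jmx)
            "-l", str(results),   # results log (.jtl)
            f"-Jusers={users}",   # override the 'users' property
        ],
        check=True,
    )

if __name__ == "__main__":
    run_jmeter_plan(
        Path("Dataverse Load testing template.jmx"),
        Path("results.jtl"),
        users=25,
    )
```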
@rockash thanks! I gave you a 🎉 back when you uploaded your JMeter .jmx files and doc, but thanks again. Much appreciated. 😄 I'm attaching the files to this issue for extra safe keeping: Dataverse Load testing using Jmeter-20190708T162312Z-001.zip. Speaking of JMeter, as I mentioned at standup this morning, a couple of weeks ago @4tikhonov said he plans to use JMeter in his development efforts for SSHOC. Someone recorded his talk! You can hear him talking about JMeter at https://youtu.be/vAPpKuDQUDY?t=341 and the JMeter slide is in the deck at https://osf.io/cqsrj/
As a first step for this, let's take the performance tests that @kcondon runs and find a way to script them so we can automate them. We can then establish baseline numbers for the current build and make sure that new commits don't worsen them. (As a later step, we can eventually use these numbers to identify areas that need improvement and establish new baselines.)
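A minimal sketch of what such a baseline check could look like, assuming a locally reachable deployment; the URL, page name, baseline file, and 20% tolerance are illustrative rather than agreed numbers:

```python
import json
import time
from pathlib import Path
from urllib.request import urlopen

BASELINE_FILE = Path("performance-baseline.json")  # hypothetical baseline store
TOLERANCE = 1.20  # fail if a new build is more than 20% slower than baseline

def time_request(url: str, samples: int = 5) -> float:
    """Return the mean response time in seconds over several samples."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        urlopen(url).read()
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

def check_against_baseline(url: str, name: str) -> None:
    measured = time_request(url)
    baselines = json.loads(BASELINE_FILE.read_text()) if BASELINE_FILE.exists() else {}
    baseline = baselines.get(name)
    if baseline is None:
        # First run: record the current numbers as the baseline.
        baselines[name] = measured
        BASELINE_FILE.write_text(json.dumps(baselines, indent=2))
        return
    if measured > baseline * TOLERANCE:
        raise SystemExit(f"{name}: {measured:.2f}s exceeds baseline {baseline:.2f}s")

if __name__ == "__main__":
    # Hypothetical local deployment and page; adjust to the real test environment.
    check_against_baseline("http://localhost:8080/dataverse/root", "dataverse page")
```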
During sprint planning today we talked about a lot of stuff including:
I don't know which tools we should use, but from a quick look Jenkins has a "performance" plugin (https://plugins.jenkins.io/performance) that shows historical trends for throughput, response time, etc. There's a screenshot at https://stackoverflow.com/questions/53411347/jenkins-display-multiple-jmeter-report-jtl-in-single-graph I'm not trying to specify any particular tool at this point, but this is what I mean when I say we should gather the metrics over time into a database or similar, hopefully in an automated fashion. Ideas for tools are welcome!
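A minimal sketch of the "gather metrics over time" idea, assuming JMeter writes its results as a CSV-format .jtl with the default header (which includes `label` and `elapsed` columns); the SQLite file name is illustrative:

```python
import csv
import sqlite3
import statistics
from datetime import datetime, timezone

def record_run(jtl_path: str, db_path: str = "perf-history.db") -> None:
    """Append per-label summary statistics from a JMeter .jtl (CSV) file
    to a SQLite table, so trends can be graphed across builds."""
    samples: dict[str, list[int]] = {}
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            samples.setdefault(row["label"], []).append(int(row["elapsed"]))

    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS runs
           (run_at TEXT, label TEXT, samples INTEGER, mean_ms REAL, p95_ms REAL)"""
    )
    run_at = datetime.now(timezone.utc).isoformat()
    for label, elapsed in samples.items():
        # 95th percentile of the elapsed times (falls back to the single sample).
        p95 = statistics.quantiles(elapsed, n=20)[-1] if len(elapsed) > 1 else elapsed[0]
        conn.execute(
            "INSERT INTO runs VALUES (?, ?, ?, ?, ?)",
            (run_at, label, len(elapsed), statistics.mean(elapsed), p95),
        )
    conn.commit()
    conn.close()
```

A dashboard tool (the Jenkins performance plugin or anything else) could then read this table to plot trends per page across builds.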
We want this but when? Let's open a fresh issue closer to when we plan to work on it.
Produce a performance test suite that can run at build time.
Initially, it should measure the performance of the most frequently accessed pages: dataset, dataverse.
There are some open questions about what constitutes an effective performance test suite: whether it needs to reliably reflect production performance (requiring the same servers and data), or whether it can be used as a relative measure of performance across a range of real and/or synthetic data, such as larger lists of parent/child objects, e.g. dataverses with datasets and datasets with files.
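A minimal sketch of the "relative measure" option, assuming two synthetic collections have been set up ahead of time (one with a handful of datasets, one with many); the base URL and collection aliases are hypothetical:

```python
import time
from statistics import median
from urllib.request import urlopen

def median_load_time(url: str, samples: int = 7) -> float:
    """Median page response time in seconds over several samples."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        urlopen(url).read()
        times.append(time.perf_counter() - start)
    return median(times)

if __name__ == "__main__":
    # Hypothetical aliases for two synthetic collections created beforehand.
    base = "http://localhost:8080"
    small = median_load_time(f"{base}/dataverse/perf-small")
    large = median_load_time(f"{base}/dataverse/perf-large")
    print(f"small collection: {small:.2f}s, large collection: {large:.2f}s, "
          f"ratio: {large / small:.1f}x")
```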