Test Automation: Investigate, specify, prototype a performance test suite that can be run at build time. #4201

Closed
kcondon opened this issue Oct 13, 2017 · 6 comments

@kcondon (Contributor) commented Oct 13, 2017

Produce a performance test suite that can run at build time.
Initially, it should measure the performance of the most frequently accessed pages: dataset, dataverse.

There are some open questions about what constitutes an effective performance test suite: whether it needs to be predictive of production performance, which would require the same servers and data, or whether it can serve as a relative measure of performance against a range of real and/or synthetic data, such as larger lists of parent/child objects (e.g., a dataverse with many datasets, or datasets with many files).
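
To make the "relative measure" idea concrete, here is a minimal Python sketch that times the two most frequently accessed pages against a running instance. The base URL, page paths, persistent identifier, and sample count below are illustrative assumptions, not anything specified in this issue:

```python
# Minimal sketch: measure relative response times for the dataverse and
# dataset pages. The base URL, paths, DOI, and sample size are assumptions.
import statistics
import time

import requests

BASE_URL = "http://localhost:8080"  # assumed local Dataverse instance
PAGES = {
    "dataverse": "/dataverse/root",  # assumed collection page path
    "dataset": "/dataset.xhtml?persistentId=doi:10.5072/FK2/EXAMPLE",  # hypothetical DOI
}
SAMPLES = 10

for name, path in PAGES.items():
    timings = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        response = requests.get(BASE_URL + path, timeout=30)
        response.raise_for_status()
        timings.append(time.perf_counter() - start)
    print(f"{name}: median {statistics.median(timings):.3f}s over {SAMPLES} requests")
```

A build-time suite along these lines would not predict production performance, but it could flag relative regressions between builds run on the same hardware and data.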

@pdurbin (Member) commented Apr 30, 2018

I'd recommend watching the video at https://github.com/BU-NU-CLOUD-SP18/Dataverse-Scaling#our-project-video, created by BU students Ashwin Pillai, James Michael Clifford, Ryan Morano, and Patrick Dillon in April 2018.

They're using JMeter and here are some screenshots from around 17:38:

[Five screenshots of the JMeter test plan, taken from around 17:38 in the video]

@pillai-ashwin commented

The above tests were run on a Minishift deployment of the containerized Dataverse application used for our class group project, Dataverse Scaling. I have documented the steps for creating my simple JMeter test against the Dataverse deployment, and I have also added two JMeter ".jmx" files.

"Dataverse Load testing template.jmx" can be used, together with the documented test-creation steps, as a starting point for a customized test plan as needed.

"MOC Project Dataverse Scaling_Load testing.jmx" can be used to test the deployment with only minor changes to the test plan, such as adding or reducing users/threads.

Please access these files using the drive link below:
Dataverse Load Testing using Jmeter
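
For anyone wanting to run such .jmx plans at build time rather than in the JMeter GUI, here is a hedged sketch of invoking JMeter in non-GUI mode from a Python build step. The file names are taken from the comment above, but the `threads` property name is an assumption; it only works if the test plan reads its thread count via `${__P(threads,10)}` rather than hard-coding it:

```python
# Sketch: run a JMeter test plan in non-GUI mode from a build script.
# Assumes the jmeter binary is on the PATH and that the test plan reads
# its thread count from a "threads" property (an assumption about the plan).
import subprocess

subprocess.run(
    [
        "jmeter",
        "-n",                                          # non-GUI mode
        "-t", "Dataverse Load testing template.jmx",   # test plan to run
        "-l", "results.jtl",                           # write sample results here
        "-Jthreads=25",                                # override user/thread count
    ],
    check=True,  # fail the build step if JMeter exits non-zero
)
```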

@pdurbin (Member) commented Jul 8, 2019

@rockash thanks! I gave you a 🎉 back when you uploaded your JMeter jmx files and doc, but thanks again. Much appreciated. 😄 I'm attaching the files to this issue for extra safekeeping: Dataverse Load testing using Jmeter-20190708T162312Z-001.zip

Speaking of Jmeter, as I mentioned at standup this morning, a couple of weeks ago @4tikhonov said he plans to use Jmeter in his development efforts for SSHOC.

Someone recorded his talk! You can hear him talking about Jmeter at https://youtu.be/vAPpKuDQUDY?t=341

Here's the Jmeter slide from https://osf.io/cqsrj/

[Image: the JMeter slide from the SSHOC presentation]

@scolapasta (Contributor) commented

As a first step for this, let's take the performance tests that @kcondon uses and find a way to script them so we can automate them.

We can then establish baseline numbers for the current build and make sure that any new commits don't worsen them. (As a later step, we can eventually use these numbers to improve areas that need it and establish new baselines.)
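
As a rough illustration of the baseline idea, here is a sketch that compares a freshly measured median page time against a stored baseline and fails the build if the slowdown exceeds a tolerance. The baseline file name, tested URL, and 20% tolerance are assumptions for illustration, not agreed-upon values:

```python
# Sketch: fail the build if the measured median response time regresses
# beyond an allowed factor of a stored baseline. File name, URL, and
# tolerance below are illustrative assumptions.
import json
import statistics
import sys
import time

import requests

BASELINE_FILE = "performance-baseline.json"  # e.g. {"dataset_page_s": 0.85}
TOLERANCE = 1.20                             # allow up to a 20% slowdown
URL = "http://localhost:8080/dataset.xhtml?persistentId=doi:10.5072/FK2/EXAMPLE"

timings = []
for _ in range(10):
    start = time.perf_counter()
    requests.get(URL, timeout=30).raise_for_status()
    timings.append(time.perf_counter() - start)
measured = statistics.median(timings)

with open(BASELINE_FILE) as f:
    baseline = json.load(f)["dataset_page_s"]

if measured > baseline * TOLERANCE:
    print(f"FAIL: {measured:.3f}s vs baseline {baseline:.3f}s")
    sys.exit(1)
print(f"OK: {measured:.3f}s (baseline {baseline:.3f}s)")
```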

@djbrooke added the Large label Aug 28, 2019
@pdurbin (Member) commented Aug 28, 2019

During sprint planning today we talked about a lot of stuff including:

  • We should determine where these numbers go so that we can view historical trends

I don't know which tools we should use, but from a quick look Jenkins has a "performance" plugin (https://plugins.jenkins.io/performance) that shows historical trends for throughput, response time, etc. Here's a screenshot from https://stackoverflow.com/questions/53411347/jenkins-display-multiple-jmeter-report-jtl-in-single-graph

[Screenshot: Jenkins performance plugin showing historical trend graphs]

I'm not trying to specify any particular tool at this point, but this is what I mean when I say that we should gather the metrics over time into a database or whatever, hopefully in an automated fashion. Ideas for tools are welcome!
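
Independent of any particular CI tool, here is a sketch of what "gather the metrics over time" could look like in its simplest form: appending one summary row per build to a small SQLite database so historical trends can be charted later. The database path, table layout, build identifier, and example numbers are assumptions for illustration:

```python
# Sketch: append one row of summary metrics per build to a SQLite database
# so historical trends (throughput, response time) can be charted later.
# The database path, schema, and example values are illustrative assumptions.
import sqlite3
import time

DB_PATH = "performance-history.db"

def record_run(build_id: str, median_response_s: float, throughput_rps: float) -> None:
    """Store one summary row for a performance run."""
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS perf_runs (
                   recorded_at REAL,
                   build_id TEXT,
                   median_response_s REAL,
                   throughput_rps REAL
               )"""
        )
        conn.execute(
            "INSERT INTO perf_runs VALUES (?, ?, ?, ?)",
            (time.time(), build_id, median_response_s, throughput_rps),
        )

# Example usage with made-up numbers:
record_run("build-1234", median_response_s=0.91, throughput_rps=42.0)
```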

@pdurbin (Member) commented Nov 12, 2023

We want this, but when? Let's open a fresh issue closer to when we plan to work on it.

@pdurbin closed this as not planned Nov 12, 2023