
[Feature] Adds benchmarking support to Jest [WIP] #3026

Closed
wants to merge 5 commits

Conversation

@trueadm commented Feb 28, 2017

Summary

This feature is still a work in progress!

Console preview

This PR extends the ideas from #2694 and aims to add basic benchmarking functionality to Jest via the CLI.

This feature adds new syntax support for benchmark and scenario. Rather than using Jasmine or the current testing pipeline to handle benchmarks, this PR creates a new path for dealing with them. You can use beforeEach and afterEach with scenario, just as you can with describe and it in tests. Each scenario can also be async by returning a Promise.
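
For illustration, a minimal benchmark file using the proposed syntax might look like the sketch below (the file name and scenario bodies are hypothetical; benchmark, scenario and beforeEach are assumed to be injected as globals, the same way describe and it are for tests):

// __benches__/fibonacci.bench.js (hypothetical example)
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

benchmark('fibonacci', () => {
  beforeEach(() => {
    // set up any shared state before each scenario run
  });

  // synchronous scenario: measured as operations per second
  scenario('fib(20)', () => {
    fib(20);
  });

  // async scenario: returning a Promise marks it as async
  scenario('fib(20) deferred', () => {
    return new Promise(resolve => {
      setImmediate(() => {
        fib(20);
        resolve();
      });
    });
  });
});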

I've opted to use benchmarkjs to handle the metrics, i.e. how many operations per second each scenario processes.

The directory names, such as __benches__, are stored in the config so they can easily be changed. The new config options have been added, along with their Flow types.
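
As a sketch of how a user might override these in the "jest" section of package.json (option names taken from this PR; the values are only an example):

"jest": {
  "benchEnvironment": "jest-environment-node",
  "benchMatch": ["**/__benches__/**/*.js?(x)"]
}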

Additionally, this PR adds a new package to Jest called jest-benchmark, a self-contained module where most of the benchmarking logic and reporting is handled.

Usage

Jest will look for __benches__ directories in the codebase (like it currently does for __tests__) and will execute each benchmark synchronously, providing feedback in the form of a report on the performance of each scenario in that benchmark.

I've added a Fibonacci example for benchmarking in the examples directory to demonstrate the syntax and the ideas behind what is possible. To run the examples, run yarn run test-examples from the Jest directory; you should see the Fibonacci benchmark run in the CLI and the report printed to the console.

Feedback

This is still a work in progress and my first PR for Jest, so I'd appreciate any feedback on the implementation and on areas I can improve.

Issues/Points

  • jest-benchmark needs to be published or the CI build fails as it can't pick it up. Unless I'm missing something with Lerna and how it bootstraps on CI?
  • Currently, there isn't a way to only run tests or only run benchmarks, Jest will attempt to do both. I was wondering what the best approach might be here for a flag to pass to Jest? Should benchmarks even run by default?

This will add benchmarking functionality to Jest. The CLI will detect benchmark files in __benches__ directories and pass them to BenchmarkRunner. It does not use the test running route, instead using its own path to handle how benchmarks operate. Under the hood, jest-runtime is re-used to ensure benchmarks are piped through the transforms correctly, and a new syntax, "benchmark" and "scenario", is offered for benchmark files. These do not use Jasmine, but they have Jasmine-like functionality. Benchmark.js is used to measure operations per second for scenarios, and a custom benchmark reporter outputs the results to the console.
patternInfo = getTestPathPatternInfo(argv);
return source.getTestPaths(patternInfo);
>>>>>>> upstream/master
Collaborator

whoops

hasteContext,
config,
).runBenches(data.benchPaths),
]);
Collaborator

I wonder if benchmarks will be reliable when run in parallel with tests. Maybe we should at least add an option to run only benchmarks?

Author

I added this to the issues/points section above. I wasn't sure and wanted some feedback before implementing that feature :)

Member

Yeah, the way this is currently implemented, the benchmarks would run in the parent process, while the tests may run in the parent or in a worker.

Collaborator

@thymikee commented Feb 28, 2017

I also get the same error: https://ci.appveyor.com/project/Daniel15/jest/build/2321. Some packages may not be included in package.json

Other than that, it's awesome that you moved it forward ❤️

Author

@trueadm commented Feb 28, 2017

@thymikee Thanks! I've included jest-benchmark in the PR. Did I miss something with lerna in order to store the bootstrapped version so it gets picked up on the CI during install?

@thymikee
Collaborator

Bumping the jest-* packages to 19.0.2 fixes the issue for me, but I'm not sure why it occurs; I have almost no experience with Lerna.

"jest-matchers": "^18.1.0",
"jest-message-util": "^18.1.0",
"jest-snapshot": "^18.1.0",
"path": "^0.12.7"
Contributor

Doesn't this give a linting error?

{
"presets": ["es2015"],
"plugins": ["transform-async-to-generator"]
}
Member

needs a newline!

@@ -0,0 +1,23 @@

Member

All new files need the FB copyright header.

benchEnvironment?: string,
benchMatch?: Array<Glob>,
benchRegex?: string,
benchRunner?: string,
Member

I'd probably make an option "benchmark" available and, within that, have environment, match, and runner. I don't think we need to add the regex option for this new feature (we are only keeping testRegex for backwards compatibility).
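
Something like the following nested shape, sketched against the Flow types above (the exact field names are only a guess at what's being suggested here):

benchmark?: {
  environment?: string,
  match?: Array<Glob>,
  runner?: string,
},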

// Prevent the default benchMatch conflicting with any explicitly
// configured `benchRegex` value
newConfig.benchMatch = [];
}
Member

You can remove these if statements if we get rid of benchRegex.

@@ -258,6 +261,9 @@ function normalize(config: InitialConfig, argv: Object = {}) {
if (config.testEnvironment) {
config.testEnvironment = getTestEnvironment(config);
}
if (config.benchEnvironment) {
config.benchEnvironment = getTestEnvironment(config);
Member

Might wanna rename getTestEnvironment to getEnvironment?

const COMPLETE = chalk.reset.inverse.green.bold(COMPLETE_TEXT) + ' ';

const SUCCESS = chalk.green('✓');
const ERROR = chalk.red('✗');
Member

Mind checking what Jest uses already? I recall that we use different ones on Windows.

Author

Ah okay, I wasn't sure here – it's not the easiest thing to regex search for. Can you link me to the source?

Member

const ops = Math.floor(parseFloat(hz));
const margin = roundToPrecision(parseFloat(stats.rme), 2);
const detail = chalk.gray(
` [ ${ops} op/s (±${margin}%) ${ cycles } cycles ]`
Member

fancy stuff
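
For context, these values come straight from Benchmark.js: a completed benchmark exposes hz (operations per second), stats.rme (relative margin of error as a percentage) and cycles. A minimal standalone sketch of collecting them (not code from this PR):

const Benchmark = require('benchmark');

function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

const bench = new Benchmark('fib(20)', () => fib(20));
bench.on('complete', () => {
  // hz = ops/sec, stats.rme = ± margin (%), cycles = completed sample cycles
  console.log(Math.floor(bench.hz), bench.stats.rme.toFixed(2), bench.cycles);
});
bench.run();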

@@ -20,6 +20,10 @@ const NODE_MODULES_REGEXP = replacePathSepForRegex(constants.NODE_MODULES);
module.exports = ({
automock: false,
bail: false,
benchEnvironment: 'jest-environment-jsdom',
benchMatch: ['**/__benches__/**/*.js?(x)', '**/?(*.)(spec|bench).js?(x)'],
benchRegex: '(/__benches__/.*|(\\.|/)(bench|spec))\\.jsx?$',
Member

Let's call it __benchmarks__? :)

Author

Agreed.

const endBenchmark = await startBenchmark(benchPath, name);
return await new Promise((resolve, reject) => {
if (beforeEach) {
beforeEach();
Member

I guess this means that beforeEach/afterEach can't be async? It would be good to support that!

Author

Good point, I'll add support for async beforeEach/afterEach :)
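
One possible way to do that, sketched purely for illustration (not code from this PR): detect a returned Promise from the hook, mirroring how async scenarios are already detected.

// hypothetical helper: await a hook if it returns a Promise
async function runHook(hook) {
  if (typeof hook === 'function') {
    const result = hook();
    if (result && typeof result.then === 'function') {
      await result;
    }
  }
}

// usage in the runner would then be roughly:
//   await runHook(beforeEach);
//   ...run the scenario...
//   await runHook(afterEach);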

};
const results = [];

environment.global.scenario = noOp;
Member

Is there a reason for stubbing these out?

Author

If people call scenario() when they're not in a benchmark block, it could potentially cause undesirable side effects; I'd rather explicitly stop that from ever becoming an issue.

let afterEach;
const scenarios = [];

environment.global.beforeEach = beforeEachFunc => {
Member

I guess we are ok having only a single beforeEach function per benchmark file, right? In Jasmine you can have many different ones that are grouped with "describe".

Author

This is true, but ideally I wanted us to avoid doing that here. What are your thoughts on it? If there's demand for it, I guess we could add it.

results.push(await runBenchmark(benchPath, benchmarks[i]));
}
return results;
} catch (err) {
Member

nit: "error" :)

@@ -0,0 +1,75 @@

const Runtime = require('jest-runtime');
Member

There is a lot of duplication in this file. Is there a way we could re-use most of what already exists in runTest?

})
- .then(runResults => {
+ .then(([testRunResults]) => {
Member

I guess we aren't doing anything with the benchmark results?

Author

They are handled by the benchmark package rather than the cli, to reduce clutter.

Member

@cpojer commented Mar 3, 2017

Wow, this is really impressive – great work! Unfortunately it doesn't work locally for me; mind rebasing on the latest version of Jest, running yarn clean-all and then yarn, and seeing if that works for you? :)

Overall, the big question we have to answer here is whether we want all of this to live within jest-cli or whether we want a separate CLI (jest-benchmark) for this. I'm thinking that the latter may be the more viable solution long-term and that it will result in less clutter. It comes down to how much we want to ship out of the box; for example, should watch mode work for benchmarks? If it makes everything more complex, then probably not.

Personally I'm a bit worried about expanding everything around jest-cli, like SearchSource, to support two concepts, "benchmarks" and "tests", rather than N arbitrary things that can be run, which would essentially turn Jest into a task runner. To answer this, the question is how much of the existing infrastructure can be changed to support both tests and benchmarks rather than extending it directly. Can SearchSource work as a source for any pattern rather than hardcoding them? If yes, I'd strongly recommend refactoring this and making jest-benchmark a separate CLI runner for benchmarks.
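
As a rough illustration of that idea (not existing Jest code; names are hypothetical): the source could accept arbitrary glob patterns instead of hardcoding the test or bench variants.

const micromatch = require('micromatch');

// allFiles: every file known to the project (e.g. from jest-haste-map);
// globs: whatever patterns the caller cares about.
function getPathsMatching(allFiles, globs) {
  return allFiles.filter(file => micromatch.any(file, globs));
}

// e.g. getPathsMatching(files, config.testMatch)
//      getPathsMatching(files, config.benchMatch)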

Given the plans we have to support a multi-config runner in #2970, we could conceivably find a way to run both tests and benchmarks with the same config in the same run without doing it directly inside jest-cli.

What do you think?

Member

@cpojer left a comment

see comment

Member

@cpojer commented Apr 18, 2017

I'll close this because we'll be taking a different direction. We can reopen when you push to it, or we can take it to a new PR.

@cpojer closed this Apr 18, 2017
Contributor

@hulkish commented Apr 24, 2018

@cpojer I'm curious about the outcome of this. What direction has Jest taken here?

github-actions bot locked as resolved and limited conversation to collaborators May 12, 2021