[Feature] Adds benchmarking support to Jest [WIP] #3026
Conversation
This will add benchmarking functionality to Jest. The CLI will detect benchmark files in `__benches__` directories and pass them to `BenchmarkRunner`. It does not use the test running route, instead opting for its own path to handle how benchmarks operate. Under the hood, `jest-runtime` is reused to ensure benchmarks are piped through the transforms correctly, and a new syntax is offered in the form of `benchmark` and `scenario` for benchmark files. These do not use Jasmine, but have Jasmine-like functionality. Benchmark.js is used to measure the operations per second of scenarios, and a custom benchmark reporter outputs the results to the console.
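To illustrate the proposed syntax, here is a rough sketch of what a `__benches__` file might look like. The stub definitions of `benchmark` and `scenario` at the top are mine, standing in for the globals the runner would inject; the Fibonacci example mirrors the one mentioned later in this PR:

```javascript
// Stub implementations for illustration only; in a real __benches__
// file the benchmark runner would inject these globals.
const scenarios = [];
const scenario = (name, fn) => scenarios.push({name, fn});
const benchmark = (name, fn) => fn();

// A benchmark groups scenarios; each scenario body is what gets
// measured (ops/sec via Benchmark.js in the actual implementation).
benchmark('fibonacci', () => {
  const fib = n => (n < 2 ? n : fib(n - 1) + fib(n - 2));

  scenario('fib(10)', () => fib(10));
  // A scenario may also return a Promise to run asynchronously.
  scenario('fib(20) async', () => Promise.resolve(fib(20)));
});
```

This is only a sketch of the surface API described in this PR, not the actual runner wiring.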
packages/jest-cli/src/runJest.js
Outdated
patternInfo = getTestPathPatternInfo(argv);
return source.getTestPaths(patternInfo);
>>>>>>> upstream/master
whoops
hasteContext,
config,
).runBenches(data.benchPaths),
]);
I wonder if benchmarks will be reliable when run in parallel with tests. Maybe we should at least add an option to run only benchmarks?
I added this to the issues/points section above. I wasn't sure and wanted some feedback before implementing that feature :)
Yeah, the way this is currently implemented, benchmarks would run in the parent process while the tests may run in the parent or in a worker.
I also get the same error: https://ci.appveyor.com/project/Daniel15/jest/build/2321. Some packages may not be included in package.json. Other than that, it's awesome that you moved it forward ❤️
@thymikee Thanks! I've included
Bumping
packages/jest-benchmark/package.json
Outdated
"jest-matchers": "^18.1.0",
"jest-message-util": "^18.1.0",
"jest-snapshot": "^18.1.0",
"path": "^0.12.7"
Doesn't this give a linting error?
{
  "presets": ["es2015"],
  "plugins": ["transform-async-to-generator"]
}
needs a newline!
@@ -0,0 +1,23 @@
All new files need the FB copyright header.
benchEnvironment?: string,
benchMatch?: Array<Glob>,
benchRegex?: string,
benchRunner?: string,
I'd probably make an option `benchmark` available and, within that, have `environment`, `match`, and `runner`. I don't think we need to add the regex option for this new feature (we are only keeping `testRegex` for backwards compat).
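The suggested nested shape might look something like this (a sketch with illustrative values; the keys and the `runner` value are assumptions, mirroring the flat options in the diff above):

```json
{
  "benchmark": {
    "environment": "jest-environment-jsdom",
    "match": ["**/__benchmarks__/**/*.js?(x)"],
    "runner": "jest-benchmark"
  }
}
```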
// Prevent the default benchMatch conflicting with any explicitly
// configured `benchRegex` value
newConfig.benchMatch = [];
}
You can remove these `if` statements if we get rid of `benchRegex`.
@@ -258,6 +261,9 @@ function normalize(config: InitialConfig, argv: Object = {}) {
if (config.testEnvironment) {
  config.testEnvironment = getTestEnvironment(config);
}
if (config.benchEnvironment) {
  config.benchEnvironment = getTestEnvironment(config);
Might wanna rename `getTestEnvironment` to `getEnvironment`?
const COMPLETE = chalk.reset.inverse.green.bold(COMPLETE_TEXT) + ' ';

const SUCCESS = chalk.green('✓');
const ERROR = chalk.red('✗');
Mind checking what Jest uses already? I recall that we use different ones on Windows.
Ah okay, I wasn't sure here – it's not the easiest thing to regex search for. Can you link me to the source?
const ops = Math.floor(parseFloat(hz));
const margin = roundToPrecision(parseFloat(stats.rme), 2);
const detail = chalk.gray(
  ` [ ${ops} op/s (±${margin}%) ${cycles} cycles ]`
fancy stuff
@@ -20,6 +20,10 @@ const NODE_MODULES_REGEXP = replacePathSepForRegex(constants.NODE_MODULES);
module.exports = ({
  automock: false,
  bail: false,
  benchEnvironment: 'jest-environment-jsdom',
  benchMatch: ['**/__benches__/**/*.js?(x)', '**/?(*.)(spec|bench).js?(x)'],
  benchRegex: '(/__benches__/.*|(\\.|/)(bench|spec))\\.jsx?$',
Let's call it `__benchmarks__`? :)
Agreed.
const endBenchmark = await startBenchmark(benchPath, name);
return await new Promise((resolve, reject) => {
  if (beforeEach) {
    beforeEach();
I guess this means that beforeEach/afterEach can't be async? It would be good to support that!
Good point, I'll add support for async beforeEach/afterEach :)
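One way to support both sync and async hooks is to wrap each hook call in `Promise.resolve`, so a hook that returns a promise is awaited while a synchronous hook resolves immediately. A minimal sketch, with hypothetical helper names not taken from this PR:

```javascript
// Hypothetical helper: run a hook whether it is sync or async.
// Promise.resolve() absorbs both plain return values and promises.
const runHook = async hook => {
  if (typeof hook === 'function') {
    await Promise.resolve(hook());
  }
};

// Sketch of a scenario runner that awaits its hooks around the body.
const runScenario = async (fn, {beforeEach, afterEach} = {}) => {
  await runHook(beforeEach);
  const result = await Promise.resolve(fn());
  await runHook(afterEach);
  return result;
};
```

The same trick also lets the scenario body itself be async by returning a Promise, as the PR description mentions.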
};
const results = [];

environment.global.scenario = noOp;
Is there a reason for stubbing these out?
If people call `scenario()` when they're not in a benchmark block, it could potentially cause undesirable side effects. I'd rather just explicitly stop that from ever becoming an issue.
let afterEach;
const scenarios = [];

environment.global.beforeEach = beforeEachFunc => {
I guess we are OK having only a single `beforeEach` function per benchmark file, right? In Jasmine you can have many different ones that are grouped with `describe`.
This is true, but ideally I wanted to avoid doing that here. What are your thoughts on it? If there's demand for it, I guess we could add it?
results.push(await runBenchmark(benchPath, benchmarks[i]));
}
return results;
} catch (err) {
nit: "error" :)
@@ -0,0 +1,75 @@

const Runtime = require('jest-runtime');
There is a lot of duplication in this file. Is there a way we could reuse most of what already exists in `runTest`?
})
.then(runResults => {
.then(([testRunResults]) => {
I guess we aren't doing anything with the benchmark results?
They are handled by the benchmark package rather than the cli, to reduce clutter.
Wow, this is really impressive – great work! Unfortunately it doesn't work locally for me; mind rebasing on the latest version of Jest and running

Overall, the big question we have to answer here is whether we want all of this to live within jest-cli or whether we want a separate CLI. Given the plans we have to support a multi-config runner in #2970, we could conceivably find a way to run both tests and benchmarks with the same config in the same run without doing it directly inside of jest-cli.

What do you think?
see comment
I'll close this because we'll be taking a different direction. We can reopen when you push to it, or let's take it to a new PR.
@cpojer I'm curious about the outcome of this. What direction has Jest taken?
This pull request has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Summary

This feature is still a work in progress!

This PR is an extension of the ideas from #2694, aiming to add basic benchmarking functionality to Jest via the CLI.

This feature adds new syntax support for `benchmark` and `scenario`, and rather than using Jasmine or the current testing pipeline to handle benchmarks, this PR instead creates a new path for dealing with benchmarks. You can use `beforeEach` and `afterEach` with `scenario`, like you can do with tests using `describe` and `it`. Each `scenario` can also be async by returning a Promise.

I've opted to use `benchmarkjs` to deal with the metrics regarding how many operations per second each scenario processes.

The naming of directories, such as `__benches__`, is stored in config for easy change too. The new config has been updated along with the Flow types.

Additionally, this PR adds a new package to Jest called `jest-benchmark`, where most of the benchmarking logic and reporting is handled as a self-contained module.

Usage

Jest will look for `__benches__` directories in the codebase (like it currently does for `__tests__`) and will aim to synchronously execute each `benchmark`, providing feedback in terms of a report on the performance of each `scenario` in the particular `benchmark`.

I've added a Fibonacci example for benchmarking in the `examples` directory to demonstrate the syntax and the ideas behind what can be possible. To run the examples, you can run `yarn run test-examples` from the Jest directory, and you should see the Fibonacci benchmark run in the CLI and the report output to the console.

Feedback

This is still a work in progress and my first PR for Jest, so I'd appreciate all feedback regarding the implementation and how I can improve certain areas.

Issues/Points

`jest-benchmark` needs to be published or the CI build fails, as it can't pick it up. Unless I'm missing something with Lerna and how it bootstraps on CI?