
wip! use tap.spawn() for each test #85

Closed
wants to merge 3 commits into from

Conversation

novemberborn
Contributor

Another suggestion for #80. My "crazier idea" as mentioned in #83:

My even crazier idea was to change the runner a little bit. Still one test file, but it starts each test in a child process and communicates the results. That means any shared setup code is still executed for each test without having to be duplicated.

jamestalmage and others added 3 commits December 9, 2015 14:51
The intent was to use an older version of micromatch, but the current
semver specification allows anything up to the latest release.

The caret range would allow any version up to, but not including, 3.0.0.

Switched to a tilde range, which allows versions up to, but not including, 2.2.0.
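The difference between the two range types can be sketched as follows. This is a simplified illustration (no prerelease or wildcard handling; the authoritative rules live in the node-semver package), with the `^`/`~` prefix stripped from the base version:

```js
'use strict'

function parse (v) {
  return v.split('.').map(Number)
}

// ^a.b.c allows >=a.b.c and <(a+1).0.0 (for a > 0).
function caretSatisfies (version, base) {
  const [M, m, p] = parse(version)
  const [bM, bm, bp] = parse(base)
  return M === bM && (m > bm || (m === bm && p >= bp))
}

// ~a.b.c allows >=a.b.c and <a.(b+1).0.
function tildeSatisfies (version, base) {
  const [M, m, p] = parse(version)
  const [bM, bm, bp] = parse(base)
  return M === bM && m === bm && p >= bp
}
```

So `^2.1.0` admits 2.9.9 (anything below 3.0.0), while `~2.1.0` rejects 2.2.0 and above.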
Annoyingly they now don't work *with* tap --coverage, but on the other hand the
fixes seem legit:

Tests that spawn subprocesses need to run via bin/nyc; otherwise they end up
relying on tap's coverage reporting.

The 'runs reports for all JSON in output directory' test should spawn a
subprocess that itself spawns more processes, else only a single file would be
generated.

The 'handles corrupt JSON files' test doesn't need to spawn a subprocess, it
just has to inject the corrupt JSON file into the output directory.
@novemberborn
Contributor Author

@jamestalmage @bcoe this works a bit better now. Tests pass without tap --coverage but ironically they now fail with tap --coverage. Perhaps #89 is useful there.

I had to fix some tests; these fixes seem to be generally applicable, if I'm understanding everything correctly.

If you run with --reporter=spec you'll see that some describe() groups are output as tests in master. The nesting is better in this branch. Perhaps a tap bug? Regardless, that explains the difference in test counts between master and this branch.

Note that I needed #88 for my tests to pass. I was getting an error in lazy-cache. Not sure why that is.

@jamestalmage
Member

I really think jamestalmage@ad9a7ba is the best strategy for providing our own coverage. Between require hooks and spawn-wrap I think we will continue to run the risk of false positive tests like #86. I could pull that out of PR #89 if we decide this is the forking strategy to go with.

As for this forking strategy vs what I am doing in #89, there are some tradeoffs:

Things this does better:

  • Prints prettier
  • Does not generate a ton of files
  • Does not require a build script (probably runs quite a bit faster)

Things that are better about #89

  • If a single test is giving you a problem, you can run just that test by calling the generated file. It's almost like `.only` support in TAP, which is something I have missed from mocha.
  • You can look inside each generated file for a clearer understanding of what is happening
  • tap's mocha globals option doesn't give you beforeEach/afterEach, but with "Fork per test" (#89) you can haz:

```js
describe('foo', function () {
  rimraf.sync('foo') // this will actually behave like a beforeEach for this describe block

  it('foo-test-1', function () {
    // ...
  })

  it('foo-test-2', function () {
    // ...
  })

  assert(fs.readdirSync('foo').length === x) // an afterEach assertion - applicable only to the `foo` describe block
})

describe('bar', function () {
  // The "beforeEach" / "afterEach" from the `foo` block will not be run in this fork
  // My understanding of this PR is that they would be
})
```

Where they both suck:

  • They hack the tap testing framework in ways that may be surprising to new collaborators. That said, due to the nature of NYC, I think we are stuck with some variation of one of these.
