
Tests with dynamic/random parameters are never run #97

Closed · pjquirk opened this issue Dec 7, 2015 · 7 comments · Fixed by #216

Comments

@pjquirk commented Dec 7, 2015

Consider the following test:

namespace ExampleNamespace
{
    using System;
    using System.Collections.Generic;
    using NUnit.Framework;

    [TestFixture]
    public class RandomParameterTest
    {
        static readonly Random random = new Random();

        public static IEnumerable<int> Numbers => new[] { random.Next(), random.Next() };

        [Test]
        public void TestMethod([ValueSource(nameof(Numbers))] int number)
        {
            Assert.Fail(); // Would fail loudly if the test ever actually ran.
        }
    }
}

This test appears in the adapter as:

[Screenshot: Test Explorer window showing the tests listed with explicit random numbers as parameters]

When I then attempt to run these tests (right-click, Run Selected Tests), neither is run; both simply become grayed out. I assume this is because the adapter generates the test names twice: once when it lists the tests, and again when it attempts to run them. Since a different random number is generated each time, the adapter can't find the test that was selected and so skips it.
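
To illustrate the suspected mechanism outside the adapter, here is a minimal, self-contained sketch (hypothetical demo code, not adapter internals): enumerating a random value source once for "discovery" and again for "execution" yields different values, so none of the discovered test names can be matched at run time.

using System;
using System.Collections.Generic;
using System.Linq;

static class NameMismatchDemo
{
    static readonly Random random = new Random();

    // Same shape as the ValueSource property in the test above.
    static IEnumerable<int> Numbers => new[] { random.Next(), random.Next() };

    static void Main()
    {
        // "Discovery" pass: names are built from one enumeration of the source.
        var discovered = Numbers.Select(n => $"TestMethod({n})").ToList();

        // "Execution" pass: the source is enumerated again, producing new values
        // and therefore new names.
        var executed = Numbers.Select(n => $"TestMethod({n})").ToList();

        // Almost certainly prints False: no discovered name matches an executed
        // name, so the selected tests are silently skipped.
        Console.WriteLine(discovered.SequenceEqual(executed));
    }
}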

The console runner does not have this problem.

@CharliePoole (Member) commented

Good analysis of the problem. It's also described in nunit/nunit-vs-adapter#15, which we had decided to defer to NUnit 3.0 because it's a fairly major change. We'll use this issue to track it for the 3.0 adapter.

@pjquirk (Author) commented Dec 7, 2015

Thanks @CharliePoole. I swear I searched for pre-existing issues before reporting, but I'm 0 for 2 so far!

@CharliePoole (Member) commented

No problem, you saved us copying that issue here from the nunit-vs-adapter repo.

@CharliePoole modified the milestones: 3.0, Backlog (Mar 19, 2016)
@CharliePoole (Member) commented

It would be nice to fix this for 3.0. The simplest possibility seems to be saving the random seed from discovery and re-using it in execution. Where to save it is the key question.

However, saving the random seed would not help us with other instances of the same problem. Random data is only the most obvious place where the use of two completely separate processes for discovery and execution can trip us up. Ideally, we would prefer to save the already-loaded tests somewhere as a tree and just run them when it came to execution time. If somebody with in-depth knowledge of VS architecture can help with ideas, they would be welcome.
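
As a rough illustration of the seed idea (the names and the storage choice here are hypothetical, not a proposed API): if the value source is driven by a seed that discovery records and execution reads back, both passes enumerate identical values, so every selected test name can be matched.

using System;
using System.Collections.Generic;
using System.Linq;

static class SeededSourceDemo
{
    // A seeded variant of the random value source: the same seed always
    // produces the same parameter values, in the same order.
    static IEnumerable<int> Numbers(int seed)
    {
        var random = new Random(seed);
        return new[] { random.Next(), random.Next() };
    }

    static void Main()
    {
        // Hypothetical: discovery picks a seed and persists it somewhere the
        // execution pass can read it back from (the open question above).
        int savedSeed = Environment.TickCount;

        var discoveredNames = Numbers(savedSeed).Select(n => $"TestMethod({n})").ToList();
        var executedNames = Numbers(savedSeed).Select(n => $"TestMethod({n})").ToList();

        // Prints True: identical seeds give identical values, so the names
        // generated at discovery time match those generated at execution time.
        Console.WriteLine(discoveredNames.SequenceEqual(executedNames));
    }
}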

@jocutajar commented

Hi, I have a similar situation, though probably more complex. Using AutoFixture and NSubstitute, I can declare tests like this:

[Theory, AutoSubstituteData]
public void Test_Mixed(int version, string name, [Substitute] IService svc)
{
     // not executed, shown as inconclusive
}

This works with xUnit and also with the NUnit console runner. The VS test runner always shows it as inconclusive. The test output shows:

Test adapter sent back a result for an unknown test case. Ignoring result for 'Test_Mixed(150,"name04f0cd50-e89f-4a8f-db6-dab69b58e98e",Castle.Proxies.IServiceProxy)'.

More details are available on Stack Overflow.

Thanks, Rob

@CharliePoole (Member) commented

@RobajZ It's certainly possible, but we would have to know a lot more about the internals of AutoSubstitute to be sure. If the generated data is random, then fixing this issue is probably a necessary, but not sufficient, condition for fixing the AutoSubstitute problem.

@jocutajar commented

Thanks @CharliePoole, I've accordingly created a new issue: #188.
