This repository has been archived by the owner on Aug 2, 2022. It is now read-only.

Testing runtime #9

Closed
treiher opened this issue Jul 12, 2018 · 9 comments
@treiher
Member

treiher commented Jul 12, 2018

We should add more thorough tests to ensure the correct functioning of the runtime. We should find out if we can reuse tests of the original Ada runtime.

@senier
Member

senier commented Jul 12, 2018

Efforts to port the Ada conformance tests to Genode are done in Componolit/componolit#125. Maybe we should pursue platform independence here, too. @jklmnn, what do you think?

@jklmnn
Member

jklmnn commented Jul 12, 2018

I'm not sure the Ada conformance tests help us here, since they rather test what the runtime supports than whether it functions correctly (correct functioning is tested too, of course, but probably not as specifically as we need).
Generally, I think we're not heading in the right direction with tests. Originally I implemented the secondary stack as a separate library and put it into a separate location so it could be compiled and proven independently of the rest of the runtime. This, especially the SPARK part, is what would have prevented #8 in the first place (in my eyes). But this is also what I miss in the current structure of the runtime: runtime files and their implementation are mingled together.
So I propose that we take the effort to refactor the runtime so that the interface-providing parts (e.g. s-secsta.ad*) and the actual implementation (e.g. ss-utils.ad*) are cleanly separated again, and then prove the implementation. We should still test the runtime as is done now, only more thoroughly.
EDIT: I'd volunteer to do that ;)
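
The proposed separation could look roughly like this (a sketch only; the package name SS_Utils, the subprogram, and the contracts are illustrative, not the actual runtime sources). The runtime-facing s-secsta.ad* units would then become thin, unproven wrappers around such a package:

```ada
--  Sketch: a standalone, provable implementation package (cf. ss-utils.ad*).
--  Names and contracts are illustrative only.
package SS_Utils with SPARK_Mode is

   type Stack_Pointer is mod 2 ** 64;

   --  Allocate Size bytes on a downward-growing stack.
   procedure Allocate
     (Top     : in out Stack_Pointer;
      Size    :        Stack_Pointer;
      Address :    out Stack_Pointer)
   with
     Pre  => Size <= Top,                       --  no underflow / wraparound
     Post => Top = Top'Old - Size and Address = Top;

end SS_Utils;
```

Proving such contracts with GNATprove is exactly the kind of check that would catch an off-by-one or wrong-end allocation before it reaches a platform.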

@senier
Member

senier commented Jul 12, 2018

I agree that we should better test the runtime. Ideally, we should do test-driven development as done for the secondary stack allocator package. Also, using SPARK at least for dataflow analysis would be great.

I wonder whether we only want to test in the common runtime or also create specific tests for the system-specific parts of the runtime. Having tests in a central place seems useful. Opinions?

We also should decide how to build tests. I'd suggest AUnit, but without gnattest, which I found very inconvenient: tests could not be disabled, and I was forced to implement many stupid tests for trivial functions. Also, I found no way to add tests for private subprograms.

Thanks for volunteering!

@jklmnn
Member

jklmnn commented Jul 12, 2018

I'd also do some sort of integration test for the platform-dependent parts of the runtime. This would have helped, for example, to recognize that allocate_secondary_stack returned the wrong side of the stack. We could also check explicitly for the behaviour we defined in the spec.

While writing trivial tests for some functions isn't really a problem for me, not having tests available for private ones is indeed one (so it seems AUnit only does black-box tests on packages).

@jklmnn jklmnn self-assigned this Jul 12, 2018
@senier
Member

senier commented Jul 12, 2018

Testing private subprograms is possible in AUnit. I just found no way to use gnattest for that. So "manual" AUnit should be OK (I use that in JWX).

For the integration test you suggest, the question remains whether generic integration tests (in the generic runtime) suffice or whether we need specific tests in the platform repositories.
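
One way to reach private subprograms with plain AUnit is to declare the test package as a child of the unit under test, since a child unit has visibility of its parent's private part. A sketch (the package Sec_Stack and the subprograms Push and Grow are made up; bodies of Sec_Stack are omitted):

```ada
--  Parent unit (illustrative): Grow is only visible in the private part.
package Sec_Stack is
   procedure Push (Size : Natural);
private
   function Grow (Old_Size : Natural) return Natural;
end Sec_Stack;

--  A child package sees the parent's private part, so an AUnit fixture
--  placed here can call Grow directly.
with AUnit.Test_Fixtures;
with AUnit.Assertions;

package Sec_Stack.Tests is
   type Fixture is new AUnit.Test_Fixtures.Test_Fixture with null record;
   procedure Test_Grow (T : in out Fixture);
end Sec_Stack.Tests;

package body Sec_Stack.Tests is
   procedure Test_Grow (T : in out Fixture) is
   begin
      AUnit.Assertions.Assert
        (Grow (100) > 100, "Grow must enlarge the stack");
   end Test_Grow;
end Sec_Stack.Tests;
```

This only covers subprograms declared in the private part of the spec; subprograms local to the package body remain out of reach for any external test.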

@jklmnn
Member

jklmnn commented Jul 16, 2018

Oh I failed to make the distinction between AUnit and gnattest.

As far as I understand it, we have to run the integration test on the specific platform itself (how else could we test the memory allocation, for example?). So the test functions also need to be somewhat platform-specific. I would separate them into two parts (as is done in ACATS): one that checks the functional requirements (e.g. whether the mapping is correct) and one that takes the same role as the Report package.
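
A minimal Report-style support package (in the spirit of the ACATS Report unit; the name Test_Report and its interface are illustrative) could look like this, with only the output routine needing a platform-specific implementation:

```ada
--  Sketch of an ACATS-Report-like support package for platform tests.
package Test_Report is
   procedure Start  (Name : String);     --  announce a test
   procedure Failed (Reason : String);   --  record a failure
   procedure Finish;                     --  print PASSED/FAILED summary
end Test_Report;

with Ada.Text_IO;  --  a platform port would swap this for its own output

package body Test_Report is
   Any_Failed : Boolean := False;

   procedure Start (Name : String) is
   begin
      Ada.Text_IO.Put_Line ("---- " & Name);
   end Start;

   procedure Failed (Reason : String) is
   begin
      Any_Failed := True;
      Ada.Text_IO.Put_Line ("FAILED: " & Reason);
   end Failed;

   procedure Finish is
   begin
      Ada.Text_IO.Put_Line
        ((if Any_Failed then "RESULT: FAILED" else "RESULT: PASSED"));
   end Finish;
end Test_Report;
```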

@jklmnn
Member

jklmnn commented Jul 16, 2018

I looked into the AUnit Cookbook and the tests autogenerated by gnattest. Since gnattest does not support private subprograms, we have to use AUnit without it, which means we have to create all tests manually. This is time-consuming, but the tests created by gnattest are hardly usable anyway, since they depend on some gnattest libraries. I also doubt that refining them manually would take significantly less time.

@senier
Member

senier commented Jul 16, 2018

IMHO you overestimate the effort required to create tests manually. Once you have the basic setup, it mainly boils down to copying a test case or test suite. I never felt this to be overly complicated. Have a look at the JWX tests.
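
For reference, the basic manual AUnit setup is small. A sketch following the structure in the AUnit cookbook, where My_Tests, My_Fixture, and Test_Something are placeholders for real test units:

```ada
--  Sketch of a manual AUnit harness (no gnattest).  My_Tests, My_Fixture
--  and Test_Something are placeholders, not real units of this project.
with AUnit.Test_Caller;
with AUnit.Test_Suites;
with AUnit.Run;
with AUnit.Reporter.Text;
with My_Tests;

procedure Harness is
   package Caller is new AUnit.Test_Caller (My_Tests.My_Fixture);

   function Suite return AUnit.Test_Suites.Access_Test_Suite is
      S : constant AUnit.Test_Suites.Access_Test_Suite :=
        new AUnit.Test_Suites.Test_Suite;
   begin
      --  Adding a further case is one more Add_Test line.
      S.Add_Test
        (Caller.Create ("my first test", My_Tests.Test_Something'Access));
      return S;
   end Suite;

   procedure Run is new AUnit.Run.Test_Runner (Suite);
   Reporter : AUnit.Reporter.Text.Text_Reporter;
begin
   Run (Reporter);
end Harness;
```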

@senier
Member

senier commented May 21, 2019

Closing in favor of #32

@senier senier closed this as completed May 21, 2019