
Add algorithm for testing Project Euler solutions #2471

Merged — 14 commits merged into TheAlgorithms:master from the project-euler-pytest branch
Sep 28, 2020

Conversation

@dhruvmanila (Member)

As described in #2463 (comment)
Currently in beta

Adding a solution_test.py file for testing all the submitted solutions for Project Euler. The file will not be picked up by Travis CI, so we will have to call it manually; maybe we can call it in the before_script section of the Project Euler build.

The use cases for the imported modules (a combined sketch follows the list):

  • pathlib:
    • Path manipulation
    • Iterating over the solution files path
    • Excluding all non-solution files
  • importlib.util
    • For importing the solution files directly using their paths
  • logging
    • For logging errors like:
      • File doesn't contain solution() function
      • solution() function requires a positional argument
      • Incorrect answer for the problem
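
Putting those together, a minimal sketch of the loader (the repository layout and the log filename are assumptions; this is not the exact PR code):

import importlib.util
import logging
from pathlib import Path

logging.basicConfig(filename="euler_test.log", level=logging.ERROR)  # stand-in for LOG_FILENAME

PROJECT_EULER_PATH = Path.cwd() / "project_euler"  # assumed repository layout

for solution_path in PROJECT_EULER_PATH.rglob("sol*.py"):  # skips non-solution files
    spec = importlib.util.spec_from_file_location(solution_path.stem, solution_path)
    module = importlib.util.module_from_spec(spec)
    try:
        spec.loader.exec_module(module)  # execute the module so solution() is defined
        answer = str(module.solution())  # call with no args; defaults yield the answer
    except (AttributeError, TypeError) as err:
        logging.error("%s\nSource: %s", err, solution_path.name)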

Currently, all the logs go to the file named by the LOG_FILENAME constant; we can change the destination to wherever we want. The log looks like this (we can change the format however we want):

ERROR: solution() missing 1 required positional argument: 'n' 
Source: Problem 1: sol7.py
...

ERROR: module 'sol3.py' has no attribute 'solution' 
Source: Problem 10: sol3.py
...

Describe your change:

  • Add an algorithm?

Checklist:

  • I have read CONTRIBUTING.md.
  • This pull request is all my own work -- I have not plagiarized.
  • I know that pull requests will not be merged if they fail the automated tests.
  • This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
  • All new Python files are placed inside an existing directory.
  • All filenames are in all lowercase characters with no spaces or dashes.
  • All functions and variable names follow Python naming conventions.
  • All function parameters and return values are annotated with Python type hints.
  • All functions have doctests that pass the automated testing.
  • All new algorithms have a URL in their comments that points to Wikipedia or another similar explanation.
  • If this pull request resolves one or more open issues then the commit message contains Fixes: #{$ISSUE_NO}.

@cclauss cclauss marked this pull request as draft September 24, 2020 10:09
@cclauss (Member) commented Sep 24, 2020

I put the PR in DRAFT mode. (Kind of like beta) ;-)

Co-authored-by: Christian Clauss <cclauss@me.com>
@dhruvmanila (Member, Author)

Extra ideas:

Should we keep the answers in a separate file in JSON format? Or maybe extract the answers from https://github.com/nayuki/Project-Euler-solutions/blob/master/Answers.txt every time we call the script, using either the urllib.request or requests (the better option) library?
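
If we went the download route, a sketch might look like this (the raw URL and the "Problem NNN: answer" line format are assumptions based on the linked file, not verified):

import requests

ANSWERS_URL = (
    "https://raw.githubusercontent.com/nayuki/"
    "Project-Euler-solutions/master/Answers.txt"
)

def fetch_answers() -> dict[int, str]:
    """Download Answers.txt and map each problem number to its answer string."""
    answers = {}
    for line in requests.get(ANSWERS_URL, timeout=10).text.splitlines():
        if line.startswith("Problem "):
            problem, _, answer = line.partition(": ")
            answers[int(problem.removeprefix("Problem "))] = answer.strip()
    return answers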

Do we need additional information in our logs, like lineno or asctime?
All the available attributes are listed at https://docs.python.org/3/library/logging.html#logrecord-attributes
Should we log with colors?

@cclauss (Member) commented Sep 24, 2020

This is looking awesome. The logging is just for us to develop the tests. Don't make the logs fancy -- we will get rid of them when we are done. Once we land this PR, files will either pass the test or we will flag them as inadequate.

@dhruvmanila (Member, Author)

Alright. What are you thinking of doing next?
How should we implement it in Travis CI? As I mentioned above, we can add python project_euler/solution_test.py to the before_script part of the config file, or maybe add it to the scripts directory, as that seems like the place where this script should live.

Once we land this PR, files will either pass the test or we will flag them as inadequate.

So, the tests will fail if the requirements aren't satisfied, printing an appropriate message for the person to debug? I'm a bit confused about what you mean by "flag them as inadequate".

@cclauss (Member) commented Sep 24, 2020

Let's run and fail fast just like scripts/validate_filenames.py does...
https://travis-ci.com/github/TheAlgorithms/Python/builds/185558423#L256
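
A hedged sketch of that fail-fast pattern (illustrative names; see scripts/validate_filenames.py for the real thing): collect the offenders, print them, and exit nonzero so the CI build stops.

import sys

def report_and_exit(bad_files: list[str]) -> None:
    """Print every offending file, then fail the build with a nonzero exit code."""
    if bad_files:
        print(f"{len(bad_files)} files failed validation:")
        print("\n".join(sorted(bad_files)))
        sys.exit(len(bad_files))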

@dhruvmanila (Member, Author)

Oh lol, I forgot to change the solution_test.py file. Also, should I change it to work like validate_filenames.py, which groups the files according to the error, prints the filenames, and exits with an appropriate error code?

@cclauss (Member) commented Sep 24, 2020

So... pytest "test discovery" finds all files named test_xxx.py or xxx_test.py and runs them as tests. So pytest is already running this file automatically, which means you do not need to run it explicitly in .travis.yml.

@cclauss (Member) commented Sep 24, 2020

- Renamed file so that it isn't picked up by pytest
- Fail fast on validating solutions through Travis CI
@TravisBuddy

Hey @dhruvmanila,
Something went wrong with the build.

TravisCI finished with status errored, which means the build failed because of something unrelated to the tests, such as a problem with a dependency or the build process itself.

View build log

TravisBuddy Request Identifier: c5c89b70-feac-11ea-b444-3dcc0bd866ab

@dhruvmanila (Member, Author)

This commit is going to fail; it demonstrates what I had in mind when you mentioned:

Let's run and fail fast just like scripts/validate_filenames.py does...

https://travis-ci.com/github/TheAlgorithms/Python/jobs/390693925#L249

@TravisBuddy

Hey @dhruvmanila,
Something went wrong with the build.

TravisCI finished with status errored, which means the build failed because of something unrelated to the tests, such as a problem with a dependency or the build process itself.

View build log

TravisBuddy Request Identifier: 8df1c1d0-fead-11ea-b444-3dcc0bd866ab

@cclauss (Member) commented Sep 24, 2020

commit is going to fail

That is why we are in draft mode. ;-)

OK... So now, how do we read the function signature so that we know how many input parameters a function needs? For each directory, we will need some sample data to pass into the functions that take parameters.

@dhruvmanila (Member, Author)

The input parameters come from the questions. We could require that if a function takes arguments, as it would for its doctests, the default argument(s) should be the question's input parameters.

In the other case, we would need to keep those input parameters ourselves and pass them as arguments when required, which we will know if we get a TypeError. That requires us to maintain those input parameters on our end.
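
For illustration, a hedged helper for that TypeError fallback (the helper name and the example call are mine, not the PR's):

def answer_for(solution, params):
    """Call solution() with no args; on TypeError, retry with our stored params."""
    try:
        return str(solution())
    except TypeError:
        return str(solution(*params))

# e.g. answer_for(lambda n=1000: 2 * n, (500,)) returns "2000"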

@cclauss (Member) commented Sep 24, 2020

from collections import namedtuple

q_and_a = namedtuple("q_and_a", "question answer")

ANSWERS = {
    1: q_and_a(None, "233168"),
    2: q_and_a(1, "4613732"),
    3: q_and_a((7, 3), "6857"),
    4: q_and_a("Mary", "906609"),
    ...
}

Just do a few to see if we can make it work.

@dhruvmanila (Member, Author) commented Sep 24, 2020

Not possible. Just take this problem as an example: https://projecteuler.net/problem=4

Here they are asking for the largest palindrome made from the product of two 3-digit numbers. Now, how do we define an input parameter for this question? The submitted solutions take n as the number up to which to search for the largest palindrome product, but the question asks for the largest one, and without knowing the answer we have no idea what that limit will be. n could instead have been the number of digits of the factors, and then for this question n would be 3.

We do know the answer, but that's not my point. We cannot define an input parameter for such questions. solution() should define its default input parameters according to the question and the solution they came up with, such that calling the function with no arguments yields the actual answer.
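
A hedged sketch of that convention for problem 4 (this brute-force body is illustrative, not the repository's actual code):

def solution(limit: int = 999999) -> int:
    """Largest palindrome below limit that is a product of two 3-digit numbers."""
    return max(
        a * b
        for a in range(100, 1000)
        for b in range(a, 1000)
        if a * b <= limit and str(a * b) == str(a * b)[::-1]
    )

assert solution() == 906609  # the no-argument call yields the actual answer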

Just take a look at the size of this input parameter lol: https://projecteuler.net/problem=13

@cclauss (Member) commented Sep 24, 2020

If solution() needs no parameters, then question=None, like I did with number 1.
If solution() needs one parameter, then question = that one parameter, like I did with number 2.
If solution() needs multiple parameters, then question = a tuple of parameters, like number 3.
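
A hypothetical dispatcher for that convention (not code from this PR):

def run_solution(solution, question):
    if question is None:
        return str(solution())           # e.g. problem 1: no parameters
    if isinstance(question, tuple):
        return str(solution(*question))  # e.g. problem 3: unpack the tuple
    return str(solution(question))       # e.g. problem 2: single parameter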

@dhruvmanila (Member, Author)

So you're saying we provide the parameters? What do you suggest the parameter for https://projecteuler.net/problem=4 should be? The submitted solutions do ask for a parameter.

@dhruvmanila (Member, Author)

Hi @cclauss, Can you take a look at this? Thanks! 😁

Sorry, it was pretty late so I went to sleep.
Here you can see that some of the problem inputs get really large and a lot of them are None. This is pretty inefficient. For the long run, I would highly suggest enforcing either no parameters for solution(), or default parameters set to the question's input, according to their solution and the given problem.

https://travis-ci.com/github/TheAlgorithms/Python/jobs/390761149#L243
As you can see, it's taking twice as long now, and some of the functions are returning a list or dict object.

Take a look at: https://github.com/nayuki/Project-Euler-solutions/blob/master/python/eulertest.py#L21
It's a single function that directly returns the answer, making life so much easier for testing ;-)

@TravisBuddy

Hey @dhruvmanila,
Something went wrong with the build.

TravisCI finished with status errored, which means the build failed because of something unrelated to the tests, such as a problem with a dependency or the build process itself.

View build log

TravisBuddy Request Identifier: 83621750-ff03-11ea-b444-3dcc0bd866ab

@cclauss (Member) commented Sep 26, 2020

OK... Let's force default args so that solution() can always be called with no args.

On eulertest.py#L21, my modification would be:

  • actualans = module.compute() --> actual_answer = str(module.compute())
    This allows modules to return whatever datatype they want but actual_answer is always str.

If it would help, send me an email so we can do a videoconf.

@dhruvmanila (Member, Author)

Alright, I did it and also added all the answers according to https://github.com/luckytoilet/projecteuler-solutions

actualans = module.compute() --> actual_answer = str(module.compute())
This allows modules to return whatever datatype they want but actual_answer is always str.

I already did this right from the start 😁

I'll send you the email, but is there anything else to discuss?

@TravisBuddy

Hey @dhruvmanila,
Something went wrong with the build.

TravisCI finished with status errored, which means the build failed because of something unrelated to the tests, such as a problem with a dependency or the build process itself.

View build log

TravisBuddy Request Identifier: 01f25b50-ffe7-11ea-9882-0125ab0f75bf

- As we want to fail fast on testing solutions, we need to test using
  this script first before we use tests written by the author.
- As pytest stops testing as soon as it receives a traceback, we need to
  use pytest-subtests to tell pytest to test all the iterations for the
  function with given parameters.
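
A minimal sketch of the pytest-subtests pattern that commit describes, reusing ANSWERS and run_solution from the sketches above (load_solution is a hypothetical loader, not the PR's actual helper):

def test_all_solutions(subtests):  # the subtests fixture comes from pytest-subtests
    for number, (question, expected) in ANSWERS.items():
        with subtests.test(msg=f"Problem {number}"):
            # one failing problem is reported without aborting the whole loop
            assert run_solution(load_solution(number), question) == expected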
@dhruvmanila (Member, Author)

Alright, that's not so good in terms of information; plus it went on to execute the other script, so I will put it in the before_script section. This reduces the overall time by a large margin, from 60-80s to 30s.

Any idea how to capture just the message part of the exception in pytest and exit with failure?
We could use an except block and print the message like before, but if we handle the exception, pytest will think the function passed the test.

@dhruvmanila dhruvmanila requested a review from cclauss September 27, 2020 05:17
@TravisBuddy

Hey @dhruvmanila,
Something went wrong with the build.

TravisCI finished with status errored, which means the build failed because of something unrelated to the tests, such as a problem with a dependency or the build process itself.

View build log

TravisBuddy Request Identifier: e2767590-00ef-11eb-a257-6fde496095bb

@dhruvmanila (Member, Author)

I condensed the long error messages into one line like before, and now Travis CI will exit if this script fails.
@cclauss Can you take a look at this? And what is Hacktoberfest?

- Add custom print function to print all the error messages at the
  end of all the tests
- Let Travis skip if this failed
@dhruvmanila dhruvmanila marked this pull request as ready for review September 28, 2020 03:28
@dhruvmanila (Member, Author)

I removed the patch for printing the error messages and am using a fixture to do it instead.
Source: https://stackoverflow.com/a/52873379
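
A hedged sketch of such an error-collecting fixture (names are illustrative; the linked Stack Overflow answer has the original idea):

import pytest

collected_errors: list[str] = []

@pytest.fixture(scope="session", autouse=True)
def report_errors():
    yield                          # let the whole test session run first
    for message in collected_errors:
        print(message)             # print all collected error messages at the end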

That's weird. Why did Travis start two builds?

@dhruvmanila dhruvmanila marked this pull request as draft September 28, 2020 05:02
@dhruvmanila dhruvmanila marked this pull request as ready for review September 28, 2020 05:02
@cclauss (Member) left a comment

Awesome work @dhruvmanila Congratulations! 💯

@poyea @AnupKumarPanwar @dynamitechetan This is an awesome submission made by @dhruvmanila. Well-written, modern Python that delivers the superpower of validating the answer to every Euler solution in our repository. It scans every subdirectory, finds and runs each proposed Euler solution, and then lists any algorithm that does not return the correct answer.

I would like to propose that we add @dhruvmanila as a maintainer of TheAlgorithms.

@cclauss cclauss merged commit ceacfc6 into TheAlgorithms:master Sep 28, 2020
@dhruvmanila dhruvmanila deleted the project-euler-pytest branch September 28, 2020 09:41
@dhruvmanila (Member, Author)

@cclauss Thank you for giving me the opportunity to work on it. 😇

@dhruvmanila dhruvmanila mentioned this pull request Sep 30, 2020
stokhos pushed a commit to stokhos/Python that referenced this pull request Jan 3, 2021
* Add file for testing Project Euler solutions

* Remove the importlib import

* Update project_euler/solution_test.py

Co-authored-by: Christian Clauss <cclauss@me.com>

* Small tweaks to project_euler/solution_test.py

* Test Project Euler solutions through Travis

* Improved testing for Project Euler solutions:

- Renamed file so that it isn't picked up by pytest
- Fail fast on validating solutions through Travis CI

* Update validate_solutions.py

* Use namedtuple for input parameters and answer

- Remove logging
- Remove unnecessary checks for PROJECT_EULER_PATH as Travis CI
  picks up the same path

* Fix flake8 errors: line too long

* Small tweaks to validate_solutions.py

* Add all answers & back to using dictionary

* Using pytest for testing Project Euler solutions

- As we want to fail fast on testing solutions, we need to test using
  this script first before we use tests written by the author.
- As pytest stops testing as soon as it receives a traceback, we need to
  use pytest-subtests to tell pytest to test all the iterations for the
  function with given parameters.

* Print error messages in oneline format

* Separated answers into a separate file:

- Add custom print function to print all the error messages at the
  end of all the tests
- Let Travis skip if this failed

Co-authored-by: Christian Clauss <cclauss@me.com>
Panquesito7 pushed a commit to Panquesito7/Python that referenced this pull request May 13, 2021 (same commit list as above)