Add basic code generation challenge #98

Merged

Conversation

waynehamadi
Contributor

Background

Changes

PR Quality Checklist

  • I have run the following commands against my code to ensure it passes our linters:
    black .
    isort .
    mypy .
    autoflake --remove-all-unused-imports --recursive --ignore-init-module-imports --ignore-pass-after-docstring --in-place agbenchmark

@ai-maintainer (bot) left a comment

AI-Maintainer Review for PR - Add basic code generation challenge

Title and Description ❌

The title and description need more detail

The title "Add basic code generation challenge" gives a general idea of what the PR is about, but it could be more specific. The description is empty, which leaves a lot of questions about the purpose and context of the changes. Please provide more detail in both the title and description, including the motivation behind the changes, any relevant context or prior discussions, and how the changes align with the project's overall direction.

Scope of Changes ✅

The changes are narrowly focused

The changes in this PR are narrowly focused on adding a new code generation challenge. The changes include the addition of a copy_artifacts_into_workspace function in the setup_challenge method, the creation of new files in the challenges/code/d4/artifacts_out directory, and the addition of a test.py file in the challenges/code/d4/hidden_files directory. There are no unrelated or "extra" changes in the diff.

Testing ❌

Testing details are missing

The description does not provide any information about how the changes were tested. Please include details about the testing approach, the test cases used, the expected outcomes of the tests, and any test frameworks or tools employed.

Docstrings ❌

Docstrings are missing for some functions and methods

The following functions and methods need docstrings to describe their behavior, arguments, and return values:

  1. setup_challenge method in challenge.py module.
  2. two_sum function in code.py module.
  3. test_two_sum function in test.py module.

Suggested Changes

  1. Please provide a more detailed title and description for the PR.
  2. Include information about how the changes were tested in the PR description.
  3. Add docstrings to the setup_challenge method, two_sum function, and test_two_sum function.

Thank you for your contribution!

Reviewed with AI Maintainer

"task": "Create a two_sum function. Given an array of integers, return indices of the two numbers such that they add up to a specific target. You may assume that each input would have exactly one solution, and you may not use the same element twice. Example: Given nums = [2, 7, 11, 15], target = 9, Because nums[0] + nums[1] = 2 + 7 = 9, return [0, 1].",
"dependencies": ["TestWriteFile"],
"ground": {
"answer": "[0, 1] [2, 5] [0, 3]",

Contributor

I think it would be good to be more descriptive here. A heuristic for what the "answer" field should be: a quick sentence you can read to understand what the expected response is, without needing to read anything else.
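
For reference, here is a minimal sketch of the kind of solution the task above asks for, written with the docstring the bot requested. This is an illustration only, not necessarily the code in artifacts_out/code.py; the hash-map approach shown is just one common way to solve two-sum.

from typing import List


def two_sum(nums: List[int], target: int) -> List[int]:
    """Return indices of the two numbers in nums that add up to target.

    Assumes exactly one solution exists and the same element is not used twice.
    """
    seen = {}  # maps each value to the index where it was first seen
    for i, n in enumerate(nums):
        complement = target - n
        if complement in seen:
            return [seen[complement], i]
        seen[n] = i
    return []  # not reached when the input guarantees a solution


# Example from the task: nums = [2, 7, 11, 15], target = 9 -> [0, 1]
print(two_sum([2, 7, 11, 15], 9))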

@@ -0,0 +1,18 @@
{
"name": "TestBasicCodeGeneration",
"category": ["code"],

Contributor

Can you add a mark here for "generate" and create another mark called "iterate", and add them to pyproject.toml? This way the code category stays solid, but these markers can be used globally (also for other challenges, such as writing).
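
For illustration, one way the proposed marks could be wired up. The comment asks for them to be registered in pyproject.toml; the conftest.py hook below is an equivalent sketch, and the test name and mark descriptions are assumptions rather than the actual agbenchmark code.

# conftest.py (sketch) -- the marks could equally be registered in
# pyproject.toml under [tool.pytest.ini_options] "markers", as suggested above.
import pytest


def pytest_configure(config):
    # Registering the marks keeps pytest from warning about unknown markers
    # and makes them usable across all challenge categories.
    config.addinivalue_line("markers", "generate: challenge asks the agent to generate new output")
    config.addinivalue_line("markers", "iterate: challenge asks the agent to iterate on existing output")


# Hypothetical usage on a challenge test, alongside the existing code category mark:
@pytest.mark.code
@pytest.mark.generate
def test_basic_code_generation() -> None:
    ...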

"type": "execute_python_code"
},
"info": {
"difficulty": "basic",

Contributor

This seems to me to be maybe a novice-level challenge rather than basic, but this is a nitpick; if you think it's basic, that's fine.

# We copy them in the workspace to make it easy to import the code produced by the agent

copy_artifacts_into_workspace(
    config["workspace"], "hidden_files", self.CHALLENGE_LOCATION
)

Contributor

Can you write up what these hidden_files are in the README? It doesn't have to be structured, but for my own understanding: what is the difference between these and artifacts_out?
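
For context, a rough sketch of what a helper like copy_artifacts_into_workspace could do, inferred only from the call site quoted above; the real implementation in agbenchmark and the exact shape of CHALLENGE_LOCATION may differ. Under this reading, the hidden_files (the checker's test.py) are copied into the workspace so they can import the agent-produced code, as the inline comment in the diff notes, while artifacts_out presumably holds the reference outputs; the README clarification requested above would confirm that distinction.

import shutil
from pathlib import Path


def copy_artifacts_into_workspace(workspace: str, artifact_folder_name: str, challenge_location: str) -> None:
    # Copy every file from e.g. <challenge dir>/hidden_files into the agent's
    # workspace, so the hidden test ends up next to the code the agent produced.
    source_dir = Path(challenge_location).parent / artifact_folder_name
    if not source_dir.is_dir():
        return
    for file_path in source_dir.iterdir():
        if file_path.is_file():
            shutil.copy(file_path, workspace)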

@waynehamadi marked this pull request as draft on July 14, 2023 17:01
@waynehamadi force-pushed the add-basic-code-challenge branch 2 times, most recently from 3fc1a95 to b89fdf2 on July 14, 2023 17:07
Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
@waynehamadi force-pushed the add-basic-code-challenge branch from b89fdf2 to 1c43994 on July 14, 2023 17:07
@waynehamadi marked this pull request as ready for review on July 14, 2023 17:16

@ai-maintainer (bot) left a comment

AI-Maintainer Review for PR - Add basic code generation challenge

Title and Description 👍

The title is clear and concise

The title of the pull request, "Add basic code generation challenge," provides a clear and concise overview of the purpose of the changes. However, the description of the pull request is missing. It would be beneficial to have a description that provides more context and explains the rationale behind adding the code generation challenge. This would help reviewers and future contributors understand the motivation behind the changes.

Scope of Changes 👍

The changes are narrowly focused

The changes in this pull request are narrowly focused on adding a basic code generation challenge. The diff only includes modifications related to this specific challenge, such as adding new files, modifying existing code, and updating documentation related to the challenge. There are no unrelated or "extra" changes present in the diff.

Testing ⚠️

Testing details are missing

The description of the pull request does not explicitly describe how the author tested the changes. It only includes a PR Quality Checklist that mentions running various commands against the code to ensure it passes linters. While running linters is a good practice, it would be beneficial for the author to provide more information about the testing process. This could include details on any additional tests performed, such as unit tests, integration tests, or manual testing.

Docstrings ⚠️

Docstrings are missing for some functions

The following functions, classes, or methods do not have docstrings:
  • setup_challenge in agbenchmark/challenge.py (line 46)
  • test_method in agbenchmark/challenge.py (line 52)
  • two_sum in agbenchmark/challenges/code/d4/artifacts_out/code.py (line 1)
  • test_two_sum in agbenchmark/challenges/code/d4/hidden_files/test.py (line 5)

These functions, classes, or methods should have docstrings added to describe their behavior, arguments, and return values.

Suggested Changes

  • Please add a detailed description to the pull request explaining the rationale behind the changes and how you tested them.
  • Add docstrings to the setup_challenge, test_method, two_sum, and test_two_sum functions to describe their behavior, arguments, and return values.

Reviewed with AI Maintainer
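
To make the remaining docstring suggestion concrete, here is a sketch of what hidden_files/test.py could contain. It assumes the agent's solution is saved as code.py in the workspace, and the inputs are invented so the printed output matches the ground answer "[0, 1] [2, 5] [0, 3]" quoted earlier; they are not necessarily the cases used in the PR.

from code import two_sum  # code.py is the file the agent is asked to produce


def test_two_sum(nums, target, expected):
    """Run two_sum on one case, print the result, and check it against expected."""
    result = two_sum(nums, target)
    print(result)
    assert result == expected, f"expected {expected}, got {result}"


if __name__ == "__main__":
    # Hypothetical inputs chosen so the printed lines match the ground answer
    # "[0, 1] [2, 5] [0, 3]"; the PR's actual test cases may differ.
    test_two_sum([2, 7, 11, 15], 9, [0, 1])
    test_two_sum([2, 7, 11, 15, 12, 13], 24, [2, 5])
    test_two_sum([3, 5, 6, 7], 10, [0, 3])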

@SilenNaihin merged commit a9702e4 into Significant-Gravitas:master on Jul 14, 2023