
added alert to study eval page if metrics do not exist #821

Merged

dannypeterson merged 4 commits into main from empty-study-evaluation-page on May 17, 2023

Conversation

dannypeterson (Contributor)

Added an info alert, using an if statement to check whether metrics exist for the assessment's study evaluation page.

dannypeterson marked this pull request as ready for review on May 3, 2023.
shapiromatron self-requested a review on May 8, 2023.
shapiromatron (Owner) left a comment:


I took a different approach; it turns out we already had a message, but it was broken because of a typo in the logic passed from the template. If it's OK, feel free to merge.

```diff
@@ -73,7 +73,7 @@ def grouped(metrics: Iterable) -> list[list[models.RiskOfBiasMetric]]:
         )
         context.update(
             metrics=metrics,
-            no_data=metrics == len(metrics) == 0,
+            no_data=len(metrics) == 0,
```
shapiromatron (Owner) commented on the diff:

I ended up changing this, which fixed the existing message we had. I think it was trying to compare [] == True, which must have been an error.
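
For context: in Python, `a == b == c` is a chained comparison, evaluated as `(a == b) and (b == c)`, so the old expression compared the metrics list itself to an integer and was always false. A minimal standalone sketch of the before/after behavior, using a hypothetical empty metrics list:

```python
metrics = []  # hypothetical: an empty list of metrics for the assessment

# Old expression: chained comparison, evaluated as
# (metrics == len(metrics)) and (len(metrics) == 0).
# [] == 0 is False, so no_data was always False and the alert never rendered.
old_no_data = metrics == len(metrics) == 0
print(old_no_data)  # False

# Fixed expression: directly test for an empty collection.
new_no_data = len(metrics) == 0
print(new_no_data)  # True
```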

dannypeterson merged commit 5bfc6bc into main on May 17, 2023.
dannypeterson deleted the empty-study-evaluation-page branch on May 17, 2023.