feat: add unit tests for error states in critical workflows #1118
Conversation
Force-pushed from 44ec28f to 8931c6d
Force-pushed from da76289 to 93f1f0b
Force-pushed from 93f1f0b to 43cc2ed
I've rebased the branch on top of frontend-tests-dashboard.
Here are some suggestions on scenarios we could add; these can be done in a later PR too:

EmailRecipient
- test the invalid file/valid file scenario

SMSCredential
- test with an invalid phone number, like 80000000 (see the sketch below)
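For illustration, a rough sketch of what the invalid phone number test could look like. The component path, its accessible labels, and the error copy are all assumptions, not the actual codebase API:

```tsx
import React from 'react'
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'

// Hypothetical component; the real credential form lives elsewhere in the repo.
import SMSCredentialForm from 'components/dashboard/settings/SMSCredentialForm'

test('displays an error for an invalid phone number', async () => {
  render(<SMSCredentialForm />)

  // 80000000 has no country code, so validation should fail.
  userEvent.type(screen.getByLabelText(/phone number/i), '80000000')
  userEvent.click(screen.getByRole('button', { name: /validate/i }))

  // Assumed error copy; adjust to match the actual validation message.
  expect(await screen.findByText(/invalid phone number/i)).toBeInTheDocument()
})
```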
frontend/src/components/dashboard/create/email/tests/EmailTemplate.test.tsx
```ts
// Setup
jest.spyOn(console, 'error').mockImplementation(() => {
  // Do nothing. Mock console.error to silence expected errors
  // due to submitting invalid templates to the API
})
server.use(...mockApis(false))
renderTemplatePage()
```
Possibly the setup and teardown in these tests could be extracted into Jest before/after hooks.
I think it's a bit difficult to do so, since most tests have slightly different setup/teardown phases. For example, some tests mock console.error and some use protected email campaigns.
Alright. As for mocking console.error: if you want to do this globally, you can put it in a setup file referenced from your jest.config.ts. This is something the backend tests currently implement to silence console.log messages.
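For illustration, a minimal sketch of what that global setup could look like. The file names and config layout here are assumptions, not the repository's actual configuration:

```ts
// jest.config.ts (assumed layout)
import type { Config } from '@jest/types'

const config: Config.InitialOptions = {
  // Run this file after the test framework is installed, before each test suite.
  setupFilesAfterEnv: ['<rootDir>/jest.setup.ts'],
}

export default config
```

```ts
// jest.setup.ts (assumed name)
beforeAll(() => {
  // Silence console.log globally to keep CI output readable.
  jest.spyOn(console, 'log').mockImplementation(() => {})
})

afterAll(() => {
  jest.restoreAllMocks()
})
```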
Good point. I considered doing so, but decided against mocking console.error globally. When fixing test failures, I found that console.error statements provided useful information.
Sure. The backend tests currently implement that only for console.log, to avoid overcrowding the Travis log output.
```ts
  expect(screen.getByRole('button', { name: /next/i })).toBeInTheDocument()
})

test('displays an error if the subject is empty after sanitization', async () => {
```
We could also test for the error messages output when invalid templates are provided, such as `{{test}`, `{{test 123}}`, and `{{}}`.
Do you want to add these in? I think the reason I gave these examples is that they each output a different error message. However, if the error messages are generated on the backend and not the frontend, then we can address this in backend tests instead.
I looked into them initially, but realised that I would need to add some extra cases to the API endpoint mocks to handle them, so I put that on hold 😅
Now that you mention it, having these in the backend tests sounds like a good idea as well! If so, I guess we can postpone adding frontend tests for those cases until later?
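If those cases do come back to the frontend, a parameterized test could keep them compact. A minimal sketch, assuming the renderTemplatePage() helper used by the other tests in this file; the error copy in the patterns is an assumption, since the real messages come from the (mocked) backend:

```ts
import { screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'

test.each<[string, RegExp]>([
  ['{{test}', /unclosed/i],
  ['{{test 123}}', /invalid keyword/i],
  ['{{}}', /empty keyword/i],
])('displays an error for invalid template %s', async (template, errorPattern) => {
  // renderTemplatePage() is the setup helper from the surrounding test file.
  renderTemplatePage()

  // Type the invalid template and try to move to the next step.
  userEvent.type(screen.getByRole('textbox'), template)
  userEvent.click(screen.getByRole('button', { name: /next/i }))

  // Each invalid template should surface its own distinct error message.
  expect(await screen.findByText(errorPattern)).toBeInTheDocument()
})
```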
```ts
  expect(screen.getByText(/characters/i)).toBeInTheDocument()
})

test('next button is disabled when template is empty', async () => {
```
For tests like these, which are common across the channels, is there a way we could share and reuse them?
Good point. I could create a common test for these 2 components, but I think a cleaner option would be to extract the common code in SMSTemplate and TelegramTemplate into a separate component, and write a test for that. I'll explore both options (a sketch of the first is below).
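For illustration, the shared-test option could look something like this. The helper name and the render-function signature are assumptions:

```ts
import { screen } from '@testing-library/react'

// Hypothetical shared helper: each channel's test file calls this with its
// own render function (e.g. renderSmsTemplatePage or renderTelegramTemplatePage).
export function itDisablesNextWhenTemplateIsEmpty(
  renderTemplatePage: () => void
): void {
  test('next button is disabled when template is empty', async () => {
    renderTemplatePage()

    // With an empty template, the next button should not be clickable.
    const nextButton = await screen.findByRole('button', { name: /next/i })
    expect(nextButton).toBeDisabled()
  })
}
```

Extracting the shared markup into a single component (the route taken in #1148) avoids this indirection, since one component then only needs one test.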
I've attempted to extract the common code to a BodyTemplate component here: #1148
Maybe this can go in first, and then we can refactor the tests once the other PR is approved.
Ok, sounds good!
Make sure to set CI=true explicitly, as Amplify doesn't set it by default. CI=true makes the test runner execute all the tests exactly once and exit. Also, compile translations before running the tests.
Just a simple render test to ensure that the test infrastructure works fine.
This makes more sense as compilation errors are more important than test errors. This also matches the Amplify build configuration.
Also refactor common API endpoints shared with the Email campaign test into a separate array.
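For illustration, a sketch of the assumed shape of that shared array, using msw-style handlers (the endpoint paths and payloads are assumptions):

```ts
import { rest } from 'msw'

// Hypothetical handlers reused by both the Email and SMS campaign tests.
export const commonCampaignApis = [
  rest.get('/settings', (_req, res, ctx) =>
    res(ctx.json({ has_api_key: false }))
  ),
  rest.get('/campaigns', (_req, res, ctx) =>
    res(ctx.json({ campaigns: [], total_count: 0 }))
  ),
]

// Each test file can then spread these into its own mock list:
// server.use(...commonCampaignApis, ...channelSpecificApis)
```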
Also add assertions for testing the credential dropdown for SMS campaigns.
…paign Previously, we were selecting the SMS channel button, not the email button.
Force-pushed from e560206 to b986412
Sorry for the erased commit history -- I've rebased the branch atop develop.
* develop:
  - refactor: use shared function to initialize models (#1172)
  - chore: setup scaffolding for backend tests (#940)
  - 1.23.0
  - fix(frontend): fix frontend test flakiness (#1162)
  - fix: update successful delivery status only if error does not exist (#1150)
  - chore: upgrade dependencies (#1153)
  - feat: add unit tests for error states in critical workflows (#1118)
  - feat: support whitelisting domains through `agencies` table (#1141)
  - feat: add tests for happy paths in critical workflows (#1110)
  - fix: prevent campaign names from causing dashboard rows to overflow (#1147)
  - fix(email): Fix SendGrid fallback integration (#1026)
* develop:
  - feat: refactor msg template components; add telegram character limit (#1148)
  - refactor: use shared function to initialize models (#1172)
  - chore: setup scaffolding for backend tests (#940)
  - 1.23.0
  - fix(frontend): fix frontend test flakiness (#1162)
  - fix: update successful delivery status only if error does not exist (#1150)
  - chore: upgrade dependencies (#1153)
  - feat: add unit tests for error states in critical workflows (#1118)
  - feat: support whitelisting domains through `agencies` table (#1141)
  - feat: add tests for happy paths in critical workflows (#1110)
  - fix: prevent campaign names from causing dashboard rows to overflow (#1147)
Problem
#1110 added tests for the happy paths in critical workflows. It would be a good idea to add tests for the error paths as well to cover our bases.
Workflows:
Partially resolves issue #1079
Solution
Features:
Before & After Screenshots
There are no visual changes.
Tests
These tests have been verified on Travis and Amplify.
Deploy Notes
The aforementioned tests have been added and will run during every deployment on Travis and Amplify. If any test fails, the deployment should not go through; specifically, Amplify should not publish the new build.