There are a few UI bugs which are currently contributing to the e2e tests not being reliable on CI. Below is a list of the issues I have seen (so far):
Running things locally via helm + k8s, we occasionally see the `mariadb` container crash, which in turn causes HTTP 50x responses from the `grafana` and `oncall-engine` containers. Solving this root issue will fix these HTTP 50xs:
```
Events:
  Type     Reason     Age                   From     Message
  ----     ------     ----                  ----     -------
  Normal   Created    18m                   kubelet  Created container mariadb
  Normal   Started    18m                   kubelet  Started container mariadb
  Warning  Unhealthy  7m22s (x11 over 15m)  kubelet  Liveness probe failed: command "/bin/bash -ec password_aux=\"${MARIADB_ROOT_PASSWORD:-}\"\nif [[ -f \"${MARIADB_ROOT_PASSWORD_FILE:-}\" ]]; then\n password_aux=$(cat \"$MARIADB_ROOT_PASSWORD_FILE\")\nfi\nmysqladmin status -uroot -p\"${password_aux}\"\n" timed out
  Warning  Unhealthy  2m42s (x18 over 15m)  kubelet  Readiness probe failed: command "/bin/bash -ec password_aux=\"${MARIADB_ROOT_PASSWORD:-}\"\nif [[ -f \"${MARIADB_ROOT_PASSWORD_FILE:-}\" ]]; then\n password_aux=$(cat \"$MARIADB_ROOT_PASSWORD_FILE\")\nfi\nmysqladmin status -uroot -p\"${password_aux}\"\n" timed out
```
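Note that the probe failures above are timeouts of the `mysqladmin status` check rather than a hard crash, so one mitigation is to loosen the probe timing. A minimal sketch of a values override, assuming the chart exposes Bitnami-style `mariadb` probe settings (the key paths are an assumption; check the chart's `values.yaml` before using):

```yaml
# hypothetical values override for the mariadb subchart
mariadb:
  primary:
    livenessProbe:
      timeoutSeconds: 5    # the 1s default is too tight when the node is CPU-starved
      failureThreshold: 6  # tolerate a few slow responses before restarting the pod
    readinessProbe:
      timeoutSeconds: 5
      failureThreshold: 6
```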
Performance Issue in GSelect component when searching #1628 - this one is preventing us from parallelizing the tests, due to the number of API requests that the UI makes combined with the very limited CPU resources that the GitHub Actions CI host provides. Parallelization will become important once we start to amass more and more e2e tests; until then, the tests have to run serially, as sketched below.
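A minimal sketch of forcing serial execution, assuming Playwright Test is the runner:

```ts
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // run tests one at a time for now: the GSelect search issue (#1628) causes a burst
  // of API requests per test, which the small GitHub Actions runners cannot absorb
  workers: 1,
});
```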
I have seen a handful of cases where the tests fail in `globalSetup.ts`, mostly related to a failing assertion when configuring the plugin. We should increase the timeout for the "plugin configured" assertion from the default of 5s to ~25s, because it can sometimes take a bit longer for the backend sync to finish (closed in re-enable e2e UI tests on CI #1961):
```ts
// wait for the "Connected to OnCall" message to know that everything is properly configured
await expect(page.getByTestId('status-message-block')).toHaveText(/Connected to OnCall.*/);
```
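With the proposed timeout bump, the assertion would look something like this (a sketch; `25_000` ms is the ~25s ballpark from above, passed via Playwright's per-assertion `timeout` option):

```ts
// give the backend sync up to ~25s instead of the 5s default before failing setup
await expect(page.getByTestId('status-message-block')).toHaveText(/Connected to OnCall.*/, {
  timeout: 25_000,
});
```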
#1692 is still open. This PR is not an ideal approach, but it's a quick win while we wait for that issue to be resolved.

By retrying failing tests up to 3 times, we _should_ be fine to re-enable these on CI. If a test is failing > 3 times, there's likely a legitimate issue occurring.
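For context, this is roughly how the retry policy looks in the Playwright config (a sketch; gating on `process.env.CI` is illustrative, not necessarily what this PR does):

```ts
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // retry flaky tests up to 3 times on CI; fail fast locally so real bugs surface immediately
  retries: process.env.CI ? 3 : 0,
});
```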