New test configs: where, limit, warn_if, error_if, fail_calc #3336
Conversation
Also need to update (used for
This is looking really good! I haven't had the chance for an exhaustive review, so I just dropped some quick comments where things are standing out to me.
'test.test.not_null_base_extension_id.4a9d96018d',
'test.test.not_null_base_extension_id.60bbea9027'
Do you know why the unique_ids have changed? We shouldn't be including any configs in the hash, right?
The model kwarg changed with the where config. The hash includes kwargs.
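For illustration, here is a minimal Python sketch of why rewriting a kwarg shifts the unique_id suffix; the serialization and exact hash inputs dbt uses are internal details, so the helper below is hypothetical:

import hashlib
import json

def test_hash_suffix(kwargs: dict) -> str:
    # Hypothetical: hash the serialized test kwargs and keep a short prefix,
    # similar in spirit to how the unique_id suffix is derived.
    payload = json.dumps(kwargs, sort_keys=True)
    return hashlib.md5(payload.encode("utf-8")).hexdigest()[:10]

# Wrapping the model kwarg to apply a where filter changes the suffix:
print(test_hash_suffix({"model": "ref('base_extension')", "column_name": "id"}))
print(test_hash_suffix({"model": "(select * from ref('base_extension') where id is not null) dbt_subquery",
                        "column_name": "id"}))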
Looking good! And it looks like a lot of the failing tests that remain are checking result.message against an integer, so they just need to switch over to result.failures.
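For instance, an assertion in the integration suite would change along these lines (hypothetical helper, not code from the PR):

# The failure count now lives on result.failures; result.message is a
# human-readable string rather than a number to parse.
def assert_failures(result, expected: int) -> None:
    # before: assert int(result.message) == expected
    assert result.failures == expected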
Co-authored-by: Jeremy Cohen <jeremy@fishtownanalytics.com>
Took this for a ride, it's looking really good!
- I found one thing we need to switch for config inheritance, which I didn't realize back in Feature: test config parity #3257.
- I was able to reimplement the tests in dbt_utils through a combination of where + fail_calc.
- I realize I originally opened this PR... so I'm going to mark it as review-ready, hah. Feel free to close and reopen in a new PR if you'd like.
@@ -334,10 +351,13 @@ def build_raw_sql(self) -> str:
We need to reorder "{config}{{{{ {macro}(**{kwargs_name}) }}}}" to be "{{{{ {macro}(**{kwargs_name}) }}}}{config}", so that specific configs (supplied by the user in the modifiers) override the generic configs set in the test macro.
# this is the 'raw_sql' that's used in 'render_update' and execution
# of the test macro
# config needs to be rendered last to take precedence over default configs
# set in the macro (test definition)
def build_raw_sql(self) -> str:
return (
"{{{{ {macro}(**{kwargs_name}) }}}}{config}"
).format(
macro=self.macro_name(),
config=self.construct_config(),
kwargs_name=SCHEMA_TEST_KWARGS_NAME,
)
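For context, here is roughly what that format string produces for a not_null test with a user-supplied severity; the kwargs variable name below is an assumption, not necessarily the actual value of SCHEMA_TEST_KWARGS_NAME:

# Illustration only, in plain Python:
macro = "test_not_null"
kwargs_name = "_dbt_schema_test_kwargs"   # stand-in for SCHEMA_TEST_KWARGS_NAME
config = "{{ config(severity='warn') }}"

raw_sql = "{{{{ {macro}(**{kwargs_name}) }}}}{config}".format(
    macro=macro, config=config, kwargs_name=kwargs_name,
)
# -> "{{ test_not_null(**_dbt_schema_test_kwargs) }}{{ config(severity='warn') }}"
# Because the user-supplied config() call renders after the macro body, its
# values take precedence over any defaults set inside the test macro.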
core/dbt/contracts/graph/compiled.py
other.unrendered_config.get('severity')
)

# TODO: this is unused
Yes... indeed it is. I confirmed that dbt test -m state:modified works as expected, leveraging the unrendered_config.
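A rough, method-shaped sketch of the kind of comparison this enables (not the PR's exact same_contents code): compare configs as the user wrote them, so a value driven by env_var() or var() doesn't register as a modification just because its rendered value differs between invocations.

def same_severity(self, other) -> bool:
    # Compare the unrendered config values, per the diff above.
    return (
        self.unrendered_config.get('severity') ==
        other.unrendered_config.get('severity')
    )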
error_if: str = "!= 0"

@classmethod
def same_contents(
Ah! I see now that this is the proper place for it. Nice work, this logic is much clearer.
resolves #3258, #3321
Description
Adds five new test configs:

- limit: Simple enough, templated out in the materialization. This is mostly a complement for dbt test --store-failures #3316
- where: The way I did this was unbelievably hacky—see the test cases for yourself—but it works, and in a way that should be backwards compatible with all existing schema/generic tests, without them needing to change any part of their SQL definition. (This config doesn't make sense for one-off tests.)
- warn_if, error_if: The user supplies a python-evaluable string (e.g. >=3, <5, ==0, !=0) and dbt will compare the fail count/calc against it. The default is >0. (I realize now this should probably be !=0)
- severity: A little tricky, but I actually think they work reasonably together. By default, severity: error, and dbt checks the error_if condition first; if not error, then check the warn_if condition; if the result meets neither, it passes the test. If the user sets severity: warn, dbt will skip over the error_if condition entirely and jump straight to warn_if. (A sketch of this evaluation order follows the list.)
- fail_calc: User-supplied fail_calc for tests #3321
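A minimal sketch of that evaluation order (not dbt's actual implementation; here failures stands for whatever the fail_calc query returned):

def status_from_thresholds(failures: int, severity: str = "error",
                           warn_if: str = ">0", error_if: str = "!= 0") -> str:
    # severity: warn skips the error_if check entirely
    if severity.lower() == "error" and eval(f"{failures} {error_if}"):
        return "error"
    if eval(f"{failures} {warn_if}"):
        return "warn"
    return "pass"

status_from_thresholds(0)                     # 'pass'
status_from_thresholds(4, error_if=">=5")     # 'warn'  (only the warn_if '>0' condition is met)
status_from_thresholds(7, error_if=">=5")     # 'error'
status_from_thresholds(7, severity="warn")    # 'warn'  (error_if is skipped entirely)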
TODO

- where logic in a special internal macro, so long as it's included in the schema test parsing context? (See the sketch below.)
- unique + not_null shortcut?
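Wherever that logic ends up living, the transformation itself amounts to swapping the model for a filtered subquery. A hypothetical Python helper, not the PR's actual (self-described hacky) code:

from typing import Optional

def apply_where(model_sql: str, where: Optional[str]) -> str:
    # Wrap the resolved model in a subquery so existing generic tests can keep
    # selecting from "the model" without changing their SQL.
    if not where:
        return model_sql
    return f"(select * from {model_sql} where {where}) dbt_subquery"

apply_where("my_db.my_schema.base_extension", "id is not null")
# -> "(select * from my_db.my_schema.base_extension where id is not null) dbt_subquery"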
For the future

I got stuck trying to implement relative/percentage warn_if and error_if. That's ok! That piece, while compelling, can very easily come in a later issue.

- If error_if: >5%, it's easy enough to error_if.replace('%', '/100') and compare against >5/100 instead (see the sketch below).
- The hard part is the denominator, i.e. the model's total row count (with the where condition included). At materialization time, we can access test_metadata.kwargs.model, but this is an unrendered Jinja string, and even with the as_native filter, I couldn't figure out how to "extra-render" it.
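A sketch of that deferred idea, assuming the fail ratio is simply failures divided by the model's row count (obtaining that row count at materialization time is exactly the hard part noted above):

def exceeds_relative_threshold(failures: int, total_rows: int,
                               error_if: str = "> 5%") -> bool:
    # "> 5%" -> "> 5/100", then compare the fail ratio against it.
    condition = error_if.replace('%', '/100')
    return eval(f"{failures} / {total_rows} {condition}")

exceeds_relative_threshold(8, 100)   # True  (8% > 5%)
exceeds_relative_threshold(3, 100)   # False (3% <= 5%)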
Checklist

- I have updated the CHANGELOG.md and added information about my change to the "dbt next" section.