0.15.3 upgrade #65
Conversation
cache schemas and relations
This looks great.
I just did some end-to-end testing on local Spark and Databricks (including the new merge incremental strategy). The only small thing I found is that both the seed and table materializations raise a new-in-0.15 deprecation warning:

* Deprecation Warning: The materialization ("table") did not explicitly return a list of relations to add to the cache. By default the target relation will be added, but this behavior will be removed in a future version of dbt. For more information, see: https://docs.getdbt.com/v0.15/docs/creating-new-materializations#section-6-returning-relations
That's something we should resolve before shipping. Otherwise, I think this is good to go.
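For reference, the fix the linked docs describe is to have each custom materialization explicitly return the relations it created, so dbt can update its relation cache. A minimal sketch of what that looks like at the end of a materialization (the `adapter='spark'` dispatch and the `target_relation` variable name are illustrative, not taken from this PR's code):

```jinja
{% materialization table, adapter='spark' %}
  {# ... build target_relation as usual ... #}

  {# Explicitly returning the affected relations keeps dbt's cache
     accurate and silences the 0.15 deprecation warning. #}
  {{ return({'relations': [target_relation]}) }}
{% endmaterialization %}
```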
Force-pushed bfa9f60 to a24e4e3
Force-pushed a24e4e3 to 9b13d12
I must have lost those in the merge; I think I've fixed it!
✨
This was one heck of a merge!
This PR is obviously based on @SamKosky's totally awesome work in #46. Where possible, I've prioritized PRs that have been merged since it was written over the parts of #46 that did the same thing.
There's probably a significant amount of cruft left over (duplicated/vestigial methods and macros, especially). If you notice any in your review, please shout.
I basically can't rebase this, I'm sorry about that. Getting a merge to behave was hard enough!
There might be test failures; I didn't get spark-http running locally.