- Corrected references to connectors and connections in the README. (#61)
- (Affects Redshift only) Updates the `union_zendesk_connections` macro to use a `limit 1` instead of a `limit 0` for empty tables.
  - When a table is empty, Redshift ignores explicit data casts and will materialize every column as a `varchar`. Redshift users may experience errors in downstream transformations as a consequence.
  - For each staging model, if the source table is not found, the package will create an empty table with 0 rows for non-Redshift warehouses and a table with 1 all-`null` row for Redshift destinations. The 1 row will ensure that Redshift will respect the package's datatype casts.
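  - As a rough sketch of the idea (not the package's actual macro code; the column and types here are illustrative), the generated "empty" relation looks something like:

    ```sql
    -- 0 rows on most warehouses; 1 all-null row on Redshift so the casts stick
    select
        cast(null as {{ dbt.type_string() }}) as example_column
    limit {{ 1 if target.type == 'redshift' else 0 }}
    ```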
- Moved badges at top of the README below the H1 header to be consistent with popular README formats.
PR #59 includes the following updates:
- Introduced new config variables for whether `brand` or `organization` tables are present, allowing customers to either enable or disable the respective staging and tmp models:
  - Updated `stg_zendesk__brand` (and upstream `tmp` model) with the new `using_brands` config variable.
  - Updated `stg_zendesk__organization` (and upstream `tmp` model) with the new `using_organizations` config variable.
  - Updated `stg_zendesk__organization_tag` (and upstream `tmp` model) with the new `using_organizations` config variable, as the `organization_tag` source table can be disabled in some situations, while `organization` is not. Thus anything that is disabled/enabled by `using_organization_tags` should contain both the `using_organization_tags` AND `using_organizations` variables.
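  - As a minimal sketch (assuming these variables default to `true`), both models could be disabled in your `dbt_project.yml`:

    ```yml
    vars:
      using_brands: false
      using_organizations: false
    ```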
- Updated our Buildkite model run script to ensure we test for when `using_brands` and `using_organizations` are set to either true or false.
- Added enabled config variables to `brand`, `organization`, and `organization_tag` in `src_zendesk.yml`.
- Updated the README with instructions on how to disable the `brand` and `organization` sources.
PR #58 includes the following update:
- In v0.14.0 (or v0.19.0 of the transform package), Snowflake users may have seen `when searching for a relation, dbt found an approximate match` errors when running the `stg_zendesk__group_tmp` model. The issue stemmed from the `adapter.get_relation()` logic within the `union_zendesk_connections` macro, which has now been updated to resolve the error.
PR #44 includes the following updates:
- This release supports running the package on multiple Zendesk sources at once! See the README for details on how to leverage this feature (an illustrative sketch follows below).
  - Please note: This is a Breaking Change in that we have added a new field, `source_relation`, that points to the source connector from which the record originated.
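  - As an illustrative sketch only (the variable name below follows the pattern of other Fivetran packages and is not confirmed by this changelog; consult the README for the exact configuration):

    ```yml
    vars:
      zendesk_union_schemas: ['zendesk_usa', 'zendesk_emea'] # hypothetical schema names
    ```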
- Added missing documentation for staging model columns.
PR #55 includes the following updates:
- Introduced the `stg_zendesk__audit_log` table for capturing schedule changes from Zendesk's audit log.
  - This model is disabled by default; to enable it, set the variable `using_schedule_histories` to `true` in your `dbt_project.yml` (see the sketch below).
  - While currently used for schedule tracking, this table has possible future applications, such as tracking user changes.
- Updated the `stg_zendesk__schedule_holidays` model to allow users to disable holiday processing (while still using schedules) by setting `using_holidays` to `false`.
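  - A minimal sketch combining the two variables above in `dbt_project.yml`:

    ```yml
    vars:
      using_schedule_histories: true # opt in to stg_zendesk__audit_log
      using_holidays: false          # keep schedules but skip holiday processing
    ```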
- Added field-level documentation for the `stg_zendesk__audit_log` table.
- Added seed data for `audit_log` to enhance integration testing capabilities.
PR #53 includes the following updates:
- Added field `_fivetran_deleted` to the following models for use downstream:
  - `stg_zendesk__ticket`
  - `stg_zendesk__ticket_comment`
  - `stg_zendesk__user`
- If you have already added `_fivetran_deleted` as a passthrough column using the `zendesk__ticket_passthrough_columns` or `zendesk__user_passthrough_columns` vars, you will need to remove or alias this field within the variable to avoid duplicate column errors (see the sketch below).
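  - For example (the alias below is illustrative), aliasing the field in `dbt_project.yml` avoids the clash:

    ```yml
    vars:
      zendesk__user_passthrough_columns:
        - name: "_fivetran_deleted"
          alias: "user_fivetran_deleted" # any name that won't collide with the new built-in column
    ```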
- Updated documentation to include `_fivetran_deleted`.
PR #49 includes the following updates:
- Adds passthrough column support for `USER` and `ORGANIZATION`.
  - Using the new `zendesk__user_passthrough_columns` and `zendesk__organization_passthrough_columns` variables, you can include custom columns from these source tables in their respective staging models. See the README for more details on how to configure.
- Also updated the format of the pre-existing `TICKET` passthrough column variable, `zendesk__ticket_passthrough_columns`, to align with the newly added passthrough variables delineated above.
  - Previously, you could only provide a list of custom fields to be included in `stg_zendesk__ticket`. Now, you have the option to provide an `alias` and `transform_sql` clause to be applied to each field (see the README for more details and the sketch below).
  - Note: the package is and will continue to be backwards compatible with the old list format.
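  - As a sketch (field names are placeholders), the new format looks like:

    ```yml
    vars:
      zendesk__ticket_passthrough_columns:
        - name: "custom_field_1" # bare entries remain supported, per the old list format
        - name: "custom_field_2"
          alias: "field_two"
          transform_sql: "cast(field_two as string)" # use your warehouse's string type
    ```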
PR #48 includes the following updates:
- Adds the `phone` field to `stg_zendesk__user` and ensures it is a `string` if the column is not found in your source data.
- Adds documentation for `user` fields that were previously missing yml descriptions.
PR #46 includes the following updates:
- Updated the following staging models to leverage the `{{ dbt.type_timestamp() }}` macro on timestamp fields in order to ensure a timestamp with no timezone is used in downstream models. This update will cause timestamps to be converted to have no timezone; if records were reported as timezone-aware timestamps before, this will result in converted timestamp records. An illustrative cast pattern follows the list.
  - `stg_zendesk__ticket`
  - `stg_zendesk__ticket_comment`
  - `stg_zendesk__ticket_field_history`
  - `stg_zendesk__ticket_form_history`
  - `stg_zendesk__ticket_schedule`
  - `stg_zendesk__user`
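  - Illustrative pattern (not the exact model code) of how such a cast renders in a staging model:

    ```sql
    select
        cast(created_at as {{ dbt.type_timestamp() }}) as created_at
    from {{ ref('stg_zendesk__ticket_tmp') }}
    ```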
- Updated "Zendesk" references within the README to now refer to "Zendesk Support" in order to more accurately reflect the name of the Fivetran Zendesk Support Connector.
PR #43 introduces the following updates:
- Added the `internal_user_criteria` variable, which can be used to mark internal users whose `USER.role` may have changed from `agent` to `end-user` after they left your organization. This variable accepts SQL that may reference any non-custom field in `USER`, and it will be wrapped in a `case when` statement in the `stg_zendesk__user` model.
  - Example usage:
    ```yml
    # dbt_project.yml
    vars:
      zendesk_source:
        internal_user_criteria: "lower(email) like '%@fivetran.com' or external_id = '12345' or name in ('Garrett', 'Alfredo')" # can reference any non-custom field in USER
    ```
- Output: In `stg_zendesk__user`, users who match your criteria and have a role of `end-user` will have their role switched to `agent`. This will ensure that downstream SLA metrics are appropriately calculated.
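  - Rough shape of the generated logic (illustrative; not the model's exact SQL):

    ```sql
    case
        when (lower(email) like '%@fivetran.com') and role = 'end-user'
            then 'agent'
        else role
    end as role
    ```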
- Updated the way we dynamically disable sources. Previously, we used a custom `meta.is_enabled` flag, but, since we added this, dbt-core introduced a native `config.enabled` attribute. We have opted to use the dbt-native config instead (see the sketch after this list).
- Updated the pull request templates.
- Included auto-releaser GitHub Actions workflow to automate future releases.
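A sketch of the dbt-native approach (an illustrative `src_zendesk.yml` snippet; the actual file contents may differ):

```yml
sources:
  - name: zendesk
    tables:
      - name: schedule
        config:
          enabled: "{{ var('using_schedules', true) }}"
```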
PR #42 introduces the following updates:
- We have changed the identifier logic in `src_zendesk.yml` to account for `group` being both a Snowflake reserved word and a source table. Snowflake users will want to execute a `dbt run --full-refresh` before using the new version of the package.
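  - If your `group` source table goes by a different name, the `zendesk_[source_table_name]_identifier` variables (see the #28 notes below) can point the package at it. A sketch (the table name here is hypothetical):

    ```yml
    vars:
      zendesk_group_identifier: "group_data"
    ```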
- Updated our `tmp` models to utilize the `dbt_utils.star` macro rather than `select *`. This removes Snowflake issues that arise when a source's dimensions change.
- Updates to the seed files and seed file configurations for the package integration tests to ensure updates are properly tested.
- Adding the `schedule_holiday` source table so that downstream models that involve business minutes calculations will accurately take holiday time into account. This staging model may be disabled by setting `using_schedules` to `false`. (#92)
- Incorporated the new `fivetran_utils.drop_schemas_automation` macro into the end of each Buildkite integration test job. (#37)
- Updated the pull request templates. (#37)
- Updated the dbt-utils dispatch within the `stg_zendesk__ticket_schedule_tmp` model to properly dispatch `dbt` as opposed to `dbt_utils` for the cross-db macros. (#32)
PR #31 includes the following breaking changes:
- Dispatch update for dbt-utils to dbt-core cross-db macros migration. Specifically, `{{ dbt_utils.<macro> }}` calls have been updated to `{{ dbt.<macro> }}` for the below macros:
  - `any_value`
  - `bool_or`
  - `cast_bool_to_text`
  - `concat`
  - `date_trunc`
  - `dateadd`
  - `datediff`
  - `escape_single_quotes`
  - `except`
  - `hash`
  - `intersect`
  - `last_day`
  - `length`
  - `listagg`
  - `position`
  - `replace`
  - `right`
  - `safe_cast`
  - `split_part`
  - `string_literal`
  - `type_bigint`
  - `type_float`
  - `type_int`
  - `type_numeric`
  - `type_string`
  - `type_timestamp`
  - `array_append`
  - `array_concat`
  - `array_construct`
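  - Illustrative before/after of the dispatch change (using one of the macros above):

    ```sql
    -- before: dispatched through the dbt-utils package
    {{ dbt_utils.type_timestamp() }}
    -- after: dispatched through dbt-core's cross-db macros
    {{ dbt.type_timestamp() }}
    ```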
- For the `current_timestamp` and `current_timestamp_in_utc` macros, the dispatch AND the macro names have been updated to the below, respectively:
  - `dbt.current_timestamp_backcompat`
  - `dbt.current_timestamp_in_utc_backcompat`
- Dependencies on `fivetran/fivetran_utils` have been upgraded, previously `[">=0.3.0", "<0.4.0"]`, now `[">=0.4.0", "<0.5.0"]`.
🚨 This includes Breaking Changes! 🚨
- Updated README documentation for easier navigation and dbt package setup (#28).
- Included the `zendesk_[source_table_name]_identifier` variables for easier flexibility of the package models to refer to differently named source tables (#28).
- Databricks compatibility 🧱 (#29)
- By default, this package now builds the Zendesk staging models within a schema titled (`<target_schema>` + `_zendesk_source`) in your target database. This was previously (`<target_schema>` + `_zendesk_staging`), but we have changed it to maintain consistency with our other packages. See the README for instructions on how to configure the build schema differently.
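  - For example (a sketch of the standard dbt approach; the schema suffix is illustrative), in `dbt_project.yml`:

    ```yml
    models:
      zendesk_source:
        +schema: my_new_schema_name
    ```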
- The `stg_zendesk__ticket` table now allows for your custom passthrough columns to be added via the `zendesk__ticket_passthrough_columns` variable. You can add your passthrough columns as a list within the variable in your project configuration. (#27)
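  - For example (column names are placeholders):

    ```yml
    vars:
      zendesk__ticket_passthrough_columns: ['custom_field_1', 'custom_field_2']
    ```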
- Incorporates the `daylight_time` and `time_zone` source tables into the package. In the transform package, these tables are used to more precisely calculate business hour metrics (#62).
🎉 dbt v1.0.0 Compatibility 🎉
- Adjusts the `require-dbt-version` to now be within the range [">=1.0.0", "<2.0.0"]. Additionally, the package has been updated for dbt v1.0.0 compatibility. If you are using a dbt version <1.0.0, you will need to upgrade in order to leverage the latest version of the package.
  - For help upgrading your package, we recommend reviewing this GitHub repo's release notes on what changes have been implemented since your last upgrade.
  - For help upgrading your dbt project to dbt v1.0.0, we recommend reviewing dbt Labs' Upgrading to 1.0.0 docs for more details on what changes must be made.
- Upgrades the package dependency to refer to the latest `dbt_fivetran_utils`. The latest `dbt_fivetran_utils` package also has a dependency on `dbt_utils` [">=0.8.0", "<0.9.0"].
  - Please note, if you are installing a version of `dbt_utils` in your `packages.yml` that is not in the range above, you will encounter a package dependency error (see the example below).
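  - For example, a compatible pin in `packages.yml`:

    ```yml
    packages:
      - package: dbt-labs/dbt_utils
        version: [">=0.8.0", "<0.9.0"]
    ```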
- Adjusted timestamp fields within staging models to explicitly cast the data type as `timestamp without time zone`. This fixes a Redshift error where downstream `datediff` and `dateadd` functions would result in an error if the timestamp fields are synced as `timestamp_tz`. (#23)
- @juanbriones (#55)
Refer to the relevant release notes on the GitHub repository for specific details on previous releases. Thank you!