[exporter/clickhouse] Add create_schema option to config #32282

Merged
merged 5 commits into from Jun 3, 2024
Changes from 3 commits
27 changes: 27 additions & 0 deletions .chloggen/clickhouse-add-create-schema-option.yaml
@@ -0,0 +1,27 @@
# Use this changelog template to create an entry for release notes.

# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
change_type: enhancement

# The name of the component, or a single word describing the area of concern, (e.g. filelogreceiver)
component: clickhouseexporter

# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
note: "Add `create_schema` option to ClickHouse exporter"

# Mandatory: One or more tracking issues related to the change. You can use the PR number here if no issue exists.
issues: [32282]

# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
# Use pipe (|) for multiline entries.
subtext: The new `create_schema` option allows disabling the default DDL so users can manage their own schema.

# If your change doesn't affect end users or the exported elements of any package,
# you should instead start your pull request title with [chore] or use the "Skip Changelog" label.
# Optional: The change log or logs in which this entry should be included.
# e.g. '[user]' or '[user, api]'
# Include 'user' if the change is relevant to end users.
# Include 'api' if there is a change to a library API.
# Default: '[user]'
change_logs: []
17 changes: 16 additions & 1 deletion exporter/clickhouseexporter/README.md
@@ -279,10 +279,11 @@ Connection options:

- `username` (default = ): The authentication username.
- `password` (default = ): The authentication password.
- `ttl_days` (default = 0): **Deprecated: Use 'ttl' instead.** The data time-to-live in days. A value of 0 means no TTL.
- `ttl` (default = 0): The data time-to-live, e.g. 30m, 48h. A value of 0 means no TTL.
- `database` (default = otel): The database name.
- `connection_params` (default = {}): Additional connection parameters, provided as a map.
- `create_schema` (default = true): When set to true, will run DDL to create the database and tables. (See [schema management](#schema-management))

ClickHouse tables:

@@ -321,6 +322,19 @@ Processing:
The exporter supports TLS. To enable TLS, you need to specify the `secure=true` query parameter in the `endpoint` URL or
use the `https` scheme.

## Schema management

By default the exporter will create the database and tables under the names defined in the config. This is fine for simple deployments, but for production workloads, it is recommended that you manage your own schema by setting `create_schema` to `false` in the config.
This prevents each exporter process from racing to create the database and tables, and makes it easier to upgrade the exporter in the future.

In this mode, the only SQL sent to your server will be for `INSERT` statements.

The default DDL used by the exporter can be found in `example/default_ddl`.
Be sure to customize the indexes, TTL, and partitioning to fit your deployment.
Column names and types must match the exporter's `INSERT` statements to preserve compatibility; beyond that, you can create whatever kind of table you want.
See [ClickHouse's LogHouse](https://clickhouse.com/blog/building-a-logging-platform-with-clickhouse-and-saving-millions-over-datadog#schema) as an example of this flexibility.
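
For example, a minimal managed-schema config might look like this (a sketch only; the endpoint and database values match the example below, and the DDL from `example/default_ddl` is assumed to have been applied out of band):

```yaml
exporters:
  clickhouse:
    endpoint: tcp://127.0.0.1:9000?dial_timeout=10s&compress=lz4
    database: otel
    create_schema: false
```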

## Example

This example shows how to configure the exporter to send data to a ClickHouse server.
@@ -339,6 +353,7 @@ exporters:
    endpoint: tcp://127.0.0.1:9000?dial_timeout=10s&compress=lz4
    database: otel
    ttl: 72h
    create_schema: true
    logs_table_name: otel_logs
    traces_table_name: otel_traces
    metrics_table_name: otel_metrics
11 changes: 11 additions & 0 deletions exporter/clickhouseexporter/config.go
@@ -47,6 +47,8 @@ type Config struct {
	TableEngine TableEngine `mapstructure:"table_engine"`
	// ClusterName if set will append `ON CLUSTER` with the provided name when creating tables.
	ClusterName string `mapstructure:"cluster_name"`
	// CreateSchema if set to true will run the DDL for creating the database and tables. Default is true.
	CreateSchema *bool `mapstructure:"create_schema"`
}

// TableEngine defines the ENGINE string value when creating the table.
@@ -147,6 +149,15 @@ func (cfg *Config) buildDB(database string) (*sql.DB, error) {
	return conn, nil
}

// ShouldCreateSchema returns true if the exporter should run the DDL for creating database/tables.
func (cfg *Config) ShouldCreateSchema() bool {
	if cfg.CreateSchema == nil {
		return true // default to true
	}

	return *cfg.CreateSchema
}

// TableEngineString generates the ENGINE string.
func (cfg *Config) TableEngineString() string {
	engine := cfg.TableEngine.Name
44 changes: 44 additions & 0 deletions exporter/clickhouseexporter/config_test.go
@@ -33,6 +33,7 @@ func TestLoadConfig(t *testing.T) {
	defaultCfg := createDefaultConfig()
	defaultCfg.(*Config).Endpoint = defaultEndpoint

	createSchema := true
	storageID := component.MustNewIDWithName("file_storage", "clickhouse")

	tests := []struct {
@@ -55,6 +56,7 @@
				LogsTableName:    "otel_logs",
				TracesTableName:  "otel_traces",
				MetricsTableName: "otel_metrics",
				CreateSchema:     &createSchema,
				TimeoutSettings: exporterhelper.TimeoutSettings{
					Timeout: 5 * time.Second,
				},
@@ -275,6 +277,48 @@ func TestConfig_buildDSN(t *testing.T) {
	}
}

func TestShouldCreateSchema(t *testing.T) {
	t.Parallel()

	createSchemaTrue := true
	createSchemaFalse := false

	caseDefault := createDefaultConfig().(*Config)
	caseCreateSchemaTrue := createDefaultConfig().(*Config)
	caseCreateSchemaTrue.CreateSchema = &createSchemaTrue
	caseCreateSchemaFalse := createDefaultConfig().(*Config)
	caseCreateSchemaFalse.CreateSchema = &createSchemaFalse

	tests := []struct {
		name     string
		input    *Config
		expected bool
	}{
		{
			name:     "default",
			input:    caseDefault,
			expected: true,
		},
		{
			name:     "true",
			input:    caseCreateSchemaTrue,
			expected: true,
		},
		{
			name:     "false",
			input:    caseCreateSchemaFalse,
			expected: false,
		},
	}

	for _, tt := range tests {
		t.Run(fmt.Sprintf("ShouldCreateSchema case %s", tt.name), func(t *testing.T) {
			assert.NoError(t, component.ValidateConfig(tt.input))
			assert.Equal(t, tt.expected, tt.input.ShouldCreateSchema())
		})
	}
}

func TestTableEngineConfigParsing(t *testing.T) {
	t.Parallel()
	cm, err := confmaptest.LoadConf(filepath.Join("testdata", "config.yaml"))
3 changes: 3 additions & 0 deletions exporter/clickhouseexporter/example/default_ddl/create_database.sql
@@ -0,0 +1,3 @@
-- Default database DDL (uses "default" by default, but this is an example for a non-default database)

CREATE DATABASE IF NOT EXISTS otel;
42 changes: 42 additions & 0 deletions exporter/clickhouseexporter/example/default_ddl/histogram_metrics.sql
@@ -0,0 +1,42 @@
-- Default Histogram metrics table DDL

CREATE TABLE IF NOT EXISTS otel_metrics_histogram (
Contributor:
I think we need a unit test to verify that the table DDL generated by the code is the same as these example files.

It would keep the examples from going stale.

Member Author:

I considered reading the template strings from a file to avoid this completely, but ultimately this DDL is just an example.
As long as the column names and types are the same, the inserts should run correctly.

If we were to add unit tests then I would prefer to just read the SQL in from a file, or maybe embed it in the code at build time, or extract them into individual .go files. Let me know if any of this sounds like a better solution and I will update the PR. 👍
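
A sketch of the embed approach (hypothetical, not part of this PR; it assumes the `.sql` files stay under `example/default_ddl` relative to the package):

```go
package clickhouseexporter

import (
	_ "embed"
)

// Hypothetical: compile the example DDL into the binary so a unit test
// can compare it against the CREATE TABLE statements the exporter generates.
//
//go:embed example/default_ddl/logs.sql
var exampleLogsDDL string
```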

ResourceAttributes Map(LowCardinality(String), String) CODEC(ZSTD(1)),
ResourceSchemaUrl String CODEC(ZSTD(1)),
ScopeName String CODEC(ZSTD(1)),
ScopeVersion String CODEC(ZSTD(1)),
ScopeAttributes Map(LowCardinality(String), String) CODEC(ZSTD(1)),
ScopeDroppedAttrCount UInt32 CODEC(ZSTD(1)),
ScopeSchemaUrl String CODEC(ZSTD(1)),
ServiceName LowCardinality(String) CODEC(ZSTD(1)),
MetricName String CODEC(ZSTD(1)),
MetricDescription String CODEC(ZSTD(1)),
MetricUnit String CODEC(ZSTD(1)),
Attributes Map(LowCardinality(String), String) CODEC(ZSTD(1)),
StartTimeUnix DateTime64(9) CODEC(Delta, ZSTD(1)),
TimeUnix DateTime64(9) CODEC(Delta, ZSTD(1)),
Count UInt64 CODEC(Delta, ZSTD(1)),
Sum Float64 CODEC(ZSTD(1)),
BucketCounts Array(UInt64) CODEC(ZSTD(1)),
ExplicitBounds Array(Float64) CODEC(ZSTD(1)),
Exemplars Nested (
FilteredAttributes Map(LowCardinality(String), String),
TimeUnix DateTime64(9),
Value Float64,
SpanId String,
TraceId String
) CODEC(ZSTD(1)),
Flags UInt32 CODEC(ZSTD(1)),
Min Float64 CODEC(ZSTD(1)),
Max Float64 CODEC(ZSTD(1)),
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
) ENGINE = MergeTree()
TTL toDateTime("TimeUnix") + toIntervalDay(180)
PARTITION BY toDate(TimeUnix)
ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
SETTINGS index_granularity=8192, ttl_only_drop_parts = 1;
31 changes: 31 additions & 0 deletions exporter/clickhouseexporter/example/default_ddl/logs.sql
@@ -0,0 +1,31 @@
-- Default Logs table DDL

CREATE TABLE IF NOT EXISTS otel_logs (
Timestamp DateTime64(9) CODEC(Delta, ZSTD(1)),
TraceId String CODEC(ZSTD(1)),
SpanId String CODEC(ZSTD(1)),
TraceFlags UInt32 CODEC(ZSTD(1)),
SeverityText LowCardinality(String) CODEC(ZSTD(1)),
SeverityNumber Int32 CODEC(ZSTD(1)),
ServiceName LowCardinality(String) CODEC(ZSTD(1)),
Body String CODEC(ZSTD(1)),
ResourceSchemaUrl String CODEC(ZSTD(1)),
ResourceAttributes Map(LowCardinality(String), String) CODEC(ZSTD(1)),
ScopeSchemaUrl String CODEC(ZSTD(1)),
ScopeName String CODEC(ZSTD(1)),
ScopeVersion String CODEC(ZSTD(1)),
ScopeAttributes Map(LowCardinality(String), String) CODEC(ZSTD(1)),
LogAttributes Map(LowCardinality(String), String) CODEC(ZSTD(1)),
INDEX idx_trace_id TraceId TYPE bloom_filter(0.001) GRANULARITY 1,
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_log_attr_key mapKeys(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_log_attr_value mapValues(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_body Body TYPE tokenbf_v1(32768, 3, 0) GRANULARITY 1
) ENGINE = MergeTree()
TTL toDateTime("Timestamp") + toIntervalDay(180)
PARTITION BY toDate(Timestamp)
ORDER BY (ServiceName, SeverityText, toUnixTimestamp(Timestamp), TraceId)
SETTINGS index_granularity=8192, ttl_only_drop_parts = 1;
39 changes: 39 additions & 0 deletions exporter/clickhouseexporter/example/default_ddl/sum_metrics.sql
@@ -0,0 +1,39 @@
-- Default Sum metrics table DDL

CREATE TABLE IF NOT EXISTS otel_metrics_sum (
ResourceAttributes Map(LowCardinality(String), String) CODEC(ZSTD(1)),
ResourceSchemaUrl String CODEC(ZSTD(1)),
ScopeName String CODEC(ZSTD(1)),
ScopeVersion String CODEC(ZSTD(1)),
ScopeAttributes Map(LowCardinality(String), String) CODEC(ZSTD(1)),
ScopeDroppedAttrCount UInt32 CODEC(ZSTD(1)),
ScopeSchemaUrl String CODEC(ZSTD(1)),
ServiceName LowCardinality(String) CODEC(ZSTD(1)),
MetricName String CODEC(ZSTD(1)),
MetricDescription String CODEC(ZSTD(1)),
MetricUnit String CODEC(ZSTD(1)),
Attributes Map(LowCardinality(String), String) CODEC(ZSTD(1)),
StartTimeUnix DateTime64(9) CODEC(Delta, ZSTD(1)),
TimeUnix DateTime64(9) CODEC(Delta, ZSTD(1)),
Value Float64 CODEC(ZSTD(1)),
Flags UInt32 CODEC(ZSTD(1)),
Exemplars Nested (
FilteredAttributes Map(LowCardinality(String), String),
TimeUnix DateTime64(9),
Value Float64,
SpanId String,
TraceId String
) CODEC(ZSTD(1)),
AggTemp Int32 CODEC(ZSTD(1)),
IsMonotonic Boolean CODEC(Delta, ZSTD(1)),
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
) ENGINE = MergeTree()
TTL toDateTime("TimeUnix") + toIntervalDay(180)
PARTITION BY toDate(TimeUnix)
ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
SETTINGS index_granularity=8192, ttl_only_drop_parts = 1;
35 changes: 35 additions & 0 deletions exporter/clickhouseexporter/example/default_ddl/summary_metrics.sql
@@ -0,0 +1,35 @@
-- Default Summary metrics DDL

CREATE TABLE IF NOT EXISTS otel_metrics_summary (
ResourceAttributes Map(LowCardinality(String), String) CODEC(ZSTD(1)),
ResourceSchemaUrl String CODEC(ZSTD(1)),
ScopeName String CODEC(ZSTD(1)),
ScopeVersion String CODEC(ZSTD(1)),
ScopeAttributes Map(LowCardinality(String), String) CODEC(ZSTD(1)),
ScopeDroppedAttrCount UInt32 CODEC(ZSTD(1)),
ScopeSchemaUrl String CODEC(ZSTD(1)),
ServiceName LowCardinality(String) CODEC(ZSTD(1)),
MetricName String CODEC(ZSTD(1)),
MetricDescription String CODEC(ZSTD(1)),
MetricUnit String CODEC(ZSTD(1)),
Attributes Map(LowCardinality(String), String) CODEC(ZSTD(1)),
StartTimeUnix DateTime64(9) CODEC(Delta, ZSTD(1)),
TimeUnix DateTime64(9) CODEC(Delta, ZSTD(1)),
Count UInt64 CODEC(Delta, ZSTD(1)),
Sum Float64 CODEC(ZSTD(1)),
ValueAtQuantiles Nested(
Quantile Float64,
Value Float64
) CODEC(ZSTD(1)),
Flags UInt32 CODEC(ZSTD(1)),
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_key mapKeys(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_scope_attr_value mapValues(ScopeAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_key mapKeys(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_attr_value mapValues(Attributes) TYPE bloom_filter(0.01) GRANULARITY 1
) ENGINE = MergeTree()
TTL toDateTime("TimeUnix") + toIntervalDay(180)
PARTITION BY toDate(TimeUnix)
ORDER BY (ServiceName, MetricName, Attributes, toUnixTimestamp64Nano(TimeUnix))
SETTINGS index_granularity=8192, ttl_only_drop_parts = 1;
40 changes: 40 additions & 0 deletions exporter/clickhouseexporter/example/default_ddl/traces.sql
@@ -0,0 +1,40 @@
-- Default Trace table DDL

CREATE TABLE IF NOT EXISTS otel_traces (
Timestamp DateTime64(9) CODEC(Delta, ZSTD(1)),
TraceId String CODEC(ZSTD(1)),
SpanId String CODEC(ZSTD(1)),
ParentSpanId String CODEC(ZSTD(1)),
TraceState String CODEC(ZSTD(1)),
SpanName LowCardinality(String) CODEC(ZSTD(1)),
SpanKind LowCardinality(String) CODEC(ZSTD(1)),
ServiceName LowCardinality(String) CODEC(ZSTD(1)),
ResourceAttributes Map(LowCardinality(String), String) CODEC(ZSTD(1)),
ScopeName String CODEC(ZSTD(1)),
ScopeVersion String CODEC(ZSTD(1)),
SpanAttributes Map(LowCardinality(String), String) CODEC(ZSTD(1)),
Duration Int64 CODEC(ZSTD(1)),
StatusCode LowCardinality(String) CODEC(ZSTD(1)),
StatusMessage String CODEC(ZSTD(1)),
Events Nested (
Timestamp DateTime64(9),
Name LowCardinality(String),
Attributes Map(LowCardinality(String), String)
) CODEC(ZSTD(1)),
Links Nested (
TraceId String,
SpanId String,
TraceState String,
Attributes Map(LowCardinality(String), String)
) CODEC(ZSTD(1)),
INDEX idx_trace_id TraceId TYPE bloom_filter(0.001) GRANULARITY 1,
INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_span_attr_key mapKeys(SpanAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_span_attr_value mapValues(SpanAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
INDEX idx_duration Duration TYPE minmax GRANULARITY 1
) ENGINE = MergeTree()
TTL toDateTime("Timestamp") + toIntervalDay(180)
PARTITION BY toDate(Timestamp)
ORDER BY (ServiceName, SpanName, toUnixTimestamp(Timestamp), TraceId)
SETTINGS index_granularity=8192, ttl_only_drop_parts = 1;
4 changes: 4 additions & 0 deletions exporter/clickhouseexporter/exporter_logs.go
@@ -42,6 +42,10 @@ func newLogsExporter(logger *zap.Logger, cfg *Config) (*logsExporter, error) {
}

func (e *logsExporter) start(ctx context.Context, _ component.Host) error {
	if !e.cfg.ShouldCreateSchema() {
		return nil
	}

	if err := createDatabase(ctx, e.cfg); err != nil {
		return err
	}
8 changes: 6 additions & 2 deletions exporter/clickhouseexporter/exporter_metrics.go
@@ -37,12 +37,16 @@ func newMetricsExporter(logger *zap.Logger, cfg *Config) (*metricsExporter, error) {
}

func (e *metricsExporter) start(ctx context.Context, _ component.Host) error {
	internal.SetLogger(e.logger)

	if !e.cfg.ShouldCreateSchema() {
		return nil
	}

	if err := createDatabase(ctx, e.cfg); err != nil {
		return err
	}

	internal.SetLogger(e.logger)
Contributor:

Any reason to drop this?

Member Author:
It was moved to the top of the function so that it is called before the DDL statements (conditionally) run.

I also considered extracting all of the DDL to its own function outside of start so this separation would be more explicit instead of relying on an early return from ShouldCreateSchema. Let me know if I should extract it to a runDDL type function. 👍
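
Sketched out, that extraction might look like this (hypothetical, reusing the exact calls from this diff):

```go
// runDDL is a hypothetical extraction of the conditional schema setup out of start.
func (e *metricsExporter) runDDL(ctx context.Context) error {
	if err := createDatabase(ctx, e.cfg); err != nil {
		return err
	}

	ttlExpr := generateTTLExpr(e.cfg.TTLDays, e.cfg.TTL, "TimeUnix")
	return internal.NewMetricsTable(ctx, e.cfg.MetricsTableName, e.cfg.ClusterString(), e.cfg.TableEngineString(), ttlExpr, e.client)
}
```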


	ttlExpr := generateTTLExpr(e.cfg.TTLDays, e.cfg.TTL, "TimeUnix")
	return internal.NewMetricsTable(ctx, e.cfg.MetricsTableName, e.cfg.ClusterString(), e.cfg.TableEngineString(), ttlExpr, e.client)
}
4 changes: 4 additions & 0 deletions exporter/clickhouseexporter/exporter_traces.go
@@ -42,6 +42,10 @@ func newTracesExporter(logger *zap.Logger, cfg *Config) (*tracesExporter, error) {
}

func (e *tracesExporter) start(ctx context.Context, _ component.Host) error {
	if !e.cfg.ShouldCreateSchema() {
		return nil
	}

	if err := createDatabase(ctx, e.cfg); err != nil {
		return err
	}