fix: Fix panic in avg aggregate and disable stddev by default #819

Merged · 4 commits · Aug 13, 2024

Changes from all commits
16 changes: 14 additions & 2 deletions common/src/main/scala/org/apache/comet/CometConf.scala
@@ -62,6 +62,8 @@ object CometConf extends ShimCometConf {
val OPERATOR_LOCAL_LIMIT: String = "localLimit"
val OPERATOR_GLOBAL_LIMIT: String = "globalLimit"

+val EXPRESSION_STDDEV: String = "stddev"

/** List of all configs that is used for generating documentation */
val allConfs = new ListBuffer[ConfigEntry[_]]

@@ -135,6 +137,12 @@ object CometConf extends ShimCometConf {
val COMET_EXEC_TAKE_ORDERED_AND_PROJECT_ENABLED: ConfigEntry[Boolean] =
createExecEnabledConfig(OPERATOR_TAKE_ORDERED_AND_PROJECT, defaultValue = false)

+val COMET_EXPR_STDDEV_ENABLED: ConfigEntry[Boolean] =
+createExecEnabledConfig(
+EXPRESSION_STDDEV,
+defaultValue = false,
+notes = Some("stddev is slower than Spark's implementation"))

val COMET_MEMORY_OVERHEAD: OptionalConfigEntry[Long] = conf("spark.comet.memoryOverhead")
.doc(
"The amount of additional memory to be allocated per executor process for Comet, in MiB. " +
@@ -489,9 +497,13 @@ object CometConf extends ShimCometConf {
/** Create a config to enable a specific operator */
private def createExecEnabledConfig(
exec: String,
-defaultValue: Boolean): ConfigEntry[Boolean] = {
+defaultValue: Boolean,
+notes: Option[String] = None): ConfigEntry[Boolean] = {
conf(s"$COMET_EXEC_CONFIG_PREFIX.$exec.enabled")
-.doc(s"Whether to enable $exec by default. The default value is $defaultValue.")
+.doc(
+s"Whether to enable $exec by default. The default value is $defaultValue." + notes
+.map(s => s" $s.")
+.getOrElse(""))
.booleanConf
.createWithDefault(defaultValue)
}
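The `notes` mechanism above simply appends an optional extra sentence to the generated doc string. A minimal standalone sketch of the same composition, in Rust rather than Scala (illustrative only; `describe` is an invented name, not a Comet API):

```rust
// Illustrative sketch, not Comet code: append an optional note to a
// generated config description, mirroring `createExecEnabledConfig` above.
fn describe(exec: &str, default_value: bool, notes: Option<&str>) -> String {
    format!("Whether to enable {exec} by default. The default value is {default_value}.")
        + &notes.map(|s| format!(" {s}.")).unwrap_or_default()
}

fn main() {
    // With a note the extra sentence is appended; without one, nothing is.
    println!("{}", describe("stddev", false, Some("stddev is slower than Spark's implementation")));
    println!("{}", describe("sort", false, None));
}
```

This keeps the docs-generation path (`allConfs` feeding configs.md) unchanged while letting individual configs carry caveats.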
1 change: 1 addition & 0 deletions docs/source/user-guide/configs.md
@@ -52,6 +52,7 @@ Comet provides the following configuration settings.
| spark.comet.exec.shuffle.mode | The mode of Comet shuffle. This config is only effective if Comet shuffle is enabled. Available modes are 'native', 'jvm', and 'auto'. 'native' is for native shuffle which has best performance in general. 'jvm' is for jvm-based columnar shuffle which has higher coverage than native shuffle. 'auto' is for Comet to choose the best shuffle mode based on the query plan. By default, this config is 'jvm'. | jvm |
| spark.comet.exec.sort.enabled | Whether to enable sort by default. The default value is false. | false |
| spark.comet.exec.sortMergeJoin.enabled | Whether to enable sortMergeJoin by default. The default value is false. | false |
+| spark.comet.exec.stddev.enabled | Whether to enable stddev by default. The default value is false. stddev is slower than Spark's implementation. | false |
| spark.comet.exec.takeOrderedAndProject.enabled | Whether to enable takeOrderedAndProject by default. The default value is false. | false |
| spark.comet.exec.union.enabled | Whether to enable union by default. The default value is false. | false |
| spark.comet.exec.window.enabled | Whether to enable window by default. The default value is false. | false |
2 changes: 1 addition & 1 deletion docs/source/user-guide/expressions.md
@@ -33,7 +33,7 @@ The following Spark expressions are currently available. Any known compatibility
| ---------------- | ----- |
| UnaryMinus (`-`) | |

-## Binary Arithmeticx
+## Binary Arithmetic

| Expression | Notes |
| --------------- | --------------------------------------------------- |
4 changes: 0 additions & 4 deletions native/core/src/execution/datafusion/expressions/avg.rs
@@ -322,10 +322,6 @@ where

// return arrays for sums and counts
fn state(&mut self, emit_to: EmitTo) -> Result<Vec<ArrayRef>> {
-assert!(
-matches!(emit_to, EmitTo::All),
-"EmitTo::First is not supported"
-);
let counts = emit_to.take_needed(&mut self.counts);
let counts = Int64Array::new(counts.into(), None);

@@ -474,11 +474,6 @@ impl GroupsAccumulator for AvgDecimalGroupsAccumulator {

// return arrays for sums and counts
fn state(&mut self, emit_to: EmitTo) -> Result<Vec<ArrayRef>> {
-assert!(
-matches!(emit_to, EmitTo::All),
-"EmitTo::First is not supported"
-);
-
let nulls = self.is_not_null.finish();
let nulls = Some(NullBuffer::new(nulls));

30 changes: 25 additions & 5 deletions spark/src/main/scala/org/apache/comet/serde/QueryPlanSerde.scala
@@ -204,7 +204,8 @@ object QueryPlanSerde extends Logging with ShimQueryPlanSerde with CometExprShim

def windowExprToProto(
windowExpr: WindowExpression,
-output: Seq[Attribute]): Option[OperatorOuterClass.WindowExpr] = {
+output: Seq[Attribute],
+conf: SQLConf): Option[OperatorOuterClass.WindowExpr] = {

val aggregateExpressions: Array[AggregateExpression] = windowExpr.flatMap { expr =>
expr match {
@@ -224,7 +225,7 @@ object QueryPlanSerde extends Logging with ShimQueryPlanSerde with CometExprShim
val (aggExpr, builtinFunc) = if (aggregateExpressions.nonEmpty) {
val modes = aggregateExpressions.map(_.mode).distinct
assert(modes.size == 1 && modes.head == Complete)
-(aggExprToProto(aggregateExpressions.head, output, true), None)
+(aggExprToProto(aggregateExpressions.head, output, true, conf), None)
} else {
(None, exprToProto(windowExpr.windowFunction, output))
}
@@ -330,7 +331,8 @@ object QueryPlanSerde extends Logging with ShimQueryPlanSerde with CometExprShim
def aggExprToProto(
aggExpr: AggregateExpression,
inputs: Seq[Attribute],
-binding: Boolean): Option[AggExpr] = {
+binding: Boolean,
+conf: SQLConf): Option[AggExpr] = {
aggExpr.aggregateFunction match {
case s @ Sum(child, _) if sumDataTypeSupported(s.dataType) && isLegacyMode(s) =>
val childExpr = exprToProto(child, inputs, binding)
@@ -638,6 +640,15 @@ object QueryPlanSerde extends Logging with ShimQueryPlanSerde with CometExprShim
withInfo(aggExpr, child)
None
}

+case StddevSamp(child, _) if !isCometOperatorEnabled(conf, CometConf.EXPRESSION_STDDEV) =>
+withInfo(
+aggExpr,
+"stddev disabled by default because it can be slower than Spark. " +
+s"Set ${CometConf.EXPRESSION_STDDEV}.enabled=true to enable it.",
+child)
+None

case std @ StddevSamp(child, nullOnDivideByZero) =>
val childExpr = exprToProto(child, inputs, binding)
val dataType = serializeDataType(std.dataType)
@@ -658,6 +669,15 @@ object QueryPlanSerde extends Logging with ShimQueryPlanSerde with CometExprShim
withInfo(aggExpr, child)
None
}

+case StddevPop(child, _) if !isCometOperatorEnabled(conf, CometConf.EXPRESSION_STDDEV) =>
+withInfo(
+aggExpr,
+"stddev disabled by default because it can be slower than Spark. " +
+s"Set ${CometConf.EXPRESSION_STDDEV}.enabled=true to enable it.",
+child)
+None

case std @ StddevPop(child, nullOnDivideByZero) =>
val childExpr = exprToProto(child, inputs, binding)
val dataType = serializeDataType(std.dataType)
@@ -2593,7 +2613,7 @@ object QueryPlanSerde extends Logging with ShimQueryPlanSerde with CometExprShim
return None
}

-val windowExprProto = winExprs.map(windowExprToProto(_, output))
+val windowExprProto = winExprs.map(windowExprToProto(_, output, op.conf))
val partitionExprs = partitionSpec.map(exprToProto(_, child.output))

val sortOrders = orderSpec.map(exprToProto(_, child.output))
@@ -2686,7 +2706,7 @@ object QueryPlanSerde extends Logging with ShimQueryPlanSerde with CometExprShim
val output = child.output

val aggExprs =
-aggregateExpressions.map(aggExprToProto(_, output, binding))
+aggregateExpressions.map(aggExprToProto(_, output, binding, op.conf))
if (childOp.nonEmpty && groupingExprs.forall(_.isDefined) &&
aggExprs.forall(_.isDefined)) {
val hashAggBuilder = OperatorOuterClass.HashAggregate.newBuilder()
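The guarded `case … if !isCometOperatorEnabled(…) => None` pattern above is how the disable works: returning `None` from serialization makes Comet fall back to Spark's own implementation for that aggregate. A hypothetical Rust sketch of the same guard-and-fallback shape (the real logic is the Scala above; the enum and function names here are invented):

```rust
// Hypothetical sketch of the serde fallback pattern: a disabled expression
// serializes to None, which means "leave this aggregate to Spark".
#[derive(Debug, PartialEq)]
enum Agg {
    StddevSamp,
    StddevPop,
    Sum,
}

fn agg_to_proto(agg: &Agg, stddev_enabled: bool) -> Option<&'static str> {
    match agg {
        // Guard fires first: disabled stddev variants fall back to Spark.
        Agg::StddevSamp | Agg::StddevPop if !stddev_enabled => None,
        Agg::StddevSamp => Some("stddev_samp"),
        Agg::StddevPop => Some("stddev_pop"),
        Agg::Sum => Some("sum"),
    }
}

fn main() {
    assert_eq!(agg_to_proto(&Agg::StddevSamp, false), None); // falls back
    assert_eq!(agg_to_proto(&Agg::StddevSamp, true), Some("stddev_samp"));
}
```

Ordering the guarded arm before the unguarded ones matters in both languages: the match must test the config before reaching the arm that emits the native expression.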
28 changes: 28 additions & 0 deletions spark/src/test/resources/tpcds-micro-benchmarks/agg_stddev.sql
@@ -0,0 +1,28 @@
-- Licensed to the Apache Software Foundation (ASF) under one
-- or more contributor license agreements. See the NOTICE file
-- distributed with this work for additional information
-- regarding copyright ownership. The ASF licenses this file
-- to you under the Apache License, Version 2.0 (the
-- "License"); you may not use this file except in compliance
-- with the License. You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing,
-- software distributed under the License is distributed on an
-- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-- KIND, either express or implied. See the License for the
-- specific language governing permissions and limitations
-- under the License.

select w_warehouse_name,w_warehouse_sk,i_item_sk,d_moy
,stddev_samp(inv_quantity_on_hand) stdev
from inventory
,item
,warehouse
,date_dim
where inv_item_sk = i_item_sk
and inv_warehouse_sk = w_warehouse_sk
and inv_date_sk = d_date_sk
and d_year =2001
group by w_warehouse_name,w_warehouse_sk,i_item_sk,d_moy;
@@ -61,6 +61,7 @@ object CometTPCDSMicroBenchmark extends CometTPCQueryBenchmarkBase {
"agg_low_cardinality",
"agg_sum_decimals_no_grouping",
"agg_sum_integers_no_grouping",
+"agg_stddev",
"case_when_column_or_null",
"case_when_scalar",
"char_type",