[SPARK-22036][SQL] Decimal multiplication with high precision/scale often returns NULL #20023
Conversation
Test build #85116 has finished for PR 20023 at commit
@cloud-fan @dongjoon-hyun @gatorsmile @rxin @viirya I saw you worked on these files. Maybe you can help review the PR. For further details about the reasons for this PR, please refer to the e-mail I sent on the dev mailing list. Thank you.
Ideally we should avoid changing behaviors as much as we can, but since this behavior comes from Hive and Hive has also changed it, it might be OK to follow Hive and change it too? cc @hvanhovell too
@cloud-fan yes, Hive changed it, and most importantly, at the moment we are not compliant with the SQL standard. So currently Spark returns results which are different from Hive's and not compliant with the SQL standard. This is why I proposed this change.
I am generally in favor of following the SQL standard. How about we do this: let's make the standard behavior the default, and add a flag to revert to the old behavior. This allows us to ease users into the new behavior, and for us it can provide some data points on when we can remove the old behavior. I hope we can remove this in Spark 2.4 or later. At the end of the day it will be a bit more work, as I'd definitely make an effort to isolate the two behaviors as much as possible.
thanks for looking at this @hvanhovell. The reasons why I didn't introduce a configuration variable for this behavior are:
Let me know if you don't agree with these arguments. Thanks.
insert into decimals_test values(1, 100.0, 999.0);
insert into decimals_test values(2, 12345.123, 12345.123);
insert into decimals_test values(3, 0.1234567891011, 1234.1);
insert into decimals_test values(4, 123456789123456789.0, 1.123456789123456789);
nit. How about making this into one SQL statement?
insert into decimals_test values (1, 100.0, 999.0), (2, 12345.123, 12345.123), (3, 0.1234567891011, 1234.1), (4, 123456789123456789.0, 1.123456789123456789)
I don't fully agree :)...
@@ -0,0 +1,16 @@
-- tests for decimals handling in operations
-- Spark draws its inspiration from the Hive implementation
The hyperlinks in the PR came from Microsoft, and the primary purpose is SQL compliance. Can we remove this line?
@@ -1526,15 +1526,15 @@ class SQLQuerySuite extends QueryTest with SharedSQLContext {
checkAnswer(sql("select 10.300000000000000000 * 3.000000000000000000"),
  Row(BigDecimal("30.900000000000000000000000000000000000", new MathContext(38))))
checkAnswer(sql("select 10.300000000000000000 * 3.0000000000000000000"),
  Row(null))
Two cases (2 and 3) were mentioned in the email. If this is the only NULL-returning test case from the previous behavior, can we have another test case?
Currently, Spark behaves as follows:
1. It follows some rules taken from the initial Hive implementation;
2. it returns NULL;
3. it returns NULL.
The third case is never checked in the current codebase, i.e. when we go out of the representable range of values. I haven't added a test for it, because I was waiting for feedback from the community about how to handle the 3rd case, and I focused this PR only on points 1 and 2. But I can add a test case for it and eventually change it in a future PR to address the 3rd point in the e-mail. Thanks.
Thank you for pinging me, @mgaido91. The approach of the PR looks good to me.
@hvanhovell, as far as point 1 is concerned, I was referring to this comment and this PR where it is explicitly stated that using
may I kindly ask you to elaborate on this sentence a bit more? Thank you very much.
Thank you for your review @dongjoon-hyun. I think what we can do is add more tests to the whitelist in
I thought adding more cases into
private[sql] def adjustPrecisionScale(precision: Int, scale: Int): DecimalType = {
  // Assumptions:
  // precision >= scale
  // scale >= 0
Use `assert` to make sure the assumptions hold?
I can add it even though it is not needed... there is no way we can violate those constraints. If you believe it is better to use assert, I will do that.
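For illustration, the assertions being discussed might look like the following (a hypothetical sketch, not code from the PR; the function name and pass-through body are only illustrative):

```scala
import org.apache.spark.sql.types.DecimalType

// Sketch of the reviewer's suggestion: turn the commented assumptions at the top of
// adjustPrecisionScale into runtime assertions.
def adjustPrecisionScaleWithChecks(precision: Int, scale: Int): DecimalType = {
  assert(precision >= scale, s"precision ($precision) must be >= scale ($scale)")
  assert(scale >= 0, s"scale ($scale) must be non-negative")
  // ... the adjustment logic from the PR would follow here; pass-through for the sketch
  DecimalType(precision, scale)
}
```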
 * Type coercion for BinaryOperator in which one side is a non-decimal literal numeric, and the
 * other side is a decimal.
 */
private def nondecimalLiteralAndDecimal(
Is this rule newly introduced?
Yes, it is. If we don't introduce this, we have a failure in the Hive compatibility tests, because Hive uses the exact precision and scale needed by the literals, while we, before this change, were using conservative values for each type. For instance, if we have a `select 123.12345 * 3`, before this change `3` would have been interpreted as `Decimal(10, 0)`, which is the type for integers. After the change, `3` becomes `Decimal(1, 0)`, as Hive does. This prevents requiring more precision than what is actually needed.
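As an illustration of the difference (a sketch only; the helper name below is hypothetical, the resulting types come from the example above, and only two literal kinds are handled for brevity):

```scala
import java.math.{BigDecimal => JBigDecimal}

// Derive the minimal (precision, scale) that exactly represents a literal value,
// instead of the conservative per-type default (e.g. Decimal(10, 0) for any Int).
def minimalPrecisionScale(value: Any): (Int, Int) = value match {
  case i: Int =>
    val d = new JBigDecimal(i)
    (d.precision, 0)            // e.g. 3 -> (1, 0)
  case d: JBigDecimal =>
    (d.precision, d.scale)      // e.g. 123.12345 -> (8, 5)
}

assert(minimalPrecisionScale(3) == (1, 0))
assert(minimalPrecisionScale(new JBigDecimal("123.12345")) == (8, 5))
```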
@@ -136,10 +137,54 @@ object DecimalType extends AbstractDataType {
  case DoubleType => DoubleDecimal
}

private[sql] def forLiteral(literal: Literal): DecimalType = literal.value match {
  case v: Short => fromBigDecimal(BigDecimal(v))
Can't we just use `ShortDecimal`, `IntDecimal`, ...?
No, please see my comments above.
@@ -136,10 +137,54 @@ object DecimalType extends AbstractDataType {
  case DoubleType => DoubleDecimal
}

private[sql] def forLiteral(literal: Literal): DecimalType = literal.value match {
Is this different than `forType` if applied on `Literal.dataType`?
yes, please see my comment above for an example. Thanks.
// scale >= 0
if (precision <= MAX_PRECISION) {
  // Adjustment only needed when we exceed max precision
  DecimalType(precision, scale)
Shouldn't we also prevent `scale` > `MAX_SCALE`?
this is prevented outside this function.
val intDigits = precision - scale
// If original scale less than MINIMUM_ADJUSTED_SCALE, use original scale value; otherwise
// preserve at least MINIMUM_ADJUSTED_SCALE fractional digits
val minScaleValue = Math.min(scale, MINIMUM_ADJUSTED_SCALE)
Sounds like `MAXIMUM_ADJUSTED_SCALE` instead of `MINIMUM_ADJUSTED_SCALE`.
It is the `MINIMUM_ADJUSTED_SCALE`. We can't have a scale lower than that, even when that would be needed to avoid losing precision. Please see the comments above.
> We can't have a scale lower than that...

Don't you get a scale lower than `MINIMUM_ADJUSTED_SCALE` from `Math.min(scale, MINIMUM_ADJUSTED_SCALE)`?
Yes, sorry, my answer was very poor, I will rephrase. `scale` contains the scale which we need to represent the values without any precision loss. What we are doing here is saying that the lower bound for the scale is either the scale that we need to correctly represent the value or the `MINIMUM_ADJUSTED_SCALE`. After this, in the line below, we state that the scale we will use is the max between the number of digits of precision we don't need on the left of the dot and this `minScaleValue`: i.e. even though in some cases we might need a scale higher than `MINIMUM_ADJUSTED_SCALE`, and the number of digits needed on the left of the dot would force us to a scale lower than `MINIMUM_ADJUSTED_SCALE`, we enforce that we maintain at least `MINIMUM_ADJUSTED_SCALE`. We can't let the scale go below this threshold, even when that would be needed to guarantee that we don't lose digits on the left of the dot. Please refer also to the blog post I linked in the comment above for further (hopefully better) explanation.
// If original scale less than MINIMUM_ADJUSTED_SCALE, use original scale value; otherwise
// preserve at least MINIMUM_ADJUSTED_SCALE fractional digits
val minScaleValue = Math.min(scale, MINIMUM_ADJUSTED_SCALE)
val adjustedScale = Math.max(MAX_PRECISION - intDigits, minScaleValue)
Sounds like `Math.min`?
It is `max` because we take either the scale which would prevent a loss of "space" for `intDigits`, i.e. the part on the left of the dot, or `minScaleValue`, which is the scale we guarantee to provide at least.
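To make the interaction of these two lines concrete, here is a small worked example (a sketch assuming `MAX_PRECISION = 38` and `MINIMUM_ADJUSTED_SCALE = 6`, the values used by the PR; the standalone function is illustrative, not the PR's code):

```scala
val MAX_PRECISION = 38
val MINIMUM_ADJUSTED_SCALE = 6

// Mirrors the adjustment discussed above: keep room for the integral digits when
// possible, but never let the scale drop below minScaleValue.
def adjust(precision: Int, scale: Int): (Int, Int) = {
  if (precision <= MAX_PRECISION) {
    (precision, scale)
  } else {
    val intDigits = precision - scale
    val minScaleValue = Math.min(scale, MINIMUM_ADJUSTED_SCALE)
    val adjustedScale = Math.max(MAX_PRECISION - intDigits, minScaleValue)
    (MAX_PRECISION, adjustedScale)
  }
}

// Decimal(21, 10) * Decimal(21, 10) naively needs (43, 20): all 23 integral digits
// are kept and the scale shrinks from 20 to 38 - 23 = 15 (the fraction is rounded).
assert(adjust(43, 20) == (38, 15))
// Decimal(38, 18) * Decimal(38, 18) naively needs (77, 36): the scale is clamped at
// MINIMUM_ADJUSTED_SCALE = 6, even though the 41 integral digits cannot all be kept.
assert(adjust(77, 36) == (38, 6))
```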
 * corresponding scale is reduced to prevent the integral part of a result from being truncated.
 *
 * For further reference, please see
 * https://blogs.msdn.microsoft.com/sqlprogrammability/2006/03/29/multiplication-and-division-with-numerics/.
Not sure if this blog link will remain available for a long time.
I think that we should be careful about suddenly changing behavior.
Thanks for your efforts! This change needs more careful review and investigation. Could you post the outputs of Oracle, DB2, SQL Server and Hive? Are their results consistent?
@gatorsmile, please refer to the e-mail on the dev mailing list for further details. I ran the script I added to the tests in this PR; the results are:
Here are the outputs of the queries. Hive 2.3.0 (same as Spark after the PR)
SQLServer 2017
Postgres and Oracle
Spark before the PR
In DB2,
Thus, it might be more straightforward for us to follow what DB2 does.
Regarding the rules for deciding precision and scales, DB2 z/OS also has its own rules: https://www.ibm.com/support/knowledgecenter/en/SSEPEK_10.0.0/sqlref/src/tpc/db2z_witharithmeticoperators.html
Could you compare it with MS SQL Server?
it looks like other databases are very careful about precision loss. However, following DB2 and throwing an exception is pretty bad for big data applications, but returning null is also bad as it violates the SQL standard. A new proposal: can we increase the max decimal precision to 76 and keep the max scale at 38? Then we can avoid precision loss IIUC.
@gatorsmile I answered your comments about DB2 in the e-mail. @cloud-fan that would help, but not solve the problem. It would just make the problem appear with bigger numbers. As you can see from the e-mail, DB2's behavior is actually in accordance with the SQL standard and the other DBs; it just has a smaller maximum precision. And the case of throwing an exception is point 3 of my e-mails and is out of the scope of this PR, because I think we should first discuss which is the right approach in that case, and then I can eventually create a PR.
I might not be getting your point. Above is the result I got. Is this your scenario 3 or 2?
@gatorsmile that is scenario 3. I will explain why, and afterwards I will issue a correction of the summary in my last e-mail, because I made a mistake about how DB2 computes the result precision and scale, sorry for that. Anyway, what you showed is an example of point 3, because DB2 computes the result type as
As you can see, a truncation occurred. Now, let me amend my table to summarize the behavior of the many DBs:
Thanks for your detailed summary! We do not have a SQLCA. Thus, it is hard for us to send a warning message back like DB2 does. Silently losing the precision looks scary to me. Oracle sounds like it is following the rule. SQL ANSI 2011 does not document many details. For example, the result type of DB2's division is different from either our existing rule or the rule you changed. The rule you mentioned above about DB2 is just for multiplication. I am not sure whether we can finalize our default type coercion rule. Could you first help us improve the test cases added in #20008? Thanks!
Thanks for your analysis @gatorsmile. Actually the rule you specified for Oracle is what it uses when casting, rather than when doing arithmetic operations. Yes, DB2 has rather different rules to define the output type of operations. Anyway, we can have a behavior practically identical to DB2 by changing the value of

The reason why I am suggesting this is that my first concern is not Hive compliance, but SQL standard compliance. Indeed, as you can see from the summary, on point 1 there is not a uniform behavior (but this is OK for the SQL standard, since it gives freedom). But on point 2 we are the only ones who are not compliant with the SQL standard. And having this behavior by default doesn't look like the right thing to do IMHO. On point 3, only we and Hive are not compliant. Thus I think that should be changed as well. But in that case, we can't use the same flag, because it would be inconsistent. What do you think?

I can understand and agree that losing precision looks scary. But to me returning

I would be happy to help improve the test cases. May I just kindly ask you how you meant to do that? What would you like to be tested more? Would you like me to add more test cases in the scope of this PR or to open a new one for that? Thank you for your time reading my long messages. I just want to make the best choice and give you all the elements I have so that we can decide for the best all together.
Following ANSI SQL compliance sounds good to me. However, many details are vendor-specific. That means the query results will still vary even if we are 100% ANSI SQL compliant. To avoid frequently introducing behavior-breaking changes, we can also introduce a new mode

Before introducing the new mode, we first need to understand the difference between Spark SQL and the others. That is the reason why we need to write the test cases first. Then, we can run them against the different systems. This PR clearly shows that the current test cases do not cover scenarios 2 and 3.
Thanks @gatorsmile. Then should I create a follow-up PR for #20008 in order to cover cases 2 and 3 before going on with this PR, or can we go on with this PR and the test cases added in it?
LGTM. One thing we can improve is the golden file test framework. I found we sometimes repeat the test cases with a config on and off. We should write the test cases once and list the configs we wanna try, and ask the test framework to do it. This can be a follow-up. @mgaido91 thanks for your great work!
Test build #86272 has finished for PR 20023 at commit
Test build #86276 has finished for PR 20023 at commit
Test build #86277 has finished for PR 20023 at commit
Test build #86271 has finished for PR 20023 at commit
docs/sql-programming-guide.md
- Since Spark 2.3, by default arithmetic operations between decimals return a rounded value if an exact representation is not possible. This is compliant to SQL standards and Hive's behavior introduced in HIVE-15331. This involves the following changes
  - The rules to determine the result type of an arithmetic operation have been updated. In particular, if the precision / scale needed are out of the range of available values, the scale is reduced up to 6, in order to prevent the truncation of the integer part of the decimals.
  - Literal values used in SQL operations are converted to DECIMAL with the exact precision and scale needed by them.
  - The configuration `spark.sql.decimalOperations.allowPrecisionLoss` has been introduced. It defaults to `true`, which means the new behavior described here; if set to `false`, Spark will use the previous rules and behavior.
Also need to explain what is the previous behavior.
At least, we need to say, NULL will be returned in this case.
docs/sql-programming-guide.md
@@ -1795,6 +1795,11 @@ options.
- Since Spark 2.3, when all inputs are binary, SQL `elt()` returns an output as binary. Otherwise, it returns as a string. Until Spark 2.3, it always returns as a string despite of input types. To keep the old behavior, set `spark.sql.function.eltOutputAsString` to `true`.
- Since Spark 2.3, by default arithmetic operations between decimals return a rounded value if an exact representation is not possible. This is compliant to SQL standards and Hive's behavior introduced in HIVE-15331. This involves the following changes
  - The rules to determine the result type of an arithmetic operation have been updated. In particular, if the precision / scale needed are out of the range of available values, the scale is reduced up to 6, in order to prevent the truncation of the integer part of the decimals.
We need to explicitly document which arithmetic operations are affected.
.doc("When true (default), establishing the result type of an arithmetic operation " + | ||
"happens according to Hive behavior and SQL ANSI 2011 specification, ie. rounding the " + | ||
"decimal part of the result if an exact representation is not possible. Otherwise, NULL " + | ||
"is returned in those cases, as previously.") |
Yeah. This is better.
docs/sql-programming-guide.md
@@ -1795,6 +1795,11 @@ options.
- Since Spark 2.3, when all inputs are binary, SQL `elt()` returns an output as binary. Otherwise, it returns as a string. Until Spark 2.3, it always returns as a string despite of input types. To keep the old behavior, set `spark.sql.function.eltOutputAsString` to `true`.
- Since Spark 2.3, by default arithmetic operations between decimals return a rounded value if an exact representation is not possible. This is compliant to SQL standards and Hive's behavior introduced in HIVE-15331. This involves the following changes
This is the new behavior introduced in Hive 2.2. We have to emphasize it.
LGTM except a few comments about the doc
LGTM, pending jenkins
Test build #86330 has finished for PR 20023 at commit
thanks, merging to master/2.3!
…ften returns NULL

## What changes were proposed in this pull request?

When there is an operation between Decimals and the result is a number which is not representable exactly with the result's precision and scale, Spark is returning `NULL`. This was done to reflect Hive's behavior, but it is against SQL ANSI 2011, which states that "If the result cannot be represented exactly in the result type, then whether it is rounded or truncated is implementation-defined". Moreover, Hive now changed its behavior in order to respect the standard, thanks to HIVE-15331.

Therefore, the PR propose to:
- update the rules to determine the result precision and scale according to the new Hive's ones introduces in HIVE-15331;
- round the result of the operations, when it is not representable exactly with the result's precision and scale, instead of returning `NULL`
- introduce a new config `spark.sql.decimalOperations.allowPrecisionLoss` which default to `true` (ie. the new behavior) in order to allow users to switch back to the previous one.

Hive behavior reflects SQLServer's one. The only difference is that the precision and scale are adjusted for all the arithmetic operations in Hive, while SQL Server is said to do so only for multiplications and divisions in the documentation. This PR follows Hive's behavior.

A more detailed explanation is available here: https://mail-archives.apache.org/mod_mbox/spark-dev/201712.mbox/%3CCAEorWNAJ4TxJR9NBcgSFMD_VxTg8qVxusjP%2BAJP-x%2BJV9zH-yA%40mail.gmail.com%3E.

## How was this patch tested?

modified and added UTs. Comparisons with results of Hive and SQLServer.

Author: Marco Gaido <marcogaido91@gmail.com>

Closes #20023 from mgaido91/SPARK-22036.

(cherry picked from commit e28eb43)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
… integral literals

## What changes were proposed in this pull request?

#20023 proposed to allow precision lose during decimal operations, to reduce the possibilities of overflow. This is a behavior change and is protected by the DECIMAL_OPERATIONS_ALLOW_PREC_LOSS config. However, that PR introduced another behavior change: pick a minimum precision for integral literals, which is not protected by a config. This PR add a new config for it: `spark.sql.literal.pickMinimumPrecision`.

This can allow users to work around issue in SPARK-25454, which is caused by a long-standing bug of negative scale.

## How was this patch tested?

a new test

Closes #22494 from cloud-fan/decimal.

Authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: gatorsmile <gatorsmile@gmail.com>

(cherry picked from commit d0990e3)
Signed-off-by: gatorsmile <gatorsmile@gmail.com>
…ain integral digits first

### What changes were proposed in this pull request?

This is kind of a followup of #20023. It's simply wrong to cut the decimal precision to 38 if a wider decimal type exceeds the max precision, which drops the integral digits and makes the decimal value very likely to overflow. In #20023, we fixed this issue for arithmetic operations, but many other operations suffer from the same issue: Union, binary comparison, in subquery, create_array, coalesce, etc. This PR fixes all the remaining operators, without the min scale limitation, which should be applied to division and multiple only according to the SQL server doc: https://learn.microsoft.com/en-us/sql/t-sql/data-types/precision-scale-and-length-transact-sql?view=sql-server-ver15

### Why are the changes needed?

To produce reasonable wider decimal type.

### Does this PR introduce _any_ user-facing change?

Yes, the final data type of these operators will be changed if it's decimal type and its precision exceeds the max and the scale is not 0.

### How was this patch tested?

updated tests

### Was this patch authored or co-authored using generative AI tooling?

No

Closes #43781 from cloud-fan/decimal.

Lead-authored-by: Wenchen Fan <cloud0fan@gmail.com>
Co-authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
What changes were proposed in this pull request?

When there is an operation between Decimals and the result is a number which is not representable exactly with the result's precision and scale, Spark returns `NULL`. This was done to reflect Hive's behavior, but it is against SQL ANSI 2011, which states that "If the result cannot be represented exactly in the result type, then whether it is rounded or truncated is implementation-defined". Moreover, Hive has now changed its behavior in order to respect the standard, thanks to HIVE-15331. Therefore, the PR proposes to:
- update the rules to determine the result precision and scale according to the new Hive rules introduced in HIVE-15331;
- round the result of the operations, when it is not representable exactly with the result's precision and scale, instead of returning `NULL`;
- introduce a new config `spark.sql.decimalOperations.allowPrecisionLoss` which defaults to `true` (i.e. the new behavior) in order to allow users to switch back to the previous one.

Hive's behavior reflects SQL Server's. The only difference is that the precision and scale are adjusted for all the arithmetic operations in Hive, while SQL Server is documented to do so only for multiplications and divisions. This PR follows Hive's behavior.

A more detailed explanation is available here: https://mail-archives.apache.org/mod_mbox/spark-dev/201712.mbox/%3CCAEorWNAJ4TxJR9NBcgSFMD_VxTg8qVxusjP%2BAJP-x%2BJV9zH-yA%40mail.gmail.com%3E.

How was this patch tested?

Modified and added UTs. Comparisons with results of Hive and SQLServer.
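For illustration, the new flag can be toggled from a Spark session (a sketch; the config name and the example query come from this PR, the rest is the standard session API, and the exact output depends on the Spark build):

```scala
// Assumes a running SparkSession named `spark` on a build that includes this patch.

// Default after this PR: the result is rounded to fit the result type's precision/scale.
spark.conf.set("spark.sql.decimalOperations.allowPrecisionLoss", "true")
spark.sql("select 10.300000000000000000 * 3.0000000000000000000").show()

// Reverting to the previous behavior: NULL is returned when the exact result
// cannot be represented with the result's precision and scale.
spark.conf.set("spark.sql.decimalOperations.allowPrecisionLoss", "false")
spark.sql("select 10.300000000000000000 * 3.0000000000000000000").show()
```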