In the last release, the outputs contained all the logic to calculate the event time and its granularity; if the time was not already present, it was calculated and added to the row.
With the new functionality, the time is calculated in the stateful aggregation and sent to subsequent stages as part of the aggregation key of the key-value pairs that make up the RDD.
Now all data are aggregated by time, and this aggregated time is what has to be stored by the outputs.
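As an illustration only (this is not Sparta's actual API), a minimal Scala sketch of the idea: once the truncated event time travels inside the aggregation key, a plain `reduceByKey` aggregates by time and the outputs just persist what they receive. The type names are hypothetical.

```scala
import org.apache.spark.rdd.RDD

object TimeKeyedAggregation {
  // Hypothetical value type: one numeric measure per aggregated field.
  type Measures = Map[String, Long]

  // The time, already truncated to the cube's granularity, is part of the
  // key, so reduceByKey aggregates by time with no extra logic left for the
  // outputs.
  def aggregateByTime(
      pairs: RDD[((Seq[String], Long), Measures)]
  ): RDD[((Seq[String], Long), Measures)] =
    pairs.reduceByKey { (left, right) =>
      (left.keySet ++ right.keySet).map { field =>
        field -> (left.getOrElse(field, 0L) + right.getOrElse(field, 0L))
      }.toMap
    }
}
```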
Be especially careful with the order of the fields: in several places we have to remove the time from the dimension values and re-insert it in the right position together with the key value. This way the schemas for rows and values are clearly identified.
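A sketch of that field-ordering step, with illustrative names (`DimensionValue`, `timeFieldName` are assumptions, not Sparta's real schema classes):

```scala
object FieldOrdering {
  // Illustrative type; the real schema classes differ.
  case class DimensionValue(name: String, value: Any)

  // Strip the time from the dimension values if it is present there, then
  // re-insert it in a fixed position next to the key values, so the row
  // schema and the value schema stay aligned.
  def orderedFields(
      dimensionValues: Seq[DimensionValue],
      time: Long,
      timeFieldName: String
  ): Seq[DimensionValue] = {
    val withoutTime = dimensionValues.filterNot(_.name == timeFieldName)
    withoutTime.sortBy(_.name) :+ DimensionValue(timeFieldName, time)
  }
}
```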
Special care must also be taken with the process of multiplexing a rollup.
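One way to picture the rollup multiplexing, again as a hedged sketch with hypothetical names: every dimension combination gets its own key pair, and each pair must carry the already-computed event time so aggregation by time stays correct for every combination.

```scala
object RollupMultiplexing {
  // Illustrative type, as in the sketch above.
  case class DimensionValue(name: String, value: Any)

  // Multiplex one event into every dimension combination of the rollup,
  // propagating the event time into each resulting key pair.
  def multiplexRollup(
      dimensionValues: Seq[DimensionValue],
      time: Long
  ): Seq[(Seq[DimensionValue], Long)] =
    (1 to dimensionValues.size).flatMap { n =>
      dimensionValues.combinations(n).map(combination => (combination, time))
    }
}
```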
Without this refactor and implementation, the old aggregation values are wrong.
* Upsert in JDBC does not update search fields (#155)
* [SPARTA-1247] Fix/java heap size (#164)
* [SPARTA-1247] Added OS memory to heap size
* Leave 512MB as min
* Improved readability
* Set newMemorySize only to container
* Control empty rdd partitions in JDBC save/upsert (#179)
* Minor changes from branch incompatible with the new Sparta version