Add Akumuli database support #73
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master      #73      +/-   ##
==========================================
- Coverage   69.61%   69.58%   -0.04%
==========================================
  Files         103      104       +1
  Lines        5704     5810     +106
==========================================
+ Hits         3971     4043      +72
- Misses       1658     1689      +31
- Partials       75       78       +3
Continue to review full report at Codecov.
Hi @Lazin, I see you've updated this (though it still needs a few merge conflicts fixed). I will try to look into it this week so we can help move it forward, now that CrateDB is merged in.
Looking forward to it! I fixed the merge conflicts, but it looks like something is broken on CI now. I'll investigate tomorrow.
Hi @Lazin, I apologize for not getting back to you sooner. We've been heads down with a new use case that we are just now releasing. It changes a lot of things structurally that might affect your PR... would you mind looking into that and pinging me when the PR is ready for re-review?
Hi @RobAtticus,
Hello @Lazin |
Hi @blagojts |
Most of the files look good. I have some minor change requests.
I also couldn't test the data loading and query execution, but that may be on my end (at the moment of writing, https://akumuli.gitbook.io/docs is unavailable).
I ran Akumuli with Docker:
docker run -d -p 8181:8181 -p 8282:8282 -p 8383:8383 -p 4242:4242 akumuli/akumuli:0.7.45-skylake
When I try to run tsbs_load_akumuli -file=aku.in, I get the error Can't establish connection with http://localhost:8282, but the database is up and running and curl http://localhost:8181/api/stats returns an OK response.
series := string(buf[HeaderLength:])
if !s.bookClosed {
	// Save point for later
	if id, ok := s.book[series]; ok {
this if/else is a bit long, maybe you can extract parts of it into small functions
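For example, something in this spirit; the type, fields, and helper names below are made up just to show the shape of the split, not code from the PR:

package akumuli

import "encoding/binary"

// Serializer fields here are assumed for the sketch; only the idea of pulling
// each branch of the long if/else into its own small method is the point.
type Serializer struct {
	book     map[string]uint32
	deferred []byte
}

// writeKnownSeries (hypothetical) would hold the body of the
// "if id, ok := s.book[series]; ok" branch.
func (s *Serializer) writeKnownSeries(buf []byte, id uint32) []byte {
	binary.LittleEndian.PutUint32(buf[:4], id)
	return buf
}

// deferNewSeries (hypothetical) would hold the else branch that saves the
// point for later, until the series book is closed.
func (s *Serializer) deferNewSeries(buf []byte) {
	s.deferred = append(s.deferred, buf...)
}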
for _, c := range cases {
	actualCnt := strings.Count(got, c.expValue)
	if actualCnt != c.expCount {
the serialized content is not tested/asserted
It's checked on line 73. If one of the test samples is not present in the output the required number of times, the test will fail.
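In other words, the loop amounts to something like the sketch below; the sample output string is invented for illustration and does not reflect the real serializer format:

package akumuli

import (
	"strings"
	"testing"
)

// TestSerializeContains sketches the assertion under discussion: each expected
// value must appear in the serialized output the expected number of times.
func TestSerializeContains(t *testing.T) {
	got := "+cpu.usage host=host_0\n+1451606400000000000\n+58.13\n" // assumed sample output
	cases := []struct {
		expValue string
		expCount int
	}{
		{"+cpu.usage", 1},
		{"+58.13", 1},
	}
	for _, c := range cases {
		actualCnt := strings.Count(got, c.expValue)
		if actualCnt != c.expCount {
			t.Errorf("%q appeared %d times, want %d", c.expValue, actualCnt, c.expCount)
		}
	}
}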
@@ -0,0 +1,294 @@
package akumuli
I can't find the GroupByOrderByLimit query generation method.
It's not possible to implement this using the Akumuli query API; date/time truncation is not supported.
@@ -0,0 +1,32 @@
package main
If I understood correctly, Akumuli does not have the concept of databases and schemas like PostgreSQL or InfluxDB have, so the db creator is not needed to create a database. But the Creator also handles cleaning of old/stale data (check out tsbs_load_timescaledb/creator.go). I would change it so that the DBExists method always returns true, so that RemoveOldDb gets called by tsbs (that's where you can put the cleanup code). The calling of RemoveOldDb is controlled by the do-create-db flag, which is true by default, but you can set it to false if you want to keep the old data.
Yes, it doesn't have a notion of database/table/schema. And it's not possible to clean up using the database API; you have to run a command in the terminal to create or delete the database.
@blagojts thanks, I appreciate the feedback!
Hey @Lazin, we're seeing failures in CI:
Hi @RobAtticus, |
Just a few last nits if you want, otherwise I might fix them after merge.
Can you verify which queries this works for? Is this every query in devops except GroupByOrderByLimit?
buf := make([]byte, 0, 1024)
// Add cue
const HeaderLength = 8
buf = append(buf, "AAAAFFEE+"...)
can you make this a named constant? I am unclear what this magic value is.
Done. The value is just a placeholder that gets replaced by binary values later (all those binary.LittleEndian.Put* calls).
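For illustration, here is a minimal sketch of that pattern as it appears in the snippets in this review; the exact field layout (id, total length, field count) is inferred from the PutUint calls shown here and is an assumption, not the documented Akumuli wire format:

package main

import (
	"encoding/binary"
	"fmt"
)

const (
	headerLength      = 8          // bytes reserved at the front for the binary header
	headerPlaceholder = "AAAAFFEE" // dummy value overwritten once the real header is known
)

// encodePoint sketches the placeholder pattern: reserve the header up front,
// build the text payload, then overwrite the placeholder with binary values.
func encodePoint(id uint32, series string, nFields int) []byte {
	buf := make([]byte, 0, 1024)
	buf = append(buf, headerPlaceholder+"+"...) // '+' starts the RESP-style series string
	buf = append(buf, series...)
	buf = append(buf, '\n')
	// ... timestamp and field values would be appended here ...

	binary.LittleEndian.PutUint32(buf[:4], id)                          // series id
	binary.LittleEndian.PutUint16(buf[4:6], uint16(len(buf)))           // total message length
	binary.LittleEndian.PutUint16(buf[6:headerLength], uint16(nFields)) // field count
	return buf
}

func main() {
	msg := encodePoint(42, "cpu.usage host=host_0", 1)
	fmt.Printf("header: % x, payload: %s", msg[:headerLength], msg[headerLength:])
}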
// Shortcut
s.index++
tmp := make([]byte, 0, 1024)
tmp = append(tmp, "AAAAFFEE*2\n"...)
same thing as above
ditto
// Update cue
binary.LittleEndian.PutUint16(buf[4:6], uint16(len(buf)))
binary.LittleEndian.PutUint16(buf[6:HeaderLength], uint16(len(p.fieldValues)))
if deferPoint {
this is probably more go-like written as:
if deferPoint {
	s.deferred = append(s.deferred, buf...)
	return nil
}
_, err = w.Write(buf)
return err
Done
	binary.LittleEndian.PutUint16(buf[6:HeaderLength], uint16(0))
	binary.LittleEndian.PutUint32(buf[:4], id)
} else {
	panic("Unexpected series name")
let's just return an error here? A long-term goal I have is to reduce the number of panics in the codebase, and given this func already returns errors, we should probably just do that
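Something along these lines, assuming the enclosing function returns just an error (a sketch against the snippet above, not the exact code from the PR):

if id, ok := s.book[series]; ok {
	binary.LittleEndian.PutUint16(buf[6:HeaderLength], uint16(0))
	binary.LittleEndian.PutUint32(buf[:4], id)
} else {
	// Return an error instead of panicking, so the caller decides how to
	// handle an unknown series.
	return fmt.Errorf("unexpected series name: %s", series)
}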
Done
Sorry it took so long, but merged!