feat: allow async stream for writing and appending to a dataset #3146
Conversation
Codecov Report

Attention: Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #3146      +/-   ##
==========================================
- Coverage   77.95%   77.95%   -0.01%
==========================================
  Files         242      242
  Lines       81904    82455     +551
  Branches    81904    82455     +551
==========================================
+ Hits        63848    64275     +427
- Misses      14890    14969      +79
- Partials     3166     3211      +45
```

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
Thanks for opening a PR. Seeing as both readers and streams are internally transformed into streams and handled the same way, I'd rather not have separate methods for them. If we could keep just one `execute_stream` and not add a new `execute_reader`, that would be my preference.
The only reason I created a separate method for `Vec<RecordBatch>` is that I later intend for the implementation to be different. (It will be able to split the data up and write in parallel, which you can't do with a stream.)
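The parallel-split idea can be sketched with plain threads and stand-in types (illustrative only; this is not the actual lance implementation, and `RecordBatch` here is just a `Vec`):

```rust
use std::thread;

type RecordBatch = Vec<i64>;

// Split the batches into contiguous chunks and "write" each chunk on its
// own thread. A stream cannot be split up front like this, because its
// length and contents are only known as it is consumed.
fn parallel_write(batches: Vec<RecordBatch>, workers: usize) -> usize {
    let chunk_size = batches.len().div_ceil(workers).max(1);
    let handles: Vec<_> = batches
        .chunks(chunk_size)
        .map(|chunk| {
            let chunk = chunk.to_vec();
            // Stand-in for writing one data file: count the rows written.
            thread::spawn(move || chunk.iter().map(Vec::len).sum::<usize>())
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    let batches = vec![vec![1, 2], vec![3], vec![4, 5, 6]];
    assert_eq!(parallel_write(batches, 2), 6);
}
```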
Could you instead try something like:
```rust
pub trait StreamingWriteSource {
    fn try_into_stream(self) -> Result<SendableRecordBatchStream>;
}

impl StreamingWriteSource for SendableRecordBatchStream { ... }
impl StreamingWriteSource for Box<dyn RecordBatchReader> { ... }

pub async fn write_stream(
    stream: impl StreamingWriteSource,
    schema: Schema,
    dest: impl Into<WriteDestination<'_>>,
    params: Option<WriteParams>,
) -> Result<Dataset>
```
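As a minimal, self-contained sketch of the trait idea above, using plain iterators as stand-ins for Arrow's `RecordBatchReader` and `SendableRecordBatchStream` (all types here are simplified placeholders, not the real lance/Arrow APIs):

```rust
// Stand-ins for the real Arrow/DataFusion types; illustrative only.
type RecordBatch = Vec<i64>;
type SendableRecordBatchStream = Box<dyn Iterator<Item = RecordBatch>>;

// One trait unifies every kind of input the write path accepts.
trait StreamingWriteSource {
    fn try_into_stream(self) -> SendableRecordBatchStream;
}

// A "stream" is already a stream: the conversion is the identity.
impl StreamingWriteSource for SendableRecordBatchStream {
    fn try_into_stream(self) -> SendableRecordBatchStream {
        self
    }
}

// A "reader" (here: a concrete batch container) is boxed into the
// same stream type.
struct BatchReader {
    batches: Vec<RecordBatch>,
}

impl StreamingWriteSource for BatchReader {
    fn try_into_stream(self) -> SendableRecordBatchStream {
        Box::new(self.batches.into_iter())
    }
}

// A single entry point accepts both kinds of source.
fn write_stream(source: impl StreamingWriteSource) -> usize {
    source.try_into_stream().count()
}

fn main() {
    let reader = BatchReader { batches: vec![vec![1, 2], vec![3]] };
    let stream: SendableRecordBatchStream = Box::new(vec![vec![4]].into_iter());
    assert_eq!(write_stream(reader), 2);
    assert_eq!(write_stream(stream), 1);
}
```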
rust/lance/src/dataset.rs (outdated)
```rust
/// Append to existing [Dataset] with a stream of [RecordBatch]s
///
/// Returns void result or Returns [Error]
pub async fn append_stream(
    &mut self,
    stream: SendableRecordBatchStream,
    schema: Schema,
    params: Option<WriteParams>,
```
I'd also rather not add these methods. If we keep adding these we'll end up with too many APIs. The `InsertBuilder` is public and I'd rather we use that. If we want, we could add `write_builder(&self) -> WriteBuilder;` and I'd be fine with that.
Implemented by abstracting the `(stream, peek_schema(stream) as schema)` pair into `StreamingWriteSource`.
rust/lance/src/dataset.rs (outdated)
```rust
let (batches, schema) = peek_reader_schema(Box::new(batches)).await?;
let stream = reader_to_stream(batches);
```
We do some important logic related to dictionary arrays in `peek_reader_schema`. So I don't think we should take `Schema` from the user in the stream case. Instead, we should flip this around:

```rust
let stream = reader_to_stream(batches);
let (stream, schema) = peek_stream_schema(stream).await?;
```

Then for the stream case we can also use the `peek_stream_schema()` method.
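The peek-then-rebuild trick behind a function like `peek_stream_schema` can be sketched with plain iterators (stand-in types only; a batch's "schema" is reduced to its column count here):

```rust
// Illustrative stand-ins: a "batch" is a Vec<i64>, and its "schema"
// is just its column count.
type RecordBatch = Vec<i64>;
type Schema = usize;

// Peek the first batch to derive the schema, then rebuild a stream
// that still yields that first batch followed by the rest.
fn peek_stream_schema(
    mut stream: Box<dyn Iterator<Item = RecordBatch>>,
) -> Option<(Box<dyn Iterator<Item = RecordBatch>>, Schema)> {
    let first = stream.next()?;
    let schema: Schema = first.len();
    // Chain the peeked batch back onto the front so nothing is lost.
    let rebuilt: Box<dyn Iterator<Item = RecordBatch>> =
        Box::new(std::iter::once(first).chain(stream));
    Some((rebuilt, schema))
}

fn main() {
    let stream: Box<dyn Iterator<Item = RecordBatch>> =
        Box::new(vec![vec![1, 2, 3], vec![4, 5, 6]].into_iter());
    let (stream, schema) = peek_stream_schema(stream).unwrap();
    assert_eq!(schema, 3);
    // The peeked batch is still first in the rebuilt stream.
    let batches: Vec<RecordBatch> = stream.collect();
    assert_eq!(batches.len(), 2);
    assert_eq!(batches[0], vec![1, 2, 3]);
}
```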
Implemented the proposed logic via `StreamingWriteSource::into_stream_and_schema`. People who do not want the schema can use `StreamingWriteSource::into_stream` instead, which is the former `reader_to_stream`. Of course, the `reader_to_stream` function will not be removed.

Applied.
Thanks for making those modifications. :)
Once you fix the code size issue, I am ready to approve.
```diff
 pub async fn write(
     &self,
-    reader: impl RecordBatchReader + Send + 'static,
+    source: impl StreamingWriteSource,
     id: Option<u64>,
 ) -> Result<Fragment> {
-    let (stream, schema) = self.get_stream_and_schema(Box::new(reader)).await?;
-    self.write_impl(stream, schema, id).await
+    let id = id.unwrap_or_default();
```
Please separate this back out into `write()` and `write_impl`. Each concrete type passed into `write()` will generate a new set of code for it. We have them dispatch into `write_impl` to minimize the size of the code that is duplicated.
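This code-size concern can be illustrated in miniature: the generic outer function is monomorphized once per concrete source type, so it should stay thin and immediately dispatch to an inner function that takes only concrete types (stand-in types below, not the real lance signatures):

```rust
type RecordBatch = Vec<i64>;
type Stream = Box<dyn Iterator<Item = RecordBatch>>;

// Thin generic shim: one copy of THIS function is generated per
// concrete iterator type, but it only does the boxing/conversion.
fn write(reader: impl Iterator<Item = RecordBatch> + 'static) -> usize {
    let stream: Stream = Box::new(reader);
    write_impl(stream)
}

// The heavy logic takes only concrete types, so it is compiled once
// no matter how many distinct source types call `write`.
fn write_impl(stream: Stream) -> usize {
    stream.map(|batch| batch.len()).sum()
}

fn main() {
    // Two different concrete iterator types, one shared `write_impl`.
    let from_vec = vec![vec![1, 2], vec![3]].into_iter();
    let from_range = (0..4).map(|i| vec![i]);
    assert_eq!(write(from_vec), 3);
    assert_eq!(write(from_range), 4);
}
```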
Because the new `StreamingWriteSource` requires `Self: Sized`, I converted this impl into `(stream, schema)`, which is a set of concrete types, before entering the `_impl` methods.

I reverted the flattening.
This looks good now. Thanks for working through my feedback. Great work, @HoKim98 !
This PR allows end-users to use `SendableRecordBatchStream` and `Schema` directly for writing or appending to a dataset. It's vital to be able to write and append async streams to a dataset.
Related Issues
Partially resolves #1792.
Side-effects
This PR has side-effects like below:
- Changed the write path to go through `StreamingWriteSource::into_stream_and_schema`.
- Added `StreamingWriteSource`, which covers the former `reader_to_stream` but also supports streams.
- Kept the related helper functions (`reader_to_stream`, `peek_stream_schema`).