docs(storage): more on InsertObject vs. WriteObject #12577

Merged
Changes from 1 commit
20 changes: 15 additions & 5 deletions google/cloud/storage/client.h
@@ -854,6 +854,12 @@ class Client {
/**
* Creates an object given its name and contents.
*
* If you need to perform larger uploads, or uploads where the data is not
* contiguous in memory, use `WriteObject()`. This function always performs a
* single-shot upload, while `WriteObject()` always uses resumable uploads.
* The [service documentation] has recommendations on choosing between
* single-shot and resumable uploads based on the upload size.
*
* @param bucket_name the name of the bucket that will contain the object.
* @param object_name the name of the object to be created.
* @param contents the contents (media) for the new object.
@@ -874,6 +880,9 @@
*
* @par Example
* @snippet storage_object_samples.cc insert object multipart
*
* [service documentation]:
* https://cloud.google.com/storage/docs/uploads-downloads#size
*/
template <typename... Options>
StatusOr<ObjectMetadata> InsertObject(std::string const& bucket_name,
@@ -1135,6 +1144,10 @@
* can use either the regular `operator<<()`, or `std::ostream::write()` to
* upload data.
*
* For small uploads where all the data is contiguous in memory, we recommend
* using `InsertObject()`. The [service documentation] has specific
* recommendations on object sizes and upload types.
*
* This function always uses [resumable uploads][resumable-link]. The
* application can provide a `#RestoreResumableUploadSession()` option to
* resume a previously created upload. The returned object has accessors to
@@ -1145,9 +1158,6 @@
* application's responsibility to query the next expected byte and send
* the remaining data without gaps or duplications.
*
* For small uploads we recommend using `InsertObject`, consult
* [the documentation][how-to-upload-link] for details.
*
* If the application does not provide a `#RestoreResumableUploadSession()`
* option, or it provides the `#NewResumableUploadSession()` option then a new
* resumable upload session is created.
@@ -1186,8 +1196,8 @@
* resumable uploads.
*
* [resumable-link]: https://cloud.google.com/storage/docs/resumable-uploads
* [how-to-upload-link]:
* https://cloud.google.com/storage/docs/json_api/v1/how-tos/upload
* [service documentation]:
* https://cloud.google.com/storage/docs/uploads-downloads#size
*/
template <typename... Options>
ObjectWriteStream WriteObject(std::string const& bucket_name,
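The doc comment above points applications at `RestoreResumableUploadSession()` and at querying the next expected byte, but the PR does not add a sample for that flow. A minimal sketch, assuming the option and the stream accessors behave as the comment describes; the function name, `session_id`, and `payload` parameters are illustrative only, not part of this change:

#include "google/cloud/storage/client.h"
#include <string>
#include <utility>

namespace gcs = ::google::cloud::storage;

// Sketch only: resume an interrupted upload. The `session_id` would have been
// captured earlier from `stream.resumable_session_id()`; here it is a
// placeholder, as is the assumption that `payload` holds the full object data.
void ResumeUpload(gcs::Client client, std::string const& bucket_name,
                  std::string const& object_name,
                  std::string const& session_id, std::string const& payload) {
  gcs::ObjectWriteStream stream = client.WriteObject(
      bucket_name, object_name,
      gcs::RestoreResumableUploadSession(session_id));
  // As the doc comment says, query the next expected byte and send the
  // remaining data without gaps or duplications.
  auto const committed = stream.next_expected_byte();
  stream.write(payload.data() + committed,
               static_cast<std::streamsize>(payload.size() - committed));
  stream.Close();
  auto metadata = std::move(stream).metadata();
  if (!metadata) throw std::move(metadata).status();
}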
15 changes: 12 additions & 3 deletions google/cloud/storage/examples/storage_object_samples.cc
@@ -375,15 +375,24 @@ void WriteObjectFromMemory(google::cloud::storage::Client client,
[](gcs::Client client, std::string const& bucket_name,
std::string const& object_name) {
std::string const text = "Lorem ipsum dolor sit amet";
// For small uploads where the data is contiguous in memory, use
// `InsertObject()`. For more specific size recommendations see
// https://cloud.google.com/storage/docs/uploads-downloads#size
auto metadata = client.InsertObject(bucket_name, object_name, text);
if (!metadata) throw std::move(metadata).status();
std::cout << "Successfully wrote to object " << metadata->name()
<< " its size is: " << metadata->size() << "\n";

// For larger uploads, or uploads where the data is not contiguous in
// memory, use `WriteObject()`. Consider using `std::ostream::write()` for
// best performance.
std::vector<std::string> v(100, text);
gcs::ObjectWriteStream stream =
client.WriteObject(bucket_name, object_name);

std::copy(v.begin(), v.end(), std::ostream_iterator<std::string>(stream));

stream.Close();

StatusOr<gcs::ObjectMetadata> metadata = std::move(stream).metadata();
metadata = std::move(stream).metadata();
if (!metadata) throw std::move(metadata).status();
std::cout << "Successfully wrote to object " << metadata->name()
<< " its size is: " << metadata->size()
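The new comments recommend `std::ostream::write()` for best performance, while the snippet itself keeps the `std::ostream_iterator` form. A minimal sketch of the `write()` variant, using the same client calls as the sample; the function name and `payload` parameter are illustrative:

#include "google/cloud/storage/client.h"
#include <iostream>
#include <string>
#include <utility>

namespace gcs = ::google::cloud::storage;

// Sketch only: upload a buffer that is already contiguous in memory using
// `std::ostream::write()`, avoiding the per-element formatting of `operator<<`.
void WriteObjectFromBuffer(gcs::Client client, std::string const& bucket_name,
                           std::string const& object_name,
                           std::string const& payload) {
  gcs::ObjectWriteStream stream =
      client.WriteObject(bucket_name, object_name);
  stream.write(payload.data(), static_cast<std::streamsize>(payload.size()));
  stream.Close();
  auto metadata = std::move(stream).metadata();
  if (!metadata) throw std::move(metadata).status();
  std::cout << "Wrote " << metadata->size() << " bytes to " << metadata->name()
            << "\n";
}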