Merge master into bigquery and add BigQuery to README and TESTING #502

Merged
merged 47 commits into bigquery from master on Dec 22, 2015
Commits
4899c27
Create packages for resource manager and outline spi layer.
Nov 2, 2015
3687cb1
Fixes to the ResourceManagerRpc layer, and also add resource manager …
Nov 3, 2015
3892eed
minor changes to ResourceManagerRpc
Nov 3, 2015
a60f7d1
Style updates to ResourceManagerRpc
Nov 3, 2015
c69af1a
add return values to delete, undelete, and setIamPolicy
Nov 3, 2015
249fae8
Remove spi result enums, change 'update/set' terminology to 'replace'
Nov 4, 2015
65a6240
Project, ProjectInfo, Policy, and ResourceId classes added.
Nov 12, 2015
9d6fbff
Fix style, simplify equals methods, fix tests, and add Project javadocs
Nov 17, 2015
e625c20
Add documentation and make resource ID type string
Nov 20, 2015
ebcac0b
Remove Policy and add docs
Nov 24, 2015
d800d1f
Added docs, removed policy-related methods from spi layer, fixed list…
Nov 25, 2015
a770e12
Remove parent and resource ID, add fields options
Nov 25, 2015
24edea2
Add exception handling, add back resource ID, and other cleanup
Nov 27, 2015
940aa92
Add list/get options serialization tests + other small fixes
Nov 30, 2015
57de95a
Add page size and page token options
Nov 30, 2015
e9359a0
fix paging docs
Nov 30, 2015
e6954ae
sync pom version to parent project
Nov 30, 2015
a50d770
Default spi layer implementation
Nov 30, 2015
918ab7b
Fix typo and remove non-retryable error code
Dec 1, 2015
3de4f8f
Initial commit of LocalResourceManagerHelper and tests
Dec 6, 2015
978fe27
Add filtering, make projects map a ConcurrentHashMap, and fix style i…
Dec 8, 2015
a670d21
Make error messages more informative, propagate all server exceptions…
Dec 10, 2015
19aa650
remove checkNotNull calls and minor fixes
Dec 10, 2015
4fdd477
minor fixes
Dec 11, 2015
937fe22
minor fix
Dec 11, 2015
ddf447f
ResourceManagerImpl + docs
Dec 2, 2015
8c12d67
Fix docs and return null if project not found by get.
Dec 16, 2015
a3bbbd2
Minor fixes
Dec 17, 2015
37b5efd
Adding ResourceManagerExample, update docs
Dec 17, 2015
b6bbf67
Include delete action + minor edits
Dec 18, 2015
1e7e152
Add retryable exceptions and fix checkstyle complaints
Dec 18, 2015
0eada88
minor fixes
Dec 18, 2015
ccbf4f4
more checkstyle fixes
Dec 19, 2015
bac6a07
minor fixes
Dec 19, 2015
b5d4bca
Update pom version in resource manager, remove noCredentials from Ser…
Dec 21, 2015
6c5a642
Minor fixes to Storage
mziccard Dec 21, 2015
b504340
Make checkstyle happy
mziccard Dec 21, 2015
637f156
Merge pull request #495 from ajkannan/merge-service
aozarov Dec 21, 2015
604b1c4
More style fixes to Storage module
mziccard Dec 21, 2015
5b1c443
Merge pull request #494 from mziccard/storage-minor
aozarov Dec 21, 2015
12a10b5
Synchronize LocalResourceManagerHelper
Dec 21, 2015
a87cd14
make protobuf methods package protected
Dec 21, 2015
62a2d71
Merge pull request #497 from ajkannan/synchronize-lrmh
ajkannan Dec 22, 2015
b2d181f
revert unnecessary changes in scope in BaseMarshaller
Dec 22, 2015
9bcebd3
Merge pull request #498 from ajkannan/hide-non-public-refs
aozarov Dec 22, 2015
d299e06
Merge branch 'master' into bigquery
mziccard Dec 22, 2015
7b1b0dc
Add BigQuery to common README.md and TESTING.md
mziccard Dec 22, 2015
3 changes: 2 additions & 1 deletion TESTING.md
@@ -55,7 +55,8 @@ Currently, there isn't an emulator for Google Cloud Storage, so an alternative i
3. Create a `RemoteGcsHelper` object using your project ID and JSON key.
Here is an example that uses the `RemoteGcsHelper` to create a bucket.
```java
-RemoteGcsHelper gcsHelper = RemoteGcsHelper.create(PROJECT_ID, "/path/to/my/JSON/key.json");
+RemoteGcsHelper gcsHelper =
+    RemoteGcsHelper.create(PROJECT_ID, new FileInputStream("/path/to/my/JSON/key.json"));
Storage storage = gcsHelper.options().service();
String bucket = RemoteGcsHelper.generateBucketName();
storage.create(BucketInfo.of(bucket));
@@ -479,8 +479,8 @@ public Tuple<String, byte[]> read(StorageObject from, Map<Option, ?> options, lo
}

@Override
-public void write(String uploadId, byte[] toWrite, int toWriteOffset, StorageObject dest,
-    long destOffset, int length, boolean last) throws StorageException {
+public void write(String uploadId, byte[] toWrite, int toWriteOffset, long destOffset, int length,
+    boolean last) throws StorageException {
try {
GenericUrl url = new GenericUrl(uploadId);
HttpRequest httpRequest = storage.getRequestFactory().buildPutRequest(url,
@@ -571,7 +571,7 @@ private RewriteResponse rewrite(RewriteRequest req, String token) throws Storage
try {
Long maxBytesRewrittenPerCall = req.megabytesRewrittenPerCall != null
? req.megabytesRewrittenPerCall * MEGABYTE : null;
-com.google.api.services.storage.model.RewriteResponse rewriteReponse = storage.objects()
+com.google.api.services.storage.model.RewriteResponse rewriteResponse = storage.objects()
.rewrite(req.source.getBucket(), req.source.getName(), req.target.getBucket(),
req.target.getName(), req.target.getContentType() != null ? req.target : null)
.setSourceGeneration(req.source.getGeneration())
@@ -590,11 +590,11 @@ private RewriteResponse rewrite(RewriteRequest req, String token) throws Storage
.execute();
return new RewriteResponse(
req,
-    rewriteReponse.getResource(),
-    rewriteReponse.getObjectSize().longValue(),
-    rewriteReponse.getDone(),
-    rewriteReponse.getRewriteToken(),
-    rewriteReponse.getTotalBytesRewritten().longValue());
+    rewriteResponse.getResource(),
+    rewriteResponse.getObjectSize().longValue(),
+    rewriteResponse.getDone(),
+    rewriteResponse.getRewriteToken(),
+    rewriteResponse.getTotalBytesRewritten().longValue());
} catch (IOException ex) {
throw translate(ex);
}
@@ -264,8 +264,8 @@ Tuple<String, byte[]> read(StorageObject from, Map<Option, ?> options, long posi

String open(StorageObject object, Map<Option, ?> options) throws StorageException;

-void write(String uploadId, byte[] toWrite, int toWriteOffset, StorageObject dest,
-    long destOffset, int length, boolean last) throws StorageException;
+void write(String uploadId, byte[] toWrite, int toWriteOffset, long destOffset, int length,
+    boolean last) throws StorageException;

RewriteResponse openRewrite(RewriteRequest rewriteRequest) throws StorageException;

@@ -73,14 +73,14 @@ protected String value() {
}

@Override
-public boolean equals(Object o) {
-  if (this == o) {
+public boolean equals(Object obj) {
+  if (this == obj) {
return true;
}
-if (o == null || getClass() != o.getClass()) {
+if (obj == null || getClass() != obj.getClass()) {
return false;
}
-Entity entity = (Entity) o;
+Entity entity = (Entity) obj;
return Objects.equals(type, entity.type) && Objects.equals(value, entity.value);
}

@@ -226,7 +226,7 @@ public static final class Project extends Entity {

private static final long serialVersionUID = 7933776866530023027L;

-private final ProjectRole pRole;
+private final ProjectRole projectRole;
private final String projectId;

public enum ProjectRole {
@@ -236,20 +236,20 @@ public enum ProjectRole {
/**
* Creates a project entity.
*
-* @param pRole a role in the project, used to select project's teams
+* @param projectRole a role in the project, used to select project's teams
* @param projectId id of the project
*/
-public Project(ProjectRole pRole, String projectId) {
-  super(Type.PROJECT, pRole.name().toLowerCase() + "-" + projectId);
-  this.pRole = pRole;
+public Project(ProjectRole projectRole, String projectId) {
+  super(Type.PROJECT, projectRole.name().toLowerCase() + "-" + projectId);
+  this.projectRole = projectRole;
this.projectId = projectId;
}

/**
* Returns the role in the project for this entity.
*/
public ProjectRole projectRole() {
-return pRole;
+return projectRole;
}

/**
@@ -275,7 +275,7 @@ String toPb() {
}

/**
-* Creats an ACL object.
+* Creates an ACL object.
*
* @param entity the entity for this ACL object
* @param role the role to associate to the {@code entity} object
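The `o` → `obj` rename in the hunks above keeps the `equals()` pattern used throughout this file: reference check, exact-class check via `getClass()`, then field-by-field comparison with `Objects.equals`. A self-contained sketch of that pattern, with an illustrative `TypedValue` class standing in for the library's `Acl.Entity`:

```java
import java.util.Objects;

// Toy stand-in for Acl.Entity: exact-class equality plus null-safe field
// comparison, with hashCode kept consistent with equals.
final class TypedValue {
  private final String type;
  private final String value;

  TypedValue(String type, String value) {
    this.type = type;
    this.value = value;
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    if (obj == null || getClass() != obj.getClass()) {
      return false; // also rejects subclasses, unlike instanceof
    }
    TypedValue other = (TypedValue) obj;
    return Objects.equals(type, other.type) && Objects.equals(value, other.value);
  }

  @Override
  public int hashCode() {
    return Objects.hash(type, value);
  }
}
```

The `getClass()` check (rather than `instanceof`) means an entity only equals another entity of exactly the same subclass, which is the behavior the diff preserves.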
@@ -113,7 +113,7 @@ static <T extends Serializable> Result<T> empty() {
}
}

-public BatchResponse(List<Result<Boolean>> deleteResult, List<Result<BlobInfo>> updateResult,
+BatchResponse(List<Result<Boolean>> deleteResult, List<Result<BlobInfo>> updateResult,
List<Result<BlobInfo>> getResult) {
this.deleteResult = ImmutableList.copyOf(deleteResult);
this.updateResult = ImmutableList.copyOf(updateResult);
@@ -39,8 +39,7 @@
/**
* A Google cloud storage object.
*
-<p>
-Objects of this class are immutable. Operations that modify the blob like {@link #update} and
+<p>Objects of this class are immutable. Operations that modify the blob like {@link #update} and
* {@link #copyTo} return a new object. To get a {@code Blob} object with the most recent
* information use {@link #reload}.
* </p>
@@ -239,13 +238,13 @@ public Blob reload(BlobSourceOption... options) {
* made on the metadata generation of the current blob. If you want to update the information only
* if the current blob metadata are at their latest version use the {@code metagenerationMatch}
* option: {@code blob.update(newInfo, BlobTargetOption.metagenerationMatch())}.
-<p>
-Original metadata are merged with metadata in the provided {@code blobInfo}. To replace
+
+<p>Original metadata are merged with metadata in the provided {@code blobInfo}. To replace
* metadata instead you first have to unset them. Unsetting metadata can be done by setting the
* provided {@code blobInfo}'s metadata to {@code null}.
* </p>
-<p>
-Example usage of replacing blob's metadata:
+
+<p>Example usage of replacing blob's metadata:
* <pre> {@code blob.update(blob.info().toBuilder().metadata(null).build());}
* {@code blob.update(blob.info().toBuilder().metadata(newMetadata).build());}
* </pre>
@@ -261,6 +260,17 @@ public Blob update(BlobInfo blobInfo, BlobTargetOption... options) {
return new Blob(storage, storage.update(blobInfo, options));
}

+/**
+ * Deletes this blob.
+ *
+ * @param options blob delete options
+ * @return {@code true} if blob was deleted, {@code false} if it was not found
+ * @throws StorageException upon failure
+ */
+public boolean delete(BlobSourceOption... options) {
+  return storage.delete(info.blobId(), toSourceOptions(info, options));
+}

/**
* Sends a copy request for the current blob to the target blob. Possibly also some of the
* metadata are copied (e.g. content-type).
@@ -277,17 +287,6 @@ public CopyWriter copyTo(BlobId targetBlob, BlobSourceOption... options) {
return storage.copy(copyRequest);
}

-/**
- * Deletes this blob.
- *
- * @param options blob delete options
- * @return {@code true} if blob was deleted, {@code false} if it was not found
- * @throws StorageException upon failure
- */
-public boolean delete(BlobSourceOption... options) {
-  return storage.delete(info.blobId(), toSourceOptions(info, options));
-}

/**
* Sends a copy request for the current blob to the target bucket, preserving its name. Possibly
* copying also some of the metadata (e.g. content-type).
@@ -381,8 +380,8 @@ public static List<Blob> get(final Storage storage, BlobId... blobs) {
return Collections.unmodifiableList(Lists.transform(storage.get(blobs),
new Function<BlobInfo, Blob>() {
@Override
-public Blob apply(BlobInfo f) {
-  return f != null ? new Blob(storage, f) : null;
+public Blob apply(BlobInfo blobInfo) {
+  return blobInfo != null ? new Blob(storage, blobInfo) : null;
}
}));
}
@@ -410,8 +409,8 @@ public static List<Blob> update(final Storage storage, BlobInfo... infos) {
return Collections.unmodifiableList(Lists.transform(storage.update(infos),
new Function<BlobInfo, Blob>() {
@Override
-public Blob apply(BlobInfo f) {
-  return f != null ? new Blob(storage, f) : null;
+public Blob apply(BlobInfo blobInfo) {
+  return blobInfo != null ? new Blob(storage, blobInfo) : null;
}
}));
}
@@ -277,7 +277,7 @@ Builder mediaLink(String mediaLink) {
*/
public Builder metadata(Map<String, String> metadata) {
this.metadata = metadata != null
-    ? new HashMap(metadata) : Data.<Map>nullOf(ImmutableEmptyMap.class);
+    ? new HashMap<>(metadata) : Data.<Map<String, String>>nullOf(ImmutableEmptyMap.class);
return this;
}

@@ -576,8 +576,9 @@ public ObjectAccessControl apply(Acl acl) {
Map<String, String> pbMetadata = metadata;
if (metadata != null && !Data.isNull(metadata)) {
pbMetadata = Maps.newHashMapWithExpectedSize(metadata.size());
-for (String key : metadata.keySet()) {
-  pbMetadata.put(key, firstNonNull(metadata.get(key), Data.<String>nullOf(String.class)));
+for (Map.Entry<String, String> entry : metadata.entrySet()) {
+  pbMetadata.put(entry.getKey(),
+      firstNonNull(entry.getValue(), Data.<String>nullOf(String.class)));
}
}
storageObject.setMetadata(pbMetadata);
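The `keySet()` → `entrySet()` change above reads each key and value in a single pass instead of doing a `metadata.get(key)` lookup per key, and typing the copy as `HashMap<String, String>` instead of a raw `HashMap` restores compile-time element checks. A minimal sketch of the same idea on a plain JDK map; `copyWithPlaceholder` and the `"<null>"` sentinel are illustrative, standing in for the real code's `Maps.newHashMapWithExpectedSize` and `Data.<String>nullOf(String.class)`:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Copies a metadata map, replacing null values with a sentinel, by walking
// entrySet() so each entry is touched exactly once.
class MetadataCopy {
  static Map<String, String> copyWithPlaceholder(Map<String, String> metadata) {
    Map<String, String> copy = new LinkedHashMap<>();
    for (Map.Entry<String, String> entry : metadata.entrySet()) {
      String value = entry.getValue();
      copy.put(entry.getKey(), value != null ? value : "<null>");
    }
    return copy;
  }
}
```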
@@ -26,8 +26,10 @@
/**
* A channel for reading data from a Google Cloud Storage object.
*
-Implementations of this class may buffer data internally to reduce remote calls. This interface
-implements {@link Restorable} to allow saving the reader's state to continue reading afterwards.
+<p>Implementations of this class may buffer data internally to reduce remote calls. This
+interface implements {@link Restorable} to allow saving the reader's state to continue reading
+afterwards.
+</p>
*/
public interface BlobReadChannel extends ReadableByteChannel, Closeable,
Restorable<BlobReadChannel> {
@@ -25,9 +25,10 @@
/**
* A channel for writing data to a Google Cloud Storage object.
*
-Implementations of this class may further buffer data internally to reduce remote calls. Written
-data will only be visible after calling {@link #close()}. This interface implements
+<p>Implementations of this class may further buffer data internally to reduce remote calls.
+Written data will only be visible after calling {@link #close()}. This interface implements
* {@link Restorable} to allow saving the writer's state to continue writing afterwards.
+</p>
*/
public interface BlobWriteChannel extends WritableByteChannel, Closeable,
Restorable<BlobWriteChannel> {
@@ -91,7 +91,7 @@ private void flush() {
runWithRetries(callable(new Runnable() {
@Override
public void run() {
-storageRpc.write(uploadId, buffer, 0, storageObject, position, length, false);
+storageRpc.write(uploadId, buffer, 0, position, length, false);
}
}), options.retryParams(), StorageImpl.EXCEPTION_HANDLER);
} catch (RetryHelper.RetryHelperException e) {
@@ -139,7 +139,7 @@ public void close() throws IOException {
runWithRetries(callable(new Runnable() {
@Override
public void run() {
-storageRpc.write(uploadId, buffer, 0, storageObject, position, limit, true);
+storageRpc.write(uploadId, buffer, 0, position, limit, true);
}
}), options.retryParams(), StorageImpl.EXCEPTION_HANDLER);
} catch (RetryHelper.RetryHelperException e) {
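Both hunks above keep the rpc call wrapped in `runWithRetries(callable(new Runnable() {...}), ...)`. A toy version of that control flow, where `ToyRetry.runWithRetries` and the plain `maxAttempts` counter are illustrative stand-ins for the library's `RetryHelper` and `RetryParams` (the real helper also applies backoff and exception classification):

```java
// Retries a side-effecting task until it succeeds or attempts run out;
// the last failure is rethrown. This mirrors only the shape of the
// flush()/close() hunks above, not the library's actual retry policy.
class ToyRetry {
  static int attemptsUsed; // exposed for illustration only

  static void runWithRetries(Runnable task, int maxAttempts) {
    RuntimeException last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      attemptsUsed = attempt;
      try {
        task.run();
        return; // success: stop retrying
      } catch (RuntimeException e) {
        last = e; // remember the failure and try again
      }
    }
    throw last;
  }
}
```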
@@ -46,8 +46,7 @@
/**
* A Google cloud storage bucket.
*
-<p>
-Objects of this class are immutable. Operations that modify the bucket like {@link #update}
+<p>Objects of this class are immutable. Operations that modify the bucket like {@link #update}
* return a new object. To get a {@code Bucket} object with the most recent information use
* {@link #reload}.
* </p>
@@ -72,7 +71,7 @@ private static class BlobPageFetcher implements PageImpl.NextPageFetcher<Blob> {
@Override
public Page<Blob> nextPage() {
Page<BlobInfo> nextInfoPage = infoPage.nextPage();
-return new PageImpl<Blob>(new BlobPageFetcher(options, nextInfoPage),
+return new PageImpl<>(new BlobPageFetcher(options, nextInfoPage),
nextInfoPage.nextPageCursor(), new LazyBlobIterable(options, nextInfoPage.values()));
}
}
@@ -55,8 +55,8 @@ public class CopyWriter implements Restorable<CopyWriter> {
/**
* Returns the updated information for the written blob. Calling this method when {@code isDone()}
* is {@code false} will block until all pending chunks are copied.
-<p>
-This method has the same effect of doing:
+
+<p>This method has the same effect of doing:
* <pre> {@code while (!copyWriter.isDone()) {
* copyWriter.copyChunk();
* }}
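The reflowed javadoc above describes `result()` as equivalent to looping `copyChunk()` until `isDone()` reports true. A runnable toy with the same shape; `ToyCopyWriter` is an illustrative stand-in that copies a byte array in fixed-size chunks, whereas the real `CopyWriter` issues a rewrite RPC per chunk:

```java
import java.util.Arrays;

// Mimics the while (!isDone()) { copyChunk(); } loop from the
// CopyWriter.result() docs on an in-memory byte array.
class ToyCopyWriter {
  private final byte[] source;
  private final byte[] target;
  private final int chunkSize;
  private int copied;

  ToyCopyWriter(byte[] source, int chunkSize) {
    this.source = source;
    this.target = new byte[source.length];
    this.chunkSize = chunkSize;
  }

  boolean isDone() {
    return copied >= source.length;
  }

  void copyChunk() {
    int n = Math.min(chunkSize, source.length - copied);
    System.arraycopy(source, copied, target, copied, n);
    copied += n;
  }

  byte[] result() {
    while (!isDone()) { // drain all pending chunks before returning
      copyChunk();
    }
    return target;
  }
}
```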
@@ -25,7 +25,7 @@
import java.util.Objects;

/**
-* Base class for Storage operation option
+* Base class for Storage operation option.
*/
class Option implements Serializable {
