TensorBoard 2.2.2 #3652

Merged: 94 commits, May 27, 2020

Changes from 1 commit

Commits (94)
0fc8e68
uploader: inline graph filtering from `dataclass_compat` (#3510)
wchargin Apr 14, 2020
de9f5e2
upgrade handlebars 4.7.3 -> 4.7.6 to avoid optimist dep (#3517)
nfelt Apr 14, 2020
0952ce7
Update description and landing page video (#3521)
kristenrq Apr 15, 2020
e28a009
Disable auto reload if page is not visible in ng_tensorboard (#3498)
bmd3k Apr 15, 2020
2fba10b
[DebuggerV2] Add basic store support and component for graph executio…
caisq Apr 15, 2020
49bc91f
Update README with TensorBoard.dev graph info (#3523)
bmd3k Apr 16, 2020
cc46e15
backend: move compat transforms to event file loading (#3511)
wchargin Apr 16, 2020
94d35cd
compat: avoid tf.__version__ in tf2 API lazy loader (#3525)
nfelt Apr 16, 2020
33ac378
dataclass_compat: track initial tag metadata (#3512)
wchargin Apr 17, 2020
83ccb6a
audio: deprecate labels and stop reading them (#3500)
wchargin Apr 17, 2020
0af1980
audio: add generic data support (#3514)
wchargin Apr 17, 2020
17b7413
util: add `timing.log_latency` (#3490)
wchargin Apr 17, 2020
4633d6e
text: batch HTML sanitization (#3529)
wchargin Apr 18, 2020
fe86e50
Add the missing `route-prefix` property to `<vz-projector-app>` (#3531)
dsmilkov Apr 20, 2020
60d7dc8
docs: add projector plugin colab (#3423)
hfiller Apr 20, 2020
887c5e8
Upgrade min Bazel required and upgrade CI to use 2.1 (#3239)
stephanwlee Apr 21, 2020
6e7a43a
[DebuggerV2] Flesh out graph execution data display (#3528)
caisq Apr 21, 2020
4e129b2
ng: expose core initialState as public (#3533)
stephanwlee Apr 21, 2020
0ea6284
Bump Bazel minimum version to 2.1.0 to match CI (#3535)
nfelt Apr 21, 2020
a3a37f4
[infra] Upgrade FE dependencies (#3232)
stephanwlee Apr 22, 2020
e09449f
Refactor tests for uploader._ScalarBatchedRequestSender (#3532)
bmd3k Apr 22, 2020
22326ab
improve rerender of vz_line_chart (#3524)
stephanwlee Apr 22, 2020
e31e443
Remove standalone uploader binary target (#3538)
bmd3k Apr 22, 2020
10f23bf
Remove functionality of legacy DB mode (#3539)
wchargin Apr 23, 2020
c5d8f6b
Refactor logic common to send Tensors and Scalars (#3537)
bmd3k Apr 23, 2020
7433137
scalars: remove unused tooltip column DOM (#3546)
stephanwlee Apr 24, 2020
4861f77
[DebuggerV2] Revise and re-enable debugger_v2_plugin_test for Const o…
caisq Apr 24, 2020
64cdae6
[DebuggerV2] Use more efficient read in /execution/data and /graph_ex…
caisq Apr 24, 2020
5c77891
[DebuggerV2] Display detailed tensor debug-values in graph- and eager…
caisq Apr 27, 2020
3d4d160
[DebuggerV2] Highlight stack frame being shown in SourceCodeComponent…
caisq Apr 27, 2020
c33d4d2
Partially revert "improve rerender of vz_line_chart (#3524)" (#3552)
wchargin Apr 27, 2020
45d192b
pr_curves: compute available time entries on client (#3553)
wchargin Apr 28, 2020
089349d
[DebuggerV2] Start DebugDataMultiplexer reading at plugin creation (#…
caisq Apr 28, 2020
8a9ff2a
pr_curves: remove dead `_create_time_entry` helper (#3555)
wchargin Apr 28, 2020
436c86e
pr_curves: add generic data support (#3556)
wchargin Apr 28, 2020
129cb61
Support writing tensors in uploader (#3545)
bmd3k Apr 28, 2020
0109de7
pr_curves: compute `num_thresholds` from data size (#3558)
wchargin Apr 28, 2020
13306f6
Stabilize generic data for plugins at parity (#3559)
wchargin Apr 29, 2020
540333d
Use bare raise for reraising _OutOfSpaceError (#3569)
bmd3k Apr 30, 2020
28499ae
build: lock down default visibilities (#3566)
wchargin Apr 30, 2020
3fd687a
build: use `//tensorboard:internal` for debugger (#3567)
wchargin Apr 30, 2020
2599625
build: lock down visibility for targets with no rdeps (#3568)
wchargin Apr 30, 2020
868267d
lib: clarify extents of public API (#3565)
wchargin Apr 30, 2020
33822f1
ng: fix infinite loop when on plugin selector (#3563)
stephanwlee Apr 30, 2020
ed82249
[DebuggerV2] Remove a problematic assertion in debugger_v2_plugin_tes…
caisq May 1, 2020
f92cf65
DataFrame API: Initial implementation of ExperimentFromDev (#3560)
caisq May 1, 2020
1587d9a
Factor tag type constants out of event accumulator (#3578)
wchargin May 1, 2020
daa9315
Remove some non-critical uses of TensorFlow (#3576)
wchargin May 1, 2020
2e0a2aa
Allow server to specify maximum tensor upload size (#3575)
bmd3k May 4, 2020
ae51026
Don't require max_tensor_point_size to be set (#3581)
bmd3k May 4, 2020
ab8d2ac
Remove unused vz_line_chart (#3572)
stephanwlee May 4, 2020
849c052
uploader: implement tensor exporting (#3577)
caisq May 4, 2020
f1992e2
[DebuggerV2] Implement HTTP route /graphs/op_info (#3564)
caisq May 5, 2020
f882d40
[DebuggerV2] Clicking on Inf/NaN alerts scrolls to the 1st graph exec…
caisq May 5, 2020
326ba5c
ng: set active plugin to first enabled plugin (#3584)
stephanwlee May 5, 2020
34a34ab
uploader: display per-experiments tensor bytes in `dev list` output (…
caisq May 5, 2020
1dc560b
ng: no active plugin warning page (#3587)
stephanwlee May 5, 2020
e73c5e6
ng: add plugin-id on tab for testability (#3591)
stephanwlee May 6, 2020
b6cb343
diagnose: warn on broken What-If Tool version (#3593)
wchargin May 6, 2020
d135467
[DebuggerV2] Stub out GraphComponent in the frontend (#3595)
caisq May 7, 2020
4db0e43
vulc: fix mangled DOM due to a bug in the printer (#3582)
stephanwlee May 7, 2020
6274951
Implement commitChanges on vz-line-chart2 (#3571)
stephanwlee May 7, 2020
3936a75
Cache bleach.Cleaner for faster conversion of markdown to safe html (…
bmd3k May 7, 2020
fee784c
[DebuggerV2] Cleanup: properly implement angular OnChanges interface …
caisq May 7, 2020
8c701a7
ng: remove deprecated `get` and use `inject` (#3603)
stephanwlee May 7, 2020
73f990a
Specify injectable as injectable. (#3602)
stephanwlee May 7, 2020
9dfde6c
vulc: fix pretty print messing up whitespace (#3604)
stephanwlee May 7, 2020
bf24c4f
[Debugger] Replace some usage of innerHTML with innerText (#3606)
caisq May 8, 2020
c73ecdf
cleanup: add @Override on interface impl (#3601)
stephanwlee May 8, 2020
3eeda56
[DebuggerV2] Add backend route /graphs/graph_info (#3600)
caisq May 8, 2020
9e0662d
[DebuggerV2] A few improvements to the /graphs/op_info route (#3594)
caisq May 8, 2020
022ff51
profile: suggest canonical Pip package name (#3617)
wchargin May 11, 2020
26e1f99
Updating documentation (#3618)
jindalshivam09 May 12, 2020
544eb1c
Fix projector plugin links in docs (#3561)
ricmatsui May 12, 2020
747796e
Freshen notebooks. (#3613)
MarkDaoust May 12, 2020
c19da92
Update keras_util_test with explicit model name (#3620)
qlzh727 May 12, 2020
7cfaa4d
uploader: validate tensors before uploading them (#3624)
caisq May 13, 2020
6a1a7fb
boto3 1.13.1 returns descriptive range error code (#3609)
ahirner May 13, 2020
7ca1c59
Expand UploadLimits proto and use it in TensorBoardUploader (#3625)
bmd3k May 13, 2020
fa85fa8
Fix doc generation failure due to JSON lint error (trailing comma) (#…
davidsoergel May 13, 2020
b9d8f54
[DebuggerV2] Add ngrx states, data source and selectors for graph ops…
caisq May 13, 2020
93fb550
[DebuggerV2] Refactor selectors to use intermediate selectors (#3619)
caisq May 14, 2020
bb5d533
Uploader uses entire UploadLimits from handshake. (#3631)
bmd3k May 14, 2020
95dc1f5
[DebuggerV2] Add basic reducers for graph ops (#3630)
caisq May 15, 2020
c14b147
Update broken link to r1 estimators tutorial (#3638)
wolffg May 16, 2020
bcf7e52
cleanup: fix undeclared and incorrect BUILD deps (#3641)
nfelt May 18, 2020
362a405
Migrate _parse_samples_per_plugin() into flag definition itself (#3637)
nfelt May 18, 2020
6b7d4da
perf: apply layout/layer bound where possible (#3642)
stephanwlee May 18, 2020
b3bf1d7
uploader: add rpc method GetExperiment to ExporterService (#3614)
caisq May 18, 2020
e2db93a
DataFrame: Flatten MultiIndex; Set include_wall_time=False (#3605)
caisq May 18, 2020
104e137
Update Travis to use TensorFlow 2.2.0
caisq May 19, 2020
990aa65
Disable debugger_v2_plugin_test
caisq May 20, 2020
0a86dba
Add 2.2.2 relnotes to RELEASE.md
caisq May 19, 2020
cfa9111
TensorBoard 2.2.2
caisq May 19, 2020
Refactor tests for uploader._ScalarBatchedRequestSender (#3532)
Identify the tests in uploader_test.py that specifically target logic in _ScalarBatchedRequestSender, and rewrite them to exercise _ScalarBatchedRequestSender directly rather than going through the other layers in uploader.py.

This is useful groundwork for _TensorBatchedRequestSender: the existing _ScalarBatchedRequestSender tests were difficult to use as a pattern for testing _TensorBatchedRequestSender.
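The testing pattern this commit moves toward can be sketched as follows. This is a minimal, hypothetical analogue, not the real TensorBoard code: `ScalarBatchedSender` below is a toy stand-in for `uploader_lib._ScalarBatchedRequestSender`, showing the idea of constructing the batching layer directly with a mock RPC client, feeding it events, flushing, and asserting on the captured requests, with no uploader layers in between.

```python
from unittest import mock


class ScalarBatchedSender:
    """Toy stand-in for the batched request sender under test."""

    def __init__(self, experiment_id, api):
        self._experiment_id = experiment_id
        self._api = api  # mock RPC client injected by the test
        self._points = []

    def add_event(self, run_name, step, value):
        self._points.append((run_name, step, value))

    def flush(self):
        # Send all buffered points as a single request.
        if self._points:
            self._api.WriteScalar(
                {
                    "experiment_id": self._experiment_id,
                    "points": list(self._points),
                }
            )
            self._points.clear()


def test_sender_batches_points_into_one_request():
    mock_client = mock.Mock()
    sender = ScalarBatchedSender(experiment_id="123", api=mock_client)
    sender.add_event("train", step=1, value=11.0)
    sender.add_event("train", step=2, value=33.0)
    sender.flush()
    # Inspect the requests captured by the mock, as the real tests do.
    requests = [c[0][0] for c in mock_client.WriteScalar.call_args_list]
    assert len(requests) == 1
    assert requests[0]["points"] == [("train", 1, 11.0), ("train", 2, 33.0)]
```

Because the sender is constructed with an injected mock client, each test exercises only the batching layer, which is what makes the pattern reusable for a `_TensorBatchedRequestSender` with a different payload type.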
bmd3k authored and caisq committed May 19, 2020

commit e09449f8302795b04ecbde7c436dda7b4fc607f9
244 changes: 119 additions & 125 deletions tensorboard/uploader/uploader_test.py
@@ -165,6 +165,18 @@ def _create_request_sender(
)


def _create_scalar_request_sender(
experiment_id=None, api=None,
):
if api is _USE_DEFAULT:
api = _create_mock_client()
return uploader_lib._ScalarBatchedRequestSender(
experiment_id=experiment_id,
api=api,
rpc_rate_limiter=util.RateLimiter(0),
)


class TensorboardUploaderTest(tf.test.TestCase):
def test_create_experiment(self):
logdir = "/logs/foo"
@@ -564,42 +576,6 @@ def test_upload_swallows_rpc_failure(self):
uploader._upload_once()
mock_client.WriteScalar.assert_called_once()

def test_upload_propagates_experiment_deletion(self):
logdir = self.get_temp_dir()
with tb_test_util.FileWriter(logdir) as writer:
writer.add_test_summary("foo")
mock_client = _create_mock_client()
uploader = _create_uploader(mock_client, logdir)
uploader.create_experiment()
error = test_util.grpc_error(grpc.StatusCode.NOT_FOUND, "nope")
mock_client.WriteScalar.side_effect = error
with self.assertRaises(uploader_lib.ExperimentNotFoundError):
uploader._upload_once()

def test_upload_preserves_wall_time(self):
logdir = self.get_temp_dir()
with tb_test_util.FileWriter(logdir) as writer:
# Add a raw event so we can specify the wall_time value deterministically.
writer.add_event(
event_pb2.Event(
step=1,
wall_time=123.123123123,
summary=scalar_v2.scalar_pb("foo", 5.0),
)
)
mock_client = _create_mock_client()
uploader = _create_uploader(mock_client, logdir)
uploader.create_experiment()
uploader._upload_once()
mock_client.WriteScalar.assert_called_once()
request = mock_client.WriteScalar.call_args[0][0]
# Just check the wall_time value; everything else is covered in the full
# logdir test below.
self.assertEqual(
123123123123,
request.runs[0].tags[0].points[0].wall_time.ToNanoseconds(),
)

def test_upload_full_logdir(self):
logdir = self.get_temp_dir()
mock_client = _create_mock_client()
@@ -735,41 +711,6 @@ def test_empty_events(self):
run_proto, write_service_pb2.WriteScalarRequest.Run()
)

def test_aggregation_by_tag(self):
def make_event(step, wall_time, tag, value):
return event_pb2.Event(
step=step,
wall_time=wall_time,
summary=scalar_v2.scalar_pb(tag, value),
)

events = [
make_event(1, 1.0, "one", 11.0),
make_event(1, 2.0, "two", 22.0),
make_event(2, 3.0, "one", 33.0),
make_event(2, 4.0, "two", 44.0),
make_event(
1, 5.0, "one", 55.0
), # Should preserve duplicate step=1.
make_event(1, 6.0, "three", 66.0),
]
run_proto = write_service_pb2.WriteScalarRequest.Run()
self._populate_run_from_events(run_proto, events)
tag_data = {
tag.name: [
(p.step, p.wall_time.ToSeconds(), p.value) for p in tag.points
]
for tag in run_proto.tags
}
self.assertEqual(
tag_data,
{
"one": [(1, 1.0, 11.0), (2, 3.0, 33.0), (1, 5.0, 55.0)],
"two": [(1, 2.0, 22.0), (2, 4.0, 44.0)],
"three": [(1, 6.0, 66.0)],
},
)

def test_skips_non_scalar_events(self):
events = [
event_pb2.Event(file_version="brain.Event:2"),
@@ -838,9 +779,11 @@ def test_remembers_first_metadata_in_scalar_time_series(self):
tag_counts = {tag.name: len(tag.points) for tag in run_proto.tags}
self.assertEqual(tag_counts, {"loss": 2})

def test_v1_summary_single_value(self):
def test_expands_multiple_values_in_event(self):
event = event_pb2.Event(step=1, wall_time=123.456)
event.summary.value.add(tag="foo", simple_value=5.0)
event.summary.value.add(tag="foo", simple_value=1.0)
event.summary.value.add(tag="foo", simple_value=2.0)
event.summary.value.add(tag="foo", simple_value=3.0)
run_proto = write_service_pb2.WriteScalarRequest.Run()
self._populate_run_from_events(run_proto, [event])
expected_run_proto = write_service_pb2.WriteScalarRequest.Run()
@@ -850,31 +793,82 @@ def test_v1_summary_single_value(self):
foo_tag.metadata.plugin_data.plugin_name = "scalars"
foo_tag.metadata.data_class = summary_pb2.DATA_CLASS_SCALAR
foo_tag.points.add(
step=1, wall_time=test_util.timestamp_pb(123456000000), value=5.0
step=1, wall_time=test_util.timestamp_pb(123456000000), value=1.0
)
foo_tag.points.add(
step=1, wall_time=test_util.timestamp_pb(123456000000), value=2.0
)
foo_tag.points.add(
step=1, wall_time=test_util.timestamp_pb(123456000000), value=3.0
)
self.assertProtoEquals(run_proto, expected_run_proto)

def test_v1_summary_multiple_value(self):

class ScalarBatchedRequestSenderTest(tf.test.TestCase):
def _add_events(self, sender, run_name, events):
for event in events:
for value in event.summary.value:
sender.add_event(run_name, event, value, value.metadata)

def _add_events_and_flush(self, events):
mock_client = _create_mock_client()
sender = _create_scalar_request_sender(
experiment_id="123", api=mock_client,
)
self._add_events(sender, "", events)
sender.flush()

requests = [c[0][0] for c in mock_client.WriteScalar.call_args_list]
self.assertLen(requests, 1)
self.assertLen(requests[0].runs, 1)
return requests[0].runs[0]

def test_aggregation_by_tag(self):
def make_event(step, wall_time, tag, value):
return event_pb2.Event(
step=step,
wall_time=wall_time,
summary=scalar_v2.scalar_pb(tag, value),
)

events = [
make_event(1, 1.0, "one", 11.0),
make_event(1, 2.0, "two", 22.0),
make_event(2, 3.0, "one", 33.0),
make_event(2, 4.0, "two", 44.0),
make_event(
1, 5.0, "one", 55.0
), # Should preserve duplicate step=1.
make_event(1, 6.0, "three", 66.0),
]
run_proto = self._add_events_and_flush(events)
tag_data = {
tag.name: [
(p.step, p.wall_time.ToSeconds(), p.value) for p in tag.points
]
for tag in run_proto.tags
}
self.assertEqual(
tag_data,
{
"one": [(1, 1.0, 11.0), (2, 3.0, 33.0), (1, 5.0, 55.0)],
"two": [(1, 2.0, 22.0), (2, 4.0, 44.0)],
"three": [(1, 6.0, 66.0)],
},
)

def test_v1_summary(self):
event = event_pb2.Event(step=1, wall_time=123.456)
event.summary.value.add(tag="foo", simple_value=1.0)
event.summary.value.add(tag="foo", simple_value=2.0)
event.summary.value.add(tag="foo", simple_value=3.0)
run_proto = write_service_pb2.WriteScalarRequest.Run()
self._populate_run_from_events(run_proto, [event])
event.summary.value.add(tag="foo", simple_value=5.0)
run_proto = self._add_events_and_flush(_apply_compat([event]))
expected_run_proto = write_service_pb2.WriteScalarRequest.Run()
foo_tag = expected_run_proto.tags.add()
foo_tag.name = "foo"
foo_tag.metadata.display_name = "foo"
foo_tag.metadata.plugin_data.plugin_name = "scalars"
foo_tag.metadata.data_class = summary_pb2.DATA_CLASS_SCALAR
foo_tag.points.add(
step=1, wall_time=test_util.timestamp_pb(123456000000), value=1.0
)
foo_tag.points.add(
step=1, wall_time=test_util.timestamp_pb(123456000000), value=2.0
)
foo_tag.points.add(
step=1, wall_time=test_util.timestamp_pb(123456000000), value=3.0
step=1, wall_time=test_util.timestamp_pb(123456000000), value=5.0
)
self.assertProtoEquals(run_proto, expected_run_proto)

@@ -884,8 +878,7 @@ def test_v1_summary_tb_summary(self):
tf_summary.SerializeToString()
)
event = event_pb2.Event(step=1, wall_time=123.456, summary=tb_summary)
run_proto = write_service_pb2.WriteScalarRequest.Run()
self._populate_run_from_events(run_proto, [event])
run_proto = self._add_events_and_flush(_apply_compat([event]))
expected_run_proto = write_service_pb2.WriteScalarRequest.Run()
foo_tag = expected_run_proto.tags.add()
foo_tag.name = "foo/scalar_summary"
@@ -901,8 +894,7 @@ def test_v2_summary(self):
event = event_pb2.Event(
step=1, wall_time=123.456, summary=scalar_v2.scalar_pb("foo", 5.0)
)
run_proto = write_service_pb2.WriteScalarRequest.Run()
self._populate_run_from_events(run_proto, [event])
run_proto = self._add_events_and_flush(_apply_compat([event]))
expected_run_proto = write_service_pb2.WriteScalarRequest.Run()
foo_tag = expected_run_proto.tags.add()
foo_tag.name = "foo"
@@ -913,16 +905,26 @@ def test_v2_summary(self):
)
self.assertProtoEquals(run_proto, expected_run_proto)

def test_propagates_experiment_deletion(self):
event = event_pb2.Event(step=1)
event.summary.value.add(tag="foo", simple_value=1.0)

mock_client = _create_mock_client()
sender = _create_scalar_request_sender("123", mock_client)
self._add_events(sender, "run", _apply_compat([event]))

error = test_util.grpc_error(grpc.StatusCode.NOT_FOUND, "nope")
mock_client.WriteScalar.side_effect = error
with self.assertRaises(uploader_lib.ExperimentNotFoundError):
sender.flush()

def test_no_budget_for_experiment_id(self):
mock_client = _create_mock_client()
event = event_pb2.Event(step=1, wall_time=123.456)
event.summary.value.add(tag="foo", simple_value=1.0)
run_to_events = {"run_name": [event]}
long_experiment_id = "A" * uploader_lib._MAX_REQUEST_LENGTH_BYTES
mock_client = _create_mock_client()
with self.assertRaises(RuntimeError) as cm:
builder = _create_request_sender(long_experiment_id, mock_client)
builder.send_requests(run_to_events)
_create_scalar_request_sender(
experiment_id=long_experiment_id, api=mock_client,
)
self.assertEqual(
str(cm.exception), "Byte budget too small for experiment ID"
)
@@ -932,10 +934,9 @@ def test_no_room_for_single_point(self):
event = event_pb2.Event(step=1, wall_time=123.456)
event.summary.value.add(tag="foo", simple_value=1.0)
long_run_name = "A" * uploader_lib._MAX_REQUEST_LENGTH_BYTES
run_to_events = {long_run_name: _apply_compat([event])}
with self.assertRaises(RuntimeError) as cm:
builder = _create_request_sender("123", mock_client)
builder.send_requests(run_to_events)
sender = _create_scalar_request_sender("123", mock_client)
self._add_events(sender, long_run_name, [event])
self.assertEqual(str(cm.exception), "add_event failed despite flush")

@mock.patch.object(uploader_lib, "_MAX_REQUEST_LENGTH_BYTES", 1024)
@@ -948,20 +949,17 @@ def test_break_at_run_boundary(self):
event_1.summary.value.add(tag="foo", simple_value=1.0)
event_2 = event_pb2.Event(step=2)
event_2.summary.value.add(tag="bar", simple_value=-2.0)
run_to_events = collections.OrderedDict(
[
(long_run_1, _apply_compat([event_1])),
(long_run_2, _apply_compat([event_2])),
]
)

builder = _create_request_sender("123", mock_client)
builder.send_requests(run_to_events)
sender = _create_scalar_request_sender("123", mock_client)
self._add_events(sender, long_run_1, _apply_compat([event_1]))
self._add_events(sender, long_run_2, _apply_compat([event_2]))
sender.flush()
requests = [c[0][0] for c in mock_client.WriteScalar.call_args_list]

for request in requests:
_clear_wall_times(request)

# Expect two RPC calls despite a single explicit call to flush().
expected = [
write_service_pb2.WriteScalarRequest(experiment_id="123"),
write_service_pb2.WriteScalarRequest(experiment_id="123"),
@@ -990,14 +988,15 @@ def test_break_at_tag_boundary(self):
event = event_pb2.Event(step=1)
event.summary.value.add(tag=long_tag_1, simple_value=1.0)
event.summary.value.add(tag=long_tag_2, simple_value=2.0)
run_to_events = {"train": _apply_compat([event])}

builder = _create_request_sender("123", mock_client)
builder.send_requests(run_to_events)
sender = _create_scalar_request_sender("123", mock_client)
self._add_events(sender, "train", _apply_compat([event]))
sender.flush()
requests = [c[0][0] for c in mock_client.WriteScalar.call_args_list]
for request in requests:
_clear_wall_times(request)

# Expect two RPC calls despite a single explicit call to flush().
expected = [
write_service_pb2.WriteScalarRequest(experiment_id="123"),
write_service_pb2.WriteScalarRequest(experiment_id="123"),
@@ -1030,10 +1029,10 @@ def test_break_at_scalar_point_boundary(self):
if step > 0:
summary.value[0].ClearField("metadata")
events.append(event_pb2.Event(summary=summary, step=step))
run_to_events = {"train": _apply_compat(events)}

builder = _create_request_sender("123", mock_client)
builder.send_requests(run_to_events)
sender = _create_scalar_request_sender("123", mock_client)
self._add_events(sender, "train", _apply_compat(events))
sender.flush()
requests = [c[0][0] for c in mock_client.WriteScalar.call_args_list]
for request in requests:
_clear_wall_times(request)
@@ -1064,12 +1063,6 @@ def test_prunes_tags_and_runs(self):
event_1.summary.value.add(tag="foo", simple_value=1.0)
event_2 = event_pb2.Event(step=2)
event_2.summary.value.add(tag="bar", simple_value=-2.0)
run_to_events = collections.OrderedDict(
[
("train", _apply_compat([event_1])),
("test", _apply_compat([event_2])),
]
)

real_create_point = (
uploader_lib._ScalarBatchedRequestSender._create_point
@@ -1090,8 +1083,10 @@ def mock_create_point(uploader_self, *args, **kwargs):
"_create_point",
mock_create_point,
):
builder = _create_request_sender("123", mock_client)
builder.send_requests(run_to_events)
sender = _create_scalar_request_sender("123", mock_client)
self._add_events(sender, "train", _apply_compat([event_1]))
self._add_events(sender, "test", _apply_compat([event_2]))
sender.flush()
requests = [c[0][0] for c in mock_client.WriteScalar.call_args_list]
for request in requests:
_clear_wall_times(request)
@@ -1116,15 +1111,14 @@ def mock_create_point(uploader_self, *args, **kwargs):

def test_wall_time_precision(self):
# Test a wall time that is exactly representable in float64 but has enough
# digits to incur error if converted to nanonseconds the naive way (* 1e9).
# digits to incur error if converted to nanoseconds the naive way (* 1e9).
event1 = event_pb2.Event(step=1, wall_time=1567808404.765432119)
event1.summary.value.add(tag="foo", simple_value=1.0)
# Test a wall time where as a float64, the fractional part on its own will
# introduce error if truncated to 9 decimal places instead of rounded.
event2 = event_pb2.Event(step=2, wall_time=1.000000002)
event2.summary.value.add(tag="foo", simple_value=2.0)
run_proto = write_service_pb2.WriteScalarRequest.Run()
self._populate_run_from_events(run_proto, [event1, event2])
run_proto = self._add_events_and_flush(_apply_compat([event1, event2]))
self.assertEqual(
test_util.timestamp_pb(1567808404765432119),
run_proto.tags[0].points[0].wall_time,
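As an aside, the wall-time precision hazard that `test_wall_time_precision` (in the hunk above) guards against can be demonstrated in isolation. This is an illustrative sketch, not the uploader's actual conversion code: `naive_ns` multiplies in floating point before truncating, while `exact_ns` rounds the float's exact rational value, here via `fractions.Fraction`.

```python
from fractions import Fraction


def naive_ns(wall_time):
    # Float multiply, then truncate: at ~1.5e18 ns the float64 grid is
    # coarser than 1 ns, so precision is lost before int() runs.
    return int(wall_time * 1e9)


def exact_ns(wall_time):
    # Fraction(float) captures the float's exact binary value; scaling
    # and rounding that rational avoids any intermediate float error.
    return round(Fraction(wall_time) * 10**9)


t = 1567808404.765432119  # wall time from the test above
print(naive_ns(t), exact_ns(t))
```

For small, exactly representable times (say `1.5` seconds) the two agree; for the large timestamp above they can disagree by up to the float64 spacing at that magnitude, which is why the test pins down an exact nanosecond value.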