TensorBoard 2.2.1 #3520
Conversation
Summary: Our favicon has hitherto been defined only as four copies of the same base64 data URI scattered about the repo. This commit creates an actual file and inlines it into the various destinations at build time with a very small shell script.

Test Plan: Files are unchanged, modulo Prettier formatting:

```shell
prettify() { yarn -s prettier --stdin --stdin-filepath index.html; }
check() (
  set -eu
  bazel build "$1" >/dev/null 2>&1
  git show "HEAD~:$1" | prettify | shasum -a 256
  prettify <"bazel-bin/$1" | shasum -a 256
)
```

```
$ check tensorboard/components/tensorboard.html
e6139eb8dab40c5082b326dcaef810ccc9bd3f4ec5fe1903e18cc13e1eb80226 -
e6139eb8dab40c5082b326dcaef810ccc9bd3f4ec5fe1903e18cc13e1eb80226 -
$ check tensorboard/webapp/index.html
56088876962e466665d0c2a522b1bb323d771e7bd8890f5826627804957c0640 -
56088876962e466665d0c2a522b1bb323d771e7bd8890f5826627804957c0640 -
```

Also, `git grep iVBOR` shows that there are no more occurrences of the base64 string in the repository.

wchargin-branch: favicon-build
Summary: Favicon comparison screenshot (finally, a use for `--window_title`!):

![Favicons, old and new (top: Firefox; bottom: Chrome)][ss]

SVGs from TensorFlow design team. PNG generated with:

```
inkscape -z logo.svg -e favicon.png -w 196 -h 196
```

with `inkscape --version` Inkscape 0.92.4 (5da689c313, 2019-01-14).

[ss]: https://user-images.githubusercontent.com/4317806/77215796-57afbc80-6ad3-11ea-8b06-a486ed00a9f5.png

wchargin-branch: favicon-new-branding-2020q1
Motivation for features / changes

Allow uploader users to specify the plugins for which data should be uploaded. This serves a two-fold purpose: (1) allow users to specify "experimental" plugins that would otherwise not be uploaded; (2) allow users to list a subset of launched plugins, if they do not want to upload all launched plugin data.

Technical description of changes

Add a `--plugins` command line option for the upload subcommand. The information is sent in the ServerInfoRequest (aka the handshake) and may impact the list of plugins returned in ServerInfoResponse.plugin_control.allowed_plugins.

Sample usage:

```
tensorboard dev upload --logdir --plugins scalars graphs histograms
```

Outside of these specific changes: it's expected that supported servers will evaluate the list of plugins and decide whether it is valid. If valid, the server will respond with the entire list of plugins, or perhaps a sublist. If the list is invalid, the server will respond with a CompatibilityVerdict of VERDICT_ERROR and a useful detail message to print to the console. If `--plugins` is not specified, the server is expected to respond with a default list of plugins to upload.
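As an illustrative sketch only (not the actual uploader code), space-separated plugin names like those in the sample usage above could be collected with `argparse`; the parser setup, flag default, and help text here are assumptions:

```python
import argparse

# Hypothetical sketch of parsing a "--plugins" flag as described above;
# the parser construction is illustrative, not the real uploader CLI.
parser = argparse.ArgumentParser(prog="tensorboard dev upload")
parser.add_argument(
    "--plugins",
    nargs="*",
    default=[],
    help="Plugins whose data should be uploaded; an empty list means "
    "let the server choose its default set.",
)

args = parser.parse_args(["--plugins", "scalars", "graphs", "histograms"])
# An empty `args.plugins` would signal "use the server's default plugins".
print(args.plugins)
```

With `nargs="*"`, omitting the flag entirely leaves the default empty list, matching the "server picks the defaults" behavior described above.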
…libraries (tensorflow#3411)

* Motivation for features / changes
  * Roll forward go/tbpr/3374 with fixes.
* Technical description of changes
  * Same as go/tbpr/3374, with the following exception: the `tf_web_library` BUILD targets for requirejs, monaco-editor (core) and monaco-languages are moved to a dedicated BUILD file (tensorboard/plugins/debugger_v2/tf_debugger_v2_plugin/views/source_code/monaco/BUILD) to facilitate sync'ing.
* Detailed steps to verify changes work correctly (as executed by you)
  * copybara change CL: CL/302218915
  * Final state: CL/302222581. Build passes, after some manual reverts to deal with JSCompiler artifacts.
* Alternate designs / implementations considered
  * Alternative: use copybara rules to make the requirejs, monaco-editor and monaco-languages rules BUILD internally.
    * Con: Those transformed targets are *not* used internally. This is potentially confusing to future developers. It is also an unnecessary maintenance overhead.
    * Con: While using `tensorboard_html_binary` to wrap around third_party libraries may work for requirejs, which involves a single .js file, it is more tedious for monaco-editor and monaco-languages, which involve multiple separate files (5 for monaco-editor and 3 for monaco-languages). This would involve a large number of confusing rules added to the copybara file. Those rules would be useless except for making the build pass (see the previous Con item). Some of these files are not .js or .css files (e.g., the .ttf file), further increasing the complexity.
All (the pull request submitter and all commit authors) CLAs are signed, but one or more commits were authored or co-authored by someone other than the pull request submitter. We need to confirm that all authors are ok with their commits being contributed to this project. Please have them confirm that by leaving a comment that contains only Note to project maintainer: There may be cases where the author cannot leave a comment, or the comment is not properly detected as consent. In those cases, you can manually confirm consent of the commit author(s), and set the ℹ️ Googlers: Go here for more info.
cla:yes
A Googler has manually verified that the CLAs look good. (Googler, please make sure the reason for overriding the CLA status is clearly documented in these comments.) ℹ️ Googlers: Go here for more info.
RELEASE.md
Outdated
```
## Features

- TensorBoard.dev graph rendering support
```
Shouldn't anything related to TensorBoard.dev go in the TensorBoard.dev updates section? That would make more sense IMO than separating out the server-side changes from the CLI changes.
In the past we've been kind of indiscriminate about this and put CLI changes either under "TensorBoard.dev updates" (2.1.0) or "Features" (2.2.0), but in a case where we have both server-side and CLI changes I would definitely expect them to be grouped together, and IMO it's easier to be consistent overall if we always separate out the uploader changes into their own section.
RELEASE.md
Outdated
```
@@ -1,3 +1,34 @@
# Release 2.2.1

This patch release includes a rollup to HEAD so as to support required features
```
I would just omit this line entirely. It could confuse users, and the rollup to HEAD is functionally the same as if we just cherrypicked a bunch of extra commits.
RELEASE.md
Outdated
```
  after filtering are not uploaded.
- The `tensorboard dev list` subcommand now reports the total size of stored
  binary objects (e.g., graphs) for each experiment (#3464) The `tensorboard
  dev list` subcommand now accepts a `--json` flag to allow parsing the output
```
The sentence for the `--json` flag should be a separate bullet point?
RELEASE.md
Outdated
```
  more easily (#3480)
- Auto-reload is now disabled when the browser tab is not visible, saving
  resources (#3483)
- HParams plugin: substantial efficiency improvements (#3438, #3449)
```
I'd omit this since these improvements won't really make much of a difference for most OSS TensorBoard users, so I wouldn't want to oversell this as a feature.
RELEASE.md
Outdated
```
  dev list` subcommand now accepts a `--json` flag to allow parsing the output
  more easily (#3480)
- Auto-reload is now disabled when the browser tab is not visible, saving
  resources (#3483)
```
Optional but maybe specify "saving network bandwidth" rather than just "resources" to emphasize the part that benefits the user as opposed to the part that benefits the server that serves the traffic
```
@@ -1,3 +1,34 @@
# Release 2.2.1
```
There are a few other PRs included in this rollup that seem worth mentioning:

Features

- New logo used in the favicon (favicon: update to new branding #3406)

TensorBoard.dev updates:

- The `tensorboard dev upload` command supports a `--plugins` option to explicitly indicate the desired plugins for which summary data should be uploaded (Add --plugins option to uploader #3402, Support comma-separated --plugins lists for "dev" subcommand #3492)

Bug fixes

- Eliminated sporadic HTTP 500 errors for XHRs that do markdown rendering (plugin_util: use thread-local Markdown converter #3491)
Done
Previously, we were reloading all stamped dashboards, which does not match the behavior of tf-tensorboard. Side effect: because we were reloading the scalars dashboard when it was in the background, the charts could more frequently appear tiny when the reload kicked in. This change removes that bug.
We do not currently have a solid design for our flagging (experimental) system. Regardless of the details of the system, because the flagging system can impact many things (e.g., UI, data source, etc.), we want to persist the flagging state in the store. This lays that foundation with a QueryParams-backed implementation.
…ensorflow#3430)

* Technical description of changes
  * Previously, a source file's content was always loaded from the data source even if it had been loaded already.
  * In this PR, we change the behavior so that source files that are already loaded or currently loading are not requested again under the `sourceLineFocused` action.
* Detailed steps to verify changes work correctly (as executed by you)
  * Unit test added.
  * Verified in a child PR that no repeated request occurs in the browser.
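The "already loaded or currently loading" guard described above can be sketched in a few lines. The cache shape, sentinel, and function names here are hypothetical; the actual change lives in the debugger-v2 frontend effects, not in code like this:

```python
# Hypothetical sketch of a "don't re-request" guard. LOADING is a
# sentinel marking files whose fetch is already in flight.
LOADING = object()

def maybe_fetch_source_file(path, cache, fetch_fn):
    """Fetch a source file unless it is already loaded or loading.

    `cache` maps path -> LOADING or loaded content; `fetch_fn` stands in
    for the data-source request.
    """
    if path in cache:  # already loaded, or a request is in flight
        return cache[path]
    cache[path] = LOADING
    cache[path] = fetch_fn(path)  # replace the sentinel with content
    return cache[path]
```

Calling this twice for the same path issues only one request; the second call returns the cached content (or the sentinel, if the fetch were still pending in an async setting).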
Summary: Until now, `context.data_provider` was provided unless `generic_data` was explicitly set to `false`. Now, it’s provided in all cases (except the legacy DB modes). This was always expected to be safe, and has been live by default since TensorBoard 1.15.0. This is needed to write plugins that unconditionally assume that a data provider is available. A consequence of this is that `is_active`’s use of the experimental `list_plugins` method is now always on as well. This has been on by default for two months, but was first released in TensorBoard 2.2.0 (just released), so it’s slightly riskier, but also expected to be safe.

Test Plan: Run TensorBoard with `--generic_data false`. If pointing at an empty logdir, no plugins are shown as active. If pointing at a full logdir, all appropriate plugins are shown as active. The debugger plugin is still marked as active if `--debugger_port` is specified and as inactive if no relevant flags are given, so the fallback `is_active` calls are still working.

wchargin-branch: backend-always-data-provider
Summary: Until now, the hparams plugin has been active only when hparams data is present *and* the scalars plugin is loaded and active. This cross-plugin dependency is unique and doesn’t fit well into the data provider world, so we remove it. The hparams plugin is now active iff hparams data is available, via the standard `data_plugin_names` API. Correspondingly, we loosen the error handling in parts of the plugin that rely on the scalars plugin being active. As long as the scalars plugin is included in the TensorBoard instance, this should not be a problem: TensorBoard core loads all plugins before accepting requests to any of them. If the hparams plugin is used in an installation with no scalars plugin, the errors will now be 404s instead of 500s. This seems acceptable, as running without a scalars plugin is not a common pattern.

Test Plan: The hparams plugin is still marked as active in logdirs that contain hparams data and inactive in logdirs that don’t.

wchargin-branch: hparams-data-provider-is-active
…ns. (tensorflow#3427) These symbols do not appear to be TensorFlow-specific, and are already part of the Closure Compiler builtin externs. They may or may not cause a conflict, depending on how the compiler is invoked. An earlier version of this change deleted the duplicate definitions entirely, but that caused tests to fail.
Summary: The hparams plugin makes three kinds of queries against the multiplexer: listing hparams metadata, listing scalars metadata, and reading scalar points. This commit encapsulates each such operation in a method so that we can migrate them to use data provider APIs. For now, the underlying implementations are unchanged.

Test Plan: Running `git grep '\.multiplexer' tensorboard/plugins/hparams/` shows only matches in `backend_context.py` and tests. Existing tests pass, and all three views of the hparams plugin, including embedded scalar charts and the data download links, still work with and without `--generic_data true` (though in all cases the plugin actually still reads directly from the multiplexer).

wchargin-branch: hparams-encapsulate-mux
Summary: All data access in the hparams plugin is now performed via methods that require an experiment ID. The ID is currently unused except for an assertion. In a subsequent change, these methods will switch to using a data provider instead of the multiplexer, at which point the experiment ID will be required.

Test Plan: Unit tests suffice for the internals. To check the wiring at the plugin level, note that all three views of the hparams plugin, including embedded scalar charts and the download links, still render properly.

wchargin-branch: hparams-thread-eid
Using the new experimentPlugin mechanism (tensorflow#3344), we will control whether to show a plugin or not. Please use `http://localhost:6006/ng_index.html?experimentalPlugin=debugger-v2` to see the debugger-v2 in the future.
Summary: Follow-up to tensorflow#2991. Fixes tensorflow#3434. Test Plan: Tests pass as written. wchargin-branch: data-blob-sequence-tests
Summary: Prior to this change, `read_scalars` (resp. `read_tensors`) delegated to `list_scalars` (resp. `list_tensors`) to find the set of time series to read. This is slower than it might sound, because `list_scalars` itself needs to scan over all relevant `multiplexer.Tensors` to identify `max_step` and `max_wall_time`, which are thrown away by `read_scalars`. (That `list_scalars` needs this full scan at all is its own issue; ideally, these would be memoized onto the event multiplexer.) When a `RunTagFilter` specifying a single run and tag is given, we optimize further by requesting individual `SummaryMetadata` rather than paring down `AllSummaryMetadata`.

Resolves a comment of @nfelt on tensorflow#2980: <tensorflow#2980 (comment)>

Test Plan: When applied on top of tensorflow#3419, `:list_session_groups_test` improves from taking 11.1 seconds to taking 6.6 seconds on my machine. This doesn’t seem to fully generalize; I see only ~13% speedups in a microbenchmark that hammers `read_scalars` on a logdir with all the demo data, but the improvement on that test is important.

wchargin-branch: data-read-without-list
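The single-(run, tag) fast path described above can be sketched with a plain nested dict standing in for the multiplexer. Function and parameter names here are assumptions for illustration, not the real data-provider API:

```python
def read_metadata(index, runs=None, tags=None):
    """Sketch of the fast path: when the filter pins down exactly one
    run and one tag, do a direct lookup instead of scanning everything.

    `index` is a stand-in for the multiplexer: {run: {tag: metadata}}.
    `runs`/`tags` of None mean "no filter", like an empty RunTagFilter.
    """
    if runs is not None and tags is not None and len(runs) == 1 and len(tags) == 1:
        # Fast path: one dict lookup per level, no full scan.
        (run,), (tag,) = list(runs), list(tags)
        md = index.get(run, {}).get(tag)
        return {run: {tag: md}} if md is not None else {}
    # Slow path: full scan with filtering.
    return {
        run: {t: m for t, m in tag_map.items() if tags is None or t in tags}
        for run, tag_map in index.items()
        if runs is None or run in runs
    }
```

The win in the real change is analogous: the fast path avoids materializing (and then discarding) metadata for every run and tag.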
Summary: This commit changes the data access methods in `hparams.backend_context` to read from the context’s data provider rather than directly from its multiplexer. All data access in the hparams plugin already goes through these methods.

Test Plan: Ideally, existing tests would have sufficed, but the tests made heavy use of mocking and so required manual intervention. The updated tests pass, and an inspection of the various views of the hparams dashboard (including embedded scalar charts and data download links) checks out, even when TensorBoard core is patched to not provide a multiplexer on the context.

wchargin-branch: hparams-generic-data
In tensorflow#3483 I attempted to pause and restart auto reload behavior based on page visibility. Unfortunately a change I made to event handling just before merging into master inadvertently disabled the logic. The change happened in a seam in the code that was not covered by tests. This change fixes the event handling and improves the test coverage.
There is no point in uploading GraphDef data that won't be displayed anyway. In particular, the TensorBoard frontend filters out node attributes larger than 1024 bytes, since it has no good way to present those. So we may as well filter those out before upload to TensorBoard.dev, so as not to waste bandwidth, storage, and read-time processing.
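A minimal sketch of this pre-upload filtering idea, using plain dicts in place of `GraphDef` protos. The 1024-byte limit comes from the description above; the data shape and function name are hypothetical:

```python
MAX_ATTR_BYTES = 1024  # the frontend drops larger attributes anyway

def strip_large_attrs(graph, limit=MAX_ATTR_BYTES):
    """Remove node attributes whose serialized value exceeds `limit`.

    `graph` is a dict stand-in for a GraphDef:
    {"node": [{"name": ..., "attr": {key: bytes_value}}]}.
    """
    for node in graph["node"]:
        node["attr"] = {
            key: value
            for key, value in node["attr"].items()
            if len(value) <= limit
        }
    return graph
```

The real change would measure the serialized size of each `AttrValue` proto rather than raw bytes, but the filtering shape is the same: drop oversized attributes in place before the graph is written to the upload stream.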
vz_line_chart2 feels slow at updating in practice, and this benchmark is an attempt to quantify its speed. Future TODO: add tests to prevent the benchmark from breaking.
…3502)

* Motivation for features / changes
  * The imminent submission of internal CL/290949452 will cause the test to break due to newly-instrumented Const ops, which were previously untracked. Considering the complexity of the sync'ing of tensorflow and tensorboard code, I'll disable the test first. The test will later be fixed to reflect the newly-added Const ops and then re-enabled. See b/153722488.
* Technical description of changes
  * Disable the test by applying proper tags.
Summary: These should both have been internal from the beginning, but team policy when `data_compat` was created was to be public by default, and we followed precedent with `dataclass_compat`. There are no remaining Google-internal non-TensorBoard dependents.

Test Plan: Googlers, see <http://cl/305916851> for test sync.

wchargin-branch: vis-lock-down-compats
Since tensorflow#3497, we parse GraphDefs in dataclass_compat.py during upload. If a graph is corrupt, that parsing fails. Here we catch the resulting exception, issue a warning, and continue (omitting the graph). This also updates tests to use valid GraphDefs where appropriate, as opposed to bytes(1024), which apparently produces inconsistent results with different proto parsers (e.g., OSS vs. Google internal).
webcomponent_tester has a transitive dependency on sinon, which is incompatible with our build rules. Remove it to unbreak our CI.
Summary: The graph uploader test wasn’t actually covering the blob writing code, and was set up incorrectly such that the request iterator would throw when forced.

Test Plan: The new assertions pass, but fail if the RPC changes are reverted.

wchargin-branch: uploader-graph-write-test
Under certain mysterious circumstances, attempting to parse a malformed serialized `GraphDef` can result in the raising of a `RuntimeWarning` instead of a `DecodeError`. Here we catch both in order to treat them in the same way.
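The "catch both and treat them the same" approach can be sketched as follows. To keep the sketch stdlib-only, the parse call and the decode-error type are injected; in the real code these would be the proto `ParseFromString` call and `google.protobuf.message.DecodeError`. All names here are illustrative:

```python
import warnings

def parse_graph_or_none(serialized, parse_fn, decode_error_type):
    """Parse serialized graph bytes, tolerating malformed input.

    `parse_fn` and `decode_error_type` are stand-ins for the real proto
    parse call and DecodeError class. Both the decode error and a raised
    RuntimeWarning are handled identically: warn and omit the graph.
    """
    try:
        return parse_fn(serialized)
    except (decode_error_type, RuntimeWarning):
        warnings.warn("Could not parse GraphDef; omitting graph.")
        return None
```

Catching `RuntimeWarning` in an `except` clause works because it is an exception subclass like any other; it is only *raised* (rather than reported via the warnings machinery) under the circumstances the message above calls mysterious.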
Summary: Fixes an oversight in tensorflow#3507: we can’t assert that the raw bytes are what was expected because the code under test does a proto serialization roundtrip, which is permitted to permute keys.

Test Plan: Tests still pass; a test sync shows that internal tests pass now, too.

wchargin-branch: uploader-test-proto-equal
Thanks for the careful review! I've addressed the comments and re-organized the commits to match the common pattern. |
Consent is confirmed by default - these commits are all cherry picks from the master branch. |