Docs - update modules
Summary:
1. Update all packages to their most recent versions.
2. Add missing dependencies.
3. Add a version override for the `cheerio` module until a new version of `cmfcmf/docusaurus-search-local` is released (see cmfcmf/docusaurus-search-local#218).
4. Fix MDX pages failing to compile with Docusaurus 3.5.2.
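
An override like the one in item 3 is typically expressed via the `resolutions` field in `website/package.json`; a minimal sketch, assuming Yarn resolutions are used and that the pin is to the last pre-1.0 release (the exact pinned version here is an assumption, not read from the diff):

```json
{
  "resolutions": {
    "cheerio": "1.0.0-rc.12"
  }
}
```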

Differential Revision: D61535847
Piotr Brzyski authored and facebook-github-bot committed Aug 20, 2024
1 parent f387d23 commit 7e271ca
Showing 6 changed files with 141 additions and 222 deletions.
255 changes: 85 additions & 170 deletions website/docs/ARK/mps/request_mps/mps_cli_guide.mdx
@@ -79,240 +79,155 @@ Project Aria MPS CLI settings can be customized via the mps.ini file. This file…

<table>
<tr>
<td><strong>Setting</strong></td>
<td><strong>Description</strong></td>
<td><strong>Default Value</strong></td>
</tr>
<tr>
<td colspan="3"><strong>General settings</strong></td>
</tr>
<tr>
<td><code>log_dir</code></td>
<td>Where log files are saved for each run. The filename is the timestamp from when the request tool started running.</td>
<td><code>/tmp/logs/projectaria/mps/</code></td>
</tr>
<tr>
<td><code>status_check_interval</code></td>
<td>How long the MPS CLI waits between status checks of the data during the Processing stage.</td>
<td>30 secs</td>
</tr>
<tr>
<td colspan="3"><strong>HASH</strong></td>
</tr>
<tr>
<td><code>concurrent_hashes</code></td>
<td>Maximum number of files that can be hashed concurrently.</td>
<td>4</td>
</tr>
<tr>
<td><code>chunk_size</code></td>
<td>Chunk size to use for hashing.</td>
<td>10 MB</td>
</tr>
<tr>
<td colspan="3"><strong>Encryption</strong></td>
</tr>
<tr>
<td><code>chunk_size</code></td>
<td>Chunk size to use for encryption.</td>
<td>50 MB</td>
</tr>
<tr>
<td><code>concurrent_encryptions</code></td>
<td>Maximum number of files that can be encrypted concurrently.</td>
<td>4</td>
</tr>
<tr>
<td><code>delete_encrypted_files</code></td>
<td>Whether to delete the encrypted files after upload is done. If set to false, local disk usage will double because an encrypted copy of each file is kept.</td>
<td>True</td>
</tr>
<tr>
<td colspan="3"><strong>Health Check</strong></td>
</tr>
<tr>
<td><code>concurrent_health_checks</code></td>
<td>Maximum number of VRS file health checks that can run concurrently.</td>
<td>2</td>
</tr>
<tr>
<td colspan="3"><strong>Uploads</strong></td>
</tr>
<tr>
<td><code>backoff</code></td>
<td>The exponential backoff factor for retries of failed uploads. The wait time between successive retries increases by this factor (see the retry sketch after this table).</td>
<td>1.5</td>
</tr>
<tr>
<td><code>interval</code></td>
<td>Base delay between retries.</td>
<td>20 secs</td>
</tr>
<tr>
<td><code>retries</code></td>
<td>Maximum number of retries before giving up.</td>
<td>10</td>
</tr>
<tr>
<td><code>concurrent_uploads</code></td>
<td>Maximum number of concurrent uploads.</td>
<td>4</td>
</tr>
<tr>
<td><code>max_chunk_size</code></td>
<td>Maximum chunk size that can be used during uploads.</td>
<td>100 MB</td>
</tr>
<tr>
<td><code>min_chunk_size</code></td>
<td>Minimum chunk size that can be used during uploads.</td>
<td>5 MB</td>
</tr>
<tr>
<td><code>smoothing_window_size</code></td>
<td>Size of the smoothing window used to adjust the chunk size. This value defines how many recently uploaded chunks are used to determine the next chunk size.</td>
<td>10</td>
</tr>
<tr>
<td><code>target_chunk_upload_secs</code></td>
<td>Target time to upload a single chunk. If the chunks in a smoothing window take longer, the chunk size is reduced; if they take less time, it is increased (see the chunk-size sketch after this table).</td>
<td>3 secs</td>
</tr>
<tr>
<td colspan="3"><strong>GraphQL (Query the MPS backend for MPS Status)</strong></td>
</tr>
<tr>
<td><code>backoff</code></td>
<td>The exponential backoff factor for retries of failed queries. The wait time between successive retries increases by this factor.</td>
<td>1.5</td>
</tr>
<tr>
<td><code>interval</code></td>
<td>Base delay between retries.</td>
<td>4 secs</td>
</tr>
<tr>
<td><code>retries</code></td>
<td>Maximum number of retries before giving up.</td>
<td>3</td>
</tr>
<tr>
<td colspan="3"><strong>Download</strong></td>
</tr>
<tr>
<td><code>backoff</code></td>
<td>The exponential backoff factor for retries of failed downloads. The wait time between successive retries increases by this factor.</td>
<td>1.5</td>
</tr>
<tr>
<td><code>interval</code></td>
<td>Base delay between retries.</td>
<td>20 secs</td>
</tr>
<tr>
<td><code>retries</code></td>
<td>Maximum number of retries before giving up.</td>
<td>10</td>
</tr>
<tr>
<td><code>chunk_size</code></td>
<td>Chunk size to use for downloads.</td>
<td>10 MB</td>
</tr>
<tr>
<td><code>concurrent_downloads</code></td>
<td>Number of concurrent downloads.</td>
<td>10</td>
</tr>
<tr>
<td><code>delete_zip</code></td>
<td>The server sends results as a zip file. This flag controls whether the zip file is deleted after extraction.</td>
<td>True</td>
</tr>
</table>
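
As a quick illustration of consuming these settings, here is a minimal Python sketch using the standard `configparser` module. The file location and section names (`UPLOAD`, etc.) are assumptions made for illustration, not the MPS CLI's actual schema:

```python
import configparser
from pathlib import Path

config = configparser.ConfigParser()
# Hypothetical path -- the real mps.ini location may differ per install.
config.read(Path.home() / ".projectaria" / "mps.ini")

# Fall back to the defaults from the table above if the section is absent.
upload = config["UPLOAD"] if config.has_section("UPLOAD") else {}
retries = int(upload.get("retries", 10))
backoff = float(upload.get("backoff", 1.5))
interval = float(upload.get("interval", 20))
print(f"upload retries={retries}, backoff={backoff}, interval={interval}s")
```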
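
The `backoff`, `interval`, and `retries` rows describe a standard exponential-backoff retry schedule. A minimal sketch of the implied wait times (not the CLI's actual implementation):

```python
def retry_delays(interval: float, backoff: float, retries: int):
    """Yield the wait before each retry attempt: interval * backoff**attempt."""
    for attempt in range(retries):
        yield interval * (backoff ** attempt)

# With the upload defaults (interval=20 s, backoff=1.5, retries=10),
# the waits grow geometrically before the tool gives up.
print([round(d, 1) for d in retry_delays(20, 1.5, 5)])
# -> [20.0, 30.0, 45.0, 67.5, 101.2]
```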
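
Similarly, `min_chunk_size`, `max_chunk_size`, `smoothing_window_size`, and `target_chunk_upload_secs` together suggest an adaptive chunk-size loop. One way such a policy could look, as a hedged sketch (the CLI's actual update rule is not documented here and may differ):

```python
from collections import deque

class ChunkSizer:
    """Grow or shrink the upload chunk size toward a target per-chunk time."""

    def __init__(self, min_size=5 * 2**20, max_size=100 * 2**20,
                 window=10, target_secs=3.0):
        self.min_size, self.max_size = min_size, max_size
        self.times = deque(maxlen=window)      # smoothing window of upload times
        self.target = target_secs
        self.size = min_size

    def record(self, upload_secs: float) -> int:
        """Record one chunk's upload time and return the next chunk size."""
        self.times.append(upload_secs)
        avg = sum(self.times) / len(self.times)
        if avg > self.target:                  # too slow: shrink chunks
            self.size = max(self.min_size, int(self.size / 1.5))
        elif avg < self.target:                # fast: grow chunks
            self.size = min(self.max_size, int(self.size * 1.5))
        return self.size

sizer = ChunkSizer()
next_size = sizer.record(upload_secs=4.2)     # slower than target, so shrink
```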

2 changes: 1 addition & 1 deletion website/docs/ARK/sw_release_notes.mdx
@@ -308,7 +308,7 @@ MPS requests using the Desktop app have been slightly restructured, you no longe…

The Streaming button in the dashboard has been renamed to Preview, to better reflect the capability provided by the Desktop app. Use the [Client SDK with CLI](/ARK/sdk/sdk.mdx) to stream data.

Desktop app logs are now stored in `~/.aria/logs/aria_desktop_app_{date}_{time}.log`

* Please note, the streaming preview available through the Desktop app is optimized for Profile 12.

4 changes: 2 additions & 2 deletions website/docs/ARK/troubleshooting/desktop_app_logs.mdx
@@ -32,7 +32,7 @@
```
open /Applications/Aria.app --args --log-output
```

3. The Aria Desktop app should then open with logging enabled.
4. The logs will be stored in `~/.aria/logs/aria_desktop_app_{date}_{time}.log`.
5. Logs will continue to be added to this file until you quit the app.
6. If you generate logs at a later time, they will be appended to the end of these logs.

@@ -48,6 +48,6 @@
```
open /Applications/Aria.app --args --log-output
```

3. The Aria Desktop app should then open with logging enabled.
4. The logs will be stored in `~/.aria/logs/aria_desktop_app_{date}_{time}.log`.
5. Logs will continue to be added to this file until you quit the app.
6. If you generate logs at a later time, they will be appended to the end of these logs.
12 changes: 6 additions & 6 deletions website/docs/data_utilities/core_code_snippets/data_provider.mdx
@@ -111,9 +111,9 @@ Project Aria data has four kinds of TimeDomain entries. We strongly recommend al…
* TimeDomain.TIME_CODE - for multiple devices

You can also search using three different time query options:
* TimeQueryOptions.BEFORE (default): last data with `t <= t_query`
* TimeQueryOptions.AFTER : first data with `t >= t_query`
* TimeQueryOptions.CLOSEST : the data where `|t - t_query|` is smallest
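
To make the three options concrete, here is a small illustrative sketch over a sorted timestamp list. The helper below is hypothetical, written to demonstrate the semantics, and is not the projectaria_tools API:

```python
import bisect

def query_timestamp(timestamps, t_query, option="BEFORE"):
    """Pick an index from sorted `timestamps` per the three query options."""
    i = bisect.bisect_right(timestamps, t_query)   # first index with t > t_query
    if option == "BEFORE":                         # last data with t <= t_query
        return i - 1 if i > 0 else None
    if option == "AFTER":                          # first data with t >= t_query
        j = bisect.bisect_left(timestamps, t_query)
        return j if j < len(timestamps) else None
    if option == "CLOSEST":                        # smallest |t - t_query|
        candidates = [j for j in (i - 1, i) if 0 <= j < len(timestamps)]
        return min(candidates, key=lambda j: abs(timestamps[j] - t_query))
    raise ValueError(option)

ts = [10, 20, 30]
print(query_timestamp(ts, 24, "BEFORE"))   # 1 -> t=20
print(query_timestamp(ts, 24, "AFTER"))    # 2 -> t=30
print(query_timestamp(ts, 24, "CLOSEST"))  # 1 -> |24-20| < |30-24|
```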

```python
for stream_id in provider.get_all_streams():
```

@@ -133,9 +133,9 @@
* TimeDomain::TimeCode - for multiple devices

You can also search using three different time query options:
* TimeQueryOptions::Before : last data with `t <= t_query`
* TimeQueryOptions::After : first data with `t >= t_query`
* TimeQueryOptions::Closest : the data where `|t - t_query|` is smallest

```cpp
for (const auto& streamId : provider.getAllStreams()) {
```