From 48aaea53939b5abf112da868fb4beb8d4e1cb90c Mon Sep 17 00:00:00 2001 From: Joshua Bell Date: Thu, 30 Jan 2025 20:20:27 -0800 Subject: [PATCH] Editorial: Various style and wording tweaks (#797) * Editorial: Avoid the wordier "the XYZ argument"; just say "XYZ". Part of https://github.com/webmachinelearning/webnn/issues/783 * Editorial: Eschew vars outside of algorithms Part of https://github.com/webmachinelearning/webnn/issues/783 * Editorial: Make dict member linking more consistent Part of https://github.com/webmachinelearning/webnn/issues/783 * Editorial: Format explanatory subscripts to emphasize variables Instead of "*foo[bar]*" make it "*foo*[*bar*]" just so that turning "foo" and "bar" into links is a more atomic operation. Purely stylistic. Part of https://github.com/webmachinelearning/webnn/issues/783 * Editorial: Drop "options.", rely on linking to provide context Part of https://github.com/webmachinelearning/webnn/issues/783 * Editorial: Linkify all argument/option references outside algorithms Part of https://github.com/webmachinelearning/webnn/issues/783 * Incorporate initial feedback * Incorporate more feedback - style/link a few more things * Remove extraneous * in example --- docs/SpecCodingConventions.md | 5 ++ index.bs | 147 +++++++++++++++++----------------- tools/lint.mjs | 10 +++ 3 files changed, 89 insertions(+), 73 deletions(-) diff --git a/docs/SpecCodingConventions.md b/docs/SpecCodingConventions.md index 9db2f6fc..6f87bc02 100644 --- a/docs/SpecCodingConventions.md +++ b/docs/SpecCodingConventions.md @@ -95,6 +95,9 @@ Example: * When concisely defining a tensor's layout, use the syntax `*[ ... ]*` (e.g. _"nchw" means the input tensor has the layout *[batches, inputChannels, height, width]*_) * Format explanatory expressions using backticks, e.g. `` `max(0, x) + alpha * (exp(min(0, x)) - 1)` `` * In Web IDL `
` blocks, wrap long lines to avoid horizontal scrollbars. 88 characters seems to be the magic number.
+* Avoid `v` or `|v|` outside of algorithms; Bikeshed interprets these as global variables, which can mask errors. Just use `*v*`.
+    * Format each term separately; that is, `*splits*[*i*]` not `*splits[i]*`.
+* When referencing an argument in prose steps, link to it rather than just using formatted text, e.g. `{{MLGraphBuilder/split(input, splits, options)/splits}}` rather than `*splits*`.
 
 
 ### Algorithms
@@ -129,6 +132,7 @@ Example:
     * There is an exception to this rule: Referring to WebIDL types is necessary when dealing with unions. In this case, refer to the full WebIDL type, e.g. _If splits is an `unsigned long` ... Otherwise, if splits is a `sequence` ..._
 * Do not repeat defaults provided by the WebIDL declaration.
 * For types like lists that can't be defaulted in WebIDL, define the default when missing as an explicit step. Example: _If options.padding does not exist, set options.padding to « 0, 0, 0, 0 »._
+* When referring to arguments and options in prose, avoid the wordier `the *foo* argument` or `the *bar* value` forms; just use the name alone.
 
 
 ### Internal Algorithms
@@ -146,4 +150,5 @@ Example:
 
 * Dictionary members are referenced using dotted property syntax, e.g. _options.padding_
   * Note that this is contrary to Web IDL + Infra; formally, a JavaScript object has been mapped to a Web IDL [dictionary](https://webidl.spec.whatwg.org/#idl-dictionaries) and then processed into an Infra [ordered map](https://infra.spec.whatwg.org/#ordered-map) by the time a spec is using it. So formally the syntax _options["padding"]_ should be used.
+* Dictionary members should be linked to, both in algorithms and in other text, e.g. `|options|.{{MLOptionsDict/member}}` (in the steps for an algorithm) or `{{MLOptionsDict/member}}` (outside an algorithm).
 * Dictionary members should be given definitions somewhere in the text. This is usually done with a `
` for the dictionary as a whole, containing a `` for each member. diff --git a/index.bs b/index.bs index d3c6ea2b..fb3c1f55 100644 --- a/index.bs +++ b/index.bs @@ -1881,7 +1881,7 @@ partial dictionary MLOpSupportLimits { - axis: The dimension to reduce. The value must be in the range [0, N-1] where N is the [=MLOperand/rank=] of the input tensor. - options: an optional {{MLArgMinMaxOptions}}. The optional parameters of the operation. - **Returns:** an {{MLOperand}}. The N-D tensor of the reduced shape. The values must be of type |options|.{{MLArgMinMaxOptions/outputDataType}} in the range [0, N-1] where N is the size of the input dimension specified by axis. + **Returns:** an {{MLOperand}}. The N-D tensor of the reduced shape. The values must be of type {{MLArgMinMaxOptions/outputDataType}} in the range [0, N-1] where N is the size of the input dimension specified by {{MLGraphBuilder/argMin(input, axis, options)/axis}}. @@ -2007,7 +2007,7 @@ partial dictionary MLOpSupportLimits { - variance: an {{MLOperand}}. The 1-D tensor of the variance values of the input features across the batch whose [=list/size=] is equal to the size of the input dimension denoted by {{MLBatchNormalizationOptions/axis}}. - options: an optional {{MLBatchNormalizationOptions}}. Specifies the optional parameters of the operation. - **Returns:** an {{MLOperand}}. The batch-normalized N-D tensor of the same shape as *input*. + **Returns:** an {{MLOperand}}. The batch-normalized N-D tensor of the same shape as {{MLGraphBuilder/batchNormalization(input, mean, variance, options)/input}}.
@@ -2145,7 +2145,7 @@ partial dictionary MLOpSupportLimits { - type: an {{MLOperandDataType}}. The target data type. - options: an {{MLOperatorOptions}}. Specifies the optional parameters of the operation. - **Returns:** an {{MLOperand}}. The N-D tensor of the same shape as *input* with each element casted to the target data type. + **Returns:** an {{MLOperand}}. The N-D tensor of the same shape as {{MLGraphBuilder/cast(input, type, options)/input}} with each element cast to the target data type.
@@ -2285,7 +2285,7 @@ partial dictionary MLOpSupportLimits { - input: an {{MLOperand}}. The input tensor. - options: an optional {{MLClampOptions}}. The optional parameters of the operation. **Returns:** - - an {{MLOperand}}. The output tensor of the same shape as *input*. + - an {{MLOperand}}. The output tensor of the same shape as {{MLGraphBuilder/clamp(input, options)/input}}.
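To make the shape-preserving behavior concrete, a minimal JavaScript sketch (assuming an `MLGraphBuilder` named `builder` and an existing operand `x`; the names and values are illustrative):

```js
// clamp() is elementwise, so the output has exactly the same shape as the input.
// A ReLU6-style clamp; minValue/maxValue are MLClampOptions members.
const y = builder.clamp(x, {minValue: 0, maxValue: 6});
```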
@@ -2389,7 +2389,7 @@ partial dictionary MLOpSupportLimits { - options: an {{MLOperatorOptions}}. Specifies the optional parameters of the operation. **Returns:** an {{MLOperand}}. The concatenated tensor of all the inputs along - the *axis*. The output tensor has the same shape except on the dimension + the {{MLGraphBuilder/concat(inputs, axis, options)/axis}}. The output tensor has the same shape except on the dimension that all the inputs are concatenated along. The size of that dimension is computed as the sum of all the input sizes of the same dimension. @@ -2549,12 +2549,12 @@ partial dictionary MLOpSupportLimits {
**Arguments:** - input: an {{MLOperand}}. The input 4-D tensor. The logical shape - is interpreted according to the value of *options*.{{MLConv2dOptions/inputLayout}}. + is interpreted according to the value of {{MLConv2dOptions/inputLayout}}. - filter: an {{MLOperand}}. The filter 4-D tensor. The logical shape is - interpreted according to the value of *options*.{{MLConv2dOptions/filterLayout}} and *options*.{{MLConv2dOptions/groups}}. + interpreted according to the value of {{MLConv2dOptions/filterLayout}} and {{MLConv2dOptions/groups}}. - options: an {{MLConv2dOptions}}. The optional parameters of the operation. - **Returns:** an {{MLOperand}}. The output 4-D tensor that contains the convolution result. The output shape is interpreted according to the *options*.{{MLConv2dOptions/inputLayout}} value. More specifically, the spatial dimensions or the sizes of the last two dimensions of the output tensor for the {{MLInputOperandLayout/"nchw"}} input layout can be calculated as follows: + **Returns:** an {{MLOperand}}. The output 4-D tensor that contains the convolution result. The output shape is interpreted according to {{MLConv2dOptions/inputLayout}}. More specifically, the spatial dimensions or the sizes of the last two dimensions of the output tensor for the {{MLInputOperandLayout/"nchw"}} input layout can be calculated as follows: `outputSize = 1 + (inputSize - (filterSize - 1) * dilation - 1 + beginningPadding + endingPadding) / stride`
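A minimal JavaScript sketch of that output-size arithmetic (plain shape math, not an API call; the function name and example numbers are illustrative):

```js
// Spatial output size of conv2d for one dimension, following the formula above.
function conv2dOutputSize(inputSize, filterSize, dilation, beginningPadding,
                          endingPadding, stride) {
  return 1 + Math.floor(
      (inputSize - (filterSize - 1) * dilation - 1 + beginningPadding + endingPadding) /
      stride);
}

// e.g. a 224-wide input, 3-wide filter, dilation 1, padding 1 on both sides, stride 2:
// conv2dOutputSize(224, 3, 1, 1, 1, 2) === 112
```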
@@ -2609,7 +2609,7 @@ partial dictionary MLOpSupportLimits {
- A *depthwise* conv2d operation is a variant of grouped convolution, used in models like the MobileNet, where the *options.groups* = inputChannels = outputChannels and the shape of filter tensor is *[options.groups, 1, height, width]* + A *depthwise* conv2d operation is a variant of grouped convolution, used in models like MobileNet, where {{MLConv2dOptions/groups}} = *inputChannels* = *outputChannels* and the shape of the filter tensor is *[options.groups, 1, height, width]* for {{MLConv2dFilterOperandLayout/"oihw"}} layout, *[height, width, 1, options.groups]* for {{MLConv2dFilterOperandLayout/"hwio"}} layout, *[options.groups, height, width, 1]* for {{MLConv2dFilterOperandLayout/"ohwi"}} layout and *[1, height, width, options.groups]* for {{MLConv2dFilterOperandLayout/"ihwo"}} layout.
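As an illustration, a hedged sketch of a depthwise conv2d call (assuming an `MLGraphBuilder` named `builder` and pre-built `input`/`filter` operands; the layouts, shapes and option values are made up for the example):

```js
// Depthwise conv2d: one filter per input channel, so groups === inputChannels ===
// outputChannels. With "nchw" input and "oihw" filter layout the filter shape is
// [groups, 1, filterHeight, filterWidth].
const channels = 32;
const output = builder.conv2d(input, filter, {
  inputLayout: 'nchw',
  filterLayout: 'oihw',
  groups: channels,       // groups equals the number of input (and output) channels
  strides: [1, 1],
  padding: [1, 1, 1, 1],  // [beginningHeight, endingHeight, beginningWidth, endingWidth]
});
```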
@@ -2750,7 +2750,7 @@ partial dictionary MLOpSupportLimits { : outputPadding :: A list of length 2. - Specifies the padding values applied to each spatial dimension of the output tensor. The explicit padding values are needed to disambiguate the output tensor shape for transposed convolution when the value of the *options*.{{MLConvTranspose2dOptions/strides}} is greater than 1. + Specifies the padding values applied to each spatial dimension of the output tensor. The explicit padding values are needed to disambiguate the output tensor shape for transposed convolution when the value of {{MLConvTranspose2dOptions/strides}} is greater than 1. Note that these values are only used to disambiguate output shape when needed; they do not necessarily cause any padding value to be written to the output tensor. @@ -2792,12 +2792,12 @@ partial dictionary MLOpSupportLimits {
**Arguments:** - input: an {{MLOperand}}. The input 4-D tensor. The logical shape - is interpreted according to the value of *options*.{{MLConvTranspose2dOptions/inputLayout}}. + is interpreted according to the value of {{MLConvTranspose2dOptions/inputLayout}}. - filter: an {{MLOperand}}. The filter 4-D tensor. The logical shape is - interpreted according to the value of *options*.{{MLConvTranspose2dOptions/filterLayout}} and {{MLConvTranspose2dOptions/groups}}. + interpreted according to the value of {{MLConvTranspose2dOptions/filterLayout}} and {{MLConvTranspose2dOptions/groups}}. - options: an optional {{MLConvTranspose2dOptions}}. - **Returns:** an {{MLOperand}}. The output 4-D tensor that contains the transposed convolution result. The output shape is interpreted according to the *options*.{{MLConvTranspose2dOptions/inputLayout}} value. More specifically, unless the *options*.{{MLConvTranspose2dOptions/outputSizes}} values are explicitly specified, the *options*.{{MLConvTranspose2dOptions/outputPadding}} is needed to compute the spatial dimension values of the output tensor as follows: + **Returns:** an {{MLOperand}}. The output 4-D tensor that contains the transposed convolution result. The output shape is interpreted according to {{MLConvTranspose2dOptions/inputLayout}}. More specifically, unless {{MLConvTranspose2dOptions/outputSizes}} is explicitly specified, {{MLConvTranspose2dOptions/outputPadding}} is needed to compute the spatial dimension values of the output tensor as follows: `outputSize = (inputSize - 1) * stride + (filterSize - 1) * dilation + 1 - beginningPadding - endingPadding + outputPadding`
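A minimal JavaScript sketch of that output-size arithmetic (plain shape math; names and numbers are illustrative):

```js
// Spatial output size of convTranspose2d for one dimension when outputSizes is
// not specified, following the formula above.
function convTranspose2dOutputSize(inputSize, filterSize, dilation, beginningPadding,
                                   endingPadding, stride, outputPadding) {
  return (inputSize - 1) * stride + (filterSize - 1) * dilation + 1 -
      beginningPadding - endingPadding + outputPadding;
}

// e.g. upsampling 14 to 28 with a 2-wide filter, stride 2, no padding:
// convTranspose2dOutputSize(14, 2, 1, 0, 0, 2, 0) === 28
```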
@@ -3534,7 +3534,7 @@ partial dictionary MLOpSupportLimits { - options: an optional {{MLEluOptions}}. The optional parameters of the operation. **Returns:** - - an {{MLOperand}}. The output tensor of the same shape as *input*. + - an {{MLOperand}}. The output tensor of the same shape as {{MLGraphBuilder/elu(input, options)/input}}.
@@ -3701,10 +3701,10 @@ partial dictionary MLOpSupportLimits {
**Arguments:** - input: an {{MLOperand}}. The input N-D tensor from which the values are gathered. - - indices: an {{MLOperand}}. The indices N-D tensor of the input values to gather. The values must be of type {{MLOperandDataType/"int32"}}, {{MLOperandDataType/"uint32"}} or {{MLOperandDataType/"int64"}}, and must be in the range -N (inclusive) to N (exclusive) where N is the size of the input dimension indexed by *options.axis*, and a negative index means indexing from the end of the dimension. + - indices: an {{MLOperand}}. The indices N-D tensor of the input values to gather. The values must be of type {{MLOperandDataType/"int32"}}, {{MLOperandDataType/"uint32"}} or {{MLOperandDataType/"int64"}}, and must be in the range -N (inclusive) to N (exclusive) where N is the size of the input dimension indexed by {{MLGatherOptions/axis}}, and a negative index means indexing from the end of the dimension. - options: an optional {{MLGatherOptions}}. The optional parameters of the operation. - **Returns:** an {{MLOperand}}. The output N-D tensor of [=MLOperand/rank=] equal to the [=MLOperand/rank=] of *input* + the [=MLOperand/rank=] of *indices* - 1. + **Returns:** an {{MLOperand}}. The output N-D tensor of [=MLOperand/rank=] equal to the [=MLOperand/rank=] of {{MLGraphBuilder/gather(input, indices, options)/input}} + the [=MLOperand/rank=] of {{MLGraphBuilder/gather(input, indices, options)/indices}} - 1.
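A small sketch of the resulting shape, assuming the usual gather behavior where the axis dimension of the input is replaced by the shape of the indices (the function name and shapes are illustrative):

```js
// Output shape of gather along `axis`: rank(output) === rank(input) + rank(indices) - 1.
function gatherOutputShape(inputShape, indicesShape, axis) {
  return [...inputShape.slice(0, axis), ...indicesShape, ...inputShape.slice(axis + 1)];
}

// gatherOutputShape([4, 8], [2, 3], 0)  ->  [2, 3, 8]   (rank 2 + 2 - 1 === 3)
```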
@@ -3867,7 +3867,7 @@ partial dictionary MLOpSupportLimits { - options: an {{MLOperatorOptions}}. Specifies the optional parameters of the operation. **Returns:** - - an {{MLOperand}}. The output tensor of the same shape as *input*. + - an {{MLOperand}}. The output tensor of the same shape as {{MLGraphBuilder/gelu(input, options)/input}}.
@@ -3963,7 +3963,7 @@ partial dictionary MLOpSupportLimits {
: c :: - The third input tensor. It is either a scalar, or of the shape that is [=unidirectionally broadcastable=] to the shape *[M, N]*. When it is not specified, the computation is done as if *c* is a scalar 0.0. + The third input tensor. It is either a scalar, or of the shape that is [=unidirectionally broadcastable=] to the shape *[M, N]*. When it is not specified, the computation is done as if {{MLGemmOptions/c}} is a scalar 0.0. : alpha :: @@ -3984,8 +3984,8 @@ partial dictionary MLOpSupportLimits {
**Arguments:** - - a: an {{MLOperand}}. The first input 2-D tensor with shape *[M, K]* if *aTranspose* is false, or *[K, M]* if *aTranspose* is true. - - b: an {{MLOperand}}. The second input 2-D tensor with shape *[K, N]* if *bTranspose* is false, or *[N, K]* if *bTranspose* is true. + - a: an {{MLOperand}}. The first input 2-D tensor with shape *[M, K]* if {{MLGemmOptions/aTranspose}} is false, or *[K, M]* if {{MLGemmOptions/aTranspose}} is true. + - b: an {{MLOperand}}. The second input 2-D tensor with shape *[K, N]* if {{MLGemmOptions/bTranspose}} is false, or *[N, K]* if {{MLGemmOptions/bTranspose}} is true. - options: an optional {{MLGemmOptions}}. The optional parameters of the operation. **Returns:** an {{MLOperand}}. The output 2-D tensor of shape *[M, N]* that contains the calculated product of all the inputs. @@ -4154,11 +4154,11 @@ partial dictionary MLOpSupportLimits {
: bias :: - The 2-D input bias tensor of shape *[numDirections, 3 * hiddenSize]*. The ordering of the bias vectors in the second dimension of the tensor shape is specified according to the {{MLGruOptions/layout}} argument. + The 2-D input bias tensor of shape *[numDirections, 3 * hiddenSize]*. The ordering of the bias vectors in the second dimension of the tensor shape is specified according to {{MLGruOptions/layout}}. : recurrentBias :: - The 2-D recurrent bias tensor of shape *[numDirections, 3 * hiddenSize]*. The ordering of the bias vectors in the second dimension of the tensor shape is specified according to the {{MLGruOptions/layout}} argument. + The 2-D recurrent bias tensor of shape *[numDirections, 3 * hiddenSize]*. The ordering of the bias vectors in the second dimension of the tensor shape is specified according to {{MLGruOptions/layout}}. : initialHiddenState :: @@ -4189,13 +4189,13 @@ partial dictionary MLOpSupportLimits {
**Arguments:** - input: an {{MLOperand}}. The input 3-D tensor of shape *[steps, batchSize, inputSize]*. - - weight: an {{MLOperand}}. The 3-D input weight tensor of shape *[numDirections, 3 * hiddenSize, inputSize]*. The ordering of the weight vectors in the second dimension of the tensor shape is specified according to the |options|.{{MLGruOptions/layout}} argument. - - recurrentWeight: an {{MLOperand}}. The 3-D recurrent weight tensor of shape *[numDirections, 3 * hiddenSize, hiddenSize]*. The ordering of the weight vectors in the second dimension of the tensor shape is specified according to the |options|.{{MLGruOptions/layout}} argument. + - weight: an {{MLOperand}}. The 3-D input weight tensor of shape *[numDirections, 3 * hiddenSize, inputSize]*. The ordering of the weight vectors in the second dimension of the tensor shape is specified according to {{MLGruOptions/layout}}. + - recurrentWeight: an {{MLOperand}}. The 3-D recurrent weight tensor of shape *[numDirections, 3 * hiddenSize, hiddenSize]*. The ordering of the weight vectors in the second dimension of the tensor shape is specified according to {{MLGruOptions/layout}}. - steps: an {{unsigned long}} scalar. The number of time steps in the recurrent network. The value must be greater than 0. - hiddenSize: an {{unsigned long}} scalar. The value of the third dimension of the cell output tensor shape. It indicates the number of features in the hidden state. - options: an optional {{MLGruOptions}}. The optional parameters of the operation. - **Returns:** [=sequence=]<{{MLOperand}}>. The first element is a 3-D tensor of shape *[numDirections, batchSize, hiddenSize]*, the cell output from the last time step of the network. Additionally, if |options|.{{MLGruOptions/returnSequence}} is set to true, the second element is the 4-D output tensor of shape *[steps, numDirections, batchSize, hiddenSize]* containing every cell outputs from each time step in the temporal sequence. + **Returns:** [=sequence=]<{{MLOperand}}>. The first element is a 3-D tensor of shape *[numDirections, batchSize, hiddenSize]*, the cell output from the last time step of the network. Additionally, if {{MLGruOptions/returnSequence}} is set to true, the second element is the 4-D output tensor of shape *[steps, numDirections, batchSize, hiddenSize]* containing every cell outputs from each time step in the temporal sequence.
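To make the return shapes concrete, a hedged sketch (assuming an `MLGraphBuilder` named `builder` and pre-built `input`, `weight` and `recurrentWeight` operands with the shapes listed above; the sizes are illustrative):

```js
// A unidirectional GRU over 2 time steps; shapes follow the argument descriptions above.
// input:           [steps, batchSize, inputSize]
// weight:          [numDirections, 3 * hiddenSize, inputSize]
// recurrentWeight: [numDirections, 3 * hiddenSize, hiddenSize]
const steps = 2, hiddenSize = 8;
const outputs = builder.gru(input, weight, recurrentWeight, steps, hiddenSize,
                            {returnSequence: true});
// outputs[0]: [numDirections, batchSize, hiddenSize]         (last time step)
// outputs[1]: [steps, numDirections, batchSize, hiddenSize]  (only with returnSequence)
```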
@@ -4495,11 +4495,11 @@ partial dictionary MLOpSupportLimits {
: bias :: - The 1-D input bias tensor of shape *[3 * hiddenSize]*. The ordering of the bias vectors in the second dimension of the tensor shape is specified according to the {{MLGruOptions/layout}} argument. + The 1-D input bias tensor of shape *[3 * hiddenSize]*. The ordering of the bias vectors in the first dimension of the tensor shape is specified according to {{MLGruOptions/layout}}. : recurrentBias :: - The 1-D recurrent bias tensor of shape *[3 * hiddenSize]*. The ordering of the bias vectors in the second dimension of the tensor shape is specified according to the {{MLGruOptions/layout}} argument. + The 1-D recurrent bias tensor of shape *[3 * hiddenSize]*. The ordering of the bias vectors in the first dimension of the tensor shape is specified according to {{MLGruOptions/layout}}. : resetAfter :: @@ -4517,8 +4517,8 @@ partial dictionary MLOpSupportLimits {
**Arguments:** - input: an {{MLOperand}}. The input 2-D tensor of shape *[batchSize, inputSize]*. - - weight: an {{MLOperand}}. The 2-D input weight tensor of shape *[3 * hiddenSize, inputSize]*. The ordering of the weight vectors in the first dimension of the tensor shape is specified according to the *options.layout* argument. - - recurrentWeight: an {{MLOperand}}. The 2-D recurrent weight tensor of shape *[3 * hiddenSize, hiddenSize]*. The ordering of the weight vectors in the first dimension of the tensor shape is specified according to the *options.layout* argument. + - weight: an {{MLOperand}}. The 2-D input weight tensor of shape *[3 * hiddenSize, inputSize]*. The ordering of the weight vectors in the first dimension of the tensor shape is specified according to {{MLGruCellOptions/layout}}. + - recurrentWeight: an {{MLOperand}}. The 2-D recurrent weight tensor of shape *[3 * hiddenSize, hiddenSize]*. The ordering of the weight vectors in the first dimension of the tensor shape is specified according to {{MLGruCellOptions/layout}}. - hiddenState: an {{MLOperand}}. The 2-D input hidden state tensor of shape *[batchSize, hiddenSize]*. - hiddenSize: an {{unsigned long}} scalar. The value of the second dimension of the output tensor shape. It indicates the number of features in the hidden state. - options: an optional {{MLGruCellOptions}}. The optional parameters of the operation. @@ -4770,7 +4770,7 @@ partial dictionary MLOpSupportLimits { - options: an optional {{MLHardSigmoidOptions}}. The optional parameters of the operation. **Returns:** - - an {{MLOperand}}. The output tensor of the same shape as *input*. + - an {{MLOperand}}. The output tensor of the same shape as {{MLGraphBuilder/hardSigmoid(input, options)/input}}.
@@ -4855,7 +4855,7 @@ partial dictionary MLOpSupportLimits { - options: an {{MLOperatorOptions}}. Specifies the optional parameters of the operation. **Returns:** - - an {{MLOperand}}. The output tensor of the same shape as *input*. + - an {{MLOperand}}. The output tensor of the same shape as {{MLGraphBuilder/hardSwish(input, options)/input}}.
@@ -4954,11 +4954,11 @@ partial dictionary MLOpSupportLimits {
: scale :: - The 1-D tensor of the scaling values whose [=list/size=] is equal to the number of channels, i.e. the size of the feature dimension of the input. For example, for an |input| tensor with {{MLInputOperandLayout/"nchw"}} layout, the [=list/size=] is equal to |input|'s [=MLOperand/shape=][1]. + The 1-D tensor of the scaling values whose [=list/size=] is equal to the number of channels, i.e. the size of the feature dimension of the input. For example, for an {{MLGraphBuilder/instanceNormalization(input, options)/input}} tensor with {{MLInputOperandLayout/"nchw"}} layout, the [=list/size=] is equal to {{MLGraphBuilder/instanceNormalization(input, options)/input}}'s [=MLOperand/shape=][1]. : bias :: - The 1-D tensor of the bias values whose [=list/size=] is equal to the size of the feature dimension of the input. For example, for an |input| tensor with {{MLInputOperandLayout/"nchw"}} layout, the [=list/size=] is equal to |input|'s [=MLOperand/shape=][1]. + The 1-D tensor of the bias values whose [=list/size=] is equal to the size of the feature dimension of the input. For example, for an {{MLGraphBuilder/instanceNormalization(input, options)/input}} tensor with {{MLInputOperandLayout/"nchw"}} layout, the [=list/size=] is equal to {{MLGraphBuilder/instanceNormalization(input, options)/input}}'s [=MLOperand/shape=][1]. : epsilon :: @@ -4975,7 +4975,7 @@ partial dictionary MLOpSupportLimits { - input: an {{MLOperand}}. The input 4-D tensor. - options: an optional {{MLInstanceNormalizationOptions}}. The optional parameters of the operation. - **Returns:** an {{MLOperand}}. The instance-normalized 4-D tensor of the same shape as *input*. + **Returns:** an {{MLOperand}}. The instance-normalized 4-D tensor of the same shape as {{MLGraphBuilder/instanceNormalization(input, options)/input}}.
@@ -5109,15 +5109,15 @@ partial dictionary MLOpSupportLimits {
: scale :: - The N-D tensor of the scaling values whose shape is determined by the |axes| member in that each value in |axes| indicates the dimension of the input tensor with scaling values. For example, for an |axes| values of [1,2,3], the shape of this tensor is the list of the corresponding sizes of the input dimension 1, 2 and 3. When this member is not present, the scaling value is assumed to be 1. + The N-D tensor of the scaling values whose shape is determined by the {{MLLayerNormalizationOptions/axes}} member in that each value in {{MLLayerNormalizationOptions/axes}} indicates the dimension of the input tensor with scaling values. For example, for an {{MLLayerNormalizationOptions/axes}} values of [1,2,3], the shape of this tensor is the list of the corresponding sizes of the input dimension 1, 2 and 3. When this member is not present, the scaling value is assumed to be 1. : bias :: - The N-D tensor of the bias values whose shape is determined by the |axes| member in that each value in |axes| indicates the dimension of the input tensor with bias values. For example, for an |axes| values of [1,2,3], the shape of this tensor is the list of the corresponding sizes of the input dimension 1, 2 and 3. When this member is not present, the bias value is assumed to be 0. + The N-D tensor of the bias values whose shape is determined by the {{MLLayerNormalizationOptions/axes}} member in that each value in {{MLLayerNormalizationOptions/axes}} indicates the dimension of the input tensor with bias values. For example, for an {{MLLayerNormalizationOptions/axes}} values of [1,2,3], the shape of this tensor is the list of the corresponding sizes of the input dimension 1, 2 and 3. When this member is not present, the bias value is assumed to be 0. : axes :: - The indices to the input dimensions to reduce. When this member is not present, it is treated as if all dimensions except the first were given (e.g. for a 4-D input tensor, axes = [1,2,3]). That is, the reduction for the mean and variance values are calculated across all the input features for each independent batch. If empty, no dimensions are reduced. + The indices to the input dimensions to reduce. When this member is not present, it is treated as if all dimensions except the first were given (e.g. for a 4-D input tensor, {{MLLayerNormalizationOptions/axes}} = [1,2,3]). That is, the reduction for the mean and variance values are calculated across all the input features for each independent batch. If empty, no dimensions are reduced. : epsilon :: A small value to prevent computational error due to divide-by-zero. @@ -5128,7 +5128,7 @@ partial dictionary MLOpSupportLimits { - input: an {{MLOperand}}. The input N-D tensor. - options: an optional {{MLLayerNormalizationOptions}}. The optional parameters of the operation. - **Returns:** an {{MLOperand}}. The layer-normalized N-D tensor of the same shape as *input*. + **Returns:** an {{MLOperand}}. The layer-normalized N-D tensor of the same shape as {{MLGraphBuilder/layerNormalization(input, options)/input}}.
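A hedged sketch of the axes default (assuming an `MLGraphBuilder` named `builder` and a pre-built 4-D `input` operand):

```js
// For a 4-D input, omitting axes behaves as if axes were [1, 2, 3]: mean and variance
// are computed over all features of each batch item. Passing them explicitly is
// equivalent; an empty axes list reduces nothing.
const normalized = builder.layerNormalization(input, {axes: [1, 2, 3]});
```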
@@ -5263,7 +5263,7 @@ partial dictionary MLOpSupportLimits { - options: an optional {{MLLeakyReluOptions}}. The optional parameters of the operation. **Returns:** - - an {{MLOperand}}. The output tensor of the same shape as *input*. + - an {{MLOperand}}. The output tensor of the same shape as {{MLGraphBuilder/leakyRelu(input, options)/input}}.
@@ -5361,7 +5361,7 @@ partial dictionary MLOpSupportLimits { - options: an optional {{MLLinearOptions}}. The optional parameters of the operation. **Returns:** - - an {{MLOperand}}. The output tensor of the same shape as *input*. + - an {{MLOperand}}. The output tensor of the same shape as {{MLGraphBuilder/linear(input, options)/input}}.
@@ -5514,13 +5514,13 @@ partial dictionary MLOpSupportLimits {
**Arguments:** - input: an {{MLOperand}}. The input 3-D tensor of shape *[steps, batchSize, inputSize]*. - - weight: an {{MLOperand}}. The 3-D input weight tensor of shape *[numDirections, 4 * hiddenSize, inputSize]*. The ordering of the weight vectors in the second dimension of the tensor shape is specified according to the |options|.{{MLLstmOptions/layout}}. - - recurrentWeight: an {{MLOperand}}. The 3-D recurrent weight tensor of shape *[numDirections, 4 * hiddenSize, hiddenSize]*. The ordering of the weight vectors in the second dimension of the tensor shape is specified according to the |options|.{{MLLstmOptions/layout}} argument. + - weight: an {{MLOperand}}. The 3-D input weight tensor of shape *[numDirections, 4 * hiddenSize, inputSize]*. The ordering of the weight vectors in the second dimension of the tensor shape is specified according to {{MLLstmOptions/layout}}. + - recurrentWeight: an {{MLOperand}}. The 3-D recurrent weight tensor of shape *[numDirections, 4 * hiddenSize, hiddenSize]*. The ordering of the weight vectors in the second dimension of the tensor shape is specified according to {{MLLstmOptions/layout}}. - steps: an {{unsigned long}} scalar. The number of time steps in the recurrent network. The value must be greater than 0. - hiddenSize: an {{unsigned long}} scalar. The value of the third dimension of the cell output tensor shape. It indicates the number of features in the hidden state. - options: an optional {{MLLstmOptions}}. The optional parameters of the operation. - **Returns:** [=sequence=]<{{MLOperand}}>. The first element is a 3-D tensor of shape *[numDirections, batchSize, hiddenSize]*, the output hidden state from the last time step of the network. The second element is a 3-D tensor of shape *[numDirections, batchSize, hiddenSize]*, the output cell state from the last time step of the network. Additionally, if |options|.{{MLLstmOptions/returnSequence}} is set to true, the third element is the 4-D output tensor of shape *[steps, numDirections, batchSize, hiddenSize]* containing every output from each time step in the temporal sequence. + **Returns:** [=sequence=]<{{MLOperand}}>. The first element is a 3-D tensor of shape *[numDirections, batchSize, hiddenSize]*, the output hidden state from the last time step of the network. The second element is a 3-D tensor of shape *[numDirections, batchSize, hiddenSize]*, the output cell state from the last time step of the network. Additionally, if {{MLLstmOptions/returnSequence}} is set to true, the third element is the 4-D output tensor of shape *[steps, numDirections, batchSize, hiddenSize]* containing every output from each time step in the temporal sequence.
@@ -5883,11 +5883,11 @@ partial dictionary MLOpSupportLimits {
: bias :: - The 1-D input bias tensor of shape *[4 * hiddenSize]*. The ordering of the bias vectors in the first dimension of the tensor shape is specified according to the {{MLLstmCellOptions/layout}} argument. + The 1-D input bias tensor of shape *[4 * hiddenSize]*. The ordering of the bias vectors in the first dimension of the tensor shape is specified according to {{MLLstmCellOptions/layout}}. : recurrentBias :: - The 1-D recurrent bias tensor of shape *[4 * hiddenSize]*. The ordering of the bias vectors in the first dimension of the tensor shape is specified according to the {{MLLstmCellOptions/layout}} argument. + The 1-D recurrent bias tensor of shape *[4 * hiddenSize]*. The ordering of the bias vectors in the first dimension of the tensor shape is specified according to {{MLLstmCellOptions/layout}}. : peepholeWeight :: @@ -5905,8 +5905,8 @@ partial dictionary MLOpSupportLimits {
**Arguments:** - input: an {{MLOperand}}. The input 2-D tensor of shape *[batchSize, inputSize]*. - - weight: an {{MLOperand}}. The 2-D input weight tensor of shape *[4 * hiddenSize, inputSize]*. The ordering of the weight vectors in the first dimension of the tensor shape is specified according to the *options.layout* argument. - - recurrentWeight: an {{MLOperand}}. The 2-D recurrent weight tensor of shape *[4 * hiddenSize, hiddenSize]*. The ordering of the weight vectors in the first dimension of the tensor shape is specified according to the *options.layout* argument. + - weight: an {{MLOperand}}. The 2-D input weight tensor of shape *[4 * hiddenSize, inputSize]*. The ordering of the weight vectors in the first dimension of the tensor shape is specified according to {{MLLstmCellOptions/layout}}. + - recurrentWeight: an {{MLOperand}}. The 2-D recurrent weight tensor of shape *[4 * hiddenSize, hiddenSize]*. The ordering of the weight vectors in the first dimension of the tensor shape is specified according to {{MLLstmCellOptions/layout}}. - hiddenState: an {{MLOperand}}. The 2-D input hidden state tensor of shape *[batchSize, hiddenSize]*. - cellState: an {{MLOperand}}. The 2-D input cell state tensor of shape *[batchSize, hiddenSize]*. - hiddenSize: an {{unsigned long}} scalar. The value of the second dimension of the output tensor shape. It indicates the number of features in the hidden state. @@ -6204,9 +6204,9 @@ partial dictionary MLOpSupportLimits {
Computes the matrix product of two input tensors as follows: - - If both *a* and *b* are 2-dimensional, they are multiplied like conventional + - If both {{MLGraphBuilder/matmul(a, b, options)/a}} and {{MLGraphBuilder/matmul(a, b, options)/b}} are 2-dimensional, they are multiplied like conventional matrices and produce a 2-dimensional tensor as the output. - - If either *a* or *b* is `N`-dimensional where `N > 2`, it is treated as a stack of matrices with dimensions corresponding to the last two indices. The matrix multiplication will be [=broadcast=] according to [[!numpy-broadcasting-rule]]. The shapes of *a* and *b*, except the last two dimensions, must be [=bidirectionally broadcastable=]. The output is a `N`-dimensional tensor whose rank is the maximum [=MLOperand/rank=] of the input tensors. For each dimension, except the last two, of the output tensor, its size is the maximum size along that dimension of the input tensors. + - If either {{MLGraphBuilder/matmul(a, b, options)/a}} or {{MLGraphBuilder/matmul(a, b, options)/b}} is `N`-dimensional where `N > 2`, it is treated as a stack of matrices with dimensions corresponding to the last two indices. The matrix multiplication will be [=broadcast=] according to [[!numpy-broadcasting-rule]]. The shapes of {{MLGraphBuilder/matmul(a, b, options)/a}} and {{MLGraphBuilder/matmul(a, b, options)/b}}, except the last two dimensions, must be [=bidirectionally broadcastable=]. The output is a `N`-dimensional tensor whose rank is the maximum [=MLOperand/rank=] of the input tensors. For each dimension, except the last two, of the output tensor, its size is the maximum size along that dimension of the input tensors.
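A couple of illustrative shape combinations for the rule above (assuming an `MLGraphBuilder` named `builder` and pre-built operands `a` and `b`; the shapes are made up):

```js
// Batch dimensions (everything but the last two) broadcast; the last two behave as
// an ordinary [M, K] x [K, N] matrix multiplication.
//   a: [2, 3, 4]    and b: [4, 5]      ->  output: [2, 3, 5]
//   a: [8, 1, 3, 4] and b: [6, 4, 5]   ->  output: [8, 6, 3, 5]
const c = builder.matmul(a, b);
```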
@@ -6322,8 +6322,8 @@ partial dictionary MLOpSupportLimits {
**Arguments:** - input: an {{MLOperand}}. The input tensor. - - beginningPadding: [=sequence=]<{{unsigned long}}>. The number of padding values to add at the beginning of each input dimension, of length *N* where *N* is the [=MLOperand/rank=] of the input tensor. For each dimension *d* of *input*, *beginningPadding[d]* indicates how many values to add before the content in that dimension. - - endingPadding: [=sequence=]<{{unsigned long}}>. The number of padding values to add at the ending of each input dimension, of length *N* where *N* is the [=MLOperand/rank=] of the input tensor. For each dimension *d* of *input*, *endingPadding[d]* indicates how many values to add after the content in that dimension. + - beginningPadding: [=sequence=]<{{unsigned long}}>. The number of padding values to add at the beginning of each input dimension, of length *N* where *N* is the [=MLOperand/rank=] of the input tensor. For each dimension *d* of {{MLGraphBuilder/pad(input, beginningPadding, endingPadding, options)/input}}, {{MLGraphBuilder/pad(input, beginningPadding, endingPadding, options)/beginningPadding}}[*d*] indicates how many values to add before the content in that dimension. + - endingPadding: [=sequence=]<{{unsigned long}}>. The number of padding values to add at the ending of each input dimension, of length *N* where *N* is the [=MLOperand/rank=] of the input tensor. For each dimension *d* of {{MLGraphBuilder/pad(input, beginningPadding, endingPadding, options)/input}}, {{MLGraphBuilder/pad(input, beginningPadding, endingPadding, options)/endingPadding}}[*d*] indicates how many values to add after the content in that dimension. - options: an optional {{MLPadOptions}}. The optional parameters of the operation. **Returns:** an {{MLOperand}}. The padded output tensor. Each dimension of the output tensor can be calculated as follows: @@ -6516,16 +6516,16 @@ partial dictionary MLOpSupportLimits {
**Arguments:** - input: an {{MLOperand}}. The input 4-D tensor. The logical shape - is interpreted according to the value of *options.layout*. + is interpreted according to the value of {{MLPool2dOptions/layout}}. - options: an optional {{MLPool2dOptions}}. The optional parameters of the operation. **Returns:** an {{MLOperand}}. The output 4-D tensor that contains the result of the reduction. The logical shape is interpreted according to the - value of *layout*. More specifically, if the *options.roundingType* is {{MLRoundingType/"floor"}}, the spatial dimensions of the output tensor can be calculated as follows: + value of {{MLPool2dOptions/layout}}. More specifically, if the {{MLPool2dOptions/roundingType}} is {{MLRoundingType/"floor"}}, the spatial dimensions of the output tensor can be calculated as follows: `output size = floor(1 + (input size - filter size + beginning padding + ending padding) / stride)` - or if *options.roundingType* is {{MLRoundingType/"ceil"}}: + or if {{MLPool2dOptions/roundingType}} is {{MLRoundingType/"ceil"}}: `output size = ceil(1 + (input size - filter size + beginning padding + ending padding) / stride)`
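A minimal JavaScript sketch of the rounding behavior (plain shape math; names and numbers are illustrative):

```js
// Output spatial size of pool2d for one dimension, per the formulas above;
// roundingType selects between "floor" and "ceil".
function pool2dOutputSize(inputSize, filterSize, beginningPadding, endingPadding,
                          stride, roundingType) {
  const size = 1 + (inputSize - filterSize + beginningPadding + endingPadding) / stride;
  return roundingType === 'ceil' ? Math.ceil(size) : Math.floor(size);
}

// pool2dOutputSize(8, 3, 0, 0, 2, 'floor') === 3
// pool2dOutputSize(8, 3, 0, 0, 2, 'ceil')  === 4
```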
@@ -6700,11 +6700,11 @@ partial dictionary MLOpSupportLimits {
**Arguments:** - input: an {{MLOperand}}. The input tensor. - - slope: an {{MLOperand}}. The slope tensor. Its shape must be [=bidirectionally broadcastable=] to the shape of *input*. + - slope: an {{MLOperand}}. The slope tensor. Its shape must be [=bidirectionally broadcastable=] to the shape of {{MLGraphBuilder/prelu(input, slope, options)/input}}. - options: an {{MLOperatorOptions}}. Specifies the optional parameters of the operation. **Returns:** - - an {{MLOperand}}. The output tensor of the same shape as *input*. + - an {{MLOperand}}. The output tensor of the same shape as {{MLGraphBuilder/prelu(input, slope, options)/input}}.
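An illustrative per-channel use of the broadcasting rule (assuming an `MLGraphBuilder` named `builder`, an "nchw"-shaped operand `x`, and a `slope` operand; the shapes are made up):

```js
// For an input of shape [N, C, H, W], a per-channel slope can be supplied with
// shape [1, C, 1, 1], which is bidirectionally broadcastable to the input shape;
// the output keeps the input's shape.
const y = builder.prelu(x, slope);
```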
@@ -6784,7 +6784,7 @@ partial dictionary MLOpSupportLimits { ### Reduction operations ### {#api-mlgraphbuilder-reduce} -Reduce the input tensor along all dimensions, or along the axes specified in the {{MLReduceOptions/axes}} array parameter. For each specified axis, the dimension with that index is reduced, i.e. the resulting tensor will not contain it, unless the {{MLReduceOptions/keepDimensions}} option is specified. The values of the resulting tensor are calculated using the specified reduction function that takes as parameters all the input values across the reduced dimensions. +Reduce the input tensor along all dimensions, or along the axes specified in the {{MLReduceOptions/axes}} array parameter. For each specified axis, the dimension with that index is reduced, i.e. the resulting tensor will not contain it, unless {{MLReduceOptions/keepDimensions}} is specified. The values of the resulting tensor are calculated using the specified reduction function that takes as parameters all the input values across the reduced dimensions.
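A hedged sketch of how axes and keepDimensions shape the result, using reduceSum as a representative reduction (assuming an `MLGraphBuilder` named `builder` and a pre-built `input` operand of shape [2, 3, 4]):

```js
const s0 = builder.reduceSum(input, {axes: [1]});                        // shape [2, 4]
const s1 = builder.reduceSum(input, {axes: [1], keepDimensions: true});  // shape [2, 1, 4]
const s2 = builder.reduceSum(input);                                     // shape [], all dimensions reduced
```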