address krista's comments
status authored and iscai-msft committed Feb 3, 2021
1 parent 8866ba4 commit ed075f4
Showing 6 changed files with 24 additions and 32 deletions.
@@ -1172,7 +1172,7 @@ class AnalyzeBatchActionsType(str, Enum):
EXTRACT_KEY_PHRASES = "extract_key_phrases" #: Key Phrase Extraction action.

class AnalyzeBatchActionsResult(DictMixin):
- """RecognizeEntitiesActionResult contains the results of a recognize entities action
+ """AnalyzeBatchActionsResult contains the results of a recognize entities action
on a list of documents. Returned by `begin_analyze_batch_actions`
:ivar document_results: A list of objects containing results for all Entity Recognition actions
@@ -1206,12 +1206,13 @@ class RecognizeEntitiesAction(DictMixin):
If you just want to recognize entities in a list of documents, and not perform a batch
of long running actions on the input of documents, call method `recognize_entities` instead
- of interfacting with this model.
+ of interfacing with this model.
- :ivar str model_version: The model version to use for the analysis.
- :ivar str string_index_type: Specifies the method used to interpret string offsets.
- Can be one of 'UnicodeCodePoint' (default), 'Utf16CodePoint', or 'TextElements_v8'.
- For additional information see https://aka.ms/text-analytics-offsets
+ :keyword str model_version: The model version to use for the analysis.
+ :keyword str string_index_type: Specifies the method used to interpret string offsets.
+ `UnicodeCodePoint`, the Python encoding, is the default. To override the Python default,
+ you can also pass in `Utf16CodePoint` or TextElements_v8`. For additional information
+ see https://aka.ms/text-analytics-offsets
"""

def __init__(self, **kwargs):
@@ -1237,14 +1238,15 @@ class RecognizePiiEntitiesAction(DictMixin):
If you just want to recognize pii entities in a list of documents, and not perform a batch
of long running actions on the input of documents, call method `recognize_pii_entities` instead
- of interfacting with this model.
+ of interfacing with this model.
- :ivar str model_version: The model version to use for the analysis.
- :ivar str domain_filter: An optional string to set the PII domain to include only a
+ :keyword str model_version: The model version to use for the analysis.
+ :keyword str domain_filter: An optional string to set the PII domain to include only a
subset of the PII entity categories. Possible values include 'PHI' or None.
- :ivar str string_index_type: Specifies the method used to interpret string offsets.
- Can be one of 'UnicodeCodePoint' (default), 'Utf16CodePoint', or 'TextElements_v8'.
- For additional information see https://aka.ms/text-analytics-offsets
+ :keyword str string_index_type: Specifies the method used to interpret string offsets.
+ `UnicodeCodePoint`, the Python encoding, is the default. To override the Python default,
+ you can also pass in `Utf16CodePoint` or TextElements_v8`. For additional information
+ see https://aka.ms/text-analytics-offsets
"""

def __init__(self, **kwargs):
@@ -1272,9 +1274,9 @@ class ExtractKeyPhrasesAction(DictMixin):
If you just want to extract key phrases from a list of documents, and not perform a batch
of long running actions on the input of documents, call method `extract_key_phrases` instead
- of interfacting with this model.
+ of interfacing with this model.
- :ivar str model_version: The model version to use for the analysis.
+ :keyword str model_version: The model version to use for the analysis.
"""

def __init__(self, **kwargs):
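The docstring changes above describe the `string_index_type` offset conventions (`UnicodeCodePoint` vs `Utf16CodePoint`). A minimal, SDK-free sketch of why the two conventions can disagree (all names here are illustrative, not part of the azure-ai-textanalytics API):

```python
# A character outside the Basic Multilingual Plane (e.g. an emoji) is one
# Unicode code point but two UTF-16 code units (a surrogate pair), so
# offsets reported in the two conventions can differ for the same text.
def offsets(text: str, target: str):
    """Return (code_point_offset, utf16_offset) of `target` in `text`."""
    cp = text.index(target)                           # offset in code points
    utf16 = len(text[:cp].encode("utf-16-le")) // 2   # offset in UTF-16 units
    return cp, utf16

text = "I 👍 Azure"
print(offsets(text, "Azure"))  # (4, 5): the emoji shifts the UTF-16 offset by one
```

This is why the docstring calls `UnicodeCodePoint` "the Python encoding": Python string indices count code points, so service offsets in that convention can be used directly for slicing.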
@@ -273,7 +273,7 @@ def analyze_paged_result(doc_id_order, task_order, analyze_status_callback, _, o
functools.partial(lro_get_next_page, analyze_status_callback, obj, show_stats=show_stats),
functools.partial(analyze_extract_page_data, doc_id_order, task_order, response_headers),
statistics=TextDocumentBatchStatistics._from_generated(obj.statistics) \
- if show_stats and obj.statistics is not None else None # pylint: disable=protected-access
+ if (show_stats and obj.statistics) else None # pylint: disable=protected-access
)

def _get_deserialize():
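The hunk above swaps `obj.statistics is not None` for a plain truthiness check. The behavioral difference is that a falsy-but-not-None statistics value now also short-circuits to `None`. A self-contained sketch of that change (the `wrap` stand-in is mine, not the SDK's `TextDocumentBatchStatistics._from_generated`):

```python
def pick_stats(show_stats, statistics):
    """Mirror the new condition: wrap only if both flags are truthy."""
    wrap = lambda s: ("wrapped", s)   # stand-in for the SDK's deserializer
    return wrap(statistics) if (show_stats and statistics) else None

print(pick_stats(True, {"document_count": 3}))   # ('wrapped', {'document_count': 3})
print(pick_stats(True, {}))                      # None: falsy stats now skipped too
print(pick_stats(False, {"document_count": 3}))  # None: stats not requested
```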
@@ -764,7 +764,7 @@ def begin_analyze_batch_actions( # type: ignore
:param actions: A heterogeneous list of actions to perform on the inputted documents.
Each action object encapsulates the parameters used for the particular action type.
The outputted action results will be in the same order you inputted your actions.
- Can not put duplicate actions into list.
+ Duplicate actions in list not supported.
:type actions:
list[RecognizeEntitiesAction or RecognizePiiEntitiesAction or ExtractKeyPhrasesAction]
:keyword str display_name: An optional display name to set for the requested analysis.
@@ -781,12 +781,7 @@
the actions were sent in this method.
:rtype:
~azure.core.polling.LROPoller[~azure.core.paging.ItemPaged[
- list[
- ~azure.ai.textanalytics.RecognizeEntitiesActionResult or
- ~azure.ai.textanalytics.RecognizePiiEntitiesActionResult or
- ~azure.ai.textanalytics.ExtractKeyPhrasesActionResult
- ]
- ]]
+ ~azure.ai.textanalytics.AnalyzeBatchActionsResult]]
:raises ~azure.core.exceptions.HttpResponseError or TypeError or ValueError or NotImplementedError:
.. admonition:: Example:
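The reworded docstring above says duplicate actions in the list are not supported. A hedged, SDK-free sketch of how a caller could pre-validate an actions list before submitting (the classes here are empty stand-ins for the real SDK action types, and this helper is illustrative, not the SDK's own validation):

```python
from collections import Counter

class RecognizeEntitiesAction:   # stand-in for the SDK action class
    pass

class ExtractKeyPhrasesAction:   # stand-in for the SDK action class
    pass

def check_no_duplicate_actions(actions):
    """Raise if the same action type appears more than once in the list."""
    counts = Counter(type(action).__name__ for action in actions)
    duplicates = sorted(name for name, n in counts.items() if n > 1)
    if duplicates:
        raise ValueError("Duplicate actions in list not supported: " + ", ".join(duplicates))
    return actions

# One action of each type is fine; two RecognizeEntitiesAction would raise.
check_no_duplicate_actions([RecognizeEntitiesAction(), ExtractKeyPhrasesAction()])
```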
@@ -753,7 +753,7 @@ async def begin_analyze_batch_actions( # type: ignore
:param actions: A heterogeneous list of actions to perform on the inputted documents.
Each action object encapsulates the parameters used for the particular action type.
The outputted action results will be in the same order you inputted your actions.
- Can not put duplicate actions into list.
+ Duplicate actions in list not supported.
:type actions:
list[RecognizeEntitiesAction or RecognizePiiEntitiesAction or ExtractKeyPhrasesAction]
:keyword str display_name: An optional display name to set for the requested analysis.
@@ -769,13 +769,8 @@
object to return a pageable heterogeneous list of the action results in the order
the actions were sent in this method.
:rtype:
- AsyncLROPoller[AsyncItemPaged[]
- list[
- ~azure.ai.textanalytics.RecognizeEntitiesActionResult or
- ~azure.ai.textanalytics.RecognizePiiEntitiesActionResult or
- ~azure.ai.textanalytics.ExtractKeyPhrasesActionResult
- ]
- ]]
+ ~azure.core.polling.AsyncLROPoller[~azure.core.async_paging.AsyncItemPaged[
+ ~azure.ai.textanalytics.AnalyzeBatchActionsResult]]
:raises ~azure.core.exceptions.HttpResponseError or TypeError or ValueError or NotImplementedError:
.. admonition:: Example:
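The corrected async return type above is a poller that completes to pageable results in submission order. A pure-asyncio sketch of that long-running-operation shape (everything here, including `FakePoller`, is a stand-in and not the azure-core API):

```python
import asyncio

class FakePoller:
    """SDK-free stand-in for a poller that completes to paged action results."""
    def __init__(self, pages):
        self._pages = pages
        self._done = False

    async def result(self):
        while not self._done:        # pretend to poll the service for completion
            await asyncio.sleep(0)
            self._done = True
        # flatten the pages, preserving the order the actions were submitted in
        return [item for page in self._pages for item in page]

async def main():
    poller = FakePoller([["entities_result", "pii_result"], ["key_phrases_result"]])
    return await poller.result()

results = asyncio.run(main())
print(results)  # ['entities_result', 'pii_result', 'key_phrases_result']
```

Because results come back in submission order, a caller can pair each result with the action that produced it by position.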
@@ -11,7 +11,7 @@
DESCRIPTION:
This sample demonstrates how to submit a collection of text documents for analysis, which consists of a variety
- of text analysis actions, such as Entity Recognition, PII Entity Recognition, Entity Linking, Sentiment Analysis,
+ of text analysis actions, such as Entity Recognition, PII Entity Recognition,
or Key Phrase Extraction. The response will contain results from each of the individual actions specified in the request.
USAGE:
@@ -11,7 +11,7 @@
DESCRIPTION:
This sample demonstrates how to submit a collection of text documents for analysis, which consists of a variety
- of text analysis actions, such as Entity Recognition, PII Entity Recognition, Entity Linking, Sentiment Analysis,
+ of text analysis actions, such as Entity Recognition, PII Entity Recognition,
or Key Phrase Extraction. The response will contain results from each of the individual actions specified in the request.
USAGE:
