RPA.Cloud.Azure
Detect facial attributes in the image
Arguments
Argument | Type | Default value | Description |
---|---|---|---|
image_file | str, None | None | filepath of image file |
image_url | str, None | None | URI to image, if given will be used instead of image_file |
face_attributes | str, None | None | comma-separated list of attributes, for example "age,gender,smile" |
face_landmarks | bool | False | return face landmarks of the detected faces or not. The default value is False |
recognition_model | str | recognition_02 | model used by Azure to detect faces; options are "recognition_01" or "recognition_02", default is "recognition_02" |
json_file | str, None | None | filepath to write results into |
Returns: analysis in JSON format
Read more about face_attributes in the Face detection explained article (a usage sketch follows the attribute list):
- age
- gender
- smile
- facialHair
- headPose
- glasses
- emotion
- hair
- makeup
- accessories
- blur
- exposure
- noise
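A minimal Python sketch of face detection with a few of the attributes above. The `Azure` class and the snake_case method names (`init_face_service`, `detect_face`) are assumed here, not taken from this page, and the region is only an example value:

```python
from RPA.Cloud.Azure import Azure  # class name assumed

azure = Azure()
# Initialize the Face service first (region is an example value).
azure.init_face_service(region="northeurope")

# Detect faces and request a subset of the documented attributes.
result = azure.detect_face(            # method name assumed from the keyword
    image_file="faces.jpg",
    face_attributes="age,emotion,glasses",
    face_landmarks=False,
    recognition_model="recognition_02",
    json_file="face_analysis.json",    # also write the raw JSON to disk
)
print(result)
```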
Detect languages in the given text
Arguments
Argument | Type | Default value | Description |
---|---|---|---|
text | str | null | A UTF-8 text string |
json_file | str, None | None | filepath to write results into |
Returns: analysis in JSON format
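A hedged Python sketch of language detection; the class and method names (`Azure`, `init_text_analytics_service`, `detect_language`) are assumed, and the region is only an example:

```python
from RPA.Cloud.Azure import Azure  # class and method names assumed

azure = Azure()
azure.init_text_analytics_service(region="westeurope")  # example region

# Detect the language of a short text and also persist the raw response.
analysis = azure.detect_language(
    text="Tämä on suomenkielinen lause.",
    json_file="language.json",
)
print(analysis)
```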
Detect entities in the given text
Arguments
Argument | Type | Default value | Description |
---|---|---|---|
text | str | null | A UTF-8 text string |
language | str, None | None | if input language is known |
json_file | str, None | None | filepath to write results into |
Returns: analysis in JSON format
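A sketch of entity detection under the same naming assumptions (`detect_entities` is an assumed method name); passing `language` skips automatic language detection:

```python
from RPA.Cloud.Azure import Azure  # names assumed

azure = Azure()
azure.init_text_analytics_service(region="westeurope")

# `language` is optional; set it when the input language is already known.
entities = azure.detect_entities(
    text="Microsoft was founded by Bill Gates and Paul Allen in Albuquerque.",
    language="en",
)
print(entities)
```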
Initialize Azure Computer Vision
Arguments
Argument | Type | Default value | Description |
---|---|---|---|
region | str, None | None | identifier for service region |
use_robocorp_vault | bool | False | use secret stored into Robocorp Vault |
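The three other Init keywords below take the same region and use_robocorp_vault arguments. A sketch of both initialization styles, assuming the class is `Azure`, the method is `init_vision_service`, and that credentials are otherwise available to the library (for example via environment variables, whose names are not documented here):

```python
from RPA.Cloud.Azure import Azure  # class name assumed

azure = Azure()

# Option 1: plain initialization; the subscription key must be available
# to the library by other means (exact mechanism not documented here).
azure.init_vision_service(region="westeurope")

# Option 2: read the credentials from Robocorp Vault instead.
azure.set_robocorp_vault(vault_name="azure")   # vault name is an example
azure.init_vision_service(region="westeurope", use_robocorp_vault=True)
```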
Initialize Azure Face
Arguments
Argument | Type | Default value | Description |
---|---|---|---|
region | str, None | None | identifier for service region |
use_robocorp_vault | bool | False | use secret stored into Robocorp Vault |
Initialize Azure Speech
Arguments
Argument | Type | Default value | Description |
---|---|---|---|
region | str, None | None | identifier for service region |
use_robocorp_vault | bool | False | use secret stored into Robocorp Vault |
Initialize Azure Text Analytics
Arguments
Argument | Type | Default value | Description |
---|---|---|---|
region | str, None | None | identifier for service region |
use_robocorp_vault | bool | False | use secret stored into Robocorp Vault |
Detect key phrases in the given text
Arguments
Argument | Type | Default value | Description |
---|---|---|---|
text | str | null | A UTF-8 text string |
language | str, None | None | if input language is known |
json_file | str, None | None | filepath to write results into |
Returns: analysis in JSON format
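A sketch of key phrase extraction, again assuming `init_text_analytics_service` and `key_phrases` as method names:

```python
from RPA.Cloud.Azure import Azure  # names assumed

azure = Azure()
azure.init_text_analytics_service(region="westeurope")

# Extract key phrases and keep the raw response in a file.
phrases = azure.key_phrases(
    text="The food was delicious and the staff were wonderful.",
    language="en",
    json_file="key_phrases.json",
)
print(phrases)
```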
List supported voices for Azure Speech Services.
Arguments
Argument | Type | Default value | Description |
---|---|---|---|
locale | str, None | None | list only voices specific to locale, by default return all voices |
neural_only | bool | False | True if only neural voices should be returned, False by default |
json_file | str, None | None | filepath to write results into |
Returns: voices in JSON format
Available voice selection might differ between regions.
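A sketch of listing voices, assuming `init_speech_service` and `list_supported_voices` as method names; locale and region are example values:

```python
from RPA.Cloud.Azure import Azure  # names assumed

azure = Azure()
azure.init_speech_service(region="northeurope")  # voice availability varies by region

# List only neural voices for a single locale and store the raw JSON.
voices = azure.list_supported_voices(
    locale="en-US",
    neural_only=True,
    json_file="voices.json",
)
print(voices)
```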
Analyze sentiments in the given text
Arguments
Argument | Type | Default value | Description |
---|---|---|---|
text | str | null | A UTF-8 text string |
language | str, None | None | if input language is known |
json_file | str, None | None | filepath to write results into |
Returns: analysis in JSON format
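A sketch of sentiment analysis under the same assumed names (`sentiment_analyze`):

```python
from RPA.Cloud.Azure import Azure  # names assumed

azure = Azure()
azure.init_text_analytics_service(region="westeurope")

sentiment = azure.sentiment_analyze(
    text="The hotel was clean, but the service was disappointingly slow.",
    language="en",
)
print(sentiment)
```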
Set Robocorp Vault name
Arguments
Argument | Type | Default value | Description |
---|---|---|---|
vault_name | | null | Robocorp Vault name |
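A sketch of pointing the library at a vault entry before initializing a service; `set_robocorp_vault` is the assumed method name and "azure" is only an example vault name:

```python
from RPA.Cloud.Azure import Azure  # names assumed

azure = Azure()

# Tell the library which vault entry holds the Azure subscription keys,
# then let the service initializer read its secret from it.
azure.set_robocorp_vault(vault_name="azure")
azure.init_text_analytics_service(region="westeurope", use_robocorp_vault=True)
```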
Synthesize speech synchronously
Arguments
Argument | Type | Default value | Description |
---|---|---|---|
text | str | null | input text to synthesize |
language | str | en-US | voice language, defaults to "en-US" |
name | str | en-US-AriaRUS | voice name, defaults to "en-US-AriaRUS" |
gender | str | FEMALE | voice gender, defaults to "FEMALE" |
encoding | str | MP3 | result encoding type, defaults to "MP3" |
neural_voice_style | Any, None | None | if given, a neural voice is used with this style, for example "cheerful" |
target_file | str | synthesized.mp3 | save synthesized output to file, defaults to "synthesized.mp3" |
Returns: synthesized output in bytes
Neural voices are only supported for Speech resources created in East US, South East Asia, and West Europe regions.
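A sketch of speech synthesis with a neural voice style, assuming `init_speech_service` and `text_to_speech` as method names and using the documented defaults for voice and encoding:

```python
from RPA.Cloud.Azure import Azure  # names assumed

azure = Azure()
azure.init_speech_service(region="westeurope")  # a region that supports neural voices

audio = azure.text_to_speech(
    text="Hello from the RPA.Cloud.Azure library.",
    language="en-US",
    name="en-US-AriaRUS",
    gender="FEMALE",
    encoding="MP3",
    neural_voice_style="cheerful",   # omit to use a standard (non-neural) voice
    target_file="synthesized.mp3",
)
print(f"Received {len(audio)} bytes of audio")
```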
Identify features in the image
Arguments
Argument | Type | Default value | Description |
---|---|---|---|
image_file | str, None | None | filepath of image file |
image_url | str, None | None | URI to image, if given will be used instead of image_file |
visual_features | str, None | None | comma-separated list of features, for example "Categories,Description,Color" |
json_file | str, None | None | filepath to write results into |
Returns: analysis in JSON format
See the Computer Vision API documentation for valid feature names and their explanations (a usage sketch follows the list):
- Adult
- Brands
- Categories
- Color
- Description
- Faces
- ImageType
- Objects
- Tags
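A sketch of image analysis with a subset of the features above, assuming `init_vision_service` and `vision_analyze` as method names:

```python
from RPA.Cloud.Azure import Azure  # names assumed

azure = Azure()
azure.init_vision_service(region="westeurope")

# Analyze a local file; pass `image_url` instead to analyze a remote image.
analysis = azure.vision_analyze(
    image_file="products.png",
    visual_features="Categories,Description,Color",
    json_file="analysis.json",
)
print(analysis)
```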
Describe image with tags and captions
Arguments
Argument | Type | Default value | Description |
---|---|---|---|
image_file | str, None | None | filepath of image file |
image_url | str, None | None | URI to image, if given will be used instead of image_file |
json_file | str, None | None | filepath to write results into |
Returns: analysis in JSON format
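A sketch of describing a remote image by URL, assuming `vision_describe` as the method name:

```python
from RPA.Cloud.Azure import Azure  # names assumed

azure = Azure()
azure.init_vision_service(region="westeurope")

# When image_url is given, it is used instead of a local image_file.
description = azure.vision_describe(
    image_url="https://example.com/office.jpg",
)
print(description)
```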
Detect objects in the image
Arguments
Argument | Type | Default value | Description |
---|---|---|---|
image_file | str, None | None | filepath of image file |
image_url | str, None | None | URI to image, if given will be used instead of image_file |
json_file | str, None | None | filepath to write results into |
Returns: analysis in JSON format
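A sketch of object detection, assuming `vision_detect_objects` as the method name:

```python
from RPA.Cloud.Azure import Azure  # names assumed

azure = Azure()
azure.init_vision_service(region="westeurope")

objects = azure.vision_detect_objects(
    image_file="warehouse.png",
    json_file="objects.json",
)
print(objects)
```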
Optical Character Recognition (OCR) detects text in an image
Arguments
Argument | Type | Default value | Description |
---|---|---|---|
image_file | str, None | None | filepath of image file |
image_url | str, None | None | URI to image, if given will be used instead of image_file |
json_file | str, None | None | filepath to write results into |
Returns: analysis in JSON format
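A sketch of OCR on a local image, assuming `vision_ocr` as the method name:

```python
from RPA.Cloud.Azure import Azure  # names assumed

azure = Azure()
azure.init_vision_service(region="westeurope")

# Extract text from a scanned receipt and keep the raw response for inspection.
ocr_result = azure.vision_ocr(
    image_file="receipt.png",
    json_file="receipt_ocr.json",
)
print(ocr_result)
```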