RPA.Cloud.Azure

Detect facial attributes in the image

Arguments

Argument           Type       Default value   Description
image_file         str, None  None            filepath of image file
image_url          str, None  None            URI to image; if given, it is used instead of image_file
face_attributes    str, None  None            comma-separated list of attributes, for example "age,gender,smile"
face_landmarks     bool       False           return face landmarks of the detected faces or not, defaults to False
recognition_model  str        recognition_02  model used by Azure to detect faces, either "recognition_01" or "recognition_02", defaults to "recognition_02"
json_file          str, None  None            filepath to write results into

Returns: analysis in JSON format

Read more about face_attributes in Azure's "Face detection explained" documentation:

  • age
  • gender
  • smile
  • facialHair
  • headPose
  • glasses
  • emotion
  • hair
  • makeup
  • accessories
  • blur
  • exposure
  • noise
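
A minimal Python sketch of calling this keyword, assuming it maps to the detect_face method of the RPA.Cloud.Azure.Azure class and that the Face subscription key is available to the library (for example via Robocorp Vault); the image file name is a placeholder:

  from RPA.Cloud.Azure import Azure

  azure = Azure()
  # The region must match the Azure Face resource; the key is read from
  # Robocorp Vault because use_robocorp_vault is True.
  azure.init_face_service(region="westeurope", use_robocorp_vault=True)

  # Request a subset of face attributes and also store the raw result.
  result = azure.detect_face(
      image_file="people.jpg",
      face_attributes="age,gender,emotion",
      json_file="face_analysis.json",
  )
  print(result)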

Detect languages in the given text

Arguments

Argument   Type       Default value  Description
text       str        null           A UTF-8 text string
json_file  str, None  None           filepath to write results into

Returns: analysis in JSON format
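
A short Python sketch, assuming the keyword maps to the detect_language method and that language detection is served by the Text Analytics service initialized with init_text_analytics_service:

  from RPA.Cloud.Azure import Azure

  azure = Azure()
  azure.init_text_analytics_service(region="westeurope", use_robocorp_vault=True)

  # Returns the detected language(s) with confidence scores.
  result = azure.detect_language("Tämä on suomenkielinen lause.")
  print(result)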

Detect entities in the given text

Arguments

Argument   Type       Default value  Description
text       str        null           A UTF-8 text string
language   str, None  None           if input language is known
json_file  str, None  None           filepath to write results into

Returns: analysis in JSON format
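
A sketch of entity detection from Python, assuming the keyword maps to the detect_entities method and that the Text Analytics service has been initialized first:

  from RPA.Cloud.Azure import Azure

  azure = Azure()
  azure.init_text_analytics_service(region="westeurope", use_robocorp_vault=True)

  # The language hint is optional but can improve recognition quality.
  entities = azure.detect_entities(
      text="Microsoft was founded by Bill Gates and Paul Allen.",
      language="en",
      json_file="entities.json",
  )
  print(entities)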

Initialize Azure Computer Vision

Arguments

Argument            Type       Default value  Description
region              str, None  None           identifier for service region
use_robocorp_vault  bool       False          use a secret stored in Robocorp Vault
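
A sketch of initializing the service from Python, assuming the keyword maps to the init_computer_vision_service method; with use_robocorp_vault set to True the subscription key is expected to come from Robocorp Vault (see Set Robocorp Vault name below):

  from RPA.Cloud.Azure import Azure

  azure = Azure()
  # The region should match the region of the Computer Vision resource.
  azure.init_computer_vision_service(
      region="northeurope",
      use_robocorp_vault=True,
  )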

Initialize Azure Face

Arguments

Argument            Type       Default value  Description
region              str, None  None           identifier for service region
use_robocorp_vault  bool       False          use a secret stored in Robocorp Vault

Initialize Azure Speech

Arguments

Argument            Type       Default value  Description
region              str, None  None           identifier for service region
use_robocorp_vault  bool       False          use a secret stored in Robocorp Vault

Initialize Azure Text Analytics

Arguments

Argument            Type       Default value  Description
region              str, None  None           identifier for service region
use_robocorp_vault  bool       False          use a secret stored in Robocorp Vault

Detect key phrases in the given text

Arguments

Argument   Type       Default value  Description
text       str        null           A UTF-8 text string
language   str, None  None           if input language is known
json_file  str, None  None           filepath to write results into

Returns: analysis in JSON format
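
A sketch, assuming the keyword maps to the detect_key_phrases method of the initialized Text Analytics service:

  from RPA.Cloud.Azure import Azure

  azure = Azure()
  azure.init_text_analytics_service(region="westeurope", use_robocorp_vault=True)

  # Extracts the main talking points from the text.
  phrases = azure.detect_key_phrases(
      text="The food was delicious and the staff were wonderful.",
      language="en",
  )
  print(phrases)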

List supported voices for the Azure Speech Services API.

Arguments

Argument     Type       Default value  Description
locale       str, None  None           list only voices specific to locale, by default all voices are returned
neural_only  bool       False          True if only neural voices should be returned, defaults to False
json_file    str, None  None           filepath to write results into

Returns: voices in JSON

Available voice selection might differ between regions.
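
A sketch, assuming the keyword maps to the list_supported_voices method and that the Speech service has been initialized:

  from RPA.Cloud.Azure import Azure

  azure = Azure()
  azure.init_speech_service(region="westeurope", use_robocorp_vault=True)

  # List only neural voices for a single locale and store the raw response.
  voices = azure.list_supported_voices(
      locale="en-US",
      neural_only=True,
      json_file="voices.json",
  )
  print(voices)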

Analyze sentiments in the given text

Arguments

Argument   Type       Default value  Description
text       str        null           A UTF-8 text string
language   str, None  None           if input language is known
json_file  str, None  None           filepath to write results into

Returns: analysis in JSON format
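
A sketch, assuming the keyword maps to the sentiment_analyze method of the initialized Text Analytics service:

  from RPA.Cloud.Azure import Azure

  azure = Azure()
  azure.init_text_analytics_service(region="westeurope", use_robocorp_vault=True)

  # Sentiment scores are returned in the JSON analysis.
  sentiment = azure.sentiment_analyze(
      text="The delivery was late, but the support team resolved it quickly.",
      language="en",
  )
  print(sentiment)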

Set Robocorp Vault name

Arguments

Argument    Type  Default value  Description
vault_name        null           Robocorp Vault name
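
A sketch, assuming the keyword maps to the set_robocorp_vault method; the vault name used here is only an example:

  from RPA.Cloud.Azure import Azure

  azure = Azure()
  # Point the library at the vault entry that holds the Azure service keys
  # ("azure_secrets" is a placeholder name).
  azure.set_robocorp_vault("azure_secrets")
  azure.init_text_analytics_service(use_robocorp_vault=True)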

Synthesize speech synchronously

Arguments

Argument            Type       Default value    Description
text                str        null             input text to synthesize
language            str        en-US            voice language, defaults to "en-US"
name                str        en-US-AriaRUS    voice name, defaults to "en-US-AriaRUS"
gender              str        FEMALE           voice gender, defaults to "FEMALE"
encoding            str        MP3              result encoding type, defaults to "MP3"
neural_voice_style  Any, None  None             if given, a neural voice is used with this style, for example "cheerful"
target_file         str        synthesized.mp3  save synthesized output to file, defaults to "synthesized.mp3"

Returns: synthesized output in bytes

Neural voices are only supported for Speech resources created in East US, South East Asia, and West Europe regions.
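
A sketch, assuming the keyword maps to the text_to_speech method; the neural voice name below is an example Azure voice and must be available in the chosen region:

  from RPA.Cloud.Azure import Azure

  azure = Azure()
  azure.init_speech_service(region="westeurope", use_robocorp_vault=True)

  # The audio is returned as bytes and also written to target_file.
  audio = azure.text_to_speech(
      text="Your order has been shipped.",
      language="en-US",
      name="en-US-AriaNeural",
      neural_voice_style="cheerful",
      target_file="order_update.mp3",
  )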

Identify features in the image

Arguments

Argument         Type       Default value  Description
image_file       str, None  None           filepath of image file
image_url        str, None  None           URI to image; if given, it is used instead of image_file
visual_features  str, None  None           comma-separated list of features, for example "Categories,Description,Color"
json_file        str, None  None           filepath to write results into

Returns: analysis in JSON format

See Computer Vision API for valid feature names and their explanations:

  • Adult
  • Brands
  • Categories
  • Color
  • Description
  • Faces
  • ImageType
  • Objects
  • Tags
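
A sketch, assuming the keyword maps to the vision_analyze method and that the Computer Vision service has been initialized; the image URL is a placeholder:

  from RPA.Cloud.Azure import Azure

  azure = Azure()
  azure.init_computer_vision_service(region="northeurope", use_robocorp_vault=True)

  # Analyze a remote image for a chosen set of visual features.
  analysis = azure.vision_analyze(
      image_url="https://example.com/storefront.jpg",
      visual_features="Categories,Description,Color",
      json_file="analysis.json",
  )
  print(analysis)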

Describe image with tags and captions

Arguments

Argument    Type       Default value  Description
image_file  str, None  None           filepath of image file
image_url   str, None  None           URI to image; if given, it is used instead of image_file
json_file   str, None  None           filepath to write results into

Returns: analysis in JSON format
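
A sketch, assuming the keyword maps to the vision_describe method of the initialized Computer Vision service; the image file name is a placeholder:

  from RPA.Cloud.Azure import Azure

  azure = Azure()
  azure.init_computer_vision_service(region="northeurope", use_robocorp_vault=True)

  # Produces human-readable captions and tags for a local image file.
  description = azure.vision_describe(image_file="storefront.jpg")
  print(description)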

Detect objects in the image

Arguments

Argument    Type       Default value  Description
image_file  str, None  None           filepath of image file
image_url   str, None  None           URI to image; if given, it is used instead of image_file
json_file   str, None  None           filepath to write results into

Returns: analysis in JSON format
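
A sketch, assuming the keyword maps to the vision_detect_objects method of the initialized Computer Vision service; the image file name is a placeholder:

  from RPA.Cloud.Azure import Azure

  azure = Azure()
  azure.init_computer_vision_service(region="northeurope", use_robocorp_vault=True)

  # Detected objects are returned with bounding-box coordinates.
  objects = azure.vision_detect_objects(
      image_file="warehouse.jpg",
      json_file="objects.json",
  )
  print(objects)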

Optical Character Recognition (OCR) detects text in an image

Arguments

Argument    Type       Default value  Description
image_file  str, None  None           filepath of image file
image_url   str, None  None           URI to image; if given, it is used instead of image_file
json_file   str, None  None           filepath to write results into

Returns: analysis in JSON format
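
A sketch, assuming the keyword maps to the vision_ocr method of the initialized Computer Vision service; the image URL is a placeholder:

  from RPA.Cloud.Azure import Azure

  azure = Azure()
  azure.init_computer_vision_service(region="northeurope", use_robocorp_vault=True)

  # Extract printed text from an image; the raw OCR response is returned
  # and optionally written to json_file.
  ocr_result = azure.vision_ocr(
      image_url="https://example.com/receipt.png",
      json_file="ocr.json",
  )
  print(ocr_result)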