robocorp-tasks

Logging customization

Following the Getting Started section should be enough to get comprehensive logging for all user code executed, as well as for calls into libraries in site-packages and the Python libs (which, by default, are only logged when called from user code; internal calls inside the library itself are not shown).

It's possible to change how libraries or user code are logged by customizing log_filter_rules in a [tool.robocorp.log] section of pyproject.toml.

There are three different logging configurations that may be applied to each module (a combined sketch follows the list):

  • exclude: excludes a module from logging.
  • full_log (default for user code): logs a module with full information, such as method calls, arguments, yields, local assigns, and more.
  • log_on_project_call (default for library code -- since 2.0): logs only method calls, arguments, return values, and exceptions, and only when a library method is called from user code. This configuration is meant for library modules (in site-packages or the Python lib).
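
For instance, a single configuration can combine all three kinds. The sketch below uses hypothetical module names (myproject and myproject.vendored); selenium is just one example of a library in site-packages:

[tool.robocorp.log]
log_filter_rules = [
    {name = "myproject.vendored", kind = "exclude"},
    {name = "myproject", kind = "full_log"},
    {name = "selenium", kind = "log_on_project_call"},
]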

Example showing how to exclude from logging any user module whose name ends with producer:

[tool.robocorp.log]
log_filter_rules = [
    {name = "*producer", kind = "exclude"},
]

By default, libraries in site-packages and the Python lib are configured as log_on_project_call, but this default can be changed through default_library_filter_kind.

Example of pyproject.toml where the rpaframework and selenium libraries are configured to be logged and all other libraries in site-packages/python lib are excluded by default:

[tool.robocorp.log]
log_filter_rules = [
    {name = "RPA", kind = "log_on_project_call"},
    {name = "selenium", kind = "log_on_project_call"},
    {name = "SeleniumLibrary", kind = "log_on_project_call"},
]
default_library_filter_kind = "exclude"

Note that when specifying a module name to match in log_filter_rules, a rule matches either the exact module name or any submodule of it (i.e., the module name followed by a dot).

This means that, for example, RPA would match RPA.Browser, but not RPAmodule nor another.RPA.

As of robocorp-tasks 2.0, it's also possible to use fnmatch style names (where * matches anything and ? matches any single char -- see: https://docs.python.org/3/library/fnmatch.html for more information).

For example:

[tool.robocorp.log]
log_filter_rules = [
    {name = "proj.*", kind = "full_log"},
    {name = "proj[AB]", kind = "full_log"},
]

Note that the order of the rules matters: rules that appear earlier are matched before those that appear later.
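
For instance, to exclude a single submodule while logging the rest of a package in full, the more specific rule has to come first (a sketch; proj and proj.noisy are hypothetical module names):

[tool.robocorp.log]
log_filter_rules = [
    {name = "proj.noisy", kind = "exclude"},
    {name = "proj", kind = "full_log"},
]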

Log output customization

By default, the log output will be saved to an output directory, where each file can be up to 1MB and up to 5 files are kept before old ones are deleted. When the run finishes, a log.html file will be created in the output directory containing the log viewer with the log contents embedded.

However, you can customize the log output by changing the output directory, maximum number of log files to keep, and maximum size of each output file. You can do this through the command line by passing the appropriate arguments when running python -m robocorp.tasks run.

For example, to change the output directory to my_output, run:

python -m robocorp.tasks run path/to/tasks.py -o my_output

You can also set the maximum number of output files to keep by passing --max-log-files followed by a number. For example, to keep up to 10 log files, run:

python -m robocorp.tasks run path/to/tasks.py --max-log-files 10

Finally, you can set the maximum size of each output file by passing --max-log-file-size followed by a size (e.g., 2MB or 1000kb).

For example, to set the maximum size of each output file to 500kb, run:

python -m robocorp.tasks run path/to/tasks.py --max-log-file-size 500kb
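
These options can also be combined in a single invocation (using the same example values as above):

python -m robocorp.tasks run path/to/tasks.py -o my_output --max-log-files 10 --max-log-file-size 500kb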

Setups and teardowns

The library provides two fixture decorators, setup and teardown, that can be used to call arbitrary functions before and after tasks are run.

The fixtures receive as arguments the task objects that the library has parsed from the tasks file, which allows a fixture to behave differently based on which task is about to be executed, or on whether a task failed or passed.

This is useful for having shared initialization or clean-up steps for all tasks, or for ensuring that a function is always executed when implementing libraries.

Scopes

The fixtures can optionally be configured with a scope that determines whether a fixture runs once before or after all tasks, or once for each task. Valid values are task and session.

For task scopes (default), the argument is the single task that will be or was run. For session scopes, the argument to the fixture is a list of tasks.
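
As a sketch of the difference (this assumes teardown accepts the same scope argument as setup, as described above):

from robocorp.tasks import setup, task, teardown

@setup  # task scope (the default): receives the single task about to run
def before_each(tsk):
    print(f"About to run: {tsk.name}")

@teardown(scope="session")  # session scope: receives the list of all tasks that ran
def after_all(tasks):
    failed = [t.name for t in tasks if t.failed]
    print(f"Ran {len(tasks)} task(s); failed: {failed}")

@task
def example_task():
    ...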

Task objects

A Task object is the internal representation of a parsed task in a Python file. It contains, for instance, the filename and lineno of the function, but also the status and optional message after it has been executed.
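
For instance, a teardown fixture can inspect these fields once a task finishes (a sketch; it assumes the fields are exposed as attributes with the names used below):

from robocorp.tasks import task, teardown

@teardown
def report(task):
    # Attribute names assumed to mirror the fields described above.
    print(f"{task.name} ({task.filename}:{task.lineno}) finished with status: {task.status}")
    if task.failed:
        print(f"Message: {task.message}")

@task
def some_task():
    ...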

Example usage

from robocorp import browser
from robocorp.tasks import setup, task, teardown

@setup(scope="session")
def configure_browser(tasks):
    browser.configure(headless=True, screenshot="only-on-failure")

@teardown
def handle_error(task):
    if task.failed:
        print(f"Task '{task.name}' failed: {task.message}")

@task
def scrape_website():
    ...

@task
def process_data():
    ...

Yielding fixtures

Sometimes it's necessary to access a resource created in a setup inside a teardown fixture. In these cases, it's possible to create a setup fixture that yields.

The library will call the fixture up until the yield statement, execute the task, and then call the rest of the fixture (in reverse order):

import time

from robocorp.tasks import setup, task

@setup
def measure_time(task):
    start = time.time()
    yield  # Task executes here
    duration = time.time() - start
    print(f"Task {task.name} took {duration} seconds")

@task
def my_long_task():
    ...