Observability - Python SDK
The observability section of the Temporal Developer's guide covers the many ways to view the current state of your Temporal Application—that is, ways to view which Workflow Executions are tracked by the Temporal Platform and the state of any specified Workflow Execution, either currently or at points of an execution.
This section covers features related to viewing the state of the application, including:
- Emit metrics
- Set up tracing
- Log from a Workflow
- Visibility APIs
Emit metrics
Each Temporal SDK is capable of emitting an optional set of metrics from either the Client or the Worker process. For a complete list of metrics capable of being emitted, see the SDK metrics reference.
Metrics can be scraped and stored in time series databases such as Prometheus.
Temporal also provides a dashboard you can integrate with graphing services like Grafana. For more information, see:
- Temporal's implementation of the Grafana dashboard
- How to export metrics in Grafana
Metrics in Python are configured globally; therefore, you should set a Prometheus endpoint before any other Temporal code.
The following example exposes a Prometheus endpoint on port 9000.
from temporalio.client import Client
from temporalio.runtime import Runtime, TelemetryConfig, PrometheusConfig

# Create a new runtime that has telemetry enabled. Create this first to avoid
# the default Runtime from being lazily created.
new_runtime = Runtime(telemetry=TelemetryConfig(metrics=PrometheusConfig(bind_address="0.0.0.0:9000")))
my_client = await Client.connect("my.temporal.host:7233", runtime=new_runtime)
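Workers built from this Client share its Runtime, so their metrics are served from the same Prometheus endpoint. A minimal sketch, assuming a GreetingWorkflow definition and a task queue name of your choosing:

from temporalio.worker import Worker

# Workers created from my_client reuse its Runtime, so Worker metrics are
# exposed on the same endpoint (port 9000 in this example).
worker = Worker(
    my_client,
    task_queue="metrics-task-queue",
    workflows=[GreetingWorkflow],
)
await worker.run()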
Set up tracing
Tracing allows you to view the call graph of a Workflow along with its Activities and any Child Workflows.
Temporal Web's tracing capabilities mainly track Activity Execution within a Temporal context. If you need custom tracing specific to your use case, you should make use of context propagation to add tracing logic accordingly.
To configure tracing in Python, install the opentelemetry dependencies.
# This command installs the `opentelemetry` dependencies.
pip install temporalio[opentelemetry]
Then the temporalio.contrib.opentelemetry.TracingInterceptor class can be set as an interceptor by passing it in the interceptors argument of Client.connect().
When your Client is connected, spans are created for all Client calls, Activities, and Workflow invocations on the Worker. Spans are created and serialized through the server to give one trace for a Workflow Execution.
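As a minimal sketch, assuming a local Temporal Service and the default namespace, connecting with the tracing interceptor might look like this:

from temporalio.client import Client
from temporalio.contrib.opentelemetry import TracingInterceptor

async def connect_with_tracing() -> Client:
    # By default the interceptor uses the globally configured OpenTelemetry
    # tracer provider. Workers created from this Client also pick up the
    # interceptor, so Workflow and Activity spans are emitted on the Worker.
    return await Client.connect(
        "localhost:7233",
        namespace="default",
        interceptors=[TracingInterceptor()],
    )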
Log from a Workflow
Send logs and errors to a logging service, so that when things go wrong, you can see what happened.
The SDK core uses WARN for its default logging level.
You can log from a Workflow using Python's standard library logging module.
Set your logging configuration to the level at which you want logs to be emitted. The following example sets the logging level to INFO.
import logging

logging.basicConfig(level=logging.INFO)
Then, in your Workflow, log through the Workflow's logger at the level you need. The following example logs a message from the Workflow.
View the source code in the context of the rest of the application code.
# ...
workflow.logger.info("Workflow input parameter: %s" % name)
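For context, a self-contained sketch of a Workflow that logs its input could look like the following; the GreetingWorkflow body here is illustrative, not the guide's full sample:

from temporalio import workflow

@workflow.defn
class GreetingWorkflow:
    @workflow.run
    async def run(self, name: str) -> str:
        # workflow.logger is a LoggerAdapter that appends Workflow context,
        # such as the Workflow ID and Run ID, to each log record.
        workflow.logger.info("Workflow input parameter: %s" % name)
        return f"Hello, {name}!"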
Custom logger
Use Python's built-in logging facility to configure a custom logger.
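A sketch using only the standard library follows; the format string is an arbitrary choice, not an SDK requirement:

import logging
import sys

# Attach a custom handler and formatter to the root logger. Records emitted
# through workflow.logger and activity.logger propagate up the standard
# logging hierarchy, so they are rendered with this format as well.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(
    logging.Formatter("%(asctime)s [%(levelname)s] %(name)s: %(message)s")
)

root_logger = logging.getLogger()
root_logger.setLevel(logging.INFO)
root_logger.addHandler(handler)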
Visibility APIs
The term Visibility, within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view Workflow Executions that currently exist within a Temporal Service.
Use Search Attributes
The typical method of retrieving a Workflow Execution is by its Workflow Id.
However, sometimes you'll want to retrieve one or more Workflow Executions based on another property. For example, imagine you want to get all Workflow Executions of a certain type that have failed within a time range, so that you can start new ones with the same arguments.
You can do this with Search Attributes.
- Default Search Attributes like WorkflowType, StartTime, and ExecutionStatus are automatically added to Workflow Executions.
- Custom Search Attributes can contain their own domain-specific data (like customerId or numItems).
  - A few generic Custom Search Attributes like CustomKeywordField and CustomIntField are created by default in Temporal's Docker Compose.
The steps to using custom Search Attributes are:
- Create a new Search Attribute in your Temporal Service using the Temporal CLI or Web UI.
  - For example: temporal operator search-attribute create --name CustomKeywordField --type Text
    - Replace CustomKeywordField with the name of your Search Attribute.
    - Replace Text with a type value associated with your Search Attribute: Text | Keyword | Int | Double | Bool | Datetime | KeywordList
- Set the value of the Search Attribute for a Workflow Execution:
  - On the Client by including it as an option when starting the Execution.
  - In the Workflow by calling upsert_search_attributes.
- Read the value of the Search Attribute (see the sketch after this list):
  - On the Client by calling DescribeWorkflow.
  - In the Workflow by looking at WorkflowInfo.
- Query Workflow Executions by the Search Attribute using a List Filter:
  - In the Temporal CLI.
  - In code by calling ListWorkflowExecutions.
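For the Workflow-side read, a hedged sketch follows; it assumes a CustomerId Keyword attribute like the one used later in this guide, a hypothetical ReadAttributesWorkflow, and the typed_search_attributes view exposed by recent SDK versions:

from temporalio import workflow
from temporalio.common import SearchAttributeKey

customer_id_key = SearchAttributeKey.for_keyword("CustomerId")

@workflow.defn
class ReadAttributesWorkflow:
    @workflow.run
    async def run(self) -> str:
        # Read the current value of the Search Attribute from the Workflow's info.
        value = workflow.info().typed_search_attributes.get(customer_id_key)
        workflow.logger.info("CustomerId is currently: %s", value)
        return value or ""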
Here is how to query Workflow Executions:
Use the list_workflows() method on the Client handle and pass a List Filter as an argument to filter the listed Workflows.
View the source code in the context of the rest of the application code.
# ...
async for workflow in client.list_workflows('WorkflowType="GreetingWorkflow"'):
    print(f"Workflow: {workflow.id}")
How to set custom Search Attributes
After you've created custom Search Attributes in your Temporal Service (using temporal operator search-attribute create or the Cloud UI), you can set the values of the custom Search Attributes when starting a Workflow.
Use SearchAttributeKey to create typed keys for your Search Attributes. Then, when starting a Workflow Execution with client.start_workflow(), include the Custom Search Attributes by passing a TypedSearchAttributes object built from SearchAttributePair instances, each holding a key and its starting value, to the search_attributes parameter.
If you had Custom Search Attributes CustomerId of type Keyword and MiscData of type Text, you could provide these starting values:
from temporalio.common import SearchAttributeKey, SearchAttributePair, TypedSearchAttributes

customer_id_key = SearchAttributeKey.for_keyword("CustomerId")
misc_data_key = SearchAttributeKey.for_text("MiscData")
handle = await client.start_workflow(
    GreetingWorkflow.run,
    id="search-attributes-workflow-id",
    task_queue="search-attributes-task-queue",
    search_attributes=TypedSearchAttributes([
        SearchAttributePair(customer_id_key, "customer_1"),
        SearchAttributePair(misc_data_key, "customer_1_data")
    ]),
)
In this example, CustomerId and MiscData are set as Search Attributes. These attributes are useful for querying Workflow Executions based on domain-specific data, such as retrieving all Workflows for a given customer ID.
Upsert Search Attributes
You can upsert Search Attributes to add or update Search Attributes from within Workflow code.
To upsert custom Search Attributes, call the upsert_search_attributes() method with a TypedSearchAttributes() object containing SearchAttributePair() instances for each key and its new value:
workflow.upsert_search_attributes(TypedSearchAttributes([
SearchAttributePair(customer_id_key, "customer_2")
]))
Remove a Search Attribute from a Workflow
To remove a Search Attribute that was previously set, set it to an empty array: [].
workflow.upsert_search_attributes(TypedSearchAttributes([
SearchAttributePair(customer_id_key, [])
]))
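Depending on your installed temporalio version, typed Search Attribute keys may also offer a value_unset() update for removal; treat the following as a sketch to verify against your SDK version rather than a guaranteed signature:

# A hedged alternative: newer SDK releases model removal as a typed "unset"
# update passed to upsert_search_attributes. Verify against your SDK version.
workflow.upsert_search_attributes([customer_id_key.value_unset()])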