
database_observability.postgres

Public preview: This is a public preview component. Public preview components are subject to breaking changes, and may be replaced with equivalent functionality that covers the same use case. To enable and use a public preview component, you must set the stability.level flag to public-preview or below.
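
For example, if you run Alloy from the command line, you can allow public preview components by passing the stability flag. The configuration file path below is a placeholder.

alloy run --stability.level=public-preview /etc/alloy/config.alloy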

Usage

Alloy
database_observability.postgres "<LABEL>" {
  data_source_name = <DATA_SOURCE_NAME>
  forward_to       = [<LOKI_RECEIVERS>]
  targets          = <TARGET_LIST>
}

Arguments

You can use the following arguments with database_observability.postgres:

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| data_source_name | secret | Data Source Name for the Postgres server to connect to. | | yes |
| forward_to | list(LogsReceiver) | Where to forward log entries after processing. | | yes |
| targets | list(map(string)) | List of targets to scrape. | | yes |
| disable_collectors | list(string) | A list of collectors to disable from the default set. | | no |
| enable_collectors | list(string) | A list of collectors to enable on top of the default set. | | no |
| exclude_databases | list(string) | A list of databases to exclude from monitoring. | | no |
| watermark_path | string | Path to the watermark file for tracking processed logs. | <data_path>/dbo11y_pg_logs_watermark.txt | no |
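
For example, a minimal configuration that uses the required arguments plus one of the optional ones might look like the following sketch. The connection string and the excluded databases are placeholders, and the referenced loki.write and prometheus.exporter.postgres components must be defined elsewhere in your configuration.

Alloy
database_observability.postgres "example" {
  data_source_name = "postgres://user:pass@localhost:5432/dbname"
  forward_to       = [loki.write.logs_service.receiver]
  targets          = prometheus.exporter.postgres.example.targets

  // Optional: skip system template databases.
  exclude_databases = ["template0", "template1"]
}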

Exports

The following fields are exported and can be referenced by other components:

| Name | Type | Description |
| ---- | ---- | ----------- |
| targets | list(map(string)) | Targets that can be used to collect metrics from the component. |
| logs_receiver | LogsReceiver | Receiver for PostgreSQL logs that processes and exports error metrics. |
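
As a sketch of how these exports are typically wired up (the component labels are placeholders; the Example section below shows the full pipeline):

Alloy
// Scrape the metrics targets exported by the component.
prometheus.scrape "db_metrics" {
  targets    = database_observability.postgres.example.targets
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}

// Feed PostgreSQL log lines into the exported logs receiver.
loki.source.file "db_logs" {
  targets    = [{ __path__ = "/var/log/postgresql/postgresql-*.log" }]
  forward_to = [database_observability.postgres.example.logs_receiver]
}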

The following collectors are configurable:

| Name | Description | Enabled by default |
| ---- | ----------- | ------------------ |
| query_details | Collect query information. | yes |
| query_samples | Collect query samples and wait events information. | yes |
| schema_details | Collect schemas, tables, and columns from PostgreSQL system catalogs. | yes |
| explain_plans | Collect query explain plans. | yes |
| logs | Process PostgreSQL logs and export error metrics. | yes |
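
You can adjust this set with the enable_collectors and disable_collectors arguments. The following sketch is illustrative only; which collectors you enable or disable depends on your needs, and the connection details are placeholders.

Alloy
database_observability.postgres "example" {
  data_source_name = "postgres://user:pass@localhost:5432/dbname"
  forward_to       = [loki.write.logs_service.receiver]
  targets          = prometheus.exporter.postgres.example.targets

  // Turn off explain plan collection and explicitly keep query samples on.
  disable_collectors = ["explain_plans"]
  enable_collectors  = ["query_samples"]
}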

Blocks

You can use the following blocks with database_observability.postgres:

| Block | Description | Required |
| ----- | ----------- | -------- |
| cloud_provider | Provide Cloud Provider information. | no |
| cloud_provider > aws | Provide AWS database host information. | no |
| cloud_provider > azure | Provide Azure database host information. | no |
| query_details | Configure the queries collector. | no |
| query_samples | Configure the query samples collector. | no |
| schema_details | Configure the schema and table details collector. | no |
| explain_plans | Configure the explain plans collector. | no |
| health_check | Configure the health check collector. | no |

The > symbol indicates deeper levels of nesting. For example, cloud_provider > aws refers to an aws block defined inside a cloud_provider block.

cloud_provider

The cloud_provider block has no attributes. It contains zero or more aws or azure blocks. You use the cloud_provider block to provide information related to the cloud provider that hosts the database under observation. This information is appended as labels to the collected metrics. The labels make it easier for you to filter and group your metrics.

aws

The aws block supplies the ARN identifier for the database being monitored.

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| arn | string | The ARN associated with the database under observation. | | yes |

azure

The azure block supplies the identifying information for the database being monitored.

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| subscription_id | string | The Subscription ID for your Azure account. | | yes |
| resource_group | string | The Resource Group that holds the database resource. | | yes |
| server_name | string | The database server name. | | no |
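
For example, for a database hosted on Azure, the block might look like the following sketch. The subscription ID, resource group, and server name are placeholders.

Alloy
cloud_provider {
  azure {
    subscription_id = "00000000-0000-0000-0000-000000000000"
    resource_group  = "my-resource-group"
    server_name     = "my-postgres-server"
  }
}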

query_details

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| collect_interval | duration | How frequently to collect information from the database. | "1m" | no |

query_samples

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| collect_interval | duration | How frequently to collect information from the database. | "15s" | no |
| disable_query_redaction | bool | Collect unredacted SQL query text (might include parameters). | false | no |
| exclude_current_user | bool | Do not collect query samples for the current database user. | true | no |

schema_details

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| collect_interval | duration | How frequently to collect information from the database. | "1m" | no |
| cache_enabled | boolean | Whether to enable caching of table definitions. | true | no |
| cache_size | integer | Cache size. | 256 | no |
| cache_ttl | duration | Cache TTL. | "10m" | no |

explain_plans

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| collect_interval | duration | How frequently to collect information from the database. | "1m" | no |
| per_collect_ratio | float64 | The ratio of queries to collect explain plans for. | 1.0 | no |

health_check

| Name | Type | Description | Default | Required |
| ---- | ---- | ----------- | ------- | -------- |
| collect_interval | duration | How frequently to collect information from the database. | "1h" | no |
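
The following sketch shows several of these blocks tuned together inside the component. Every value is illustrative rather than a recommendation, and the connection details are placeholders.

Alloy
database_observability.postgres "example" {
  data_source_name = "postgres://user:pass@localhost:5432/dbname"
  forward_to       = [loki.write.logs_service.receiver]
  targets          = prometheus.exporter.postgres.example.targets

  query_samples {
    collect_interval        = "30s"
    disable_query_redaction = false
  }

  schema_details {
    collect_interval = "5m"
    cache_size       = 512
    cache_ttl        = "30m"
  }

  explain_plans {
    collect_interval  = "2m"
    per_collect_ratio = 0.5
  }

  health_check {
    collect_interval = "1h"
  }
}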

Logs Collector

The logs collector processes PostgreSQL logs received through the logs_receiver entry point and exports Prometheus metrics for query and server errors.

Exported Receiver

The component exports a logs_receiver entry point that must be fed by log source components, such as the following. A sketch of the OTLP option appears after the list.

  • otelcol.receiver.awscloudwatch + otelcol.exporter.loki - reads CloudWatch Logs (RDS) and forwards to the receiver
  • loki.source.file - reads PostgreSQL log files and forwards to the receiver
  • otelcol.receiver.otlp + otelcol.exporter.loki - receives OTLP logs and forwards to the receiver
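
As a sketch of the OTLP route listed above, the following configuration receives logs over OTLP and forwards them to the component's logs receiver. The component labels are placeholders, and the receiver is kept to a minimal gRPC setup.

Alloy
// Receive logs over OTLP.
otelcol.receiver.otlp "postgres_logs" {
  grpc { }

  output {
    logs = [otelcol.exporter.loki.postgres_logs.input]
  }
}

// Convert OTLP log records to Loki entries and forward them to the
// logs receiver exported by database_observability.postgres.
otelcol.exporter.loki "postgres_logs" {
  forward_to = [database_observability.postgres.orders_db.logs_receiver]
}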

Metrics

The logs collector exports the following Prometheus metrics:

| Metric Name | Type | Description | Labels |
| ----------- | ---- | ----------- | ------ |
| postgres_errors_total | counter | Total PostgreSQL errors by severity and SQLSTATE. | severity, sqlstate, sqlstate_class, database, user, instance, server_id |
| postgres_error_log_parse_failures_total | counter | Number of log lines that failed to parse. | - |

Required PostgreSQL Configuration

For the logs collector to work correctly, PostgreSQL must be configured with the following RDS log format:

SQL
-- Set log format (requires superuser or rds_superuser)
ALTER SYSTEM SET log_line_prefix = '%m:%r:%u@%d:[%p]:%l:%e:%s:%v:%x:%c:%q%a';

-- Reload configuration
SELECT pg_reload_conf();

Use the following statement to show and verify the current log line prefix applied to the beginning of each log line.

SQL
SHOW log_line_prefix;

Supported Log Format

The collector expects PostgreSQL logs in the RDS format with these fields:

<timestamp>:<remote_host:port>:<user>@<database>:[<pid>]:<line>:<SQLSTATE>:<session_start>:<vtxid>:<txid>:<session_id>:<query><app><severity>: <message>

Example log line:

2026-02-02 21:35:40.130 UTC:10.24.155.141(34110):app_user@books_store:[32032]:2:40001:2026-02-02 21:33:19 UTC:25/112:0:693c34cb.2398::psqlERROR:  canceling statement due to user request

Watermark and Historical Log Processing

The logs collector uses a watermark file to track the last processed log timestamp. This prevents re-counting historical logs on component restart and maintains proper Prometheus counter semantics.

Default behavior:

  • Watermark file: <data_path>/dbo11y_pg_logs_watermark.txt
  • On first run: Starts processing logs from the current time (skips historical logs)
  • On restart: Resumes from the last processed timestamp
  • Sync frequency: Every 10 seconds (atomic writes for crash safety)

Benefits:

  • postgres_errors_total maintains monotonically increasing values
  • No duplicate counting on Alloy restarts
  • Proper rate() and increase() calculations in Prometheus
  • Resumes from last position after downtime (no data loss)

Example with custom watermark path:

Alloy
database_observability.postgres "orders_db" {
  data_source_name = "postgres://user:pass@localhost:5432/dbname"
  watermark_path   = "/var/lib/alloy/postgres_orders_watermark.txt"
  forward_to       = [loki.relabel.orders_db.receiver]
  targets          = prometheus.exporter.postgres.orders_db.targets
}

Important: When using log sources with start_from set to historical timestamps (e.g., otelcol.receiver.awscloudwatch with start_from = "2026-01-01T00:00:00Z"), the watermark ensures that historical logs are only counted once, even if Alloy restarts multiple times.

Example

Alloy
database_observability.postgres "orders_db" {
  data_source_name = "postgres://user:pass@localhost:5432/dbname"
  forward_to       = [loki.relabel.orders_db.receiver]
  targets          = prometheus.exporter.postgres.orders_db.targets

  enable_collectors = ["query_samples", "explain_plans"]

  cloud_provider {
    aws {
      arn = "your-rds-db-arn"
    }
  }
}

prometheus.exporter.postgres "orders_db" {
  data_source_name   = "postgres://user:pass@localhost:5432/dbname"
  enabled_collectors = ["stat_statements"]
}

// Read PostgreSQL log files and forward to logs collector
loki.source.file "postgres_logs" {
  targets = [{
    __path__ = "/var/log/postgresql/postgresql-*.log",
    job      = "postgres-logs",
  }]
  
  forward_to = [database_observability.postgres.orders_db.logs_receiver]
}

loki.relabel "orders_db" {
  forward_to = [loki.write.logs_service.receiver]
  rule {
    target_label = "job"
    replacement  = "integrations/db-o11y"
  }
  rule {
    target_label = "instance"
    replacement  = "orders_db"
  }
}

discovery.relabel "orders_db" {
  targets = database_observability.postgres.orders_db.targets

  rule {
    target_label = "job"
    replacement  = "integrations/db-o11y"
  }
  rule {
    target_label = "instance"
    replacement  = "orders_db"
  }
}

prometheus.scrape "orders_db" {
  targets    = discovery.relabel.orders_db.targets
  job_name   = "integrations/db-o11y"
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}

prometheus.remote_write "metrics_service" {
  endpoint {
    url = sys.env("<GRAFANA_CLOUD_HOSTED_METRICS_URL>")
    basic_auth {
      username = sys.env("<GRAFANA_CLOUD_HOSTED_METRICS_ID>")
      password = sys.env("<GRAFANA_CLOUD_RW_API_KEY>")
    }
  }
}

loki.write "logs_service" {
  endpoint {
    url = sys.env("<GRAFANA_CLOUD_HOSTED_LOGS_URL>")
    basic_auth {
      username = sys.env("<GRAFANA_CLOUD_HOSTED_LOGS_ID>")
      password = sys.env("<GRAFANA_CLOUD_RW_API_KEY>")
    }
  }
}

Replace the following:

  • <GRAFANA_CLOUD_HOSTED_METRICS_URL>: The URL for your Grafana Cloud hosted metrics.
  • <GRAFANA_CLOUD_HOSTED_METRICS_ID>: The user ID for your Grafana Cloud hosted metrics.
  • <GRAFANA_CLOUD_RW_API_KEY>: Your Grafana Cloud API key.
  • <GRAFANA_CLOUD_HOSTED_LOGS_URL>: The URL for your Grafana Cloud hosted logs.
  • <GRAFANA_CLOUD_HOSTED_LOGS_ID>: The user ID for your Grafana Cloud hosted logs.

Compatible components

database_observability.postgres can accept arguments from the following components:

  • Components that export Targets
  • Components that export Loki LogsReceiver

database_observability.postgres has exports that can be consumed by the following components:

  • Components that consume Targets
  • Components that consume Loki LogsReceiver

Note

Connecting some components may not be sensible or components may require further configuration to make the connection work correctly. Refer to the linked documentation for more details.