PostgreSQL

The PostgreSQL integration collects database-usage metrics, such as the database size, the number of backends, or the number of operations. The integration also collects PostgreSQL logs and parses them into a JSON payload. This result includes fields for role, user, level, and message.

For more information about PostgreSQL, see the PostgreSQL documentation.

Prerequisites

To collect PostgreSQL telemetry, you must install the Ops Agent:

  • For metrics, install version 2.21.0 or higher.
  • For logs, install version 2.9.0 or higher.

This integration supports PostgreSQL version 10.18+.

Configure your PostgreSQL instance

The postgresql receiver connects by default to a local postgresql server using a Unix socket and Unix authentication as the root user.
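
The example configuration later in this document connects over TCP as the postgres user with a password. If you prefer a dedicated, lower-privilege role for monitoring, the following is a minimal sketch; the role name monitoring_user and its password are placeholders, and it assumes PostgreSQL 10 or later, where the predefined pg_monitor role is available:

# Create a least-privilege role for the Ops Agent (placeholder name and password).
sudo -u postgres psql -c "CREATE ROLE monitoring_user WITH LOGIN PASSWORD 'change-me';"
# pg_monitor grants read access to the statistics views that the receiver reads.
sudo -u postgres psql -c "GRANT pg_monitor TO monitoring_user;"

Depending on your pg_hba.conf settings, you might also need to allow password authentication for this role on localhost.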

Configure the Ops Agent for PostgreSQL

Following the guide to Configure the Ops Agent, add the required elements to collect telemetry from PostgreSQL instances, and restart the agent.

Example configuration

The following commands create the configuration to collect and ingest telemetry for PostgreSQL:

# Configures Ops Agent to collect telemetry from the app. You must restart the agent for the configuration to take effect.

set -e

# Check if the file exists
if [ ! -f /etc/google-cloud-ops-agent/config.yaml ]; then
  # Create the file if it doesn't exist.
  sudo mkdir -p /etc/google-cloud-ops-agent
  sudo touch /etc/google-cloud-ops-agent/config.yaml
fi

# Create a backup of the existing file so existing configurations are not lost.
sudo cp /etc/google-cloud-ops-agent/config.yaml /etc/google-cloud-ops-agent/config.yaml.bak

# Configure the Ops Agent.
sudo tee /etc/google-cloud-ops-agent/config.yaml > /dev/null << EOF
metrics:
  receivers:
    postgresql:
      type: postgresql
      username: postgres
      password: abc123
      insecure: true
      endpoint: localhost:5432
  service:
    pipelines:
      postgresql:
        receivers:
        - postgresql
logging:
  receivers:
    postgresql_general:
      type: postgresql_general
  service:
    pipelines:
      postgresql:
        receivers:
          - postgresql_general
EOF

For these changes to take effect, you must restart the Ops Agent:

Linux

  1. To restart the agent, run the following command on your instance:
    sudo systemctl restart google-cloud-ops-agent
    
  2. To confirm that the agent restarted, run the following command and verify that the components "Metrics Agent" and "Logging Agent" started:
    sudo systemctl status "google-cloud-ops-agent*"
    

Windows

  1. Connect to your instance using RDP or a similar tool, and log in to Windows.
  2. Open a PowerShell terminal with administrator privileges by right-clicking the PowerShell icon and selecting Run as Administrator.
  3. To restart the agent, run the following PowerShell command:
    Restart-Service google-cloud-ops-agent -Force
    
  4. To confirm that the agent restarted, run the following command and verify that the components "Metrics Agent" and "Logging Agent" started:
    Get-Service google-cloud-ops-agent*
    

Configure logs collection

To ingest logs from PostgreSQL, you must create a receiver for the logs that PostgreSQL produces and then create a pipeline for the new receiver.

To configure a receiver for your postgresql_general logs, specify the following fields:

  • exclude_paths: A list of filesystem path patterns to exclude from the set matched by include_paths.
  • include_paths: A list of filesystem paths to read by tailing each file. A wildcard (*) can be used in the paths. Default: [/var/log/postgresql/postgresql*.log, /var/lib/pgsql/data/log/postgresql*.log, /var/lib/pgsql/*/data/log/postgresql*.log].
  • record_log_file_path: If set to true, then the path to the specific file from which the log record was obtained appears in the output log entry as the value of the agent.googleapis.com/log_file_path label. When using a wildcard, only the path of the file from which the record was obtained is recorded. Default: false.
  • type: This value must be postgresql_general.
  • wildcard_refresh_interval: The interval at which wildcard file paths in include_paths are refreshed. Given as a time duration, for example 30s or 2m. This property might be useful under high logging throughputs where log files are rotated faster than the default interval. Default: 60s.
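
For example, a receiver that also tails logs from a custom directory, records the source file path, and refreshes wildcard matches more often might look like the following sketch. The /opt/pgsql/log directory and the postgresql-old*.log pattern are hypothetical; every field shown is described in the table above:

logging:
  receivers:
    postgresql_general:
      type: postgresql_general
      include_paths:
        - /var/log/postgresql/postgresql*.log
        - /opt/pgsql/log/postgresql*.log            # hypothetical custom log directory
      exclude_paths:
        - /var/log/postgresql/postgresql-old*.log   # hypothetical files to skip
      record_log_file_path: true
      wildcard_refresh_interval: 30s
  service:
    pipelines:
      postgresql:
        receivers:
          - postgresql_general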

What is logged

The logName is derived from the receiver IDs specified in the configuration. Detailed fields inside the LogEntry are as follows.

The postgresql_general logs contain the following fields in the LogEntry:

  • jsonPayload.database (string): Database name for the action being logged, when relevant.
  • jsonPayload.level (string): Log severity or type of database interaction, for some logs.
  • jsonPayload.message (string): Log of the database action.
  • jsonPayload.tid (number): Thread ID where the log originated.
  • jsonPayload.user (string): Authenticated user for the action being logged, when relevant.
  • severity (string, LogSeverity): Log entry level (translated).

Configure metrics collection

To ingest metrics from PostgreSQL, you must create a receiver for the metrics that PostgreSQL produces and then create a pipeline for the new receiver.

This receiver does not support the use of multiple instances in the configuration, for example, to monitor multiple endpoints. All such instances write to the same time series, and Cloud Monitoring has no way to distinguish among them.

To configure a receiver for your postgresql metrics, specify the following fields:

  • ca_file: Path to the CA certificate. As a client, this verifies the server certificate. If empty, the receiver uses the system root CA.
  • cert_file: Path to the TLS certificate to use for mTLS-required connections.
  • collection_interval: A time duration value, such as 30s or 5m. Default: 60s.
  • endpoint: The hostname:port or Unix socket path starting with / used to connect to the PostgreSQL server. Default: /var/run/postgresql/.s.PGSQL.5432.
  • insecure: Sets whether or not to use a secure TLS connection. If set to false, then TLS is enabled. Default: true.
  • insecure_skip_verify: Sets whether or not to skip verifying the certificate. If insecure is set to true, then the insecure_skip_verify value is not used. Default: false.
  • key_file: Path to the TLS key to use for mTLS-required connections.
  • password: The password used to connect to the server.
  • type: This value must be postgresql.
  • username: The username used to connect to the server.
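
For example, a receiver that connects over TCP with TLS enabled and a longer collection interval might look like the following sketch. The credentials and certificate paths are placeholders, not values the integration requires; every field shown is described in the table above:

metrics:
  receivers:
    postgresql:
      type: postgresql
      endpoint: localhost:5432
      username: monitoring_user                    # placeholder
      password: change-me                          # placeholder
      collection_interval: 120s
      insecure: false                              # use a TLS connection
      insecure_skip_verify: false
      ca_file: /etc/ssl/certs/postgres-ca.pem      # placeholder path
      cert_file: /etc/ssl/certs/client-cert.pem    # placeholder path
      key_file: /etc/ssl/private/client-key.pem    # placeholder path
  service:
    pipelines:
      postgresql:
        receivers:
          - postgresql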

What is monitored

The following table provides the list of metrics that the Ops Agent collects from the PostgreSQL instance.

Metric type | Kind, Type | Monitored resources | Labels
workload.googleapis.com/postgresql.backends | GAUGE, INT64 | gce_instance | database
workload.googleapis.com/postgresql.bgwriter.buffers.allocated | CUMULATIVE, INT64 | gce_instance |
workload.googleapis.com/postgresql.bgwriter.buffers.writes | CUMULATIVE, INT64 | gce_instance | source
workload.googleapis.com/postgresql.bgwriter.checkpoint.count | CUMULATIVE, INT64 | gce_instance | type
workload.googleapis.com/postgresql.bgwriter.duration | CUMULATIVE, INT64 | gce_instance | type
workload.googleapis.com/postgresql.bgwriter.maxwritten | CUMULATIVE, INT64 | gce_instance |
workload.googleapis.com/postgresql.blocks_read | CUMULATIVE, INT64 | gce_instance | database, source, table
workload.googleapis.com/postgresql.commits | CUMULATIVE, INT64 | gce_instance | database
workload.googleapis.com/postgresql.connection.max | GAUGE, INT64 | gce_instance |
workload.googleapis.com/postgresql.database.count | GAUGE, INT64 | gce_instance |
workload.googleapis.com/postgresql.db_size | GAUGE, INT64 | gce_instance | database
workload.googleapis.com/postgresql.index.scans | CUMULATIVE, INT64 | gce_instance | database, index, table
workload.googleapis.com/postgresql.index.size | GAUGE, INT64 | gce_instance | database, index, table
workload.googleapis.com/postgresql.operations | CUMULATIVE, INT64 | gce_instance | database, operation, table
workload.googleapis.com/postgresql.replication.data_delay | GAUGE, INT64 | gce_instance | replication_client
workload.googleapis.com/postgresql.rollbacks | CUMULATIVE, INT64 | gce_instance | database
workload.googleapis.com/postgresql.rows | GAUGE, INT64 | gce_instance | database, state, table
workload.googleapis.com/postgresql.table.count | GAUGE, INT64 | gce_instance | database
workload.googleapis.com/postgresql.table.size | GAUGE, INT64 | gce_instance | database, table
workload.googleapis.com/postgresql.table.vacuum.count | CUMULATIVE, INT64 | gce_instance | database, table
workload.googleapis.com/postgresql.wal.age | GAUGE, INT64 | gce_instance |
workload.googleapis.com/postgresql.wal.lag | GAUGE, INT64 | gce_instance | operation, replication_client

Verify the configuration

This section describes how to verify that you correctly configured the PostgreSQL receiver. It might take one or two minutes for the Ops Agent to begin collecting telemetry.

To verify that PostgreSQL logs are being sent to Cloud Logging, do the following:

  1. In the Google Cloud console, go to the Logs Explorer page:

    Go to Logs Explorer

    If you use the search bar to find this page, then select the result whose subheading is Logging.

  2. Enter the following query in the editor, and then click Run query:
    resource.type="gce_instance"
    log_id("postgresql_general")
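
If you prefer the command line, a similar check can be run with the gcloud CLI. This is a sketch that assumes the gcloud CLI is installed and authenticated against the project that receives the logs:

    gcloud logging read 'resource.type="gce_instance" AND log_id("postgresql_general")' --limit=5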
    

To verify that PostgreSQL metrics are being sent to Cloud Monitoring, do the following:

  1. In the Google Cloud console, go to the Metrics explorer page:

    Go to Metrics explorer

    If you use the search bar to find this page, then select the result whose subheading is Monitoring.

  2. In the toolbar of the query-builder pane, select the button whose name is either MQL or PromQL.
  3. Verify that MQL is selected in the Language toggle. The language toggle is in the same toolbar that lets you format your query.
  4. Enter the following query in the editor, and then click Run query:
    fetch gce_instance
    | metric 'workload.googleapis.com/postgresql.backends'
    | every 1m
    

View dashboard

To view your PostgreSQL metrics, you must have a chart or dashboard configured. The PostgreSQL integration includes one or more dashboards for you. Any dashboards are automatically installed after you configure the integration and the Ops Agent has begun collecting metric data.

You can also view static previews of dashboards without installing the integration.

To view an installed dashboard, do the following:

  1. In the Google Cloud console, go to the Dashboards page:

    Go to Dashboards

    If you use the search bar to find this page, then select the result whose subheading is Monitoring.

  2. Select the Dashboard List tab, and then choose the Integrations category.
  3. Click the name of the dashboard you want to view.

If you have configured an integration but the dashboard has not been installed, then check that the Ops Agent is running. When there is no metric data for a chart in the dashboard, installation of the dashboard fails. After the Ops Agent begins collecting metrics, the dashboard is installed for you.

To view a static preview of the dashboard, do the following:

  1. In the Google Cloud console, go to the Integrations page:

    Go to Integrations

    If you use the search bar to find this page, then select the result whose subheading is Monitoring.

  2. Click the Compute Engine deployment-platform filter.
  3. Locate the entry for PostgreSQL and click View Details.
  4. Select the Dashboards tab to see a static preview. If the dashboard is installed, then you can navigate to it by clicking View dashboard.

For more information about dashboards in Cloud Monitoring, see Dashboards and charts.

For more information about using the Integrations page, see Manage integrations.

Install alerting policies

Alerting policies instruct Cloud Monitoring to notify you when specified conditions occur. The PostgreSQL integration includes one or more alerting policies for you to use. You can view and install these alerting policies from the Integrations page in Monitoring.

To view the descriptions of available alerting policies and install them, do the following:

  1. In the Google Cloud console, go to the Integrations page:

    Go to Integrations

    If you use the search bar to find this page, then select the result whose subheading is Monitoring.

  2. Locate the entry for PostgreSQL and click View Details.
  3. Select the Alerts tab. This tab provides descriptions of available alerting policies and provides an interface for installing them.
  4. Install alerting policies. Alerting policies need to know where to send notifications that the alert has been triggered, so they require information from you for installation. To install alerting policies, do the following:
    1. From the list of available alerting policies, select those that you want to install.
    2. In the Configure notifications section, select one or more notification channels. You have the option to disable the use of notification channels, but if you do, then your alerting policies fire silently. You can check their status in Monitoring, but you receive no notifications.

      For more information about notification channels, see Manage notification channels.

    3. Click Create Policies.

For more information about alerting policies in Cloud Monitoring, see Introduction to alerting.

For more information about using the Integrations page, see Manage integrations.

What's next

For a walkthrough on how to use Ansible to install the Ops Agent, configure a third-party application, and install a sample dashboard, see the Install the Ops Agent to troubleshoot third-party applications video.