V12onZ-OperationGuide
on IBM LinuxONE™
Operation Guide
Linux
J2UL-2628-01ZLZ0(00)
September 2020
Preface
Purpose of this document
The FUJITSU Enterprise Postgres database system extends the PostgreSQL features and runs on the Linux
platform. This document is the FUJITSU Enterprise Postgres Operation Guide.
Intended readers
This document is intended for those who install and operate FUJITSU Enterprise
Postgres. Readers of this document are assumed to have general knowledge of:
- PostgreSQL
- SQL
- Linux
Chapter 14 Actions when an Error Occurs
Describes how to perform recovery when disk failure or data corruption occurs.
Appendix A Parameters
Describes the FUJITSU Enterprise Postgres parameters.
Appendix B System Administration Functions
Describes the system administration functions of FUJITSU Enterprise Postgres.
Appendix C System Views
Describes how to use the system views in FUJITSU Enterprise Postgres.
Appendix D Tables Used by Data Masking
Describes the tables used by the data masking feature.
Appendix E Tables Used by High-Speed Data Load
Describes the tables used by high-speed data load.
Appendix F Starting and Stopping the Web Server Feature of WebAdmin
Describes how to start and stop WebAdmin (Web server feature).
Appendix G WebAdmin Wallet
Describes how to use the Wallet feature of WebAdmin.
Appendix H WebAdmin Disallow User Inputs Containing Hazardous Characters
Describes characters not allowed in WebAdmin.
Appendix I Collecting Failure Investigation Data
Describes how to collect information for initial investigation.
Appendix J Operation of Transparent Data Encryption in File-based Keystores
Describes the operation of transparent data encryption in file-based keystores.
Export restrictions
Exportation or release of this document may require procedures in accordance with the regulations of your country of residence and/or US export control laws.
Copyright
Copyright 2019-2020 FUJITSU LIMITED
Contents
Chapter 1 Operating FUJITSU Enterprise Postgres.................................................................................................................1
1.1 Operating Methods.............................................................................................................................................................................. 1
1.2 Starting WebAdmin............................................................................................................................................................................. 2
1.2.1 Logging in to WebAdmin............................................................................................................................................................. 2
1.3 Operations Using Commands.............................................................................................................................................................. 3
1.4 Operating Environment of FUJITSU Enterprise Postgres...................................................................................................................4
1.4.1 Operating Environment.................................................................................................................................................................4
1.4.2 File Composition...........................................................................................................................................................................6
1.5 Notes on Compatibility of Applications Used for Operations.............................................................................................................7
1.6 Notes on Upgrading Database Instances............................................................................................................................................. 7
1.6.1 Additional Steps for upgrading to FUJITSU Enterprise Postgres with Vertical Clustered Index (VCI) Enabled....................... 8
6.1.1 Masking Target........................................................................................................................................................................... 36
6.1.2 Masking Type............................................................................................................................................................................. 36
6.1.3 Masking Condition..................................................................................................................................................................... 36
6.1.4 Masking Format..........................................................................................................................................................................37
6.2 Usage Method.................................................................................................................................................................................... 39
6.2.1 Creating a Masking Policy..........................................................................................................................................................40
6.2.2 Changing a Masking Policy........................................................................................................................................................41
6.2.3 Confirming a Masking Policy.....................................................................................................................................................41
6.2.4 Enabling and Disabling a Masking Policy..................................................................................................................................42
6.2.5 Deleting a Masking Policy..........................................................................................................................................................43
6.3 Data Types for Masking.................................................................................................................................................................... 43
6.4 Security Notes....................................................................................................................................................................................44
11.1.3 Setup......................................................................................................................................................................................... 68
11.1.3.1 Setting Parameters............................................................................................................................................................. 68
11.1.3.2 Installing the Extension..................................................................................................................................................... 69
11.2 Using High-Speed Data Load.......................................................................................................................................................... 69
11.2.1 Loading Data.............................................................................................................................................................................69
11.2.2 Recovering from a Data Load that Ended Abnormally............................................................................................................ 70
11.3 Removing High-Speed Data Load................................................................................................................................................... 71
11.3.1 Removing the Extension...........................................................................................................................................................71
14.10 Actions in Response to Instance Startup Failure......................................................................................................................... 111
14.10.1 Errors in the Configuration File............................................................................................................................................111
14.10.2 Errors Caused by Power Failure or Mounting Issues........................................................................................................... 111
14.10.3 Other Errors.......................................................................................................................................................................... 112
14.10.3.1 Using WebAdmin.......................................................................................................................................................... 112
14.10.3.2 Using Server Commands............................................................................................................................................... 112
14.11 Actions in Response to Failure to Stop an Instance.....................................................................................................................112
14.11.1 Using WebAdmin................................................................................................................................................................. 112
14.11.2 Using Server Commands...................................................................................................................................................... 113
14.11.2.1 Stopping the Instance Using the Fast Mode.................................................................................................................. 113
14.11.2.2 Stopping the Instance Using the Immediate Mode........................................................................................................ 113
14.11.2.3 Forcibly Stopping the Server Process............................................................................................................................113
14.12 Actions in Response to Failure to Create a Streaming Replication Standby Instance................................................................ 114
14.13 Actions in Response to Error in a Distributed Transaction......................................................................................................... 114
14.14 I/O Errors Other than Disk Failure.............................................................................................................................................. 115
14.14.1 Network Error with an External Disk................................................................................................................................... 115
14.14.2 Errors Caused by Power Failure or Mounting Issues........................................................................................................... 116
14.15 Anomaly Detection and Resolution.............................................................................................................................................116
14.15.1 Port Number and Backup Storage Path Anomalies.............................................................................................................. 116
14.15.2 Mirroring Controller Anomalies...........................................................................................................................................117
Appendix A Parameters........................................................................................................................................................118
Appendix H WebAdmin Disallow User Inputs Containing Hazardous Characters............................................................... 147
Index.....................................................................................................................................................................................156
Chapter 1 Operating FUJITSU Enterprise Postgres
This chapter describes how to operate FUJITSU Enterprise Postgres.
See
Before performing database multiplexing, refer to "Database Multiplexing Mode" in the Cluster Operation
Guide (Database Multiplexing).
Note
You cannot combine WebAdmin and server commands to perform the following operations:
- To operate an instance created using the initdb command in WebAdmin, the instance needs to be imported using WebAdmin.
Operation / Operation with the GUI / Operation with commands

Creating a standby instance
- With the GUI: WebAdmin performs a base backup of the source instance and creates a standby instance.

Changing the configuration files
- With the GUI: WebAdmin is used.
- With commands: The configuration file is edited directly.

Starting and stopping an instance
- With the GUI: WebAdmin is used.
- With commands: The pg_ctl command is used.

Creating a database
- With the GUI: None.
- With commands: This is defined using the psql command or the application after specifying the DDL statement.

Backing up the database
- With the GUI: WebAdmin, or the pgx_dmpall command, is used.
- With commands: It is recommended that the pgx_dmpall command be used. Recovery to the latest database can be performed.

Database recovery
- With the GUI: WebAdmin is used.
- With commands: To use the backup that was performed using the pgx_dmpall command, the pgx_rcvall command is used.

Monitoring: database errors
- With the GUI: The status in the WebAdmin window can be checked.
- With commands: The messages that are output to the database server log are monitored.

Monitoring: disk space
- With the GUI: The status in the WebAdmin window can be checked. A warning will be displayed if the free space falls below 20%.
- With commands: This is monitored using the df command of the operating system, for example.

Monitoring: connection status
- With the GUI: None.
- With commands: This can be checked by referencing pg_stat_activity of the standard statistics view from psql or the application.
User environment
It is recommended to use the following browsers with WebAdmin:
- Internet Explorer 11
- Microsoft Edge (Build 41 or later)
WebAdmin will work with other browsers, such as Firefox and Chrome; however, the look and feel may be slightly different.
http://hostNameOrIpAddress:portNumber/
- hostNameOrIpAddress: The host name or IP address of the server where WebAdmin is installed.
- portNumber: The port number of WebAdmin. The default port number is 27515.
Example
For a server with IP address "192.0.2.0" and port number "27515"
http://192.0.2.0:27515/
The startup URL window shown below is displayed. From this window you can log in to WebAdmin or access the product documentation.
Note
- You must start the Web server feature of WebAdmin before using WebAdmin.
- Refer to "Appendix F Starting and Stopping the Web Server Feature of WebAdmin" for information on how to start the Web server
feature of WebAdmin.
Log in to WebAdmin
Click [Launch WebAdmin] in the startup URL window to start WebAdmin and display the login window.
To log in, specify the following values:
- [User name]: User name (OS user account) of the instance administrator
- [Password]: Password corresponding to the user name
Point
Use the OS user account as the user name of the instance administrator. Refer to "Creating an Instance Administrator" in the Installation
and Setup Guide for Server for details.
- Server commands
This group of commands includes commands for creating a database cluster and controlling the database. You can run these commands
on the server where the database is operating.
To use these commands, you must configure the environment variables.
See
- Refer to "PostgreSQL Server Applications" under "Reference" in the PostgreSQL Documentation, or "Reference" for information
on server commands.
- Refer to "Configure the environment variables" in the procedure to create instances in "Using the initdb Command" in the
Installation and Setup Guide for Server for information on configuring the environment variables.
- Client commands
This group of commands includes the psql command and commands for extracting the database cluster to a script file. These commands
can be executed on the client that can connect to the database, or on the server on which the database is running.
To use these commands, you must configure the environment variables (a minimal example is shown after this list).
See
- Refer to "PostgreSQL Client Applications" under "Reference" in the PostgreSQL Documentation, or "Reference" for information
on client commands.
- Refer to "Configuring Environment Variables" in the Installation and Setup Guide for Client for information on the values to be
set in the environment variables.
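A minimal sketch of the kind of environment variable configuration involved, assuming an illustrative installation directory /opt/fsepv12server64 and data storage destination /database/inst1 (substitute the values for your system; refer to the guides above for the authoritative settings):

> PATH=/opt/fsepv12server64/bin:$PATH ; export PATH
> MANPATH=/opt/fsepv12server64/share/man:$MANPATH ; export MANPATH
> PGDATA=/database/inst1 ; export PGDATA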
Table 1.1 OS resources
- Shared memory, Semaphore: Used when a database process exchanges information with an external process.
Table 1.3 Server resources of FUJITSU Enterprise Postgres
- Database cluster: Database storage area on the database storage disk. It is a collection of databases managed by an instance.
- System catalog: Contains information required for the system to run, including the database definition information and the operation information created by the user.
- Default tablespace: Contains table files and index files stored by default.
- Transaction log: Contains log information in case of a crash recovery or rollback. This is the same as the WAL (Write Ahead Log).
- Work file: Work file used when executing applications or commands.
- postgresql.conf: Contains information that defines the operating environment of FUJITSU Enterprise Postgres.
- pg_hba.conf: FUJITSU Enterprise Postgres uses this file to authenticate individual client hosts.
- Server certificate file: Contains information about the server certificate to be used when encrypting communication data and authenticating a server.
- Server private key file: Contains information about the server private key to be used when encrypting communication data and authenticating a server.
- Tablespace: Stores table files and index files in a separate area from the database cluster. Specify a space other than that under the database cluster.
- Backup: Stores the data required for recovering the database when an error, such as disk failure, occurs.
- Database backup: Contains the backup data for the database.
- Archive log: Contains the log information for recovery.
- Mirrored transaction log (mirrored WAL): Enables a database cluster to be restored to the state immediately before an error even if both the database cluster and transaction log fail when performing backup/recovery operations using the pgx_dmpall command or WebAdmin.
- Core file: FUJITSU Enterprise Postgres process core file that is output when an error occurs during a FUJITSU Enterprise Postgres process.
- Key management server or key management storage: Server or storage where the master encryption key file is located.
- Master encryption key file: Contains the master encryption key to be used when encrypting storage data. The master encryption key file is managed on the key management server or key management storage.
Table 1.4 Number of files within a single instance and how to specify their location
Note that "<x>" indicates the product version. Each entry shows: file type; required (Y: Mandatory, N: Optional); quantity; how to specify the location.
- Program files: Y; Multiple; /opt/fsepv<x>server64
- Database cluster: Y; 1; Specify using WebAdmin or server commands.
- Tablespace: Y; Multiple; Specify a space other than that under the database cluster, using the DDL statement.
- Backup: Y; Multiple; Specify using WebAdmin or server commands.
- Core file: Y; Multiple; Specify using WebAdmin, server commands, or postgresql.conf.
- Server certificate file (*1): N; 1; Specify using postgresql.conf.
- Server private key file (*1): N; 1; Specify using postgresql.conf.
- Master encryption key file (*1): N; 1; Specify the directory created as the key store using postgresql.conf.
- Connection service file (*1): N; 1; Specify using environment variables.
- Password file (*1): N; 1; Specify using environment variables.
- CA certificate file (*1): N; 1; Specify using environment variables.
*1: Set manually when using the applicable feature.
Note
- Do not use an NFS for files used in FUJITSU Enterprise Postgres except when creating a database space in a storage device on a
network.
- If anti-virus software is used, set scan exception settings for directories so that none of the files that comprise FUJITSU Enterprise
Postgres are scanned for viruses. Alternatively, if the files that comprise FUJITSU Enterprise Postgres are to be scanned for viruses,
stop FUJITSU Enterprise Postgres and perform the scan when tasks that use FUJITSU Enterprise Postgres are not operating.
See
Refer to "Notes on Application Compatibility" in the Application Development Guide for details.
- pg_stat_statements
- oracle_compatible
- pg_dbms_stats
- pg_hint_plan
For all databases except "template0", execute the following command to remove these extensions:
Once the pg_upgrade operation is complete, for all databases except "template0", execute the following command to re-create these
extensions as required:
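A minimal sketch, assuming the standard DROP EXTENSION and CREATE EXTENSION syntax and an illustrative database name db01 (repeat for each listed extension and for each database other than "template0"):

> psql -d db01 -c "DROP EXTENSION pg_stat_statements CASCADE;"
> psql -d db01 -c "CREATE EXTENSION pg_stat_statements;"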
Note
- It is strongly recommended to back up the database using pg_dump before performing pg_upgrade or using DROP EXTENSION.
- If there are any columns created in the user tables using a data type from these extensions, then DROP EXTENSION will also drop these
columns. Therefore, it is essential that alternate upgrade mechanisms, such as pg_dump/pg_restore, are considered instead of pg_upgrade
in such scenarios.
Before upgrading
1. Obtain the CREATE INDEX Definitions
Run the query below to list all the VCI indexes created in the database. Ensure that these indexes are re-created in the FUJITSU
Enterprise Postgres 12 or later instance after pg_upgrade has finished.
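A minimal sketch of such a query, assuming the VCI access method is registered in pg_am under the name 'vci':

SELECT ns.nspname, c.relname AS index_relname
FROM pg_class c
JOIN pg_namespace ns ON c.relnamespace = ns.oid
WHERE c.relam IN (SELECT oid FROM pg_am WHERE amname = 'vci');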
For each index_relname listed above, execute the commands below to obtain the CREATE INDEX definition (to use the same SQL
syntax while re-creating the indexes on the FUJITSU Enterprise Postgres 12 or later instance).
SELECT pg_get_indexdef('indexName'::regclass);
2. Drop the VCI indexes and VCI extension along with all its dependencies.
To remove all the VCI indexes and VCI internal objects that are created in FUJITSU Enterprise Postgres, execute the commands
below. VCI internal objects will be created in FUJITSU Enterprise Postgres 12 or later automatically when CREATE EXTENSION
for VCI is executed.
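A minimal sketch, assuming the extension is named vci and an illustrative database name db01 (run it in every database that contains VCI objects):

> psql -d db01 -c "DROP EXTENSION vci CASCADE;"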
Note
To restore the VCI extension in the FUJITSU Enterprise Postgres 11 instance, execute CREATE EXTENSION.
After upgrading
Once the pg_upgrade operation is complete, for all databases except "template0", execute CREATE EXTENSION to create the VCI
extension, and then execute CREATE INDEX for all the VCI indexes as required.
Chapter 2 Starting an Instance and Creating a Database
This chapter describes basic operations, from starting an instance to creating a database.
Point
To automatically start or stop an instance when the operating system on the database server is started or stopped, refer to "Configuring
Automatic Start and Stop of an Instance" in the Installation and Setup Guide for Server and configure the settings.
Note
The collected statistics are initialized if an instance is stopped in the "Immediate" mode or if it is abnormally terminated. To prepare for such
initialization of statistics, consider regular collection of the statistics by using the SELECT statement. Refer to "The Statistics Collector"
in "Server Administration" in the PostgreSQL Documentation for information on the statistics.
Starting an instance
Start an instance by using the [Instances] tab in WebAdmin.
Stopping an instance
Stop an instance by using the [Instances] tab in WebAdmin.
Stop mode
Select the mode in which to stop the instance. The following describes the operations of the modes:
Stop mode (effect on connected clients and on a backup being executed using the command):
- Kill process mode: Send SIGKILL to the process and abort all active transactions. This will lead to a crash-recovery run at the next restart.
*1: When the processing to stop the instance in the Smart mode has started and you want to stop immediately, use the following
procedure:
3. In the [Instances] tab, click , and select the Immediate mode to stop the instance.
If an instance stops abnormally, remove the cause of the stoppage and start the instance by using WebAdmin.
Figure 2.1 Example of operating status indicators
Note
- When operating WebAdmin, click to update the status. WebAdmin will reflect the latest status of the operation or the instance
resources from the server.
- If an error occurs while communicating with the server, there may be no response from WebAdmin. When this happens, close the
browser and then log in again. If this does not resolve the issue, check the system log of the server and confirm whether a communication
error has occurred.
- The following message is output during startup of an instance when the startup process is operating normally; therefore, the user does
not need to be aware of this message:
See
Refer to "Configure the environment variables" in the procedure to create instances in "Using the initdb Command" in the Installation and
Setup Guide for Server for information on configuring the environment variables.
Starting an instance
Use the pg_ctl command to start an instance.
Specify the following values in the pg_ctl command:
If an application, command, or process tries to connect to the database while the instance is starting up, the message "FATAL:the database
system is starting up(11189)" is output. However, this message may also be output if the instance is started without the -W option specified.
This message is output by the pg_ctl command to check if the instance has started successfully. Therefore, ignore this message if there are
no other applications, commands, or processes that connect to the database.
Example
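A minimal sketch, assuming the data storage destination /database/inst1 used elsewhere in this guide (add the -W option if the command should return without waiting, as described in the note below):

> pg_ctl start -D /database/inst1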
Note
If the -W option is specified, the command will return without waiting for the instance to start. Therefore, it may be unclear as to whether
the instance startup was successful or failed.
Stopping an instance
Use the pg_ctl command to stop an instance.
Specify the following values in the pg_ctl command:
Example
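A minimal sketch, again assuming the data storage destination /database/inst1:

> pg_ctl stop -D /database/inst1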
Example
When the instance is active:
See
Refer to "pg_ctl" under "Reference" in the PostgreSQL Documentation for information on pg_ctl command.
postgres=# \l+
postgres=# \q
See
Refer to "Creating a Database" in "Tutorial" in the PostgreSQL Documentation for information on creating a database using the createdb
command.
Chapter 3 Backing Up the Database
This chapter describes how to back up the database.
Backup methods
The following backup methods enable you to recover data to a backup point or to the state immediately preceding physical disk
breakdown or logical data failure.
Information
By using a copy command created by the user, the pgx_dmpall command and the pgx_rcvall command can back up database clusters
and tablespaces to any destination and recover them from any destination using any copy method. Refer to "Chapter 13 Backup/
Recovery Using the Copy Command" for details.
- 1.5: Coefficient to factor in tasks other than disk write (which is the most time-consuming step)
If using the copy command with the pgx_dmpall command, the backup time will depend on the implementation of the copy command.
Note
- Backup operation cannot be performed on an instance that is part of a streaming replication cluster in standby mode.
- Use the selected backup method continuously.
There are several differences, such as the data format, across the backup methods. For this reason, the following restrictions apply:
- It is not possible to use one method for backup and another for recovery.
- It is not possible to convert one type of backup data to a different type of backup data.
- Mirrored WALs can be used only for backup/recovery using the pgx_dmpall command or WebAdmin.
- There are several considerations for the backup of the keystore and backup of the database in case the data stored in the database is
encrypted. Refer to the following for details:
- If you have defined a tablespace, back it up. If you do not back it up, directories for the tablespace are not created during recovery, which
may cause the recovery to fail. If the recovery fails, refer to the system log, create the tablespace, and then perform the recovery process
again.
Information
The following methods can also be used to perform backup. Performing a backup using these methods allows you to restore to the point
when the backup was performed.
Refer to "Backup and Restore" in "Server Administration" in the PostgreSQL Documentation for information on these backup methods.
- This method reduces disk usage, because obsolete archive logs (transaction logs copied to the backup data storage destination) are
deleted. It also minimizes the recovery time when an error occurs.
Backup cycle
The time interval when backup is performed periodically is called the backup cycle. For example, if backup is performed every morning,
the backup cycle is 1 day.
The backup cycle depends on the jobs being run, but on FUJITSU Enterprise Postgres it is recommended that operations are run with a
backup cycle of at least once per day.
Note
- If backup is disabled for an instance, you will not be able to back up or restore the instance. Refer to "[Backup]" in "Creating an Instance"
in the Installation and Setup Guide for Server for details.
- If the data to be stored in the database is to be encrypted, it is necessary to enable the automatic opening of the keystore before doing
so. Refer to "5.7.4 Enabling Automatic Opening of the Keystore" for details.
- WebAdmin uses the labels "Data storage path", "Backup storage path" and "Transaction log path" to indicate "data storage destination",
"backup data storage destination" and "transaction log storage destination" respectively. In this manual these terms are used
interchangeably.
Backup operation
Follow the procedure below to back up the database.
Backup status
If an error occurs and backup fails, [Error] is displayed adjacent to [Data storage status] or [Backup storage status] in the [Instances] tab.
An error message is also displayed in the message list.
In this case, the backup data is not optimized. Ensure that you check the backup result whenever you perform backup. If backup fails,
[Solution] appears to the right of the error message. Clicking this button displays information explaining how to resolve the cause of the
error. Remove the cause of failure, and perform backup again.
See
Refer to "Preparing Directories to Deploy Resources" in the Installation and Setup Guide for Server for information on the location of
directories required for backup and for points to take into account.
# mkdir /backup/inst1
# chown fsepuser:fsepuser /backup/inst1
# chmod 700 /backup/inst1
Parameter name: backup_destination
- Setting: Name of the directory where the backup data will be stored.
- Description: Specify the name of the directory where the backup data will be stored. Appropriate privileges that allow only the instance administrator to access the directory must already be set. Place the backup data storage destination directory outside the data storage destination directory, the tablespace directory, and the transaction log storage destination directory.

Parameter name: archive_mode
- Setting: on
- Description: Specify the archive log mode. Specify [on] (execute).

Parameter name: archive_command
- Setting: 'installationDirectory/bin/pgx_walcopy.cmd "%p" "backupDataStorageDestinationDirectory/archived_wal/%f"'
- Description: Specify the path name of the command that will save the transaction log and the storage destination.
Refer to "Appendix A Parameters" and "Write Ahead Log" under "Server Administration" in the PostgreSQL Documentation for
information on the parameters.
Example
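A minimal sketch of these settings in postgresql.conf, assuming the backup directory /backup/inst1 created above and an illustrative installation directory /opt/fsepv12server64:

backup_destination = '/backup/inst1'
archive_mode = on
archive_command = '/opt/fsepv12server64/bin/pgx_walcopy.cmd "%p" "/backup/inst1/archived_wal/%f"'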
Note
A backup stores the data obtained during that backup as well as the backup data obtained during the previous backup.
If the data to be stored in the database is encrypted, refer to the following and back up the keystore:
Backup status
Use the pgx_rcvall command to check the backup status.
Specify the following values in the pgx_rcvall command:
> pgx_rcvall -l -D /database/inst1
Date Status Dir
2017-05-01 13:30:40 COMPLETE /backup/inst1/2017-05-01_13-30-40
If an error occurs and backup fails, a message is output to the system log.
In this case, the backup data is not optimized. Ensure that you check the backup result whenever you perform backup. If backup fails, remove
the cause of failure and perform backup again.
See
Refer to "pgx_dmpall" and "pgx_rcvall" in the Reference for information on the pgx_dmpall command and pgx_rcvall command.
Example
The following example uses the psql command to connect to the database and execute the SQL statement to set a restore point.
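A minimal sketch of such a call, using an illustrative restore point name in the YYMMDD_HHMMSS form recommended below:

> psql postgres
postgres=# SELECT pg_create_restore_point('200501_123456');
postgres=# \q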
However, when considering continued compatibility of applications, do not use functions directly in SQL statements. Refer to "Notes on
Application Compatibility" in the Application Development Guide for details.
Refer to "14.3.2 Using the pgx_rcvall Command" for information on using a restore point to recover the database.
Note
- Name restore points so that they are unique within the database. Add the date and time of setting a restore point to distinguish it from
other restore points, as shown below:
- YYMMDD_HHMMSS
- YYMMDD: Indicates the date
- HHMMSS: Indicates the time
- There is no way to check restore points you have set. Keep a record in, for example, a file.
See
Refer to "System Administration Functions" under "Functions and Operators" in the PostgreSQL Documentation for information on
pg_create_restore_point.
Chapter 4 Configuring Secure Communication Using
Secure Sockets Layer
If communication data transferred between a client and a server contains confidential information, encrypting the communication data can
protect it against threats, such as eavesdropping on the network.
The following figure illustrates the environment for communication data encryption.
4.1.1 Issuing a Certificate
For authenticating servers, you must acquire a certificate issued by the certificate authority (CA).
FUJITSU Enterprise Postgres supports X.509 standard PEM format files. If the certificate authority issues a file in DER format, use a tool
such as the openssl command to convert the DER format file to PEM format.
The following provides an overview of the procedure. Refer to the procedure published by the public or independent certificate authority
(CA) that provides the certificate file for details.
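For reference, a minimal sketch of such a conversion with the openssl command (the file names are illustrative):

> openssl x509 -inform DER -in server.der -outform PEM -out server.crt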
4.1.2 Deploying a Server Certificate File and a Server Private Key File
Create a directory on the local disk of the database server and store the server certificate file and the server private key file in it.
Use the operating system features to set access privileges for the server certificate file and the server private key file so that only the database
administrator has load privileges.
Back up the server certificate file and the server private key file in the event that data corruption occurs and store them securely.
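A minimal sketch of deploying the files and restricting access, assuming an illustrative directory /database/cert and the instance administrator fsepuser used elsewhere in this guide:

# mkdir /database/cert
# cp server.crt server.key /database/cert/
# chown fsepuser:fsepuser /database/cert/server.crt /database/cert/server.key
# chmod 600 /database/cert/server.crt /database/cert/server.key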
See
Refer to "Secure TCP/IP Connections with SSL" under "Server Administration" in the PostgreSQL Documentation for details.
See
Refer to the following sections in the Application Development Guide for details, depending on your application development environment:
4.1.6 Performing Database Multiplexing
When you perform communication that uses database multiplexing and a Secure Socket Layer server certificate, take one of the following
actions:
- Create one server certificate, replicate it, and place a copy on each server used for database multiplexing.
If sslmode is set to verify-full, add all domain names to subjectAltName.
- Create a server certificate for each server used for database multiplexing.
See
Refer to "Using the Application Connection Switch Feature" in the Application Development Guide for information on how to specify
applications on the client.
Chapter 5 Protecting Storage Data Using Transparent Data
Encryption
This chapter describes how to encrypt data to be stored in the database.
Encryption mechanisms
Two-layer encryption key and the keystore
In each tablespace, there is a tablespace encryption key that encrypts and decrypts all the data within. The tablespace encryption key is
encrypted by the master encryption key and saved.
There is only one master encryption key in the database cluster, which is encrypted and stored in the keystore.
Therefore, an attacker cannot read the master encryption key from the keystore.
Keystore management
FUJITSU Enterprise Postgres works in conjunction with the IBM Z Hardware Security Module (HSM) to provide hardware
management of master encryption keys for robust security. The master encryption key is encrypted based on the master key in the HSM
and is never leaked out over its lifetime. Use hardware-stored keystores to reduce deployment and operating costs for keystore
management.
File-based keystores that do not work with the HSM are also possible. The master encryption key is then encrypted based on the
passphrase that you specify and stored in the keystore. Refer to "Appendix J Operation of Transparent Data Encryption in File-based
Keystores" for information about the operation of transparent data encryption in file-based keystores.
Strong encryption algorithms
TDE uses the Advanced Encryption Standard (AES) as its encryption algorithm. AES was adopted as a standard in 2002 by the United
States Federal Government, and is used throughout the world.
Faster hardware-based encryption/decryption
Take advantage of the CPACF (CP Assist for Cryptographic Functions) in the IBM Z processor to minimize encryption and decryption
overhead. This means that even in situations where previously the minimum encryption target was selected as a tradeoff between
performance and security, it is now possible to encrypt all the data of an application.
Zero overhead storage areas
Encryption does not change the size of data stored in tables, indexes, or WAL. There is, therefore, no need for additional estimates or
disks.
Scope of encryption
All user data within the specified tablespace
The tablespace is the unit for specifying encryption. All tables, indexes, temporary tables, and temporary indexes created in the
encrypted tablespace are encrypted. There is no need for the user to consider which tables and strings to encrypt.
Refer to "5.5 Encrypting a Tablespace" for details.
Backup data
The pgx_dmpall command and pg_basebackup command create backup data by copying the OS file. Backups of the encrypted data are,
therefore, also encrypted. Information is protected from leakage even if the backup medium is stolen.
WAL and temporary files
WAL, which is created by updating encrypted tables and indexes, is encrypted with the same security strength as the update target. When
large merges and sorts are performed, the encrypted data is written to a temporary file in encrypted format.
Streaming replication support
You can combine streaming replication and transparent data encryption. The data and WAL encrypted on the primary server is
transferred to the standby server in its encrypted format and stored.
Note
The following are not encrypted:
keystore_location = '/key/store/location'
shared_preload_libraries = 'tde_z'
tde_z.SLOT_ID = 5
When the token model "CCA" is used, you must take care with multi-coprocessor and multi-domain selection. Specify the
postgresql.conf parameters for CCA configuration so that FUJITSU Enterprise Postgres can use the specific coprocessor and domain.
tde_z.IBM_CCA_CSU_DEFAULT_ADAPTER = 'CRP01'
tde_z.IBM_CCA_CSU_DEFAULT_DOMAIN = '3'
- Using WebAdmin
Refer to "2.1.1 Using WebAdmin", and restart the instance.
- Specify the -w option. This means that the command returns after waiting for the instance to start. If the -w option is not
specified, it may not be possible to determine if the starting of the instance completed successfully or if it failed.
Example
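A minimal sketch, assuming server commands and the data storage destination /database/inst1:

> pg_ctl restart -w -D /database/inst1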
2. Execute an SQL function, such as the one below, to set the master encryption key. This must be performed by the database superuser.
SELECT pgx_set_master_key('user pin');
The argument should be the user pin set in "5.2 Preparing for HSM Collaboration".
Refer to "B.2 Transparent Data Encryption Control Functions" for information on the pgx_open_keystore function.
Note that, in the following cases, the user pin must be entered when starting the instance, because the encrypted WAL must be decrypted
for recovery. In this case, the above-mentioned pgx_open_keystore function cannot be executed.
Point
When using an automatically opening keystore, you do not need to enter the user pin and you can automatically open the keystore when the
database server starts. Refer to "5.7.4 Enabling Automatic Opening of the Keystore" for details.
Or
When the tablespace is empty and not set with encryption algorithm, the encryption algorithm can be set with the command below.
Trying to set the encryption algorithm for a non-empty tablespace causes an error.
You can use AES with a key length of 128 bits or 256 bits as the encryption algorithm. It is recommended that you use 256-bit AES. Refer
to "Appendix A Parameters" for information on how to specify the runtime parameters.
If the user provides both the GUC and the command line option while creating the tablespace, preference is given to the command line option.
The pg_default and pg_global tablespaces cannot be encrypted.
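For reference, a minimal sketch of creating an encrypted tablespace by first setting the runtime parameter (the tablespace name and directory are illustrative; the same pattern appears in the application example later in this chapter):

postgres=# SET tablespace_encryption_algorithm = 'AES256';
postgres=# CREATE TABLESPACE secure_tablespace LOCATION '/database/tablespaces/secure';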
Create tables and indexes in the encrypted tablespace that you created. Relations created in the encrypted tablespace are automatically
encrypted.
Example
Example 1: Specifying an encrypted tablespace when creating it
Example 2: Not explicitly specifying a tablespace when creating it and instead using the default tablespace
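Minimal sketches of the two cases, assuming the encrypted tablespace secure_tablespace used elsewhere in this chapter (the table definition is illustrative):

Example 1 (sketch):
CREATE TABLE my_table (id integer, content text) TABLESPACE secure_tablespace;

Example 2 (sketch):
SET default_tablespace = 'secure_tablespace';
CREATE TABLE my_table (id integer, content text);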
The process is the same for encrypting temporary tables and temporary indexes. In other words, either explicitly specify the TABLESPACE
clause or list encrypted tablespaces in the temp_tablespaces parameter, and then execute CREATE TEMPORARY TABLE or CREATE
INDEX.
Point
If an encrypted tablespace is specified in the TABLESPACE clause of the CREATE DATABASE statement, relations created in the
database without explicitly specifying a tablespace will be encrypted. Furthermore, the system catalog will also be encrypted, so the source
code of user-defined functions is also protected.
Example: Specifying a tablespace in a database definition statement
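A minimal sketch (the database name is illustrative):

CREATE DATABASE sales_db TABLESPACE secure_tablespace;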
Part of the data is also stored in the system catalog - to encrypt this data as well, specify an encrypted tablespace as above and create a
database.
Example
postgres=# SELECT spcname, spcencalgo FROM pg_tablespace ts, pgx_tablespaces tsx WHERE ts.oid =
tsx.spctablespace;
spcname | spcencalgo
-------------------+------------
pg_default | none
pg_global | none
secure_tablespace | AES256
(3 rows)
See
Refer to "Notes on Application Compatibility" in the Application Development Guide for information on how to maintain application
compatibility.
5.7.2 Changing the HSM Master Key
The master encryption key is encrypted with the master key hidden by the HSM, and that master key is highly secure; nevertheless, the HSM master
key can be changed. Refer to the IBM documentation for instructions on how to do this. First, stop the database server. As soon as you have
changed the HSM master key, back up the entire opencryptoki token directory of the slot assigned to FUJITSU Enterprise Postgres.
SELECT pgx_open_keystore('newUserpin');
As soon as you change the user pin, back up the entire opencryptoki token directory of the slot you assigned to FUJITSU Enterprise Postgres.
Specify the user pin as set in "5.2 Preparing for HSM Collaboration".
Point
Do not overwrite an old keystore when backing up a keystore. This is because during database recovery, you must restore the keystore to
its state at the time of database backup. When the backup data of the database is no longer required, delete the corresponding keystore.
Example
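A minimal sketch of the first step, backing up the database on May 1, 2020, assuming the data storage destination /database/inst1 used elsewhere in this guide:

> pgx_dmpall -D /database/inst1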
- Specify the data storage destination in the -D option. If the -D option is omitted, the value of the PGDATA environment variable
is used by default.
- Change the master encryption key, and back up the keystore on May 5, 2020.
> psql -c "SELECT pgx_set_master_key('user pin')" postgres
> cp -p /key/store/location/keystore.ks /keybackup/keystore_20200505.ks
> tar -cf token_directory_fep_20200505.tar /var/lib/opencryptoki/fep
- Specify the SQL function that sets the master encryption key in the -c option.
- Specify the name of the database to be connected to as the argument.
If the keystore is corrupted or lost, restore the keystore (containing the latest master encryption key) and the entire opencryptoki token
directory of the slot allocated to FUJITSU Enterprise Postgres. If there is no keystore containing the latest master encryption key, restore
the keystore and the entire opencryptoki token directory of the slot assigned to FUJITSU Enterprise Postgres to their state at the time of
database backup, and recover the database from the database backup. This action recovers the keystore to its latest state.
Example
- Restore the keystore containing the latest master encryption key as of May 5, 2020.
> cp -p /keybackup/keystore_20200505.ks /key/store/location/keystore.ks
> tar -xf token_directory_fep_20200505.tar
- If there is no backup of the keystore containing the latest master encryption key, recover the keystore by restoring the keystore that was
backed up along with the database on 1 May 2020.
- Specify the data storage directory in the -D option. If the -D option is omitted, the value of the PGDATA environment variable is
used by default.
See
Refer to "pgx_rcvall" and "pgx_dmpall" in the Reference for information on the pgx_rcvall and pgx_dmpall commands.
Refer to "psql" under "Reference" in the PostgreSQL Documentation for information on the psql command.
Refer to "B.2 Transparent Data Encryption Control Functions" for information on the pgx_set_master_key function.
Refer to "5.7.4 Enabling Automatic Opening of the Keystore" for information on how to enable automatic opening of the keystore.
Note
When recovering the database, it is necessary to restore the configuration of both the Crypto Express Adapter Card and openCryptoki, which
contains the same key as was set at the time the backup was taken, so that the recovered database can cooperate with the hardware security module.
- Recovery
Restore the keystore and the entire opencryptoki token directory of the slot allocated to FUJITSU Enterprise Postgres to their state at
the time of database backup. Refer to "5.7.5 Backing Up and Recovering the Keystore" for details.
Enable automatic opening of the keystore in accordance with the procedure described in "5.7.4 Enabling Automatic Opening of the
Keystore". Then, use WebAdmin to recover the database.
- Recovery
Restore the keystore and the entire opencryptoki token directory of the slot allocated to FUJITSU Enterprise Postgres to their state at
the time of the database backup.
Configure automatic opening of the key store as necessary.
If automatic opening of the keystore is not enabled, execute the pgx_rcvall command with the --user-pin option specified. This will
display the prompt for the user pin to be entered.
Example
- Specify the data storage destination in the -D option. If the -D option is omitted, the value of the PGDATA environment variable
is used by default.
- Recover the database and the keystore from the backup taken on May 1, 2020.
> cp -p /keybackup/keystore_20200501.ks /key/store/location/keystore.ks
> pgx_rcvall -B /backup/inst1 -D /database/inst1 --user-pin
- Specify the data storage destination in the -D option. If the -D option is omitted, the value of the PGDATA environment variable
is used by default.
- The --user-pin option prompts you to enter the user pin to open the keystore.
- Restore
If the backup data has been encrypted using, for example, OpenSSL commands, decrypt that data.
The data generated by the pg_dumpall command includes a specification to encrypt tablespaces by default. For this reason, the psql
command encrypts tablespaces during restoration.
- Restore
Restore the keystore and the entire opencryptoki token directory of the slot allocated to FUJITSU Enterprise Postgres to their state at
the time of the database backup.
Stop the instance and restore the data directory and the tablespace directory using the file copy command of the operating system.
- Recovery
Restore the keystore and the entire opencryptoki token directory of the slot allocated to FUJITSU Enterprise Postgres to their state at
the time of the database backup.
Configure automatic opening of the key store as necessary.
If automatic opening of the keystore is not enabled, execute the pg_ctl command to start the instance with the --user-pin option specified.
This will display the prompt for the user pin to be entered.
See
- Refer to "pg_ctl" under "Reference" in the PostgreSQL Documentation for information on the pg_ctl command.
- Refer to "Reference" in the PostgreSQL Documentation for information on the following commands:
- psql
- pg_dump
- pg_basebackup
- Refer to the Reference for information on the following commands:
- pgx_rcvall
- pgx_dmpall
- pg_dumpall
- shred command
Example
If you use COPY FROM to import data to tables and indexes in an encrypted tablespace, the imported data is automatically encrypted before
being stored.
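A minimal sketch, assuming a table my_table that resides in an encrypted tablespace and an illustrative input file:

postgres=# COPY my_table FROM '/data/my_table.csv' WITH (FORMAT csv);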
See
Refer to "SQL Commands" under "Reference" in the PostgreSQL Documentation for information on SQL commands.
5.11.1 Database Multiplexing Mode
Note the following when using transparent data encryption in environments that use streaming replication, or database multiplexing with
streaming replication.
1. On the source machine
a. Back up the token directory of the source machine (retaining file owners and groups).
2. On the target machine
a. Update the opencryptoki.conf file: add the new Slot/tokname to use on the target machine (a sketch of such an entry is shown after this list).
c. Copy all the backed-up token directory files into this new token directory (preserving the file owners and groups).
d. Be sure to remove any shared memory files in /dev/shm that are associated with the new tokname.
e. Restart the Slot Manager daemon (pkcsslotd).
f. The new token should be ready to use.
(The SO PIN and User PIN are the same as they were on the source machine.)
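A minimal sketch of such an opencryptoki.conf slot entry, assuming the CCA token model and the token name fep used elsewhere in this guide; the slot number, library name, and the availability of the tokname keyword depend on your openCryptoki version, so treat them as assumptions and check the openCryptoki documentation:

slot 5
{
stdll = libpkcs11_cca.so
tokname = fep
}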
Note
- If you start a standby server without copying the openCryptoki token directory from the primary server to the standby server, an error
occurs during the following operations.
- If an error occurs, you can recover by copying the opencryptoki token directory from the primary server to the standby server and
restarting the standby server.
See
Refer to "pgx_rcvall " in the Reference for information on pgx_rcvall command.
Refer to "pg_ctl" under "Reference" in the PostgreSQL Documentation for information on pg_ctl command.
Refer to "pg_basebackup" under "Reference" in the PostgreSQL Documentation for information on pg_basebackup command.
Refer to "High Availability, Load Balancing, and Replication" under "Server Administration" in the PostgreSQL Documentation for
information on how to set up streaming replication.
- shred command
- Unencrypted data may be written from the database server memory to the operating system's swap area. To prevent leakage of
information from the swap area, consider either disabling the use of swap area or encrypting the swap area using a full-disk encryption
product.
- The content of the server log file is not encrypted. Therefore, in some cases the value of a constant specified in a SQL statement is output
to the server log file. To prevent this, consider setting a parameter such as log_min_error_statement.
- When executing an SQL function that opens the keystore and modifies the master encryption key, ensure that the SQL statement
containing the passphrase is not output to the server log file. To prevent this, consider setting a parameter such as
log_min_error_statement. If you are executing this type of SQL function on a different computer from the database server, encrypt the
communication between the client and the database server with SSL.
- Logical replication is available, which allows non-backed-up clusters to subscribe to databases where transparent data encryption
is enabled. Logical replication does not need to have the same encryption strategy between publisher and subscriber.
In this scenario, if the user wants to encrypt the subscribed copy of data as well, then it is the user's responsibility to create encryption
policies for the subscribed databases. By default, published encrypted tablespace data will not be encrypted on the subscriber side.
1. (Normal procedure) Create an owner and a database for the built application.
CREATE USER crm_admin ...;
CREATE DATABASE crm_db ...;
2. (Procedure for encryption) Create an encrypted tablespace to store the data for the built application.
SET tablespace_encryption_algorithm = 'AES256';
CREATE TABLESPACE crm_tablespace LOCATION '/crm/data';
3. (Procedure for encryption) Configure an encrypted tablespace as the default tablespace for the owner of the built application.
ALTER USER crm_admin SET default_tablespace = 'crm_tablespace';
ALTER USER crm_admin SET temp_tablespaces = 'crm_tablespace';
4. (Normal procedure) Install the built application. The application installer prompts you to enter the host name and the port number of
the database server, the user name, and the database name. The installer uses the entered information to connect to the database server
and execute the SQL script. For applications that do not have an installer, the database administrator must manually execute the SQL
script.
Normally, the application's SQL script includes logic definition SQL statements, such as CREATE TABLE, CREATE INDEX, and
GRANT or REVOKE, converted from the entity-relationship diagram. It does not include SQL statements that create databases, users, and
tablespaces. Configuring the default tablespace of the users who will execute the SQL script deploys the objects generated by the SQL script
to the tablespace.
Chapter 6 Data Masking
Data masking is a feature that can change the returned data for queries generated by applications, so that it can be referenced by users.
For example, for a query of employee data, digits except the last four digits of an eight-digit employee number can be changed to "*" so that
it can be used for reference.
Note
When using this feature, it is recommended that the changed data be transferred to another medium for users to reference. This is because,
if users directly access the database to extract the masked data, there is a possibility that they can deduce the original data by analyzing the
masking policy or query result to the masking target column.
Note
When a masking policy is defined, the search performance for the corresponding table may deteriorate.
6.1.1 Masking Target
Masking target refers to a column to which a masking policy will be applied. When referring to a masking target or a function that includes
a masking target, the execution result will be changed and obtained.
The following commands can change the execution result:
- SELECT
- COPY
- pg_dump
- pg_dumpall
Note
- If a masking target is specified in INSERT...SELECT target columns, processing will be performed using the data before change.
- If a masking target is specified anywhere other than in SELECT target columns, processing will be performed using the data before change.
- If a masking target is specified in a function where the data type will be converted, an error will occur.
Full masking
All the data in the specified column is changed. The changed value returned to the application that made the query varies depending on the
column data type.
For example, 0 is used for a numeric type column and a space is used for a character type column.
Partial masking
The data in the specified column is partially changed.
For example, digits except the last four digits of an employee number can be changed to "*".
Note
- If multiple valid masking targets are specified for a function, the masking type for the left-most masking target will be applied.
For example, if "SELECT GREATEST(c1, c2) FROM t1" is executed for numeric type masking target c1 and c2, the masking type for
c1 will be applied.
- When masking the data that includes multibyte characters, do not specify partial masking for masking type. The result may not be as
expected.
be specified.
For example, when masking data only for "postgres" users, specify 'current_user = ''postgres''' in the masking condition.
Information
Specify '1=1' so the masking condition is always evaluated to be TRUE and masking is performed all the time.
Full masking
With full masking, all characters are changed to values as determined by the database. Changed characters can be referenced in the
pgx_confidential_values table. Also, replacement characters can be changed using the pgx_update_confidential_values system
management function.
See
Refer to "6.3 Data Types for Masking" for information on the data types for which data masking can be performed.
Partial masking
With partial masking, data is changed according to the content in the function_parameters parameter. The method of specifying
function_parameters varies depending on the data type.
Example
Specify as below to change the values from the 1st to 5th digits to 9.
function_parameters := '9, 1, 5'
In this example, if the original data is "123456789", it will be changed to "999996789".
- inputFormat: Define the format of the input data. Specify "V" for characters that will potentially be masked,
and specify "F" for characters, such as delimiters, that will not be masked.
- outputFormat: Define the method to format the displayed data. Specify "V" for characters that will potentially
be masked. Any character to be output can be specified for each character "F" in inputFormat. If you want to
output a single quotation mark, specify two of them consecutively.
- replacementCharacter: Specify any single character. If you want to output a single quotation mark, specify two
of them consecutively.
- startPosition: Specify the position of "V" as the start position of masking. For example, to specify the position
of the 4th "V" from the left, specify 4. Specify a positive integer.
- endPosition: Specify the position of "V" as an end position of masking. When working out the end position,
do not include positions of "F". For example, to specify the position of the 11th "V" from the left, specify 11.
Specify a positive integer that is greater than startPosition.
Example
Specify as below to mask a telephone number other than the first three digits using *.
function_parameters := 'VVVFVVVVFVVVV, VVV-VVVV-VVVV, *, 4, 11'
In this example, if the original data is "012-3156-7890", it will be changed to "012-****-****".
- M: Masks month. To mask month, enter the month from 1 to 12 after a lowercase letter m. Specify an uppercase
letter M to not mask month.
- D: Masks date. To mask date, enter the date from 1 to 31 after a lowercase letter d. If a value bigger than the
last day of the month is entered, the last day of the month will be displayed. Specify an uppercase letter D to
not mask date.
- Y: Masks year. To mask year, enter the year from 1 to 9999 after a lowercase letter y. Specify an uppercase
letter Y to not mask year.
- H: Masks hour. To mask hour, enter the hour from 0 to 23 after a lowercase letter h. Specify an uppercase letter
H to not mask hour.
- M: Masks minute. To mask minute, enter the minute from 0 to 59 after a lowercase letter m. Specify an
uppercase letter M to not mask minute.
- S: Masks second. To mask second, enter the second from 0 to 59 after a lowercase letter s. Specify an uppercase
letter S to not mask second.
Example
Specify as below to mask hour, minute, and second and display 00:00:00.
function_parameters := 'MDYh0m0s0'
In this example, if the original data is "2010-10-10 10:10:10", it will be changed to "2010-10-10 00:00:00".
Regular expression masking
With regular expression masking, the data in the specified column is changed according to the regexp_pattern, regexp_replacement, and
regexp_flags parameters.
Example
Specify as below to change all three characters starting from b to X.
regexp_pattern := 'b..'
regexp_replacement := 'X'
regexp_flags := 'g'
In this example, if the original data is "foobarbaz", it will be changed to "fooXX".
See
- Refer to "POSIX Regular Expressions" in the PostgreSQL Documentation and check pattern, replacement, and flags for information
on the values that can be specified for regexp_pattern, regexp_replacement, and regexp_flags.
- Refer to "6.3 Data Types for Masking" for information on the data types for which masking can be performed.
Note
- When column data type is character(n) or char(n) and if the string length after change exceeds n, the extra characters will be truncated
and only characters up to the nth character will be displayed.
- When column data type is character varying(n) or varchar(n) and if the string length after change exceeds the length before the change,
the extra characters will be truncated and only characters up to the length before change will be displayed.
Note
You must always prepend "pgx_datamasking" to the "shared_preload_libraries" parameter.
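For example, in postgresql.conf (a minimal sketch; if other libraries are already listed, keep them and place pgx_datamasking at the
start of the list as required by the note above):
shared_preload_libraries = 'pgx_datamasking'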
Information
- Specify "false" for pgx_datamasking.enable to not use this feature. Data will not be masked even if a masking policy is
configured. This feature becomes available again once "true" is specified for pgx_datamasking.enable. This setting can be made
- 39 -
by specifying a SET statement or specifying a parameter in the postgresql.conf file.
Example
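The statement below is a minimal sketch of the SET form described above; adjust it to your environment.
SET pgx_datamasking.enable = 'false';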
- Hereafter, also perform this preparatory task for the "template1" database, so that this feature can be used by default when
creating a new database.
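As a minimal sketch of this preparatory task (assuming, as elsewhere in this chapter, that the extension is named pgx_datamasking),
create the extension in each target database:
db01=# CREATE EXTENSION pgx_datamasking;
template1=# CREATE EXTENSION pgx_datamasking;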
Usage
To perform masking, a masking policy needs to be configured. The masking policy can be created, changed, confirmed, enabled, disabled
or deleted during operation.
The procedures to perform these tasks are explained below with examples.
Note
Only database superusers can configure masking policies.
See
- Refer to "B.3.2 pgx_create_confidential_policy" for information on the pgx_create_confidential_policy system management function.
See
- Refer to "B.3.1 pgx_alter_confidential_policy" for information on the pgx_alter_confidential_policy system management function.
public | t1 | p1 | c2 | PARTIAL | VVVFVVVVFVVVV, VVV-VVVV-VVVV, *, 4, 11 | | | |
(2 rows)
0 | 012-****-****
(3 rows)
Category              Data type              Full masking   Partial masking   Regular expression masking
Character type        character varying(n)   Y              Y                 Y
                      varchar(n)             Y              Y                 Y
                      character(n)           Y              Y                 Y
                      char(n)                Y              Y                 Y
Date/timestamp type   date                   Y              Y                 N
                      timestamp              Y              Y                 N
- Exercise strong caution in publishing data masking's confidential tables (pgx_confidential_policies, pgx_confidential_columns, etc.),
unless you are publishing all tables of the database and want to apply the same data masking policies on the subscriber database
for all of them.
Otherwise, because these confidential tables contain the masking policies for all tables of the database, confidential policies of unpublished
tables may be unintentionally published. Additionally, it is not possible to apply different data masking policies on the subscriber
database.
Chapter 7 Periodic Operations
This chapter describes the operations that must be performed periodically when running daily database jobs.
See
- Refer to "Error Reporting and Logging" under "Server Administration" in the PostgreSQL Documentation for information on logs.
- Refer to "Configuring Parameters" in the Installation and Setup Guide for Server for information on log settings when operating with
WebAdmin.
- df command
The disk usage of the file system can be checked with the operating system's df command.
You can also use SQL statements to check tables and indexes individually.
Refer to "Determining Disk Usage" under "Server Administration" in the PostgreSQL Documentation for information on this method.
Information
If you are using WebAdmin for operations, a warning is displayed when disk usage reaches 80%.
- rm command
Unnecessary files can be deleted with the operating system's rm command.
You can also secure disk space by performing the following tasks periodically:
- To secure space on the data storage destination disk:
Execute the REINDEX statement. Refer to "7.5 Reorganizing Indexes" for details.
Note
If a value other than 0 is specified for the tcp_user_timeout parameter, the waiting times set by the tcp_keepalives_idle parameter and
tcp_keepalives_interval parameter will not take effect, and the waiting time specified by the tcp_user_timeout parameter will be used.
See
Refer to "Connection Settings" under "Server Administration" in the PostgreSQL Documentation for information on the parameters.
See
Refer to "Routine Vacuuming" under "Server Administration" in the PostgreSQL Documentation for information on the VACUUM
command.
If connections remain in the waiting state for an extended period (for example, in the "idle in transaction" state), they continue to occupy
resources. In such cases, you can minimize performance degradation of the database by monitoring problematic connections.
The following method is supported for monitoring connections that have been in the waiting status for an extended period:
Example
The example below shows connections where the client has been in the waiting status for at least 60 minutes.
However, when considering continued compatibility of applications, do not reference system catalogs directly in the following SQL
statements.
postgres=# select * from pg_stat_activity where state='idle in transaction' and current_timestamp >
cast(query_start + interval '60 minutes' as timestamp);
-[ RECORD 1 ]----+------------------------------
datid | 13003
datname | db01
pid | 4638
usesysid | 10
usename | fsep
application_name | apl01
client_addr | 192.33.44.15
client_hostname |
client_port | 27500
backend_start | 2018-02-24 09:09:21.730641+09
xact_start | 2018-02-24 09:09:23.858727+09
query_start | 2018-02-24 09:09:23.858727+09
state_change | 2018-02-24 09:09:23.858834+09
wait_event_type | Client
wait_event | ClientRead
state | idle in transaction
backend_xid |
backend_xmin |
query | begin;
backend_type | client backend
See
- Refer to "Notes on Application Compatibility" in the Application Development Guide for information on maintaining application
compatibility.
- Refer to "The Statistics Collector" under "Server Administration" in the PostgreSQL Documentation for information on
pg_stat_activity.
7.5 Reorganizing Indexes
To rearrange used space on the disk and prevent the database access performance from declining, it is recommended that you periodically
execute the REINDEX command to reorganize indexes.
Check the disk usage of the data storage destination using the method described in "7.2 Monitoring Disk Usage and Securing Free Space".
Note
Because the REINDEX command acquires an exclusive lock on the index being processed and blocks writes to the table that is the source
of the index, other processes that access them may be blocked while waiting for the lock.
Therefore, it is necessary to consider measures such as executing the command after routine tasks have completed.
See
Refer to "Routine Reindexing" under "Server Administration" in the PostgreSQL Documentation for information on reorganizing indexes
by periodically executing the REINDEX command.
Point
Typically, reorganize indexes once a month at a suitable time such as when conducting database maintenance. Use SQL statements to check
index usage. If this usage is increasing on a daily basis, adjust the frequency of re-creating the index, taking the free disk space into account.
The following example shows the SQL statements and the output.
However, when considering continued compatibility of applications, do not reference system catalogs and functions directly in the
following SQL statements. Refer to "Notes on Application Compatibility" in the Application Development Guide for details.
[SQL statements]
SELECT
nspname AS schema_name,
relname AS index_name,
round(100 * pg_relation_size(indexrelid) / pg_relation_size(indrelid)) / 100 AS index_ratio,
pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
pg_size_pretty(pg_relation_size(indrelid)) AS table_size
FROM pg_index I
LEFT JOIN pg_class C ON (C.oid = I.indexrelid)
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE
C.relkind = 'i' AND
pg_relation_size(indrelid) > 0
ORDER BY pg_relation_size(indexrelid) DESC, index_ratio DESC;
[Output]
See
Refer to "Notes on Application Compatibility" in the Application Development Guide for information on maintaining application
compatibility.
7.6 Monitoring Database Activity
FUJITSU Enterprise Postgres enables you to collect information related to database activity. By monitoring this information, you can check
changes in the database status.
This information includes wait information for resources such as internal locks, and is useful for detecting performance bottlenecks.
Furthermore, you should collect this information in case you need to request Fujitsu technical support for an investigation.
2. Reset statistics after work hours, that is, after jobs have finished.
Refer to "7.6.3 Information Reset" for information on how to reset statistics.
Where jobs run 24 hours a day, reset statistics and save the file with collected information when the workload is low, for example, at night.
Note
Statistics cumulatively add the daily database value, so if you do not reset them, the values will exceed the upper limit, and therefore will
not provide accurate information.
7.6.1 Information that can be Collected
Information that can be collected is categorized into the following types:
See
Refer to "Monitoring Database Activity" under "Server Administration" in the PostgreSQL Documentation for information on information
common to PostgreSQL.
See
Refer to "The Statistics Collector" in "Monitoring Database Activity" under "Server Administration" in the PostgreSQL Documentation for
information on information common to PostgreSQL.
Information added by FUJITSU Enterprise Postgres
Information added by FUJITSU Enterprise Postgres is collected by default.
To enable or disable information collection, change the configuration parameters in postgresql.conf. The following table lists the views for
which you can enable or disable information collection, and the configuration parameters.
- If pg_stat_reset_shared('latch') is called: all values displayed in pgx_stat_latch
- If pg_stat_reset_shared('walwriter') is called: all values displayed in pgx_stat_walwriter
- If pg_stat_reset_shared('sql') is called: all values displayed in pgx_stat_sql
- If pg_stat_reset_shared('gmc') is called: all values except the size column in pgx_stat_gmc
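For example, to reset only the SQL statistics added by FUJITSU Enterprise Postgres, execute the function with the argument listed above:
postgres=# SELECT pg_stat_reset_shared('sql');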
See
Refer to "Statistics Functions" in "Monitoring Database Activity" under "Server Administration" in the PostgreSQL Documentation for
information on other parameters of the pg_stat_reset_shared function.
Chapter 8 Streaming Replication Using WebAdmin
This chapter describes how to create a streaming replication cluster using WebAdmin.
Streaming replication allows the creation of one or more standby instances, which connect to the master instance and replicate the data
using WAL records. The standby instance can be used for read-only operations.
WebAdmin can be used to create a streaming replication cluster. WebAdmin allows the creation of a cluster in the following configurations:
- Master-Standby Configuration: This configuration creates a master and standby instance together.
- Standby Only Configuration: This configuration creates a standby instance from an already existing instance.
Point
- A standby instance can be created from a standalone instance, a master instance, or even from another standby instance.
- If a streaming replication cluster is created using WebAdmin, the network with the host name (or IP address) specified in [Host name]
will be used across sessions of WebAdmin, and also used as the log transfer network.
- To use a network other than the job network as the log transfer network, specify a host name other than that of the job network in [Host
name].
1. In the [Instances] tab, select the instance from which a standby instance is to be created.
2. Click .
3. Enter the information for the standby instance to be created. In the example below, a standby instance is created from instance "inst1".
The instance name, host address and port of the selected instance are already displayed for easy reference.
- [Location]: Whether to create the instance in the server that the current user is logged in to, or in a remote server. The default is
"Local", which will create the instance in the server machine where WebAdmin is currently running.
- [Replication credential]: The user name and password required for the standby instance to connect to the master instance. The
user name and password can be entered or selected from the Wallet. Refer to "Appendix G WebAdmin Wallet" for information
on creating wallet entries.
- Maximum of 16 characters
- The first character must be an ASCII alphabetic character
- The other characters must be ASCII alphanumeric characters
- [Instance port]: Port number of the standby database instance.
- [Host IP address]: The IP address of the server machine where the standby instance is to be created. This information is needed
to configure the standby instance to be connected to the master.
- [Data storage path]: Directory where the database data will be stored
- [Backup storage path]: Directory where the database backup will be stored
- [Transaction log path]: Directory where the transaction log will be stored
- [Encoding]: Database encoding system
- [Replication mode]: Replication mode of the standby instance to be created ("Asynchronous" or "Synchronous")
- [Application name]: The reference name of the standby instance used to identify it to the master instance.
The name must meet the conditions below:
- Maximum of 16 characters
- The first character must be an ASCII alphabetic character
- The other characters must be ASCII alphanumeric characters
4. Click to create the standby instance.
5. Once the standby instance is created successfully, select standby instance in the [Instances] tab. The following page will be displayed:
Note
- Backups are not possible for standby instances in WebAdmin. As a result, the backup actions are disabled and no value is shown for [Backup
storage status] and [Backup time].
- If using WebAdmin to manage Mirroring Controller, the message below may be output to the server log or system log in the standby
instance. No action is required, as the instance is running normally.
- Replication credential (user name and password) should not contain hazardous characters. Refer to "Appendix H WebAdmin Disallow
User Inputs Containing Hazardous Characters".
8.2 Promoting a Standby Instance
Streaming replication between a master and standby instance can be discontinued using WebAdmin.
Follow the procedure below to promote a standby instance to a standalone instance, thereby discontinuing the streaming replication.
1. In the [Instances] tab, select the standby instance that needs to be promoted.
2. Click .
1. In the [Instances] tab, select the master instance of the relevant cluster.
2. Click .
3. In the [Streaming replication] section, edit the value for [Synchronous standby names].
- Add the "Application name" of the standby instance you want to be in Synchronous mode.
4. Click .
6. Select the standby instance. [Instance type] will now show the updated status.
Note
- Converting an Asynchronous standby instance to Synchronous can cause the master instance to queue the incoming transactions until
the standby instance is ready. For this reason, it is recommended that this operation be performed during a scheduled maintenance
period.
- When adding a synchronous standby instance, FUJITSU Enterprise Postgres will only keep the first entry in [Synchronous standby
names] in synchronous state.
- To learn more about the differences between synchronous and asynchronous standby modes and their behavior, refer to "Streaming
Replication" in "High Availability, Load Balancing, and Replication" in the PostgreSQL Documentation.
1. In the [Instances] tab, select the master instance of the relevant cluster.
2. Click .
3. In the [Streaming replication] section, edit the value for [Synchronous standby names].
- Remove the "Application name" of the standby instance you want to be in Asynchronous mode.
4. Click .
6. Select the standby instance. [Instance type] will now show the updated status.
Note
To learn more about the differences between synchronous and asynchronous standby modes and their behavior, refer to "Streaming
Replication" in "High Availability, Load Balancing, and Replication" in the PostgreSQL Documentation.
1. In the [Instances] tab, select the remote instance (from where the new cluster node will stream WAL entries), and then click .
2. Configure the node to accept streaming requests from the new node.
3. In the [Instances] tab, select the new standby instance (which needs to be connected to the cluster), and then click .
8. Select [Restart later] or [Restart now], and then click [Yes] to set up the standby instance.
9. Upon successful completion, the confirmation dialog box will be displayed.
10. Click [Close] to return to the instance details window.
The instance will become a standby instance, and will be part of the streaming replication cluster. The replication diagram will display the
relationship between the standby instance and the remote instance. The user can change the replication relationship of the remote instance
from asynchronous to synchronous (and vice versa) using the [Configuration] window.
Chapter 9 Installing and Operating the In-memory Feature
The in-memory feature enables fast aggregation using Vertical Clustered Index (VCI) and memory-resident feature.
VCI has a data structure suitable for aggregation, and features parallel scan and disk compression, which enable faster aggregation through
reduced disk I/O.
The memory-resident feature reduces disk I/O that occurs during aggregation. It consists of the preload feature that reads VCI data to
memory in advance, and the stable buffer feature that suppresses VCI data eviction from memory. The stable buffer feature secures the
proportion of shared memory specified by a parameter for VCI.
This chapter describes how to install and operate the in-memory feature.
Even if a VCI has been created, it is not used for a query in the following cases:
- The data type of the target table or column is subject to VCI restrictions.
- The SQL statement does not meet the VCI operating conditions.
- VCI is determined to be slower based on cost estimation.
Note
If performing operations that use VCI, the full_page_writes parameter setting in postgresql.conf must be enabled (on). For this reason, if
this parameter is disabled (off), operations that use VCI return an error. In addition, if the full_page_writes parameter setting is temporarily
disabled (off) in order to perform operations on tables that do not have a VCI, do not create a VCI or perform operations on tables that have
a VCI during that time.
See
- Refer to "9.1.4 Data that can Use VCI" for information on VCI restrictions.
- Refer to "Scan Using a Vertical Clustered Index (VCI)" - "Operating Conditions" in the Application Development Guide for
information on VCI operating conditions.
Select the aggregation that you want to speed up and identify the required column data. The additional resources below are required
according to the number of columns.
- Memory
Secure additional memory equivalent to the disk space required for the columns for which the VCI is to be created.
- Disk
Secure additional disks based on the disk space required for the column for which VCI is to be created, as VCI stores column data as
well as existing table data on the disk. It is recommended to provide a separate disk in addition to the existing one, and specify it as the
tablespace to avoid impact on any other jobs caused by I/O.
Information
The operations on VCI can continue even if the memory configured for VCI is insufficient by using VCI data on the disk.
See
Refer to "Estimating Memory Requirements" and "Estimating Database Disk Space Requirements" in the Installation and Setup Guide for
Server for information on how to estimate required memory and disk space.
9.1.3 Setting up
This section describes how to set up VCI.
Setup flow
1. Setting Parameters
2. Installing the Extensions
3. Creating VCI
4. Confirming that VCI has been Created
Example
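A minimal postgresql.conf sketch follows; which libraries must be preloaded and the worker count shown here are assumptions, so confirm
them against "Appendix A Parameters".
shared_preload_libraries = 'vci,pg_prewarm'
session_preload_libraries = 'vci,pg_prewarm'
max_worker_processes = 8        # assumed value; size this for the parallel scan workers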
Note
An error occurs if you use VCI to start instances when procfs is not mounted. Ensure that procfs is mounted before starting instances.
See
- Refer to "Appendix A Parameters" for information on all parameters for VCI. Refer also to default value for each parameter and details
such as specification range in the same chapter. Refer to "Server Configuration" under "Server Administration" in the PostgreSQL
documentation for information on shared_preload_libraries, session_preload_libraries, and max_worker_processes.
- Installing VCI
db01=# CREATE EXTENSION vci;
- Installing pg_prewarm
db01=# CREATE EXTENSION pg_prewarm;
Note
db01=# CREATE INDEX idx_vci ON table01 USING vci (col01, col02) WITH (stable_buffer=true);
Note
- Some table types cannot be specified on the ON clause of CREATE INDEX. Refer to "9.1.4.1 Relation Types" for details.
- Some data types cannot be specified on the column specification of CREATE INDEX. Refer to "9.1.4.2 Data Types" for details.
- Some operations cannot be performed for VCI. Refer to "9.2.1 Commands that cannot be Used for VCI" for details.
- The same column cannot be specified more than once on the column specification of CREATE INDEX.
- VCI cannot be created for table columns that belong to the template database.
- CREATE INDEX creates multiple views named vci_10digitRelOid_5digitRelAttr_1charRelType alongside VCI itself. These are
called VCI internal relations. Do not update or delete them as they are used for VCI aggregation.
- All data for the specified column will be replaced in columnar format when VCI is created, so executing CREATE INDEX on an
existing table with data inserted takes more time compared with a general index (B-tree). Jobs can continue while CREATE INDEX
is running.
- When CREATE INDEX USING VCI is invoked on a partitioned table, the default behavior is to recurse to all partitions to ensure they
all have matching indexes. Each partition is first checked to determine whether an equivalent index already exists, and if so, that index
will become attached as a partition index to the index being created, which will become its parent index. If no matching index exists,
a new index will be created and automatically attached; the name of the new index in each partition will be determined as if no index
name had been specified in the command. If the ONLY option is specified, no recursion is done, and the index is marked invalid.
(ALTER INDEX ... ATTACH PARTITION marks the index valid, once all partitions acquire matching indexes.) Note, however, that
any partition that is created in the future using CREATE TABLE ... PARTITION OF will automatically have a matching index,
regardless of whether ONLY is specified.
9.1.4.2 Data Types
VCIs cannot be created for some data types.
The column specification of CREATE INDEX described in "9.1.3.3 Creating a VCI" cannot specify a column whose data type does not
support VCI creation.
Category              Data type          Supported by VCI?
Bit string bit(n) Y
bit varying(n) Y
Text search tsvector N
tsquery N
UUID uuid Y
XML xml N
JSON json N
jsonb N
Range int4range N
int8range N
numrange N
tsrange N
tstzrange N
daterange N
Object identifier oid N
regproc N
regprocedure N
regoper N
regoperator N
regclass N
regtype N
regconfig N
regdictionary N
pg_lsn type pg_lsn N
Array type - N
User-defined type (basic type, enumerated type, composite type, and range type) - N
SQL commands
- Operations that cannot be performed for the VCI extension
Command             Subcommand                        Description
ALTER EXTENSION     UPDATE                            The VCI extension cannot be specified.
                    SET SCHEMA                        This operation is not required for VCI.
                    ADD
                    DROP
CREATE EXTENSION    SCHEMA                            The subcommands on the left cannot be performed
                                                      if the VCI extension is specified.
                                                      This operation is not required for VCI.
Command                  Subcommand                   Description
                         [ ASC | DESC ]
                         [ NULLS { FIRST | LAST } ]
                         WITH
                         WHERE
                         INCLUDE
CREATE OPERATOR CLASS    -                            A VCI cannot be specified.
                                                      This operation is not supported in VCI.
CREATE OPERATOR FAMILY   -
CREATE TABLE             EXCLUDE
DROP INDEX               CONCURRENTLY                 The subcommands on the left cannot be performed
                                                      if a VCI is specified.
                                                      This operation is not supported in VCI.
REINDEX                  -                            A VCI cannot be specified.
                                                      This command is not required as VCI uses
                                                      daemon's automatic maintenance to prevent disk
                                                      space from increasing.
Command Description
clusterdb Clustering cannot be performed for tables that contain a VCI.
reindexdb VCIs cannot be specified on the --index option.
See
Refer to "B.4 VCI Data Load Control Function" for information on pgx_prewarm_vci.
Chapter 10 Parallel Query
FUJITSU Enterprise Postgres enhances parallel queries by taking into consideration the aspects below:
Note
The ability to increase workers during runtime is available only with parallel sequential scan.
Chapter 11 High-Speed Data Load
High-speed data load uses the pgx_loader command to load data from files at high speed into FUJITSU Enterprise Postgres.
Note
This feature is not available in single-user mode. This is because in single-user mode an instance runs as a single process, which cannot start
parallel workers.
Installation flow
1. Deciding whether to Install
2. Estimating Resources
3. Setup
See
The standby event name is stored in the wait_event column of the pg_stat_activity view. Refer to "wait_event Description" in "The
Statistics Collector" in the PostgreSQL Documentation for details.
Frequency of checkpoints
If checkpoints are issued at short intervals, write performance is reduced. If the messages below are output to the server log during data
writes, increase the values of max_wal_size and checkpoint_timeout in postgresql.conf to reduce the frequency of checkpoints.
Example
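A minimal postgresql.conf sketch follows; the values are illustrative assumptions and should be sized for your workload.
max_wal_size = 4GB
checkpoint_timeout = 30min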
- Dynamic shared memory created during data load
The feature creates shared memory and shared memory message queues during data load. These are used to send external data from the
back end to the parallel workers, and for error notifications.
Note
If the value of shared_buffers in postgresql.conf is small, the system will often have to wait for write exclusive locks to the same data page
(as described in "Database server memory" in "11.1.1 Deciding whether to Install"). Since input data cannot be loaded from the shared
memory message queues during such waits, they will often be full. In these cases, it will not be possible to write to the shared memory
message queues, resulting in degraded data load performance.
See
Refer to "High-Speed Data Load Memory Requirements" in the Installation and Setup Guide for Server for information on the formula for
estimating memory requirements.
11.1.3 Setup
This section describes how to set up high-speed data load.
Setup flow
1. Setting Parameters
2. Installing the Extension
Example
The example below shows how to configure 2 instances of high-speed data load being executed simultaneously using a degree of parallelism
of 4.
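The postgresql.conf sketch below is one plausible configuration, assuming the default values (8) of max_worker_processes and
max_parallel_workers are otherwise unchanged; derive your own values as described in the note that follows.
max_prepared_transactions = 8    # 2 instances x degree of parallelism 4
max_worker_processes = 16        # default 8 + 2 x 4
max_parallel_workers = 16        # default 8 + 2 x 4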
Note
As shown in the example above, set max_prepared_transactions, max_worker_processes, and max_parallel_workers to values that cover the
degree of parallelism multiplied by the number of instances of this feature executed simultaneously.
The table below lists the postgresql.conf parameters that must also be checked.
See
Refer to "Resource Consumption" in the PostgreSQL Documentation for information on the parameters.
Example
The example below installs the extension on the "postgres" database.
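A minimal sketch, assuming the extension is named pgx_loader (the same name as the schema that holds the pgx_loader_state table):
postgres=# CREATE EXTENSION pgx_loader;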
Example
The example below loads the file /path/to/data.csv (2000 records) into table tbl using a degree of parallelism of 3.
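A sketch of the invocation is shown below; the subcommand and option names (load, -j for the degree of parallelism, -c for the COPY
command) should be treated as assumptions and checked against "pgx_loader" in the Reference.
$ pgx_loader load -j 3 -c "COPY tbl FROM '/path/to/data.csv' WITH (FORMAT csv)"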
Point
If an external file contains data that violates the format or constraints, the data load may fail partway through, resulting in delays for routine
tasks such as nightly batch processing. Therefore, it is recommended to remove the invalid data before executing the data load.
Note
The data inserted using this feature is dumped as a COPY command by the pg_dump command and the pg_dumpall command.
Example
The example below retrieves the global transaction identifier (gid) of in-doubt transactions performed by the database role myrole
and that used table tbl. The retrieved global transaction identifiers pgx_loader:9589 and pgx_loader:9590 identify in-doubt
transactions.
Example
The example below checks if in-doubt transactions with the global transaction identifiers pgx_loader:9589 and pgx_loader:9590
exist.
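One way to check is to query the standard pg_prepared_xacts view for the identifiers retrieved above; this is a sketch, and the original
procedure may instead use the pgx_loader_state table described below.
postgres=# SELECT gid, prepared, owner, database
             FROM pg_prepared_xacts
            WHERE gid IN ('pgx_loader:9589', 'pgx_loader:9590');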
See
Refer to "E.1 pgx_loader_state" for information on the pgx_loader_state table.
Example
The example below completes the in-doubt transactions prepared for table tbl.
Point
The recovery mode of the pgx_loader command only resolves transactions prepared by high-speed data load. For transactions prepared by
an application using distributed transactions other than this feature, follow the procedure described in "14.13 Actions in Response to Error
in a Distributed Transaction".
Example
The example below removes the extension on the "postgres" database.
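A minimal sketch, assuming the extension name pgx_loader used in this chapter:
postgres=# DROP EXTENSION pgx_loader;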
Note
- The information required for operation of high-speed data load is stored in the pgx_loader_state table of the pgx_loader schema. Do
not remove the high-speed data load extension if the pgx_loader_state table is not empty.
Chapter 12 Global Meta Cache
The Global Meta Cache (GMC) feature loads a meta cache into shared memory using the pgx_global_metacache parameter. This reduces
the amount of memory required throughout the system.
12.1 Usage
Describes how to use the Global Meta Cache feature.
When the cache is created, if the total amount of meta caches on shared memory exceeds the value specified by pgx_global_metacache, the
inactive, unreferenced meta caches are removed from the GMC area. Note that if the entire GMC area is in use and the cache cannot be created
there, the cache is temporarily created in the local memory of the backend process.
Example
Here is an example postgresql.conf configuration:
pgx_global_metacache = 800MB
Wait Events
The Global Meta Cache feature may cause wait events. Wait events are identified in the wait_event column of the pg_stat_activity view.
GMC specific wait events are described below.
[GMC Feature Wait Events]
Note
If global_metacache_sweep occurs frequently, increase the pgx_global_metacache setting.
See
Refer to "Viewing Statistics" in the PostgreSQL Documentation for information on the pg_stat_activity view.
12.2 Statistics
Describes the statistics for the Global Meta Cache feature.
Chapter 13 Backup/Recovery Using the Copy Command
By using a copy command created by the user, the pgx_dmpall command and the pgx_rcvall command can perform backup to any
destination and can perform recovery from any destination using any copy method.
Copy commands must be created in advance as executable scripts for the user to implement the copy process on database clusters and
tablespaces, and are called when executing the pgx_dmpall and pgx_rcvall commands.
This chapter describes backup/recovery using the copy command.
Point
It is also possible to back up only some tablespaces using the copy command. However, database resources not backed up using the copy
command are still backed up to the backup data storage destination.
Note
Both the backup data storage destination and the optional backup destination are necessary for recovery - if they are located in secondary
media, combined management of these is necessary.
Note
The backup data storage destination cannot be used as one of the backup areas used by the copy command.
Information
The backup information file is prepared in the backup data storage destination by the pgx_dmpall command, and contains information that
can be read or updated by the copy command. This file is managed by associating it with the latest backup successfully completed by the
pgx_dmpall command, so the latest backup information relating to the copy command registered by the user can be retrieved. Additionally,
the content of the backup information file can be displayed using the pgx_rcvall command.
- prepare mode
Determines which of the two backup areas will be used for the current backup.
The backup area to be used for the current backup is determined by reading the information relating to the latest backup destination
where the backup information file was written to during the previous backup.
- backup mode
Performs backup on the backup area determined by prepare mode, using any copy method.
- finalize mode
Writes information relating to the destination of the current backup to the backup information file.
This enables the prepare mode to check the destination of the previous backup during the next backup.
Note
The user can use any method to hand over backup information between modes within the copy command, such as creating temporary files.
- restore mode
Any copy method can be used to implement restore from the backup destination retrieved using the copy command for backup.
Point
By referring to the mode assigned to the copy command as an argument, backup and recovery can be implemented using a single copy
command.
Example
Using a bash script
case $1 in
prepare)
processingRequiredForPrepareMode
;;
backup)
processingRequiredForBackupMode
;;
finalize)
processingRequiredForFinalizeMode
;;
restore)
processingRequiredForRestoreMode
;;
esac
Point
A sample batch file that backs up the database cluster and tablespace directory to a specific directory is supplied to demonstrate how to write
a copy command.
The sample is stored in the directory below:
/installDir/share/copy_command.archive.sh.sample
- Database cluster
- Tablespace
To back up only some tablespaces, create a file listing them. This file is not necessary to back up all tablespaces.
Example
To back up only tablespaces tblspc1 and tblspc2
tblspc1
tblspc2
Performing backup
Execute the pgx_dmpall command with the -Y option specifying the full path of the copy command for backup created in step 3 of
preparation for backup.
The example below backs up only some tablespaces, but not the database cluster, using the copy command.
Example
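A sketch of such an invocation follows; the copy command path and tablespace list file path are hypothetical, and the options are those
described in the Point below.
$ pgx_dmpall -D /database/inst1 -Y '/database/copy_command.sh' --exclude-copy-cluster -P '/database/tablespace_list.txt'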
Point
- To exclude the database cluster from backup using the copy command, specify the --exclude-copy-cluster option.
- To back up only some tablespaces using the copy command, use the -P option specifying the full path of the file created in step 1 of
preparation for backup.
Example
$ pgx_rcvall -l -D /database/inst1
Date Status Dir Resources backed up by the copy command
2017-05-01 13:30:40 COMPLETE /backup/inst1/2017-05-01_13-30-40 pg_data,dbspace,indexspace
Perform recovery
Execute the pgx_rcvall command with the -Y option specifying the full path of the copy command for recovery created in step 3 of the
preparation for backup described in "13.2 Backup Using the Copy Command".
The example below recovers only some tablespaces, but not the database cluster, using the copy command.
Example
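A sketch of such an invocation follows; the paths are hypothetical, and depending on your configuration additional options described in
"pgx_rcvall" in the Reference (such as the backup storage directory) may also be required.
$ pgx_rcvall -D /database/inst1 -Y '/database/copy_command.sh'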
Point
If the latest backup was performed using the copy command, the pgx_rcvall command automatically recognizes which database resources
were backed up using the copy command, or whether resources were backed up to the backup data storage destination. Therefore, recovery
can be performed by simply executing the pgx_rcvall command specifying the copy command for recovery.
See
Refer to "pgx_rcvall" in the Reference for information on the command.
Format
The syntax for calling the copy command from the pgx_dmpall command is described below.
The copy command is called with the operation mode ("prepare", "backup", or "finalize") as its first argument:
copyCommandName operationMode
Argument
- Operation mode
Mode Description
prepare Implements the preparation process for backing up using the copy command.
Called before the PostgreSQL online backup mode is started.
backup Implements the backup process.
Called during the PostgreSQL online backup mode.
finalize Implements the backup completion process.
Called after the PostgreSQL online backup mode is completed.
Resource Description
Database cluster pg_data
Tablespace Tablespace name
Example
To back up the database cluster and the tablespaces dbspace and indexspace using the copy command, the file should contain the
following:
pg_data
dbspace
indexspace
Information
The encoding of resource names output to the backup target list file by the pgx_dmpall command is the encoding used when this
command connects to the database with auto specified for the client_encoding parameter, and is dependent on the locale at the time of
command execution.
The number of arguments varies depending on the operation mode. The arguments of each operation mode are as follows.
Additionally, the access permissions for the backup information file and backup target list file differ depending on the operation
mode. The access permissions of each operation mode are as follows.
Return value
Return value Description
0 Normal end
The pgx_dmpall command continues processing.
Other than 0 Abnormal end
The pgx_dmpall command terminates in error.
Description
- The copy command operates with the privileges of the operating system user who executed the pgx_dmpall command. Therefore, grant
copy command execution privileges to users who will execute the pgx_dmpall command. Additionally, have the copy command change
users as necessary.
- To write to the backup information file, use a method such as redirection from the copy command.
- Because the copy command is called for each mode, implement all processing for each one.
- To copy multiple resources simultaneously, have the copy command copy them in parallel.
Note
- The backup information file and backup target list file cannot be deleted. Additionally, the privileges cannot be changed.
- Standard output and standard error output of the copy command are output to the terminal where the pgx_dmpall command was
executed.
- If the copy command becomes unresponsive, the pgx_dmpall command will also become unresponsive. If the copy command is deemed
to be unresponsive by the operating system, use an operating system command to forcibly stop it.
- Output the copy command execution trace and the result to a temporary file, so that if it terminates in error, the cause can be investigated
at a later time.
- For prepare mode only, it is possible to use the PostgreSQL client application to access the database using the copy command. For all
other modes, do not execute FUJITSU Enterprise Postgres commands or PostgreSQL applications.
- Enable the fsync parameter in postgresql.conf, because data on the shared memory buffer needs to have been already written to disk
when backup starts.
Format
The syntax for calling the copy command from the pgx_rcvall command is described below.
Argument
- Operation mode
Mode Description
restore Performs restore.
- Full path of the backup target list file
Full path of the file containing the resources to be restored using the copy command, enclosed in single quotation marks.
The access permissions for the backup information file and backup target list file are as below.
Return value
Description
- The copy command operates with the privileges of the operating system user who executed the pgx_rcvall command. Therefore, grant
copy command execution privileges to users who will execute the pgx_rcvall command. Additionally, have the copy command change
users as necessary.
Note
- The backup information file and backup target list file cannot be deleted. Additionally, the privileges cannot be changed.
- Standard output and standard error output of the copy command are output to the terminal where the pgx_rcvall command was executed.
- If the copy command becomes unresponsive, the pgx_rcvall command will also become unresponsive. If the status of the copy
command is deemed to be unresponsive by the operating system, use an operating system command to forcibly stop it.
- Output the copy command execution trace and the result to a temporary file, so that if it terminates in error, the cause can be investigated
at a later time.
- Do not execute FUJITSU Enterprise Postgres commands or PostgreSQL applications in the copy command.
- There may be files and directories not required for recovery using the archive log included in the backup, such as postmaster.pid,
the pg_wal subdirectory, and pg_replslot in the database cluster. If such unnecessary files and directories exist, have the copy command
delete them after the restore.
Chapter 14 Actions when an Error Occurs
This chapter describes the actions to take when an error occurs in the database or an application, while FUJITSU Enterprise Postgres is
operating.
Depending on the type of error, it may be necessary to recover the database cluster. The recovery process recovers the following resources:
Note
Even if a disk is not defective, the same input/output error messages as those generated when the disk is defective may be output. The
recovery actions differ for these error messages.
Check the status of the disk, and select one of the following actions:
See
Refer to "Configuring Parameters" in the Installation and Setup Guide for Server for information on server logs.
- 1.5: Coefficient assuming the time excluding disk write, which is the most time-consuming step
- Backup data storage destination
Recovery time = usageByTheBackupDataStorageDestination / diskWritePerformance x 1.5
- usageByTheBackupDataStorageDestination: Disk space used by the backup data
- diskWritePerformance: Measured maximum data volume (bytes/second) that can be written per second in the system environment
where the operation is performed
- 1.5: Coefficient assuming the time excluding disk write, which is the most time-consuming step
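For instance, assuming (purely as an illustration) 100 GB of backup data and a measured disk write performance of 200 MB/second, the
estimate is 100 x 1024 / 200 x 1.5 = 768 seconds, or roughly 13 minutes.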
Point
Back up the database cluster after recovering it. Backup deletes obsolete archive logs (transaction logs copied to the backup data storage
destination), freeing up disk space and reducing the recovery time.
Note
Recovery operation cannot be performed on an instance that is part of a streaming replication cluster in standby mode.
If disk failure occurs on a standby instance, it may be necessary to delete and re-create the instance.
Recovery operation can be performed on an instance that is part of a streaming replication cluster in "Master" mode. If a recovery operation
is performed on a master instance, it will break the replication cluster and streaming replication will stop between the master instance and
all its standby instances. In such an event, the standby instances can be promoted to standalone instances or can be deleted and re-created.
If failure occurred in the data storage disk or the transaction log storage disk
Follow the procedure below to recover the data storage disk or the transaction log storage disk.
1. Stop applications
Stop applications that are using the database.
- Restore the keystore to its state at the time of the database backup.
- Enable automatic opening of the keystore.
6. Recover the database cluster
Log in to WebAdmin, and in the [Instances] tab, click [Solution] for the error message in the lower-right corner.
7. Run recovery
In the [Restore Instance] dialog box, click [Yes].
Instance restore is performed. An instance is automatically started when recovery is successful.
Note
WebAdmin does not support recovery of hash index. If you are using a hash index, then after recovery, execute the REINDEX
command to rebuild it. Use of hash indexes is not recommended.
8. Resume applications
Resume applications that are using the database.
Point
WebAdmin may be unable to detect disk errors, depending on how the error occurred.
If this happens, refer to "14.10.3 Other Errors" to perform recovery.
3. Run backup
Perform backup to enable recovery of the backup data. In the [Backup] dialog box, click [Yes]. The backup is performed. An instance
is automatically started when backup is performed.
Point
If you click [Recheck the status], the resources in the data storage destination and the backup data storage destination are reconfirmed. As
a result, the following occurs:
- If an error is detected
An error message is displayed in the message list again. Click [Solution], and resolve the problem by following the resolution for the
cause of the error displayed in the dialog box.
If failure occurred on the data storage disk or the transaction log storage directory
Follow the procedure below to recover the data storage disk or the transaction log storage directory.
1. Stop applications
Stop applications that are using the database.
Example
To create a data storage destination directory:
$ mkdir /database/inst1
$ chown fsepuser:fsepuser /database/inst1
$ chmod 700 /database/inst1
See
Refer to "Preparing Directories to Deploy Resources" under "Setup" in the Installation and Setup Guide for Server for information
on how to create a storage directory.
- Specify the data storage location in the -D option. If the -D option is omitted, the value of the PGDATA environment variable
is used by default.
Note
If recovery fails, remove the cause of the error in accordance with the displayed error message and then re-execute the pgx_rcvall
command.
If the message "pgx_rcvall: an error occurred during recovery" is displayed, then the log recorded when recovery was executed is
output after this message. The cause of the error is output in around the last fifteen lines of the log, so remove the cause of the error
in accordance with the message and then re-execute the pgx_rcvall command.
The following message displayed during recovery is output as part of the normal operation of the pgx_rcvall command (therefore the user
need not be concerned).
FATAL: The database system is starting
8. Resume applications
Resume applications that are using the database.
The following table shows the different steps to be performed depending on whether you stop the instance.
Y: Required
N: Not required
The procedure is as follows:
If an instance has not been stopped
If transaction log mirroring has not stopped, then stop it using the following SQL function.
SELECT pgx_pause_wal_multiplexing();
 pgx_pause_wal_multiplexing
----------------------------

(1 row)
- Changing archive_command
Specify a command that will surely complete normally, such as "echo skipped archiving WAL file %f" or "/bin/true", so that
archive logs will be regarded as having been output.
If you specify echo, a message is output to the server log, so it may be used as a reference when you conduct investigations.
If you simply want to stop output of errors without the risk that operations will not be able to continue, specify an empty string ('')
in archive_command and reload the configuration file.
$ mkdir /database/inst1
$ chown fsepuser:fsepuser /database/inst1
$ chmod 700 /database/inst1
Refer to "3.2.2 Using Server Commands" for information on how to create a backup data storage destination.
SELECT pgx_resume_wal_multiplexing();
7. Run backup
Use the pgx_dmpall command to back up the database cluster.
Specify the following value in the pgx_dmpall command:
- Specify the data storage destination in the -D option. If the -D option is omitted, the value of the PGDATA environment
variable is used by default.
Example
If an instance has been stopped
1. Stop applications
Stop applications that are using the database.
# mkdir /backup/inst1
# chown fsepuser:fsepuser /backup/inst1
# chmod 700 /backup/inst1
6. Run backup
Use the pgx_dmpall command to back up the database cluster.
Specify the following value in the pgx_dmpall command:
- Specify the data storage destination in the -D option. If the -D option is omitted, the value of the PGDATA environment
variable is used by default.
Example
7. Resume applications
Resume applications that are using the database.
See
- Refer to "pgx_rcvall" and "pgx_dmpall" in the Reference for information on the pgx_rcvall command and pgx_dmpall command.
- Refer to "Write Ahead Log" under "Server Administration" in the PostgreSQL Documentation for information on archive_command.
- Refer to "B.1 WAL Mirroring Control Functions" for information on pgx_resume_wal_multiplexing.
Note
- Back up the database cluster after recovering it. Backup deletes obsolete archive logs (transaction logs copied to the backup data storage
destination), freeing up disk space and reducing the recovery time.
- If you recover data to a point in the past, a new time series (database update history) will start from that recovery point. When recovery
is complete, the recovery point is the latest point in the new time series. When you subsequently recover data to the latest state, the
database update is re-executed on the new time series.
1. Stop applications
Stop applications that are using the database.
- Specify the data storage destination in the -D option. If the -D option is omitted, the value of the PGDATA environment variable
is used by default.
- Specify the data storage destination in the -D option. If the -D option is omitted, the value of the PGDATA environment variable
is used by default.
- Specify the recovery date and time in the -e option.
Example
In the following examples, "May 20, 2017 10:00:00" is specified as the recovery time.
Note
If recovery fails, remove the cause of the error in accordance with the displayed error message and then re-execute the pgx_rcvall
command.
If the message "pgx_rcvall: an error occurred during recovery" is displayed, then the log recorded when recovery was executed is
output after this message. The cause of the error is output in around the last fifteen lines of the log, so remove the cause of the error
in accordance with the message and then re-execute the pgx_rcvall command.
The following message displayed during recovery is output as part of the normal operation of the pgx_rcvall command (therefore the user
need not be concerned).
Note
The pgx_rcvall command cannot accurately recover a hash index. If you are using a hash index, wait for the instance to start and then
execute the REINDEX command for the appropriate index.
7. Resume applications
Resume applications that are using the database.
See
Refer to "pgx_rcvall" in the Reference for information on the pgx_rcvall command.
Note
- Back up the database cluster after recovering it. Backup deletes obsolete archive logs (transaction logs copied to the backup data storage
destination), freeing up disk space and reducing the recovery time.
- If you recover data to a point in the past, a new time series (database update history) will start from that recovery point. When recovery
is complete, the recovery point is the latest point in the new time series. When you subsequently recover data to the latest state, the
database update is re-executed on the new time series.
- An effective restore point is one created on a time series for which you have made a backup. That is, if you recover data to a point in
the past, you cannot use any restore points set after that recovery point. Therefore, once you manage to recover your target past data,
make a backup.
Note
Recovery operation cannot be performed on an instance that is part of a streaming replication cluster in standby mode.
If disk failure occurs on a standby instance, it may be necessary to delete and re-create the instance.
Recovery operation can be performed on an instance that is part of a streaming replication cluster in "Master" mode. If a recovery operation
is performed on a master instance, it will break the replication cluster and streaming replication will stop between the master instance and
all its standby instances. In such an event, the standby instances can be promoted to standalone instances or can be deleted and re-created.
Follow the procedure below to recover the data in the data storage disk.
1. Stop applications
Stop applications that are using the database.
- Restore the keystore to its state at the time of the database backup.
- Enable automatic opening of the keystore.
4. Recover the database cluster
Log in to WebAdmin, and in the [Instances] tab, select the instance to be recovered and click .
Note
WebAdmin cannot accurately recover a hash index. If you are using a hash index, then after recovery, execute the REINDEX
command for the appropriate index.
1. Stop applications
Stop applications that are using the database.
- Specify the data storage destination in the -D option. If the -D option is omitted, the value of the PGDATA environment variable
is used by default.
- Specify the data storage destination in the -D option. If the -D option is omitted, the value of the PGDATA environment variable
is used by default.
Note
If recovery fails, remove the cause of the error in accordance with the displayed error message and then re-execute the pgx_rcvall
command.
If the message "pgx_rcvall: an error occurred during recovery" is displayed, then the log recorded when recovery was executed is
output after this message. The cause of the error is output in around the last fifteen lines of the log, so remove the cause of the error
in accordance with the message and then re-execute the pgx_rcvall command.
The following message displayed during recovery is output as part of the normal operation of pgx_rcvall (therefore the user need not
be concerned).
6. Start the instance
Start the instance.
Refer to "2.1.2 Using Server Commands" for information on how to start an instance.
Note
The pgx_rcvall command cannot accurately recover a hash index. If you are using a hash index, wait for the instance to start and then
execute the REINDEX command for the appropriate index.
See
Refer to "pgx_rcvall" in the Reference for information on the pgx_rcvall command.
2. Close connections from clients that have been in the waiting state for an extended period.
Use pg_terminate_backend() to close connections that have been trying to connect for an extended period.
However, when considering continued compatibility of applications, do not reference or use system catalogs and functions directly
in SQL statements. Refer to "Notes on Application Compatibility" in the Application Development Guide for details.
Example
The following example closes connections where the client has been in the waiting state for at least 60 minutes.
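The statement below is a sketch that combines pg_terminate_backend with the monitoring conditions used earlier in this guide for the
"idle in transaction" state; adjust the interval to your operation.
postgres=# SELECT pg_terminate_backend(pid) FROM pg_stat_activity
             WHERE state = 'idle in transaction'
               AND current_timestamp > cast(query_start + interval '60 minutes' as timestamp);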
See
- Refer to "System Administration Functions" under "The SQL Language" in the PostgreSQL Documentation for information on
pg_terminate_backend.
- Refer to "Notes on Application Compatibility" in the Application Development Guide for information on how to maintain application
compatibility.
Process ID 16643 may belong to a connection that was established a considerable time ago by an UPDATE statement, or to a connection
that has been occupying resources (waiting).
2. Close connections from clients that have been in the waiting state for an extended period.
Use pg_terminate_backend() to close the connection with the process ID identified in step 1 above.
The example below disconnects the process with ID 16643.
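For example, the following SQL disconnects that process:

SELECT pg_terminate_backend(16643);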
However, when considering continued compatibility of applications, do not reference or use system catalogs and functions directly
in SQL statements.
See
- Refer to "System Administration Functions" under "The SQL Language" in the PostgreSQL Documentation for information on
pg_terminate_backend.
- Refer to "Notes on Application Compatibility" in the Application Development Guide for information on how to maintain application
compatibility.
- 96 -
14.5 Actions in Response to an Access Error
If access is denied, grant privileges allowing the instance administrator to operate the following directories, and then re-execute the
operation. Also, refer to the system log and the server log, and confirm that the file system has not been mounted as read-only due to a disk
error. If the file system has been mounted as read-only, mount it properly and then re-execute the operation.
See
Refer to "Preparing Directories to Deploy Resources" under "Setup" in the Installation and Setup Guide for Server for information on the
privileges required for the directory.
1. Create a tablespace
Use the CREATE TABLESPACE command to create a new tablespace in the prepared disk.
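A sketch of the statements involved is shown below; the tablespace name, directory, and table name are assumptions, and ALTER TABLE ... SET TABLESPACE moves an existing table into the new tablespace.

CREATE TABLESPACE new_tbsp LOCATION '/newdisk/tablespace_dir';
ALTER TABLE my_table SET TABLESPACE new_tbsp;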
See
Refer to "SQL Commands" under "Reference" in the PostgreSQL Documentation for information on the CREATE TABLESPACE
command and ALTER TABLE command.
- 97 -
- 14.6.2.2 Using Server Commands
The following sections describe procedures that use each of these methods to replace the disk and migrate resources at the data storage
destination.
Note
- Before replacing the disk, stop applications and instances that are using the database.
- It is recommended that you back up the database cluster following recovery. Backup deletes obsolete archive logs (transaction logs
copied to the backup data storage destination), freeing up disk space and reducing the recovery time.
1. Back up files
If the disk at the data storage destination contains any required files, back up the files. It is not necessary to back up the data storage
destination.
2. Stop applications
Stop applications that are using the database.
7. Resume applications
Resume applications that are using the database.
1. Back up files
If the disk at the data storage destination contains any required files, back up the files. It is not necessary to back up the data storage
destination.
2. Stop applications
Stop applications that are using the database.
- 98 -
4. Stop the instance
After backup is complete, stop the instance. Refer to "2.1.2 Using Server Commands" for information on how to stop an instance.
If the instance fails to stop, refer to "14.11 Actions in Response to Failure to Stop an Instance".
$ mkdir /database/inst1
$ chown fsepuser:fsepuser /database/inst1
$ chmod 700 /database/inst1
- Specify the data storage destination in the -D option. If the -D option is omitted, the value of the PGDATA environment variable
is used by default.
Note
If recovery fails, remove the cause of the error in accordance with the displayed error message and then re-execute the pgx_rcvall
command.
If the message "pgx_rcvall: an error occurred during recovery" is displayed, then the log recorded when recovery was executed is
output after this message. The cause of the error is output in approximately the last fifteen lines of the log, so remove the cause of the error
in accordance with the message and then re-execute the pgx_rcvall command.
The following message displayed during recovery is output as part of normal operation of pgx_rcvall, so the user need not be
concerned.
See
Refer to "pgx_rcvall" in the Reference for information on the pgx_rcvall command.
- 99 -
Note
The pgx_rcvall command cannot accurately recover a hash index. If you are using a hash index, wait for the pgx_rcvall command to
end and then execute the REINDEX command for the appropriate index.
- 100 -
3. Delete temporarily saved backup data
If backup completes normally, the temporarily saved backup data becomes unnecessary and is deleted.
The following example deletes backup data that was temporarily saved in /mnt/usb.
Example
- 101 -
To prevent this, use the following methods to stop output of archive logs.
If you simply want to stop output of errors without the risk that operations will not be able to continue, specify an empty string ('')
in archive_command and reload the configuration file.
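A minimal sketch of this setting, assuming the data storage destination is /database/inst1:

archive_command = ''        # postgresql.conf
$ pg_ctl reload -D /database/inst1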
SELECT pgx_resume_wal_multiplexing()
6. Run backup
Use the pgx_dmpall command to back up the database cluster.
Specify the following option in the pgx_dmpall command:
- Specify the directory of the data storage destination in the -D option. If the -D option is omitted, the value of the PGDATA
environment variable is used by default.
Example
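An invocation might look like the following (the data storage destination path is an assumption):

$ pgx_dmpall -D /database/inst1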
- 102 -
2. Stop the instance
Stop the instance. Refer to "2.1.2 Using Server Commands" for details.
If the instance fails to stop, refer to "14.11 Actions in Response to Failure to Stop an Instance".
5. Run backup
Use the pgx_dmpall command to back up the database cluster.
Specify the following value in the pgx_dmpall command:
- Specify the data storage destination in the -D option. If the -D option is omitted, the value of the PGDATA environment variable
is used by default.
Example
6. Resume applications
Resume applications that are using the database.
See
- Refer to "pgx_rcvall" and "pgx_dmpall" in the Reference for information on the pgx_rcvall command and pgx_dmpall command.
- Refer to "Write Ahead Log" under "Server Administration" in the PostgreSQL Documentation for information on archive_command.
- Refer to "B.1 WAL Mirroring Control Functions" for information on the pgx_is_wal_multiplexing_paused and
pgx_resume_wal_multiplexing.
- 103 -
- 14.7.2.2 Using Server Commands
Note
Before replacing the disk, stop applications that are using the database.
1. Back up files
If the disk at the backup data storage destination contains any required files, back up the files. It is not necessary to back up the backup
data storage destination.
4. Run backup
Log in to WebAdmin, and perform recovery operations. Refer to steps 2 ("Recover the backup data") and 3 ("Run backup") under
"If failure occurred on the backup storage disk" in "14.1.1 Using WebAdmin".
5. Restore files
Restore the files backed up in step 1.
- 104 -
No   Step                                                   Instance stopped
                                                            No        Yes
1    Back up files                                          Y         Y
2    Temporarily save backup data                           Y         Y
3    Confirm that transaction log mirroring has stopped     Y         N
4    Stop output of archive logs                            Y         N
5    Stop applications                                      N         Y
6    Stop the instance                                      N         Y
7    Replace with a larger capacity disk                    Y         Y
8    Create a backup storage directory                      Y         Y
9    Resume output of archive logs                          Y         N
10   Resume transaction log mirroring                       Y         N
11   Start the instance                                     N         Y
12   Run backup                                             Y         Y
13   Resume applications                                    N         Y
14   Restore files                                          Y         Y
15   Delete temporarily saved backup data                   Y         Y
Y: Required
N: Not required
1. Back up files
If the disk at the backup data storage destination contains any required files, back up the files. It is not necessary to back up the
backup data storage destination.
- 105 -
If transaction log mirroring has not stopped, then stop it using the following SQL function.
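The pausing function belongs to "B.1 WAL Mirroring Control Functions"; assuming it is named pgx_pause_wal_multiplexing, the call would be:

SELECT pgx_pause_wal_multiplexing();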
If you simply want to stop output of errors without the risk that operations will not be able to continue, specify an empty string
('') in archive_command and reload the configuration file.
# mkdir /backup/inst1
# chown fsepuser:fsepuser /backup/inst1
# chmod 700 /backup/inst1
SELECT pgx_resume_wal_multiplexing()
9. Run backup
Use the pgx_dmpall command to back up the database cluster.
Specify the following value in the pgx_dmpall command:
- Specify the data storage destination in the -D option. If the -D option is omitted, the value of the PGDATA environment
variable is used by default.
Example
- 106 -
10. Restore files
Restore the files backed up in step 1.
1. Back up files
If the disk at the backup data storage destination contains any required files, back up the files. It is not necessary to back up the
backup data storage destination.
3. Stop applications
Stop applications that are using the database.
# mkdir /backup/inst1
# chown fsepuser:fsepuser /backup/inst1
# chmod 700 /backup/inst1
8. Run backup
Use the pgx_dmpall command to back up the database cluster.
Specify the following value in the pgx_dmpall command:
- 107 -
- Specify the data storage destination in the -D option. If the -D option is omitted, the value of the PGDATA environment
variable is used by default.
Example
9. Resume applications
Resume applications that are using the database.
See
- Refer to "pgx_rcvall" and "pgx_dmpall" in the Reference for information on the pgx_rcvall command and pgx_dmpall command.
- Refer to "Write Ahead Log" under "Server Administration" in the PostgreSQL Documentation for information on archive_command.
- Refer to "B.1 WAL Mirroring Control Functions" for information on the pgx_is_wal_multiplexing_paused and
pgx_resume_wal_multiplexing.
Note
- Before replacing the disk, stop applications that are using the database.
- It is recommended that you back up the database cluster following recovery. Backup deletes obsolete archive logs (transaction logs
copied to the backup data storage destination), freeing up disk space and reducing the recovery time.
- 108 -
14.8.1.1 Using WebAdmin
Follow the procedure below to replace the disk and migrate resources at the transaction log storage destination by using WebAdmin.
1. Back up files
If the disk at the transaction log storage destination contains any required files, back up the files. It is not necessary to back up the
transaction log storage destination.
3. Stop applications
Stop applications that are using the database.
- Restore the keystore to its state at the time of the database backup.
- Enable automatic opening of the keystore.
8. Recover the database cluster
Log in to WebAdmin, and perform recovery operations. Refer to steps 4 ("Create a tablespace directory ") to 7 ("Run Recovery")
under " If failure occurred in the data storage disk or the transaction log storage disk " in "14.1.1 Using WebAdmin" for information
on the procedure. An instance is automatically started when recovery is successful.
9. Resume applications
Resume applications that are using the database.
1. Back up files
If the disk at the transaction log storage destination contains any required files, back up the files. It is not necessary to back up the
transaction log storage destination.
3. Stop applications
Stop applications that are using the database.
- 109 -
If the instance fails to stop, refer to "14.11 Actions in Response to Failure to Stop an Instance".
# mkdir /tranlog/inst1
# chown fsepuser:fsepuser /tranlog/inst1
# chmod 700 /tranlog/inst1
- Specify the data storage destination in the -D option. If the -D option is omitted, the value of the PGDATA environment variable
is used by default.
Note
If recovery fails, remove the cause of the error in accordance with the displayed error message and then re-execute the pgx_rcvall
command.
If the message "pgx_rcvall: an error occurred during recovery" is displayed, then the log recorded when recovery was executed is
output after this message. The cause of the error is output in approximately the last fifteen lines of the log, so remove the cause of the error
in accordance with the message and then re-execute the pgx_rcvall command.
The following message displayed during recovery is output as part of normal operation of the pgx_rcvall command, so the user
need not be concerned.
See
Refer to "pgx_rcvall" in the Reference for information on the pgx_rcvall command.
Note
The pgx_rcvall command cannot accurately recover a hash index. If you are using a hash index, wait for the instance to start and then
execute the REINDEX command for the appropriate index.
- 110 -
10. Resume applications
Resume applications that are using the database.
See
Refer to "Setup" in the Installation and Setup Guide for Server for information on how to create an instance and build the runtime
environment.
- postgresql.conf
- pg_hba.conf
See
Refer to the following for information on the parameters in the configuration file:
- 111 -
Refer to "14.14.2 Errors Caused by Power Failure or Mounting Issues", and take actions accordingly.
1. Delete the data storage destination directory and the transaction log storage destination directory
Back up the data storage destination directory and the transaction log storage destination directory before deleting them.
3. Run recovery
Restore the database cluster after WebAdmin detects an error.
Refer to "14.2.1 Using WebAdmin" for details.
1. Delete the data storage destination directory and the transaction log storage destination directory
Save the data storage destination directory and the transaction log storage destination directory, and then delete them.
2. Execute recovery
Use the pgx_rcvall command to recover the database cluster.
Refer to "14.2.2 Using the pgx_rcvall Command" for details.
- 112 -
14.11.2 Using Server Commands
There are three methods:
Example
Example
- 113 -
2. Forcibly stop the server process
As instance manager, forcibly stop the server process.
Using the pg_ctl command
- As the instance is yet to be created completely, there are no applications connecting to the database.
- The standby instance is in error state and is not running.
- There are no backups for the standby instance and as a result, it cannot be recovered.
See
Refer to "Deleting Instances" in the Installation and Setup Guide for details on how to delete an instance.
1. An in-doubt transaction will have occurred if a message similar to the one below is output to the log when the server is restarted.
Example
2. Refer to system view pg_prepared_xacts to obtain information about the prepared transaction.
If the transaction identifier of the prepared transaction in the list (in the transaction column of pg_prepared_xacts) is the same as
the identifier of the in-doubt transaction obtained from the log output when the server was restarted, then that row is the
information about the in-doubt transaction.
Example
- 114 -
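For example, the prepared transactions can be listed as follows; the output shown below continues from this query.

postgres=# select * from pg_prepared_xacts;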
 2103 | 374cc221-f6dc-4b73-9d62-d4fec9b430cd | 2017-05-06 16:28:48.471+08 | postgres | postgres
(1 row)
Information about the in-doubt transaction is output to the row with the transaction ID 2103 in the transaction column.
If it still cannot be determined from this information, wait a few moments and then check pg_prepared_xacts again.
If there is a transaction that has continued since the last time you checked, then it is likely that it is the one in the in-doubt state.
Point
As you can see from the explanations in this section, there is no one way to definitively determine in-doubt transactions.
Consider collecting other supplementary information (for example, logging on the client) or performing other operations (for example,
allocating database users per job).
Example
- 115 -
Determine the cause of the error by checking the information in the system log and the server log, the disk access LED, network wiring, and
network card status. Take appropriate action to remove the cause of the error, for example, replace problematic devices.
- All instance operation buttons are disabled, except for "Edit instance", "Refresh instance", and "Delete Mirroring Controller"
- A red error status indicator is displayed on the instance icon
- For an anomaly specific to backup storage path, a red error status indicator is displayed on the [Backup storage] disk icon, and [Backup
storage status] is set to "Error"
- The message, "WebAdmin has detected an anomaly with...", is displayed in the [Message] section along with an associated [Solution]
button
- 116 -
Select the required option, click [OK], and then resolve the anomaly error.
Refer to "Editing instance information" in the Installation and Setup Guide for Server for information on the [Edit instance] page.
Note
Critical errors encountered during anomaly resolution will be displayed; however, rollback of the instance to its previous state is not
supported.
- The Mirroring Controller management folder or configuration files have been deleted
- The permissions to the Mirroring Controller management folder or configuration files have been changed such that:
- The instance administrator's access to Mirroring Controller configuration is denied
- Users other than an instance administrator have access privileges to Mirroring Controller configuration files
WebAdmin checks for anomalies when Mirroring Controller status check is performed.
The following occurs when a Mirroring Controller anomaly is detected:
- All Mirroring Controller functionality is disabled for the replication cluster, except for "Delete Mirroring Controller"
- [Mirroring Controller status] is set to "Error"
- Either of the following messages is displayed in the [Message] section
"Failed to access the Mirroring Controller management folder or configuration files 'path'. Mirroring Controller functionality has been
disabled. Consider deleting Mirroring Controller and adding it again."
"Failed to find the Mirroring Controller management folder or configuration files 'path'. Mirroring Controller functionality has been
disabled. Consider deleting Mirroring Controller and adding it again."
- 117 -
Appendix A Parameters
This appendix describes the parameters to be set in the postgresql.conf file of FUJITSU Enterprise Postgres.
The postgresql.conf file is located in the data storage destination.
- core_directory (string)
This parameter specifies the directory where the corefile is to be output. If this parameter is omitted, the data storage destination is used
by default. This parameter can only be set when specified on starting an instance. It cannot be changed dynamically while an instance
is active.
- core_contents (string)
This parameter specifies the contents to be included in the corefile.
- full: Outputs all contents of the server process memory to the corefile.
- none: Does not output a corefile.
- minimum: Outputs only non-shared memory server processes to the corefile. This reduces the size of the corefile. However, in some
cases, this file may not contain sufficient information for examining the factor that caused the corefile to be output.
If this parameter is omitted, "minimum" is used by default. This parameter can only be set when specified on starting an instance. It
cannot be changed dynamically, while an instance is active.
- keystore_location (string)
This parameter specifies the directory that stores the keystore file. Specify a different location from other database clusters. This
parameter can only be set when specified on starting an instance. It cannot be changed dynamically while an instance is active.
- tde_z.IBM_CCA_CSU_DEFAULT_ADAPTER (string)
This parameter specifies the value of the environment variable CSU_DEFAULT_ADAPTER for IBM Common Cryptographic
Architecture (CCA) configuration. If this parameter is omitted, the current CCA configuration is followed. This parameter can only be
set when specified on starting an instance. It cannot be changed dynamically while an instance is active.
Information
Refer to the IBM documentation for the environment variable CSU_DEFAULT_ADAPTER.
- tde_z.IBM_CCA_CSU_DEFAULT_DOMAIN (string)
This parameter specifies the value of the environment variable CSU_DEFAULT_DOMAIN for IBM Common Cryptographic
Architecture (CCA) configuration. If this parameter is omitted, the current CCA configuration is followed. This parameter can only be
set when specified on starting an instance. It cannot be changed dynamically while an instance is active.
Information
Refer to the IBM documentation for the environment variable CSU_DEFAULT_DOMAIN.
- tde_z.SLOT_ID (string)
Specifies the slot ID assigned to the instance of FUJITSU Enterprise Postgres. This parameter can only be set when specified on starting
an instance. It cannot be changed dynamically while an instance is active.
- tde_z.USER_PIN (string)
User PIN to open a keystore. Specifies the user PIN set to the slot ID assigned to the instance of FUJITSU Enterprise Postgres. This
parameter can only be set when specified on starting an instance. It cannot be changed dynamically while an instance is active.
- tablespace_encryption_algorithm (string)
This parameter specifies the encryption algorithm for tablespaces that will be created. Valid values are "AES128", "AES256", and
"none". If you specify "none", encryption is not performed. The default value is "none". To perform encryption, it is recommended that
you specify "AES256". Only superusers can change this setting.
- 118 -
- backup_destination (string)
This parameter specifies the absolute path of the directory where pgx_dmpall will store the backup data. Specify a different location
from other database clusters. This parameter can only be set when specified on starting an instance. It cannot be changed dynamically
while an instance is active.
Place this directory on a different disk from the data directory to be backed up and the tablespace directory. Ensure that users do not
store arbitrary files in this directory, because the contents of this directory are managed by the database system.
- search_path (string)
When using the SUBSTR function compatible with Oracle databases, set "oracle" and "pg_catalog" in the search_path parameter. You
must specify "oracle" before "pg_catalog".
Example
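One possible entry, placing "oracle" before "pg_catalog" (adjust the rest of the path to your environment):

search_path = '"$user", public, oracle, pg_catalog'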
Information
- The search_path feature specifies the priority of the schema search path. The SUBSTR function in Oracle database is defined in the
oracle schema.
- Refer to "Statement Behavior" under "Server Administration" in the PostgreSQL Documentation for information on search_path.
- track_waits (string)
This parameter enables collection of statistics for pgx_stat_lwlock and pgx_stat_latch.
- track_sql (string)
This parameter enables collection of statistics for pgx_stat_sql.
- Minimum value: 0
- Maximum value: 80
If this parameter is omitted, 0 will be used.
- Minimum value: 0
- 119 -
- Maximum value: Maximum value that can be expressed as a 4-byte signed integer
If this parameter is omitted or a value outside this range is specified, 18000 will be used.
- Minimum value: 1
- Maximum value: 8388607
If this parameter is omitted or a value outside this range is specified, 8 will be used.
- vci.enable (string)
This parameter enables or disables VCI.
- vci.log_query (string)
This parameter enables or disables log output when VCI is not used due to insufficient memory specified by vci.max_local_ros.
- Minimum value: 1 MB
- Maximum value: Maximum value that can be expressed as a 4-byte signed integer
If this parameter is omitted or a value outside this range is specified, 256 MB will be used.
- Minimum value: 64 MB
- Maximum value: Maximum value that can be expressed as a 4-byte signed integer
If this parameter is omitted or a value outside this range is specified, 64 MB will be used.
Information
The maximum value that can be expressed as a 4-byte signed integer changes according to the operating system. Follow the definition
of the operating system in use.
- Integer (1 or greater): Parallel scan is performed using the specified degree of parallelism.
- 0: Stops the parallel scan process.
- Negative number: The specified value minus the maximum number of CPUs obtained from the environment is used as the degree
of parallelism and parallel scan is performed.
- 120 -
If this parameter is omitted or a value outside this range is specified, 0 will be used.
- Minimum value: 32 MB
- Maximum value: Maximum value that can be expressed as a 4-byte signed integer
If this parameter is omitted or a value outside this range is specified, 1 GB will be used.
- track_gmc (string)
This parameter enables collection of statistics for pgx_stat_gmc.
See
Refer to "Server Configuration" under "Server Administration" in the PostgreSQL Documentation for information on other postgresql.conf
parameters.
- 121 -
Appendix B System Administration Functions
This appendix describes the system administration functions of FUJITSU Enterprise Postgres.
See
Refer to "System Administration Functions" under "The SQL Language" in the PostgreSQL Documentation for information on other
system administration functions.
If WAL multiplexing has not been configured, these functions return an error. Setting the backup_destination parameter in postgresql.conf
configures WAL multiplexing.
Only superusers can execute these functions.
The pgx_open_keystore function uses the specified user pin or passphrase to open the keystore. When the keystore is opened, it enables
access to the master encryption key. In this way, you can access the encrypted data and create encrypted tablespaces. If you are using a
file-based keystore and the keystore is already open, this function returns an error.
Only superusers can execute this function. Also, this function cannot be executed within a transaction block.
The pgx_set_master_key function generates a master encryption key and stores it in the keystore. If the keystore does not exist, this function
creates a keystore. If the keystore already exists, this function modifies the master encryption key. If the keystore has not been opened, this
function opens it.
The user pin is a string of 4 to 8 bytes.
The passphrase is a string of 8 to 200 bytes.
Only superusers can execute this function. Also, this function cannot be executed within a transaction block. Processing is not affected by
whether the keystore is open.
- 122 -
The pgx_set_keystore_passphrase function changes the keystore passphrase. Specify the current passphrase in oldPassphrase, and a new
passphrase in newPassphrase.
The passphrase is a string of 8 to 200 bytes.
Only superusers can execute this function. Also, this function cannot be executed within a transaction block. Processing is not affected by
whether the keystore is open.
B.3.1 pgx_alter_confidential_policy
Description
Changes masking policies
Format
The format varies depending on the content to be changed. The format is shown below.
- Common format
common_arg:
[schema_name := 'schemaName',]
table_name := 'tableName',
policy_name := 'policyName'
partialOpt:
function_parameters := 'maskingFmt'
regexpOpt:
regexp_pattern := 'regexpPattern',
- 123 -
regexp_replacement := 'regexpReplacementChar',
[, regexp_flags := 'regexpFlags']
partialOpt:
function_parameters := 'maskingFmt'
regexpOpt:
regexp_pattern := 'regexpPattern',
regexp_replacement := 'regexpReplacementChar',
[, regexp_flags := 'regexpFlags']
Argument
The argument varies depending on the content to be changed. Details are as follows.
- Common arguments
- 124 -
Masking type for which an
argument can be specified   Argument      Data type     Description                                           Default value
All                         schema_name   varchar(63)   Schema name of table for which a masking policy is    'public'
                                                        applied
                            table_name    varchar(63)   Name of table for which a masking policy is applied   Mandatory
                            policy_name   varchar(63)   Masking policy name                                    Mandatory
- 125 -
Masking type for which an
argument can be specified   Argument        Data type     Description           Default value
All                         action          varchar(63)   'MODIFY_COLUMN'       Mandatory
                            column_name     varchar(63)   Masking target name   Mandatory
                            function_type   varchar(63)   Masking type          'FULL'
- 126 -
Argument              Mandatory or optional
                      ADD_COLUMN                        DROP_    MODIFY_      MODIFY_COLUMN                     SET_POLICY_   SET_COLUMN_
                      Full      Partial   Regular       COLUMN   EXPRESSION   Full      Partial   Regular       DESCRIPTION   DESCRIPTION
                      masking   masking   expression                          masking   masking   expression
                                          masking                                                 masking
policy_name           N         N         N             N        N            N         N         N             N             N
action                Y         Y         Y             N        N            N         N         N             N             N
column_name           N         N         N             N        -            N         N         N             -             N
function_type         Y         N         N             -        -            Y         N         N             -             -
expression            -         -         -             -        N            -         -         -             -             -
policy_description    -         -         -             -        -            -         -         -             N             -
column_description    -         -         -             -        -            -         -         -             -             N
function_parameters   -         N         -             -        -            -         N         -             -             -
regexp_pattern        -         -         N             -        -            -         -         N             -             -
regexp_replacement    -         -         N             -        -            -         -         N             -             -
regexp_flags          -         -         Y             -        -            -         -         Y             -             -
Return value
Execution example 1
Adding masking policy p1 to masking target c2
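A sketch of such a call, combining the common arguments with the partial-masking options (the column name and masking format shown are illustrative):

postgres=# select pgx_alter_confidential_policy(
             table_name := 't1', policy_name := 'p1', action := 'ADD_COLUMN',
             column_name := 'c2', function_type := 'PARTIAL',
             function_parameters := 'VVVFVVVVFVVVV, VVV-VVVV-VVVV, *, 4, 11');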
Execution example 2
Deleting masking target c1 from masking policy p1
- 127 -
Execution example 3
Changing the masking condition for masking policy p1
Execution example 4
Changing the content of masking policy p1 set for masking target c2
Execution example 5
Changing the description of masking policy p1
Execution example 6
Changing the description of masking target c2
Description
- The arguments for the pgx_alter_confidential_policy system management function can be specified in any order.
- The action parameters below can be specified. When action parameters are omitted, ADD_COLUMN is applied.
Parameter Description
ADD_COLUMN Adds a masking target to a masking policy.
DROP_COLUMN Deletes a masking target from a masking policy.
MODIFY_EXPRESSION Changes expression.
MODIFY_COLUMN Changes the content of a masking policy set for a masking target.
SET_POLICY_DESCRIPTION Changes policy_description.
SET_COLUMN_DESCRIPTION Changes column_description.
- The function_parameters argument is enabled when the function_type is PARTIAL. If the function_type is other than PARTIAL, it will
be ignored.
- 128 -
- The arguments below are enabled when the function_type is REGEXP. If the function_type is other than REGEXP, these arguments
will be ignored.
- regexp_pattern
- regexp_replacement
- regexp_flags
See
- Refer to "String Constants" in the PostgreSQL Documentation for information on the strings to specify for arguments.
- Refer to "POSIX Regular Expressions" in the PostgreSQL Documentation and check pattern, replacement, and flags for information
on the values that can be specified for regexp_pattern, regexp_replacement, and regexp_flags.
B.3.2 pgx_create_confidential_policy
Description
Creates masking policies
Format
The format varies depending on the masking type. The format is shown below.
pgx_create_confidential_policy(
[schema_name := 'schemaName',]
table_name := 'tableName',
policy_name := 'policyName',
expression := 'expr'
[, enable := 'policyStatus']
[, policy_description := 'policyDesc']
[, column_name := 'colName'
[, function_type := 'FULL'] |
[, function_type := 'PARTIAL', partialOpt] |
[, function_type := 'REGEXP', regexpOpt]
[, column_description := 'colDesc']
])
partialOpt:
function_parameters := 'maskingFmt'
regexpOpt:
regexp_pattern := 'regexpPattern',
regexp_replacement := 'regexpReplacementChar',
[, regexp_flags := 'regexpFlags']
Argument
Details are as follows.
- 129 -
Masking type for which an
argument can be specified   Argument             Data type       Description                  Default value
                            policy_name          varchar(63)     Masking policy name          Mandatory
                            expression           varchar(1024)   Masking condition            Mandatory
                            enable               boolean         Masking policy status        't'
                                                                 - 't': Enabled
                                                                 - 'f': Disabled
                            policy_description   varchar(1024)   Masking policy description   NULL
                            column_name          varchar(63)     Masking target name          NULL
                            function_type        varchar(63)     Masking type                 'FULL'
- 130 -
Return value
Execution example 1
Creating masking policy p1 that does not contain a masking target
Execution example 2
Creating masking policy p1 that contains masking target c1 of which the masking type is full masking
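Following the format above, such a call might look like this (table, policy, and column names are illustrative):

postgres=# select pgx_create_confidential_policy(
             table_name := 't1', policy_name := 'p1', expression := '1=1',
             column_name := 'c1', function_type := 'FULL');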
Execution example 3
Creating masking policy p1 that contains masking target c2 of which the masking type is partial masking
Execution example 4
Creating masking policy p1 that contains masking target c3 of which the masking type is regular expression masking
Description
- The arguments for the pgx_create_confidential_policy system management function can be specified in any order.
- If column_name is omitted, only a masking policy that does not contain a masking target will be created.
- One masking policy can be created for each table. Use the pgx_alter_confidential_policy system management function to add a masking
target to a masking policy.
- 131 -
- The function_parameters argument is enabled when the function_type is PARTIAL. If the function_type is other than PARTIAL, it will
be ignored.
- The arguments below are enabled when the function_type is REGEXP. If the function_type is other than REGEXP, these arguments
will be ignored.
- regexp_pattern
- regexp_replacement
- regexp_flags
Note
If a table for which a masking policy is to be applied is deleted, delete the masking policy as well.
See
- Refer to "String Constants" in the PostgreSQL Documentation for information on the strings to specify for arguments.
- Refer to "POSIX Regular Expressions" in the PostgreSQL Documentation and check pattern, replacement, and flags for information
on the values that can be specified for regexp_pattern, regexp_replacement, and regexp_flags.
B.3.3 pgx_drop_confidential_policy
Description
Deletes masking policies
Format
pgx_drop_confidential_policy(
[schema_name := 'schemaName', ]
table_name := 'tableName',
policy_name := 'policyName'
)
Argument
Details are as follows.
- 132 -
Return value
Execution example
Deleting masking policy p1
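Following the format above, the call might be:

postgres=# select pgx_drop_confidential_policy(table_name := 't1', policy_name := 'p1');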
Description
The arguments for the pgx_drop_confidential_policy system management function can be specified in any order.
Note
If a table for which a masking policy is to be applied is deleted, delete the masking policy as well.
See
Refer to "String Constants" in the PostgreSQL Documentation for information on the strings to specify for arguments.
B.3.4 pgx_enable_confidential_policy
Description
Enables or disables masking policies
Format
pgx_enable_confidential_policy(
[schema_name := 'schemaName', ]
table_name := 'tableName',
policy_name := 'policyName',
enable := 'policyStatus'
)
Argument
Details are as follows.
- 't': Enabled
- 133 -
- 'f': Disabled
Return value
Execution example
Enabling masking policy p1
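With the format above, enabling the policy might look like this:

postgres=# select pgx_enable_confidential_policy(table_name := 't1', policy_name := 'p1', enable := 't');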
Description
The arguments for the pgx_enable_confidential_policy system management function can be specified in any order.
See
Refer to "String Constants" in the PostgreSQL Documentation for information on the strings to specify for arguments.
B.3.5 pgx_update_confidential_values
Description
Changes replacement characters when full masking is specified for masking type
Format
pgx_update_confidential_values(
[number_value := 'numberValue']
[, char_value := 'charValue']
[, varchar_value := 'varcharValue']
[, date_value := 'dateValue']
[, ts_value := 'tsValue']
)
- 134 -
Argument
Details are as follows.
Return value
Execution example
Using '*' as a replacement character in char type and varchar type
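With the format above, this change might be made as follows:

postgres=# select pgx_update_confidential_values(char_value := '*', varchar_value := '*');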
Description
- The arguments for the pgx_update_confidential_values system management function can be specified in any order.
- Specify one or more arguments for the pgx_update_confidential_values system management function. A replacement character is not
changed for an omitted argument.
See
Refer to "String Constants" in the PostgreSQL Documentation for information on the strings to specify for arguments.
pgx_prewarm_vci loads the specified VCI data to buffer cache and returns the number of blocks of the loaded VCI data.
The aggregation process using VCI may take time immediately after an instance is started, because the VCI data has not been loaded to
buffer cache. Therefore, the first aggregation process can be sped up by executing pgx_prewarm_vci after an instance is started.
The amount of memory required for preloading is the number of blocks returned by pgx_prewarm_vci multiplied by the size of one block.
This function can only be executed if the user has reference privilege to the VCI index and execution privilege to the pg_prewarm function.
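For illustration, assuming a VCI index named idx_vci_sales (the index name and the exact signature should be checked against the Reference):

SELECT pgx_prewarm_vci('idx_vci_sales');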
- 135 -
B.5 High-Speed Data Load Control Functions
The table below lists the functions that can be used for high-speed data load.
- 136 -
Appendix C System Views
This appendix describes how to use the system views in FUJITSU Enterprise Postgres.
See
Refer to "System Views" under "Internals" in the PostgreSQL Documentation for information on other system views.
C.1 pgx_tablespaces
The pgx_tablespaces catalog provides information related to the encryption of tablespaces.
C.2 pgx_stat_lwlock
The pgx_stat_lwlock view displays statistics related to lightweight locks, with each type of content displayed on a separate line.
C.3 pgx_stat_latch
The pgx_stat_latch view displays statistics related to latches, with each type of wait information within FUJITSU Enterprise Postgres
displayed on a separate line.
C.4 pgx_stat_walwriter
The pgx_stat_walwriter view displays statistics related to WAL writing, in a single line.
- 137 -
Table C.3 pgx_stat_walwriter view
Column             Type                       Description
dirty_writes       bigint                     Number of times old WAL buffers were written to the disk because
                                              the WAL buffer was full when WAL records were added
writes             bigint                     Number of WAL writes
write_blocks       bigint                     Number of WAL write blocks
total_write_time   double precision           Number of milliseconds spent on WAL writing
stats_reset        timestamp with time zone   Last time at which this statistic was reset
C.5 pgx_stat_sql
The pgx_stat_sql view displays statistics related to SQL statement executions, with each type of SQL statement displayed on a separate line.
C.6 pgx_stat_gmc
The pgx_stat_gmc view provides information about the GMC areas.
- 138 -
Table C.5 pgx_stat_gmc view
Column        Type                       Description
searches      bigint                     Number of times the cache table is searched
hits          bigint                     Number of times the cache table is hit
size          bigint                     The current amount of memory (bytes) used in the GMC area
stats_reset   timestamp with time zone   Last time these statistics were reset
- 139 -
Appendix D Tables Used by Data Masking
This appendix explains tables used by the data masking feature.
Note
These tables are updated by the data masking control function, so do not use SQL statements to directly update these tables.
D.1 pgx_confidential_columns
This table provides information on masking targets for which masking policies are set.
Execution example
postgres=# select * from pgx_confidential_columns;
 schema_name | table_name | policy_name | column_name | function_type |          function_parameters           | regexp_pattern | regexp_replacement | regexp_flags | column_description
-------------+------------+-------------+-------------+---------------+----------------------------------------+----------------+--------------------+--------------+--------------------
 public      | t1         | p1          | c1          | FULL          |                                        |                |                    |              |
 public      | t1         | p1          | c2          | PARTIAL       | VVVFVVVVFVVVV, VVV-VVVV-VVVV, *, 4, 11 |                |                    |              |
(2 rows)
D.2 pgx_confidential_policies
This table provides information on masking policies.
- 140 -
Column               Type            Description
table_name           varchar(63)     Name of table for which a masking policy is applied
policy_name          varchar(63)     Masking policy name
expression           varchar(1024)   Masking condition
enable               boolean         Masking policy status
                                     - 't': Enabled
                                     - 'f': Disabled
policy_description   varchar(1024)   Masking policy description
Execution example
postgres=# select * from pgx_confidential_policies;
schema_name | table_name | policy_name | expression | enable | policy_description
-------------+------------+-------------+------------+--------+--------------------
public | t1 | p1 | 1=1 | t |
(1 row)
D.3 pgx_confidential_values
This table provides information on replacement characters when full masking is specified for masking type.
Execution example
postgres=# select * from pgx_confidential_values;
number_value | char_value | varchar_value | date_value | ts_value
--------------+------------+---------------+------------+---------------------
0 | | | 1970-01-01 | 1970-01-01 00:00:00
(1 row)
- 141 -
Appendix E Tables Used by High-Speed Data Load
This appendix describes the tables used by high-speed data load.
E.1 pgx_loader_state
The pgx_loader_state table provides information about transactions prepared by high-speed data load.
Note
The pgx_loader_state table and pgx_loader_state_id_seq sequence are updated by high-speed data load. Do not update these database
objects directly using SQL.
- 142 -
Appendix F Starting and Stopping the Web Server Feature
of WebAdmin
To use WebAdmin for creating and managing a FUJITSU Enterprise Postgres instance on a server where FUJITSU Enterprise Postgres is
installed, you must first start the Web server feature of WebAdmin.
This appendix describes how to start and stop the Web server feature of WebAdmin.
Note that "<x>" in paths indicates the product version.
See
Refer to "Installing WebAdmin in a Multiserver Configuration" in the Installation and Setup Guide for Server for information on
multiserver installation.
1. Change to superuser
Acquire superuser privileges on the system.
Example
$ su -
Password:******
# cd /opt/fsepv<x>webadmin/sbin
# ./WebAdminStart
1. Change to superuser
Acquire superuser privileges on the system.
Example
$ su -
Password:******
- 143 -
Example
If WebAdmin is installed in /opt/fsepv<x>webadmin:
# cd /opt/fsepv<x>webadmin/sbin
# ./WebAdminStop
- 144 -
Appendix G WebAdmin Wallet
This appendix describes how to use the Wallet feature of WebAdmin.
When a remote instance or a standby instance is created, it is necessary to provide user name and password for authentication with the remote
machine or the database instance.
The Wallet feature in WebAdmin is a convenient way to create and store these credentials.
Once created, these credentials can be repeatedly used in one or more instances.
Note
It is not mandatory to create a credential in the Wallet. It is possible to create a remote instance or a standby instance without creating any
credential in the Wallet.
If no credential is created beforehand, a user name and password can be entered in the instance creation page. When creating a "Remote"
instance, if operating system credentials are entered without using a credential stored in the Wallet, WebAdmin automatically creates a
credential with the given user name and password, and stores it in the user’s wallet for future use.
Enter the following items. Credential name, User name and Password should not contain hazardous characters. Refer to “Appendix H
WebAdmin Disallow User Inputs Containing Hazardous Characters”.
- Maximum of 16 characters
- The first character must be an ASCII alphabetic character
- The other characters must be ASCII alphanumeric characters
- 145 -
- [User name]: The operating system user name or database instance user name that will be used later
- [Password]: Password for the user
- [Confirm password]: Reenter the password.
3. Click to store the credential.
When "Cred1" is selected in [Operating system credential], the user name and password are automatically populated from the credential.
- 146 -
Appendix H WebAdmin Disallow User Inputs Containing
Hazardous Characters
WebAdmin considers the following as hazardous characters, which are not allowed in user inputs.
| (pipe sign)
& (ampersand sign)
; (semicolon sign)
$ (dollar sign)
% (percent sign)
@ (at sign)
' (single apostrophe)
" (quotation mark)
\' (backslash-escaped apostrophe)
\" (backslash-escaped quotation mark)
<> (triangular parenthesis)
() (parenthesis)
+ (plus sign)
CR (Carriage return, ASCII 0x0d)
LF (Line feed, ASCII 0x0a)
, (comma sign)
\ (backslash)
- 147 -
Appendix I Collecting Failure Investigation Data
If the cause of an error that occurs while building the environment or during operations is unclear, data must be collected for initial
investigation.
This appendix describes how to collect data for initial investigation.
Use the pgx_fjqssinf command to collect data for initial investigation.
See
Refer to the Reference for information on the pgx_fjqssinf command.
- 148 -
Appendix J Operation of Transparent data Encryption in
File-based Keystores
This appendix explains the operation of transparent data encryption using file-based keystores.
1. In the keystore_location parameter of postgresql.conf, specify the directory to store the keystore.
Specify a different location for each database cluster.
keystore_location = '/key/store/location'
- Using WebAdmin
Refer to "2.1.1 Using WebAdmin", and restart the instance.
- Specify the -w option. This means that the command returns after waiting for the instance to start. If the -w option is not
specified, it may not be possible to determine if the starting of the instance completed successfully or if it failed.
Example
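A start command with the -w option might look like this (the data storage destination path is an assumption):

$ pg_ctl start -w -D /database/inst1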
2. Execute an SQL function, such as the one below, to set the master encryption key. This must be performed by the database
superuser.
SELECT pgx_set_master_key('passphrase');
The value "passphrase" is the passphrase that will be used to open the keystore. The master encryption key is protected by this
passphrase, so avoid specifying a short simple string that is easy to guess.
Refer to "B.2 Transparent Data Encryption Control Functions" for information on the pgx_set_master_key function.
Note
Note that if you forget the passphrase, you will not be able to access the encrypted data. There is no method to retrieve a forgotten passphrase
and decrypt data. Do not, under any circumstances, forget the passphrase.
The pgx_set_master_key function creates a file with the name keystore.ks in the keystore storage destination. It also creates a master
encryption key from random bit strings, encrypts it with the specified passphrase, and stores it in keystore.ks. At this point, the keystore is
open.
- 149 -
You need to open the keystore each time you start the instance. To open the keystore, the database superuser must execute the following
SQL function.
SELECT pgx_open_keystore('passphrase');
The value "passphrase" is the passphrase specified during creation of the keystore.
Refer to "B.2 Transparent Data Encryption Control Functions" for information on the pgx_open_keystore function.
Note that, in the following cases, the passphrase must be entered when starting the instance, because the encrypted WAL must be decrypted
for recovery. In this case, the above-mentioned pgx_open_keystore function cannot be executed.
Point
When using an automatically opening keystore, you do not need to enter the passphrase and you can automatically open the keystore when
the database server starts. Refer to "J.5.3 Enabling Automatic Opening of the Keystore" for details.
- 150 -
To change the keystore passphrase, execute the following SQL function as a superuser.
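Using the pgx_set_keystore_passphrase function described in "B.2 Transparent Data Encryption Control Functions", the call takes the current passphrase followed by the new one:

SELECT pgx_set_keystore_passphrase('oldPassphrase', 'newPassphrase');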
After changing the passphrase, you must immediately back up the keystore.
Refer to "B.2 Transparent Data Encryption Control Functions" for information on the pgx_set_keystore_passphrase function.
See
Refer to "pgx_keystore" in the Reference for information on pgx_keystore command.
When automatic opening is enabled, an automatically opening keystore is created in the same directory as the original keystore. The file
name of the automatically opening keystore is keystore.aks. The file keystore.aks is an obfuscated copy of the decrypted content of the
keystore.ks file. As long as this file exists, there is no need to enter the passphrase to open the keystore when starting the instance.
Do not delete the original keystore file, keystore.ks. It is required for changing the master encryption key and the passphrase. When you
change the master encryption key and the passphrase, keystore.aks is recreated from the original keystore file, keystore.ks.
Protect keystore.ks, keystore.aks, and the directory that stores the keystore so that only the user who starts the instance can access them.
The SQL functions and commands that create these files configure their permissions so that only the user who starts the instance can
access them. Accordingly, manually configure the same permission mode if the files are restored.
Example
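A sketch of enabling automatic opening with the pgx_keystore command; the option name shown is an assumption, so check "pgx_keystore" in the Reference for the exact syntax.

> pgx_keystore --enable-auto-open /key/store/location/keystore.ks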
An automatically opening keystore will only open on the computer where it was created.
To disable automatic opening of the keystore, delete keystore.aks.
Note
- To use WebAdmin for recovery, you must enable automatic opening of the keystore.
- Refer to "5.8 Backing Up and Restoring/Recovering the Database" after enabling or reconfiguring encryption to back up the database.
- Specify a different directory from those below as the keystore storage destination:
- Data storage destination
- Tablespace storage destination
- Transaction log storage destination
- Backup data storage destination
- 151 -
J.5.4 Backing Up and Recovering the Keystore
Back up the keystore at the following times in case it is corrupted or lost. Note that you must store the database and the keystore on separate
data storage media. Storing both on the same data storage medium risks the danger of the encrypted data being deciphered if the medium
is stolen. A passphrase is not required to open an automatically opening keystore, so store this type of keystore in a safe location.
Point
Do not overwrite an old keystore when backing up a keystore. This is because during database recovery, you must restore the keystore to
its state at the time of database backup. When the backup data of the database is no longer required, delete the corresponding keystore.
Example
- Specify the data storage destination in the -D option. If the -D option is omitted, the value of the PGDATA environment variable
is used by default.
- Change the master encryption key, and back up the keystore on May 5, 2017.
> psql -c "SELECT pgx_set_master_key('passphrase')" postgres
> cp -p /key/store/location/keystore.ks /keybackup/keystore_20170505.ks
- Specify the SQL function that sets the master encryption key in the -c option.
- Specify the name of the database to be connected to as the argument.
If the keystore is corrupted or lost, restore the keystore containing the latest master encryption key. If there is no keystore containing the
latest master encryption key, restore the keystore to its state at the time of database backup, and recover the database from the database
backup. This action recovers the keystore to its latest state.
Example
- Restore the keystore containing the latest master encryption key as of May 5, 2017.
> cp -p /keybackup/keystore_20170505.ks /key/store/location/keystore.ks
- If there is no backup of the keystore containing the latest master encryption key, recover the keystore by restoring the keystore that was
backed up along with the database on 1 May 2017.
- Specify the data storage directory in the -D option. If the -D option is omitted, the value of the PGDATA environment variable is
used by default.
- 152 -
- Specify the backup data storage directory in the -B option.
- The --keystore-passphrase option prompts you to enter the passphrase to open the keystore.
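Putting these options together, the recovery command might look like the following sketch (directory paths are examples):

> pgx_rcvall -D /database/inst1 -B /backup/inst1 --keystore-passphrase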
If you have restored the keystore, repeat the process of enabling automatic opening of the keystore. This ensures that the contents of the
automatically opening keystore (keystore.aks) are identical to the contents of the restored keystore.
It is recommended that you do not back up the automatically opening keystore file, keystore.aks. If the database backup medium and the
backup medium storing the automatically opening keystore are both stolen, the attacker will be able to read the data even without knowing
the passphrase.
If the automatically opening keystore is corrupted or lost, you must again enable automatic opening. The keystore.aks file will be recreated
from keystore.ks at this time.
See
Refer to "pgx_rcvall" and "pgx_dmpall" in the Reference for information on the pgx_rcvall and pgx_dmpall commands.
Refer to "psql" under "Reference" in the PostgreSQL Documentation for information on the psql command.
Refer to "B.2 Transparent Data Encryption Control Functions" for information on the pgx_set_master_key function.
Refer to "J.5.3 Enabling Automatic Opening of the Keystore" for information on how to enable automatic opening of the keystore.
- 153 -
As the standby server is not active while the primary server is running, this file would not be accessed simultaneously, and therefore,
it can be shared.
To manage the keystore file in a more secure manner, place it on the key management server or the key management storage isolated
in a secure location.
Enable the automatic opening of the keystore on both the primary and standby servers.
Placing a copy of the keystore file
This involves placing a copy of the primary server keystore file on the standby server.
You can do this if you cannot prepare a shared server or disk device that can be accessed from both the primary and standby servers.
However, if you change the master encryption key and the passphrase on the primary server, you must copy the keystore file to the
standby server again.
To manage the keystore file in a more secure manner, prepare the key management server or the key management storage isolated in
a secure location for both the primary and standby servers, and place the keystore files there.
Enable the automatic opening of the keystore on both the primary and standby servers. Note that copying the automatically opening
keystore file (keystore.aks) to the standby server does not enable the automatic opening of the keystore.
Point
To manage the keystore file in a more secure manner, place it on the key management server or the key management storage isolated in a
secure location. A keystore used by both the primary and standby servers can be managed on the same key management server or key
management storage.
However, create different directories for the keystores to be used by the primary server and the standby server. Then copy the keystore for
the primary server to the directory used on the standby server.
- 154 -
Changing the master encryption key and the passphrase
Change the master encryption key and the passphrase on the primary server. You need not copy the keystore from the primary server to the
standby server. You need not even restart the standby server or reopen the keystore. Changes to the master encryption key and the passphrase
are reflected in the keystore on the standby server.
See
Refer to "pgx_rcvall " in the Reference for information on pgx_rcvall command.
Refer to "pg_ctl" under "Reference" in the PostgreSQL Documentation for information on pg_ctl command.
Refer to "pg_basebackup" under "Reference" in the PostgreSQL Documentation for information on pg_basebackup command.
Refer to "High Availability, Load Balancing, and Replication" under "Server Administration" in the PostgreSQL Documentation for
information on how to set up streaming replication.
- 155 -
Index
[A]
Actions in Response to Instance Startup Failure ... 111
All user data within the specified tablespace ... 22
Approximate backup time ... 14
Approximate recovery time ... 84
Automatically opening the keystore ... 32, 154

[B]
Backing Up and Recovering the Keystore ... 27, 152
Backing Up and Restoring/Recovering the Database ... 28, 153
Backup/Recovery Using the Copy Command ... 75
Backup and recovery using the pgx_dmpall and pgx_rcvall commands ... 29
backup cycle ... 15
Backup data ... 23
Backup operation ... 16
Backup operation (file backup) ... 17
Backup status ... 16, 17
Backup using the backup information file ... 76
Backup Using the Copy Command ... 78
backup_destination (string) ... 119
Building and starting a standby server ... 32, 154

[C]
Changing a Masking Policy ... 41
Changing the HSM master key ... 27
Changing the Keystore Passphrase ... 150
Changing the Master Encryption Key ... 26, 150
Changing the master encryption key ... 33
Changing the master encryption key and the passphrase ... 155
Changing User Pins ... 27
Checking an Encrypted Tablespace ... 26, 150
Checking backup status ... 79
Checking the operating status of an instance ... 10, 12
Collecting Failure Investigation Data ... 148
Configuration of the Copy Command ... 75
Configuration of the copy command for backup ... 77
Configuration of the copy command for recovery ... 77
Confirming a Masking Policy ... 41
Continuous archiving and point-in-time recovery ... 30
Copy Command for Backup ... 80
Copy Command for Recovery ... 82
Copy Command Interface ... 80
copy procedure for the opencryptoki token directory ... 32
core_contents (string) ... 118
core_directory (string) ... 118
Creating a Masking Policy ... 40
Cyclic usage of the backup area ... 75

[D]
Data Masking ... 35
Data Types for Masking ... 43
Deleting a Masking Policy ... 43
Determining the backup area of the latest backup ... 79

[E]
Enabling and Disabling a Masking Policy ... 42
Enabling Automatic Opening of the Keystore ... 27, 151
Encrypting a Tablespace ... 25, 150
Encrypting Existing Data ... 31, 153
Encryption mechanisms ... 22
Errors in More Than One Storage Disk ... 111

[F]
Faster hardware-based encryption/decryption ... 22
File system level backup and restore ... 30

[H]
High-Speed Data Load ... 67
HSM Master Key Configuration ... 32

[I]
If failure occurred in the data storage disk or the transaction log storage disk ... 85
If failure occurred on the backup data storage disk ... 86, 88
If failure occurred on the data storage disk or the transaction log storage directory ... 86
Importing and Exporting the Database ... 31, 153
Installing and Operating the In-memory Feature ... 58

[K]
Keystore management ... 22
keystore_location (string) ... 118

[L]
Logging in to WebAdmin ... 2
log in ... 3

[M]
Managing the Keystore ... 26, 150
Masking Condition ... 36
Masking Format ... 37
Masking Policy ... 35
Masking Target ... 36
Masking Type ... 36
Monitoring Database Activity ... 49

[O]
Opening the Keystore ... 24, 149
Operating FUJITSU Enterprise Postgres ... 1
Operation of Transparent data Encryption in File-based Keystores ... 149

[P]
Parallel Query ... 66
Performing backup ... 78
Perform recovery ... 79
Periodic Backup ... 15
pgx_global_metacache (numerical value) ... 121
pgx_stat_gmc view ... 139
pgx_stat_latch view ... 137
pgx_stat_lwlock view ... 137
pgx_stat_sql view ... 138
pgx_stat_walwriter view ... 138
pgx_tablespaces ... 137
Placement and automatic opening of the keystore file ... 153
Placing the keystore file ... 32, 154
Preparing for backup ... 78
Preparing for HSM Collaboration ... 23

[R]
Recovery Using the Copy Command ... 79
reserve_buffer_ratio (numerical value) ... 119

[S]
Scope of encryption ... 22
search_path (string) ... 119
Security-Related Notes ... 33, 155
Security Notes ... 44
Setting a restore point ... 18
Setting the Master Encryption Key ... 23, 149
Starting and Stopping the Web Server Feature of WebAdmin ... 143
Starting an instance ... 9, 11
Startup URL for WebAdmin ... 2
Stopping an instance ... 9, 12
Streaming replication support ... 23
Streaming Replication Using WebAdmin ... 53
Strong encryption algorithms ... 22
System Administration Functions ... 122
System Views ... 137

[T]
tablespace_encryption_algorithm (string) ... 118
Tables Used by Data Masking ... 140
tde_z.IBM_CCA_CSU_DEFAULT_ADAPTER (string) ... 118
tde_z.IBM_CCA_CSU_DEFAULT_DOMAIN (string) ... 118
tde_z.SLOT_ID (string) ... 118
tde_z.USER_PIN (string) ... 118
Tips for Installing Built Applications ... 33, 155
track_gmc (string) ... 121
track_sql (string) ... 119
track_waits (string) ... 119
Transparent Data Encryption Control Functions ... 122
Two-layer encryption key and the keystore ... 22

[U]
User environment ... 2
Using Server Commands ... 11

[V]
vci.control_max_workers (numerical value) ... 120
vci.cost_threshold (numeric) ... 119
vci.enable (string) ... 120
vci.log_query (string) ... 120
vci.maintenance_work_mem (numerical value) ... 120
vci.max_local_ros (numerical value) ... 120
vci.max_parallel_degree (numerical value) ... 120
vci.shared_work_mem (numerical value) ... 121

[W]
WAL and temporary files ... 23
WAL Mirroring Control Functions ... 122
WebAdmin Wallet ... 145