Patroni ETCD
HA PostgreSQL
made easy
PostgresConf US
Alexander Kukushkin
Oleksii Kliukin
Zalando SE
16-04-2018
Agenda
Architecture overview
Client connections
Advanced features
Custom extensions
Troubleshooting
2
PostgreSQL High Availability
● Shared storage solutions
○ DRBD + LVM
● Multi-master replication
○ BDR, Bucardo
3
Physical single-master replication
(diagram: a primary streaming WAL to its standbys)
Cons
● No partial replication
● Major versions must match
● Missing automatic failover
4
Automatic failover done wrong:
Running just two nodes
Run the health check from the standby and promote that
standby when the health check indicates the primary failure
(diagram: primary streams WAL to the standby; the standby health-checks the primary)
5
Automatic failover done wrong:
Running just two nodes
Split-brain!
(diagram: both nodes now run as primary)
6
Automatic failover done wrong:
Single witness node
(diagram: a witness node health-checks both the primary and the standby; the primary streams WAL to the standby)
7
Automatic failover done wrong:
Single witness node
(diagram: the same setup with a single witness node health-checking primary and standby)
8
Automatic failover done wrong:
Single witness node
Or gets partitioned
(diagram: the witness is partitioned from the primary and the standby)
9
Automatic failover done wrong:
Single witness node
(diagram: split-brain again, both nodes run as primary)
10
Automatic failover done right
(diagram: primary and standby consult a quorum: "Am I the leader?" / "Leader changed?")
11
Automatic failover: the right way
12
Bot pattern
13
Bot pattern: master acknowledges its
presence
NODE A (Primary): UPDATE("/leader", "A", ttl=30, prevValue="A") → Success
/leader: "A", ttl: 30
NODE B (Standby): WATCH(/leader)
NODE C (Standby): WATCH(/leader)
14
Bot pattern: master dies, leader key holds
NODE A (Primary): dead, no longer updates the key
/leader: "A", ttl: 17
NODE B (Standby): WATCH(/leader)
NODE C (Standby): WATCH(/leader)
15
Bot pattern: leader key expires
/leader: "A", ttl: 0
NODE B (Standby): Notify(/leader, expired=true)
NODE C (Standby): Notify(/leader, expired=true)
16
Bot pattern: who will be the next master?
Node B (Standby):
GET A:8008/patroni -> timeout
GET C:8008/patroni -> wal_position: 100
Node C (Standby):
GET A:8008/patroni -> timeout
GET B:8008/patroni -> wal_position: 100
17
Bot pattern: leader race among equals
NODE B (Standby): CREATE("/leader", "B", ttl=30, prevExists=False) → FAIL
NODE C (Standby): CREATE("/leader", "C", ttl=30, prevExists=False) → SUCCESS
/leader: "C", ttl: 30
18
Bot pattern: promote and continue
replication
/leader: "C", ttl: 30
NODE C: promote → new Primary
NODE B (Standby): WATCH(/leader), continues replication from the new primary
19
Etcd consistency store
● Distributed key-value store
● Implements RAFT
● Needs more than 2 nodes (optimal: odd number)
http://thesecretlivesofdata.com/raft/
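The bot-pattern operations on the previous slides map directly onto etcd's v2 API. A minimal sketch with etcdctl, assuming etcd's v2 API (as Patroni used at the time) and the /service/<cluster>/leader key layout; Patroni performs these calls itself, this is only to illustrate the semantics:
$ etcdctl mk /service/batman/leader "postgresql0" --ttl 30       # CREATE: only if the key does not exist
$ etcdctl set /service/batman/leader "postgresql0" --ttl 30 \
    --swap-with-value "postgresql0"                              # UPDATE: only while we still own the key
$ etcdctl watch /service/batman/leader                           # standbys WATCH the key for changes/expiry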
20
Patroni
● Patroni implements the bot pattern in Python
● Official successor of Compose Governor
● Developed in the open by Zalando and volunteers all over the world
https://github.com/zalando/patroni
21
Your First
Patroni cluster
22
Using docker
● install docker
● docker pull kliukin/patroni-training
● docker run -d --name patroni-training kliukin/patroni-training
● docker exec -ti patroni-training bash
postgres@f40a9391f810:~$ ls *.yml
postgres0.yml postgres1.yml postgres2.yml
23
(Optional) using vagrant
● install vagrant
● get the vagrantfile from
https://github.com/alexeyklyukin/patroni-training
● vagrant up
● vagrant ssh
24
Hands on: creating your first cluster with
Patroni
$ patroni postgres0.yml
2018-01-18 13:29:06,714 INFO: Selected new etcd server http://127.0.0.1:2379
2018-01-18 13:29:06,731 INFO: Lock owner: None; I am postgresql0
2018-01-18 13:29:06,796 INFO: trying to bootstrap a new cluster
…
Success. You can now start the database server using:
    /usr/local/pgsql/bin/pg_ctl -D data/postgresql0 -l logfile start
2018-01-18 13:29:13,115 INFO: initialized a new cluster
2018-01-18 13:29:23,088 INFO: Lock owner: postgresql0; I am postgresql0
2018-01-18 13:29:23,143 INFO: no action. i am the leader with the lock

$ patroni postgres1.yml
2018-01-18 13:45:02,479 INFO: Selected new etcd server http://127.0.0.1:2379
2018-01-18 13:45:02,488 INFO: Lock owner: postgresql0; I am postgresql1
2018-01-18 13:45:02,499 INFO: trying to bootstrap from leader 'postgresql0'
2018-01-18 13:45:04,470 INFO: replica has been created using basebackup
2018-01-18 13:45:04,474 INFO: bootstrapped from leader 'postgresql0'
2018-01-18 13:45:07,211 INFO: Lock owner: postgresql0; I am postgresql1
2018-01-18 13:45:07,212 INFO: does not have lock
2018-01-18 13:45:07,440 INFO: no action. i am a secondary and i am following a leader
25
Patronictl output on success
26
Automatic failover
Failover happens when the primary dies abruptly
We will simulate it by stopping Patroni
$ kill -9 %1
[1]+ Killed: 9 patroni postgres0.yml
27
Replica promotion
2018-01-18 16:04:39,019 INFO: Lock owner: postgresql0; I am postgresql1
2018-01-18 16:04:39,019 INFO: does not have lock
2018-01-18 16:04:39,021 INFO: no action. i am a secondary and i am following a
leader
2018-01-18 16:04:46,358 WARNING: request failed: GET
http://127.0.0.1:8008/patroni (HTTPConnectionPool(host='127.0.0.1', port=8008):
Max retries exceeded with url: /patroni (Caused by
NewConnectionError('<urllib3.connection.HTTPConnection object at 0x109c92898>:
Failed to establish a new connection: [Errno 61] Connection refused',)))
2018-01-18 16:04:46,474 INFO: promoted self to leader by acquiring session lock
server promoting
2018-01-18 16:04:46.506 CET [36202] LOG: received promote request
2018-01-18 16:04:46.506 CET [36209] FATAL: terminating walreceiver process due
to administrator command
2018-01-18 16:04:46.508 CET [36202] LOG: redo done at 0/3000028
2018-01-18 16:04:46.512 CET [36202] LOG: selected new timeline ID: 2
2018-01-18 16:04:46.562 CET [36202] LOG: archive recovery complete
2018-01-18 16:04:46.566 CET [36200] LOG: database system is ready to accept
connections
2018-01-18 16:04:47,537 INFO: Lock owner: postgresql1; I am postgresql1
28
How does Patroni cope with split-brain?
29
Resume patroni and rejoin the former master
$ patroni postgres0.yml
2018-01-18 16:04:57,214 INFO: Selected new etcd server http://127.0.0.1:2379
2018-01-18 16:04:57,221 INFO: establishing a new patroni connection to the
postgres cluster
2018-01-18 16:04:57,344 INFO: Lock owner: postgresql1; I am postgresql0
2018-01-18 16:04:57,344 INFO: does not have lock
2018-01-18 16:04:57.370 CET [36179] LOG: received immediate shutdown request
2018-01-18 16:04:57,384 INFO: demoting self because i do not have the lock and i
was a leader
2018-01-18 16:04:57.666 CET [36339] LOG: entering standby mode
2018-01-18 16:04:57.669 CET [36339] LOG: database system was not properly shut
down; automatic recovery in progress
2018-01-18 16:04:57,777 INFO: Lock owner: postgresql1; I am postgresql0
2018-01-18 16:04:57,777 INFO: does not have lock
2018-01-18 16:04:58,004 INFO: Local timeline=1 lsn=0/30175C0
2018-01-18 16:04:58,014 INFO: master_timeline=2
2018-01-18 16:04:58,014 INFO: master: history=1 0/3000060 no recovery target
specified
2018-01-18 16:04:58,155 INFO: running pg_rewind from user=postgres host=127.0.0.1
port=5433 dbname=postgres sslmode=prefer sslcompression=1
servers diverged at WAL location 0/3000060 on timeline 1
rewinding from last common checkpoint at 0/2000060 on timeline 1
Done!
2018-01-18 16:04:59,490 INFO: starting as a secondary
30
Patronictl output
31
Peek into etcd
$ etcdctl ls --recursive --sort -p /service/batman
/service/batman/config
/service/batman/history
/service/batman/initialize
/service/batman/leader
/service/batman/members/
/service/batman/members/postgresql0
/service/batman/members/postgresql1
/service/batman/optime/
/service/batman/optime/leader
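The leader key from the bot pattern can be read directly; as a sketch (the value is whichever member currently holds the lock):
$ etcdctl get /service/batman/leader
postgresql1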
32
Let’s edit some
configuration
33
Editing configuration with patronictl
$ patronictl -c postgres0.yml edit-config batman
34
Editing configuration with patronictl
2018-01-18 14:19:06,352 INFO: Lock owner: postgresql1; I am postgresql0
2018-01-18 14:19:06,352 INFO: does not have lock
2018-01-18 14:19:06,360 INFO: no action. i am a secondary and i am
following a leader
2018-01-18 14:19:16,355 INFO: Lock owner: postgresql1; I am postgresql0
2018-01-18 14:19:16,355 INFO: does not have lock
2018-01-18 14:19:16,368 INFO: no action. i am a secondary and i am
following a leader
server signaled
2018-01-18 14:19:16.451 CET [28996] LOG: received SIGHUP, reloading
configuration files
2018-01-18 14:19:16.461 CET [28996] LOG: parameter "work_mem" changed to
"8MB"
2018-01-18 14:19:26,357 INFO: Lock owner: postgresql1; I am postgresql0
2018-01-18 14:19:26,357 INFO: does not have lock
2018-01-18 14:19:26,365 INFO: no action. i am a secondary and i am
following a leader
35
Editing configuration with patronictl
$ patronictl edit-config batman
37
Editing configuration with patronictl
$ http http://127.0.0.1:8008
HTTP/1.0 503 Service Unavailable
...
{
"database_system_identifier": "6512366775019348050",
"patroni": {"scope": "batman", "version": "1.4"},
"pending_restart": true,
"postmaster_start_time": "2018-01-18 13:45:04.702 CET",
"role": "replica",
"server_version": 100000,
"state": "running",
"timeline": 2,
"xlog": {
"paused": false,
"received_location": 50331968,
"replayed_location": 50331968,
"replayed_timestamp": null
}
}
38
Editing configuration with patronictl
$ http http://127.0.0.1:8009
HTTP/1.0 200 OK
...
{
"database_system_identifier": "6512366775019348050",
"patroni": {"scope": "batman", "version": "1.4"},
"pending_restart": true,
"postmaster_start_time": "2018-01-18 13:44:44.764 CET",
...
"role": "master",
"server_version": 100000,
"state": "running",
"timeline": 2,
"xlog": {
"location": 50331968
}
}
39
Editing configuration with patronictl
$ patronictl restart batman postgresql0
+---------+-------------+-----------+--------+---------+-----------+-----------------+
| Cluster | Member | Host | Role | State | Lag in MB | Pending restart |
+---------+-------------+-----------+--------+---------+-----------+-----------------+
| batman | postgresql0 | 127.0.0.1 | | running | 0 | * |
| batman | postgresql1 | 127.0.0.1 | Leader | running | 0 | * |
+---------+-------------+-----------+--------+---------+-----------+-----------------+
40
Editing configuration with patronictl
…
$ psql -h localhost -p 5433 -U postgres -tqA \
-c "SHOW max_connections"
100
41
(diagram: 2× retry_timeout)
42
Changing TTL, loop_wait, retry_timeout
ttl >= loop_wait + retry_timeout * 2
$ patronictl edit-config batman
---
+++
@@ -1,9 +1,9 @@
-loop_wait: 10
+loop_wait: 5
maximum_lag_on_failover: 1048576
postgresql:
parameters:
work_mem: 8MB
max_connections: 101
use_pg_rewind: true
-retry_timeout: 10
+retry_timeout: 27
-ttl: 30
+ttl: 60
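Worked check for the new values: loop_wait + retry_timeout * 2 = 5 + 27 * 2 = 59, which is still <= ttl = 60, so the rule above holds. The next slides show what happens when it is violated.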
43
Changing TTL, loop_wait, retry_timeout
2018-01-18 14:31:06,350 INFO: Lock owner: postgresql1; I am postgresql1
2018-01-18 14:31:06,364 INFO: no action. i am the leader with the lock
2018-01-18 14:31:16,349 INFO: Lock owner: postgresql1; I am postgresql1
2018-01-18 14:31:16,362 INFO: no action. i am the leader with the lock
2018-01-18 14:31:16,376 INFO: Lock owner: postgresql1; I am postgresql1
2018-01-18 14:31:16,392 INFO: no action. i am the leader with the lock
2018-01-18 14:31:21,377 INFO: Lock owner: postgresql1; I am postgresql1
2018-01-18 14:31:21,392 INFO: no action. i am the leader with the lock
2018-01-18 14:31:26,381 INFO: Lock owner: postgresql1; I am postgresql1
2018-01-18 14:31:26,396 INFO: no action. i am the leader with the lock
44
Changing TTL, loop_wait, retry_timeout
---
+++
@@ -1,4 +1,4 @@
-loop_wait: 5
+loop_wait: 10
maximum_lag_on_failover: 1048576
postgresql:
parameters:
@@ -6,4 +6,4 @@
max_connections: 101
use_pg_rewind: true
retry_timeout: 27
-ttl: 60
+ttl: 5
45
Changing TTL, loop_wait, retry_timeout
ttl < loop_wait + retry_timeout * 2
2018-01-18 14:35:46,390 INFO: no action. i am the leader with the
lock
2018-01-18 14:35:46,405 INFO: Lock owner: postgresql1; I am
postgresql1
2018-01-18 14:35:46,408 WARNING: Watchdog not supported because
leader TTL 5 is less than 2x loop_wait 10
2018-01-18 14:35:46,418 INFO: no action. i am the leader with the
lock
2018-01-18 14:35:56,418 WARNING: Watchdog not supported because
leader TTL 5 is less than 2x loop_wait 10
2018-01-18 14:35:56,428 INFO: acquired session lock as a leader
2018-01-18 14:36:06,420 WARNING: Watchdog not supported because
leader TTL 5 is less than 2x loop_wait 10
2018-01-18 14:36:06,430 INFO: acquired session lock as a leader
46
Changing TTL, loop_wait, retry_timeout
ttl < loop_wait + retry_timeout * 2
2018-01-18 14:35:46,426 INFO: Lock owner: postgresql1; I am postgresql0
2018-01-18 14:35:46,426 INFO: does not have lock
2018-01-18 14:35:46,429 INFO: no action. i am a secondary and i am
following a leader
2018-01-18 14:35:51,594 INFO: Got response from postgresql1
http://127.0.0.1:8008/patroni: b'{"state": "running",
"postmaster_start_time": "2018-01-18 13:44:44.764 CET", "role": "master",
"server_version": 100000, "xlog": {"location": 50331968}, "timeline": 2,
"replication": [{"usename": "replicator", "application_name": "postgresql1",
"client_addr": "127.0.0.1", "state": "streaming", "sync_state": "async",
"sync_priority": 0}], "database_system_identifier": "6512366775019348050",
"pending_restart": true, "patroni": {"version": "1.4", "scope": "batman"}}'
2018-01-18 14:35:51,680 WARNING: Master (postgresql1) is still alive
2018-01-18 14:35:51,683 INFO: following a different leader because i am not
the healthiest node
47
Change it back to original values
$ patronictl edit-config batman
---
+++
@@ -11,5 +11,5 @@
work_mem: 8MB
max_connections: 101
use_pg_rewind: true
-retry_timeout: 27
+retry_timeout: 10
-ttl: 5
+ttl: 30
48
Cluster-wide and local configuration
etcd /config  -> {"postgresql":{"parameters":{"work_mem":"16MB"}}}

patroni.yaml ->
  postgresql:
    parameters:
      work_mem: 12MB
49
Cluster-wide and local configuration
1. Patroni takes the contents of the /config key from the DCS.
2. Most parameters can be redefined locally in the postgresql: section of
patroni.yaml. This sets parameters for this specific instance only, which is useful
for configuring Patroni and PostgreSQL correctly on nodes that don't share the
same hardware specification.
3. ALTER SYSTEM SET overrides the values set in the previous two steps. It is not
recommended, since Patroni will not be aware of those changes and, for example,
will not set the pending_restart flag.
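A quick way to see the precedence on the training cluster (a sketch; assumes a patronictl version that has show-config, and the ports used in this tutorial's yaml files):
$ patronictl -c postgres0.yml show-config batman | grep work_mem   # cluster-wide value from /config
$ psql -h localhost -p 5432 -U postgres -tqA -c "SHOW work_mem"    # effective value on this instance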
50
Cluster-wide and local configuration
bootstrap:   # used only once, when the cluster is created
  dcs:       # written to the DCS /config key on successful bootstrap,
             # applied on all nodes
    loop_wait: 5
    postgresql:
      max_connections: 142
51
REST API and monitoring
52
REST API endpoints
53
GET /patroni on the master
$ http http://127.0.0.1:8009/patroni
HTTP/1.0 200 OK
{
"database_system_identifier": "6512366775019348050",
"patroni": { "scope": "batman", "version": "1.4" },
"postmaster_start_time": "2018-01-18 13:44:44.764 CET",
"replication": [{
"application_name": "postgresql0",
"client_addr": "127.0.0.1",
"state": "streaming",
"sync_priority": 0,
"sync_state": "async",
"usename": "replicator"
}],
"role": "master",
"server_version": 100000,
"state": "running",
"timeline": 2,
"xlog": { "location": 50331968 }
}
54
GET /patroni on the replica
$ http http://127.0.0.1:8008/patroni
HTTP/1.0 200 OK
{
"database_system_identifier": "6512366775019348050",
"patroni": { "scope": "batman", "version": "1.4" },
"postmaster_start_time": "2018-01-18 14:47:13.034 CET",
"role": "replica",
"server_version": 100000,
"state": "running",
"timeline": 2,
"xlog": {
"paused": false,
"received_location": 50331648,
"replayed_location": 50331968,
"replayed_timestamp": null
}
}
55
Monitoring PostgreSQL health
● PostgreSQL master is running
○ GET /master should return 200 for one and only one node
● PostgreSQL is running
○ GET /patroni should return state:running for every node in the cluster
The Patroni API does not provide a way to discover all PostgreSQL nodes. This can be
achieved by looking directly into the DCS, or by using features of the cloud provider
(e.g. AWS labels, see
https://github.com/zalando/patroni/blob/master/patroni/scripts/aws.py).
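A minimal health probe along these lines, as a sketch for the two-node training setup (API ports 8008/8009; a real monitoring agent would replace curl):
#!/bin/bash
masters=0
for port in 8008 8009; do
    # exactly one node should answer 200 on /master
    code=$(curl -s -o /dev/null -w '%{http_code}' "http://127.0.0.1:${port}/master")
    [[ $code == 200 ]] && masters=$((masters + 1))
    # every node should report state: running on /patroni
    curl -s "http://127.0.0.1:${port}/patroni" | grep -q '"state": "running"' \
        || echo "ALERT: node on port ${port} does not report state: running"
done
[[ $masters -eq 1 ]] || echo "ALERT: expected exactly one master, found ${masters}"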
56
Routing connections from clients
● Using API HTTP status codes, e.g. as a load-balancer health check (see the sketch below):
○ /master - {200: master, 503: replica}
○ /replica - {503: master, 200: replica}
● Using callbacks:
○ on_start, on_stop, on_reload, on_restart, on_role_change
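For example, a TCP proxy can use the /master status code as its health check so that clients always reach the current primary. A sketch for HAProxy with the two training nodes (member names, PostgreSQL ports 5432/5433 and API ports 8008/8009 as used in this tutorial; adjust for a real deployment):
listen batman_primary
    bind *:5000
    mode tcp
    option httpchk GET /master
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server postgresql0 127.0.0.1:5432 check port 8008
    server postgresql1 127.0.0.1:5433 check port 8009
After a failover the health checks flip and connections to port 5000 follow the new leader.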
57
Using callbacks
postgresql:
  callbacks:
    on_start: /etc/patroni/callback.sh
    on_stop: /etc/patroni/callback.sh
    on_role_change: /etc/patroni/callback.sh
58
Using callbacks
#!/bin/bash

readonly cb_name=$1
readonly role=$2
readonly scope=$3

function usage() {
    echo "Usage: $0 <on_start|on_stop|on_role_change> <role> <scope>"
    exit 1
}

# add_service_ip / remove_service_ip (defined elsewhere) move the service IP around
case $cb_name in
    on_stop )
        remove_service_ip
        ;;
    on_start|on_role_change )
        [[ $role == 'master' ]] && add_service_ip || remove_service_ip
        ;;
    * )
        usage
        ;;
esac
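Patroni invokes the configured script with the callback name, the current role and the cluster name, matching the $1/$2/$3 handling above, so a quick manual test might look like this (hypothetical invocation):
$ /etc/patroni/callback.sh on_role_change master batman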
59
Using callbacks
60
Using tags to modify behavior of
individual nodes
● nofailover (true/false) - disable failover/switchover to the given node
(node will not become a master)
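Tags are node-local configuration; a sketch of taking one node out of the failover rotation (add to that node's patroni.yaml, then reload Patroni on it, e.g. with a SIGHUP or patronictl reload):
tags:
  nofailover: true
$ patronictl -c postgres0.yml reload batman postgresql0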
62
Switchover and failover
63
Switchover and failover
● Failover: emergency promotion of a given node
○ automatic, when no leader is present in the cluster
○ manual, when automatic failover is not present or cannot decide on
the new master
64
Switchover with patronictl
$ patronictl switchover batman
Master [postgresql1]:
Candidate ['postgresql0'] []:
When should the switchover take place (e.g. 2015-10-01T14:30)
[now]:
Current cluster topology
+---------+-------------+-----------+--------+---------+-----------+
| Cluster | Member | Host | Role | State | Lag in MB |
+---------+-------------+-----------+--------+---------+-----------+
| batman | postgresql0 | 127.0.0.1 | | running | 0 |
| batman | postgresql1 | 127.0.0.1 | Leader | running | 0 |
+---------+-------------+-----------+--------+---------+-----------+
Are you sure you want to switchover cluster batman, demoting current
master postgresql1? [y/N]: y
2018-01-18 16:22:12.21399 Successfully failed over to "postgresql0"
65
Switchover with patronictl (continue)
$ patronictl list batman
+---------+-------------+-----------+--------+---------+-----------+
| Cluster | Member | Host | Role | State | Lag in MB |
+---------+-------------+-----------+--------+---------+-----------+
| batman | postgresql0 | 127.0.0.1 | Leader | running | 0 |
| batman | postgresql1 | 127.0.0.1 | | stopped | unknown |
+---------+-------------+-----------+--------+---------+-----------+
66
Scheduled switchover
$ patronictl switchover batman
Master [postgresql0]:
Candidate ['postgresql1'] []:
When should the switchover take place (e.g. 2015-10-01T14:30) [now]:
2018-01-18T16:27
Current cluster topology
+---------+-------------+-----------+--------+---------+-----------+
| Cluster | Member | Host | Role | State | Lag in MB |
+---------+-------------+-----------+--------+---------+-----------+
| batman | postgresql0 | 127.0.0.1 | Leader | running | 0 |
| batman | postgresql1 | 127.0.0.1 | | running | 0 |
+---------+-------------+-----------+--------+---------+-----------+
Are you sure you want to switchover cluster batman, demoting current master
postgresql0? [y/N]: y
2018-01-18 16:26:35.45274 Switchover scheduled
+---------+-------------+-----------+--------+---------+-----------+
| Cluster | Member | Host | Role | State | Lag in MB |
+---------+-------------+-----------+--------+---------+-----------+
| batman | postgresql0 | 127.0.0.1 | Leader | running | 0 |
| batman | postgresql1 | 127.0.0.1 | | running | 0 |
+---------+-------------+-----------+--------+---------+-----------+
Switchover scheduled at: 2018-01-18T16:27:00+01:00
from: postgresql0
67
Scheduled restarts
$ patronictl restart batman postgresql1
+---------+-------------+-----------+--------+---------+-----------+
| Cluster | Member | Host | Role | State | Lag in MB |
+---------+-------------+-----------+--------+---------+-----------+
| batman | postgresql0 | 127.0.0.1 | | running | 0 |
| batman | postgresql1 | 127.0.0.1 | Leader | running | 0 |
+---------+-------------+-----------+--------+---------+-----------+
Are you sure you want to restart members postgresql1? [y/N]: y
Restart if the PostgreSQL version is less than provided (e.g. 9.5.2) []:
When should the restart take place (e.g. 2015-10-01T14:30) [now]:
2018-01-18T16:31:00
Success: restart scheduled on member postgresql1
68
Scheduled restarts
2018-01-18 16:30:41,497 INFO: Awaiting restart at
2018-01-18T16:31:00+01:00 (in 19 seconds)
2018-01-18 16:30:41,507 INFO: no action. i am the leader with the lock
2018-01-18 16:30:51,497 INFO: Lock owner: postgresql1; I am postgresql1
2018-01-18 16:31:00,003 INFO: Manual scheduled restart at
2018-01-18T16:31:00+01:00
2018-01-18 16:31:00,024 INFO: restart initiated
2018-01-18 16:31:00.234 CET [37661] LOG: received fast shutdown request
2018-01-18 16:31:00.372 CET [38270] FATAL: the database system is
starting up
2018-01-18 16:31:00.386 CET [38267] LOG: database system is ready to
accept connections
2018-01-18 16:31:00,627 INFO: Lock owner: postgresql1; I am postgresql1
2018-01-18 16:31:00,628 INFO: establishing a new patroni connection to
the postgres cluster
2018-01-18 16:31:00,770 INFO: no action. i am the leader with the lock
69
Reinitialize (don’t repeat GitLab’s mistake)
$ patronictl reinit batman postgresql0
+---------+-------------+-----------+--------+---------+-----------+
| Cluster | Member | Host | Role | State | Lag in MB |
+---------+-------------+-----------+--------+---------+-----------+
| batman | postgresql0 | 127.0.0.1 | | running | 0.0 |
| batman | postgresql1 | 127.0.0.1 | Leader | running | 0.0 |
+---------+-------------+-----------+--------+---------+-----------+
Are you sure you want to reinitialize members postgresql0? [y/N]: y
Success: reinitialize for member postgresql0
https://about.gitlab.com/2017/02/10/postmortem-of-database
-outage-of-january-31/
70
Pause mode
Pause mode is useful for performing maintenance on the PostgreSQL cluster or the DCS:
while paused, Patroni does not manage PostgreSQL (no automatic failover, and a stopped
postgres is not started automatically).
However
● New replicas can still be created
● Manual switchover/failover still works
71
Pause mode
$ patronictl pause batman --wait
'pause' request sent, waiting until it is recognized by all nodes
Success: cluster management is paused
72
Pause mode (promoting another master)
$ pg_ctl -D data/postgresql0 promote
waiting for server to promote.... done
server promoted
73
Pause mode (promoting another master)
$ http http://127.0.0.1:8008/master
HTTP/1.0 503 Service Unavailable
{
"database_system_identifier": "6512774501076700824",
"patroni": {
"scope": "batman",
"version": "1.4"
},
"pause": true,
"postmaster_start_time": "2018-01-19 15:51:31.879 CET",
"role": "master",
"server_version": 100000,
"state": "running",
"timeline": 2,
"xlog": {
"location": 50332016
}
}
74
Pause mode (resuming)
$ patronictl resume batman
Success: cluster management is resumed
● synchronous_mode_strict: true/false
Works the same as synchronous mode, but if no replica can be made synchronous,
synchronous replication is retained and the master will not accept any writes (*)
until a synchronous replica is available again, so no data can be lost
76
Synchronous replication
78
Synchronous replication REST endpoints
79
Extensibility
● Callbacks
○ client routing and server monitoring
● post_bootstrap script
○ called after the new cluster has been bootstrapped. If it returns
non-zero, the bootstrap is cancelled. One can populate a database
or create initial users from that script.
80
Custom replica creation
postgresql:
  create_replica_method:
    - wal_e
    - basebackup
  wal_e:
    command: /bin/wale_restore
    envdir: /etc/env.d/wal-e
    threshold_megabytes: 4096
    threshold_backup_size_percentage: 30
    use_iam: 1
    retries: 2
    no_master: 1
81
Custom replica creation
wal_e:
  command: /bin/wale_restore   # script to call
  no_master: 1                 # whether to call it to initialize
                               # the replica w/o the master
  # the following arguments are method-specific
  envdir: /etc/env.d/wal-e
  use_iam: 1
  retries: 2
82
Custom replica creation
wal_e:
  command: /bin/wale_restore
  no_master: 1
  envdir: /etc/env.d/wal-e
  use_iam: 1
  retries: 2

# Resulting replica creation command:
/bin/wale_restore --scope=batman --datadir=/home/postgres/pgdata --role=replica \
    --connstring="postgres://postgres@localhost:5432/postgres" \
    --no_master=1 --envdir=/etc/env.d/wal-e --use-iam=1 --retries=2
83
Custom replica creation
85
initdb with arguments
bootstrap:
  initdb:
    - encoding: UTF8
    - data-checksums
    - auth-host: md5
    - auth-local: trust
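Each list entry becomes an initdb switch or option, so the list above corresponds roughly to the following command (the data directory is taken from the postgresql section of the node configuration):
initdb --encoding=UTF8 --data-checksums --auth-host=md5 --auth-local=trust -D /home/postgres/pgroot/pgdata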
86
Custom bootstrap
bootstrap:
  method: clone_with_wale
  clone_with_wale:
    command: python3 /clone_with_s3.py --envdir "/etc/env.d/clone/wal-e" --recovery-target-time="2018-01-19 00:00:18.349 UTC"
    recovery_conf:
      restore_command: envdir "/etc/env.d/clone/wal-e" wal-e wal-fetch "%f" "%p"
      recovery_target_timeline: latest
      recovery_target_action: promote
      recovery_target_time: "2018-01-19 00:00:18.349 UTC"
      recovery_target_inclusive: false
87
Custom bootstrap
● only one method allowed (initdb or a custom one)
● on failure, the data directory is wiped out and the /initialize lock is released
● if the post_bootstrap script fails, the actions are the same as when the
bootstrap fails
88
post_bootstrap
bootstrap:
  post_bootstrap: /post_bootstrap.sh

$ cat /post_bootstrap.sh
#!/bin/bash
echo "\c template1
CREATE EXTENSION pg_stat_statements;
CREATE ROLE admin;" \
| psql -d "$1"   # $1 - connection string to the newly created master
89
Patroni configuration
scope: batman # cluster name, must be the same for all nodes in the given cluster
#namespace: /service/ # namespace (key prefix) in DCS, default value is /service
name: postgresql0 # postgresql node name
restapi:
# restapi configuration
etcd:
# etcd configuration (can also be consul, zookeeper or kubernetes in corresponding sections)
bootstrap:
# configuration applied once during the cluster bootstrap
postgresql:
# postgres-related node-local configuration
watchdog:
# how Patroni interacts with the watchdog
tags:
# map of tags: nofailover, noloadbalance, nosync, replicatefrom, clonefrom
90
Restapi configuration
restapi:
  listen: 0.0.0.0:8008              # address to listen on for REST API requests
  connect_address: 127.0.0.1:8008   # address to connect to this node from other
                                    # nodes, also stored in DCS
  # certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem   # certificate for SSL connections
  # keyfile: /etc/ssl/private/ssl-cert-snakeoil.key  # keyfile for SSL connections
  # authentication:       # username and password for basic auth,
  #   username: admin     # used for all data-modifying operations
  #   password: secret    # (POST, PATCH, PUT)
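# With the authentication block enabled, data-modifying calls must supply those
# credentials; for example, asking this node to restart PostgreSQL through the REST
# API (a sketch; this is the POST /restart call that patronictl restart also makes):
#   $ curl -s -XPOST -u admin:secret http://127.0.0.1:8008/restart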
91
DCS configuration
etcd:
  host: 127.0.0.1:2379
  # protocol: http
  # username: etcd
  # password: v4rY$ecRetW0rd
  # cacert: /etc/ssl/ca.crt
  # cert: /etc/ssl/cert.crt
  # key: /etc/ssl/key.key

consul:
  host: 127.0.0.1:8500
  # scheme: http
  # token: abcd1234
  # verify: true
  # cacert: /etc/ssl/ca.crt
  # cert: /etc/ssl/cert.crt
  # key: /etc/ssl/key.key
  # dc: default
  # checks: []
92
DCS configuration
zookeeper:
  hosts:
    - host1:port1
    - host2:port2
    - host3:port3

exhibitor:
  hosts:
    - host1
    - host2
    - host3
  poll_interval: 300   # interval to update topology from Exhibitor
  port: 8181           # Exhibitor port (not ZooKeeper!)
93
Bootstrap configuration
bootstrap:
  dcs:   # this content is written into the `/config` key after bootstrap succeeded
    loop_wait: 10
    ttl: 30
    retry_timeout: 10
    maximum_lag_on_failover: 10485760
    # master_start_timeout: 300
    # synchronous_mode: false
    # synchronous_mode_strict: false
    postgresql:
      use_pg_rewind: true
      use_slots: true
      # parameters:   # These parameters can be changed only globally (via DCS)
      #   max_connections: 100
      #   max_wal_senders: 10
      #   max_prepared_transactions: 0
      #   max_locks_per_transaction: 64
      #   max_replication_slots: 10
      #   max_worker_processes: 8
  pg_hba:
    - local all all trust
    - hostssl all all all md5
    - hostssl replication standby all md5
94
Bootstrap configuration (continue)
bootstrap:
  method: my_bootstrap_method
  my_bootstrap_method:
    command: /usr/local/bin/my_bootstrap_script.sh
    # recovery_conf:
    #   restore_command: /usr/local/bin/my_restore_command.sh
    #   recovery_target_timeline: latest
    #   recovery_target_action: promote
    #   recovery_target_time: "2018-01-19 00:00:18.349 UTC"
    #   recovery_target_inclusive: false
  post_bootstrap: /usr/local/bin/my_post_bootstrap_command.sh
95
Postgresql configuration
postgresql:
  use_unix_socket: true               # how Patroni will connect to the local postgres
  listen: 0.0.0.0:5432
  connect_address: 127.0.0.1:5432     # how this node can be accessed from outside
  data_dir: /home/postgres/pgroot/pgdata
  bin_dir: /usr/lib/postgresql/10/bin # where the postgres binaries are located
  authentication:
    superuser:
      username: postgres
      password: SeCrEtPaS$WoRd
    replication:
      username: standby
      password: sTaNdByPaS$WoRd
  parameters:
    shared_buffers: 8GB
    unix_socket_directories: /var/run/postgresql
  # recovery_conf:
  #   restore_command: /usr/local/bin/my_restore_command.sh "%f" "%p"
96
Postgresql configuration (continue)
postgresql:
  callbacks:
    on_start: /usr/local/bin/my_callback.sh
    on_stop: /usr/local/bin/my_callback.sh
    on_role_change: /usr/local/bin/my_callback.sh
  create_replica_method:
    - custom_backup
    - basebackup
  custom_backup:
    command: /usr/local/bin/restore_cluster.sh
    retries: 2
    no_master: 1
97
Watchdog and tags configuration
watchdog:
  mode: automatic   # allowed values: off, automatic, required
  device: /dev/watchdog

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: true
  # nosync: true
  # replicatefrom: postgresql1
98
Additional ways of configuring Patroni
99
Troubleshooting
100
DCS is not accessible
$ patroni postgres0.yml
101
Patroni can’t find PostgreSQL binaries
$ patroni postgres0.yml
102
Not really an error, will disappear after
“loop_wait” seconds
$ patroni postgres1.yml
103
Wrong initdb config options
$ patroni postgres0.yml
--- a/postgres0.yml
+++ b/postgres0.yml
@@ -43,7 +43,7 @@ bootstrap:
# some desired options for 'initdb'
initdb: # Note: It needs to be a list (some options need values, others are switches)
- encoding: UTF8
- - data-checksums: true
+ - data-checksums
104
Badly formatted yaml
# wrong:
bootstrap:
  users:
    admin:
      password: admin
      options:
        -createrole
        -createdb

# correct:
bootstrap:
  users:
    admin:
      password: admin
      options:
        - createrole
        - createdb

ERROR: DO $$
BEGIN
    SET local synchronous_commit = 'local';
    PERFORM * FROM pg_authid WHERE rolname = 'admin';
    IF FOUND THEN
        ALTER ROLE "admin" WITH -CREATEROLE -CREATEDB LOGIN PASSWORD 'admin';
    ELSE
        CREATE ROLE "admin" WITH -CREATEROLE -CREATEDB LOGIN PASSWORD 'admin';
    END IF;
END;
$$
105
Cluster was initialized during install of
postgres packages
# node1 # node2
106
Useful links
● Patroni - https://github.com/zalando/patroni
107
Thank you!
108