  • Installing stolon
    PostgreSQL 2019. 10. 14. 16:04

    stolon scenario (run locally with two AgensGraph instances)

    1. Install etcd
    2. Verify the etcd installation
    3. Start etcd
    4. Download the stolon binaries
    5. Extract the stolon archive
    6. Verify the stolon binaries
    7. Initialize the stolon cluster
    8. Start stolon-sentinel
    9. Start stolon-keeper - master
    10. Start stolon-proxy
    11. Connect to the DB through the proxy
    12. Start stolon-keeper - standby
    13. Check the stolon configuration
    14. Test automatic failover
    15. Verify the automatic failover test
    16. Test failback
    17. Verify the failback test

    The following are the commands used to run the stolon scenario.

    // Install etcd
    [bylee@localhost ha]$ wget https://github.com/etcd-io/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz
    [bylee@localhost ha]$ tar -vxf etcd-v3.3.9-linux-amd64.tar.gz
    [bylee@localhost ha]$ mv etcd-v3.3.9-linux-amd64 etcd
     
    // Verify the etcd installation
    [bylee@localhost ha]$ cd etcd/
    [bylee@localhost etcd]$ ./etcd --version
    etcd Version: 3.3.9
    Git SHA: fca8add78
    Go Version: go1.10.3
    Go OS/Arch: linux/amd64
     
    // Start etcd
    [bylee@localhost etcd]$ ./etcd
    2018-09-03 16:24:49.954250 I | etcdmain: etcd Version: 3.3.9
    2018-09-03 16:24:49.954292 I | etcdmain: Git SHA: fca8add78
    2018-09-03 16:24:49.954295 I | etcdmain: Go Version: go1.10.3
    2018-09-03 16:24:49.954297 I | etcdmain: Go OS/Arch: linux/amd64
    2018-09-03 16:24:49.954300 I | etcdmain: setting maximum number of CPUs to 1, total number of available CPUs is 1
    2018-09-03 16:24:49.954305 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
    2018-09-03 16:24:49.954636 N | etcdmain: the server is already initialized as member before, starting as etcd member...
    2018-09-03 16:24:49.957432 I | embed: listening for peers on http://localhost:2380
    2018-09-03 16:24:49.957649 I | embed: listening for client requests on localhost:2379
    2018-09-03 16:24:49.997458 I | etcdserver: recovered store from snapshot at index 700007
    2018-09-03 16:24:50.001203 I | etcdserver: name = default
    2018-09-03 16:24:50.001322 I | etcdserver: data dir = default.etcd
    2018-09-03 16:24:50.001374 I | etcdserver: member dir = default.etcd/member
    2018-09-03 16:24:50.001408 I | etcdserver: heartbeat = 100ms
    2018-09-03 16:24:50.001441 I | etcdserver: election = 1000ms
    2018-09-03 16:24:50.001474 I | etcdserver: snapshot count = 100000
    2018-09-03 16:24:50.001518 I | etcdserver: advertise client URLs = http://localhost:2379
    2018-09-03 16:24:50.467822 I | etcdserver: restarting member 8e9e05c52164694d in cluster cdf818194e3a8c32 at commit index 751407
    2018-09-03 16:24:50.476697 I | raft: 8e9e05c52164694d became follower at term 2
    2018-09-03 16:24:50.476801 I | raft: newRaft 8e9e05c52164694d [peers: [8e9e05c52164694d], term: 2, commit: 751407, applied: 700007, lastindex: 751407, lastterm: 2]
    2018-09-03 16:24:50.476910 I | etcdserver/api: enabled capabilities for version 3.3
    2018-09-03 16:24:50.476958 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32 from store
    2018-09-03 16:24:50.476996 I | etcdserver/membership: set the cluster version to 3.3 from store
    2018-09-03 16:24:50.490717 W | auth: simple token is not cryptographically signed
    2018-09-03 16:24:50.491408 I | etcdserver: starting server... [version: 3.3.9, cluster version: 3.3]
    2018-09-03 16:24:50.497664 I | etcdserver: 8e9e05c52164694d as single-node; fast-forwarding 9 ticks (election ticks 10)
    2018-09-03 16:24:51.082331 I | raft: 8e9e05c52164694d is starting a new election at term 2
    2018-09-03 16:24:51.082358 I | raft: 8e9e05c52164694d became candidate at term 3
    2018-09-03 16:24:51.082388 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 3
    2018-09-03 16:24:51.082400 I | raft: 8e9e05c52164694d became leader at term 3
    2018-09-03 16:24:51.082405 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 3
    2018-09-03 16:24:51.083628 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
    2018-09-03 16:24:51.083884 E | etcdmain: forgot to set Type=notify in systemd service file?
    2018-09-03 16:24:51.083921 I | embed: ready to serve client requests
    2018-09-03 16:24:51.103711 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
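    The "no data-dir provided" and "serving insecure client requests" messages above come from running etcd entirely on defaults. For a local test that is fine, but the defaults can be made explicit. A minimal single-node sketch; the member name and data directory are illustrative, and the flags are etcd v3.3's:

```shell
# Hypothetical single-node etcd start with explicit settings instead of
# the defaults; "stolon-store" and its data directory are made-up names.
./etcd \
  --name stolon-store \
  --data-dir ./stolon-store.etcd \
  --listen-client-urls http://127.0.0.1:2379 \
  --advertise-client-urls http://127.0.0.1:2379 \
  --listen-peer-urls http://127.0.0.1:2380 \
  --initial-advertise-peer-urls http://127.0.0.1:2380 \
  --initial-cluster stolon-store=http://127.0.0.1:2380
```

    Note that once `--name` is changed, `--initial-cluster` has to list the new name, since its default assumes a member called `default`.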
     
     
    // Download the stolon binaries
    [bylee@localhost ha]$ wget https://github.com/sorintlab/stolon/releases/download/v0.12.0/stolon-v0.12.0-linux-amd64.tar.gz
    --2018-09-03 16:20:02--  https://github.com/sorintlab/stolon/releases/download/v0.12.0/stolon-v0.12.0-linux-amd64.tar.gz
    Resolving github.com (github.com)... 192.30.255.112, 192.30.255.113
    Connecting to github.com (github.com)|192.30.255.112|:443... connected.
    HTTP request sent, awaiting response... 302 Found
    Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/43884372/020301c2-7f73-11e8-90ea-94aad6d13245?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20180903%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20180903T072003Z&X-Amz-Expires=300&X-Amz-Signature=5c4dc01c33231d48fb50da4397f0ffd9201adb851f079dc80f8021545aaaa91d&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dstolon-v0.12.0-linux-amd64.tar.gz&response-content-type=application%2Foctet-stream [following]
    --2018-09-03 16:20:03--  https://github-production-release-asset-2e65be.s3.amazonaws.com/43884372/020301c2-7f73-11e8-90ea-94aad6d13245?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20180903%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20180903T072003Z&X-Amz-Expires=300&X-Amz-Signature=5c4dc01c33231d48fb50da4397f0ffd9201adb851f079dc80f8021545aaaa91d&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dstolon-v0.12.0-linux-amd64.tar.gz&response-content-type=application%2Foctet-stream
    Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 54.231.112.184
    Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|54.231.112.184|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 31608931 (30M) [application/octet-stream]
    Saving to: 'stolon-v0.12.0-linux-amd64.tar.gz'
     
    stolon-v0.12.0-linux-amd64 100%[=====================================>]  30.14M  1.27MB/s    in 31s    
     
    2018-09-03 16:20:35 (998 KB/s) - 'stolon-v0.12.0-linux-amd64.tar.gz' saved [31608931/31608931]
     
    // Extract the stolon archive
    [bylee@localhost ha]$ tar -xvf stolon-v0.12.0-linux-amd64.tar.gz
    stolon-v0.12.0-linux-amd64/
    stolon-v0.12.0-linux-amd64/README.md
    stolon-v0.12.0-linux-amd64/examples/
    stolon-v0.12.0-linux-amd64/examples/kubernetes/
    stolon-v0.12.0-linux-amd64/examples/kubernetes/role-binding.yaml
    stolon-v0.12.0-linux-amd64/examples/kubernetes/README.md
    stolon-v0.12.0-linux-amd64/examples/kubernetes/stolon-sentinel.yaml
    stolon-v0.12.0-linux-amd64/examples/kubernetes/postgresql_upgrade.md
    stolon-v0.12.0-linux-amd64/examples/kubernetes/secret.yaml
    stolon-v0.12.0-linux-amd64/examples/kubernetes/stolon-proxy-service.yaml
    stolon-v0.12.0-linux-amd64/examples/kubernetes/role.yaml
    stolon-v0.12.0-linux-amd64/examples/kubernetes/image/
    stolon-v0.12.0-linux-amd64/examples/kubernetes/image/docker/
    stolon-v0.12.0-linux-amd64/examples/kubernetes/image/docker/Dockerfile.template
    stolon-v0.12.0-linux-amd64/examples/kubernetes/image/docker/Makefile
    stolon-v0.12.0-linux-amd64/examples/kubernetes/stolon-proxy.yaml
    stolon-v0.12.0-linux-amd64/examples/kubernetes/stolon-keeper.yaml
    stolon-v0.12.0-linux-amd64/examples/swarm/
    stolon-v0.12.0-linux-amd64/examples/swarm/README.md
    stolon-v0.12.0-linux-amd64/examples/swarm/docker-compose-pg.yml
    stolon-v0.12.0-linux-amd64/examples/swarm/docker-compose-etcd.yml
    stolon-v0.12.0-linux-amd64/examples/swarm/etc/
    stolon-v0.12.0-linux-amd64/examples/swarm/etc/secrets/
    stolon-v0.12.0-linux-amd64/examples/swarm/etc/secrets/pgsql
    stolon-v0.12.0-linux-amd64/examples/swarm/etc/secrets/pgsql_repl
    stolon-v0.12.0-linux-amd64/bin/
    stolon-v0.12.0-linux-amd64/bin/stolon-proxy
    stolon-v0.12.0-linux-amd64/bin/stolonctl
    stolon-v0.12.0-linux-amd64/bin/stolon-keeper
    stolon-v0.12.0-linux-amd64/bin/stolon-sentinel
    stolon-v0.12.0-linux-amd64/doc/
    stolon-v0.12.0-linux-amd64/doc/pg_rewind.md
    stolon-v0.12.0-linux-amd64/doc/initialization.md
    stolon-v0.12.0-linux-amd64/doc/architecture.png
    stolon-v0.12.0-linux-amd64/doc/manual_switchover.md
    stolon-v0.12.0-linux-amd64/doc/README.md
    stolon-v0.12.0-linux-amd64/doc/architecture.svg
    stolon-v0.12.0-linux-amd64/doc/custom_pg_hba_entries.md
    stolon-v0.12.0-linux-amd64/doc/simplecluster.md
    stolon-v0.12.0-linux-amd64/doc/faq.md
    stolon-v0.12.0-linux-amd64/doc/architecture_small.png
    stolon-v0.12.0-linux-amd64/doc/ssl.md
    stolon-v0.12.0-linux-amd64/doc/stolonctl.md
    stolon-v0.12.0-linux-amd64/doc/upgrade.md
    stolon-v0.12.0-linux-amd64/doc/architecture.md
    stolon-v0.12.0-linux-amd64/doc/commands/
    stolon-v0.12.0-linux-amd64/doc/commands/stolonctl_removekeeper.md
    stolon-v0.12.0-linux-amd64/doc/commands/stolonctl_spec.md
    stolon-v0.12.0-linux-amd64/doc/commands/stolonctl_promote.md
    stolon-v0.12.0-linux-amd64/doc/commands/stolonctl_status.md
    stolon-v0.12.0-linux-amd64/doc/commands/stolon-sentinel.md
    stolon-v0.12.0-linux-amd64/doc/commands/stolonctl_clusterdata.md
    stolon-v0.12.0-linux-amd64/doc/commands/stolonctl.md
    stolon-v0.12.0-linux-amd64/doc/commands/stolonctl_update.md
    stolon-v0.12.0-linux-amd64/doc/commands/stolonctl_init.md
    stolon-v0.12.0-linux-amd64/doc/commands/stolon-proxy.md
    stolon-v0.12.0-linux-amd64/doc/commands/stolon-keeper.md
    stolon-v0.12.0-linux-amd64/doc/commands/stolonctl_version.md
    stolon-v0.12.0-linux-amd64/doc/pitr_wal-e.md
    stolon-v0.12.0-linux-amd64/doc/pitr.md
    stolon-v0.12.0-linux-amd64/doc/postgres_parameters.md
    stolon-v0.12.0-linux-amd64/doc/syncrepl.md
    stolon-v0.12.0-linux-amd64/doc/commands_invocation.md
    stolon-v0.12.0-linux-amd64/doc/standbycluster.md
    stolon-v0.12.0-linux-amd64/doc/cluster_spec.md
     
    // Verify the stolon binaries
    [bylee@localhost ha]$ ll
    total 41896
    drwxrwxr-x. 11 bylee bylee     4096 Aug 30 19:12 agensgraph
    drwxrwxr-x.  2 bylee bylee     4096 Aug 23 09:46 config
    drwxr-xr-x.  4 bylee bylee     4096 Aug 30 20:19 etcd
    -rw-rw-r--.  1 bylee bylee 11254519 Jul 25 02:22 etcd-v3.3.9-linux-amd64.tar.gz
    drwxrwxr-x. 15 bylee bylee     4096 Aug 30 20:29 patroni
    drwxrwxr-x.  8 bylee bylee     4096 Aug 22 15:47 repmgr
    drwxr-xr-x.  5 bylee bylee     4096 Jul  4 17:12 stolon-v0.12.0-linux-amd64
    -rw-rw-r--.  1 bylee bylee 31608931 Jul  4 17:14 stolon-v0.12.0-linux-amd64.tar.gz
    [bylee@localhost ha]$ cd stolon-v0.12.0-linux-amd64/
    [bylee@localhost stolon-v0.12.0-linux-amd64]$ ll
    total 20
    -rw-r--r--. 1 bylee bylee 4605 Jul  4 17:12 README.md
    drwxr-xr-x. 2 bylee bylee 4096 Jul  4 17:12 bin
    drwxr-xr-x. 3 bylee bylee 4096 Jul  4 17:12 doc
    drwxr-xr-x. 4 bylee bylee 4096 Jul  4 17:12 examples
    [bylee@localhost stolon-v0.12.0-linux-amd64]$ cd bin/
    [bylee@localhost bin]$ ll
    total 105028
    -rwxr-xr-x. 1 bylee bylee 27229184 Jul  4 17:12 stolon-keeper
    -rwxr-xr-x. 1 bylee bylee 26703872 Jul  4 17:12 stolon-proxy
    -rwxr-xr-x. 1 bylee bylee 27064288 Jul  4 17:12 stolon-sentinel
    -rwxr-xr-x. 1 bylee bylee 26543360 Jul  4 17:12 stolonctl
     
    [bylee@localhost bin]$ ./stolonctl --version
    stolonctl version v0.12.0
     
    // Initialize the stolon cluster
    [bylee@localhost stolon-v0.12.0-linux-amd64]$ ./bin/stolonctl --cluster-name=stolon-cluster --store-backend=etcdv3 init
    WARNING: The databases managed by the keepers will be overwritten depending on the provided cluster spec.
    Are you sure you want to continue? [yes/no] yes
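    `init` also accepts an initial cluster specification as inline JSON, so options can be set at creation time instead of being patched in afterwards. A sketch using the same spec key this post patches in later (whether you want it enabled from the start is situational):

```shell
# Initialize the cluster with synchronous replication already enabled,
# instead of patching it in after the fact.
./bin/stolonctl --cluster-name=stolon-cluster --store-backend=etcdv3 \
  init '{ "synchronousReplication": true }'
```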
     
    // Start stolon-sentinel (the repeated "no keepers registered" errors below are expected until a keeper registers)
    [bylee@localhost stolon-v0.12.0-linux-amd64]$ ./bin/stolon-sentinel --cluster-name=stolon-cluster --store-backend=etcdv3
    2018-09-03T16:28:48.864+0900    INFO    cmd/sentinel.go:1965    sentinel uid    {"uid": "5f54d0f0"}
    2018-09-03T16:28:48.865+0900    INFO    cmd/sentinel.go:96  Trying to acquire sentinels leadership
    2018-09-03T16:28:48.871+0900    INFO    cmd/sentinel.go:103 sentinel leadership acquired
    2018-09-03T16:28:48.874+0900    INFO    cmd/sentinel.go:771 trying to find initial master
    2018-09-03T16:28:48.877+0900    ERROR   cmd/sentinel.go:1906    failed to update cluster data   {"error": "cannot choose initial master: no keepers registered"}
    2018-09-03T16:28:53.898+0900    INFO    cmd/sentinel.go:771 trying to find initial master
    2018-09-03T16:28:53.899+0900    ERROR   cmd/sentinel.go:1906    failed to update cluster data   {"error": "cannot choose initial master: no keepers registered"}
     
    // Start stolon-keeper - master
    [bylee@localhost stolon-v0.12.0-linux-amd64]$ ./bin/stolon-keeper --cluster-name=stolon-cluster --store-backend=etcdv3 --uid=agensgraph0 --data-dir=data/agensgraph0 --pg-su-password=supassword --pg-repl-username=repluser --pg-repl-password=replpassword --pg-listen-address=127.0.0.1 --pg-bin-path=/home/bylee/ha/agensgraph/bin --pg-port=5532
    2018-09-03T16:31:11.164+0900    WARN    cmd/keeper.go:1807  provided --pgListenAddress "127.0.0.1" is a loopback ip. This will be advertized to the other components and communication will fail if they are on different hosts
    2018-09-03T16:31:11.165+0900    INFO    cmd/keeper.go:1893  exclusive lock on data dir taken
    2018-09-03T16:31:11.168+0900    INFO    cmd/keeper.go:501   keeper uid  {"uid": "agensgraph0"}
    2018-09-03T16:31:11.177+0900    INFO    cmd/keeper.go:988   our keeper data is not available, waiting for it to appear
    2018-09-03T16:31:16.179+0900    INFO    cmd/keeper.go:988   our keeper data is not available, waiting for it to appear
    2018-09-03T16:31:21.181+0900    INFO    cmd/keeper.go:1049  current db UID different than cluster data db UID   {"db": "", "cdDB": "6b612c86"}
    2018-09-03T16:31:21.181+0900    INFO    cmd/keeper.go:1053  initializing the database cluster
    The files belonging to this database system will be owned by user "bylee".
    This user must also own the server process.
     
    The database cluster will be initialized with locale "ko_KR.UTF-8".
    The default database encoding has accordingly been set to "UTF8".
    initdb: could not find suitable text search configuration for locale "ko_KR.UTF-8"
    The default text search configuration will be set to "simple".
     
    Data page checksums are disabled.
     
    creating directory /home/bylee/ha/stolon-v0.12.0-linux-amd64/data/agensgraph0/postgres ... ok
    creating subdirectories ... ok
    selecting default max_connections ... 100
    selecting default shared_buffers ... 128MB
    selecting dynamic shared memory implementation ... posix
    creating configuration files ... ok
    running bootstrap script ... ok
    performing post-bootstrap initialization ... ok
    syncing data to disk ... ok
     
    WARNING: enabling "trust" authentication for local connections
    You can change this by editing pg_hba.conf or using the option -A, or
    --auth-local and --auth-host, the next time you run initdb.
     
    Success. You can now start the database server using:
     
        /home/bylee/ha/agensgraph/bin/ag_ctl -D /home/bylee/ha/stolon-v0.12.0-linux-amd64/data/agensgraph0/postgres -l logfile start
     
    2018-09-03T16:31:22.588+0900    INFO    postgresql/postgresql.go:287    starting database
    2018-09-03 16:31:22.592 KST [6977] LOG:  listening on IPv4 address "127.0.0.1", port 5532
    2018-09-03 16:31:22.593 KST [6977] LOG:  could not bind IPv4 address "127.0.0.1": Address already in use
    2018-09-03 16:31:22.593 KST [6977] HINT:  Is another postmaster already running on port 5532? If not, wait a few seconds and retry.
    2018-09-03 16:31:22.593 KST [6977] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5532"
    2018-09-03 16:31:22.625 KST [6978] LOG:  database system was shut down at 2018-09-03 16:31:22 KST
    2018-09-03 16:31:22.638 KST [6977] LOG:  database system is ready to accept connections
    2018-09-03T16:31:22.812+0900    INFO    cmd/keeper.go:1112  setting roles
    2018-09-03T16:31:22.812+0900    INFO    postgresql/postgresql.go:513    setting superuser password
    2018-09-03T16:31:22.815+0900    INFO    postgresql/postgresql.go:517    superuser password set
    2018-09-03T16:31:22.815+0900    INFO    postgresql/postgresql.go:520    creating replication role
    2018-09-03T16:31:22.818+0900    INFO    postgresql/postgresql.go:530    replication role created    {"role": "repluser"}
    2018-09-03T16:31:22.819+0900    INFO    postgresql/postgresql.go:351    stopping database
    2018-09-03 16:31:22.820 KST [6977] LOG:  received fast shutdown request
    waiting for server to shut down....2018-09-03 16:31:22.821 KST [6977] LOG:  aborting any active transactions
    2018-09-03 16:31:22.822 KST [6977] LOG:  worker process: logical replication launcher (PID 6984) exited with exit code 1
    2018-09-03 16:31:22.822 KST [6979] LOG:  shutting down
    2018-09-03 16:31:22.849 KST [6977] LOG:  database system is shut down
     done
    server stopped
    2018-09-03T16:31:22.924+0900    INFO    cmd/keeper.go:1395  our db requested role is master
    2018-09-03T16:31:22.928+0900    INFO    postgresql/postgresql.go:287    starting database
    2018-09-03 16:31:22.943 KST [6995] LOG:  listening on IPv4 address "127.0.0.1", port 5532
    2018-09-03 16:31:22.943 KST [6995] LOG:  could not bind IPv4 address "127.0.0.1": Address already in use
    2018-09-03 16:31:22.943 KST [6995] HINT:  Is another postmaster already running on port 5532? If not, wait a few seconds and retry.
    2018-09-03 16:31:22.943 KST [6995] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5532"
    2018-09-03 16:31:22.966 KST [6996] LOG:  database system was shut down at 2018-09-03 16:31:22 KST
    2018-09-03 16:31:22.970 KST [6995] LOG:  database system is ready to accept connections
    2018-09-03T16:31:23.145+0900    INFO    cmd/keeper.go:1423  already master
    2018-09-03T16:31:23.173+0900    INFO    cmd/keeper.go:1585  postgres parameters not changed
    2018-09-03T16:31:23.182+0900    INFO    cmd/keeper.go:1596  postgres hba entries not changed
    2018-09-03T16:31:28.186+0900    INFO    cmd/keeper.go:1395  our db requested role is master
    2018-09-03T16:31:28.187+0900    INFO    cmd/keeper.go:1423  already master
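    The keeper's long flag list can also be supplied through the environment. Stolon commands read each flag from a corresponding variable; the mapping sketched here (an `STKEEPER_` prefix, dashes to underscores, uppercased) is an assumption, so check `stolon-keeper --help` before relying on it:

```shell
# Same master keeper as above, configured via environment variables
# instead of flags. The STKEEPER_* naming is an assumed mapping.
export STKEEPER_CLUSTER_NAME=stolon-cluster
export STKEEPER_STORE_BACKEND=etcdv3
export STKEEPER_UID=agensgraph0
export STKEEPER_DATA_DIR=data/agensgraph0
export STKEEPER_PG_SU_PASSWORD=supassword
export STKEEPER_PG_REPL_USERNAME=repluser
export STKEEPER_PG_REPL_PASSWORD=replpassword
export STKEEPER_PG_LISTEN_ADDRESS=127.0.0.1
export STKEEPER_PG_BIN_PATH=/home/bylee/ha/agensgraph/bin
export STKEEPER_PG_PORT=5532
./bin/stolon-keeper
```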
     
    // Start stolon-proxy
    [bylee@localhost stolon-v0.12.0-linux-amd64]$ ./bin/stolon-proxy --cluster-name=stolon-cluster --store-backend=etcdv3 --port=5632
    2018-09-03T16:33:26.621+0900    INFO    cmd/proxy.go:383    proxy uid   {"uid": "5553daed"}
    2018-09-03T16:33:26.636+0900    INFO    cmd/proxy.go:132    Starting proxying
    2018-09-03T16:33:26.637+0900    INFO    cmd/proxy.go:251    master address  {"address": "127.0.0.1:5532"}
    2018-09-03T16:33:26.662+0900    INFO    cmd/proxy.go:266    not proxying to master address since we aren't in the enabled proxies list  {"address": "127.0.0.1:5532"}
    2018-09-03T16:33:31.667+0900    INFO    cmd/proxy.go:251    master address  {"address": "127.0.0.1:5532"}
    2018-09-03T16:33:31.673+0900    INFO    cmd/proxy.go:263    proxying to master address  {"address": "127.0.0.1:5532"}
     
    // Connect to the DB through the proxy
    [bylee@localhost bin]$ ./agens -p 5632 -d postgres
    agens: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/tmp/.s.PGSQL.5632"?
    [bylee@localhost bin]$ ./agens -h localhost -p 5632 -d postgres
    Password: supassword
    agens (AgensGraph 1.4devel, based on PostgreSQL 10.4)
    Type "help" for help.
     
    postgres=# create graph p;
    CREATE GRAPH
    postgres=#
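    The interactive password prompt above can be avoided in scripts with libpq's standard `PGPASSWORD` variable, which `agens` (AgensGraph's psql) honors. A sketch; the graph name is made up:

```shell
# Non-interactive connection through the stolon proxy on port 5632;
# PGPASSWORD is standard libpq behaviour. The graph name "q" is illustrative.
PGPASSWORD=supassword ./agens -h localhost -p 5632 -d postgres \
  -c 'CREATE GRAPH q;'
```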
     
    // Start stolon-keeper - standby
    [bylee@localhost stolon-v0.12.0-linux-amd64]$ ./bin/stolon-keeper --cluster-name=stolon-cluster --store-backend=etcdv3 --uid=agensgraph1 --data-dir=data/agensgraph1 --pg-su-password=supassword --pg-repl-username=repluser --pg-repl-password=replpassword --pg-listen-address=127.0.0.1 --pg-bin-path=/home/bylee/ha/agensgraph/bin --pg-port=5533
    2018-09-03T16:39:44.813+0900    WARN    cmd/keeper.go:1807  provided --pgListenAddress "127.0.0.1" is a loopback ip. This will be advertized to the other components and communication will fail if they are on different hosts
    2018-09-03T16:39:44.813+0900    INFO    cmd/keeper.go:1893  exclusive lock on data dir taken
    2018-09-03T16:39:44.813+0900    INFO    cmd/keeper.go:501   keeper uid  {"uid": "agensgraph1"}
    2018-09-03T16:39:44.818+0900    INFO    cmd/keeper.go:1002  our db boot UID is different than the cluster data one, waiting for it to be updated    {"bootUUID": "841c31ae-6655-4224-a9d6-9abb142eb833", "clusterBootUUID": "c05feec2-1ef0-46d0-ac21-4a5d0b4a6ec5"}
    2018-09-03T16:39:49.821+0900    ERROR   cmd/keeper.go:1018  db failed to initialize or resync
    2018-09-03T16:39:49.839+0900    INFO    cmd/keeper.go:1049  current db UID different than cluster data db UID   {"db": "", "cdDB": "14ccffc8"}
    2018-09-03T16:39:49.839+0900    INFO    cmd/keeper.go:1196  resyncing the database cluster
    2018-09-03T16:39:49.842+0900    INFO    cmd/keeper.go:1221  database cluster not initialized
    2018-09-03T16:39:49.844+0900    INFO    cmd/keeper.go:838   syncing from followed db    {"followedDB": "6b612c86", "keeper": "agensgraph0"}
    2018-09-03T16:39:49.844+0900    INFO    postgresql/postgresql.go:852    running pg_basebackup
    WARNING:  skipping special file "./postgresql.auto.conf"
    2018-09-03T16:39:58.873+0900    INFO    cmd/keeper.go:844   sync succeeded
    2018-09-03T16:39:58.877+0900    INFO    postgresql/postgresql.go:287    starting database
    2018-09-03 16:39:58.904 KST [8512] LOG:  listening on IPv4 address "127.0.0.1", port 5533
    2018-09-03 16:39:58.906 KST [8512] LOG:  could not bind IPv4 address "127.0.0.1": Address already in use
    2018-09-03 16:39:58.906 KST [8512] HINT:  Is another postmaster already running on port 5533? If not, wait a few seconds and retry.
    2018-09-03 16:39:58.907 KST [8512] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5533"
    2018-09-03 16:39:58.927 KST [8513] LOG:  database system was interrupted; last known up at 2018-09-03 16:39:58 KST
    2018-09-03T16:39:59.083+0900    INFO    cmd/keeper.go:1437  our db requested role is standby    {"followedDB": "6b612c86"}
    2018-09-03T16:39:59.083+0900    INFO    cmd/keeper.go:1456  already standby
    2018-09-03 16:39:59.088 KST [8517] FATAL:  the database system is starting up
    2018-09-03T16:39:59.089+0900    ERROR   cmd/keeper.go:936   failed to get replication slots {"error": "pq: the database system is starting up"}
    2018-09-03T16:39:59.091+0900    ERROR   cmd/keeper.go:1510  error updating replication slots    {"error": "pq: the database system is starting up"}
    2018-09-03T16:39:59.094+0900    INFO    cmd/keeper.go:1585  postgres parameters not changed
    2018-09-03T16:39:59.107+0900    INFO    cmd/keeper.go:1596  postgres hba entries not changed
    2018-09-03 16:39:59.296 KST [8513] LOG:  entering standby mode
    2018-09-03 16:39:59.299 KST [8513] LOG:  redo starts at 0/2000028
    2018-09-03 16:39:59.300 KST [8513] LOG:  consistent recovery state reached at 0/2000130
    2018-09-03 16:39:59.300 KST [8512] LOG:  database system is ready to accept read only connections
    2018-09-03 16:39:59.314 KST [8522] LOG:  started streaming WAL from primary at 0/3000000 on timeline 1
    2018-09-03T16:40:04.122+0900    INFO    cmd/keeper.go:1437  our db requested role is standby    {"followedDB": "6b612c86"}
    2018-09-03T16:40:04.124+0900    INFO    cmd/keeper.go:1456  already standby
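    At this point streaming replication can also be confirmed from the master side with plain SQL; `pg_stat_replication` is a standard PostgreSQL view, and AgensGraph here is based on PostgreSQL 10.4, so it should apply:

```shell
# Ask the master (port 5532) which standbys are streaming from it.
PGPASSWORD=supassword ./agens -h 127.0.0.1 -p 5532 -d postgres \
  -c 'SELECT client_addr, state, sync_state FROM pg_stat_replication;'
```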
     
     
    // Check the stolon configuration
    [bylee@localhost stolon-v0.12.0-linux-amd64]$ ./bin/stolonctl --cluster-name=stolon-cluster --store-backend=etcdv3 status
    === Active sentinels ===
     
    ID      LEADER
    5f54d0f0    true
     
    === Active proxies ===
     
    ID
    5553daed
     
    === Keepers ===
     
    UID     HEALTHY PG LISTENADDRESS    PG HEALTHY  PG WANTEDGENERATION PG CURRENTGENERATION
    agensgraph0 true    127.0.0.1:5532      true        3           3  
    agensgraph1 true    127.0.0.1:5533      true        2           2  
     
    === Cluster Info ===
     
    Master: agensgraph0
     
    ===== Keepers/DB tree =====
     
    agensgraph0 (master)
    └─agensgraph1
     
    [bylee@localhost stolon-v0.12.0-linux-amd64]$
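    The state behind this status output lives in etcd and can be inspected there directly. The key prefix below is an assumption based on stolon's default layout (`stolon/cluster/<cluster-name>/`), so verify it against your store first:

```shell
# List stolon's keys in etcd via the v3 API; the key prefix is an
# assumed default, and etcdctl ships in the etcd release archive.
ETCDCTL_API=3 ./etcdctl get --prefix stolon/cluster/stolon-cluster/ --keys-only
```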
     
     
    // Test automatic failover
    // Stop the master keeper (Ctrl-C in its terminal)
    2018-09-03T16:43:12.982+0900    INFO    cmd/keeper.go:1423  already master
    2018-09-03T16:43:12.987+0900    INFO    cmd/keeper.go:1585  postgres parameters not changed
    2018-09-03T16:43:12.987+0900    INFO    cmd/keeper.go:1596  postgres hba entries not changed
    ^C2018-09-03 16:43:16.584 KST [6995] LOG:  received fast shutdown request
    2018-09-03 16:43:16.589 KST [6995] LOG:  aborting any active transactions
    2018-09-03 16:43:16.590 KST [7792] FATAL:  terminating connection due to administrator command
    2018-09-03 16:43:16.596 KST [6995] LOG:  worker process: logical replication launcher (PID 7002) exited with exit code 1
    2018-09-03 16:43:16.604 KST [6997] LOG:  shutting down
    2018-09-03T16:43:16.604+0900    INFO    postgresql/postgresql.go:351    stopping database
    waiting for server to shut down....2018-09-03 16:43:16.630 KST [6995] LOG:  database system is shut down
     done
    server stopped
    [bylee@localhost stolon-v0.12.0-linux-amd64]$
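    After stopping the master keeper, the sentinel's promotion of the standby can be watched by polling `stolonctl status`; a small sketch (the 2-second interval is arbitrary):

```shell
# Poll the cluster status until the Master line changes to the standby;
# stop with Ctrl-C once the promotion is visible.
while true; do
  ./bin/stolonctl --cluster-name=stolon-cluster --store-backend=etcdv3 status \
    | grep '^Master:'
  sleep 2
done
```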
     
     
    // Verify the automatic failover test
    [bylee@localhost stolon-v0.12.0-linux-amd64]$ ./bin/stolonctl --cluster-name=stolon-cluster --store-backend=etcdv3 status
    === Active sentinels ===
     
    ID      LEADER
    5f54d0f0    true
     
    === Active proxies ===
     
    ID
    5553daed
     
    === Keepers ===
     
    UID     HEALTHY PG LISTENADDRESS    PG HEALTHY  PG WANTEDGENERATION PG CURRENTGENERATION
    agensgraph0 false   (no db assigned)                           
    agensgraph1 true    127.0.0.1:5533      true        3           3  
     
    === Cluster Info ===
     
    Master: agensgraph1
     
    ===== Keepers/DB tree =====
     
    agensgraph1 (master)
     
    [bylee@localhost stolon-v0.12.0-linux-amd64]$
     
     
    // Test failback (the agensgraph0 keeper was started again beforehand, so it rejoins as a standby)
    [bylee@localhost stolon-v0.12.0-linux-amd64]$ ./bin/stolonctl --cluster-name=stolon-cluster --store-backend=etcdv3 status
    === Active sentinels ===
     
    ID      LEADER
    5f54d0f0    true
     
    === Active proxies ===
     
    ID
    5553daed
     
    === Keepers ===
     
    UID     HEALTHY PG LISTENADDRESS    PG HEALTHY  PG WANTEDGENERATION PG CURRENTGENERATION
    agensgraph0 true    127.0.0.1:5532      true        2           1  
    agensgraph1 true    127.0.0.1:5533      true        4           4  
     
    === Cluster Info ===
     
    Master: agensgraph1
     
    ===== Keepers/DB tree =====
     
    agensgraph1 (master)
    └─agensgraph0
     
    // Enable synchronous replication so that no transactions are lost during the switch
    [bylee@localhost stolon-v0.12.0-linux-amd64]$ ./bin/stolonctl --cluster-name=stolon-cluster --store-backend=etcdv3 update --patch '{ "synchronousReplication" : true }'
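    Whether the patch took effect can be checked on the current master, agensgraph1 on port 5533: when synchronous replication is on, stolon manages the standard PostgreSQL setting `synchronous_standby_names`, so a non-empty value should appear:

```shell
# Show the synchronous standby list on the current master (port 5533);
# a non-empty value indicates synchronous replication is active.
PGPASSWORD=supassword ./agens -h 127.0.0.1 -p 5533 -d postgres \
  -c 'SHOW synchronous_standby_names;'
```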
     
    // Swap master and standby (re-initialize the cluster from keeper agensgraph0's existing data)
    [bylee@localhost stolon-v0.12.0-linux-amd64]$ ./bin/stolonctl --cluster-name=stolon-cluster --store-backend=etcdv3 init '{ "initMode": "existing", "existingConfig": { "keeperUID": "agensgraph0" } }'
    WARNING: The current cluster data will be removed
    WARNING: The databases managed by the keepers will be overwritten depending on the provided cluster spec.
    Are you sure you want to continue? [yes/no] yes
     
     
    // Verify the failback test
    [bylee@localhost stolon-v0.12.0-linux-amd64]$ ./bin/stolonctl --cluster-name=stolon-cluster --store-backend=etcdv3 status
    === Active sentinels ===
     
    ID      LEADER
    5f54d0f0    true
     
    === Active proxies ===
     
    ID
    5553daed
     
    === Keepers ===
     
    UID     HEALTHY PG LISTENADDRESS    PG HEALTHY  PG WANTEDGENERATION PG CURRENTGENERATION
    agensgraph0 true    127.0.0.1:5532      true        3           3  
    agensgraph1 true    127.0.0.1:5533      true        2           2  
     
    === Cluster Info ===
     
    Master: agensgraph0
     
    ===== Keepers/DB tree =====
     
    agensgraph0 (master)
    └─agensgraph1
