9.0 Streaming Replication vs Slony

Posted: October 9, 2010 in postgresql

Slony 2.0.5 was just released. This version is the result of a lot of testing of the 2.0.x branch, and I feel that 2.0.5 is now more stable and a better choice than 1.2.x for most deployments.

PostgreSQL 9.0 (which includes streaming replication) is also now out.

Some people are asking whether Slony or streaming replication is the better choice for them. Here is my take on the issue.

Both streaming replication and Slony are asynchronous. If you need synchronous replication, then you will have to wait at least until 9.1. However, if you’re looking for asynchronous replication, then ask yourself the following questions:

  1. Are my master and slave running the same version of PostgreSQL on the same platform?
  2. Does my slave only need to perform failover or read-only queries?
  3. Do I need only one slave? (Multiple slaves can consume the same WAL segments, but it is unclear to me how you would keep the second slave after failing over.)
  4. Do I want my slave to be identical to the master (no extra tables, no extra databases, no extra indices)?

If the answer to all of the above questions is yes, then streaming replication in 9.0 might be a good choice for you.
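
To make that concrete, here is a rough sketch of what a 9.0 streaming replication setup involves. The hostnames, addresses, user, and settings below are examples of mine, not anything from the post:

    # postgresql.conf on the master (9.0)
    wal_level = hot_standby        # 'archive' is enough if the slave won't serve queries
    max_wal_senders = 3            # connection slots for standbys

    # pg_hba.conf on the master: allow the slave to connect for replication
    # (address and user are examples; in 9.0 this user must be a superuser)
    host  replication  repuser  192.168.1.0/24  md5

    # on the slave: restore a base backup of the master, set hot_standby = on
    # in postgresql.conf if it should serve read-only queries, and create
    # recovery.conf containing:
    standby_mode = 'on'
    primary_conninfo = 'host=master.example.com port=5432 user=repuser'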

However, if the answer to any of the following questions is yes:

  1. Are my master and slave on different hardware platforms?
  2. Do I want to add some tables for reporting on my slave?
  3. Do I have multiple databases on my master but want to replicate only some of them to the slave?
  4. For security reasons, do I want tables to have different permissions on my slave than on my master?
  5. Do I want to be able to take my master down for hardware maintenance, and afterwards have it take over from the slave again without having to re-copy my entire database?
  6. Do I want to replicate from A==>B and then have B replicate to C and D?
  7. Can I live without automatic DDL replication?

then Slony (or another trigger-based replication system) might be a better choice for you. It is unlikely that the WAL-based replication built into PostgreSQL will ever be able to handle many of these use cases. I see many situations where trigger-based replication is appropriate, and I don’t see this changing with 9.0 or 9.1.
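
For comparison, here is a minimal slonik sketch of a Slony setup replicating one table between two nodes. The cluster name, conninfos, and table are hypothetical; you would also need a slon daemon running for each node, and every replicated table needs a primary key:

    # setup.slonik -- replicate one table from node 1 to node 2 (example names)
    cluster name = mycluster;
    node 1 admin conninfo = 'dbname=app host=db1 user=slony';
    node 2 admin conninfo = 'dbname=app host=db2 user=slony';

    init cluster (id = 1, comment = 'master node');
    store node (id = 2, comment = 'slave node', event node = 1);
    store path (server = 1, client = 2, conninfo = 'dbname=app host=db1 user=slony');
    store path (server = 2, client = 1, conninfo = 'dbname=app host=db2 user=slony');

    create set (id = 1, origin = 1, comment = 'replicated tables');
    set add table (set id = 1, origin = 1, id = 1,
                   fully qualified name = 'public.accounts');

    # forward = yes on a subscriber lets it feed further nodes,
    # as in the A==>B with B feeding C and D case above
    subscribe set (id = 1, provider = 1, receiver = 2, forward = no);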

Comments
  1. Baoli says:

    Great, copied for later reference.

    BMA

  2. Venu says:

    Good analysis, very useful for PostgreSQL DBAs.

  3. Venu says:

    Can we configure the same database with streaming replication to a cold standby server and Slony-I to replicate a few tables to 14 slaves?

  4. yesudas says:

    Hi,

    Thanks for the info you have given. I need to know: with streaming replication, is there any way to avoid replicating deletes on a table from the master to the slave? With streaming replication, whatever is written to the master’s WAL file is replicated on the slave. Is there any way out?

    • That is not possible with streaming replication. If you don’t want to replicate deletes to a slave, then you should use a trigger-based replication solution such as Slony.

      • yesudas says:

        Thanks for the reply, but for my application Slony’s (i.e. trigger-based) performance is very low; it is not handling the load. I am able to get the performance I need with streaming replication. Anyway, thanks.

  5. […] replication‘ where I will give an overview of built in replication and Slony. My blog post comparing Slony and 9.0 replication is by far the most popular post on this blog, and the talk will […]

  6. Elyes says:

    Indeed it is a good analysis. However, I am still struggling with the issue that drew me to this page.
    I have two datacenters, DC1 and DC2.
    In DC1, I need to have 3 instances of the same DB (DBDC1-1, DBDC1-2 and DBDC1-3) and I need them to be at the same level all the time. So far I have considered using pgpool with the 3 DBs as masters.
    This DB needs to be replicated to DC2 on 3 instances as well (DBDC2-1, DBDC2-2, and DBDC2-3), and of course these 3 instances need to be at the same level all the time.
    Does anybody have a solution for how to do this?
    Cheers

  7. If I want to do DDL replication, what tools can I use?

  8. igorS says:

    Thank you, scanningpages, for this interesting article.

    Can you please help with advice?
    I need to design a solution for a database which will grow and will require a horizontal split at some point.
    Here is how I am planning to do it: every record in every table has a shard number. After the database becomes too large (gets too many requests), the tables need to be split horizontally. That means that for every table, all records with certain shard numbers need to be moved to a new database.
    My plan is to (1) create a copy of the database on a new server (backup/restore?), (2) synchronize the databases (how?), and then (3) forward all requests for the moved shards to the new database. (I will get rid of the unneeded shards later.) An important requirement: there should be no outage for the users.

    What are the ways to achieve it?

    Thank you in advance,
    -igorS

  9. igorS says:

    Thank you, scanningpages, for the links.

    Basically my question boils down to this one:
    How to make a copy of a Postgres database, start it on another server, synchronize it, and switch some of the users to the new database?
    Can streaming replication help there?

    It can be easily done with MSSQL Server, but unfortunately I am not that familiar with Postgres…

    Thank you once again,
    -igorS

    • Streaming replication will allow you to copy the initial database and keep the copy synchronized, but to turn the copy into a database that you can make changes on, you will need to shut it down and restart it. You might be able to minimize the downtime, but it isn’t possible to do this with no outage at all. You will need to stop write activity on the old database (at least for the users in the shard being moved), let the replication slave catch up, and then promote the copy to be a writable master; only then will you be able to resume edits for the users in the moved shard.
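
      For what it’s worth, on 9.0 the promotion step is driven by a trigger file named in recovery.conf; here is a minimal sketch, where the hostname and trigger-file path are only examples:

          # recovery.conf on the copy (9.0); hostname and path are examples
          standby_mode = 'on'
          primary_conninfo = 'host=oldserver.example.com port=5432 user=repuser'
          trigger_file = '/tmp/promote_standby'

          # once writes have stopped and the copy has caught up, create the
          # trigger file to promote the copy to a writable master:
          touch /tmp/promote_standby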

  10. igorS says:

    @scanningpages: thanks a lot 🙂 It totally makes sense. Or I can try something like Londiste… but that might be slower… I have not studied it yet.

    Best wishes, and thank you again,
    -igorS

  11. Eduardo says:

    IgorS, you can use pg_basebackup to copy the whole database to another server and set up automatic sync; check http://www.postgresql.org/docs/9.1/static/app-pgbasebackup.html
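
    For example, something like the following takes a base backup over a replication connection, including the WAL needed to start the copy (host, user, and data directory are placeholders):

        # example invocation only; pg_basebackup ships with PostgreSQL 9.1
        pg_basebackup -h master.example.com -U repuser \
            -D /var/lib/pgsql/9.1/data -P -x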

  12. Are there any issues with using Slony AND WAL-based replication at the same time on the same server? We would like to use Slony, but the server already has WAL replication running and our client is nervous about any possible interference.
    Your reply would be highly appreciated.

    • By WAL I assume you mean replication via WAL shipping/hot standby.
      You can use both of them together, but if you have to fail over from your master to a standby populated from WAL shipping/hot standby, you might not be able to continue using that standby as a Slony master. If the Slony replica node is further ahead than the WAL-shipping standby node (which can happen), then you will have problems after failover.
      If your failover procedure is to fail over to the WAL-shipping node and then uninstall/reinstall Slony, you should be fine.
      You also don’t have this problem with synchronous built-in replication.

  13. Brad says:

    Hi. I don’t understand your ‘yes question #5’ –

    “5. Do I want to be able to take my master down for hardware maintenance, and afterwards have it take over from the slave again without having to re-copy my entire database?”

    The Slony 2.1 documentation seems adamant that a failed master has to be rebuilt from scratch?

    In section 3.4.6 of the documentation it states:

    – Nonetheless, once communications are restored, the fact of the FAIL OVER makes it mandatory that the failed node be abandoned.

    and

    – Therefore, the only way to get node 1 back and transfer the origin role back to it is to rebuild it from scratch as a subscriber, let it catch up, and then follow the switchover procedure.

    This restriction is what’s going to stop me from using Slony; I’m looking for the ability to easily and quickly switch a failed master back into the system of replicated databases. So I’d very much appreciate your giving us some more details as to why you say ‘yes’ to your question #5, if you would be so kind. Thanks!

    • You don’t want to ‘FAIL’ your master. If you are taking it down for maintenance, you want to use the ‘MOVE SET’ command to move your sets from the first node to the second node. This requires that both nodes still be operating. Then you can take the first node down, because it is no longer a master; it is just a slave. Once you’re done, you can use the MOVE SET command again to move the sets back to your first node.
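
      For illustration, a planned switchover in slonik looks roughly like this (cluster name, node numbers, and conninfos are examples, not from this thread):

          # switchover.slonik -- hand the origin role for set 1 to node 2
          cluster name = mycluster;
          node 1 admin conninfo = 'dbname=app host=db1 user=slony';
          node 2 admin conninfo = 'dbname=app host=db2 user=slony';

          lock set (id = 1, origin = 1);                      # block writes on the old origin
          move set (id = 1, old origin = 1, new origin = 2);  # node 2 becomes the master
          wait for event (origin = 1, confirmed = 2, wait on = 1);

      Running the same script with the node roles swapped moves the set back once maintenance is finished.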

      • Brad says:

        In this scenario I’m looking at using Slony to provide a ‘fault-tolerant’ system of a master and a slave, with the slave being promoted to master if the original master experiences an unscheduled failure of some sort. So I’m then faced with how easy it is to get the original master back online with Slony.

        From what you and the documentation say it seems that the original master will have to be rebuilt from scratch?

        Thanks for your reply.

      • That is correct. If you have an unscheduled failure of the master, then that old master will need to be rebuilt from scratch before it can be added back to the cluster.

  14. Alex says:

    With Slony, can I replicate just specific tables within a PG DB? For example, I have 10 tables in public in two databases on similar hardware (AWS EC2/CentOS). I just want to replicate the user and sessions tables from A -> B, while leaving the other 8 tables alone on both databases… is this possible?
