New Features in Slony 2.1

Posted: July 21, 2011 in postgresql

Last week the Slony team released beta3 of Slony 2.1.0. I thought it would be a good idea to blog about some of the changes we have made in Slony 2.1. My personal theme for this release has been usability. I have overheard people complaining about the usability of Slony and hope that these changes go toward improving it.

Bulk Adding of Tables
In previous Slony versions you had to issue a ‘set add table’ or ‘set add sequence’ command for every table or sequence you wanted to replicate. We have added the ability to use regular expressions to specify a set of tables (or sequences). For example:

set add table (set id=1, tables='public.*');
set add sequence (set id=1, sequences='public.foo_.*seq');
set add table (set id=1, tables='public.foo_[1234]', add sequences=true);

Improved Monitoring
A new table, sl_component, has been added to Slony that can be used to get insight into the current activities of your slon process.
On each node where a slon is running, sl_component will contain a row for each active thread. As the components of slon do interesting things they will update sl_component.
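You can query the table directly to see what each thread has been up to. A minimal sketch, assuming a cluster named mycluster (Slony keeps its catalog in a schema named after the cluster, prefixed with an underscore):

-- inspect current slon thread activity; "mycluster" is a placeholder cluster name
select * from _mycluster.sl_component;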

Implicit Wait For
Properly written slonik scripts are filled with ‘wait for event(…)‘ commands. The problem is that few people understand when they are needed. As a result, a lot of people leave out ‘wait for event‘ commands where they are required. Slony 2.1.0 will implicitly perform a ‘wait for event’ between other commands within a slonik script where one is required. When using slonik you should keep a few things in mind:

  • Having multiple slonik scripts running at the same time is a bad idea and will confuse the implicit wait for behaviour.
  • Slonik will perform an implicit wait for in between slonik commands whenever the event node changes.
  • Before slonik submits a SUBSCRIBE SET command it will wait until the provider node is caught up with all other nodes in the cluster.
  • Before slonik submits a DROP NODE event it will wait until all other nodes are caught up.
  • Before slonik submits a CLONE NODE command slonik will wait until the node being cloned is caught up.
  • Before a CREATE SET command slonik will wait until any outstanding DROP SET commands have been confirmed by the entire cluster.
  • wait for event commands have never worked in a TRY block. If you have commands inside of a try block that require a wait for event then slonik will now abort your script from inside the TRY block. This means that some scripts that worked in previous versions of Slony will now abort.

What this means is that in the above situations slonik will wait until your cluster is caught up before submitting commands. For example, if you have a node that is a few hours behind and want to drop a different node, that drop node command won’t happen until the lagging node has caught up. The implicit ‘wait for’ behaviour can be disabled with a slonik command-line switch, but remember that we made these changes to avoid configuration race conditions.
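To illustrate, here is a sketch of a cascaded subscription under 2.1; the cluster name, conninfo strings, and node numbers are placeholders:

cluster name = mycluster;
node 1 admin conninfo = 'dbname=master host=db1 user=slony';
node 2 admin conninfo = 'dbname=replica1 host=db2 user=slony';
node 3 admin conninfo = 'dbname=replica2 host=db3 user=slony';

subscribe set (id = 1, provider = 1, receiver = 2, forward = yes);
# in earlier versions an explicit wait was required here before cascading
# from node 2, e.g. wait for event (origin = 1, confirmed = all, wait on = 1);
# in 2.1 slonik performs the equivalent wait implicitly
subscribe set (id = 1, provider = 2, receiver = 3, forward = no);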

TRUNCATE TABLE support
If you’re running against PostgreSQL 8.4 or higher, Slony will now replicate TRUNCATE commands.
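For example, assuming public.accounts is a hypothetical table in a replication set, a truncate issued against the origin node will now be replayed on the subscribers:

-- run on the origin; under 2.1 the truncate propagates to subscribers
truncate table public.accounts;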

Better performance when Slony is behind
Many people have experienced situations where a node in their cluster gets behind. When this happens the sl_log tables grow, and pulling data from a provider to a subscriber takes so long that the cluster can’t catch up. The query to pull data from sl_log_1 or sl_log_2 was causing a sequential scan of a large table. The queries used for pulling data from the sl_log tables have been modified; they should now use the index when backlogged, resulting in better performance.
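If you want to watch for backlog yourself, one rough check is the size of the sl_log tables; a sketch, again assuming a cluster named mycluster:

-- growing sl_log tables are a sign that a subscriber is falling behind
select pg_size_pretty(pg_total_relation_size('_mycluster.sl_log_1')),
       pg_size_pretty(pg_total_relation_size('_mycluster.sl_log_2'));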

These are not the only changes in Slony 2.1.0, just some of my favourite picks. Now is a great time to try the beta out and let us know what you think. The more people try 2.1.0 while it is a beta or release candidate, the less chance there will be of bugs making it into the 2.1.0 release.

I look forward to reading reports of both successful tests and bugs found during testing.

Note: The regular expressions in my ‘set add table’ examples don’t do exactly what most people intend. There is a ‘bug’ in my example, and fixing it might expose a bug in 2.1.0 beta3. Can anyone spot it by testing?

Comments
  1. Radovan says:

    Nice overview

  2. sid says:

    Where can I get a tutorial for Implementing Failover with Slony? (Masterdb 8.4 & Slave 9.0)
