A few people (myself included) have recently commented on how slon on x64 Linux systems tends to use a lot more memory than one would expect. Slony clusters with 3-5 nodes seem to have slon processes consuming 100-180 megs of memory (VmData).
Slon is a multi-threaded process that has a set of threads managing the local database (cleanupThread, localListener, syncThread, and the main thread) and a pair of threads (a remoteWorker and a remoteListener) for each remote database that slon has a path to. I should write another post explaining what each of these does.
Each thread created in a multi-threaded process has memory set aside for that thread's stack. A program's memory is divided into a stack (well, one stack per thread) and a heap. Variables declared at function scope go on the stack; when a function calls another function, the stack is 'pushed down' so the new function can put its variables on top of the stack. Dynamically allocated memory goes on the heap. Good programming practice says to put only small things on the stack, such as pointers to things on the heap, single numeric values, or maybe a small string. Larger variables should go on the heap.
In my early days of programming I remember writing programs on MS-DOS that would occasionally run out of stack space. The program would exit with an error like 'stack space exceeded' and you would need to figure out what was going on. Often these errors were caused by infinite recursion, where a function keeps calling itself (which in turn calls itself) over and over again. Sometimes it was because I tried to allocate lots of large arrays on the stack. One learned not to allocate larger objects on the stack. I think the default stack size was in the range of 8k.
So how much stack memory is each of my slon threads actually consuming? pthreads allows you to adjust this through the API, but the default amount on Linux is governed by the RLIMIT_STACK setting. So what is RLIMIT_STACK on my x64 Ubuntu laptop system? 8 megabytes, which for a cluster with four remote nodes works out to:
8 MB × (4 local threads + 2 × 4 remote threads) = 8 × 12 = 96 megs
That is just stack space, and it is mostly unused since most data in slony is allocated on the heap. Why is my Ubuntu system giving me an 8 meg default per-thread stack? In a single-threaded process, RLIMIT_STACK specifies the maximum size that the stack can grow to; programs that don't need stacks that large won't have stacks that large. The problem is that in a multi-threaded process, pthreads needs to give each thread a contiguous stack when it creates the thread. The stack for each thread can't grow, because it might grow into the next one. Properly written multi-threaded programs tend not to use large stacks. Good C programmers know not to put large things on the stack in a multi-threaded program, and higher-level languages such as Perl, Java, and Python go to great pains to implement memory management for heap data.
I question the rationale behind making the default RLIMIT_STACK setting equal to 8 megabytes; both my Ubuntu and Debian systems have this. If RLIMIT_STACK were unlimited, then the Linux implementation of pthreads would give me 2 megabytes of stack per thread. I agree it makes sense to have a limited stack size by default (think of programs that have infinite recursion bugs), but the challenge is figuring out what the stack size should be for a multi-threaded program. The problem is made worse because stack overflows in a multi-threaded program don't generate exceptions but instead silently write into some other part of memory. Setting the default stack size to a high value like 8 megabytes seems like a cop-out that encourages developers to not pay attention to their stack usage.
So if your slon processes are using more memory than you feel they should, check what your stack segment size is configured as. On Unix systems you can adjust this with the ulimit -s command. You should also keep in mind that stack pages that aren't actually being touched by slony will tend to not occupy space in the resident set (use up real memory). In a future version of slony we should look at setting a per-thread stack size that is much smaller, but we first need to do some analysis to get a sense of how small a stack we can get away with.