
I'm running 64-bit PostgreSQL 9.1 on Windows Server. I'm trying to improve its performance, especially for handling heavy writing. I used to increase shared_buffers to 25% of RAM, and since I got 32GB of RAM I decided to set shared_buffers to 8GB. While searching for more info I came across this post: http://rhaas.blogspot.com/2012/03/tuning-sharedbuffers-and-walbuffers.html

It says: "but not more than about 8GB on Linux or 512MB on Windows, and sometimes less."

Now I'm confused. What's the point of increasing RAM if it won't help improve PostgreSQL performance?!

The other values will be as follows:

work_mem = 160MB
maintenance_work_mem = 1920MB
checkpoint_segments = 100
checkpoint_completion_target = 0.9
checkpoint_timeout = 1h
wal_buffers = 64MB
effective_cache_size = 22GB
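
For reference, here is how that full set would look in postgresql.conf, but with shared_buffers lowered to the 512MB ceiling the cited post suggests for Windows. This is a sketch of that advice applied to this machine, not a tested recommendation:

```
# postgresql.conf -- PostgreSQL 9.1 on Windows Server, 32GB RAM (sketch)
shared_buffers = 512MB            # per the cited post's Windows ceiling; 8GB is the Linux-side figure
work_mem = 160MB                  # per sort/hash operation, multiplied by concurrent operations
maintenance_work_mem = 1920MB     # used by VACUUM, CREATE INDEX
checkpoint_segments = 100         # 16MB WAL segments written between checkpoints
checkpoint_completion_target = 0.9
checkpoint_timeout = 1h
wal_buffers = 64MB
effective_cache_size = 22GB       # planner hint only; allocates no memory
```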
Shad
  • The general assumption is that the management overhead of large shared_buffers can make things slower. You may also want to read this: http://www.keithf4.com/a-large-database-does-not-mean-large-shared_buffers/ and this http://evol-monkey.blogspot.de/2013/08/setting-sharedbuffers-hard-way.html and this http://www.cybertec.at/postgresql-9-3-shared-buffers-performance-1/ You might also want to upgrade to a more up-to-date version, as there were some improvements in memory management in 9.3 – a_horse_with_no_name Jan 04 '15 at 08:43
  • `shared_buffers` is also just not as helpful for write loads. You want to focus more on checkpoint spreading, on effective batching in the application, on turning off `synchronous_commit` where safe (preferably at the app level per-transaction), etc. There's no magic "go faster" config option and appropriate configuration depends on the workload. For inserting and bulk loading see http://stackoverflow.com/questions/12206600/how-to-speed-up-insertion-performance-in-postgresql – Craig Ringer Jan 04 '15 at 10:29
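
As an illustration of the per-transaction approach mentioned in the comment above, here is a minimal SQL sketch (the `app_log` table is hypothetical). `SET LOCAL` scopes the change to the current transaction, so a crash can lose only that commit; it never corrupts data:

```
-- Sketch: trade durability for speed on one non-critical transaction.
BEGIN;
SET LOCAL synchronous_commit TO OFF;   -- applies only to this transaction
INSERT INTO app_log (message) VALUES ('non-critical event');  -- app_log is hypothetical
COMMIT;   -- returns without waiting for the WAL flush to disk
```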

1 Answer


For a write-heavy Windows server, the most important setting to adjust is checkpoint_segments. Your value is fairly high already, but you may want to experiment with values up to 256.

From the PostgreSQL performance tuning guide:

PostgreSQL writes new transactions to the database in files called WAL segments that are 16MB in size. Every time checkpoint_segments worth of these files have been written, by default 3, a checkpoint occurs. Checkpoints can be resource intensive, and on a modern system doing one every 48MB will be a serious performance bottleneck. Setting checkpoint_segments to a much larger value improves that. Unless you're running on a very small configuration, you'll almost certainly be better setting this to at least 10, which also allows usefully increasing the completion target.

For more write-heavy systems, values from 32 (checkpoint every 512MB) to 256 (every 4GB) are popular nowadays. Very large settings use a lot more disk and will cause your database to take longer to recover, so make sure you're comfortable with both those things before large increases. Normally the large settings (>64/1GB) are only used for bulk loading. Note that whatever you choose for the segments, you'll still get a checkpoint at least every 5 minutes unless you also increase checkpoint_timeout (which isn't necessary on most systems).
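
Spelling out the arithmetic from the quote with the values in play here (WAL segments are 16MB each, so checkpoint spacing is roughly checkpoint_segments × 16MB), here is a sketch of the suggested upper bound in postgresql.conf terms, purely as an illustration of the quoted guidance:

```
# checkpoint spacing ≈ checkpoint_segments × 16MB
#   default     3 segments ->   48MB between checkpoints
#   current   100 segments ->  1.6GB between checkpoints
#   suggested 256 segments ->    4GB between checkpoints (bulk-load territory;
#                                expect more disk use and longer crash recovery)
checkpoint_segments = 256
checkpoint_completion_target = 0.9
checkpoint_timeout = 1h   # already raised here; otherwise a checkpoint still fires every 5min
```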

Haydentech