Is it possible to setup a Postgres Database to perform 1000 writes per second in development on my local machine?
Try it and see. If you've got an SSD, or don't require crash safety, then you almost certainly can.
You will find that, whatever you choose, you have to make trade-offs between durability and write latency.
If you want to commit each record individually, in strict order, you should be able to achieve that on a laptop with a decent SSD. You won't get it on something like a cheap AWS instance or a server with a spinning-rust hard drive, though, because they don't have good enough disk flush rates (`pg_test_fsync` is a handy tool for measuring this). This is true of anything that does genuine atomic commits of individual records to durable storage, not just PostgreSQL: roughly the best rate you'll get is the maximum disk flush rate divided by two, unless it's a purely append-only system, in which case the commit rate can equal the disk flush rate.
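As a rough sketch of what "try it and see" could look like (assuming psycopg2; the database, table, and row count are placeholders, not anything from the question), you can time individually committed inserts, which is the flush-bound case described above:

```python
# Rough sketch (untested): measure individually committed inserts/sec.
# "testdb" and the "bench" table are placeholder names.
import time
import psycopg2

conn = psycopg2.connect("dbname=testdb")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS bench (id serial PRIMARY KEY, payload text)")
conn.commit()

n = 5000
start = time.time()
for i in range(n):
    cur.execute("INSERT INTO bench (payload) VALUES (%s)", (f"row {i}",))
    conn.commit()  # one commit per record, so roughly one disk flush per record
elapsed = time.time() - start
print(f"{n / elapsed:.0f} individually committed writes/sec")
```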
If you want higher throughput, you'll need to batch writes together and commit them in groups to spread out the disk sync overhead. In PostgreSQL, the `commit_delay` option can be useful for batching commits together. Better still, buffer a few changes client-side and do multi-valued inserts. You can also turn off `synchronous_commit` for a transaction if you don't need a hard guarantee that it's committed before control returns to your program.
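Here's a hedged sketch of the batching approach, again with placeholder names: psycopg2's `execute_values` turns a client-side buffer into multi-valued `INSERT` statements, and `SET LOCAL synchronous_commit = off` relaxes durability for just that transaction.

```python
# Rough sketch (untested): buffer rows client-side, insert them in batches,
# and commit once per batch instead of once per row.
import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect("dbname=testdb")  # placeholder DSN
cur = conn.cursor()

rows = [(f"row {i}",) for i in range(10_000)]
batch_size = 500  # placeholder; tune against your own workload

for start in range(0, len(rows), batch_size):
    batch = rows[start:start + batch_size]
    # Optional: relax durability for this transaction only, if losing the
    # last few batches after a crash is acceptable.
    cur.execute("SET LOCAL synchronous_commit = off")
    execute_values(cur, "INSERT INTO bench (payload) VALUES %s", batch)
    conn.commit()  # at most one flush per batch rather than per row
```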
I haven't tested it, but I expect Heroku will let you set both of these parameters on your sessions using `SET synchronous_commit = off` or `SET commit_delay = ...`. You should test it and see. In fact, you should run a simulated workload benchmark and see if you can make it go fast enough for your needs. If you can't, you'll be able to find alternative hosting that, with appropriate configuration, can.
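A sketch of what trying those session-level settings from a client might look like (the DSN and the `commit_delay` value are placeholders, and whether your host actually permits these `SET`s is exactly the thing to verify):

```python
# Rough sketch (untested): apply session-level settings, then rerun the
# same simulated-workload benchmark and compare throughput.
import psycopg2

conn = psycopg2.connect("postgres://user:pass@host:5432/dbname")  # placeholder DSN
cur = conn.cursor()
cur.execute("SET synchronous_commit = off")
cur.execute("SET commit_delay = 10000")  # microseconds; example value only
conn.commit()
# ... now run the simulated workload and measure writes/sec ...
```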
See also: How to speed up insertion performance in PostgreSQL