
I'm going to store a large amount of data (logs) in partitioned PostgreSQL tables (one table per day). I would like to compress some of them to save space on my disks, but I don't want to lose the ability to query them in the usual manner.

Does PostgreSQL support such transparent compression, and where can I read about it in more detail? I think there should be some well-known magic name for such a feature.

asked by Prikrutil; edited by Erwin Brandstetter
  • Not specifically what you asked for, but it may be of some help with large data sets: PostgreSQL supports table inheritance, allowing you to partition your data, for example by date ranges or some other logic (see the sketch below). – Eric Sep 14 '09 at 16:39
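
For readers unfamiliar with the approach Eric mentions, here is a minimal sketch of inheritance-based partitioning in the style of that era's PostgreSQL. All table and column names are made up for illustration:

    -- Parent table; children inherit its schema, and it normally holds no rows.
    CREATE TABLE logs (
        logged_at timestamptz NOT NULL,
        message   text
    );

    -- One child table per day; the CHECK constraint lets the planner skip
    -- irrelevant children when constraint_exclusion is enabled.
    CREATE TABLE logs_2009_09_14 (
        CHECK (logged_at >= '2009-09-14' AND logged_at < '2009-09-15')
    ) INHERITS (logs);

    -- Queries against the parent transparently cover all children:
    SELECT count(*) FROM logs WHERE logged_at >= '2009-09-14';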

2 Answers


Yes, PostgreSQL will do this automatically for you when values go above a certain size (the mechanism is called TOAST). Compression is applied to each individual data value, though, not at the full-table level. That means if you have a billion very narrow rows, they won't get compressed; likewise, if you have very many columns, each holding only a small value, they won't get compressed. Details about this scheme are in the manual.
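
If you want to influence that per-value behaviour, you can change a column's storage mode. A hedged example; the logs table and message column are assumptions, not something from the question:

    -- EXTENDED (the default for text) permits both compression and
    -- out-of-line storage in the TOAST table:
    ALTER TABLE logs ALTER COLUMN message SET STORAGE EXTENDED;

    -- EXTERNAL would keep out-of-line storage but disable compression:
    -- ALTER TABLE logs ALTER COLUMN message SET STORAGE EXTERNAL;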

If you need it at the full-table level, a solution is to create a TABLESPACE for the tables you want compressed and point it to a compressed filesystem. As long as the filesystem still obeys fsync() and standard POSIX semantics, this should be perfectly safe. Details about this are in the manual.
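
A sketch of that setup, assuming a compressed filesystem is already mounted at /mnt/compressed_fs (the path and all names here are hypothetical; the directory must be owned by the postgres OS user):

    -- Tablespace backed by the compressed filesystem:
    CREATE TABLESPACE compressed_logs LOCATION '/mnt/compressed_fs';

    -- Place a day's log table on it:
    CREATE TABLE logs_2009_09_15 (
        logged_at timestamptz NOT NULL,
        message   text
    ) TABLESPACE compressed_logs;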

answered by Magnus Hagander; edited by pd40

Probably not what you have in mind, but still useful info: Chapter 53, Database Physical Storage, of the fine manual. The TOAST section warrants further attention.
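
To see TOAST's effect on your own tables, you can compare the main heap size with the total size; the difference includes whatever has been moved out of line (the table name is assumed):

    -- pg_relation_size counts only the main heap; pg_total_relation_size
    -- also includes the TOAST table and indexes.
    SELECT pg_size_pretty(pg_relation_size('logs'))       AS heap_only,
           pg_size_pretty(pg_total_relation_size('logs')) AS with_toast_and_indexes;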

answered by Milen A. Radev