
I'm developing a Rails 3 app that uses Postgres as its database. I've got the table shown below:

             Table "public.test"
    Column     |  Type   | Modifiers
---------------+---------+-----------
 id            | integer | not null
 some_other_id | integer |
Indexes:
    "test_pkey" PRIMARY KEY, btree (id)
    "some_other_id_key" UNIQUE CONSTRAINT, btree (some_other_id)

This has two columns:

  • id, which is the primary key (automatically created by rails)
  • some_other_id, which contains keys generated by another system. This id needs to be unique, so I've added a unique key constraint to the table.

Now if I try to insert a row with a duplicate some_other_id, it fails (good) and I get the following output in my Postgres logs:

ERROR:  duplicate key value violates unique constraint "some_other_id_key"

The problem is that it's completely mainline for my app to try and add the same ID twice, and my logs are being spammed with this "ERROR" message, which causes various problems: files take a lot of disk space, diagnostics get lost in the noise, Postgres has to throw away diags to keep the log files within size limits, etc.

Does anyone know how I can either:

  • Suppress the log, either by suppressing all logs about this key, or perhaps by specifying something on the transaction that tries to do the INSERT.
  • Use some other Postgres feature to spot the duplicate key and not try the INSERT. I've heard of rules and triggers but I can't get either to work (though I'm no Postgres expert).

Note that any solution needs to work with Rails, which does its inserts like this:

INSERT INTO test (some_other_id) VALUES (123) RETURNING id;
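The failure mode can be reproduced outside Rails. A minimal sketch, using Python's sqlite3 standard library as a stand-in for Postgres (the exception class and log output differ, but a second insert of the same unique value fails the same way):

```python
import sqlite3

# In-memory stand-in for the "test" table with a unique column.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE test (id INTEGER PRIMARY KEY, some_other_id INTEGER UNIQUE)"
)

conn.execute("INSERT INTO test (some_other_id) VALUES (123)")  # first insert succeeds

try:
    conn.execute("INSERT INTO test (some_other_id) VALUES (123)")  # duplicate fails
except sqlite3.IntegrityError as e:
    # In Postgres this is the point where the ERROR line lands in the server log.
    print("rejected:", e)
```
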
Erwin Brandstetter
Alex Hockey
  • "The problem is that it's completely mainline for my app to try and add the same ID twice." This seems strange to me. Why would the app try to insert duplicate values, and why do you think it's normal? – wildplasser Sep 12 '12 at 10:21
  • BTW your insert `INSERT INTO test (some_other_id) VALUES (123) RETURNING id;` does not supply a value for id (which is NOT NULL), so it should **also violate the NOT NULL constraint** for id. Did you give us the real table definition? is id a serial, by any chance? – wildplasser Sep 12 '12 at 10:37

3 Answers


To avoid the duplicate key error to begin with:

INSERT INTO test (some_other_id)
SELECT 123
WHERE  NOT EXISTS (SELECT 1 FROM test WHERE some_other_id = 123)
RETURNING id;

I am assuming id is a serial column that gets its value automatically.

This is subject to a tiny race condition (in the window between the SELECT and the INSERT). But the worst that can happen is that you get a duplicate key error after all; that will hardly ever occur and shouldn't be a problem in your case.
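The guarded INSERT can be demonstrated end to end. This sketch uses Python's sqlite3 module as a stand-in for Postgres; sqlite also accepts the `INSERT ... SELECT ... WHERE NOT EXISTS` form, though the new id is read back separately here instead of via `RETURNING`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE test (id INTEGER PRIMARY KEY, some_other_id INTEGER UNIQUE)"
)

def guarded_insert(conn, some_other_id):
    # Insert only if no row with this some_other_id exists yet.
    conn.execute(
        "INSERT INTO test (some_other_id) "
        "SELECT ? WHERE NOT EXISTS (SELECT 1 FROM test WHERE some_other_id = ?)",
        (some_other_id, some_other_id),
    )

guarded_insert(conn, 123)
guarded_insert(conn, 123)  # silently a no-op: no unique-violation error is raised

rows = conn.execute(
    "SELECT count(*) FROM test WHERE some_other_id = 123"
).fetchone()[0]
print(rows)  # -> 1
```

The second call finds an existing row, so the INSERT inserts nothing and no error reaches the log.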

You can always fall back to raw SQL if your framework makes it hard to generate this syntax.

Or you can create a UDF (user defined function) for the purpose:

CREATE FUNCTION f_my_insert(int)
 RETURNS int LANGUAGE SQL AS
$func$
INSERT INTO test (some_other_id)
SELECT $1
WHERE  NOT EXISTS (SELECT 1 FROM test WHERE some_other_id = $1)
RETURNING id;
$func$;

Call:

SELECT f_my_insert(123);

Or, to default to an already existing id:

CREATE FUNCTION f_my_insert(int)
 RETURNS int LANGUAGE plpgsql AS
$func$
DECLARE
   _id int;
BEGIN

SELECT id INTO _id FROM test WHERE some_other_id = $1;

IF NOT FOUND THEN
   INSERT INTO test (some_other_id)
   VALUES ($1)
   RETURNING id
   INTO _id;
END IF;

RETURN _id;

END
$func$;

Again, that leaves a minimal chance for a race condition. You can eliminate that at the cost of slower performance, for instance by locking the table before the check.
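The select-or-insert logic of the function above can also live in application code. A sketch, again with Python's sqlite3 standing in for Postgres and `cursor.lastrowid` replacing `RETURNING id` (the same small race window applies):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE test (id INTEGER PRIMARY KEY, some_other_id INTEGER UNIQUE)"
)

def my_insert(conn, some_other_id):
    # Return the existing id, or insert a new row and return its id.
    row = conn.execute(
        "SELECT id FROM test WHERE some_other_id = ?", (some_other_id,)
    ).fetchone()
    if row is not None:
        return row[0]
    cur = conn.execute(
        "INSERT INTO test (some_other_id) VALUES (?)", (some_other_id,)
    )
    return cur.lastrowid  # Postgres would use RETURNING id instead

first = my_insert(conn, 123)
second = my_insert(conn, 123)  # returns the same id, no error logged
print(first == second)  # -> True
```
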

Erwin Brandstetter
  • I think that's exactly what I need, thanks! I don't care too much about the race because the unique key constraints will sort it out. – Alex Hockey Sep 12 '12 at 16:03
  • @AlexHockey: Just remember that you get an exception instead of an `id` if the race condition strikes with the plpgsql function. – Erwin Brandstetter Sep 12 '12 at 16:10

You can disable logging of error messages for the session (or even globally), but it requires superuser privileges:

By running:

set log_min_messages=fatal;

only fatal errors are logged until the session (=connection) is ended or you issue a new set statement to reset the value.

But as only a superuser is allowed to change this, it is probably not a good solution: it would require your application user to have that privilege, which is a major security problem.

a_horse_with_no_name
  • This can also be set in `postgresql.conf`, but setting it there is probably a bad idea as it'll mask important messages about configuration problems, etc. `ALTER USER my_rails_user SET log_min_messages = fatal;` may be less drastic, but can still mask important messages. Better to fix your app not to insert duplicates. – Craig Ringer Sep 12 '12 at 10:32

If you just want to suppress those errors while working in psql, you can do

SET client_min_messages TO fatal

which will last for the rest of your session.

Randall