
I am implementing a database application, and I will use both JavaDB and MySQL as databases. I have an ID column in my tables whose type is integer, and I use the databases' auto-increment feature to generate its values.

But what happens when I get more than 2 (or 4) billion posts and an integer is no longer enough? Does the integer overflow and keep going, or is an exception thrown that I can handle?

Yes, I could change the datatype to long, but how do I check when that change is needed? And I think there is a problem with the LAST_INSERT_ID() functions if I use long as the datatype for the ID column.

Jonas

6 Answers


Jim Martin's comment from §3.6.9. "Using AUTO_INCREMENT" of the MySQL documentation:

Just in case there's any question, the AUTO_INCREMENT field /DOES NOT WRAP/. Once you hit the limit for the field size, INSERTs generate an error. (As per Jeremy Cole)

A quick test with MySQL 5.1.45 results in an error of:

ERROR 1467 (HY000): Failed to read auto-increment value from storage engine

You could test for that error on insert and take appropriate action.
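
For example, from Java you could catch the SQLException and inspect the vendor error code (1467, as observed above; the connection settings and table below are made up for the sketch):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class AutoIncrementInsert {
    // Error code seen above with MySQL 5.1.45; other versions or
    // storage engines may report exhaustion differently.
    private static final int ER_AUTOINC_READ_FAILED = 1467;

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/mydb", "user", "password");
             Statement stmt = conn.createStatement()) {
            try {
                stmt.executeUpdate("INSERT INTO posts (title) VALUES ('hello')");
            } catch (SQLException e) {
                if (e.getErrorCode() == ER_AUTOINC_READ_FAILED) {
                    // The AUTO_INCREMENT column has hit its ceiling:
                    // alert, and plan an ALTER TABLE to a wider type.
                    System.err.println("ID column exhausted: " + e.getMessage());
                } else {
                    throw e;
                }
            }
        }
    }
}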

outis
  • As I see in my tests, the increment number generated is the same each time (the maximum value) after the limit is reached. I do not receive an SQL error as I expected. – Victor Feb 12 '19 at 09:24

Just to calm the nerves, consider this:

Suppose you have a database that inserts a new row every time a user executes some sort of transaction on your website.

With a 64-bit integer as an ID, the condition for overflow looks like this: with a world population of 6 billion, if every human on earth executed a transaction once per second, every day of every year without rest, it would still take roughly 49 years to wrap a signed 64-bit ID, and about 97 years to wrap an unsigned one (the sketch below checks the arithmetic).

I.e., only Google needs to vaguely consider this problem, occasionally, during a coffee break.
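
A back-of-the-envelope check of those figures (the one-transaction-per-second-per-human rate is, of course, an invented worst case):

import static java.lang.Math.pow;

public class IdExhaustion {
    public static void main(String[] args) {
        double insertsPerSecond = 6e9;              // every human on earth, once per second
        double secondsPerYear = 365.25 * 24 * 3600; // about 31.6 million

        double signedMax = pow(2, 63) - 1;          // 9,223,372,036,854,775,807
        double unsignedMax = pow(2, 64) - 1;        // 18,446,744,073,709,551,615

        // Prints roughly 48.7 and 97.4 years respectively.
        System.out.printf("signed BIGINT:   %.1f years%n",
                signedMax / insertsPerSecond / secondsPerYear);
        System.out.printf("unsigned BIGINT: %.1f years%n",
                unsignedMax / insertsPerSecond / secondsPerYear);
    }
}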

Justin
  • Sorry, just to be correct: it would take a bit more than 9 years :) After 1 min, 360 billion tnx will happen. After 1 h, 21,600 billion. In 1 day, 518,400 billion tnx will be verified. 1 year: 1,892,160 thousands of billions. After 8 years, 15,137,280 thousands of billions will be saved in the db. The limit of UNSIGNED BIGINT is 18,446,744 thousands of billions. – tobia.zanarella Jul 03 '12 at 08:35
  • You're getting one zero extra in the 1-year figure, so it would be approx. 97 years. – lockedscope Jun 27 '13 at 16:22
  • And what if I have 2000 proxies making an insert attack, launching multiple threads/requests? Or, for example, I have a history upload function with 60,000 entries, so uploading that history will require fewer requests. – Anestis Kivranoglou Jan 24 '14 at 13:04
  • I've been wondering about this for a long time now (like, 2 years, LOL). What's the solution to this if it happens? What if the supposedly 1 billion active users of facebook comment at least 3 times a day, and these comments are kept in one table with its ID set as int(11)? What's the solution if it reaches the max value of the column? It seems like a problem to think about, for facebook at least. – christianleroy Jan 25 '16 at 02:11
  • Well, for me, with `int(10) unsigned`, the id column has reached `237836414` in just 2 months, or 5.54% of uint_max. So it's a problem. – majidarif Jun 07 '17 at 20:32
  • @majidarif at this rate, if you used a BigInt instead it would take you an extra 77,560,638,270 months to use all the possible values, or 6.5 billion years, give or take a couple million years. – Franck Sep 23 '17 at 11:02
  • Bottom line: when the upper bound is not knowable, just use BIGINT (or 64-bit ints for programmers). That's a basic rule of thumb for everything in computing. Still bounded, but realistically irrelevant. – William T Froggard Nov 25 '17 at 23:56
  • Another option would be to use unique strings like `'6e894c6a-a02a-46ba-b2aa-de0d66d13293'`, `'446a571b-d61f-4dae-bc6d-df1cc2ab52c2'`. In Python, the **uuid** module can generate these, e.g. `str(uuid.uuid4())`. – hygull Aug 01 '19 at 04:49

You will know when it's going to overflow by looking at the largest ID. You should change it well before any exception even comes close to being thrown.

In fact, you should design with a large enough datatype to begin with. Your database performance is not going to suffer even if you use a 64 bit ID from the beginning.
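
For instance, you could declare the column as BIGINT from day one and read generated keys back into a Java long via JDBC's getGeneratedKeys(), which also sidesteps the LAST_INSERT_ID() worry from the question (schema and connection details are invented for the sketch):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class BigIntIds {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/mydb", "user", "password")) {
            try (Statement ddl = conn.createStatement()) {
                // A 64-bit ID from the start; the extra bytes per row are cheap.
                ddl.executeUpdate("CREATE TABLE IF NOT EXISTS posts ("
                        + "id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY, "
                        + "title VARCHAR(255) NOT NULL)");
            }
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO posts (title) VALUES (?)",
                    Statement.RETURN_GENERATED_KEYS)) {
                ps.setString(1, "hello");
                ps.executeUpdate();
                try (ResultSet keys = ps.getGeneratedKeys()) {
                    if (keys.next()) {
                        // A Java long covers the full signed BIGINT range.
                        long id = keys.getLong(1);
                        System.out.println("inserted id = " + id);
                    }
                }
            }
        }
    }
}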

Matti Virkkunen

The answers here state what happens, but only one answer says how to detect the problem (and then only after the error has occurred). Generally, it is helpful to be able to detect these things before they become a production issue, so I wrote a query to detect when an overflow is about to happen:

SELECT
  c.TABLE_CATALOG,
  c.TABLE_SCHEMA,
  c.TABLE_NAME,
  c.COLUMN_NAME
FROM information_schema.COLUMNS AS c
JOIN information_schema.TABLES AS t USING (TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME)
WHERE c.EXTRA LIKE '%auto_increment%'
  AND t.AUTO_INCREMENT / CASE c.DATA_TYPE
      WHEN 'TINYINT' THEN IF(c.COLUMN_TYPE LIKE '% UNSIGNED', 255, 127)
      WHEN 'SMALLINT' THEN IF(c.COLUMN_TYPE LIKE '% UNSIGNED', 65535, 32767)
      WHEN 'MEDIUMINT' THEN IF(c.COLUMN_TYPE LIKE '% UNSIGNED', 16777215, 8388607)
      WHEN 'INT' THEN IF(c.COLUMN_TYPE LIKE '% UNSIGNED', 4294967295, 2147483647)
      WHEN 'BIGINT' THEN IF(c.COLUMN_TYPE LIKE '% UNSIGNED', '18446744073709551615', 9223372036854775807) # quoted: the unsigned maximum doesn't fit in the signed BIGINT type the CASE would otherwise use.
      ELSE 0
    END > .9; # warn once a sequence has used more than 90% of its range
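
To run the check periodically rather than by hand, a minimal JDBC sketch (connection settings are placeholders, the query is trimmed to the three naming columns, and the string literal needs Java 15+ text blocks):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class AutoIncrementMonitor {
    // The detection query from above, selecting just schema/table/column names.
    private static final String CHECK_SQL = """
        SELECT c.TABLE_SCHEMA, c.TABLE_NAME, c.COLUMN_NAME
        FROM information_schema.COLUMNS AS c
        JOIN information_schema.TABLES AS t USING (TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME)
        WHERE c.EXTRA LIKE '%auto_increment%'
          AND t.AUTO_INCREMENT / CASE c.DATA_TYPE
              WHEN 'TINYINT' THEN IF(c.COLUMN_TYPE LIKE '% UNSIGNED', 255, 127)
              WHEN 'SMALLINT' THEN IF(c.COLUMN_TYPE LIKE '% UNSIGNED', 65535, 32767)
              WHEN 'MEDIUMINT' THEN IF(c.COLUMN_TYPE LIKE '% UNSIGNED', 16777215, 8388607)
              WHEN 'INT' THEN IF(c.COLUMN_TYPE LIKE '% UNSIGNED', 4294967295, 2147483647)
              WHEN 'BIGINT' THEN IF(c.COLUMN_TYPE LIKE '% UNSIGNED', '18446744073709551615', 9223372036854775807)
              ELSE 0
            END > .9
        """;

    public static void main(String[] args) throws SQLException {
        // Placeholder connection; schedule from cron or a monitoring job.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/mydb", "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(CHECK_SQL)) {
            while (rs.next()) {
                System.err.printf("WARNING: %s.%s.%s has used over 90%% of its range%n",
                        rs.getString(1), rs.getString(2), rs.getString(3));
            }
        }
    }
}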

Hope this helps someone somewhere.

CSTobey

For MySQL 5.6, §3.6.9 "Using AUTO_INCREMENT" says:

Use the smallest integer data type for the AUTO_INCREMENT column that is large enough to hold the maximum sequence value you will need. When the column reaches the upper limit of the data type, the next attempt to generate a sequence number fails.

Jingguo Yao

I would like to share a personal experience I just had with this, using Nagios + Check_MK + NDOUtils. NDOUtils stores all the checks in a table called nagios_servicechecks. The primary key is a signed auto_increment int. What happens with MySQL when this limit is reached? Well, in my case, MySQL deleted all the records except the last one: the table was left almost empty, and every time a new record was inserted the old one was deleted. I don't know why this happened, but the fact is that I lost all my records. IDOUtils, used with Icinga (not Nagios), fixed this issue by changing the int to a bigint. It didn't generate an error.