
I am not very familiar with database matters, but I have a problem. I am developing an application for one of my clients that uses SQL Server 2008 R2, and one table has an INT IDENTITY as its primary key.

Since data has started flooding into this table, I want to change the INT to a UNIQUEIDENTIFIER before the column hits its limit, as INT only covers a certain range. Please suggest a way to make that change.

Challenges :

  1. I am using Entity Framework against that table; however, the ID column (INT, auto-increment, PK) is never used from code. It exists only in the database.
  2. The database is in production, so I can't afford to lose data.

I have few questions:

  1. At what value will the INT column start throwing errors in the database?
  2. If we keep deleting older records, will that affect the INT range? In other words, will it be fine if we keep deleting older records?

Please help me with this.

Thanks in advance

Amod


3 Answers


In SQL Server, the maximum value for the INT data type is 2,147,483,647. The highest number you can store using the BIGINT data type is 9,223,372,036,854,775,807. So try this:

ALTER TABLE tablename ALTER COLUMN ID BIGINT

But the column must have no constraints on it (such as a primary key, foreign key, index, default, or rule) before you can alter its type.
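Because the ID column is a primary key, the ALTER will fail until the PK constraint is dropped. A minimal sketch, assuming the table is `dbo.tablename` and the constraint is named `PK_tablename` (check your actual constraint name first):

```sql
-- Find the real constraint name before running this:
--   SELECT name FROM sys.key_constraints
--   WHERE parent_object_id = OBJECT_ID('dbo.tablename');

ALTER TABLE dbo.tablename DROP CONSTRAINT PK_tablename;   -- drop the PK (and its clustered index)
ALTER TABLE dbo.tablename ALTER COLUMN ID BIGINT NOT NULL; -- widen the column
ALTER TABLE dbo.tablename
    ADD CONSTRAINT PK_tablename PRIMARY KEY CLUSTERED (ID); -- re-create the PK
```

On a large production table this rebuilds the clustered index, so plan for a maintenance window.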

Another option I found by Googling:

  1. Create a new BIGINT column in the table.
  2. Update the new column with the values from the INT column.
  3. Drop the INT column.
  4. Rename the BIGINT column to the original name.
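The steps above can be sketched like this, assuming the table is `dbo.tablename` and the column is `ID` (names are placeholders for your real ones):

```sql
ALTER TABLE dbo.tablename ADD ID_new BIGINT;               -- 1: new BIGINT column

UPDATE dbo.tablename SET ID_new = ID;                      -- 2: copy the values over

ALTER TABLE dbo.tablename DROP COLUMN ID;                  -- 3: drop the old INT column

EXEC sp_rename 'dbo.tablename.ID_new', 'ID', 'COLUMN';     -- 4: rename to the original name
```

Note that the renamed column will not carry the IDENTITY property; you would have to re-create the table (or use a similar copy-and-swap on the whole table) to get an auto-incrementing BIGINT.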
itzMEonTV

You define a column of type INT (or SMALLINT, TINYINT, BIGINT) with the IDENTITY attribute:

CREATE TABLE dbo.YourTable( ID INT IDENTITY(1,1) ......

With this setup, SQL Server will automatically generate consecutive IDs for your table as rows are inserted.

With INT, starting at 1, you get over 2 billion possible rows - that should be more than sufficient for the vast majority of cases. With BIGINT, you get roughly 9.2 quintillion (9.2 × 10^18) - enough for you??

If you use an INT IDENTITY starting at 1, and you insert a row every second, you need about 68 years before you hit the 2 billion limit ....

If you use a BIGINT IDENTITY starting at 1, and you insert one thousand rows every second, you need a mind-boggling 292 million years before you hit the 9.2 quintillion limit ....
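A quick back-of-the-envelope check of those lifetime figures (illustrative arithmetic only):

```python
# How long until an IDENTITY column runs out, at a given insert rate.
SECONDS_PER_YEAR = 365 * 24 * 3600  # ignoring leap years

int_max = 2_147_483_647
bigint_max = 9_223_372_036_854_775_807

years_int = int_max / SECONDS_PER_YEAR           # 1 row per second
years_bigint = bigint_max / 1000 / SECONDS_PER_YEAR  # 1,000 rows per second

print(round(years_int))     # ~68 years for INT
print(round(years_bigint))  # ~292 million years for BIGINT
```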

Read more about it (with all the options there are) in the MSDN Books Online.

UNIQUEIDENTIFIER is a "handy" type for programmers - but a nightmare for your database, your DBA, and your database performance, especially if used as the primary key (and therefore, by default, the clustering key of your table). Think at least TWICE whether you really want to make that switch!!

marc_s

In fact, this is a big topic. One of the main aspects in choosing the appropriate type for your data is size: INT needs 4 bytes, BIGINT 8 bytes, and UNIQUEIDENTIFIER 16 bytes.

While INT may not be sufficient for some large projects, you can consider starting the IDENTITY not from 1 but from -2,147,483,648. This doubles the number of available identity values.
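For example, seeding the identity at the minimum INT value (table and column names here are made up for illustration):

```sql
-- Use the full negative range of INT: roughly 4.29 billion values
-- instead of 2.15 billion when seeding at 1.
CREATE TABLE dbo.Example
(
    ID INT IDENTITY(-2147483648, 1) PRIMARY KEY,  -- seed at the minimum INT value
    Payload NVARCHAR(100) NOT NULL
);
```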

If INT is not sufficient, you can consider the BIGINT data type, and I can't think of any project that would not be satisfied with such a huge range: -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.

It is worth mentioning that if you create a clustered index on some column (say Col1), then every nonclustered index (say on column Col2) will include Col1 as well, because nonclustered index rows carry the clustering key. With a wide clustering key such as a UNIQUEIDENTIFIER, that overhead is multiplied across every index.
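A rough illustration of that multiplied overhead, with made-up row and index counts:

```python
# Extra per-row bytes carried by every nonclustered index when the
# clustering key is a 16-byte GUID instead of a 4-byte INT.
ROWS = 10_000_000           # example table size
NONCLUSTERED_INDEXES = 5    # example index count

extra_per_row = 16 - 4      # UNIQUEIDENTIFIER vs INT clustering key
extra_bytes = extra_per_row * ROWS * NONCLUSTERED_INDEXES

print(extra_bytes // (1024 * 1024))  # ~572 MiB of additional index space
```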

But UNIQUEIDENTIFIERs are very useful in distributed solutions, where you need values that are guaranteed unique across multiple servers. You can also generate a GUID anywhere - client or server - and know that the value is unique across time and space.
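If you do go the GUID route, one sketch that softens the performance hit is to default the key to NEWSEQUENTIALID() instead of NEWID(), which generates roughly ordered values and reduces page splits (names below are made up for illustration):

```sql
-- NEWSEQUENTIALID() is only allowed in a DEFAULT constraint.
CREATE TABLE dbo.Distributed
(
    ID UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT DF_Distributed_ID DEFAULT NEWSEQUENTIALID()
        CONSTRAINT PK_Distributed PRIMARY KEY,
    Payload NVARCHAR(100) NOT NULL
);
```

Note that sequential GUIDs give up some of the "generate anywhere" benefit, since they must be produced by the server.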

As a best practice, you should assign the smallest datatype that can hold your data to your columns.

Giorgi Nakeuri