Will You Run Out Of INT And BIGINT Identities In SQL Server?

In the war over identities vs. GUIDs as clustered indexes, one of the arguments in favor of uniqueidentifiers is that you may run out of numbers when using a numeric IDENTITY column (such as INT or BIGINT) as the clustered index key.  Is this a realistic concern? The short answer is yes, it is possible, but not nearly as likely as you might think.

Paul Randal did the math in the blog post quoted below, and given that he actually worked on and built parts of the SQL Server storage engine, I tend to believe his numbers.  Here is what he had to say:

INT Identity:

Imagine that you have a theoretical system that can create a thousand data rows per second. Using an int identity value increasing by 1 and starting at 1, you’ll run out of values when the value hits 2^31-1 and tries to insert the next value. Let’s simplify the math by just saying that 2^31 is the limit. With a thousand values per second, that would mean 2^31 / 1,000 = 2.15 million seconds or just under 25 days. While many of you don’t sustain 1,000 rows per second, this is still a very problematic limitation.

Source:
https://www.sqlskills.com/blogs/paul/sqlskills-sql101-running-out-of-ints-and-bigints/
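To sanity-check that arithmetic yourself, here is a minimal T-SQL sketch (pure math, no tables involved) that reproduces the numbers in the quote:

-- 2^31 INT values consumed at 1,000 inserts per second
DECLARE @IntRange   BIGINT = 2147483648;  -- 2^31, roughly half the INT range
DECLARE @RowsPerSec BIGINT = 1000;

SELECT
    @IntRange / @RowsPerSec           AS SecondsToExhaust,  -- ~2.15 million seconds
    @IntRange / @RowsPerSec / 86400.0 AS DaysToExhaust;     -- just under 25 days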

So, on a system that is adding 1,000 rows per second, 24 hours a day, 7 days a week, you would run out of INT values in just under 25 days if you started the seed at 1 and incremented by 1 (which is pretty typical).  You could instead start the seed at -2,147,483,648 and increment by 1, which gives you the full range of roughly 4.29 billion values, or about 50 days at that rate, which still isn't great. However, as Paul notes, most SQL Servers are not sustaining 1,000 inserts per second non-stop.
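If you do want the full range, the seed is simply the first argument of the IDENTITY property. Here is a minimal sketch; the table and column names are made up for illustration:

-- Hypothetical table: seed the identity at the minimum INT value
-- to get the full ~4.29 billion values instead of ~2.15 billion.
CREATE TABLE dbo.EventLog
(
    EventId   INT IDENTITY(-2147483648, 1) NOT NULL
        CONSTRAINT PK_EventLog PRIMARY KEY CLUSTERED,
    EventData NVARCHAR(200) NOT NULL
);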

BIGINT Identity:

His calculations here are for a system inserting 1 million rows per second, 24 hours a day, 7 days a week, which is far busier than most SQL Servers will ever be.

Every so often someone asks whether it’s possible to run out of bigint values. My answer is no. Well, technically yes, there is a limit, but in practical terms the answer is no.

Now imagine that you have a theoretical system that can create a million data rows per second, with a bigint identity value increasing by 1 and starting at 1. You’ll run out of values when the value hits 2^63-1 and tries to insert the next value. With a million values per second, that would mean 2^63 / 10^6 = 9.2 trillion seconds or approximately 292.5 thousand years. And by then it’s someone else’s problem… 🙂 And that’s only for half the possible range of bigint values.

Source:
https://www.sqlskills.com/blogs/paul/sqlskills-sql101-running-out-of-ints-and-bigints/

He goes on to say:

So with 1 million rows per second, you’ll be generating 1 million x 3,600 (seconds in an hour) x 24 (hours in a day) = 86.4 billion rows per day

Source:
https://www.sqlskills.com/blogs/paul/sqlskills-sql101-running-out-of-ints-and-bigints/

That is a lot of data: 86.4 billion rows per day, 24/7, every day, for 292.5 thousand years before running out of BIGINTs. And, just as with the INT example, that covers only half of the allowable BIGINT range. If you started the BIGINT seed at -9,223,372,036,854,775,808 and incremented by 1, you would double the range of values Paul used in this calculation.
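To make those numbers concrete, here is the same back-of-the-envelope math as a T-SQL sketch (the variable names are mine, not Paul's):

-- 2^63 - 1 BIGINT values consumed at 1,000,000 inserts per second
DECLARE @BigIntMax  DECIMAL(38, 0) = 9223372036854775807;  -- 2^63 - 1
DECLARE @RowsPerSec DECIMAL(38, 0) = 1000000;

SELECT
    @RowsPerSec * 86400                    AS RowsPerDay,        -- 86.4 billion
    @BigIntMax / @RowsPerSec               AS SecondsToExhaust,  -- ~9.2 trillion seconds
    @BigIntMax / @RowsPerSec / 86400 / 365 AS YearsToExhaust;    -- ~292.5 thousand years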

He then goes on to show that if you were to use a DECIMAL or NUMERIC data type instead of INT or BIGINT, inserting 1 million rows per second would take roughly 3,170 billion billion years before you ran out of values!
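Incidentally, if you ever did want to go that route, the IDENTITY property also works on DECIMAL and NUMERIC columns as long as the scale is 0. A hypothetical sketch:

-- Hypothetical table: identity on a NUMERIC(38, 0) column (scale must be 0)
CREATE TABLE dbo.HugeEventLog
(
    EventId   NUMERIC(38, 0) IDENTITY(1, 1) NOT NULL
        CONSTRAINT PK_HugeEventLog PRIMARY KEY CLUSTERED,
    EventData NVARCHAR(200) NOT NULL
);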

So, while it is theoretically possible to run out of values for a numeric IDENTITY, and it may be a legitimate concern for a high-volume table using an INT identity as its clustered index key, few SQL Servers will ever generate the constant volume of inserts needed to run out of BIGINT, DECIMAL, or NUMERIC values.
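As a practical takeaway, if you are worried about a specific table, you can check how much headroom its identity column has left rather than guessing. A minimal sketch using sys.identity_columns, limited here to INT identity columns:

-- How close each INT identity column is to the top of its range
SELECT
    OBJECT_SCHEMA_NAME(ic.object_id)           AS SchemaName,
    OBJECT_NAME(ic.object_id)                  AS TableName,
    ic.name                                    AS ColumnName,
    CAST(ic.last_value AS BIGINT)              AS LastValue,       -- NULL if no rows inserted yet
    2147483647 - CAST(ic.last_value AS BIGINT) AS ValuesRemaining
FROM sys.identity_columns AS ic
JOIN sys.types AS t
    ON t.user_type_id = ic.user_type_id
WHERE t.name = 'int'
ORDER BY ValuesRemaining;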
