362

In C#, int and Int32 are the same thing, but I've read a number of times that int is preferred over Int32 with no reason given. Is there a reason, and should I care?

Cole Johnson
  • 8,415
  • 15
  • 44
  • 66
Graham
  • 7,387
  • 4
  • 34
  • 42
  • [Tweet](http://twitter.com/jonskeet/status/37210420196409344) by the Skeet about this where he favors Int32 over int when programming API's. – comecme Mar 14 '11 at 07:50
  • @JohnBubriski: and let's not forget that it requires fewer using statements to use it (or you'd be typing `System.Int32`) – sehe May 31 '11 at 12:11
  • I have a question: we are not using the CLR types directly, so why do we need them? – AminM Jun 28 '13 at 05:39
  • @JohnBubriski Facebook status update is easier to type than a piece of code. Bad thinking there! Easier to read and understand is far more important than easier to type. `When something can be read without effort, great effort has gone into its writing.` `Easy writing is hard reading` – 7hi4g0 Apr 11 '14 at 13:29
  • Possible duplicate of [What is the difference between String and string in C#?](https://stackoverflow.com/questions/7074/what-is-the-difference-between-string-and-string-in-c) – Cole Johnson May 28 '17 at 18:07

31 Answers

278

The two are indeed synonymous; int will be a little more familiar looking, while Int32 makes the 32-bitness more explicit to those reading your code. I would be inclined to use int where I just need 'an integer', and Int32 where the size is important (cryptographic code, structures), so future maintainers will know it's safe to enlarge an int if appropriate, but should take care when changing Int32s in the same way.

The resulting code will be identical: the difference is purely one of readability or code appearance.
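A quick way to see that the two really are the same type at runtime (a minimal sketch, not from the original answer):

```csharp
using System;

class AliasCheck
{
    static void Main()
    {
        // int is a C# keyword alias for the System.Int32 struct,
        // so both typeof expressions yield the identical Type object.
        Console.WriteLine(typeof(int) == typeof(Int32)); // True
        Console.WriteLine(default(int).GetType().FullName); // System.Int32
    }
}
```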

Roman Starkov
  • 52,420
  • 33
  • 225
  • 300
James Sutherland
  • 797
  • 2
  • 8
  • 7
  • 67
    People reading your code should know that int is an alias for System.Int32. As regards readability, consistency is far more important. – Troels Thomsen Nov 20 '08 at 15:08
  • 11
    For those of you with the old C++ mindset, IntPtr is designed to be 32 bits on a 32 bit OS and 64 bits on a 64 bit OS. This behavior is specifically mentioned in its summary tag. http://msdn.microsoft.com/en-us/library/system.intptr(VS.71).aspx – diadem Jul 08 '10 at 14:46
  • I also believed future-proofing is the reason to prefer `int` but the definitions at the link make no reference to 32 and 64 bit machines so presumably when 128 bit machines are available the definition will not change. https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/integral-numeric-types I still prefer `int` but it seems future-proofing should not be one of the reasons. – H2ONaCl Jun 27 '20 at 21:19
139

ECMA-334:2006 C# Language Specification (p18):

Each of the predefined types is shorthand for a system-provided type. For example, the keyword int refers to the struct System.Int32. As a matter of style, use of the keyword is favoured over use of the complete system type name.

Cole Johnson
  • 8,415
  • 15
  • 44
  • 66
HasaniH
  • 7,514
  • 5
  • 36
  • 55
  • It's correct only for the 3rd and 4th editions of C# Language Specification. For some uncertain reason, the most recent [5th edition](https://www.ecma-international.org/publications/files/ECMA-ST/ECMA-334.pdf) doesn't contain this recommendation. They rewrote the paragraph about aliases of simple types. So, from today perspective it is unclear what use is favored. – SergICE Aug 19 '20 at 13:12
88

They both declare 32 bit integers, and as other posters stated, which one you use is mostly a matter of syntactic style. However they don't always behave the same way. For instance, the C# compiler won't allow this:

public enum MyEnum : Int32
{
    member1 = 0
}

but it will allow this:

public enum MyEnum : int
{
    member1 = 0
}

Go figure.

raven
  • 17,276
  • 15
  • 76
  • 110
  • 9
    If you use Reflector to examine the System.Int32 type you will find that it is a struct and not a class. The code looks like this: [Serializable, StructLayout(LayoutKind.Sequential), ComVisible(true)] public struct Int32 : IComparable, IFormattable, IConvertible, IComparable<int>, IEquatable<int> { public const int MaxValue = 0x7fffffff; ... You cannot derive a type from a struct. At the very least you'll get an error that tells you so. However, the enum behavior is a bit different, which I'll comment on next. – raddevus Dec 03 '10 at 21:13
  • 16
    Inability to derive an enum from Int32 is a designed behavior, which can also be seen by looking at the .NET code: [Serializable, ComVisible(true)] public abstract class Enum : ValueType, IComparable, IFormattable, IConvertible. Notice that Enum is derived from ValueType? If you attempt to derive an enum from something other than an intrinsic data type (int, byte, etc.) you will receive an error that looks like: Type byte, sbyte, short, ushort, int, uint, long, or ulong expected. – raddevus Dec 03 '10 at 21:17
  • 1
    @Ken: answered here: http://stackoverflow.com/questions/1813408/c-sharp-int-int32-and-enums – Jeroen Wiert Pluimers Mar 06 '13 at 14:31
  • 3
    @daylight note that specifying an `enum` to use `int` is not a `derive`, but specifying an `underlying type`; see http://msdn.microsoft.com/en-us/library/sbbt4032.aspx#code-snippet-2 – Jeroen Wiert Pluimers Mar 06 '13 at 14:32
  • 2
    @JeroenWiertPluimers However, it is still interesting why they have chosen to literally check the underlying type and throw [CS1008](http://msdn.microsoft.com/en-us/library/sab1z73a%28v=vs.90%29.aspx), as the underlying type is just the type of the constants in the enum, so it doesn't really matter when compiled. – IS4 Nov 02 '14 at 19:40
  • 7
    @IllidanS4, with the new Roslyn compiler this was fixed, and both variants are valid – Grundy Feb 23 '16 at 08:11
51

I always use the system types - e.g., Int32 instead of int. I adopted this practice after reading Applied .NET Framework Programming - author Jeffrey Richter makes a good case for using the full type names. Here are the two points that stuck with me:

  1. Type names can vary between .NET languages. For example, in C#, long maps to System.Int64 while in C++ with managed extensions, long maps to Int32. Since languages can be mixed-and-matched while using .NET, you can be sure that using the explicit class name will always be clearer, no matter the reader's preferred language.

  2. Many framework methods have type names as part of their method names:

    BinaryReader br = new BinaryReader( /* ... */ );
    float val = br.ReadSingle();     // OK, but it looks a little odd...
    Single val = br.ReadSingle();    // OK, and is easier to read
    
phuclv
  • 27,258
  • 11
  • 104
  • 360
Remi Despres-Smyth
  • 3,883
  • 3
  • 32
  • 44
  • An issue with this is that Visual Studio auto-complete still uses int. So if you make a `List<Int32> test = new`, Visual Studio will now insert `List<int>()`. Do you know of a way to change these auto-completes? – MrFox Dec 18 '19 at 13:51
  • 2
    Yes, it is an issue; no, I don't know how to change them offhand. Point #2 isn't an issue anymore for me, as I tend to use `var` as much as possible to reduce the wordiness of the code. In those occasional spots where autocomplete comes in and spits on my floor, I adjust manually - it's literally a second or two of my time. – Remi Despres-Smyth Dec 18 '19 at 15:37
20

int is a C# keyword and is unambiguous.

Most of the time it doesn't matter but two things that go against Int32:

  • You need to have a "using System;" statement. Using "int" requires no using statement.
  • It is possible to define your own class called Int32 (which would be silly and confusing). int always means int.
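A sketch of the second bullet: a (deliberately silly, hypothetical) user-defined Int32 class shadows System.Int32 inside its namespace, while the keyword int is unaffected.

```csharp
namespace Collision
{
    // A hypothetical, ill-advised class that shadows System.Int32
    // for code inside this namespace.
    class Int32 { }

    class Demo
    {
        static void Main()
        {
            int a = 5;       // still System.Int32 -- the keyword is unambiguous
            // Int32 b = 5;  // would NOT compile here: Int32 now refers
            //               // to the empty class declared above
            System.Console.WriteLine(a.GetType()); // System.Int32
        }
    }
}
```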
Brownie
  • 7,570
  • 5
  • 25
  • 38
  • It's also possible to create your own 'var' class, but that doesn't discourage people from using it. – Neme Jan 02 '18 at 09:55
  • Every keyword is a C# keyword. int was already used in C and C++. So there's nothing specifically C# about it. – MrFox Dec 19 '19 at 16:55
12

As already stated, int = Int32. To be safe, be sure to always use int.MinValue/int.MaxValue when implementing anything that cares about the data type boundaries. Suppose .NET decided that int would now be Int64; your code would then be less dependent on the bounds.
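For example, a bounds check written against the alias's own constants (a sketch; FitsInInt is an illustrative name, not from the answer):

```csharp
using System;

class BoundsDemo
{
    // Using int.MinValue/int.MaxValue rather than hard-coded literals
    // keeps the check tied to whatever range 'int' actually has.
    static bool FitsInInt(long candidate) =>
        candidate >= int.MinValue && candidate <= int.MaxValue;

    static void Main()
    {
        Console.WriteLine(FitsInInt(2147483647L)); // True
        Console.WriteLine(FitsInInt(2147483648L)); // False
    }
}
```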

spoulson
  • 20,523
  • 14
  • 72
  • 101
  • 8
    @spoulson: Comment error on line 1: Assignment forbidden between equal types. Yes, a bad joke. – Johann Gerell Jan 19 '10 at 09:26
  • 24
    If the C# spec (it's a C#, not .NET decision) ever decided to change to make `int` 64 bits, that would be *such* a breaking change that I don't believe it's possible (or certainly sensible) to code defensively against such eventualities. – Jon Skeet Jan 06 '12 at 07:07
10

Byte size for types is not too interesting when you only have to deal with a single language (and for code which you don't have to remind yourself about math overflows). The part that becomes interesting is when you bridge from one language to another, or from C# to a COM object, etc., or you're doing some bit-shifting or masking and you need to remind yourself (and your code-review co-workers) of the size of the data.

In practice, I usually use Int32 just to remind myself what size they are because I do write managed C++ (to bridge to C# for example) as well as unmanaged/native C++.

long, as you probably know, is 64 bits in C#, but in native C++ it ends up as 32 bits; likewise char is Unicode/16 bits in C# while in C++ it is 8 bits. But how do we know this? The answer is, because we've looked it up in the manual and it said so.
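The C# side of those sizes can be checked directly (a minimal sketch; the native C++ sizes, of course, cannot be verified from here):

```csharp
using System;

class SizeCheck
{
    static void Main()
    {
        // C# sizes, as described above; the native C++ equivalents differ.
        Console.WriteLine(sizeof(long)); // 8 bytes (64 bits) in C#
        Console.WriteLine(sizeof(char)); // 2 bytes (UTF-16 code unit) in C#
        Console.WriteLine(sizeof(int));  // 4 bytes (32 bits)
    }
}
```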

With time and experience, you will start to be more type-conscious when you write code that bridges between C# and other languages (some readers here are thinking "why would you?"), but IMHO I believe it is a better practice, because I cannot remember what I coded last week (or I don't have to specify in my API document that "this parameter is a 32-bit integer").

In F# (although I've never used it), they define int, int32, and nativeint. The same question arises: "which one do I use?". As others have mentioned, in most cases it should not matter (it should be transparent). But I for one would choose int32 and uint32 just to remove the ambiguity.

I guess it would just depend on what applications you are coding, who's using it, what coding practices you and your team follows, etc. to justify when to use Int32.

Addendum: Incidentally, since I answered this question a few years ago, I've started using both F# and Rust. F# is all about type inference, and when bridging/interop'ing between C# and F# the native types match, so there is no concern; I've rarely had to explicitly define types in F# (it's almost a sin if you don't use type inference). Rust has removed such ambiguities completely: you have to choose between i32 and u32. All in all, reducing ambiguity helps reduce bugs.

HidekiAI
  • 2,783
  • 3
  • 17
  • 22
  • Doesn't that invalidate the purpose of .net? What is F# anyways, an idea Gates had that went away with his retirement... – Nick Turner Apr 01 '13 at 14:30
8

There is no difference between int and Int32, but as int is a language keyword many people prefer it stylistically (just as with string vs String).

Michał Powaga
  • 20,726
  • 7
  • 45
  • 60
Simon Steele
  • 11,288
  • 3
  • 42
  • 67
7

I always use the aliased types (int, string, etc.) when defining a variable and use the real name when accessing a static method:

int x, y;
...
String.Format ("{0}x{1}", x, y);

It just seems ugly to see something like int.TryParse(). There's no other reason I do this other than style.

Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Mark A. Nicolosi
  • 72,599
  • 10
  • 41
  • 46
7

In my experience it's been a convention thing. I'm not aware of any technical reason to use int over Int32, but it's:

  1. Quicker to type.
  2. More familiar to the typical C# developer.
  3. A different color in the default Visual Studio syntax highlighting.

I'm especially fond of that last one. :)

Greg D
  • 41,086
  • 13
  • 81
  • 115
5

I know that the best practice is to use int, and all MSDN code uses int. However, there's not a reason beyond standardisation and consistency as far as I know.

Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Raithlin
  • 1,729
  • 11
  • 18
5

Though they are (mostly) identical (see below for the one [bug] difference), you definitely should care and you should use Int32.

  • The name for a 16-bit integer is Int16. For a 64-bit integer it's Int64, and for a 32-bit integer the intuitive choice is: int or Int32?

  • The question of the size of a variable of type Int16, Int32, or Int64 answers itself, but the question of the size of a variable of type int is a perfectly valid question, and questions, no matter how trivial, are distracting, lead to confusion, waste time, hinder discussion, etc. (the fact this question exists proves the point).

  • Using Int32 promotes awareness of the developer's choice of type. How big is an int again? Oh yeah, 32. The likelihood that the size of the type will actually be considered is greater when the size is included in the name. Using Int32 also promotes knowledge of the other choices. When people aren't forced to at least recognize that there are alternatives, it becomes far too easy for int to become "THE integer type".

  • The class within the framework intended to interact with 32-bit integers is named Int32. Once again, which is more intuitive, less confusing, and free of an (unnecessary) translation (not a translation in the system, but in the mind of the developer): int lMax = Int32.MaxValue or Int32 lMax = Int32.MaxValue?

  • int isn't a keyword in all .NET languages.

  • Although there are arguments why it's not likely to ever change, int may not always be an Int32.

The drawbacks are two extra characters to type and [bug].

This won't compile

public enum MyEnum : Int32
{
    AEnum = 0
}

But this will:

public enum MyEnum : int
{
    AEnum = 0
}
Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
  • You say "The name for a 16 bit integer is Int16, for a 64 bit integer it's Int64, and for a 32 bit integer the intuitive choice is: int or Int32?", but there are C# keywords for these as well. Int16 = short Int64 = long So point one of your answer is based on an incorrect assumption. – Mel Apr 23 '09 at 17:03
  • "variable of type int is a perfectly valid question and questions, no matter how trivial, are distracting, lead to confusion, waste time, hinder discussion, etc. (the fact this question exists proves the point)." Are you kidding me? You work in language that you don't fully understand what's behind the hood. If the developer doesn't understand what the primitive type equates to he should take up the culinary arts. Sounds like a VB developer. using primitives is native to any language and should be preferred. It's fine if you don't like primitives, but don't make up realities. – Nick Turner Apr 01 '13 at 14:39
  • Hmm, I totally disagree with your opinion that I should care... but I didn't know that enums can only inherit from the keywords. A pretty useless fact, but still fun to know :) – Jowen May 29 '13 at 10:56
4

You shouldn't care. You should use int most of the time. It will help the porting of your program to a wider architecture in the future (currently int is an alias for System.Int32, but that could change). Only when the bit width of the variable matters (for instance, to control the layout in memory of a struct) should you use Int32 and the others (with the associated "using System;").
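A sketch of the struct-layout case the answer mentions (the struct and field names are hypothetical), where spelling out Int32 documents that each field must stay exactly 32 bits:

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical interop header: the explicit Int32 names signal that
// these fields are size-sensitive and must not be "enlarged".
[StructLayout(LayoutKind.Sequential)]
struct FileHeader
{
    public Int32 Magic;
    public Int32 PayloadLength;
}

class LayoutDemo
{
    static void Main()
    {
        // Two 32-bit fields laid out sequentially: 8 bytes total.
        Console.WriteLine(Marshal.SizeOf<FileHeader>()); // 8
    }
}
```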

tgray
  • 8,110
  • 4
  • 34
  • 38
yhdezalvarez
  • 71
  • 1
  • 2
  • 8
  • 1
    You can't be serious...make porting easier? I don't think a find and replace is a big deal. – Razor May 25 '10 at 04:21
  • 2
    _(currently int is an alias to System.Int32 but that could change)_? Oh come one... Are you serious? – Oybek Nov 13 '11 at 13:57
  • Why would you write code in a language that you want to eventually trash? Seems like by management decision. Use int or Int32. Int32 looks like VB – Nick Turner Apr 01 '13 at 14:34
  • What I meant was that MAYBE, (and that's a big MAYBE, I don't really know why the designers did it that way) you should have a way to declare an int that has the same width that the arch you are running on, like C's int/long/... works. This is a mechanism (int to alias int32) that seems designed to do exactly this. And take into consideration Microsoft always recommends using "int" vs "Int32" (just like they would if this was their original intention). I know, that's a big IF... When I wrote this answer, there wasn't any 64 bit .NET framework, so I didn't know what would they do in that case. – Yanko Hernández Alvarez Jun 20 '16 at 13:06
3

I'd recommend using Microsoft's StyleCop.

It is like FxCop, but for style-related issues. The default configuration matches Microsoft's internal style guides, but it can be customised for your project.

It can take a bit to get used to, but it definitely makes your code nicer.

You can include it in your build process to automatically check for violations.

Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
devstuff
  • 8,097
  • 1
  • 25
  • 31
  • I completely disagree with StyleCop on this one. yes it's good but I prefer to use Int32, why? as to avoid answers like the two downvoted ones. People confuse Int32 with how ints are represented in C – John Demetriou Nov 27 '15 at 08:24
3

int is the same as System.Int32 and when compiled it will turn into the same thing in CIL.

We use int by convention in C# since C# wants to look like C and C++ (and Java) and that is what we use there...

BTW, I do end up using System.Int32 when declaring imports of various Windows API functions. I am not sure if this is a defined convention or not, but it reminds me that I am going to an external DLL...
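A sketch of that convention, using the real user32 function GetSystemMetrics (the declaration compiles anywhere, but the call itself is Windows-only, so treat this as illustrative):

```csharp
using System;
using System.Runtime.InteropServices;

static class NativeMethods
{
    // Spelling out Int32 in the P/Invoke signature is a visual reminder
    // that this call crosses into an external DLL with a fixed-width ABI.
    [DllImport("user32.dll")]
    public static extern Int32 GetSystemMetrics(Int32 nIndex);
}
```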

Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Jack Bolding
  • 3,711
  • 3
  • 37
  • 43
3

Once upon a time, the int datatype was pegged to the register size of the machine targeted by the compiler. So, for example, a compiler for a 16-bit system would use a 16-bit integer.

However, we thankfully don't see much 16-bit any more. When 64-bit started to become popular, people were more concerned with compatibility with older software, and 32-bit had been around so long that for most compilers an int is just assumed to be 32 bits.

Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Joel Coehoorn
  • 362,140
  • 107
  • 528
  • 764
3

int is the C# language's shortcut for System.Int32

Whilst this does mean that Microsoft could change this mapping, a post on FogCreek's discussions stated [source]

"On the 64 bit issue -- Microsoft is indeed working on a 64-bit version of the .NET Framework but I'm pretty sure int will NOT map to 64 bit on that system.

Reasons:

1. The C# ECMA standard specifically says that int is 32 bit and long is 64 bit.

2. Microsoft introduced additional properties & methods in Framework version 1.1 that return long values instead of int values, such as Array.GetLongLength in addition to Array.GetLength.

So I think it's safe to say that all built-in C# types will keep their current mapping."

Ray Hayes
  • 14,395
  • 8
  • 50
  • 76
  • If a 64-bit version is introduced, they'll probably add 'nativeint' to C# (as it is currently used in F#). This just reinforces that introducing the 'int' and defining this as an Int32 was a mistake! And so inconsistent from an API (i.e. ReadInt32 not ReadInt), color (dark vs light blue) and case sensitivity (DateTime vs int) standpoint. i.e. why does the value type 'DateTime' not have an alias like Int32? – Carlo Bos Mar 06 '17 at 16:49
2

You should not care. If size is a concern I would use byte, short, int, then long. The only reason you would use something larger than an Int32 is if you need a number higher than 2147483647 or lower than -2147483648.
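A sketch of choosing by range rather than by habit (variable names are illustrative, not from the answer):

```csharp
using System;

class RangeDemo
{
    static void Main()
    {
        int millions = 8_100;            // comfortably fits an int
        long population = 8_100_000_000L; // exceeds 2147483647, so it needs a long
        Console.WriteLine(population > int.MaxValue); // True
        Console.WriteLine(millions);
    }
}
```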

Other than that I wouldn't care, there are plenty of other items to be concerned with.

David Basarab
  • 67,994
  • 42
  • 125
  • 155
  • I'd add that you can use the keyword "long" instead of System.Int64 – Keith Sep 15 '08 at 13:06
  • 22
    You misunderstood the question. The OP is asking if there is a difference between the declarations "int i" and "Int32 i". – raven Sep 15 '08 at 13:08
2

int and Int32 are the same. int is an alias for Int32.

Michał Powaga
  • 20,726
  • 7
  • 45
  • 60
Jesper Kihlberg
  • 509
  • 1
  • 4
  • 15
  • int is not an alias, it is a keyword. See the other answers. – Timores Apr 04 '10 at 13:00
  • int is definitely a keyword for the language but it can also be called an alias of System.Int32 . In addition, another way to think of this is that you have `using int = System.Int32;` directive for all of your source code files. – uygar donduran Jan 13 '13 at 14:51
2

It makes no difference in practice and in time you will adopt your own convention. I tend to use the keyword when assigning a type, and the class version when using static methods and such:

int total = Int32.Parse("1009");
phuclv
  • 27,258
  • 11
  • 104
  • 360
1

I use int in the event that Microsoft changes the default implementation for an integer to some newfangled version (let's call it Int32b).

Microsoft can then change the int alias to Int32b, and I don't have to change any of my code to take advantage of their new (and hopefully improved) integer implementation.

The same goes for any of the type keywords.

Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
1

int is an alias for System.Int32, as defined in this table: Built-In Types Table (C# Reference)

Michał Powaga
  • 20,726
  • 7
  • 45
  • 60
Jim T
  • 11,910
  • 5
  • 26
  • 41
0

According to the Immediate Window in Visual Studio 2012, Int32 is int and Int64 is long. Here is the output:

sizeof(int)
4
sizeof(Int32)
4
sizeof(Int64)
8
Int32
int
    base {System.ValueType}: System.ValueType
    MaxValue: 2147483647
    MinValue: -2147483648
Int64
long
    base {System.ValueType}: System.ValueType
    MaxValue: 9223372036854775807
    MinValue: -9223372036854775808
int
int
    base {System.ValueType}: System.ValueType
    MaxValue: 2147483647
    MinValue: -2147483648
Selim
  • 43
  • 4
0

You should not care in most programming languages, unless you need to write very specific mathematical functions, or code optimized for one specific architecture... Just make sure the size of the type is enough for you (use something bigger than an int if you know you'll need more than 32 bits, for example).

Stacker
  • 111
  • 2
0

It doesn't matter. int is the language keyword and Int32 its actual system type.

See also my answer here to a related question.

Community
  • 1
  • 1
Keith
  • 133,927
  • 68
  • 273
  • 391
0

Using the Int32 type requires a namespace reference to System, or fully qualifying (System.Int32). I tend toward int, because it doesn't require a namespace import, therefore reducing the chance of namespace collision in some cases. When compiled to IL, there is no difference between the two.

Michał Powaga
  • 20,726
  • 7
  • 45
  • 60
Michael Meadows
  • 26,178
  • 4
  • 45
  • 60
0

Also consider Int16. If you need to store an integer in memory in your application and you are concerned about the amount of memory used, then you could go with Int16, since it uses less memory and has a smaller min/max range than Int32 (which is what int is).
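The trade-off in numbers (a minimal sketch):

```csharp
using System;

class Int16VsInt32
{
    static void Main()
    {
        // Int16 (alias: short) uses half the storage of Int32 (alias: int),
        // at the cost of a much smaller range.
        Console.WriteLine($"{sizeof(short)} bytes, {short.MinValue}..{short.MaxValue}");
        Console.WriteLine($"{sizeof(int)} bytes, {int.MinValue}..{int.MaxValue}");
    }
}
```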

Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Chris Pietschmann
  • 28,196
  • 35
  • 116
  • 161
0

A while back I was working on a project with Microsoft when we had a visit from someone on the Microsoft .NET CLR product team. This person coded examples and when he defined his variables he used “Int32” vs. “int” and “String” vs. “string”.

I had remembered seeing this style in other example code from Microsoft. So, I did some research and found that everyone says that there is no difference between the “Int32” and “int” except for syntax coloring. In fact, I found a lot of material suggesting you use “Int32” to make your code more readable. So, I adopted the style.

The other day I did find a difference! The compiler doesn't allow you to declare an enum's underlying type using "Int32", but it does when you use "int". Don't ask me why, because I don't know yet.

Example:

public enum MyEnum : Int32
{
    AEnum = 0
}

This doesn't compile, but this works:

public enum MyEnum : int
{
    AEnum = 0
}

Taken from: Int32 notation vs. int

Peter Mortensen
  • 28,342
  • 21
  • 95
  • 123
Schmuli
  • 713
  • 1
  • 6
  • 20
0

Use of int or Int32 is the same; int is just sugar to simplify the code for the reader.

Use the nullable variant int? or Int32? when you work with databases on fields containing null. That will save you from a lot of runtime issues.
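For example (a sketch; the variable name is made up):

```csharp
using System;

class NullableDemo
{
    static void Main()
    {
        // int? is shorthand for Nullable<Int32>: it can represent a
        // database NULL without resorting to magic values.
        int? ageFromDb = null;
        Console.WriteLine(ageFromDb.HasValue ? ageFromDb.Value.ToString() : "NULL");

        ageFromDb = 42;
        Console.WriteLine(ageFromDb ?? -1); // 42
    }
}
```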

bovium
  • 2,699
  • 6
  • 23
  • 35
0

Some compilers have different sizes for int on different platforms (not C# specific)

Some coding standards (MISRA C) require that all types used are size-specified (i.e. Int32 and not int).

It is also good to specify prefixes for different type variables (e.g. b for 8 bit byte, w for 16 bit word, and l for 32 bit long word => Int32 lMyVariable)

You should care because it makes your code more portable and more maintainable.

Portable may not be applicable to C# if you are always going to use C# and the C# specification will never change in this regard.

Maintainable, imho, will always be applicable, because the person maintaining your code may not be aware of this particular C# specification and might miss a bug where the int occasionally becomes more than 2147483647.

In a simple for-loop that counts, for example, the months of the year, you won't care, but when you use the variable in a context where it could possibly overflow, you should care.

You should also care if you are going to do bit-wise operations on it.
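A sketch of the overflow case the answer warns about (int.Parse is used only to keep the second addition from being folded at compile time):

```csharp
using System;

class OverflowDemo
{
    static void Main()
    {
        int n = int.MaxValue;    // 2147483647
        unchecked { n = n + 1; } // silently wraps around
        Console.WriteLine(n);    // -2147483648

        try
        {
            // In a checked context the same addition throws instead.
            checked { n = int.MaxValue + int.Parse("1"); }
        }
        catch (OverflowException)
        {
            Console.WriteLine("overflow detected");
        }
    }
}
```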

user11211
  • 1,147
  • 9
  • 7
  • It makes no difference in .Net - int is always Int32 and long is always Int64 – Keith Sep 17 '08 at 11:23
  • `It is also good to specify prefixes for different type variables` Hungarian notation is largely deprecated nowadays and most coding styles discourage the use of it. Internal conventions of software companies also often forbid that notation – phuclv Nov 30 '19 at 06:10
-1

The number of bytes int can hold depends on what you compiled it for, so when you compile your program for 32-bit processors it holds numbers from -2^31 to 2^31-1, while compiled for 64-bit it can hold from -2^63 to 2^63-1. int32 will always hold 2^32 values.

Edit : Ignore my answer, I didn't see C#. My answer was intended for C and C++. I've never used C#

RobbieGee
  • 1,531
  • 3
  • 15
  • 17