## The Question

*301 people think this question is useful*

In C, the integer (for a 32-bit machine) is 32 bits, and it ranges from -32,768 to +32,767.
In Java, the integer (long) is also 32 bits, but it ranges from -2,147,483,648 to +2,147,483,647.

I do not understand how the range is different in Java, even though the number of bits is the same. Can someone explain this?


## The Answer 1

*403 people think this answer is useful*

In **C**, the language itself does not determine the representation of certain data types. It can vary from machine to machine; on embedded systems the `int` can be 16 bits wide, though usually it is 32 bits.

The only requirement is that `short int` <= `int` <= `long int` by size. Also, there is a recommendation that `int` should represent the native capacity of the processor.

All types are signed. The `unsigned` modifier allows you to use the highest bit as part of the value (otherwise it is reserved for the sign bit).
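
For example, a minimal sketch (assuming a two's-complement machine) showing the same 16-bit pattern read both ways:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t bits = 0xFFFF;              /* all 16 bits set */
    int16_t  as_signed = (int16_t)bits;  /* implementation-defined; -1 on two's-complement machines */
    printf("unsigned: %u\n", (unsigned)bits);  /* 65535: highest bit is part of the value */
    printf("signed:   %d\n", as_signed);       /* -1: highest bit is the sign bit */
    return 0;
}
```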

Here’s a short table of the possible values for the possible data types:

| width | minimum | maximum |
|---|---|---|
| signed 8 bit | -128 | +127 |
| signed 16 bit | -32,768 | +32,767 |
| signed 32 bit | -2,147,483,648 | +2,147,483,647 |
| signed 64 bit | -9,223,372,036,854,775,808 | +9,223,372,036,854,775,807 |
| unsigned 8 bit | 0 | +255 |
| unsigned 16 bit | 0 | +65,535 |
| unsigned 32 bit | 0 | +4,294,967,295 |
| unsigned 64 bit | 0 | +18,446,744,073,709,551,615 |
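
Every row follows from the same two formulas: an n-bit signed type covers -2^{n-1} to 2^{n-1}-1, and an n-bit unsigned type covers 0 to 2^n-1. A minimal C99 sketch that reproduces the table from those formulas:

```c
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

static void print_row(unsigned n) {
    uint64_t umax = ~UINT64_C(0) >> (64 - n);  /* 2^n - 1, safe even for n = 64 */
    int64_t  smax = (int64_t)(umax >> 1);      /* 2^(n-1) - 1 */
    int64_t  smin = -smax - 1;                 /* -2^(n-1) */
    printf("signed %2u bit: %" PRId64 " .. %" PRId64 "\n", n, smin, smax);
    printf("unsigned %2u bit: 0 .. %" PRIu64 "\n", n, umax);
}

int main(void) {
    print_row(8);
    print_row(16);
    print_row(32);
    print_row(64);
    return 0;
}
```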

In **Java**, the Java Language Specification determines the representation of the data types.

The order is: `byte` 8 bits, `short` 16 bits, `int` 32 bits, `long` 64 bits. All of these types are *signed*; there are no unsigned versions. However, bit manipulations treat the numbers as if they were unsigned (that is, all bits are handled correctly).

The character data type `char` is 16 bits wide, *unsigned*, and holds characters using UTF-16 encoding (however, it is possible to assign a `char` an arbitrary unsigned 16-bit integer that represents an invalid character code point).

| type | width | minimum | maximum |
|---|---|---|---|
| `byte` (signed) | 8 bit | -128 | +127 |
| `short` (signed) | 16 bit | -32,768 | +32,767 |
| `int` (signed) | 32 bit | -2,147,483,648 | +2,147,483,647 |
| `long` (signed) | 64 bit | -9,223,372,036,854,775,808 | +9,223,372,036,854,775,807 |
| `char` (unsigned) | 16 bit | 0 | +65,535 |

## The Answer 2

*76 people think this answer is useful*

> In C, the integer (for 32 bit machine) is 32 bit and it ranges from -32768 to +32767.

Wrong. A 32-bit signed integer in 2’s complement representation has the range -2^{31} to 2^{31}-1, which is -2,147,483,648 to 2,147,483,647.

## The Answer 3

*20 people think this answer is useful*

A 32-bit integer ranges from -2,147,483,648 to 2,147,483,647. However, the fact that you are on a 32-bit machine does not mean your `C` compiler uses 32-bit integers.

## The Answer 4

*15 people think this answer is useful*

The C language definition specifies *minimum* ranges for various data types. For `int`, this minimum range is -32767 to 32767, meaning an `int` must be *at least* 16 bits wide. An implementation is free to provide a wider `int` type with a correspondingly wider range. For example, on the SLES 10 development server I work on, the range is -2147483648 to 2147483647.

There are still some systems out there that use 16-bit `int` types (All The World Is *Not* A ~~VAX~~ x86), but there are plenty that use 32-bit `int` types, and maybe a few that use 64-bit.

The C language was designed to run on different architectures. Java was designed to run in a virtual machine that hides those architectural differences.

## The Answer 5

*9 people think this answer is useful*

The strict equivalent of the Java `int` is `long int` in C.

Edit:
If `int32_t` is defined, then it is the equivalent in terms of precision. `long int` guarantees the precision of the Java `int`, because it is guaranteed to be at least 32 bits in size.
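
A minimal sketch of that equivalence, assuming the implementation provides the optional `int32_t` type:

```c
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    /* int32_t, when available, has exactly the range of a Java int */
    int32_t i = INT32_MAX;               /* 2147483647, like Java's Integer.MAX_VALUE */
    printf("%" PRId32 "\n", i);
    printf("%" PRId32 "\n", INT32_MIN);  /* -2147483648, like Java's Integer.MIN_VALUE */
    return 0;
}
```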

## The Answer 6

*7 people think this answer is useful*

That’s because in C, an integer on a 32-bit machine doesn’t mean that 32 bits are used for storing it; it may be 16 bits as well. It depends on the machine (it is implementation-dependent).

## The Answer 7

*7 people think this answer is useful*

The poster has their Java types mixed up. In Java, their C `int` is a `short`:

- `short` (16 bit): -32,768 to 32,767
- `int` (32 bit): -2,147,483,648 to 2,147,483,647

http://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html

## The Answer 8

*4 people think this answer is useful*

Actually the size in bits of `int`, `short`, and `long` depends on the compiler implementation.

E.g. on my 64-bit Ubuntu I have `short` at `32` bits, while on another, 32-bit Ubuntu version it is `16` bits.
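
You can check what your own compiler chose with a short sketch like this (the output depends entirely on the implementation):

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* sizeof yields bytes; CHAR_BIT (almost always 8) converts to bits */
    printf("short: %zu bits\n", sizeof(short) * CHAR_BIT);
    printf("int:   %zu bits\n", sizeof(int)   * CHAR_BIT);
    printf("long:  %zu bits\n", sizeof(long)  * CHAR_BIT);
    return 0;
}
```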

## The Answer 9

*1 person thinks this answer is useful*

In C, the range of `__int32` (a Microsoft compiler extension) is -2,147,483,648 to 2,147,483,647. The full ranges are:

| type | minimum | maximum |
|---|---|---|
| unsigned short | 0 | 65,535 |
| signed short | -32,768 | 32,767 |
| unsigned long | 0 | 4,294,967,295 |
| signed long | -2,147,483,648 | 2,147,483,647 |

There is no guarantee that an ‘int’ will be 32 bits. If you want to use variables of a specific size, particularly when writing code that involves bit manipulations, you should use the ‘Standard Integer Types’ from `<stdint.h>`.
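
For instance, a minimal sketch with `uint32_t` (assuming the implementation provides it; the mask is then exactly 32 bits everywhere, unlike a bare `unsigned int`):

```c
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    uint32_t flags = 0;
    flags |= UINT32_C(1) << 31;          /* set the highest of exactly 32 bits */
    printf("0x%08" PRIX32 "\n", flags);  /* prints 0x80000000 */
    return 0;
}
```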

In Java:

The int data type is a 32-bit signed two’s complement integer. It has a minimum value of -2,147,483,648 and a maximum value of 2,147,483,647 (inclusive).

## The Answer 10

*1 person thinks this answer is useful*

It is actually really simple to understand; you can even compute it with the Google calculator. You have 32 bits for an `int`, and computers are binary, so each bit can hold 2 values.

If you compute 2^{32} you get 4,294,967,296. Divide this number by 2 (because half of the values are negative integers and the other half are non-negative) and you get 2,147,483,648. Notice that 2,147,483,648 is greater than the actual maximum, 2,147,483,647, by 1. That is because one of the values has to represent 0: the non-negative half must fit both zero and the positive numbers, so the largest positive value is 2^{31} - 1 = 2,147,483,647, while the negative half gets all 2,147,483,648 of its values, down to -2^{31} = -2,147,483,648.

And that’s it. It depends on the machine, not on the language.

## The Answer 11

*0 people think this answer is useful*

In standard C, you can use `INT_MAX` as the maximum ‘int’ value; this constant must be defined in `<limits.h>`. Similar constants are defined for the other types (http://www.acm.uiuc.edu/webmonkeys/book/c_guide/2.5.html). As stated, these constants are implementation-dependent, but each has a minimum magnitude dictated by the minimum number of bits for each type, as specified in the standard.
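
For example, a minimal sketch that prints a few of these constants (the values are whatever your implementation defines):

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    printf("INT_MIN  = %d\n",  INT_MIN);
    printf("INT_MAX  = %d\n",  INT_MAX);
    printf("LONG_MIN = %ld\n", LONG_MIN);
    printf("LONG_MAX = %ld\n", LONG_MAX);
    printf("UINT_MAX = %u\n",  UINT_MAX);
    return 0;
}
```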