# c – size_t vs. uintptr_t

## The Question:

250 people think this question is useful

The C standard guarantees that size_t is a type that can hold any array index. This means that, logically, size_t should be able to hold any pointer type. I’ve read on some sites that I found on the Googles that this is legal and/or should always work:

```c
void *v = malloc(10);
size_t s = (size_t) v;
```

So then in C99, the standard introduced the intptr_t and uintptr_t types, which are signed and unsigned types guaranteed to be able to hold pointers:

```c
uintptr_t p = (uintptr_t) v;
```

So what is the difference between using size_t and uintptr_t? Both are unsigned, and both should be able to hold any pointer type, so they seem functionally identical. Is there any real compelling reason to use uintptr_t (or better yet, a void *) rather than a size_t, other than clarity? In an opaque structure, where the field will be handled only by internal functions, is there any reason not to do this?

By the same token, ptrdiff_t has long been a signed type capable of holding pointer differences, and therefore capable of holding almost any pointer, so how is it distinct from intptr_t?

Aren’t all of these types basically serving trivially different versions of the same function? If not, why? What can I do with one of them that I can’t do with another? If so, why did C99 add two essentially superfluous types to the language?

I’m willing to disregard function pointers, as they don’t apply to the current problem, but feel free to mention them, as I have a sneaking suspicion they will be central to the “correct” answer.

239 people think this answer is useful

size_t is a type that can hold any array index. This means that, logically, size_t should be able to hold any pointer type

Not necessarily! Hark back to the days of segmented 16-bit architectures for example: an array might be limited to a single segment (so a 16-bit size_t would do) BUT you could have multiple segments (so a 32-bit intptr_t type would be needed to pick the segment as well as the offset within it). I know these things sound weird in these days of uniformly addressable unsegmented architectures, but the standard MUST cater for a wider variety than “what’s normal in 2009”, you know!-)

89 people think this answer is useful

“The C standard guarantees that size_t is a type that can hold any array index. This means that, logically, size_t should be able to hold any pointer type.”

This is actually a fallacy (a misconception resulting from incorrect reasoning)(a). You may think the latter follows from the former but that’s not actually the case.

Pointers and array indexes are not the same thing. It’s quite plausible to envisage a conforming implementation that limits arrays to 65536 elements but allows pointers to address any value into a massive 128-bit address space.

C99 states that the upper limit of a size_t variable is defined by SIZE_MAX and this can be as low as 65535 (see C99 TR3, 7.18.3, unchanged in C11). Pointers would be fairly limited if they were restricted to this range in modern systems.

In practice, you’ll probably find that your assumption holds, but that’s not because the standard guarantees it — because it actually doesn’t.

(a) This is not some form of personal attack by the way, just stating why your statements are erroneous in the context of critical thinking. For example, the following reasoning is also invalid:

All puppies are cute. This thing is cute. Therefore this thing must be a puppy.

The cuteness or otherwise of puppies has no bearing here; all I’m stating is that the two facts do not lead to the conclusion, because the first two sentences allow for the existence of cute things that are not puppies.

This is similar to your first statement not necessarily mandating the second.

36 people think this answer is useful

I’ll let all the other answers stand for themselves regarding the reasoning with segment limitations, exotic architectures, and so on.

Isn’t the simple difference in names reason enough to use the proper type for the proper thing?

If you’re storing a size, use size_t. If you’re storing a pointer, use intptr_t. A person reading your code will instantly know that “aha, this is a size of something, probably in bytes”, and “oh, here’s a pointer value being stored as an integer, for some reason”.

Otherwise, you could just use unsigned long (or, in these here modern times, unsigned long long) for everything. Size is not everything; type names carry meaning, which is useful since it helps describe the program.

12 people think this answer is useful

It’s possible that the size of the largest array is smaller than a pointer. Think of segmented architectures – pointers may be 32-bits, but a single segment may be able to address only 64KB (for example the old real-mode 8086 architecture).

While these aren’t commonly in use in desktop machines anymore, the C standard is intended to support even small, specialized architectures. There are still embedded systems being developed with 8 or 16 bit CPUs for example.

5 people think this answer is useful

I would imagine (and this goes for all type names) that it better conveys your intentions in code.

For example, even though unsigned short and wchar_t are the same size on Windows (I think), using wchar_t instead of unsigned short shows the intention that you will use it to store a wide character, rather than just some arbitrary number.

3 people think this answer is useful

Looking both backwards and forwards, and recalling that various oddball architectures were scattered about the landscape, I’m pretty sure they were trying to wrap all existing systems and also provide for all possible future systems.

So sure, the way things settled out, we have so far needed not so many types.

But even in LP64, a rather common paradigm, we needed size_t and ssize_t for the system call interface. One can imagine a more constrained legacy or future system, where using a full 64-bit type is expensive and they might want to punt on I/O ops larger than 4GB but still have 64-bit pointers.

I think you have to wonder: what might have been developed, what might come in the future. (Perhaps 128-bit distributed-system internet-wide pointers, but no more than 64 bits in a system call, or perhaps even a “legacy” 32-bit limit. 🙂) Imagine that legacy systems might get new C compilers…

Also, look at what existed around then. Besides the zillion 286 real-mode memory models, how about the CDC 60-bit word / 18-bit pointer mainframes? How about the Cray series? Never mind normal ILP64, LP64, LLP64. (I always thought Microsoft was pretentious with LLP64; it should have been P64.) I can certainly imagine a committee trying to cover all bases…

-10 people think this answer is useful
```c
int main(void) {
    int a[4] = {0, 1, 5, 3};
    int a0 = a[0];       /* ordinary indexing                  */
    int a1 = *(a + 1);   /* a[i] is defined as *(a + i)        */
    int a2 = *(2 + a);   /* pointer/index addition commutes... */
    int a3 = 3[a];       /* ...so i[a] is also legal           */
    (void)a0; (void)a1; (void)a3;
    return a2;
}
```

Implying that intptr_t must always substitute for size_t and vice versa.