Unless you are reading this book as part of a class that instructs you otherwise, you should not worry about memorizing the different numeric types and their sizes. It is important to understand that there are limits on the values each type can store, and to know how to choose a larger type when you need one.
All data needs to be stored as bits (as a sequence of 0s and 1s). Those sequences are always a fixed size. For instance, an int is typically 32 bits. This means that an int can store \(2^{32}\) different values. That is enough different values to represent the numbers -2,147,483,648 to 2,147,483,647. (For more on how this actually works, as well as how floating point values are represented in binary, see the Data Representation chapter of Welcome to CS.)
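If you want to see the exact range on your own machine, a minimal sketch like the following uses the standard std::numeric_limits facility to print the smallest and largest values an int can hold; on a typical platform with 32-bit ints it prints the two numbers quoted above.

```cpp
#include <iostream>
#include <limits>

int main() {
    // Ask the compiler for the actual range of int on this platform
    std::cout << std::numeric_limits<int>::min() << std::endl;  // typically -2147483648
    std::cout << std::numeric_limits<int>::max() << std::endl;  // typically 2147483647
    return 0;
}
```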
For integer values, the number of bits determines the largest and smallest values that can be stored. Exceeding this range leads to integer overflow or underflow, resulting in incorrect calculations or unexpected behavior, as seen in this program:
The program tries to calculate 2 * 2,000,000,000. But that result is larger than the largest possible int, so the number “wraps around” after reaching the largest possible value (2,147,483,647) to the lowest possible value (-2,147,483,648). The result is -294,967,296.
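A minimal sketch of this kind of overflow, assuming a typical 32-bit int (the variable names are illustrative, not necessarily those used in the listing), looks something like this:

```cpp
#include <iostream>

int main() {
    int x = 2000000000;     // close to the largest possible int
    int result = 2 * x;     // the true answer, 4,000,000,000, does not fit
    std::cout << result << std::endl;   // typically prints -294967296
    return 0;
}
```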
So how do you store a value greater than 2,147,483,647? You use a data type that employs more bits to store an integer value. The following data types can be used instead of int to store integer values:
Try changing the type of the variable x in Listing 4.6.1 to int64_t and see what happens. Since the variable can now safely store 4,000,000,000, the program should produce the correct result.
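Using the fixed-width type, a sketch of the corrected version might look like the following (again, the variable names are just for illustration):

```cpp
#include <cstdint>
#include <iostream>

int main() {
    int64_t x = 2000000000;   // a 64-bit integer has room to spare
    int64_t result = 2 * x;   // 4,000,000,000 easily fits in an int64_t
    std::cout << result << std::endl;   // prints 4000000000
    return 0;
}
```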
The smaller types, such as int8_t and int16_t, are useful when you want to save memory or when you know the values will not exceed their range. In a short program storing a few values, there is no real advantage to storing a value as an int8_t instead of an int. But if you were storing millions of values, the smaller types could save a significant amount of memory.
The intXX_t types are relatively new in C++. There are also older types like short, long, and long long. We will avoid these completely, as the exact size of each is determined by the compiler: a long can use a different number of bits on different platforms! The same is also true of int, so if you want to guarantee that your variables are stored using 32 bits, you need to use int32_t.
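You can check the sizes used on your own platform with the sizeof operator, which reports the size of a type in bytes. A minimal sketch, assuming 8-bit bytes (true on essentially all modern platforms):

```cpp
#include <cstdint>
#include <iostream>

int main() {
    // sizeof reports bytes; multiply by 8 to get bits
    std::cout << "short:     " << sizeof(short) * 8 << " bits\n";
    std::cout << "int:       " << sizeof(int) * 8 << " bits\n";
    std::cout << "long:      " << sizeof(long) * 8 << " bits\n";     // 32 on some platforms, 64 on others
    std::cout << "long long: " << sizeof(long long) * 8 << " bits\n";
    std::cout << "int32_t:   " << sizeof(int32_t) * 8 << " bits\n";  // always 32
    return 0;
}
```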
Sometimes, we don't care about storing negative values. If so, spending some of the representational power of our bits on negative values is a waste. So C++ provides unsigned integer types that store only 0 and positive values. This means that with the same number of bits, you can store a maximum value roughly twice as large.
The unsigned types are useful for squeezing more positive range out of the same number of bits. They are also useful in situations where we want to prevent a negative value from being stored.
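For example, a 32-bit unsigned integer can store values from 0 up to 4,294,967,295, so a value that overflows a plain int fits without using any extra bits. A minimal sketch (the variable name is just for illustration):

```cpp
#include <cstdint>
#include <iostream>

int main() {
    uint32_t views = 4000000000u;   // too large for int32_t, fine for uint32_t
    std::cout << views << std::endl;   // prints 4000000000
    return 0;
}
```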
For floating point types, the number of bits determines not just the maximum and minimum values we can represent, but the number of digits of accuracy we can count on. A double is generally 64 bits, in which case it can store from about \(1.7 \times 10^{-308}\) to \(1.7 \times 10^{308}\) with about 15 digits of precision.
It should be clear that there is much less risk of “running out of room” when using a double. \(1.7 \times 10^{308}\) is a very large number. However, a double is also limited to about 15 digits of precision, and that might be a concern. If you need to represent larger values or need more precision, there is the long double, which may, depending on the platform, use more bits to represent a floating point value and thus provide more accuracy.
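One way to see the precision limit is to add a small number to a very large one: past about 15-16 significant digits, the addition simply gets lost. A minimal sketch:

```cpp
#include <iomanip>
#include <iostream>

int main() {
    double big = 10000000000000000.0;   // 1 followed by 16 zeros
    double bigger = big + 1.0;          // the +1 is beyond a double's precision
    std::cout << std::setprecision(20) << big << "\n" << bigger << std::endl;
    // Both lines typically print 10000000000000000 -- the 17th digit is lost
    return 0;
}
```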
Occasionally, if your program is storing large numbers of floating point values (thousands of them) and does not need the full accuracy of a double, you might choose to use the float type instead. A float generally uses 32 bits and provides about 7 digits of accuracy. As with the integer type int8_t, this is mostly useful when storing large numbers of values that are known to be within a limited range.
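The loss of precision is easy to see when a float and a double store the same nine-digit value. A minimal sketch:

```cpp
#include <iomanip>
#include <iostream>

int main() {
    float  approximate = 123456789.0f;   // nine digits: more than a float can hold
    double exact       = 123456789.0;    // well within a double's ~15 digits
    std::cout << std::setprecision(12) << approximate << "\n" << exact << std::endl;
    // The float typically prints 123456792 -- only the first ~7 digits are reliable
    return 0;
}
```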