Integer Basics
An integer is a number that can be written without a fractional component. The set of integers includes the positive whole numbers, their negatives, and zero. Integers are fundamental in mathematics and computer science, and their set is denoted $\mathbb{Z}$.
1. Definition and Classification
Integers are broadly classified into three types:
- Positive Integers: Also called the natural numbers (in the convention that excludes zero), these are the integers greater than zero: $1, 2, 3, \dots$
- Zero: The neutral integer, which is neither positive nor negative.
- Negative Integers: Integers less than zero: $-1, -2, -3, \dots$
$$ \mathbb{Z} = \{\dots, -3, -2, -1, 0, 1, 2, 3, \dots\} $$
[Image of Integer number line]
2. Integer Representation in Computers
Computers store all data using Binary (0s and 1s). The way integers are represented depends on their size and whether they are positive or negative.
① Storage Size
Integers are stored in a fixed number of bits (e.g., 8, 16, 32, or 64), corresponding to types such as int or long in C. A larger bit width allows a wider range of representable values.
② Sign Handling: Two's Complement
Computers use the Two's Complement system to represent negative integers and simplify subtraction operations into addition.
- Principle: The most significant bit (MSB) acts as the sign bit: $0$ for non-negative values and $1$ for negative values.
- Advantage: It simplifies the Arithmetic Logic Unit (ALU) design by unifying addition and subtraction.
Example (8-bit):
- Positive 5: $\mathbf{0}0000101$
- Negative 5: invert the bits of $00000101$ to get $11111010$, then add $1$: $\mathbf{1}1111011$
3. Applications of Integers
- Counting: Used for iteration and tracking the number of items.
- Indexing: Used to reference the position of elements in arrays or lists.
- Bitwise Operations: As seen in C programming, integers are manipulated directly at the bit level for tasks like low-level hardware control.