For signed integer arithmetic there exist several representations: sign-and-magnitude, one's complement, two's complement, symmetric complement and excess-N. Sign-and-magnitude sets the most significant bit to 0 for a positive number and to 1 for a negative number; the remaining bits encode the magnitude. This makes it easy to read the value of a bit pattern and makes certain operations trivial to implement (absolute value, for example), but it suffers from having a negative zero and from a slow cycle time in hardware.
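A minimal Python sketch of sign-and-magnitude encoding and decoding (the helper names here are hypothetical, chosen only for illustration):

```python
def sm_encode(value, bits=8):
    """Encode an integer as sign-and-magnitude: top bit is the sign,
    remaining bits are the magnitude."""
    sign = 1 if value < 0 else 0
    magnitude = abs(value)
    assert magnitude < (1 << (bits - 1)), "magnitude out of range"
    return (sign << (bits - 1)) | magnitude

def sm_decode(pattern, bits=8):
    """Decode a sign-and-magnitude bit pattern back to an integer."""
    sign = (pattern >> (bits - 1)) & 1
    magnitude = pattern & ((1 << (bits - 1)) - 1)
    return -magnitude if sign else magnitude
```

Note that both 00000000 and 10000000 decode to zero, which is the negative-zero problem mentioned above.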

In one's complement, positive numbers use the same simple binary encoding as two's complement and sign-and-magnitude. Negative values are the bitwise complement of the corresponding positive value. The largest positive value has the sign (high-order) bit off (0) and all other bits on (1); the most negative value has the sign bit on (1) and all other bits off (0). (e.g. if 4 = 0100 then -4 = 1011.) Thus the value of a bit pattern is easy to read and certain operations are fast to implement, but it also suffers from a negative zero and a slow cycle time in hardware (though faster than sign-and-magnitude).
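A corresponding Python sketch for one's complement (again with hypothetical helper names):

```python
def oc_encode(value, bits=4):
    """Encode an integer in one's complement within the given width."""
    if value >= 0:
        return value                      # positive values: plain binary
    return (~(-value)) & ((1 << bits) - 1)  # negative: bitwise complement

def oc_decode(pattern, bits=4):
    """Decode a one's-complement bit pattern back to an integer."""
    if pattern >> (bits - 1) == 0:
        return pattern                    # sign bit clear: plain binary
    return -((~pattern) & ((1 << bits) - 1))
```

Here the all-ones pattern 1111 is the negative zero: it decodes to 0 just like 0000.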

A two's-complement number system encodes positive and negative numbers in a plain binary representation in which each bit is weighted by its position, except that the most significant bit carries a negative weight. (e.g. if 1 = 0001 then -1 = 1111.) This is the fastest representation in terms of cycle time and the simplest in terms of transistors; however, the range of negative numbers is one larger than the range of positive numbers, and humans find the negation transformation harder to perform by eye.

Symmetric complement is functionally identical to one's complement, except that it reassigns the -0 pattern to NaN (Not a Number). (e.g. if 4 = 0100 then -4 = 1011, while 0 = 0000 and NaN = 1111.) The value of a bit pattern is easy to read, a division by zero can be signalled without throwing an exception, and certain operations are trivial to implement (absolute value, for example); however, its cycle time in hardware is slow (the slowest of them all, though not by much).
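The negative weighting of the top bit, and the symmetric-complement NaN convention described above, can both be sketched in Python (the symmetric-complement decoder is an interpretation of the scheme as this text describes it, not an established standard):

```python
def tc_value(pattern, bits=4):
    """Interpret a bit pattern as two's complement: the top bit carries
    weight -(2**(bits-1)); every other bit keeps its usual positive weight."""
    value = pattern & ((1 << (bits - 1)) - 1)  # low bits, positive weights
    if pattern >> (bits - 1):                  # sign bit set
        value -= 1 << (bits - 1)               # subtract 2**(bits-1)
    return value

def sym_decode(pattern, bits=4):
    """Symmetric complement as described above: one's complement, but the
    all-ones pattern (one's-complement -0) is reassigned to NaN."""
    if pattern == (1 << bits) - 1:
        return float("nan")
    if pattern >> (bits - 1) == 0:
        return pattern
    return -((~pattern) & ((1 << bits) - 1))
```

The asymmetric range of two's complement shows up directly: in 4 bits, 1000 is -8, but +8 cannot be represented.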

Excess-N, also called biased representation, uses a pre-specified number N as a biasing value: a value is represented by the unsigned number that is N greater than the intended value. Thus 0 is represented by N, and -N is represented by the all-zeros bit pattern. (e.g. with N = 127, 0 = 01111111 and -127 = 00000000.) This makes comparison and scaling logic fast and efficient, which makes it ideal for the exponent of floating-point numbers; however, its cycle time and transistor count have relegated it to floating-point use only. Given that ALUs are no longer the limiting factor of CPU clock speed, and the cycle time and transistor count of the ALU have become insignificant, would it not be logical to use symmetric complement, which is superior in important ways for software while its hardware costs are negligible?
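The bias scheme is simple enough to sketch in a few lines of Python; the comparison assertion at the end illustrates why it suits floating-point exponents, since ordering the biased patterns as plain unsigned numbers gives the same ordering as the signed values:

```python
def excess_encode(value, n=127):
    """Excess-N: store the value as the unsigned number value + N."""
    assert -n <= value <= (1 << 8) - 1 - n, "value out of range for 8 bits"
    return value + n

def excess_decode(pattern, n=127):
    """Recover the signed value by subtracting the bias N."""
    return pattern - n
```

With N = 127 this is exactly the excess-127 bias used for the exponent field of IEEE 754 single-precision floats.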