
Understanding Binary and Hex numbers

Computers only understand numbers.

The first thing to understand about computers is that they are nothing more than powerful, glorified calculators. The only thing they know, the only thing they understand, is numbers. You may see words on the screen when you're chatting with your friend via AOL, or breathtaking graphics while playing your favorite game, but all the computer sees are numbers. Millions and millions of numbers. That is the magic of computers - they can calculate numbers, lots of numbers - really fast.

But why is this? Why do computers only understand numbers? To understand that we need to go deep into the heart of a computer, break it down to its most basic functionality. When you strip away all the layers of fancy software and hardware, what you will find is nothing but a collection of switches. You know the kind, you have them all over your house - light switches. They only have two positions: On or Off. It's the same for computers, only they have millions and millions of the little buggers. Everything a computer does comes down to keeping track of and flipping these millions of switches back and forth between on and off. Everything you type, download, save, listen to or read eventually gets converted to a series of switches in a particular on/off pattern that represents your data.

What does this have to do with Binary and Hexadecimal numbers?

Let's back up for a minute and look at how human beings deal with numbers first. Most people today use the Arabic numbering system, which is known as the decimal, or Base-10, numbering system (dec means ten). What this means is that we have ten digits in our numbering system:

0 1 2 3 4 5 6 7 8 9

We use these ten digits in various combinations to represent any number that we might need. How we combine these numbers follows a very specific set of rules. If you think back to grade school, you can probably remember learning about the ones, tens, hundreds and thousands places:

[Display of decimal place columns]

When counting, you increment the digit in the right-most place column until you reach 9, then return it to zero and increment the next column to the left.

I know this all probably seems very remedial and unimportant, but going back to these basic, simplistic rules is very important when learning to deal with other number formats. Would it surprise you to learn that there are other numbering systems that have a different base? Somebody, somewhere, a long time ago decided that having ten digits would work best for us. But there really is no reason why our numbering scheme couldn't have had seven, or eight, or even twelve digits. The number of digits really makes no difference (except for our familiarity with them). The same basic rules apply.
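Those basic rules can be tried out directly. Here is a short sketch in Python (the language is just a convenient choice for illustration; the tutorial itself isn't tied to any one language) showing that the same place-value rule produces a value in any base:

```python
# A minimal sketch of the place-value rule: each column is worth
# "base" times the column to its right, no matter which base you pick.
def from_digits(digits, base):
    """Combine a list of digits (most significant first) into a value."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

# 472 in decimal: 4*100 + 7*10 + 2*1
print(from_digits([4, 7, 2], 10))  # 472
# The same digits read in Base-8: 4*64 + 7*8 + 2*1
print(from_digits([4, 7, 2], 8))   # 314
```

The function never mentions ten anywhere; the familiar decimal result falls out only because 10 is passed in as the base.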

As it turns out, computers have a numbering system with only two digits. Remember all those switches, each of which can only be on or off? Such an arrangement lends itself very nicely to a Base-2 numbering system. Each switch can represent a place-column with two possible digits:

0 1

0 = off, 1 = on. We call such numbers binary numbers (bi means two), and they follow the same basic rules that decimal numbers do: start with 0, increment to 1, then go back to 0 and increment the next column to the left:

binary   decimal
         equivalent
     0   0
     1   1
    10   2
    11   3
   100   4
   101   5
   110   6
   111   7
  1000   8
  1001   9
   ...
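If you want to check the table above yourself, Python's built-in base conversions (again, used here purely as an illustration) reproduce it directly:

```python
# Counting in binary with Python's built-in conversions;
# this reproduces the binary/decimal table above.
for n in range(10):
    print(f"{n:4b}  {n}")   # right-aligned binary, then decimal

# One-off conversions in both directions:
print(bin(9))          # 0b1001
print(int("1001", 2))  # 9
```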

Hexadecimal

Binary numbers are all well and good for computers, but having only two digits to work with means that your place-columns get very large very fast. As it turns out, there is another numbering scheme that is very common when dealing with computers: Hexadecimal. Hexa means six, and recall that dec means ten, so hexadecimal (six plus ten) numbers are part of a Base-16 numbering scheme.

Years ago, when computers were still a pretty new-fangled contraption, the people designing them realized that they needed to create a standard for storing information. Since computers can only think in binary numbers, letters, text and other symbols have to be stored as numbers. Not only that, but they had to make sure that the number that represented 'A' was the same number on every computer. To facilitate this the ASCII standard was born. The ASCII Chart listed 128 characters - letters (both upper- and lower-case), digits, punctuation and symbols - that could be used and recognized by any computer that conformed to the ASCII standard. It also included non-printable values that aren't displayed but perform some other function, such as a tab placeholder (09), an audible bell (07) or an end-of-line marker (13). The various combinations of only seven binary digits, or bits, could be used to represent any character on the ASCII Chart (2⁷ = 128). (There were also other competing standards at the time, some of which used a different number of bits and defined different charts, but in the end ASCII became the dominant standard.)1
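You can see this character-to-number mapping for yourself; Python's ord() and chr() (used here only as a convenient demonstration) expose it directly:

```python
# Characters really are just numbers under ASCII; ord() and chr()
# translate between the two.
print(ord("A"))   # 65
print(chr(65))    # A
# The non-printable codes mentioned above: tab, bell, carriage return
print(ord("\t"), ord("\a"), ord("\r"))  # 9 7 13
```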

128 characters may have seemed like a lot but it didn't take long to notice that the ASCII Chart lacked many of the accented letters used by Latin-based languages other than English, such as ä, é, û and Æ. Also lacking were common mathematical symbols (±, µ, °, ¼) and monetary symbols other than the dollar sign ($) for United States currency (£, ¥, ¢). To make up for this oversight these symbols and a series of simple graphical shapes, mostly for drawing borders, were assembled as an extension to the original ASCII Chart. These additional 128 characters brought the new total to 256 (2⁸), with the pair of charts being referred to collectively as the Extended ASCII Chart.

Did you notice that the value 256 can also be written as 16² (16, the base of the hexadecimal numbering system, to the 2nd power)? This brings us back to hexadecimal (Base-16) numbers. It turns out, through the magic of mathematical relationships, that every character on the Extended ASCII Chart can be represented by a two-digit hexadecimal number: 00 - FF (0 - 255 decimal).
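The relationship is easy to demonstrate; a few lines of Python (again just for illustration) show that one byte always fits in exactly two hex digits:

```python
# One hex digit covers exactly four bits, so a byte (8 bits) is
# always exactly two hex digits - never more, never less.
print(f"{0:02X}")           # 00
print(f"{255:02X}")         # FF
print(f"{0b10011010:02X}")  # 9A  (1001 -> 9, 1010 -> A)
print(int("FF", 16))        # 255
```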

Whoa! What's up with this FF stuff?

Hexadecimal is a Base-16 numbering system, which means that every place column cycles through sixteen individual digits before carrying over. The decimal system that we humans are familiar with only has a total of ten unique digits, however, so we need to come up with something to represent each of the remaining six digits. We do this by using the first six letters of the alphabet.2 This means the digits for the hexadecimal numbering system are:

0 1 2 3 4 5 6 7 8 9 A B C D E F

And, of course, hexadecimal numbers follow the same basic rules that decimal and binary numbers do: count up to the last digit, then return to zero and increment the next column to the left:

hexadecimal   decimal
              equivalent
     0        0
     1        1
     2        2
   ...
     9        9
     A        10
     B        11
   ...
     E        14
     F        15
    10        16
    11        17
   ...
    19        25
    1A        26
   ...
    1F        31
    20        32
   ...
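As with binary, you can spot-check this table yourself; Python's built-in conversions (a convenient stand-in for any language's equivalent) agree with it:

```python
# Spot-checking the hex/decimal table with built-in conversions:
for n in (9, 10, 15, 16, 26, 31, 32):
    print(f"{n:X} = {n}")
print(int("1A", 16))  # 26
```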

As you can see, the hexadecimal numbering system doesn't advance through the place-columns as quickly as decimal numbers do - and certainly not at the rate of growth experienced by binary numbers! This, coupled with its relationship to the Extended ASCII Chart and its relationship to various other computer concepts, has made the hexadecimal numbering system, or hex, a standard for computer programmers and engineers the world over. It is common when viewing a raw data dump to use a Hex Viewer - software that displays the hex value of each byte. This allows one to see every character in the Extended ASCII Chart, even the ones that are not normally printed or visible.
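The core of such a viewer is surprisingly small. Here is a bare-bones sketch in Python (the function name and layout are my own, not any standard tool's output format):

```python
# A bare-bones hex-viewer sketch: every byte becomes two hex digits,
# with the printable ASCII characters shown alongside (a '.' stands
# in for anything non-printable).
def hex_dump_lines(data, width=16):
    """Format a bytes object as a list of hex-viewer lines."""
    lines = []
    for offset in range(0, len(data), width):
        chunk = data[offset:offset + width]
        hexes = " ".join(f"{b:02X}" for b in chunk)
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{offset:08X}  {hexes:<{width * 3}} {text}")
    return lines

# The tab (09) and carriage return (0D) show up plainly in the hex
# column even though they print as '.' in the text column:
for line in hex_dump_lines(b"Hello\tworld\r\n"):
    print(line)
```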

If you are a programmer, or aspiring to be one, it is also worth noting that the variable type Byte is, in most programming languages, 8 bits in size. This means that it can be represented by a two-digit hexadecimal number (00-FF); each single hex digit covers exactly four bits. If you are programming for the Windows platform in C or C++ you have probably noticed the commonly used variable type DWORD (Double-WORD). A WORD is 16 bits (0000-FFFF) in size, which makes a DWORD 32 bits (00000000-FFFFFFFF). If you are an HTML author you have probably seen color values that are composed of hex numbers. Colors are represented as a mixture of Red, Green and Blue values (RGB). Each of these three primary colors can have a value from 0-255 (decimal), which translates into three sets of two-digit hexadecimal numbers: 00 1A FF.
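Building one of those HTML color codes is just the byte-to-two-hex-digits rule applied three times. A quick Python sketch (the function here is my own, not a standard library call):

```python
# HTML colors are three bytes written back-to-back in hex, one per
# channel, prefixed with '#'.
def rgb_to_html(r, g, b):
    """Pack 0-255 red/green/blue values into an HTML color string."""
    return f"#{r:02X}{g:02X}{b:02X}"

print(rgb_to_html(0, 26, 255))     # #001AFF
print(rgb_to_html(255, 255, 255))  # #FFFFFF (white)
```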

This tutorial just touches on the basics of the hexadecimal and binary numbering systems and their importance when working with computers, but I hope that it has provided a good base of understanding from which to start. As always, I welcome feedback, suggestions and corrections. If you have enjoyed this tutorial please be sure to check out others I have written.


Footnotes:

1. While ASCII was the standard of its time, it doesn't even come close to representing the international needs for sharing data. There are many competing standards today which provide support for the various letters and characters of other cultures and countries but Unicode is by far the most common. [return]

2. Whether the letters are upper- or lower-case makes no difference. It is common to see them represented either way. [return]





© Shawn South, 2002. All rights reserved.
Republication, duplication or redistribution of any kind is prohibited without express written permission.