0-1-2-3-4-5-6-7-10

File Under History

After our recent podcast about binary, I was thinking about where the use of octal in computing came from. Octal is the base-8 number system and uses the symbols 0 through 7 to represent numbers. In this blog post, we’ll review octal, then look at some old computers!

What does octal look like? It’s just like decimal for the first eight values, then it takes a hard left turn.

0
1
2
3
4
5
6
7
10
11
…

Yes, 10 comes after 7, but you don’t really say “ten”; it’s common to say “one zero” so that it doesn’t get confused with decimal ten. “One zero” in octal is equivalent to 8 in decimal. It’s a weird system, but it is one of the three standard number bases supported by C (the others being decimal and hexadecimal).
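If you want to see that sequence come out of a real machine, C’s printf will print values in octal with the %o conversion. Here’s a minimal sketch (plain C, nothing machine-specific) that counts from 0 to 10 and prints the decimal and octal forms side by side:

#include <stdio.h>

int main(void)
{
    /* Print decimal 0..10 alongside the octal representation. */
    for (int i = 0; i <= 10; i++) {
        printf("decimal %2d = octal %2o\n", i, i);
    }
    return 0;
}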

So, where did the use of octal come from? The first thing that you have to understand is that, in the beginning, computers were mind-bogglingly expensive and there were no rules. The bit-width of processors was far from standardized, and the use of binary wasn’t even common. For instance, the IBM-built Harvard Mark I computer, now housed at the Science Center at Harvard, used 23 decimal digits to represent data; it didn’t use binary at all.

The binary computers that followed used a variety of word widths: 27 bits, 30 bits, 36 bits, 48 bits. IBM, with their hugely important System/360, went for 32 bits; Digital Equipment Corporation (DEC) used 18 for their PDP-1; Hewlett-Packard used 16 for their 1000 series; and Univac used 36.

But why would they use 27, or 30, or 36 bits? These were the days when computers were used for scientific number crunching, and the word width was determined by the number of decimal digits needed for the calculations and the budget available. A 27-bit computer gave just over 8 decimal digits of accuracy; 36 bits gave 10 decimal digits.

For the Harvard Mark I to have 23 decimal digits, that would be the equivalent of a 77-bit binary machine. To calculate how many bits a particular decimal number takes, you need to take the log in base 2. Most calculators won’t do that calculation, but math comes to the rescue.

log2(x) = log10(x) / log10(2)

So, log2(10^23 − 1) = log10(10^23 − 1) / log10(2), or about 76.4 bits. Round up to the next integer and we have 23 decimal digits needing 77 bits.
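If you’d rather let a computer do that arithmetic, here’s a minimal C sketch of the same calculation; the function name bits_for_digits is just mine:

#include <math.h>
#include <stdio.h>

/* Bits needed to hold any n-digit decimal number:
   ceil(log2(10^n - 1)), which works out to ceil(n * log2(10)). */
static int bits_for_digits(int n)
{
    return (int)ceil(n * (log(10.0) / log(2.0)));
}

int main(void)
{
    printf("23 decimal digits need %d bits\n", bits_for_digits(23)); /* 77 */
    printf("10 decimal digits need %d bits\n", bits_for_digits(10)); /* 34 */
    return 0;
}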

How big is a character?

In the early days of computers, there were no standards that said how big a character is. Since computers were just being invented, the designers needed to figure out how many bits to allocate for a character. Since the machines were mostly doing numeric calculations, this decision came down to getting information into the machine and printing any results. If you were only dealing with numbers, a character really only needed to be 4 bits wide. The teletype machines of the day used 5 bits, which gave 32 possible characters: a complete upper-case alphabet plus null, carriage return, line feed, space, a special command to shift to numbers and punctuation, and another to shift back to alphabetics.

With 6-bit characters, they could have 64 characters in their set and not have to shift back and forth from alpha to numerics. And they could fit six 6-bit characters into a 36-bit word. No lower-case though.
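Just to make that packing concrete, here’s a small sketch in modern C, using a 64-bit integer to stand in for a 36-bit word, that packs six 6-bit character codes into one word and prints it as twelve octal digits (two per character):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Six arbitrary 6-bit character codes (values 0..63). */
    const uint8_t chars[6] = {011, 022, 033, 044, 055, 066};

    /* Pack them into the low 36 bits of a 64-bit integer,
       first character in the most significant 6 bits. */
    uint64_t word = 0;
    for (int i = 0; i < 6; i++) {
        word = (word << 6) | (chars[i] & 077);
    }

    /* 36 bits print as twelve octal digits: 112233445566 */
    printf("%012llo\n", (unsigned long long)word);
    return 0;
}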

But how do you get these characters into the computer? This is the time of punch cards, and the card readers were made to match the machines. The early card punches used the 6-bit Binary Coded Decimal (BCD) character set.

Let’s be up front

These computers also used front panel switches to enter the boot loader, depositing instructions at particular memory locations, and lights to show what was stored in memory and in registers. To help the humans, the instructions and addresses would be grouped into clusters of multiple bits. Toggling individual switches from code written down in raw binary would be very tedious, but grouping the bits made it easier to accurately transcribe the values into switch positions.

By NASA (Great Images in NASA Description) [Public domain], via Wikimedia Commons

The IBM 704 from 1954 was the first large production computer with floating point support. This was a 36-bit machine, using 6-bit BCD characters, and it was the computer that spawned the first high-level computer language, FORTRAN. As you can see from the front panel above, the toggle switches and lights were grouped into threes. In the operator manual, the instruction codes are all in octal. Strangely, the BCD characters in the 704 were described as 2 bits plus 4 bits of binary, not as two octal digits. This probably had more to do with compatibility with existing card readers and printers than anything else.

By ArnoldReinhold [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)], from Wikimedia Commons

The IBM System/360 was a 32-bit machine. IBM grouped the binary values into eight 4-bit hexadecimal digits; the front panel was festooned with 16-position rotary knobs rather than toggle switches, and the lights were grouped in fours. This was a hexadecimal machine.

On the mini-computer side, the Digital Equipment Corporation (DEC) took over as the scientific computer system provider of choice when IBM went for the big-business data processing (payroll) market.

DEC’s first machine, the PDP-1, was an 18-bit machine produced in 1959. It again used octal for representing instructions and addresses, and 6-bit characters, each written as two octal digits in an encoding called Concise Code.

By fjarlq / Matt (https://www.flickr.com/photos/fjarlq/147938903/) [CC BY 2.0 (https://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons

As you can see in the photo above, DEC grouped the lights and switches in threes, giving them six 3-bit octal digits.

Later, the ASCII and EBCDIC encodings increased the character width to the 8 bits we use today, and computer word sizes standardized on 8, 16, or 32 bits. The time of hex had begun.

By Digital_PDP11-IMG_1498.jpg: Rama & Musée Bolo, derivative work: Morn [CC BY-SA 2.0 fr (https://creativecommons.org/licenses/by-sa/2.0/fr/deed.en) or CeCILL (http://www.cecill.info/licences/Licence_CeCILL_V2-en.html)], via Wikimedia Commons

Habits die hard though. DEC’s PDP-11 computers had 16-bit registers and used 8-bit ASCII character encoding, but still grouped data into octal digits, so you get addresses that look like 177770, where the high digit can only be 0 or 1.
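A quick sketch of why that leading digit is stuck at 0 or 1: a 16-bit value splits into five full 3-bit octal digits plus one leftover bit on top. In C, which still understands octal notation (more on that below), it looks something like this:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t addr = 0177770;    /* a typical PDP-11 style address */
    printf("octal %06o = hex %04X = decimal %u\n",
           addr, addr, addr);   /* octal 177770 = hex FFF8 = decimal 65528 */
    return 0;
}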

At my university, the people in Computing Services converted their assemblers to generate hexadecimal data and addresses, then used white tape and markers to convert the front panel into groups of four switches. They made their PDP-11s into hexadecimal machines. The computers didn’t care; the notation was for the humans.

See C?

Since C was being developed in the early 70s on an 18-bit DEC PDP-7, its authors needed to support the common base notations of the day: decimal, octal, and hexadecimal (for reasons unknown, binary literals aren’t part of the C standard). They came up with a prefix notation to indicate which base is being used: a prefix of 0 indicates octal, 0x indicates hexadecimal, and no prefix means decimal. I’ve seen this create problems when people try to make nicely aligned columns of numbers like:

uint32_t array[5] = {
	097,
	098,
	099,
	100,
	101
};

The first three numbers are treated as octal and the last two as decimal. This code generates three errors, since 8 and 9 are not valid octal digits (only 0 through 7 are).
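If you actually wanted an aligned column of decimal values, you’d have to drop the leading zeros. And as a quick sketch of what the prefixes mean, here is the same value written in all three notations:

#include <stdio.h>

int main(void)
{
    /* The same value written three ways: */
    int decimal = 63;      /* no prefix: decimal         */
    int octal   = 077;     /* 0 prefix:  octal 77 = 63   */
    int hex     = 0x3F;    /* 0x prefix: hex 3F   = 63   */

    printf("%d %d %d\n", decimal, octal, hex);   /* prints 63 63 63 */
    return 0;
}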

Collateral Damage

That’s it for octal; it’s a historical holdover from the early days of computing, when register sizes were a multiple of 3 bits.

If you want more information, here’s a video that I recommend, explaining where the byte came from.

Two more things before I shut this down.

In the early days of FORTRAN programming, variable identifiers had to be 6 characters or less. I suspect this restriction was because FORTRAN was originally written for the IBM 701/704, a 36-bit computer with 6-bit characters; thus, you could fit 6 characters into a 36-bit word. Also, each row of the input punch cards got loaded as binary data into two 36-bit words, for a total of 72 bits, so only the first 72 columns were read. The last 8 columns held a sequence number for the card. This is the origin of the 80-character width that you still see hanging around in editors.

Back in the 80s, I had a job where I was working on a computer that commonly used octal. I became pretty comfortable with the number sequences. Then, one day, I was driving down the highway and I looked down at my odometer, 77,776. Oh hey, this is going to be cool, 77,777, just about there. 77,778, aw man! I was fully expecting the odometer to roll over to 100,000, silly car!


This post is part of a series. Please see the other posts here.


Embedded Wednesdays at Otto in Edmonton. Eat, octal, and eight. Om nom.

Music to work by: some roots reggae, Skanking With The Upsetter 1971-1974, Lee "Scratch" Perry on Jamaican Recordings.