
Micro-magic!



This is an article I prepared for the PC Users' Group magazine Sixteen Bits. It was published in 1997.


Introduction

Have you ever wondered what makes your computer, well.... compute? How on earth does it do all the fancy things it does - the snappy animated graphics, the sound, the serious number crunching?

The beauty of the modern computer, with its incredible processing power and megabytes of memory, is that you don’t have to know! Clever electronic engineers and programmers have hidden most of the messy detail from us. We can simply sit back and enjoy our machines if we want. But in case you are just a little bit curious about what makes your computer work, then read on.

I will mainly focus on the ‘thinking’ part of the modern computer - the microprocessor - although I will also spend a bit of time on that key peripheral - memory. You’ll be surprised to learn how simple the basic structure and function of a microprocessor is.

I became interested in learning more about microprocessors about 7 years ago. In order to really gain a good understanding I decided to learn a little electronics, then actually design, build and program a small computer. And I did it! It now runs the watering system in my garden and is driven by an ancient Z80 microprocessor which I scavenged off some junked circuit boards. Anyway, that’s enough introduction.

The basics

The name ‘microprocessor’ is a very apt description of what it does. It processes information, and it’s tiny. In spite of the incredible things that microprocessors can do, they rely on some very primitive processing to get the job done.

Just what type of things can they do? They can shuffle data from one place to another and they can add and subtract numbers. All sorts of things are possible even with such a basic set of functions. For example, two numbers can be compared simply by subtracting one from the other. If the result of the subtraction is zero, the microprocessor knows that the two numbers were the same. Numbers can be multiplied together just by repeated addition. Microprocessors can also do simple logical operations like AND, OR and XOR.
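
If you would like to see those primitive tricks spelt out, here is a small sketch in the C programming language (my own illustration, not anything a microprocessor literally runs) showing comparison by subtraction and multiplication by repeated addition:

    #include <stdio.h>

    int main(void)
    {
        int a = 7, b = 7;

        /* Compare two numbers by subtracting one from the other.
           A result of zero means they were the same. */
        if (a - b == 0)
            printf("%d and %d are the same\n", a, b);

        /* Multiply 6 by 4 using nothing but repeated addition. */
        int product = 0;
        for (int i = 0; i < 4; i++)
            product = product + 6;    /* 6 added to itself 4 times */
        printf("6 x 4 = %d\n", product);

        return 0;
    }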

Most importantly of all, microprocessors can make decisions, but again very simple ones. For example, they can first compare two numbers as outlined above, then make a decision about what to do next based on the result.

You are probably beginning to wonder if microprocessors can do anything other than manipulate numbers. The simple answer is no. The things we are familiar with in our day-to-day lives like words or images must first be converted to numbers before a microprocessor can handle them - then converted back after being ‘processed’ so we can understand them.

Standards have existed for a long time that specify how real world information should be converted to numbers that computers can handle. One you may have heard of is the ASCII character set. ASCII stands for American Standard Code for Information Interchange. This code specifies unique numerical values for letters of the alphabet, special characters, symbols and digits. As I type these keys on my keyboard, they are being converted into numbers inside the keyboard and sent to the microprocessor in the computer for processing. Likewise, when the microprocessor needs to send output to, say, the video monitor, it sends this information as numbers which are converted back into coloured dots on the screen by the electronics in the video card and monitor.
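
As a quick illustration, this little C program prints the ASCII code for each character of a word, which is roughly what happens when keystrokes are turned into numbers (the program itself is just my own sketch):

    #include <stdio.h>

    int main(void)
    {
        const char *word = "Hello";

        /* Each character is already stored as its ASCII code;
           printing it as a number reveals the value. */
        for (int i = 0; word[i] != '\0'; i++)
            printf("'%c' = %d\n", word[i], word[i]);

        return 0;
    }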

Instructions

You might also be starting to wonder how the microprocessor knows what to do, in what order to do it, and when? Who or what is giving it ‘instructions’? If you haven’t already guessed, the answer is software. In fact, software is nothing more than a long string of instructions for the microprocessor. I use the term 'string of instructions' with good reason. Microprocessors can normally only execute one instruction at a time and so must be fed these one after another, in a long ‘string’.

One interesting implication of the ‘one instruction at a time’ limitation is that only one piece of software can be running on a microprocessor at any one time. If you think your computer already breaks this rule, it has fooled you! Most desktop computers only have one microprocessor. But this microprocessor is very fast and can repeatedly switch between several pieces of software at such a rate that it gives the illusion of having several programs active at once.

Getting wired!

Physically, the microprocessor is just one of the many ‘chips’ or integrated circuits you will find on the main circuit board in your computer. Lots of metal ‘pins’ protrude from it, usually from the base on today’s microprocessors, but on older chips, they are found around the perimeter. These pins are wired to the little chunk of silicon that actually does all the work inside the plastic chip package. The pins connect externally to other chips or devices on the circuit board through thin metal ‘tracks’ on the board. Microprocessors can have anything from only a few dozen pins up to the 300 mark.

Figure 1: Basic microprocessor connection diagram

Figure 1 shows the three main types of connections a typical microprocessor has to the world around it. Some are inputs, some are outputs and some carry information both ways. All of them handle binary information. The three main classes of connections are data, address and control.

The data bus

Let’s start with the data connections. These are usually two way, meaning ‘data’ can travel either into or out of the microprocessor. The number of data lines depends on the microprocessor. A microprocessor with 32 data lines is said to have a 32 bit data bus. On older microprocessors, the size of the data bus was usually the size of the data ‘words’ that the microprocessor could internally perform calculations with. On newer microprocessors, the data bus is often much wider than the data word for performance reasons. For example, the Power PC microprocessors found in some current Macintoshes have 64 bit data buses but can only process 32 bit numbers internally. Don’t worry too much about this. The important thing I want to focus on here is the size of the data word that the microprocessor can perform calculations on internally.



What is binary information?


Each wire or pin on the microprocessor can only transmit (or receive) an on or an off signal - like a simple light switch which either turns the electricity on or off to a light. Because there are only two possible options (on or off), microprocessors are described as operating with ‘binary’ data. The ‘on’ position is usually represented by the number ‘1’ while the ‘off’ position is given the value ‘0’.

To be of any use, numbers much larger than 1 must be handled by microprocessors. To do this, more than one ‘binary’ line is used. Some simple calculations can show us that two lines would enable numbers up to three to be handled. Eight bits (1 line = 1 bit) can code numbers up to 255.

On many current microprocessors like the Pentium, data is processed in 32 bit chunks (ie 32 bits) enabling numbers up to about 4.2 billion to be handled. Hence, the Pentium is called a 32 bit microprocessor. New processors are now coming out that process 64 bit data ‘words’. These can process extremely large numbers.

Some binary number sizes are given special names:
  • 8 bit numbers are called bytes
  • each 4 bit half of a byte is called a nibble

A nibble can encode numbers from 0 to 15. For convenience, engineers and computer programmers like to write a nibble’s value as a single character. To do this they use the numerals 0 to 9 to represent nibble values 0 to 9. For nibble values 10 to 15, the letters A to F are used. This is called hexadecimal notation. The number 11 would be written as ‘B’ in hexadecimal. The number 15 would be ‘F’ and 16 would be written as ‘10’.

Binary numbers are written in the same way as ordinary numbers, except there are only two possible digits in these numbers: 1 and 0. Here are a few examples:

Number     Binary representation      Hexadecimal representation
  0              0000 0000                       0
  4              0000 0100                       4
 15              0000 1111                       F
 33              0010 0001                      21
255              1111 1111                      FF
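
If you want to experiment, a few lines of C will print a table like the one above for any numbers you care to try (the binary-printing loop is my own; C has no built-in way of doing it):

    #include <stdio.h>

    /* Print an 8 bit value as two groups of four binary digits. */
    static void print_binary(unsigned char value)
    {
        for (int bit = 7; bit >= 0; bit--) {
            putchar((value >> bit) & 1 ? '1' : '0');
            if (bit == 4)
                putchar(' ');
        }
    }

    int main(void)
    {
        unsigned char samples[] = { 0, 4, 15, 33, 255 };

        for (int i = 0; i < 5; i++) {
            printf("%3d   ", samples[i]);
            print_binary(samples[i]);
            printf("   %02X\n", samples[i]);
        }
        return 0;
    }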

‘Data’ can consist of any information that is being used by the microprocessor. In fact, a lot of the ‘data’ are actually instructions that tell the microprocessor what to do. Instructions are sometimes called operation codes or op codes for short. The microprocessor always knows the difference between ordinary data and instructions, although to you or me, it would be hard to distinguish. Both are just numbers. The microprocessor relies solely on the order in which the ‘data’ is received to tell the difference. There are a few simple rules it follows to do this:

  • the first word of ‘data’ received when execution starts is always an instruction
  • each instruction can consist of one word by itself or an instruction word followed by one or more words of additional data
  • for each particular instruction the number of following data words is always the same - so on receiving an instruction, the microprocessor first determines which instruction it has received. This tells it how many more data words (if any) to expect before the next instruction.

Microprocessors are extremely reliable machines: they can receive many billions of ‘data’ words and not lose track of whether the next word will be data or an instruction! You can guess what happens if one goofs - your computer crashes.
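
Here is a rough sketch, using a made-up instruction set of my own, of how a processor can keep track of which words are instructions and which are data purely from the order in which they arrive:

    #include <stdio.h>

    /* A made-up instruction set: each opcode has a fixed number of
       data words that always follow it. */
    enum { OP_NOP = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };
    static const int extra_words[] = { 0, 1, 0, 1 };

    int main(void)
    {
        /* A 'string' of words: LOAD 8245, ADD, STORE 5432 */
        int stream[] = { OP_LOAD, 8245, OP_ADD, OP_STORE, 5432 };
        int n = sizeof stream / sizeof stream[0];

        int i = 0;
        while (i < n) {
            int opcode = stream[i++];             /* first word is always an instruction */
            printf("instruction %d", opcode);
            for (int j = 0; j < extra_words[opcode]; j++)
                printf("  data %d", stream[i++]); /* the known number of data words follow */
            printf("\n");
        }
        return 0;
    }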

Machine language

Each microprocessor model has its own unique instruction set or machine language. For example, the number 118 tells a Z80 microprocessor to stop everything it is doing and go into a ‘suspended animation’ state. The number 118 would tell an Intel 80x86 microprocessor to perform a conditional jump from one location to another depending on the result of the calculation it has just performed. On the Motorola 68000 microprocessor, this number would cause some completely different action.
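
A toy way of picturing this in C: the same numeric opcode simply means something different on each family of chip, as if each one carried its own lookup table. The two little tables below contain only the single entry mentioned above and are purely illustrative:

    #include <stdio.h>

    /* What the single opcode value 118 means on two different chips
       (as described in the text) - a purely illustrative lookup. */
    static const char *z80_meaning(int opcode)
    {
        return opcode == 118 ? "halt (suspended animation)" : "something else";
    }

    static const char *x86_meaning(int opcode)
    {
        return opcode == 118 ? "conditional jump" : "something else";
    }

    int main(void)
    {
        printf("Opcode 118 on a Z80:  %s\n", z80_meaning(118));
        printf("Opcode 118 on an x86: %s\n", x86_meaning(118));
        return 0;
    }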

The address bus

That’s enough on the data lines for the moment. Let’s now look at the address connections on the microprocessor. These are crucial to making sure the right data and instructions get to the microprocessor. The address lines are usually one way only - they are used to send information from the microprocessor. The main place this information is sent is to the computer’s memory where the ‘data’ are stored. The address lines are like the data lines in that there are many of them and they are used to transmit large numbers. In the case of the address bus, the numbers transmitted on it are interpreted by the memory circuits to find out exactly which location in memory the microprocessor wants to obtain its ‘data’ from. For example, if the number 2397 was transmitted on the address lines, it would mean that the microprocessor wanted to obtain the data at location 2397 in memory. That’s about all there is to addresses.

The control lines

The control lines on the microprocessor are the most interesting and diverse. The best way to understand how they work is to look at a few of the typical ones - although each microprocessor has its own unique set. Two lines that are common to virtually all microprocessors are the read and write lines. They are used when sending data from or to memory. The microprocessor generates these signals (in other words, they are output pins on the microprocessor). They are used to tell the computer’s memory whether the microprocessor wants to read from memory or write to it - with the actual location in memory being the number currently being displayed on the address lines. When read is on and write is off, the read command is being issued by the microprocessor and vice versa.

Another interesting control line is the input/output request line or ioreq. This is also an output signal from the microprocessor and is used in conjunction with the read or write signals. It is used for sending data to or from the computer’s input/output (I/O) devices. These devices are usually connections to the outside world like a keyboard, printer port, serial port or disc drive. Like memory locations, each I/O device has a unique numbered address. The microprocessor can write to or read data from these devices by first placing their address on the address lines then turning ioreq on, along with either the write or read line.
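
To make the control lines a little more concrete, here is a sketch (my own invention, not any real chip’s bus protocol) that treats read, write and ioreq as simple flags and routes a request to either memory or an I/O device accordingly:

    #include <stdio.h>

    static unsigned char memory[65536];     /* main memory             */
    static unsigned char io_ports[256];     /* I/O devices, by address */

    /* One bus transaction: the address lines plus three control lines. */
    static unsigned char bus_cycle(unsigned address,
                                   unsigned char data,
                                   int read, int write, int ioreq)
    {
        if (ioreq) {                         /* talk to an I/O device   */
            if (write) io_ports[address & 0xFF] = data;
            if (read)  return io_ports[address & 0xFF];
        } else {                             /* talk to memory          */
            if (write) memory[address] = data;
            if (read)  return memory[address];
        }
        return 0;
    }

    int main(void)
    {
        bus_cycle(2397, 42, 0, 1, 0);                  /* write 42 to memory location 2397 */
        printf("memory[2397] = %d\n", bus_cycle(2397, 0, 1, 0, 0));

        bus_cycle(0x60, 'A', 0, 1, 1);                 /* write to I/O device at address 0x60 */
        printf("port 0x60    = %c\n", bus_cycle(0x60, 0, 1, 0, 1));
        return 0;
    }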

The reset line is an input to the microprocessor. It is used to clear and restart the microprocessor. If you are wondering what the reset button on the front of your computer is connected to, you now know.

Tick, tock clock

Figure 2: Microprocessor clock signal

The clock line is another input. Unlike the clocks we are familiar with, the microprocessor relies on a very different type of clock. The clock signal is simply a repeating series of on-off pulses. If you drew a graph of this signal, it would look something like Figure 2. Each microprocessor has a limit on its maximum clock rate and this varies anywhere from only a few megahertz (1 megahertz is one million on-off cycles per second; abbreviated MHz) up to 500 or 600 MHz. The faster the clock runs, the faster the processor can execute instructions.

The clock is critical to the microprocessor’s functioning since it controls the sequencing of all events inside the chip. Think of it like the relationship between music and a waltz. The music beat provides the cues for the sequence of steps in the waltz. The microprocessor clock provides the cues for each of the steps needed to execute an instruction. In the microprocessor case, the instruction might be to fetch a byte of data from the keyboard. The first clock cycle might cue the microprocessor to put the correct address on the address lines. The next might cue it to turn the read and ioreq lines on, and the next clock cycle might be the one in which the byte is actually read from the data lines.
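
The waltz analogy can be sketched in code too. Here each call to clock_tick() advances an entirely made-up processor one step through the ‘read a byte from the keyboard’ sequence described above:

    #include <stdio.h>

    /* The steps that make up one 'read a byte from the keyboard'
       operation, advanced one step per clock pulse. */
    enum step { PUT_ADDRESS, ASSERT_READ_IOREQ, LATCH_DATA, DONE };

    static enum step current = PUT_ADDRESS;

    static void clock_tick(void)
    {
        switch (current) {
        case PUT_ADDRESS:
            printf("tick 1: place the keyboard address on the address lines\n");
            current = ASSERT_READ_IOREQ;
            break;
        case ASSERT_READ_IOREQ:
            printf("tick 2: turn the read and ioreq lines on\n");
            current = LATCH_DATA;
            break;
        case LATCH_DATA:
            printf("tick 3: read the byte from the data lines\n");
            current = DONE;
            break;
        case DONE:
            break;
        }
    }

    int main(void)
    {
        for (int i = 0; i < 3; i++)
            clock_tick();        /* three clock cycles, one operation */
        return 0;
    }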

Execution, by the clock

An important thing to notice about clock cycles and instructions is that they are not the same. It takes several clock cycles to execute one instruction (the number of cycles depends on the microprocessor and the instruction). This obviously slows the microprocessor down quite a bit. Newer microprocessors have some innovative architecture to help get around this problem. Superscalar microprocessors have multiple instruction processing units enabling several instructions to be executed at once.

The superscalar approach isn’t perfect and causes its own problems - for example, things like conditional program branches are difficult to handle (a conditional program branch is an instruction that says “if the result of the previous calculation was x, then execute instruction A but if the result was y, then execute instruction B”). Clearly, the branch can’t be predicted until after the previous instruction is completed. In cases like this a superscalar processor has to start executing down both branches, then abandon the incorrect one when the results of the branch instruction are known. Nevertheless, in many other cases there is a substantial improvement in performance. Both of the major microprocessor makers (Intel and Motorola) incorporate the superscalar approach in their current microprocessors.

Interrupts

Probably the most complex control lines found on most microprocessors are the interrupt lines. These are inputs to the processor. While the operation of interrupts is quite complex, the principle is very simple. You can think of the interrupt line serving the same function as the whistle on a kettle. When the kettle boils, the whistle sounds and you are immediately alerted to turn it off. When a device wants to get the microprocessor’s attention, it can send a signal to it on the interrupt line. This tells the microprocessor that something out there requires its attention. An example might be the keyboard. Each time a key is pressed, an interrupt is sent to the microprocessor to tell it that another keystroke is ready for processing.

Figure 3: Typical microprocessor internal architecture

Interrupts are quite efficient because they allow the microprocessor to do other things (or even have a ‘rest’ to conserve power) instead of having to constantly make the effort to check external devices to see if they are doing anything that needs attention. The interrupting device only gets the attention it needs, and when it needs it - at the time it sends an interrupt. In case you’re wondering, the other method, where the microprocessor takes the initiative to constantly check peripheral devices, is called polling - although you won’t come across systems that use this approach very often.
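
The difference between polling and interrupts can be shown with another small sketch (purely illustrative - real interrupt handling involves the hardware saving and restoring the processor’s state):

    #include <stdio.h>

    static int key_ready = 0;   /* set by the 'keyboard' when a key is pressed */
    static int key_value = 0;

    /* Polling: the processor keeps asking the keyboard if anything happened. */
    static void poll_keyboard(void)
    {
        if (key_ready) {
            printf("polled and found key %d\n", key_value);
            key_ready = 0;
        }
        /* most of the time this check finds nothing and the effort is wasted */
    }

    /* Interrupt: the keyboard calls this only when it actually has a key. */
    static void keyboard_interrupt_handler(int value)
    {
        printf("interrupted with key %d\n", value);
    }

    int main(void)
    {
        /* polling approach: check over and over, mostly for nothing */
        for (int i = 0; i < 3; i++)
            poll_keyboard();

        key_ready = 1;
        key_value = 65;
        poll_keyboard();

        /* interrupt approach: nothing happens until the device speaks up */
        keyboard_interrupt_handler(66);
        return 0;
    }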

A detailed look...

Let’s now have a look at an internal functional diagram (Figure 3) of a ‘typical’ simplified microprocessor. Starting with the data bus, incoming data (or instructions) first encounter a control unit that determines whether the microprocessor is currently accepting data and if so, where it is to be sent internally. If the microprocessor is currently doing some internal operation that doesn’t require outside data, it will probably have told the control unit to turn its external data lines off.

In the case of incoming data, there are several possible destinations:

  • If the microprocessor is expecting an instruction, the data will be shunted to the instruction register which, in turn, will be read by the instruction decoder. The decoder will generate the appropriate internal (and external) signals to actually execute the instruction.
  • It could go to the Arithmetic Logic Unit (ALU) which is the piece of circuitry that does addition, logical operations and other data processing. The ALU contains a special storage location or register which is sometimes called the accumulator.
  • It could go to a general purpose storage ‘bin’ called a central processing unit register, or CPU register for short. Microprocessors might have anything from 3 or 4 general purpose registers up to a dozen or more. The CPU registers simply provide a convenient internal storage point for data prior to it being processed in the ALU or being used elsewhere.
  • Lastly, the data might end up in one of the address registers. These are special purpose registers which connect to the microprocessor’s address lines and are used for sending addresses from the processor. There would normally be at least two address registers. One would be the instruction pointer which points to the memory location that is currently being used to fetch instructions from (ie where the user’s software is loaded). The other address register would be a general purpose one that could point to data (for example, a word processing document). In some microprocessors, the address and data registers are part of one common pool of registers.

Newer microprocessors have other components as well although I don’t intend to cover most of these. For example, they can have caches, which I explain a bit later. They can also have floating point units (FPUs). These are like ALUs except that they do mathematics on numbers with decimal points in them (floating point numbers). FPUs are very complex, as Intel can testify at great expense. One of its first Pentium processors had a minor flaw in the FPU which eventually forced Intel to recall and replace them all.

An example

To help give you a better understanding of how the different parts of the microprocessor work together, let’s look at a simple example which involves the execution of three instructions. The first instruction tells the microprocessor to read memory location 8245 and place its contents into the ALU’s accumulator. The next instruction is to add this value to the value in CPU register ‘B’. The final instruction tells the microprocessor to write the result of the addition to memory location 5432. Before getting into the detail, I should point out that these three instructions would probably arrive as five words:

  • the first word would be the instruction to fetch data from memory and place it in the accumulator
  • the second word would be the address in memory to fetch the data from (ie address 8245)
  • the third word would be the instruction to add the contents of the accumulator to the contents of register B.
  • the fourth word would be the instruction to write the contents of the accumulator to memory
  • the final word would be the memory address to write the result to (ie 5432).

Here’s how the processing would occur (refer to the details in Figure 3; a short code sketch tying the whole sequence together follows the list):

  1. The instruction pointer in the address registers contains the memory address of the first instruction to fetch. The instruction pointer would be connected, through the address control unit, to the microprocessor’s address lines.
  2. The microprocessor would switch on its read line.
  3. After waiting a moment the microprocessor would then turn its data control unit on to accept input, and this input would be switched through to the instruction register.
  4. The new word in the instruction register would then be interpreted by the instruction decoder. This would tell the microprocessor that it needs to fetch the next ‘instruction’ from memory before doing anything else.
  5. To do this, the instruction pointer would be incremented by 1 and steps 1-3 above would be repeated, except this time the data arriving at step 3 (the number 8245) would be placed in the general purpose address register.
  6. The microprocessor would now connect the general purpose address register to the address lines.
  7. The data at address 8245 would be fetched as per steps 2 and 3 above, except the incoming data would be shunted to the accumulator.
  8. The instruction pointer would be incremented yet again, and the next instruction would be fetched from memory as per steps 1-3 above.
  9. When decoded by the instruction decoder, it would tell the microprocessor to send the data in CPU register B to the ALU where it is to be added to the value in the accumulator.
  10. The instruction pointer would be incremented again and the next instruction fetched and decoded as above. This would tell the microprocessor to fetch yet another ‘instruction’ (which contains the memory address to write the result to, ie 5432).
  11. This ‘instruction’ would be fetched and shunted to the general purpose address register (where it would overwrite the old value in that register).
  12. The general purpose address register would be connected to the microprocessor’s address lines.
  13. The data in the accumulator would be sent from the microprocessor to memory. The microprocessor would turn its ‘write’ line on and the data would be written to the desired address.
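
To tie the thirteen steps above together, here is a minimal simulation of the same sequence in C. The opcode numbers, the starting address of the program and the contents of register B are all made up for the sketch; only the overall behaviour mirrors the description:

    #include <stdio.h>

    /* Made-up opcodes for the three instructions in the example. */
    enum { LOAD_ACC_FROM_MEM = 1, ADD_B_TO_ACC = 2, STORE_ACC_TO_MEM = 3 };

    static int memory[10000];

    int main(void)
    {
        /* The five words of the program, loaded at address 100. */
        memory[100] = LOAD_ACC_FROM_MEM;  memory[101] = 8245;
        memory[102] = ADD_B_TO_ACC;
        memory[103] = STORE_ACC_TO_MEM;   memory[104] = 5432;
        memory[105] = 0;                  /* halt marker for this sketch */

        memory[8245] = 30;                /* the data the example fetches */

        /* The microprocessor's internal registers. */
        int instruction_pointer = 100;
        int address_register = 0;
        int accumulator = 0;
        int register_b = 12;

        for (;;) {
            int opcode = memory[instruction_pointer++];   /* fetch and decode */
            if (opcode == LOAD_ACC_FROM_MEM) {
                address_register = memory[instruction_pointer++];
                accumulator = memory[address_register];
            } else if (opcode == ADD_B_TO_ACC) {
                accumulator = accumulator + register_b;   /* the ALU at work */
            } else if (opcode == STORE_ACC_TO_MEM) {
                address_register = memory[instruction_pointer++];
                memory[address_register] = accumulator;
            } else {
                break;                                    /* end of the sketch */
            }
        }

        printf("memory[5432] = %d\n", memory[5432]);      /* 30 + 12 = 42 */
        return 0;
    }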

RISC

Following on from the discussion about microprocessor clocks there is another related subject worth a look. You may have seen the term RISC when referring to microprocessors. RISC processors generally have very fast clock speeds. These processors are found in a variety of computers, most commonly a number of the more recent Macintosh machines (the ones with Power PC processors). RISC stands for reduced instruction set computing. This is a very apt description because RISC processors have a much simpler and smaller instruction set than other microprocessors. The Intel 80x86 and Pentium processors are complex instruction set processors - they are not RISC machines, especially now that Intel has added a whole raft of new instructions to the latest models to improve multimedia handling (the MMX extensions). Complex instruction set processors require lots of complex instruction decoding circuitry and for this reason tend to run at significantly slower clock speeds than comparable RISC processors.

The idea behind RISC is that a smaller instruction set means a much simpler and more compact internal layout. That also means a substantially faster microprocessor. The penalty is that to perform the same operations as a complex instruction set processor, the RISC machine might have to execute more instructions. So some or all of the clock speed advantage can be lost. However, the designers of RISC chips choose the instructions they support very carefully. They look at lots and lots of software to see which types of instructions are used most commonly and make sure these form part of the core RISC instruction set. So in many cases, a RISC chip might need the same number of instructions as a complex instruction set processor. In these cases, a RISC machine can well and truly outperform a complex instruction set machine of similar vintage.

The cost of no cache

While on the subject of microprocessor speed, there are other methods to improve performance. The use of a cache is one such method; it is in wide use and applicable to both RISC and complex instruction set machines. A cache is an additional chunk of memory that sits right next to the microprocessor, or is actually part of the microprocessor chip in newer devices.

For some time now, engineers have known that a limiting factor on processor performance is the speed with which it can fetch instructions or data from memory. One of the reasons is the slow speed of conventional computer memory - especially compared with processor speeds today. The solution is to provide some really fast access memory sitting very close to the processor. This is what the cache is.
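
For the curious, the idea can be sketched very simply. This is a toy ‘direct-mapped’ cache of my own devising, not how any particular chip does it: each memory address maps to one slot, and a fetch only goes out to slow main memory when the slot doesn’t already hold that address:

    #include <stdio.h>

    #define CACHE_SLOTS 8

    static int main_memory[1000];          /* slow main memory              */
    static int cache_tag[CACHE_SLOTS];     /* which address each slot holds */
    static int cache_data[CACHE_SLOTS];
    static int cache_valid[CACHE_SLOTS];

    static int read_with_cache(int address)
    {
        int slot = address % CACHE_SLOTS;  /* each address maps to one slot */
        if (cache_valid[slot] && cache_tag[slot] == address) {
            printf("address %d: cache hit (fast)\n", address);
            return cache_data[slot];
        }
        printf("address %d: cache miss, fetching from main memory (slow)\n", address);
        cache_valid[slot] = 1;
        cache_tag[slot] = address;
        cache_data[slot] = main_memory[address];
        return cache_data[slot];
    }

    int main(void)
    {
        main_memory[42] = 7;
        read_with_cache(42);      /* miss - goes all the way to main memory */
        read_with_cache(42);      /* hit  - served from the cache           */
        return 0;
    }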

Caches really make a huge difference to performance. I remember some years ago, a technician replaced the main board in my work computer with a new one. The new one had an Intel 80486 DX2 66 MHz processor. However, he forgot to set it up with the processor’s internal cache enabled. I couldn’t believe how poorly the computer performed. My guess was something like a 5 fold increase in speed after the on-board cache was activated.

L1, L2, L3...

There are actually different classes of cache. Level 1 (L1) cache is usually small and very, very fast. It sits on the processor chip right next to where it is needed. Level 2 cache feeds the level 1 cache but is slower and bigger. Level 2 cache can be found on the chip or physically separate. It’s even possible to have level 3 caches although I suspect they aren’t very common.

When you see the cache size mentioned in computer ads, it usually refers to the level 2 cache. On many of the Pentium chips now, the L2 cache is 512 kilobytes (512 K). The latest Macintoshes with Power PC microprocessors from Motorola have similar L2 cache sizes. I’m no expert on these but from looking at the technical data on the Motorola devices, it would appear that this L2 cache is located off the processor chip. The actual cache (presumably L1) that is physically part of the high end Power PC processors is of the order of 64 K bytes in size.

Cache accounting

For the cache to work there has to be some extra control circuitry that keeps the cache loaded up with data and instructions from the main memory. This extra circuitry has to keep tabs on where the microprocessor is currently getting this information. Again, branch instructions do cause problems. Special algorithms are used to make a best estimate of where the data/instructions will be needed from next. Such algorithms are still very much in the development stage and this is an important area of microprocessor/cache research.

If memory serves me correctly, even processors as primitive as the Intel 80286 (or 286 for short) had a cache of sorts - an instruction prefetch buffer - but it only consisted of a few bytes of storage.

The drum on DRAMs (etc)...

By now you will have picked up that the microprocessor and memory work very closely together. It’s worth taking a quick look at some of the main types of memory, why they are different and what the seemingly endless jargon means.

Until recently, life was simple. Virtually all computer memory was of the DRAM type. This is short for dynamic random access memory. Let’s start with the random access part of the name. This actually had its origins back in the early days of computing when a lot of information was stored on tapes. Just like video or audio tapes, computer tapes are sequential access - that is, you might have to feed a whole lot of unwanted tape through the tape reader before getting to the bits you want. Memory chips don’t work like this. The computer can jump to any location in memory whenever it wants, without having to read through all the other bits first. In other words, random access.

How refreshing

The term dynamic RAM refers to the fact that the contents of the RAM are fleeting. In fact, very fleeting. Unless DRAMs are constantly ‘refreshed’ every thousandth of a second or so, they forget what was stored in them. Computers usually contain special circuits that are solely dedicated to refreshing the DRAMs. Unfortunately, while the refresh operation is occurring, the DRAM is not available to the microprocessor for reading and writing. So, what advantages do DRAMs have?

To understand the advantages, take a quick look at their less forgetful cousins - SRAMs. The ‘S’ in SRAM stands for static meaning ‘not changing’. Static RAMs don’t forget what has been written to them (at least not while they have power connected to them). There is no refreshing and the SRAM contents are available all the time to the microprocessor. This is the reason, for example, that caches are often made from SRAMs.

Figure 4: Circuit to store a single bit in an SRAM

Figure 5: Circuit for storage of a single bit in a DRAM

Figure 4 shows a diagram of the circuit to store a single bit (a 1 or a 0) in an SRAM (Diagram provided courtesy Texas Instruments). There is no need to understand the detail, but look at Figure 5 which shows the circuit to store the same bit in a DRAM. The latter consists of a single transistor, nothing more. DRAMs are vastly simpler to make and many more bits can be packed onto a memory chip because of their simplicity. This is the great advantage of the DRAM and the main reason they are preferred for general computer memories.

MOS does grow on a dynamic memory

The transistor in a DRAM is of a special type called a metal oxide semiconductor (MOS) transistor. It is made from a layer of silicon semiconductor separated from a minute metal pad by a very thin oxide insulating layer (Figure 6). You can probably work out why they are called ‘metal oxide semiconductor’ transistors now. The oxide insulating layer is very thin and it can be easily burnt out by excess voltage. A common source of such voltage is static electricity and DRAMs must be handled in an anti-static environment. By the way, most of the ‘chips’ found in modern computers, including the microprocessor, are made from MOS transistors and are very susceptible to static damage. They too must be handled with appropriate caution.

Figure 6: MOS transistor structure


EDO

EDO memory is the latest rage and many computers now ship with it. The full title of these memories is EDO DRAM. They are basically DRAMs except they have some design enhancements that keep the output data valid while the next access is already being set up. The EDO stands for extended data out, meaning the data stored in them is accessible for ‘extended’ periods. As such, EDO DRAMs can pump more data to the microprocessor over a given period of time compared with standard DRAMs. The faster the microprocessor can read and write data to main memory, the better.

SDRAM

There is another variation on the standard DRAM - the SDRAM. Don’t confuse these with SRAMs. An SDRAM is a synchronous DRAM. Normal DRAMs are asynchronous - their internal operations, including refreshing by a special controller, aren’t directly synchronised with the clock that drives the rest of the system. SDRAMs, on the other hand, have all their functions including refreshing synchronised with the system clock. This has the effect of providing much greater availability to the microprocessor. The reason might not be immediately obvious but you can think of it like a busy road with a few sets of traffic lights along it. If all of the traffic lights are synchronised with one another, then the traffic will flow a lot better. However, if one set of lights works on its own timing schedule, then there is a reasonable chance that cars that made it through the previous set of lights will have to wait at the non-synchronised set.

EPROM

The final category of memory I’ll cover is EPROM. This stands for erasable programmable read only memory. EPROMs are non-volatile - unlike SRAMs and DRAMs, they retain their contents even when the power is switched off. The ‘ROM’ part of the name means they can normally only be read, not written to. EPROMs can be erased but only by placing them under a strong UV light. They have a special clear quartz window on top for this purpose and are easy to recognise because of this feature.

EPROMs are found on virtually all computer boards and store the so-called ROM BIOS. BIOS is short for basic input output system. The BIOS provides the start up or ‘boot instructions’ which allow the computer to enter a low level functional state when it is first switched on. The BIOS EPROM is physically wired into a special set of system memory addresses that the microprocessor automatically goes to when it is powered up. These addresses are usually determined by the designer of the microprocessor and are permanently stored inside the microprocessor.

Traditional EPROMs are increasingly being replaced by EEPROMs, also called E2PROMs or flash memories. The acronym is the same except for the 'EE' part which stands for electrically erasable. This is a very descriptive title. To erase an EEPROM, a special voltage is usually applied to one of its pins. They have the advantage that they don't have to be pulled out and exposed to UV light before being re-programmed. This enables them to be re-loaded on location. Why would anyone want to do this? As an example, many newer modems contain flash memories so they can be reprogrammed to operate at higher speeds as new standards are developed. (Modems contain their own powerful microprocessor and software, the latter being stored in the flash memory.)

Coming back to your computer's BIOS, you might be wondering what happens after this executes. The last instructions in the BIOS tell the microprocessor to fetch its next instructions from a special location on the computer’s hard disc (or floppy disc). This location is called the boot sector and the instructions located there tell the microprocessor where to find and execute the computer’s operating system - whatever that may be, for example DOS, OS/2, Windows 95, UNIX, Mac OS etc. Once that task is done, the computer will display its familiar DOS prompt, Windows screen or whatever.
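
The chain of events can be pictured as a simple hand-over of control, sketched below. The address and the function names are invented for the illustration; only the order of events follows the description above:

    #include <stdio.h>

    #define RESET_VECTOR 0xF0000   /* invented address the CPU jumps to at power-up */

    static void run_operating_system(void)
    {
        printf("operating system running - showing its prompt or desktop\n");
    }

    static void run_boot_sector(void)
    {
        printf("boot sector code: locating the operating system on disc\n");
        run_operating_system();
    }

    static void run_bios(void)
    {
        printf("BIOS at reset vector 0x%X: basic start-up work\n", RESET_VECTOR);
        run_boot_sector();          /* the BIOS hands over to the boot sector */
    }

    int main(void)
    {
        /* Power on: the microprocessor begins fetching at the reset vector. */
        run_bios();
        return 0;
    }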

The end

Well, I have ended up writing quite a bit more than I anticipated, and I still didn’t cover everything I thought I might - for example, I haven’t mentioned DMA (aren’t you glad!).  If you have any questions, comments or criticisms, please feel free to email.  Be warned, however, that I'm self taught on this stuff so please don't assume what you have read is a full or accurate description in every respect.