How Computers Evolved

Author: Guest Blogger
Date: September 30, 2018

The history of computing is actually quite surprising, and until recently we didn’t know just how ancient the idea of a computer really is.  Here’s a quick look at how computers evolved from ancient times to the modern day:

The early computers (before 1944) were mechanical and did not run on electricity.  Some experimental computers were a combination of electricity and mechanical operation, but they weren’t as successful as either the purely mechanical computers or purely electrical ones.

As is often the case, it took the outbreak of war to really stimulate innovation.  The first known computer to run entirely on electricity was named “Colossus”, and was used by Allied forces to help break codes towards the end of the Second World War.  The idea for Colossus may have come from a German machine called the Zuse Z3, but that machine was electro-mechanical.  A few years later, the United States built a fully electronic computer of its own, named ENIAC.

Until the end of the 1950s, computers were all really big and required an entire room just to hold them.  As we’ll talk about in a moment, computers work by switching electric current, and the technology they used in those days required vacuum tubes to do the job.  These huge computers became retrospectively known as “mainframes”, and such computers are still built today, but using modern components, which makes modern mainframes vastly more powerful.
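Since we’ll keep coming back to the idea that computers work by switching electric current, here’s a tiny sketch of the principle.  The `and_gate` function below is purely illustrative, standing in for what a pair of vacuum tubes (or, later, transistors) actually did with electricity:

```python
# Computers are built from switches: each one is either on (1) or off (0).
# Whether the switch is a vacuum tube or a modern transistor, wiring
# switches together gives you logic.  Here is a two-switch AND circuit.

def and_gate(a, b):
    """Output is on only when both input switches are on."""
    return 1 if (a == 1 and b == 1) else 0

# Try every combination of the two switches (a truth table):
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} AND {b} -> {and_gate(a, b)}")
```

Everything a computer does, from adding numbers to drawing a screen, is ultimately built up from enormous numbers of simple switching circuits like this one.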

Vacuum tubes were really big, got really hot when they were working, and thousands of them were needed to make a working computer.  They also needed a lot of power.  If somebody had a working replica of ENIAC in their home today, it would cost nearly $20 per hour in electricity to run it.
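That running-cost figure is easy to sanity-check.  The sketch below assumes ENIAC’s commonly quoted power draw of roughly 150 kilowatts and a typical US residential electricity rate of about $0.13 per kilowatt-hour; both numbers are assumptions for illustration, not figures from this article:

```python
# Rough estimate of the hourly electricity cost of running ENIAC today.
# Both inputs are assumptions: ~150 kW is ENIAC's commonly quoted power
# draw, and $0.13/kWh is a typical US residential electricity rate.
power_kw = 150          # assumed power draw, in kilowatts
price_per_kwh = 0.13    # assumed electricity price, in dollars

cost_per_hour = power_kw * price_per_kwh
print(f"Estimated cost: ${cost_per_hour:.2f} per hour")  # about $19.50
```

Under those assumptions the estimate lands right around the “nearly $20 per hour” figure above.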

Then, in the 1960s, computers started to be built with transistors, which are smaller, more efficient, and much less expensive than vacuum tubes.  This meant that instead of a computer needing an entire room, it could now be built to just about the size of a small closet.  These smaller computers became known as “minicomputers”.

The next big leap was the invention of the microprocessor.  This put an entire processor onto a single integrated circuit, which is why microprocessors are often simply called “chips”.  Over time, these chips have gotten smaller and smaller, while also becoming more powerful.  The downside is that packing more circuitry into less space concentrates more heat, making the chips harder to keep cool.

Now, for the first time, computers small enough to sit on top of a desk could be built.  These were the first microcomputers, which is a slightly misleading name, because they were not all that small.  The early microcomputers were impressive in terms of size reduction, but represented a bit of a step back in processing power.

For comparison, the electro-mechanical Zuse Z3, which was built more than 30 years earlier, was a 22-bit computer (“bit”, in computing, is short for “binary digit”), while many early microcomputers were only 8-bit machines.  They were still faster and a lot more efficient than the Z3, however, because of the other advances in technology that went into their design.
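To make those bit widths concrete: each extra bit doubles the number of distinct values a machine word can represent.  The sketch below treats the words as plain unsigned values for simplicity (the real Z3 actually used a 22-bit floating-point format, so this is purely illustrative):

```python
# Each bit is a binary digit, so an n-bit word can hold 2**n distinct values.
for bits in (8, 22):
    distinct_values = 2 ** bits
    print(f"{bits}-bit word: {distinct_values:,} distinct values")
```

An 8-bit word can hold 256 distinct values, while a 22-bit word can hold 4,194,304, though as noted above, raw word size isn’t the whole story when it comes to speed.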

The next big step up in the evolution of computers was the 1977 release of the Apple II (the “II” is the Roman numeral for 2, not the number 11).  What was really revolutionary about this computer was that it included an integrated keyboard and a 40 x 24 character video display (originally monochrome, and later evolving to color).  The original Apple II used cassette tapes for data storage, then moved to 5.25″ floppy drives when that technology became available.

The main drawback of the Apple II was its very high price, a tradition Apple has maintained to the present day, which put it out of reach of most of the home users the company claimed to have created the machine for.  Most of the early customers, apart from a few wealthy home users, tended to be businesses and schools.

Apple’s error (or greed) in pricing its “home computer” so high allowed many competitors to enter the market at significantly lower price points and compete aggressively.  Apple’s main rival in the home computing market of the early 1980s was Commodore, whose C64 sold for less than a third of the price of an Apple II while using much of the same technology.  The C64 also had better graphics and sound, making it better for games, but Apple had a much wider range of software for business and educational users.

The entry of IBM into the microcomputer market, with the 1981 release of the first IBM PC, wasn’t actually very spectacular.  It was only slightly less expensive than the Apple II, so again most of the interest came from businesses and schools.

But one thing made the PC a major milestone in the evolution of computers: its open architecture.  IBM published the machine’s technical specifications and used standard parts available on the open market, making it easy for technicians to repair PCs and replace faulty components.  This won the hearts of business customers, because computers weren’t cheap.

Because the business market fell in love with the PC, the range of software for the PC rapidly overtook the range available for Apple computers.  Because of its high price tag, though, other microcomputers still dominated the home market.  That all changed when other companies figured out a legal way to imitate the way the IBM PC worked.

These machines became known as PC clones, and because their makers weren’t burdened with IBM’s legacy costs, they could sell for a lot less than a real IBM PC.  This was the beginning of the end not only for IBM in the personal computer market, but for virtually every other home computer on the market except Apple’s range of Apple II and Macintosh computers.

Once the PC clone became the industry standard, around the early 1990s, the range of software for the PC increased even further, until eventually nearly all the software on the market was made only for the PC, with only a few titles converted to run on the newly re-branded Apple Mac.

The success of Microsoft Windows 95 changed everything again: most software and hardware now needed to be Windows compatible, and this allowed Microsoft to dominate the operating system market even more strongly.  This was reinforced by the widespread uptake of the Visual Studio programming suite, which allowed developers to create Windows-compatible software more easily and quickly.

That innovation also helped hold back the early success of a new entry into the operating system market: Linux, a free, open-source operating system.  Windows software still dominates the market, but because Linux has become a lot easier to use, and because it is free, the range of software for Linux keeps increasing, and hardware manufacturers like Hewlett-Packard have started ensuring their products are Linux compatible.

Toward the end of the 1990s, cell phones started to gain increasingly capable computing abilities, including the ability to access web pages.  At first, these followed a PDA design, like the once-popular (and now obsolete) BlackBerry.  Then Apple released the iPhone and iPad, and everything changed once again.

The most interesting feature of these new phones and tablets was that they dropped the concept of an integrated keypad entirely, switching to a touchscreen design.  Yes, the company that was first to integrate a keyboard into a microcomputer was also among the first to drop the keyboard from a cell phone.  Google, and later Microsoft, followed in Apple’s footsteps, and “the future” finally arrived: a world where almost anyone could have a computer in their pocket.

The next step, which is already in development, will allow us to have computer components installed directly into our bodies.  This is very controversial (which means some people like the idea, but a lot of other people don’t), and development has slowed down.  For now, the gap between smartphones and bionics is being bridged by wearable technology.

If you had to choose between having technology placed directly into your body or wearing it outside your body, what would you choose and why?  Think about some of the risks that might be involved in having that technology in your body, but also the convenience of not having to carry or wear anything.

Many people are concerned about the potential threat to their privacy, the possibility of an implanted device being maliciously hacked, or simply a device malfunctioning and causing problems as a result.  These are all things we need to think about as we develop the next wave of technology, and it’s important that you think about it, because you’re probably going to play a part in it.

 
