The Digital Abacus


#1. The History

As the computer revolution continues at its furious pace, one wonders where we are heading. Will we eventually live in an automated utopia where we lounge about, or might our creations destroy us, as in the Terminator movies? Will the Internet make its vast knowledge available to all, or will it widen the separation between the haves and the have-nots? While many good things stem from computers, there is a dreadful weakness: we as a nation and a world are addicted to electricity. If there were a worldwide outage of power, we would be like lost children. Knowledge of the origins of computers and the theory of their function is important to civilization should such an event occur.

The history of computers starts about 2,000 years ago with the birth of the abacus, a wooden rack holding two horizontal wires with beads strung on them. When these beads are moved around according to rules memorized by the user, all regular arithmetic problems can be done. Another important invention from around the same time was the astrolabe, used for navigation. Blaise Pascal is usually credited with building the first digital calculating machine in 1642. It added numbers entered with dials and was made to help his father, a tax collector. In 1671 Gottfried Wilhelm von Leibniz designed a machine, built in 1694, that could add and, after some rearrangement, multiply. Leibniz invented a special stepped-gear mechanism for introducing the addend digits, and it is still in use. The prototypes made by Pascal and Leibniz saw little use and were regarded as curiosities until, a little more than a century later, Thomas of Colmar created the first successful mechanical calculator that could add, subtract, multiply, and divide.
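The rules an abacus operator memorizes amount to column-by-column addition with carries: each wire holds one digit, and a carry is a bead moved on the next wire over. A minimal sketch in Python (an illustrative modern analogy with invented helper names, not a description of any historical device):

```python
# An abacus stores a number as per-column digits; addition proceeds
# column by column, carrying into the next wire when a column overflows.

def to_columns(n, width=6):
    """Split n into base-10 digits, least-significant column first."""
    return [(n // 10**i) % 10 for i in range(width)]

def abacus_add(a_cols, b_cols):
    result, carry = [], 0
    for a, b in zip(a_cols, b_cols):
        total = a + b + carry
        result.append(total % 10)   # beads left standing in this column
        carry = total // 10         # carry moved to the next column
    return result

cols = abacus_add(to_columns(1234), to_columns(876))
print(sum(d * 10**i for i, d in enumerate(cols)))  # 2110
```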
Many improved desktop calculators by many inventors followed, so that by about 1890 the range of improvements included accumulation of partial results, storage and automatic reentry of past results (a memory function), and printing of results. Each of these still required manual operation. These improvements were made mainly for commercial users, not for the needs of science. While Thomas of Colmar was developing the desktop calculator, a series of very interesting developments in computers was started by Charles Babbage in Cambridge, England. In 1812 Babbage realized that many long calculations, especially those needed to make mathematical tables, were really a series of predictable actions that were constantly repeated. From this he suspected that it should be possible to do them automatically. He began to design an automatic mechanical calculating machine, which he called a difference engine, and by 1822 he had a working model to demonstrate. With financial help from the British government, Babbage started fabrication of a difference engine in 1823. It was intended to be steam powered and fully automatic, including the printing of the resulting tables, and commanded by a fixed instruction program. The difference engine, although of limited adaptability and applicability, was a great advance. Babbage continued to work on it for the next ten years, but in 1833 he lost interest because he thought he had a better idea: the construction of what would now be called a general-purpose, fully program-controlled, automatic mechanical digital computer. Babbage called this idea an Analytical Engine. The design showed a lot of foresight, although this couldn't be appreciated until a full century later. The plans called for a decimal computer operating on numbers of 50 decimal digits (or words) and having a storage capacity (memory) of 1,000 such numbers.
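The insight Babbage exploited, that table-making reduces to constantly repeated actions, is the method of finite differences: once a polynomial's first value and its differences are seeded, every further table entry needs only additions, never a multiplication. A minimal sketch in Python (illustrative only; the function and variable names are our own):

```python
# Tabulate f(x) = x^2 + x + 41 by repeated addition alone, the principle
# behind Babbage's difference engine. For a degree-2 polynomial the
# second difference is constant, so two difference "registers" suffice.

def f(x):
    return x * x + x + 41

value = f(0)                          # first table entry
d1 = f(1) - f(0)                      # first difference
d2 = (f(2) - f(1)) - (f(1) - f(0))    # second difference (constant)

table = []
for _ in range(8):
    table.append(value)
    value += d1   # next entry: one addition
    d1 += d2      # update the running difference: one more addition

print(table)  # [41, 43, 47, 53, 61, 71, 83, 97] -- matches f(0)..f(7)
```

The same seeding-and-adding scheme extends to any polynomial degree by carrying more difference registers, which is exactly what the engine's columns of wheels were for.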
The built-in operations were to include everything that a modern general-purpose computer would need, even the all-important conditional control transfer capability that would allow commands to be executed in any order, not just the order in which they were programmed. The Analytical Engine was to use punched cards, which would be read into the machine from several different reading stations. The machine was supposed to operate automatically, by steam power, and require only one attendant. Babbage's computers were never finished. Various reasons are given for his failure; the most common is the lack of precision machining techniques at the time. Another speculation is that Babbage was working on the solution of a problem that few people in 1840 really needed to solve. After Babbage there was a temporary loss of interest in automatic digital computers. Between 1850 and 1900 great advances were made in mathematical physics, and it came to be understood that most observable dynamic phenomena can be described by differential equations (meaning that most events occurring in nature can be measured or described in one equation or another), so that easy means for their calculation would be helpful. A step toward automated computing was the development of punched cards, which were first successfully used with computers in 1890 by Herman Hollerith and James Powers, who worked for the U.S. Census Bureau. They developed devices that could read the information punched into the cards automatically, without human help. Because of this, reading errors were reduced dramatically, workflow increased, and, most importantly, stacks of punched cards could be used as easily accessible memory of almost unlimited size. Furthermore, different problems could be stored on different stacks of cards and accessed when needed.
These advantages were seen by commercial companies and soon led to the development of improved punch-card computers by International Business Machines (IBM), Remington Rand, Burroughs, and other corporations. These computers used electromechanical devices in which electrical power provided mechanical motion, like turning the wheels of an adding machine. Such systems included features to feed in a specified number of cards automatically, add, multiply, sort, and feed out cards with punched results. Compared to today's machines these computers were slow, usually processing 50 to 220 cards per minute, each card holding about 80 decimal digits (characters). At the time, however, punched cards were a huge step forward: they provided a means of I/O and memory storage on a huge scale. For more than 50 years after their first use, punched-card machines did most of the world's business computing and a considerable amount of the computing work in science. The start of World War II produced a large need for computing capacity, especially for the military. New weapons were made for which trajectory tables and other essential data were needed. In 1942, J. Presper Eckert, John W. Mauchly, and their associates at the Moore School of Electrical Engineering of the University of Pennsylvania decided to build a high-speed electronic computer to do the job. This machine became known as ENIAC (Electronic Numerical Integrator and Computer). The size of ENIAC's numerical "word" was 10 decimal digits, and it could multiply two such numbers at a rate of 300 per second by finding the value of each product from a multiplication table stored in its memory. ENIAC was therefore about 1,000 times faster than the previous generation of relay computers. It used 18,000 vacuum tubes, occupied about 1,800 square feet of floor space, and consumed about 180,000 watts of electrical power.
The executable instructions making up a program were embodied in the separate "units" of ENIAC, which were plugged together to form a route for the flow of information. These connections had to be redone after each computation, together with presetting function tables and switches. This "wire your own" technique was inconvenient, for obvious reasons, and ENIAC could be considered programmable only with some latitude. It was, however, efficient in handling the particular programs for which it had been designed. ENIAC is commonly accepted as the first successful high-speed electronic digital computer (EDC) and was used from 1946 to 1955. A controversy developed in 1971, however, over the patentability of ENIAC's basic digital concepts, the claim being made that another physicist, John V. Atanasoff, had already used basically the same ideas in a simpler vacuum-tube device he built in the 1930s while at Iowa State College. In 1973 the courts found in favor of the company making the Atanasoff claim. Fascinated by the success of ENIAC, the mathematician John von Neumann undertook, in 1945, an abstract study of computation that showed that a computer should have a very simple, fixed physical structure and yet be able to execute any kind of computation by means of proper programmed control, without any change in the hardware itself. Von Neumann contributed a new awareness of how practical, fast computers should be organized and built. These ideas, usually referred to as the stored-program technique, became essential for future generations of high-speed digital computers and were universally adopted. The stored-program technique involves many features of computer design and function besides the one it is named after; in combination, these features make very high-speed operation attainable. A glimpse may be provided by considering what 1,000 operations per second means.
If each instruction in a job program were used only once in consecutive order, no human programmer could generate enough instructions to keep the computer busy. Arrangements must be made, therefore, for parts of the job program (called subroutines) to be used repeatedly, in a manner that depends on how the computation goes. Also, it would clearly be helpful if instructions could be changed as needed during a computation to make them behave differently. Von Neumann met these two needs with a special type of machine instruction called a conditional control transfer, which allowed the program sequence to be interrupted and resumed at any point, and by storing all instruction programs together with data in the same memory unit, so that, when needed, instructions could be arithmetically modified in the same way as data. As a result of these techniques, computing and programming became much faster, more flexible, and more efficient. Regularly used subroutines did not have to be reprogrammed for each new program but could be kept in libraries and read into memory only when needed, so much of a given program could be assembled from the subroutine library. The all-purpose computer memory became the assembly place in which all parts of a long computation were kept, worked on piece by piece, and put together to form the final results. The computer control survived only as an errand runner for the overall process. As soon as the advantages of these techniques became clear, they became standard practice. The first generation of modern programmed electronic computers to take advantage of these improvements appeared in 1947. This group included computers using random-access memory (RAM), a memory designed to give almost constant access to any particular piece of information. These machines had punched-card or punched-tape I/O devices and RAMs of 1,000-word capacity with access times of 0.5 microseconds (0.5 × 10⁻⁶ second).
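The two ideas above, the conditional control transfer and instructions living in the same memory as data, can be shown with a toy stored-program machine in Python (the instruction set, opcodes, and addresses here are invented for illustration, not any historical machine's):

```python
# A toy stored-program machine. Memory holds both instructions and data;
# "JNZ" is a conditional control transfer: it jumps only when the
# accumulator is nonzero, letting one stretch of code run repeatedly.

def run(memory):
    pc, acc = 0, 0
    while True:
        op, arg = memory[pc]
        pc += 1
        if op == "LOAD":
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc          # data and code share one memory
        elif op == "JNZ":
            if acc != 0:
                pc = arg               # the conditional control transfer
        elif op == "HALT":
            return memory

# Program: sum the integers from memory[20] down to 1 into memory[21].
memory = {
    0: ("LOAD", 20), 1: ("JNZ", 3), 2: ("HALT", 0),
    3: ("LOAD", 21), 4: ("ADD", 20), 5: ("STORE", 21),   # sum += counter
    6: ("LOAD", 20), 7: ("ADD", 22), 8: ("STORE", 20),   # counter -= 1
    9: ("JNZ", 3), 10: ("HALT", 0),
    20: 4, 21: 0, 22: -1,             # counter, sum, constant -1
}
result = run(memory)
print(result[21])  # 4 + 3 + 2 + 1 = 10
```

Because instructions sit in ordinary memory cells, the program could just as well STORE over one of its own instructions, which is what the text means by instructions being arithmetically changed like data.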
Some of them could perform multiplication in 2 to 4 microseconds. Physically, they were much smaller than ENIAC; some were about the size of a grand piano and used only 2,500 electron tubes, far fewer than ENIAC required. The first-generation stored-program computers needed a lot of maintenance, reached probably about 70 to 80 percent reliability of operation, and were used for 8 to 12 years. They were usually programmed in machine language, although by the mid-1950s progress had been made in several aspects of advanced programming. This group of computers included EDVAC and UNIVAC, the first commercially available computers. Early in the 1950s two important engineering discoveries changed the image of the electronic computer field from one of fast but unreliable hardware to one of relatively high reliability and even greater capability: the magnetic core memory and the transistor circuit element. These discoveries quickly found their way into new models of digital computers. RAM capacities increased from 8,000 to 64,000 words in commercially available machines by the 1960s, with access times of 2 to 3 microseconds. These machines were very expensive to purchase or even to rent and were particularly expensive to operate because of the cost of programming. Such computers were mostly found in large computer centers operated by industry, government, and private laboratories, staffed with many programmers and support personnel. This situation led to modes of operation that enabled sharing of the high potential available.

One such mode is batch processing, in which problems are prepared and then held ready for computation on a relatively cheap storage medium; magnetic drums, magnetic disk packs, or magnetic tapes were usually used. When the computer finishes with a problem, it dumps the whole problem (program and results) on one of these peripheral storage units and starts on a new one. Another mode for fast, powerful machines is called time-sharing. In time-sharing, the computer processes many jobs in such rapid succession that each job runs as if the other jobs did not exist, thus keeping each "customer" satisfied. Such operating modes need elaborate executive programs to attend to the administration of the various tasks. In the 1960s, efforts to design and develop the fastest possible computer with the greatest capacity reached a turning point with the LARC machine, built for the Livermore Radiation Laboratory of the University of California by the Sperry Rand Corporation, and the Stretch computer by IBM. The LARC had a base memory of 98,000 words and multiplied in 10 microseconds. Stretch was made with several levels of memory, the slower-access levels having the greater capacity; the fastest access time was less than 1 microsecond, and the total capacity was in the vicinity of 100,000,000 words. During this period the major computer manufacturers began to offer a range of capabilities and prices, as well as accessories such as consoles, card feeders, printers, and graphing devices. These were widely used in businesses for accounting, payroll, inventory control, ordering supplies, and billing. CPUs for these uses did not have to be very fast arithmetically; they were mostly used to access large numbers of records on file and keep them up to date. By far the largest number of computer systems were sold for simpler uses, such as in hospitals (keeping track of patient records, medications, and treatments given).
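The time-sharing idea, many jobs advancing in rapid rotation so that each "customer" sees steady progress, can be sketched as a round-robin scheduler in Python (the job names and time quantum are invented for illustration):

```python
# Round-robin time-sharing, simplified: each job runs for one short
# quantum of work, then goes to the back of the queue, so every job
# advances and no single job monopolizes the machine.
from collections import deque

def time_share(jobs, quantum=2):
    queue = deque(jobs.items())                # (name, work units left)
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= min(quantum, remaining)   # run one time slice
        if remaining == 0:
            finished.append(name)              # job complete
        else:
            queue.append((name, remaining))    # back of the line
    return finished

order = time_share({"payroll": 5, "billing": 2, "report": 4})
print(order)  # ['billing', 'report', 'payroll']
```

A real executive program would also save and restore each job's registers and memory state at every switch; here the only "state" is the work remaining.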
They were also used in library systems, such as MEDLARS, the National Library of Medicine retrieval system, and in the Chemical Abstracts system, where computer records on file now cover nearly all known chemical compounds. The trend during the 1960s was, to some extent, away from very powerful single-purpose computers and toward a larger range of applications for cheaper computer systems. Most continuous-process manufacturing, such as petroleum refining and electrical-power distribution, now used computers of smaller capability for controlling and regulating their operations. In the 1960s the problems of applications programming were an obstacle to the independence of medium-sized on-site computers, but gains in applications programming language technology removed these obstacles. Applications languages became available for controlling a great range of manufacturing processes, for operating machine tools with computers, and for many other tasks. Moreover, a new revolution in computer hardware was under way, involving the shrinking of computer logic circuitry and components by what are called large-scale integration (LSI) techniques. In the 1950s it was realized that scaling down the size of electronic digital computer circuits and parts would increase speed and efficiency, and thereby improve performance, if only a way could be found to do it. About 1960 photoprinting of conductive circuit boards to eliminate wiring became more developed; then it became possible to build resistors and capacitors into the circuitry by the same process. In the 1970s vacuum deposition of transistors became the norm, and entire assemblies became available on tiny chips. In the 1980s very-large-scale integration (VLSI), in which hundreds of thousands of transistors are placed on a single chip, became increasingly common. Many companies, some new to the computer field, introduced programmable minicomputers in the 1970s, supplied with software packages.
The shrinking trend continued with the introduction of personal computers (PCs), programmable machines small enough and inexpensive enough to be purchased and used by individuals. Companies such as Apple Computer and Radio Shack introduced very successful PCs in the 1970s, encouraged in part by the popularity of computer games. In the 1980s some friction occurred in the crowded PC field, with Apple and IBM remaining strong. In the manufacture of semiconductor chips, the Intel and Motorola corporations were very competitive into the 1980s, although Japanese firms were making strong economic advances, especially in the area of memory chips. By the late 1980s some personal computers were run by microprocessors that, handling 32 bits of data at a time, could process about 4,000,000 instructions per second. Microprocessors equipped with read-only memory (ROM), which stores constantly used, unchanging programs, now performed an increasing number of process-control, testing, monitoring, and diagnostic functions, such as automobile ignition control, automobile-engine diagnosis, and production-line inspection. Cray Research and Control Data Inc. dominated the field of supercomputers, the most powerful computer systems, through the 1970s and 1980s. In April 1987 IBM unveiled its newest PC, the PS/2. This new machine introduced a few new standards to the market. While other companies had introduced 3.5-inch floppy disk drives earlier than IBM, several models of the PS/2 could not be fitted with an internal 5.25-inch drive. The PS/2 also brought with it the new Video Graphics Array (VGA) standard. The VGA system had two large improvements over the earlier EGA standard. First, VGA offered an increase in resolution that allowed each pixel to be more nearly square, so images seemed less distorted. Second, VGA allowed more colors to be displayed simultaneously on screen.
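The "more square" pixel claim is simple arithmetic: on a 4:3 monitor, a pixel's width-to-height ratio is the screen's aspect ratio divided by the resolution's aspect ratio. A quick check in Python (the helper name is our own), comparing EGA's 640x350 mode with VGA's 640x480:

```python
# Pixel aspect ratio on a 4:3 display: screen ratio / resolution ratio.
# A value of 1.0 means perfectly square pixels.

def pixel_aspect(h, v, screen_ratio=4 / 3):
    return screen_ratio / (h / v)

print(round(pixel_aspect(640, 350), 2))  # EGA: 0.73 (tall, distorted)
print(round(pixel_aspect(640, 480), 2))  # VGA: 1.0 (square)
```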
During the same year came the announcement of a new operating system developed jointly by IBM and Microsoft: OS/2. When the operating system first appeared, representatives from both companies claimed that OS/2 would be a replacement for DOS. However, the new system came with a few concerns. OS/2 was originally designed for the 286, but that processor had certain limitations; the problem would not be cured until the new 386, with its Virtual 86 mode, allowed the machine to run multiple sessions. It was during this period that Microsoft released Windows 2.0, which included features such as overlapping windows, the ability to resize windows, and keyboard shortcuts. However, applications could not multitask well and were limited in size, a problem addressed with the release of Windows/286 and Windows/386. In May 1990 the computing world got a new standard, Windows 3.0. This new environment ran on top of DOS, which gave it compatibility with DOS programs, and it could multitask both DOS programs and Windows programs. Applications for the new system soon followed from almost every major developer. IBM and Microsoft continued work on OS/2, in particular on OS/2 2.0, which would be the first true 32-bit version. Nearly a year after the release of Windows 3.0, IBM and Microsoft finally split. After the split IBM made one last attempt to bring OS/2 into the mainstream with the more consumer-oriented OS/2 Warp 3.0. Although it sold millions of copies, it could not slow the industry's move toward Windows. During this era Intel and Microsoft were the leaders of the PC industry, Windows became the standard platform, and networking became mainstream. One minor but telling change during this era was that the term "IBM compatible" fell out of use, replaced by the processor as the primary descriptor of hardware. 1995 was the beginning of the Internet boom. The Internet was not a new thing.
It began in the 1960s as a way to link universities and other networks together, and it was designed to withstand a nuclear attack. The Internet as we know it didn't surface until about 1990, when the Hypertext Markup Language (HTML) was created. In 1995, browsers from Netscape and Microsoft started to dominate the web. The biggest event of 1995 was the long-awaited release of Microsoft's new operating system, Windows 95. It allowed 32-bit applications and preemptive multitasking, supported new e-mail and communication standards, and had a fresh new interface.

#2. Long-Term Effects

The long-term effects of the computer are numerous, reaching into telecommunications, information services, the military, commerce, government, music, space travel, and everyday life. The benefits number just as many. The speed of commerce has increased. One can easily make a phone call not only across an ocean, but from a car. The military can not only deploy troops more rapidly, but with automated missiles it need not deploy them at all. ATMs make banking simpler and more accessible. Music has become easier to compose through various computer programs. The new technologies being developed in the aerospace industry may one day make space colonization a possibility. Everyday life has been affected by all of these things. As we continue along these lines, we also see the possibility of automated servants. As the programming becomes more and more refined, we run into several ethical dilemmas. For instance, when does AI (artificial intelligence) become true intelligence, and is it possible for a machine to be alive? (a problem Data faced on Star Trek: The Next Generation) We must reevaluate our ideas of personhood when we reach this level. There are, as previously stated, negative effects as well. The dependence we tend to have on computers is one example. Why should anyone actually do math when a calculator can do it for them?
If the world's power were to falter, our worldwide civilization would most probably fail. As we switch from a production-based civilization to an information-based civilization, we experience growing pains. We will see our national television standard switch from NTSC to ATSC, a digital format. Tape players have been, and continue to be, replaced by CDs. Finally, DVD looms over the entire video industry like a bomb about to hit. These changes cost money, and not everybody can keep up. As this happens, the gap between the classes widens. Society will be changed not only by the information people have access to, but also by the economic gap. The road in front of us has many forks, some good, others bad. If one way is chosen, the other never can be. The pity is that one never knows whether a decision that seems good today may cause a bad one tomorrow. Because of this we must all be aware of the past. Historical mistakes often have a modern reflection. We are not necessarily destined for success, nor are we predestined for failure. We can, however, go either way.

Works Cited

1) Cortada, James W. Bibliographic Guide to the History of Computing, Computers & the Information Processing Industry. Westport, Conn.: Greenwood Press, 1996.

2) Campbell-Kelly, Martin, and William Aspray. Computer: A History of the Information Machine. New York: Basic Books, 1997.

3) Aspray, William, ed. Computing before Computers. Ames, Iowa: Iowa State University Press, 1990.
