Essay about Computer

Previously, there was a discussion regarding the superiority of analog or digital computers. Particularly in the 1970s, analog computers were commonly employed for solving finite difference equations in oil reservoir modeling. Nonetheless, digital computers eventually demonstrated their dominance thanks to their power, cost-effectiveness, and scalability when dealing with extensive computations. Presently, digital computers are extensively utilized and integrated into society, encompassing handheld calculators as well as supercomputers.

Hence, this concise overview of the advancement of scientific computing focuses solely on digital, electronic computers. The progress of digital computing is commonly categorized into generations, with each one showcasing significant enhancements compared to its predecessor in terms of technology for building computers, internal organization of computer systems, and programming languages. Additionally, there has been a consistent improvement in algorithms, including those utilized in computational science.

The history below has been categorized into different generations.
3.1 The Mechanical Era (1623-1945)
The concept of using machines to solve mathematical problems can be traced back to the 17th century. Mathematicians such as Wilhelm Schickard, Blaise Pascal, and Gottfried Leibniz created calculators capable of performing addition, subtraction, multiplication, and division. Charles Babbage's Difference Engine, started in 1823 but never finished, is recognized as the first programmable computing device.

The Analytical Engine, a more ambitious machine, was designed in 1842 but was only partially completed by Babbage. Babbage's inability to finish these projects can be attributed to the unreliable technology of the time. However, several important programming techniques, such as conditional branches, iterative loops, and index variables, were recognized by Babbage's colleagues, notably Ada, Countess of Lovelace.

A machine based on Babbage's design was possibly the first one utilized in computational science. In 1833, George Scheutz came across information about the Difference Engine, and together with his son Edvard Scheutz, he embarked on creating a smaller version. By 1853, they had successfully developed a machine capable of processing 15-digit numbers and calculating fourth-order differences. This achievement earned them a gold medal at the Exhibition of Paris in 1855. Ultimately, they sold the machine to the Dudley Observatory in Albany, New York, which employed it for calculating the orbit of Mars.

The US Census Bureau was one of the first to utilize mechanical computers commercially. They employed punch-card equipment designed by Herman Hollerith to tabulate data for the 1890 census. In 1911, Hollerith's company merged with a competitor and established the corporation that eventually became International Business Machines (IBM) in 1924. From 1937 to 1953, the first generation of electronic computers emerged. These machines replaced electromechanical relays with electronic switches in the form of vacuum tubes.

In theory, electronic switches would be more reliable, since they had no moving parts that would wear out, but in practice the tubes were comparable to relays in reliability. Electronic components had one major advantage, however: they could open and close about 1,000 times faster than mechanical switches. In 1937, J. V. Atanasoff, a professor of physics and mathematics at Iowa State, made the earliest attempt to build an electronic computer.

Atanasoff set out to construct a machine that would assist his graduate students in solving systems of partial differential equations. In 1941, this endeavor led to the successful creation of a machine capable of solving 29 simultaneous equations with 29 unknowns. Nonetheless, this machine lacked programmability and functioned primarily as an electronic calculator. Another significant early electronic machine, known as

Colossus, was developed by Alan Turing in 1943 for the British military. It played a crucial role in deciphering codes employed by the German army during World War II.

Turing played a significant role in computer science with his concept of the Turing machine, a mathematical tool extensively utilized in examining computable functions. The existence of Colossus was not disclosed until well after the war had concluded, so the recognition due Turing and his colleagues for creating one of the first working electronic computers was delayed. The first general-purpose electronic computer was ENIAC (Electronic Numerical Integrator and Computer), developed by J. Presper Eckert and John V. Mauchly.

The Ordnance Department required a method for calculating ballistics during World War II, leading to the development of the machine, although it was not completed until 1945. Nonetheless, it was extensively utilized for calculations during the creation of the hydrogen bomb and remained active until its decommissioning in 1955. Throughout its operational period, it also supported research on wind tunnel design, random number generation, and weather prediction. Meanwhile, Eckert, Mauchly, and John von Neumann, who served as a consultant to the ENIAC project, commenced work on a new machine while ENIAC was still being developed. The key innovation introduced by their new project, named EDVAC, was the concept of a stored program.

Despite some controversy regarding the credit for this idea, there is unanimous agreement on its significance for the future of general-purpose computers. ENIAC's operations were controlled by external switches and dials, necessitating physical adjustments to these controls in order to change the program. Moreover, these controls also imposed speed limitations on internal electronic operations. EDVAC, on the other hand, achieved much faster performance by utilizing a memory capable of storing both instructions and data and by using the program stored in memory to regulate the sequence of arithmetic operations.

Storing instructions and data in the same medium allowed designers to concentrate on improving the machine's internal structure without worrying about the speed of the external controls. The EDVAC project exemplifies the strength of interdisciplinary projects in modern computational science, regardless of who should be credited for the idea of a stored program. The EDVAC group understood that computer instructions, represented as numbers, can be stored in the computer's memory along with numerical data.

The concept of utilizing numbers to represent functions was a crucial step made by Gödel in his incompleteness theorem of 1931, an achievement that von Neumann, being a logician, was well acquainted with. Von Neumann's logical expertise, along with Eckert and Mauchly's electrical engineering aptitude, created a highly influential interdisciplinary team. During this era, software technology was extremely rudimentary. The first programs were written in machine code, meaning programmers manually recorded the numeric representations of the instructions they wished to store in memory.

Later, programmers used a symbolic notation known as assembly language and translated it by hand into machine code; programs known as assemblers eventually performed this translation task. Despite being primitive, these machines proved to be valuable in applied science and engineering. With a Marchant calculator, it was estimated to take eight hours to solve a set of equations with eight unknowns and 381 hours to solve 29 equations with 29 unknowns. The Atanasoff-Berry computer, however, was able to complete the latter task in less than an hour.

The first problem run on ENIAC, a numerical simulation used in the design of the hydrogen bomb, took 20 seconds. During this same period, the UNIVAC, considered to be the first commercially successful computer, was developed. In 1952, just 45 minutes after the polls closed and with 7% of the vote counted, UNIVAC predicted that Eisenhower would defeat Stevenson with 438 electoral votes (he ultimately received 442). The second generation of computers (1954-1962) witnessed significant advancements at all levels of computer system design, from the technology used to build the basic circuits to the programming languages employed for scientific applications.

In this era, electronic switches relied on discrete diode and transistor technology, with a switching time of around 0.3 microseconds. The TRADIC at Bell Laboratories in 1954 and the TX-0 at MIT's Lincoln Laboratory were among the first machines built using this technology. Memory technology was based on magnetic cores, which allowed for random access, unlike mercury delay lines, where data was stored as an acoustic wave that moved through the medium sequentially and could be accessed only at the I/O interface.

Important advancements in computer architecture included the addition of index registers for controlling loops and floating-point units for calculations involving real numbers. Before these innovations, accessing consecutive elements in an array was difficult and often required writing self-modifying code. This practice, although considered powerful at the time because it exploited the idea that programs and data are essentially identical, is now discouraged as it is difficult to debug and impossible in many high-level programming languages.
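
To make this concrete, the following is a minimal, hypothetical C sketch of the looping pattern that index registers (and index variables in high-level languages) made straightforward; the array contents and the scale factor are illustrative only, and no self-modifying code is needed to step through consecutive elements.

    #include <stdio.h>

    int main(void) {
        /* Hypothetical data: an array of real values to be scaled. */
        double values[8] = {1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0};
        double scale = 0.5;

        /* The index i plays the role an index register played in hardware:
           it selects consecutive array elements without rewriting the
           instructions themselves (i.e., without self-modifying code). */
        for (int i = 0; i < 8; i++) {
            values[i] = scale * values[i];
        }

        for (int i = 0; i < 8; i++) {
            printf("%f\n", values[i]);
        }
        return 0;
    }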

In the earliest machines, floating-point operations were performed by software libraries, but in second-generation machines they were carried out in hardware. At the same time, high-level programming languages such as FORTRAN (1956), ALGOL (1958), and COBOL (1959) were introduced. The IBM 704 and its successors, the 709 and 7094, played a vital role during this era of computing. The addition of input/output processors in the last of these greatly improved data transfer between main memory and input/output devices.

During the second generation of computers, two supercomputers were introduced: the Livermore Atomic Research Computer (LARC) and the IBM 7030 (also known as Stretch). These machines were specifically designed for scientific applications involving numeric processing. They utilized parallel processing and overlapped memory and processor operations, making them notably powerful for their time. In the third generation (1963-1972), advanced techniques such as integrated circuits (ICs), semiconductor memories, microprogramming, and pipelining were introduced. This era also marked the emergence of operating systems and time-sharing.

Originally, the first ICs were built using small-scale integration (SSI) circuits, where each circuit (or "chip") contained approximately 10 devices. Over time, this technology advanced to medium-scale integration (MSI) circuits, which could accommodate up to 100 devices per chip. Multilayered printed circuits were invented, and faster solid-state memories replaced core memory. Computer designers also embraced parallelism by incorporating multiple functional units, overlapping CPU and I/O operations, and implementing pipelining (internal parallelism) in both the instruction stream and the data stream.

In 1964, Seymour Cray created the CDC 6600, the first architecture to utilize functional parallelism. This design incorporated 10 separate functional units and 32 independent memory banks, allowing it to achieve a computation rate of 1 million floating-point operations per second (1 Mflop/s). Shortly after, in 1969, CDC introduced the 7600, another Seymour Cray design. The CDC 7600, equipped with pipelined functional units, is recognized as the first vector processor and was capable of performing at a rate of 10 Mflop/s.

The IBM 360/91 was roughly twice as fast as the CDC 6600. It used instruction lookahead, separate floating-point and integer functional units, and a pipelined instruction stream. The IBM 360/195 was comparable to the CDC 7600, with its performance largely due to a fast cache memory. The Westinghouse Corporation developed the SOLOMON computer, while the ILLIAC IV was a joint development by Burroughs, the Department of Defense, and the University of Illinois. Both the SOLOMON and the ILLIAC IV were among the earliest parallel computers.

The Texas Instruments Advanced Scientific Computer (TI-ASC) and the CDC STAR-100 were vector processors with pipelining capabilities. These processors not only proved the feasibility of this design concept but also established benchmarks for future vector processors. In the early stages of the third generation, Cambridge and the University of London collaborated to create CPL (Combined Programming Language, 1963). The objective of CPL, as stated by its creators, was to capture the essential features of the intricate and advanced ALGOL programming language.

However, like ALGOL, CPL was a large programming language with numerous complex features. In an effort to simplify it further, Martin Richards of Cambridge created a subset of CPL known as BCPL (Basic Combined Programming Language, 1967). In 1970, Ken Thompson of Bell Labs introduced a further simplification called B, which was developed alongside an early version of the UNIX operating system. During the fourth generation of computing (1972-1984), advancements in technology allowed computing elements to be built with integrated circuits containing up to 1,000 devices per chip and very large scale integration (VLSI) circuits with 100,000 devices per chip.

At this scale, entire processors can fit onto a single chip, and for simpler systems the complete computer, including the processor, main memory, and I/O controllers, can fit on a single chip. Gate delays dropped to approximately 1 ns per gate. Semiconductor memories replaced core memories as the primary form of main memory in most systems; prior to this time, semiconductor memory had been used mostly in registers and cache. During this era, high-speed vector processors such as the CRAY 1, CRAY X-MP, and CYBER 205 dominated high-performance computing.

Computers with large main memory, like the CRAY 2, began to emerge. A variety of parallel architectures also appeared during this time, but parallel computing was still largely experimental, and most computational science was done on vector processors. Microcomputers and workstations were introduced as alternatives to time-shared mainframe computers. In terms of software, there were advancements in high-level programming languages such as FP (functional programming) and Prolog (programming in logic).

These languages typically employ a declarative programming style, in contrast to the imperative style of Pascal, C, FORTRAN, and others. In a declarative style, the programmer specifies the mathematical result to be computed, leaving the implementation details to the compiler or runtime system. Although not widely adopted at present, these languages show great potential as programming notations for massively parallel computers consisting of over 1,000 processors.

Optimization techniques have been implemented in compilers for established languages to enhance code efficiency, and compilers for vector processors can convert simple loops into single instructions that operate over entire vectors. Bell Labs played a significant role in the early stages of the third generation with two notable developments: the C programming language and the UNIX operating system. Aiming to meet the design objectives of CPL and build on Thompson's B, Dennis Ritchie developed the C language in 1972.
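
As an aside on the vectorization point above, the following is a minimal, hypothetical C sketch of the kind of simple loop such compilers can map onto vector instructions; the function name saxpy, the array size, and the values are illustrative only, not taken from any particular machine or compiler.

    #include <stdio.h>

    #define N 1000

    /* Computes y[i] = a*x[i] + y[i] for all i. Each iteration is independent,
       so a vectorizing compiler may replace the scalar loop with vector
       multiply-add operations acting on many elements at once. */
    void saxpy(float a, const float *x, float *y, int n) {
        for (int i = 0; i < n; i++) {
            y[i] = a * x[i] + y[i];
        }
    }

    int main(void) {
        static float x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy(3.0f, x, y, N);
        printf("y[0] = %f\n", y[0]);   /* expected 5.0 */
        return 0;
    }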

Thompson and Ritchie then used C to write the DEC PDP-11 version of UNIX. Because this implementation was written in C, it could be ported to other computer systems, eliminating the need for users to learn a new operating system every time they changed hardware. Today, UNIX or one of its derivatives is the standard on almost all computer systems.

A significant advancement in computational science took place with the release of the Lax report. In 1982, Peter D. Lax chaired a panel sponsored by the US Department of Defense (DOD) and National Science Foundation (NSF) regarding Large Scale Computing in Science and Engineering.

The Lax Report highlighted the lack of coordinated national attention in the United States to high-performance computing, especially in contrast to aggressive and focused foreign initiatives, particularly in Japan. One of the first and most visible responses to the Lax report was the establishment of the NSF supercomputing centers. Phase I of this program aimed to promote the use of high-performance computing at American universities by providing immediate access to cycles and training on three (and later six) existing supercomputers. Following this initial stage, in 1984-1985, further developments took place.

NSF funded the creation of five Phase II supercomputing centers: the San Diego Supercomputer Center, the National Center for Supercomputing Applications in Illinois, the Pittsburgh Supercomputing Center, the Cornell Theory Center at Cornell University, and the John von Neumann Center in Princeton. These centers have been highly successful in providing supercomputer time to academic researchers. Moreover, they have conducted various training programs and developed many freely accessible software packages.

These Phase II centers further enhance the significant high performance computing initiatives at the National Laboratories, particularly the Department of Energy (DOE) and NASA locations. The fifth generation (1984-1990) of computer systems was marked by the adoption of parallel processing. Previously, parallelism was restricted to pipelining and vector processing, or at most to a small number of processors sharing tasks. The fifth generation witnessed the emergence of machines equipped with hundreds of processors capable of simultaneously executing different segments of a single program.

The integration scale in semiconductors advanced rapidly, with the capability to construct chips containing a million components by 1990. Semiconductor memories became a standard feature in all computers. Additionally, computer networks became widely utilized, and single-user workstations were increasingly adopted. Before 1985, large scale parallel processing was primarily a research objective. However, around this period, two systems emerged as representative examples of the initial commercial products based on parallel processing.

The Sequent Balance 8000 connected up to 20 processors to a single shared memory module (though each processor had its own local cache). It was designed to compete with the DEC VAX-780 as a general-purpose UNIX system, with each processor working on a different user's job. However, Sequent also supplied a library of subroutines that enabled programmers to write programs using more than one processor, making the machine popular for experimenting with parallel algorithms and programming techniques.

The Intel iPSC/1, also known as "the hypercube", took a different approach: each processor had its own memory, and the processors were connected by a network interface. This distributed-memory architecture removed shared memory as a bottleneck. Toward the end of this period, a third type of parallel processor, the data-parallel SIMD machine, was introduced to the market. These machines consisted of several thousand very simple processors, all working under the direction of a single control unit; for example, if the control unit says "add a to b", each processor finds its local copy of a and adds it to its local copy of b. Notable machines in this class include the Connection Machine from Thinking Machines, Inc., and the MP-1 from MasPar, Inc. Scientific computing in this period was still dominated by vector processing; most manufacturers of vector processors introduced parallel models, but these machines typically had only a few (two to eight) processors.

Advances in computer networking brought significant progress in both wide area network (WAN) and local area network (LAN) technology. This drove a shift from traditional mainframe computing to distributed computing, in which each user has a workstation for tasks such as editing, compiling programs, and reading mail. However, expensive resources such as file servers and supercomputers continue to be shared.

The computational power of affordable workstations and servers has greatly improved thanks to RISC technology and the decreasing cost of RAM, accompanied by advances in both the quality and quantity of scientific visualization. The research described above received sponsorship from the U.S. Department of Energy. (C) 1991, 1992, 1993, 1994, 1995 by the Computational Science Education Project. Authors and editors: Richard C. Allen (Applied Mathematics), Sandia National Laboratories, NM; Chris Bottcher, Physics Division, Oak Ridge National Laboratory, TN; Phillip Bording, Geosciences, University of Tulsa; Pat Burns, Department of Mechanical Engineering, Colorado State University; John Conery, Department of Computer & Information Science, University of Oregon, Eugene; Thomas R. Davies, Department of Physics, Duquesne University, PA; James Demmel, Department of Computer Science, UC Berkeley; Chris Johnson, Department of Computer Science, University of Utah; Alkalis Kanata, Colorado Center for Astrodynamics Research, University of Colorado Boulder, CO; and William Martin, Department of Nuclear Engineering, University of Michigan.
