Neural Networks, Also Known as ANN or SNN

A neural network, also referred to as an artificial neural network (ANN), offers a distinct computing structure that has only begun to be exploited. It is used to tackle problems that are too complex or cumbersome for conventional methods, and these computing structures differ drastically from commonly used computers. ANNs operate as highly parallel systems, relying on dense interconnections among remarkably uncomplicated processors (Cr95, Ga93).

Artificial neural networks are named after the networks of nerve cells found in the brain. While these computing models simplify the biological details, they retain enough of the brain's structure to offer insights into how neural processing in the brain might work (He90). Neural networks are effective for a wide range of applications and excel at pattern-related problems such as pattern mapping, completion, and classification (He95). They can be used to convert images into keywords or financial data into predictions (Wo96). Neural networks employ a parallel processing structure with numerous processors and interconnections between them.

These processors are less complex than traditional central processing units (He90), but in a neural network each processor is connected to many neighbors, so there are far more interconnections than processors. The neural network's strength lies in this vast number of interconnections (Za93). Engineers and scientists are increasingly fascinated by ANNs, which also contribute to our comprehension of biological models.

Neural networks offer a new form of parallel processing with powerful capabilities and potential for innovative hardware implementations (Wo96). These networks satisfy the need for fast computing hardware and offer the potential to solve various application problems. They also stimulate our imagination and our urge to understand ourselves, while equipping us with a range of unique technological tools. What has particularly sparked fascination in neural networks is their ability to create models resembling biological nervous systems that can effectively perform useful computations (Da90). In contrast, traditional single-processor computers, such as Von Neumann machines, use a sequential computation approach with a single CPU (He90).

A typical CPU can perform over a hundred basic commands, such as additions, subtractions, loads, and shifts, executed one at a time, paced by a clock. A neural network processing unit, by contrast, usually performs only one or a few kinds of calculation: it applies a summation function to its inputs and makes incremental adjustments to the parameters linked to its interconnections.
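
To make the summation step concrete, here is a minimal Python sketch of a single processing unit; the threshold activation, weights, and input values are all illustrative assumptions rather than details taken from the essay.

```python
import numpy as np

def unit_output(inputs, weights, threshold=0.0):
    """Sum the weighted inputs, then fire if the sum exceeds a threshold."""
    activation = np.dot(inputs, weights)  # the summation function
    return 1.0 if activation > threshold else 0.0

inputs = np.array([0.5, -1.0, 0.25])   # signals arriving on interconnections
weights = np.array([0.8, 0.2, -0.4])   # parameters linked to interconnections
print(unit_output(inputs, weights))    # 0.4 - 0.2 - 0.1 = 0.1 > 0 -> 1.0
```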

According to Vo94, even this simple structure enables a neural network to classify and recognize patterns, perform pattern mapping, and serve useful computing purposes. The processing power of a neural network is determined mainly by the number of interconnection updates performed per second. Von Neumann machines, by contrast, are rated by the number of instructions executed per second, sequentially, by a single processor (He90). Neural networks adjust the parameters associated with their interconnections during the learning phase, so the speed of learning is determined by the rate of interconnection updates (Kh90).

Neural network architectures differ from traditional parallel processing architectures in several ways. Firstly, the processors in a neural network have extensive interconnections, with more interconnections than processing units (Vo94). This ratio of interconnections to processing units is usually larger than that found in state-of-the-art parallel processing architectures (Za93). Additionally, parallel processing architectures often include processing units that are similar in complexity to those found in Von Neumann machines (He90).

Neural networks have a simpler structure than traditional architectures: their processing units merely sum multiple inputs and adjust interconnection parameters. From a computational perspective, neural networks are attractive because of their ability to learn and represent knowledge. Many researchers believe that machine learning techniques offer the greatest potential for accomplishing complex artificial intelligence tasks (Ga93). Much as children learn to recognize dogs by observing examples, most neural networks learn by training on a set of examples (Wo96).

Training patterns are vectors drawn from sources such as images, speech signals, sensor data, and diagnostic information. In supervised learning, the network is given an input pattern together with its target output, which serves as the correct answer or classification for that pattern. The network adjusts its internal weights based on these examples; if training succeeds, the internal parameters are adjusted to the point where the network responds correctly to each input pattern.
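
A hedged sketch of the supervised scheme just described: each training pattern is paired with a target output, and the internal weights are nudged toward the correct answer. The perceptron-style update rule, the toy AND task, and the learning rate are assumptions chosen for illustration; the essay does not specify a particular learning rule.

```python
import numpy as np

rng = np.random.default_rng(0)
patterns = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input patterns
targets = np.array([0, 0, 0, 1], dtype=float)  # target outputs (logical AND)

weights = rng.normal(scale=0.1, size=2)  # internal weights, adjusted by training
bias = 0.0
lr = 0.1                                 # learning rate

for epoch in range(50):
    for x, t in zip(patterns, targets):
        y = 1.0 if x @ weights + bias > 0 else 0.0  # network's current response
        error = t - y                               # target minus response
        weights += lr * error * x                   # adjust interconnection weights
        bias += lr * error

print([1.0 if x @ weights + bias > 0 else 0.0 for x in patterns])  # -> [0.0, 0.0, 0.0, 1.0]
```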

Neural networks have the potential to create computer systems that do not require programming, as they learn through examples (Wo96). This represents a fundamentally different approach to computing compared with traditional methods of developing computer programs. In a computer program, each step the computer takes is predetermined by the programmer. Conversely, neural networks start with sample inputs and outputs, and gradually learn to provide the correct output for each input (Za93).

The neural network approach eliminates the need for humans to identify features or develop specific algorithms for the classification problem, saving both time and effort (Wo96). However, the approach has drawbacks, such as unpredictable training time and the difficulty of designing a network that effectively solves a given application problem.

The potential of the approach, however, appears significantly greater than that of past approaches (Ga93). Neural network architectures encode information in a distributed fashion: a neural network typically shares its stored information among many processing units. This coding method differs greatly from traditional memory schemes, where specific pieces of information are stored at specific locations in memory. For instance, traditional speech recognition systems keep a lookup table of template speech patterns at a specific location in computer memory; these templates are compared one by one to spoken inputs.
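
For contrast, a toy sketch of the traditional lookup-table scheme just described; the short feature vectors stand in for real stored speech templates, and all names and values are invented for illustration.

```python
import numpy as np

# Template "speech patterns" stored at specific memory locations.
templates = {"ba": np.array([0.9, 0.1, 0.2]),
             "da": np.array([0.2, 0.8, 0.1])}

def classify(spoken):
    # Compare the input to each stored template, one by one.
    return min(templates, key=lambda k: float(np.linalg.norm(spoken - templates[k])))

print(classify(np.array([0.85, 0.15, 0.25])))  # -> "ba"
```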

Neural networks instead use many processing units to identify spoken syllables, producing an internal representation that is distributed across the network. A single network can also store multiple syllables or patterns simultaneously. The potential of neural networks as foundational components in future computational systems is extensive. Numerous useful applications have already been developed, constructed, and brought to market, with ongoing research focused on expanding this success. Applications emphasize areas where neural networks offer a superior approach to traditional computing.

Neural networks have the potential to solve various problems involving pattern recognition, pattern mapping, noisy data, pattern completion, associative lookups, and systems that learn or adapt during use (Fr93, Za93). Such problems arise in areas such as speech synthesis and recognition, image processing and analysis, sonar and seismic signal classification, and adaptive control. Moreover, neural networks can perform certain knowledge-processing tasks and can be used to implement associative memory (Kh90). They can also address optimization tasks. The range of potential applications is remarkable.

The first advanced application was identifying handwritten characters. A neural network is trained on a set of handwritten characters, such as hand-printed letters of the alphabet. The training set pairs the handwritten characters, as inputs, with their correct identifications. Once training is finished, the network can identify handwritten characters even with variations (Za93).
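
A small sketch of how such a training set might be assembled, pairing each input pattern with its correct identification; the 3x3 bitmaps and labels are toy stand-ins for real scanned characters.

```python
import numpy as np

# Toy 3x3 "handwritten" bitmaps standing in for scanned characters.
letter_T = np.array([[1, 1, 1],
                     [0, 1, 0],
                     [0, 1, 0]])
letter_L = np.array([[1, 0, 0],
                     [1, 0, 0],
                     [1, 1, 1]])

# Each training example pairs an input pattern with its correct identification.
training_set = [(letter_T.flatten(), "T"),
                (letter_L.flatten(), "L")]
```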

An impressive application study involved a neural network called NETtalk, designed to learn the phonetic pronunciation of written text. The network's input consisted of English text, presented as consecutive letters in sentences. In response, the network produced the phonetic notation for the sound to be generated from that text. This output was connected to a speech generator, allowing observers to hear how the network learned to speak.

Trained by Sejnowski and Rosenberg, this network achieved a high level of accuracy in pronouncing English text (Za93). Neural network studies in adaptive control have also been conducted. Widrow and Smith originally carried out the classic broom-balancing experiment in 1963, implementing a neural network control system. The network learned to maneuver a cart to maintain a balanced upside-down broom on its handle tip (Da90).

Recently, studies were conducted on teaching a robotic arm to reach its target position and stabilize there. Research was also carried out on training a neural network to control an autonomous vehicle using simplified vehicle control scenarios (Wo96). Neural networks are expected to work in conjunction with other technologies rather than replace them. Tasks handled effectively by conventional computer methods do not require neural networks, but the potential for combining neural networks with other technologies is extensive (He90).

Expert systems and rule-based knowledge-processing techniques are sufficient for certain applications, but neural networks can learn rules in a more flexible manner. In some instances, advanced systems can be constructed by integrating expert systems with neural networks (Wo96). A system that incorporates a neural network for analysis and pattern recognition can be combined with sensors for visual or acoustic data. Neural network components may also be employed in future robotics and control systems.

Simulation techniques, including simulation languages, can be expanded to incorporate frameworks for simulating neural networks. Neural networks can also now contribute to the optimization of engineering designs and industrial resources (Za93).

Developing a neural network application involves several design choices, starting with selecting the overall field of application.

The usual scenario is that a certain problem is identified as potentially solvable with a neural network. The problem then needs to be precisely defined in order to determine the inputs and outputs for the network. Choosing inputs and outputs means identifying the specific patterns the network will use, and the researcher must devise a way to represent the required information in those patterns. Finally, decisions regarding the internal design of the network must be made, such as its topology and size (Kh90).

The network design specifies the number of processing units and their interconnections. These processing units are typically organized into layers that can be fully or partially interconnected (Vo95). The design also determines the dynamic activity of the processing units. Various neural net paradigms are available, each determining how the network learns through the readjustment of its parameters.
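
A minimal sketch of these design choices, assuming a fully interconnected, three-layer topology; the layer sizes and tanh activation are arbitrary illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [16, 8, 4]  # input, hidden, and output units (arbitrary sizes)

# Fully interconnected layers: one weight matrix per adjacent pair of layers.
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x, weights):
    """Propagate an input pattern through the layered topology."""
    for w in weights:
        x = np.tanh(x @ w)  # each unit sums its weighted inputs, then squashes
    return x

print(forward(rng.normal(size=16), weights).shape)  # -> (4,)
```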

Optimizing an ANN design requires tuning internal parameters, such as the learning rate in the back-error propagation paradigm. This parameter affects the speed, and potentially the success, of the network's learning process. Experiments have shown that decreasing it during a learning session can lead to more successful learning. Some paradigms have several parameters that need tuning. Generally, network parameters are tuned from experimental results and experience with the specific application problem.
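
A brief sketch of that tuning idea, decreasing the learning rate over a session; the particular decay schedule and constants are common choices assumed for illustration, not ones prescribed by the essay.

```python
def learning_rate(epoch, initial=0.5, decay=0.01):
    """Take larger learning steps early in the session, smaller steps later."""
    return initial / (1.0 + decay * epoch)

for epoch in (0, 10, 100):
    print(epoch, round(learning_rate(epoch), 3))  # 0.5, 0.455, 0.25
```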

The selection of training data plays a vital role in how well a neural network learns a task. Like a child, the network's learning is shaped by the examples it is exposed to, so a good set of examples that effectively demonstrates the task is needed for the desired learning to occur. The training examples should also be representative of the various patterns the network will encounter after training. While established neural network paradigms already exist, ongoing research explores variations that increase complexity and enhance capabilities.
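
One related practice, sketched under the assumption of an 80/20 split: hold out part of the examples so the trained network can be checked against patterns it never saw during training. The data here are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.normal(size=(100, 16))   # stand-in training patterns
labels = rng.integers(0, 2, size=100)   # stand-in classifications

idx = rng.permutation(len(patterns))    # shuffle before splitting
split = int(0.8 * len(patterns))
train_x, train_y = patterns[idx[:split]], labels[idx[:split]]
test_x, test_y = patterns[idx[split:]], labels[idx[split:]]  # unseen after training
```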

Investigations into additional structures include incorporating delay components, using sparse interconnections, and enabling interaction between different interconnections.

It is possible to combine multiple neural nets, where the outputs of some networks become the inputs of others. These combined systems often result in improved performance and faster training times (Da90).
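
A small sketch of such a combination, with the outputs of one toy net feeding the inputs of another; in practice each component would be a trained network, and the sizes here are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
w1 = rng.normal(scale=0.1, size=(8, 4))  # weights of the first net
w2 = rng.normal(scale=0.1, size=(4, 2))  # weights of the second net

def first_net(x):
    return np.tanh(x @ w1)   # this net's outputs ...

def second_net(h):
    return np.tanh(h @ w2)   # ... become this net's inputs

x = rng.normal(size=8)
output = second_net(first_net(x))  # the two nets chained end to end
print(output.shape)  # -> (2,)
```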

Neural network implementations can take various forms, with software simulators being the most commonly used today. These programs simulate neural networks in software, and their speed is determined by the hardware on which they run. Accelerator boards are available to speed up the computations on individual computers (Wo96). Simulation plays a crucial role in the advancement and implementation of neural network technology. A simulator allows one to explore design decisions for a neural network system, testing input and output choices and evaluating the capabilities of the chosen paradigm (Wo96).

Implementations of neural networks are not confined to computer simulation. For example, one implementation could involve an individual manually calculating the changing parameters of the network using pencil and paper. Another implementation could involve a group of people, with each person acting as a processing unit and using a handheld calculator (He90). Although these implementations may not be efficient enough for practical applications, they still serve as methods for emulating a parallel computing structure based on neural network architectures (Za93).

One challenge in deploying neural network applications is that they require more computational power than currently available computers readily provide. It is difficult to determine the appropriate size for a network from small-scale simulations alone; to accurately evaluate performance, a network must be tested at the size expected in the actual application (Za93).

One possible solution to this challenge is to use specialized hardware to accelerate the response of an ANN. Such hardware can be built with analog computing technology or a combination of analog and digital methods. However, development of this specialized hardware is still ongoing, and numerous problems remain to be resolved (Wo96).

To achieve better neural network implementations, technological advancements such as custom logic chips and logic-enhanced memory chips are being considered (Wo96). These advancements hold promise for enhancing the performance and efficiency of neural networks.

The original neural networks, biological nervous systems, deserve mention in any discussion of implementation, since they were the first to implement neural network architectures. Both biological and artificial systems have parallel computing units that are heavily interconnected, with feature detectors, redundancy, massive parallelism, and modulation of connections (Vo94, Gr93). However, there are significant differences between them. Artificial neural networks typically have regular interconnection topologies based on a fully connected, layered organization, whereas biological interconnections do not precisely conform to this model. Nonetheless, biological systems do have a defined structure at the systems level, including specific areas that aggregate synapses and fibers as well as various other interconnections (Lo94, Gr93).

Although brain connections may appear random or merely statistical, there is likely a significant level of precision at the cellular, ensemble, and system levels. Another distinction is that the brain organizes itself dynamically during development and can permanently establish its wiring based on experience during specific critical periods. This influence on connection patterns is not present in current artificial neural networks (ANNs). The future of neurocomputing can benefit greatly from studying biological systems, whose structures can inspire new architectures for ANN models. In turn, the development of neurocomputing models can benefit biology and cognitive science.

Artificial neural networks, such as those illustrated by Le91, serve as models for characteristics of the human brain, but conclusions must be drawn carefully to avoid confusing the two types of systems.
