ARTIFICIAL NEURAL NETWORKING
This project work is intended to help the reader understand what Artificial Neural Networks are, how to use them, and where they are currently being used. The architecture of the network is explained explicitly. Also discussed are the types of applications currently utilizing the different structures, and how some structures lend themselves to specific solutions. The various types of neural networks are briefly explained, applications of neural networks are described, and a detailed historical background is provided. The concept of back propagation is introduced with a mathematical explanation as well as its implementation in C++. Moreover, I used a program written in C++ to demonstrate the use of a neural network in finding the minimum of eight numbers.
TABLE OF CONTENTS
Table of contents
Table of figures
- History of Artificial Neural Networks
- To the Human Brain
- To other Artificial Intelligence
- To Traditional Computers
- Brief History of Some Artificial Neural Networks
- McCulloch-Pitts Neuron
- Backpropagation Networks
- Constructive Algorithms
- Implementing Neural Networks
- Training Neural Network Using Back Propagation
2.4.1 The Back Propagation formulae
2.4.2 Back Propagation Weight Update Rule
ARCHITECTURE OF NEURAL NETWORK
- Network Layers
- Major Components of an Artificial Neuron
- Learning paradigms
3.3.1 Supervised Learning
3.3.2 Unsupervised Learning
3.3.3 Reinforcement Learning
3.4 Learning Rate
3.5 Learning Laws
3.6 Types of Artificial Neural Networks
APPLICATIONS OF NEURAL NETWORKS
- Neural Networks in Practice
- Neural Networks in Business
- Neural Networks in Medicine
- Other Areas
- New Application Areas
- How to Determine if an Application is a Neural Network Candidate
- Use of Neural Network to Find Minimum of Eight Values
CONCLUSION AND FUTURE OF ANN
Back Propagation Algorithm in C++
Use of Neural Network to find Minimum of eight Values in C++
B1 – Code
B2 – Output
TABLE OF FIGURES
Figure 1: A simple neuron
Figure 2: A basic artificial neuron
Figure 3: A multilayer Perceptron
Figure 4: Notation used
Figure 5: weight value
Figure 6: A simple network diagram
Figure 7: Major components of an artificial neuron
Figure 8: Sample transfer function
Table 1: Comparison of expert systems and neural networks
Table 2: Network selector table
Artificial neural networks are relatively crude electronic models based on the neural structure of the brain. The brain basically learns from experience. It is natural proof that some problems beyond the scope of current computers are indeed solvable by small, energy-efficient packages. This brain modeling promises a less technical way to develop machine solutions. This new approach to computing also provides a more graceful degradation during system overload than its more traditional counterparts.
An ANN is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well.
These biologically inspired methods of computing are thought to be the next major advancement in the computing industry. Even simple animal brains are capable of functions that are currently impossible for computers. Computers do rote things well, like keeping ledgers or performing complex math, but they have trouble recognizing even simple patterns, much less generalizing patterns of the past into actions for the future.
There is no precise, agreed-upon definition among researchers as to what a neural network is, but most would agree that it involves a network of simple processing elements.
An artificial neural network (ANN), also called a simulated neural network (SNN), is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing, based on a connectionist approach to computation. It consists of highly complex processing elements (neurons) whose global behaviour is determined by the connections between the processing elements and by the element parameters.
Informally, an artificial neural network (ANN) is a set of interconnected processing units performing a function or task. These networks are similar to biological neural networks in the sense that functions are performed collectively and in parallel by the units, rather than there being a clear delineation of sub-tasks to which various units are assigned.
- History of Artificial Neural Networks
- First Attempts: The study of the human brain is thousands of years old. The first step towards artificial neural networks came in 1943, when Warren McCulloch, a neurophysiologist, and a young mathematician, Walter Pitts, wrote a paper on how neurons might work. They modeled a simple neural network with electrical circuits. McCulloch and Pitts (1943) developed models of neural networks based on their understanding of neurology. These models made several assumptions about how neurons worked. Their networks were based on simple neurons, which were considered to be binary devices with fixed thresholds. The results of their model were simple logic functions such as “a or b” and “a and b”. Other attempts followed using computer simulations.
- Promising & Emerging Technology: Neuroscientists, psychologists and engineers contributed to the progress of neural network simulations. Rosenblatt (1958), a neurobiologist at Cornell, stirred considerable interest and activity in the field when he designed and developed the perceptron. He was intrigued with the operation of the eye of a fly; much of the processing which tells a fly to flee is done in its eye. The perceptron, which resulted from this research, was built in hardware and is the oldest neural network still in use today. The perceptron had three layers, with the middle layer known as the association layer. This system could learn to connect or associate a given input to a random output unit. Another system was the ADALINE (ADAptive LINear Element), developed in 1960 by Widrow and Hoff (of Stanford University). The ADALINE was an analogue electronic device made from simple components. Its method of learning was different from that of the perceptron: it employed the Least-Mean-Square (LMS) learning rule.
- Period of Frustration & Disrepute: Unfortunately, the earlier successes caused people to exaggerate the potential of neural networks, particularly in light of the limitations of the electronics then available. Disappointment set in as promises went unfulfilled. Also, a fear set in as writers began to ponder what effect “thinking machines” would have on man. Asimov's series on robots revealed the effects on man's morals and values when machines are capable of doing all of mankind's work. These fears, combined with unfulfilled, outrageous claims, caused respected voices to critique neural network research. The result was to halt much of the funding. This period of stunted growth lasted through 1981.
- Innovation: Although public interest and available funding were minimal, several researchers continued working to develop neuromorphically based computational methods for problems such as pattern recognition. During this period, several paradigms were generated which modern work continues to enhance. Grossberg's influence (Stephen Grossberg and Gail Carpenter in 1988) founded a school of thought which explores resonating algorithms; they developed the ART (Adaptive Resonance Theory) network based on biologically plausible models. Anderson and Kohonen developed associative techniques independently of each other. Klopf (A. Henry Klopf), in 1972, developed a basis for learning in artificial neurons based on a biological principle for neuronal learning called heterostasis. Werbos (Paul Werbos, 1974) developed and used the back-propagation learning method. Back-propagation networks are probably the most well-known and widely applied of the neural networks today. In essence, the back-propagation network is a perceptron with multiple layers, a different threshold function in the artificial neuron, and a more robust and capable learning rule. Amari (A. Shun-Ichi, 1967) was involved with theoretical developments: he published a paper which established a mathematical theory for a learning basis (error-correction method) dealing with adaptive pattern classification. Fukushima (F. Kunihiko) developed a stepwise trained multilayered neural network for the interpretation of handwritten characters. The original network was published in 1975 and was called the Cognitron.
- Re-Emergence: Progress during the late 1970s and early 1980s was important to the re-emergence of interest in the neural network field. Several factors influenced this movement. For example, comprehensive books and conferences provided a forum for people in diverse fields with specialized technical languages, and the response to conferences and publications was quite positive. The news media picked up on the increased activity, and tutorials helped disseminate the technology. Academic programs appeared and courses were introduced at most major universities in the US and Europe. Attention is now focused on funding levels throughout Europe, Japan and the US, and as this funding becomes available, several new commercial applications in industry and financial institutions are emerging.
- Today: Significant progress has been made in the field of neural networks, enough to attract a great deal of attention and fund further research. Advancement beyond current commercial applications appears to be possible, and research is advancing the field on many fronts. The key to the whole technology lies in hardware development. Neurally based chips are emerging and applications to complex problems are being developed. Companies are working on three types of neuro chips: digital, analog, and optical. Some companies are working on creating a “silicon compiler” to generate a neural network Application Specific Integrated Circuit (ASIC). These ASICs and neuron-like digital chips appear to be the wave of the near future, although optical chips look very promising. Yet, it may be years before optical chips see the light of day in commercial applications.
Clearly, today is a period of transition for neural network technology.
- To the Human Brain
The most basic element of the human brain is a specific type of cell which, unlike the rest of the body, does not appear to regenerate. Because this type of cell is the only part of the body that is not slowly replaced, it is assumed that these cells are what provide us with our abilities to remember, think, and apply previous experiences to our every action. These cells, all 100 billion of them, are known as neurons; each connects to many other neurons, and 1,000 to 10,000 connections are typical. The individual neurons are complicated. They have a myriad of parts, sub-systems, and control mechanisms. Each neuron is a cell that uses biochemical reactions to receive, process, and transmit information. A typical neuron collects signals from others through a host of fine structures called dendrites.
Dendrites are hair-like extensions of the soma which act like input channels. These input channels receive their input through the synapses of other neurons. The soma then processes the incoming signals over time and turns the processed value into an output, which it sends out as spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches. At the end of each branch, a structure called a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurons.
When a neuron receives excitatory input that is sufficiently large compared with its inhibitory input, it sends a spike of electrical activity down its axon. Learning occurs by changing the effectiveness of the synapses so that the influence of one neuron on another changes.
Real brains, however, are orders of magnitude more complex than any artificial neural network so far considered. Artificial neural networks try to replicate only the most basic elements of this complicated, versatile, and powerful organ. The goal of artificial neural networks is not the re-creation of the brain. On the contrary, neural network researchers are seeking an understanding of nature's capabilities with which people can engineer solutions to problems that have not been solved by traditional computing. To do this, the basic unit of neural networks, the artificial neuron, simulates the four basic functions of natural neurons.
An artificial neural network consists of a number of very simple and highly interconnected processors, also called neurons, which are analogous to biological neurons. The neurons are connected by weighted links that pass signals from one neuron to another. Each neuron receives a number of input signals but produces only a single output signal. The output signal is transmitted through the neuron's outgoing connection (corresponding to the biological axon), which splits into a number of branches that transmit the same signal (the signal is not divided among the branches). The outgoing branches terminate at the incoming connections of other neurons in the network.
Neural networks resemble the human brain in the following two ways:
- A neural network acquires knowledge through learning.
- A neural network’s knowledge is stored within inter-neuron connection strengths known as synaptic weights.
- To other Artificial Intelligence
An artificial neural network is a knowledge-based system, and hence belongs to the family of Artificial Intelligence, fifth-generation computing (a science that has defined its goal as making machines do things that would require intelligence if done by humans, such as the ability to reason, discover meanings, generalize, or learn from past experience). Artificial Intelligence research has yielded results which include decision making, natural-language comprehension, and pattern recognition. Major advances both in microelectronics, notably VLSI, and in programming resulted in an intensification of AI research efforts. Integrated circuits in which millions of CPU, memory, and input-output circuits are combined on a single, tiny chip provide the hardware basis needed to construct intelligent machines, and allow parallel processing (i.e. the simultaneous execution of several operations).
Other members of the family are expert systems and fuzzy logic. Expert systems (ES) use human knowledge and expertise in the form of specific rules, and are distinguished by a clean separation of the knowledge from the reasoning mechanism. Knowledge must be acquired, validated and revised. An expert system is an extension of the traditional computer. It has two parts: an inference engine (shell, user interface, files, etc.) and a knowledge base (containing information that is specific to a particular problem). The expert only needs to know what he wants to do and how the ES works; the ES generates the computer program itself.
Fuzzy logic and fuzzy set theory provide a means to compute with words. They concentrate on capturing the meaning of words in human reasoning and decision making, and provide a way of breaking through the computational burden of traditional expert systems.
Artificial neural networks (sometimes called the sixth generation of computing) process information in a similar way to the human brain. The network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem. Neural networks learn by example; they cannot be programmed to perform a specific task. The examples must be selected carefully, otherwise useful time is wasted or, even worse, the network might function incorrectly. The disadvantage is that because the network finds out how to solve the problem by itself, its operation can be unpredictable.
- To Traditional Computers
Conventional computers use a cognitive (or algorithmic) approach to problem solving: the way the problem is to be solved must be known and stated in small, unambiguous instructions. These instructions are then converted to a high-level language program and then into machine code that the computer can understand.
These machines are totally predictable; if anything goes wrong, it is due to a software or hardware fault.
Neural networks and conventional algorithmic computers are not in competition but complement each other. There are tasks more suited to an algorithmic approach (like arithmetic operations and problems that can be well characterized) and tasks that are more suited to neural networks (like pattern recognition). Moreover, a large number of tasks require systems that use a combination of the two approaches (normally a conventional computer is used to supervise the neural network) in order to perform at maximum efficiency.
|CHARACTERISTIC|TRADITIONAL COMPUTING (INCLUDING EXPERT SYSTEMS)|ARTIFICIAL NEURAL NETWORKS|
|---|---|---|
|Processors|VLSI (traditional processors)|Artificial neural networks; a variety of technologies; hardware development is ongoing|
|Processing approach|Separate|The same|
|Processing style and functions|Sequential; logically (left-brained) via rules, concepts, calculations|Parallel; gestalt (right-brained)|
|Learning method and applications|By rules (didactically); accounting, word processing, math, inventory|By example (Socratically); sensor processing, speech recognition|
|Connections|Externally programmable|Dynamically self-programming|
|Self-learning|Only algorithmic parameters modified|Continuously adaptable|
|Fault tolerance|None without specific processors|Significant, in the very nature of the interconnected neurons|
|Neurobiology in design|None|Moderate|
|Programming|Through a rule base; complicated|Self-programming, but the network must be set up properly|
|Ability to be fast|Requires big processors|Requires multiple custom-built chips|

Table 1: Comparison of Expert Systems and Neural Networks.
Yet, despite the advantages of neural networks over both expert systems and more traditional computing in these specific areas, neural networks are not complete solutions. They learn, and as such, they do require their implementers to meet a number of conditions.
These conditions include:
- A data set that includes the information that can characterize the problem.
- An adequately sized data set with which to both train and test the network.
- An understanding of the basic nature of the problem to be solved, so that basic decisions on creating the network can be made. These decisions include the activation and transfer functions, and the learning methods.
- An understanding of the development tools.
- Adequate processing power (some applications demand real-time processing that exceeds what is available in standard, sequential-processing hardware; the development of such hardware is the key to the future of neural networks).
Once these conditions are met, neural networks offer the opportunity of solving problems in an arena where traditional processors lack both the processing power and a step-by-step methodology.
This new way of computing requires skills beyond traditional computing. It is a natural evolution. Initially, computing was only hardware, and engineers made it work. Then there were software specialists: programmers, systems engineers, database specialists, and designers. Now there are also neural architects. This new professional needs skills different from those of his predecessors. For instance, he will need to know statistics in order to choose and evaluate training and testing situations. This skill of making neural networks work is one that will stress the logical thinking of current software engineers.
The biggest demand neural networks make is that the process is not simply logic. It involves an empirical skill, an intuitive feel for how a network might be created.
Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. The true power and advantage of neural networks lies in their ability to represent both linear and non-linear relationships, and in their ability to learn these relationships directly from the data being modeled. Traditional linear models are simply inadequate when it comes to modeling data that contains non-linear characteristics.
A trained neural network can be used to provide projections given new situations of interest and answer “what if” questions.
Other advantages include:
Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.
Self-organization: An ANN can create its own organization or representation of the information it receives during learning time.
Real Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.
Fault Tolerance via Redundant Information Coding: Partial destruction of a network leads to the corresponding degradation of performance. However, some network capabilities may be retained even with major damage.