LEOPRABU
EBOOK-INTRODUCTION TO MICROPROCESSORS AND MICROCONTROLLERS 





Click the links below to download the material in PDF format for free.

LESSON 1--INTRODUCTION
http://nptel.iitm.ac.in/courses/Webcourse-contents/IIT%20Kharagpur/Power%20Electronics/PDF/L-1%28SSG%29%28PE%29%20%28%28EE%29NPTEL%29.pdf

LESSON 2--FEATURES, OPERATION & CHARACTERISTICS OF SEMICONDUCTORS
http://nptel.iitm.ac.in/courses/Webcourse-contents/IIT%20Kharagpur/Power%20Electronics/PDF/L-2%28DK%29%28PE%29%20%28%28EE%29NPTEL%29.pdf
LESSON 3--BJT
http://nptel.iitm.ac.in/courses/Webcourse-contents/IIT%20Kharagpur/Power%20Electronics/PDF/L-3%28DK%29%28PE%29%20%28%28EE%29NPTEL%29.pdf
LESSON 4 -- THYRISTOR & TRIAC
http://nptel.iitm.ac.in/courses/Webcourse-contents/IIT%20Kharagpur/Power%20Electronics/PDF/L-4%28DK%29%28PE%29%20%28%28EE%29NPTEL%29.pdf
LESSON 5 -- GTO
http://nptel.iitm.ac.in/courses/Webcourse-contents/IIT%20Kharagpur/Power%20Electronics/PDF/L-5%28DK%29%28PE%29%20%28%28EE%29NPTEL%29.pdf
LESSON 6--- MOSFET
http://nptel.iitm.ac.in/courses/Webcourse-contents/IIT%20Kharagpur/Power%20Electronics/PDF/L-6%28DK%29%28PE%29%20%28%28EE%29NPTEL%29.pdf
LESSON 7 --- IGBT
http://nptel.iitm.ac.in/courses/Webcourse-contents/IIT%20Kharagpur/Power%20Electronics/PDF/L-7%28DK%29%28PE%29%20%28%28EE%29NPTEL%29.pdf
LESSON 8-- HARD & SOFT SWITCHING POWER SEMICONDUCTORS
http://nptel.iitm.ac.in/courses/Webcourse-contents/IIT%20Kharagpur/Power%20Electronics/PDF/L-8%28SSG%29%28PE%29%20%20%28%28EE%29NPTEL%29.pdf


LESSON 9---SINGLE PHASE UNCONTROLLED RECTIFIERS
http://nptel.iitm.ac.in/courses/Webcourse-contents/IIT%20Kharagpur/Power%20Electronics/PDF/L-9%28DK%29%28PE%29%20%28%28EE%29NPTEL%29.pdf
LESSON 10--SINGLE PHASE FULLY CONTROLLED RECTIFIER
http://nptel.iitm.ac.in/courses/Webcourse-contents/IIT%20Kharagpur/Power%20Electronics/PDF/L-10%28DK%29%28PE%29%20%28%28EE%29NPTEL%29.pdf
LESSON 11-- SINGLE PHASE HALF CONTROLLED BRIDGE CONVERTER
http://nptel.iitm.ac.in/courses/Webcourse-contents/IIT%20Kharagpur/Power%20Electronics/PDF/L-11%28DK%29%28PE%29%20%28%28EE%29NPTEL%29.pdf
LESSON 12--THREE PHASE UNCONTROLLED RECTIFIER
http://nptel.iitm.ac.in/courses/Webcourse-contents/IIT%20Kharagpur/Power%20Electronics/PDF/L-12%28DK%29%28PE%29%20%28%28EE%29NPTEL%29.pdf
LESSON 13-- OPERATION & ANALYSIS OF 3-PHASE FULLY CONTROLLED BRIDGE RECTIFIER
http://nptel.iitm.ac.in/courses/Webcourse-contents/IIT%20Kharagpur/Power%20Electronics/PDF/L-13%28DK%29%28PE%29%20%28%28EE%29NPTEL%29%20.pdf
LESSON 14--OPERATION & ANALYSIS OF 3-PHASE HALF CONTROLLED CONVERTER
http://nptel.iitm.ac.in/courses/Webcourse-contents/IIT%20Kharagpur/Power%20Electronics/PDF/L-14%28DK%29%28PE%29%20%28%28EE%29NPTEL%29.pdf
LESSON 15-- EFFECT OF SOURCE INDUCTANCE
http://nptel.iitm.ac.in/courses/Webcourse-contents/IIT%20Kharagpur/Power%20Electronics/PDF/L-15%28DK%29%28PE%29%20%28%28EE%29NPTEL%29.pdf 

LESSON 16 -- POWER FACTOR IMPROVEMENT, HARMONIC REDUCTION, FILTER
http://nptel.iitm.ac.in/courses/Webcourse-contents/IIT%20Kharagpur/Power%20Electronics/PDF/L-16%28NKD%29%28PE%29%20%28%28EE%29NPTEL%29%20.pdf

FORMULAS, EQUATIONS & LAWS

 
Symbolic:
E = VOLTS ~or~ (V = VOLTS)
P = WATTS ~or~ (W = WATTS)
R = OHMS ~or~ (R = RESISTANCE)
I = AMPERES ~or~ (A = AMPERES)
HP = HORSEPOWER
PF = POWER FACTOR
kW = KILOWATTS
kWh = KILOWATT-HOUR
VA = VOLT-AMPERES
kVA = KILOVOLT-AMPERES
C = CAPACITANCE
EFF = EFFICIENCY (expressed as a decimal)
 
DIRECT CURRENT
 
AMPS = WATTS ÷ VOLTS (I = P ÷ E ~or~ A = W ÷ V)
WATTS = VOLTS x AMPS (P = E x I ~or~ W = V x A)
VOLTS = WATTS ÷ AMPS (E = P ÷ I ~or~ V = W ÷ A)
HORSEPOWER = (V x A x EFF) ÷ 746
EFFICIENCY = (746 x HP) ÷ (V x A)
 
AC SINGLE PHASE ~ 1ø
 
AMPS = WATTS ÷ (VOLTS x PF) (I = P ÷ (E x PF) ~or~ A = W ÷ (V x PF))
WATTS = VOLTS x AMPS x PF (P = E x I x PF ~or~ W = V x A x PF)
VOLTS = WATTS ÷ AMPS (E = P ÷ I ~or~ V = W ÷ A)
VOLT-AMPS = VOLTS x AMPS (VA = E x I ~or~ VA = V x A)
HORSEPOWER = (V x A x EFF x PF) ÷ 746
POWER FACTOR = INPUT WATTS ÷ (V x A)
EFFICIENCY = (746 x HP) ÷ (V x A x PF)
 
AC THREE PHASE ~ 3ø
 
AMPS = WATTS ÷ (1.732 x VOLTS x PF) (I = P ÷ (1.732 x E x PF))
WATTS = 1.732 x VOLTS x AMPS x PF (P = 1.732 x E x I x PF)
VOLTS = WATTS ÷ AMPS (E = P ÷ I)
VOLT-AMPS = 1.732 x VOLTS x AMPS (VA = 1.732 x E x I)
HORSEPOWER = (1.732 x V x A x EFF x PF) ÷ 746
POWER FACTOR = INPUT WATTS ÷ (1.732 x V x A)
EFFICIENCY = (746 x HP) ÷ (1.732 x V x A x PF)
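
As a quick illustration, the Python sketch below applies the DC, single-phase and three-phase relations above; all input values are made up for the example:

```python
import math

# Illustrative inputs (assumed values, not from the text).
V, A, PF, EFF = 230.0, 10.0, 0.85, 0.90
SQRT3 = math.sqrt(3)                              # the 1.732 factor above

# Direct current
watts_dc = V * A                                  # WATTS = VOLTS x AMPS
hp_dc = (V * A * EFF) / 746                       # HORSEPOWER

# AC single phase
watts_1ph = V * A * PF                            # WATTS = V x A x PF
pf_check = watts_1ph / (V * A)                    # POWER FACTOR = W / VA

# AC three phase (V taken as the line-to-line voltage)
watts_3ph = SQRT3 * V * A * PF                    # WATTS = 1.732 x V x A x PF
va_3ph = SQRT3 * V * A                            # VOLT-AMPS = 1.732 x V x A
hp_3ph = (SQRT3 * V * A * EFF * PF) / 746         # HORSEPOWER

print(round(watts_dc), round(hp_dc, 2), round(watts_1ph), round(pf_check, 2),
      round(watts_3ph), round(va_3ph), round(hp_3ph, 2))
```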
ELECTRIC POWER CURRENT VOLTAGE RELATIONS

[Figure: chart of the relations between electric power, current, voltage and resistance]


P-N Junction Diode

A P-N junction is a semiconductor device formed by joining P-type and N-type semiconductor material. The P-type region has a high concentration of holes and the N-type region has a high concentration of electrons. Holes diffuse from the p-type to the n-type region, and electrons diffuse from the n-type to the p-type region.
From the figure, as free electrons move across the junction from the n-type side, the donor ions left behind become positively charged, so a positive charge builds up on the N-side of the junction. The electrons that cross the junction fill holes on the other side, turning the acceptor atoms into negative ions, so a negative charge is established on the P-side of the junction, as shown in the figure above. An electric field E is formed by the positive ions in the n-type region and the negative ions in the p-type region. Since this field quickly sweeps free carriers out, the region is depleted of free carriers and is called the depletion region. A built-in potential Vbi due to E is formed at the junction, as shown in the figure.
Functional Diagram of P-N Junction Diode:
Forward Characteristics of P-N Junction:
When the positive terminal of a battery is connected to the P-type material and the negative terminal to the N-type material, the P-N junction is said to be forward biased, as shown in the figure below.
Forward Characteristics of P-N Junction
If this external voltage becomes greater than the value of the potential barrier, approximately 0.7 V for silicon and 0.3 V for germanium, the barrier's opposition is overcome and current starts to flow. This is because the negative voltage pushes or repels electrons towards the junction, giving them the energy to cross over and combine with the holes being pushed in the opposite direction towards the junction by the positive voltage. The characteristic curve therefore shows essentially zero current flowing up to this voltage point.
P-N Junction Forward Bias Characteristics
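
The steep knee described above can be sketched numerically with the Shockley diode equation, I = Is(e^(V/(n·Vt)) − 1); the saturation current Is and ideality factor n below are assumed, illustrative values:

```python
import math

Is = 1e-12        # saturation current (A), assumed for illustration
n  = 1.0          # ideality factor, assumed
Vt = 0.02585      # thermal voltage kT/q at about 300 K (V)

# Sweep the forward voltage and print the resulting diode current.
for v in [0.1, 0.3, 0.5, 0.6, 0.7, 0.75]:
    i = Is * (math.exp(v / (n * Vt)) - 1)
    print(f"V = {v:4.2f} V  ->  I = {i:.3e} A")
```

Running it shows the current is negligible on an ampere scale below about 0.6 V and rises steeply past 0.7 V, matching the knee in the silicon curve above.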
Reverse Characteristics of P-N Junction:
When a diode is connected in a reverse bias condition, a positive voltage is applied to the N-type material and a negative voltage is applied to the P-type material.
P-N Junction Reverse Characteristics Circuit
Here the positive voltage applied to the N-type material attracts electrons towards the positive electrode and away from the junction, while the holes in the P-type end are attracted away from the junction towards the negative electrode. The net result is that the depletion layer grows wider due to a lack of electrons and holes, presenting a high impedance path, almost an insulator. A high potential barrier is thus created, preventing current from flowing through the semiconductor material.
P-N Junction Reverse Bias Characteristics
Applications of P-N Junction Diode:
The P-N junction diode is a two-terminal, polarity-sensitive device: it conducts when forward biased and does not conduct when reverse biased. These characteristics give the diode various applications:
  1. Rectifiers in DC power supplies
  2. Demodulation circuits
  3. Clipping and clamping networks
Cellular Communication may damage your health !!!

Introduction

Could it really be? Are all those upwardly mobile people who walk around with mobile phones stuck to their ears destined for an early grave due to cancer or brain damage? There have been many papers and journals written on this very topic, studying the biological effects of the electromagnetic radiation emitted by cellular hand-held transceivers. Specifically, these papers have investigated the effect of electromagnetic radiation on the areas of the human body which are exposed to the cellular phone when it is being used, i.e. the head, ear, eye and hand.
This article will describe the biological effects that Electromagnetic (EM) radiation (Electromagnetic spectrum shown below) may have. Ways of measuring radiation emission levels to the head will also be described including a way of modeling the human head for this purpose. This information can then be used to assess the real danger posed by the future usage of cellular communication for digital data and voice transmission.




The Electromagnetic Spectrum



General Biological Effects of Radiation

When ionizing radiation interacts with cells in the human body, the following events may occur:
  1. Cells may repair themselves with no abnormal side effects.
  2. Cells may die and be replaced by new cells created through the body's normal regenerative processes or
  3. Cells may change in terms of their reproductive structure causing deformities in future cell reproduction.
The effects of radiation exposure can appear either soon after the exposure or after a long delay (up to 15-20 years in some cases).



Biological effects of Electromagnetic Radiation

Exposure to electromagnetic fields results in the induction of electric fields in the biological tissues of the body. One of the most commonly visible effects of electric fields is the movement of body hair when a charged object is passed across it; a simple example is a charged comb passed over body hair.
The electric fields that are induced may cause changes in psychological responses, such as changes in complex reasoning and arousal. These effects may occur at the levels of EM radiation present in the proximity of overhead power lines, which are a very common sight all over the UK. It has been shown experimentally that other physiological, psychological and behavioural effects do occur under certain conditions, i.e. at certain magnitudes of electric and magnetic fields. These include the following:

  • Resting heart rate has been shown to be reduced by a few beats per minute, along with changes in the pattern of electrical impulses produced by the brain (the electroencephalogram, or EEG). None of these changes is regarded as dangerous, because they all occur within the normal ranges of fluctuation of these values.

    An EEG Trace
  • The biological rhythm of the human body (metabolic rate, for example) may be influenced by very weak electric fields. This result has only been shown in humans exposed to an electric field while living in an underground apartment without access to natural light and dark intervals. Experiments have shown that, in animals, extreme exposure to electric fields does affect various aspects of biological rhythm, including the process by which the hormone melatonin is produced. This hormone affects pigmentation of the skin, and changes in its level have been associated with cancer, so this is a very significant result. It isn't clear that this gives conclusive evidence of the danger of EM radiation when applied to cellular transceivers, because using a cellular transceiver would not give such a chronic level of exposure. Also, very recent studies have not always observed these effects, casting doubt on the original conclusion.
  • In some cases of acute field exposure in humans, headaches were observed, as well as alterations in visually evoked responses. This is significant because these kinds of symptoms would not necessarily be attributed to the cellular transceiver, but rather to other factors, e.g. stress or overexposure to the sun.
One of the more publicised effects of EM radiation has been the effect on reproduction in humans. This publicity began when experiments on chick embryos showed that exposure to weak magnetic fields caused deformity in the resulting newborn chicks. Although the results of this experiment seem ambiguous, and the experiments were carried out on chicks rather than mammals, which would have been more relevant to humans, the effects were publicised due to the controversial nature of the implications. It is conceivable, though, that with intense exposure to fields, human reproduction may experience some behavioural deficiencies as well as some retardation in development. But this occurs only at very intense levels of EM radiation, far in excess of any fields emitted by cellular transceivers or by other everyday machinery, e.g. microwave ovens.
Another widely publicised effect is the inducement of cancers. As explained above, depression of melatonin production has been linked to cancer, but only at excessive levels of EM fields. There have been no conclusive results proving any relationship between exposure to EM fields and cancer.




Measurement of Electromagnetic Radiation effects

If the future of data communications becomes increasingly mobile oriented, using cellular technology to a far greater extent than at present, then users of the system will utilise not only the data communication facilities but also the voice communication facilities that are so common nowadays. This implies that exposure to EM radiation will be more intense in the vicinity of the head. It is therefore very useful to model the human head and measure the amount of radiation that would be absorbed by it when a cellular transceiver is used for voice communication (i.e. as a mobile phone).



Modelling the human head

The human head can be modelled using the "phantom" model. This is a fibreglass shell filled with a liquid having the same dielectric properties as human brain tissue. The magnitude of the EM fields can then be measured at specific points and depths in the modelled skull when a cellular transceiver is placed in close proximity; in these experiments a cellular phone was held in the regular position, by the ear.
There are problems with this model. The skin surrounding the skull, the facial muscles, the eye chamber and the ears are not easily represented by the phantom model. To simulate them, extra simulated brain and muscle tissue must be used with the fibreglass model. Fatty tissue is approximated using the same liquid as for brain tissue. This results in a higher measured value of radiation absorbed by the brain than the actual value, which is acceptable when safety levels of radiation emission are being calculated, since the worst-case level will be measured. The measurements are shown for a single-unit cellular phone as an example, and are given as the Specific Absorption Rate (SAR), explained below.


SAR values in the proximity of the head.



Quantification of Radiation absorption.

A widely adopted measure of radiation absorption is the Specific Absorption Rate (SAR). This is defined as the time rate at which energy W is absorbed per unit mass m of tissue, i.e. the time derivative of dW/dm, and it is measured in watts per kilogram (W/kg).
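
In dosimetry the SAR at a point is commonly evaluated from the induced field as SAR = σ|E|²/ρ, where σ is the tissue conductivity and ρ its mass density (a standard relation, not stated in the text). A minimal Python sketch with illustrative, assumed tissue values:

```python
# Illustrative tissue parameters (assumed values, not from the text).
sigma = 1.0      # conductivity of brain-like tissue, S/m
rho   = 1040.0   # mass density, kg/m^3
E_rms = 40.0     # RMS electric field induced in the tissue, V/m

sar = sigma * E_rms ** 2 / rho     # W/kg
print(f"SAR = {sar:.2f} W/kg")     # -> SAR = 1.54 W/kg for these values
```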
Using the example of two types of cellular transceivers, a one-unit phone and a flip phone, the diagrams below show the SAR distribution across the simulated tissue of a phantom model when the phone is in use. They show that the exposure of the face is spread well across the area of the phone, with SAR peaks at only a few points. This is advantageous, as it implies that there are no major concentrations of field intensity on the head. As explained above, intense concentrations of EM fields may have dangerous implications.


SAR Distributions for two types of Cellular Transceivers.
The experiments have shown that the area of the head exposed to the largest SAR values is the area shaded in the diagram below. This area faces the phone display and touch-tone pad on the phone unit. The National Council on Radiation Protection and Measurements (NCRP) uses SAR values to specify the maximum safe limits of radiation allowed to be emitted from cellular transceivers.


Area of highest EM exposure on a human head.




Summary

There have been many studies of the biological effects of EM radiation, many of them inconclusive. Using the research available, the following observations can be made:
  • Electrical fields of intensity similar to that found in the proximity of overhead power lines, may cause psychological responses such as changes in complex reasoning and arousal of humans.
  • In the very specific conditions explained earlier, in the presence of weak radiation, the human body's biological rhythm may be altered; in animals it has been shown that intense exposure to radiation can slow the production of the hormone melatonin. Changes in the level of this hormone have been linked to cancer.
  • Acute radiation exposure in human subjects has been shown to cause headaches and alterations in visually evoked responses.
  • In the presence of intense radiation, human reproduction may experience behavioural deficiencies as well as retardation in development.
Due to the effects that EM radiation can cause, it is useful to be able to measure and limit the amount of radiation emitted by cellular transceivers used to transmit data or voice signals. This requires a model of the human head, which is achieved using the phantom model, and radiation absorption is measured using the SAR. In conclusion, the effects of EM radiation are only a threat if the dose of radiation is very high; in the case of cellular transceivers it is not. Most of the latest studies on the subject have returned inconclusive results, so it is impossible to give a definite answer to the question of whether cellular communications are bad for your health.
Further research is continually being carried out, with phone companies eager to invest in research so as to ease their consumers' fears. Many companies are taking advantage of the public's fear by selling products such as shielding cases, which enclose the cellular transceiver and shield the user from the majority of the EM radiation emitted by the phone. These tend to decrease the reception capability of the phone unit.




References


  • "Current and SAR induced in a human head model by the electromagnetic fields irradiated from a cellular phone." Journal Paper. Hsing-Yi Chen; Hou-Hwa Wang. IEEE Transactions on Microwave theory and Techniques Dec 1994. Vol: 42 Iss: 12 pt.1 pp2249-54.
  • "Modeling of hand-held recieving antennas in the presence of a human head" Conference Paper. Thiry X; Mittra R. IEEE Antennas and Propagation Society International Symposium 1995. pp 1116-1119, vol 2.
  • "Health Effects of exposure to electromagnetic fields" Conference paper. Stuchly, M.A. 1995 IEEE Aerospace Applications Conference. pp 351-68, vol 1.
  • "Modeling biological effects from magnetic fields". Journal paper. Blanchard J.P. IEEE Aerospace and Electronics Systems Magazine Feb 1996. Vol:11 Iss: 2 p 6-10.
  • "Magnetic Fields and cancer". Journal Paper. Carstensen E.L. IEEE Engineering in Medicine and Biology magazine July-Aug 1995. Vol: 14 Iss: 4 p 362-9.
  • "Three dimensional modelling of EM fields in the human head". Conference paper. Huang Y; Dilworth I.J. 9th IEE Internatinal Conference on Antennas and Propagation 1995. p 223-6 vol 1.
  • "Electromagnetic Energy exposure of Simulated Users of Portable Cellular Telephones". Journal Paper. Balzano Q; Garay O; Manning T,J. IEEE Transactions on Vehicular Technology Aug 1995. Vol: 44 Iss:3 p 390-403.
  • Interview with Mr.George Stavrides, 2nd Year Biology Student, Imperial College 30.05.1996.

Introduction

Genetic Algorithms (GAs) are adaptive heuristic search algorithms premised on the evolutionary ideas of natural selection and genetics. The basic concept of GAs is to simulate the processes in natural systems necessary for evolution, specifically those that follow the principle first laid down by Charles Darwin of survival of the fittest. As such, they represent an intelligent exploitation of a random search within a defined search space to solve a problem.
First pioneered by John Holland in the 1960s, Genetic Algorithms have been widely studied, experimented with and applied in many engineering fields. Not only do GAs provide alternative methods for solving problems, they consistently outperform traditional methods on many problem classes. Many real-world problems involve finding optimal parameters, which might prove difficult for traditional methods but is ideal for GAs. However, because of their outstanding performance in optimisation, GAs have come to be wrongly regarded as mere function optimisers. In fact, there are many ways to view genetic algorithms. Perhaps most users come to GAs looking for a problem solver, but this is a restrictive view [De Jong, 1993].
Herein, we will examine GAs as a number of different things:
GAs as problem solvers
GAs as challenging technical puzzle
GAs as basis for competent machine learning
GAs as computational model of innovation and creativity
GAs as computational model of other innovating systems
GAs as guiding philosophy
However, due to various constraints, we will only be looking at GAs as problem solvers and as the basis for competent machine learning here. We will also examine how GAs are applied to completely different fields.
Many scientists have tried to create living programs. These programs do not merely simulate life but try to exhibit the behaviours and characteristics of real organisms in an attempt to exist as a form of life. It has been suggested that artificial life (Alife) will eventually evolve into real life. Such a suggestion may sound absurd at the moment, but it is certainly not implausible if technology continues to progress at present rates. It is therefore worth, in our opinion, taking a paragraph to discuss how Alife is connected with GAs and to see whether such a prediction is far-fetched and groundless.

Brief Overview

GAs were introduced as a computational analogy of adaptive systems. They are modelled loosely on the principles of the evolution via natural selection, employing a population of individuals that undergo selection in the presence of variation-inducing operators such as mutation and recombination (crossover). A fitness function is used to evaluate individuals, and reproductive success varies with fitness.
The Algorithm
  1. Randomly generate an initial population M(0)
  2. Compute and save the fitness u(m) for each individual m in the current population M(t)
  3. Define selection probabilities p(m) for each individual m in M(t) so that p(m) is proportional to u(m)
  4. Generate M(t+1) by probabilistically selecting individuals from M(t) to produce offspring via genetic operators
  5. Repeat from step 2 until a satisfactory solution is obtained.
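
As an illustration of steps 1-5, here is a minimal Python sketch, assuming a bit-string chromosome and a toy count-the-ones fitness function (both our own choices, not from the text):

```python
import random

CHROM_LEN, POP_SIZE, GENERATIONS = 20, 30, 50
P_CROSS, P_MUT = 0.7, 0.01

def fitness(m):                                   # step 2: u(m)
    return sum(m)

def crossover(a, b):                              # genetic operator: recombination
    if random.random() < P_CROSS:
        cut = random.randrange(1, CHROM_LEN)
        return a[:cut] + b[cut:], b[:cut] + a[cut:]
    return a[:], b[:]

def mutate(m):                                    # genetic operator: mutation
    return [bit ^ 1 if random.random() < P_MUT else bit for bit in m]

# Step 1: randomly generate an initial population M(0).
population = [[random.randint(0, 1) for _ in range(CHROM_LEN)]
              for _ in range(POP_SIZE)]

for t in range(GENERATIONS):                      # step 5: repeat until satisfied
    fits = [fitness(m) for m in population]
    if max(fits) == CHROM_LEN:                    # a satisfactory solution found
        break
    next_gen = []
    while len(next_gen) < POP_SIZE:               # step 4: generate M(t+1)
        # Step 3: selection probability proportional to fitness (roulette wheel);
        # the "or 1e-9" guards against a pathological all-zero generation.
        a, b = random.choices(population, weights=[f or 1e-9 for f in fits], k=2)
        for child in crossover(a, b):
            next_gen.append(mutate(child))
    population = next_gen[:POP_SIZE]

print(max(population, key=fitness))
```

Roulette-wheel selection is only one way to realise step 3; rank or tournament selection would drop in with a one-line change.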
The paradigm of GAs described above is usually the one applied to solving most of the problems presented to GAs. Though it might not find the best solution, more often than not it will come up with a good, partially optimal solution.

Who can benefit from GA?

Nearly everyone can benefit from Genetic Algorithms, once they can encode solutions of a given problem as chromosomes and compare the relative performance (fitness) of the solutions. An effective GA representation and a meaningful fitness evaluation are the keys to success in GA applications. The appeal of GAs comes from their simplicity and elegance as robust search algorithms, as well as from their power to discover good solutions rapidly for difficult high-dimensional problems. GAs are useful and efficient when:
The search space is large, complex or poorly understood.
Domain knowledge is scarce or expert knowledge is difficult to encode to narrow the search space.
No mathematical analysis is available.
Traditional search methods fail.
The advantage of the GA approach is the ease with which it can handle arbitrary kinds of constraints and objectives; all such things can be handled as weighted components of the fitness function, making it easy to adapt the GA scheduler to the particular requirements of a very wide range of possible overall objectives.
GAs have been used for problem-solving and for modelling. GAs are applied to many scientific, engineering problems, in business and entertainment, including:
Optimization: GAs have been used in a wide variety of optimisation tasks, including numerical optimisation, and combinatorial optimisation problems such as the travelling salesman problem (TSP), circuit design [Louis 1993], job shop scheduling [Goldstein 1991] and video & sound quality optimisation.
Automatic Programming: GAs have been used to evolve computer programs for specific tasks, and to design other computational structures, for example, cellular automata and sorting networks.
Machine and robot learning: GAs have been used for many machine-learning applications, including classification and prediction, and protein structure prediction. GAs have also been used to design neural networks, to evolve rules for learning classifier systems or symbolic production systems, and to design and control robots.
Economic models: GAs have been used to model processes of innovation, the development of bidding strategies, and the emergence of economic markets.
Immune system models: GAs have been used to model various aspects of the natural immune system, including somatic mutation during an individual's lifetime and the discovery of multi-gene families during evolutionary time.
Ecological models: GAs have been used to model ecological phenomena such as biological arms races, host-parasite co-evolutions, symbiosis and resource flow in ecologies.
Population genetics models: GAs have been used to study questions in population genetics, such as "under what conditions will a gene for recombination be evolutionarily viable?"
Interactions between evolution and learning: GAs have been used to study how individual learning and species evolution affect one another.
Models of social systems: GAs have been used to study evolutionary aspects of social systems, such as the evolution of cooperation [Chughtai 1995], the evolution of communication, and trail-following behaviour in ants.

Applications of Genetic Algorithms


GA on optimisation and planning: Travelling Salesman Problem

The TSP is interesting not only from a theoretical point of view: many practical applications can be modelled as a travelling salesman problem or as variants of it, for example, pen movement of a plotter, drilling of printed circuit boards (PCBs), and real-world routing of school buses, airlines, delivery trucks and postal carriers. Researchers have used TSPs to study biomolecular pathways, to route parallel processing in computer networks, to advance cryptography, to determine the order of thousands of exposures needed in X-ray crystallography, and to determine routes when searching for forest fires (a multiple-salesman problem partitioned into single TSPs). There is therefore a tremendous need for efficient algorithms.
In the last two decades enormous progress has been made with respect to solving travelling salesman problems to optimality, which, of course, is the ultimate goal of every researcher. One of the landmarks in the search for optimal solutions is a 3038-city problem. This progress is only partly due to the increasing hardware power of computers; above all, it was made possible by the development of mathematical theory and of efficient algorithms. Here, the GA approach is discussed.
There are strong relations between the constraints of the problem, the representation adopted and the genetic operators that can be used with it. The goal of the Travelling Salesman Problem is to devise a travel plan (a tour) which minimises the total distance travelled. The TSP is NP-hard (NP stands for non-deterministic polynomial time): it is generally believed that it cannot be solved exactly in polynomial time. The TSP is constrained:
The salesman can only be in one city at any given time.
Each city has to be visited once and only once.
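
To show how the representation and operators can respect these constraints, here is a minimal Python sketch assuming random city coordinates, a permutation encoding, order crossover and swap mutation (all illustrative choices, not a prescribed method):

```python
import math, random

# Hypothetical city coordinates; a real instance would come from problem data.
CITIES = [(random.random(), random.random()) for _ in range(12)]

def tour_length(tour):
    # Total closed-tour distance; shorter tours are fitter.
    return sum(math.dist(CITIES[tour[i]], CITIES[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def order_crossover(p1, p2):
    # OX: copy a slice from p1, fill the rest in p2's order -- keeps each
    # city visited once and only once, satisfying the TSP constraints.
    a, b = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[a:b])
    filler = [c for c in p2 if c not in hole]
    return filler[:a] + p1[a:b] + filler[a:]

def swap_mutation(tour, p=0.1):
    # Swapping two cities preserves the permutation property.
    t = tour[:]
    if random.random() < p:
        i, j = random.sample(range(len(t)), 2)
        t[i], t[j] = t[j], t[i]
    return t

population = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(50)]
for _ in range(200):
    population.sort(key=tour_length)
    parents = population[:20]                     # truncation selection
    population = parents + [
        swap_mutation(order_crossover(*random.sample(parents, 2)))
        for _ in range(30)
    ]
print(min(population, key=tour_length), round(min(map(tour_length, population)), 3))
```

Order crossover and swap mutation are used here because ordinary bit-string operators would produce tours that visit some cities twice and others not at all.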
When GAs are applied to very large problems, they fail in two aspects:
  1. They scale rather poorly (in terms of time complexity) as the number of cities increases.
  2. The solution quality degrades rapidly.
What is a Neural Network?

First of all, when we are talking about a neural network, we should more properly say "artificial neural network" (ANN), because that is what we mean most of the time. Biological neural networks are much more complicated than the mathematical models we use for ANNs. But it is customary to be lazy and drop the "A" or the "artificial".
An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well.

Some Other Definitions of a Neural Network include:
According to the DARPA Neural Network Study (1988, AFCEA International Press, p. 60):
... a neural network is a system composed of many simple processing elements operating in parallel whose function is determined by network structure, connection strengths, and the processing performed at computing elements or nodes.
According to Haykin, S. (1994), Neural Networks: A Comprehensive Foundation, NY: Macmillan, p. 2:
A neural network is a massively parallel distributed processor that has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects:
1. Knowledge is acquired by the network through a learning process.
2. Interneuron connection strengths known as synaptic weights are used to store the knowledge.
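
To make "learning by adjusting synaptic weights" concrete, here is a minimal Python sketch of a single artificial neuron trained with the classic perceptron rule; the logical-AND task and all parameters are our own illustrative choices:

```python
import random

# Toy training set (our own choice): logical AND on two inputs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
bias = random.uniform(-0.5, 0.5)
rate = 0.1                                    # learning rate

def predict(x):
    # A simple binary-threshold neuron, as in the early models.
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

# The "learning process": nudge the synaptic weights after each error.
for epoch in range(100):
    for x, target in data:
        error = target - predict(x)
        weights = [w + rate * error * xi for w, xi in zip(weights, x)]
        bias += rate * error

print([predict(x) for x, _ in data])          # expected output: [0, 0, 0, 1]
```

Training ends with the weights encoding the AND function: knowledge acquired through a learning process and stored in the connection strengths, exactly as in the definitions above.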

ANNs have been applied to an increasing number of real-world problems of considerable complexity. Their most important advantage is in solving problems that are too complex for conventional technologies -- problems that do not have an algorithmic solution, or for which an algorithmic solution is too complex to be found. In general, because of their abstraction from the biological brain, ANNs are well suited to problems that people are good at solving but computers are not. These problems include pattern recognition and forecasting (which requires the recognition of trends in data).

Why use a neural network?
Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyze. This expert can then be used to provide projections given new situations of interest and answer "what if" questions.
Other advantages include:

  1. Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.
  2. Self-Organisation: An ANN can create its own organisation or representation of the information it receives during learning time.
  3. Real Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.
  4. Fault Tolerance via Redundant Information Coding: Partial destruction of a network leads to a corresponding degradation of performance. However, some network capabilities may be retained even with major network damage.

Neural Networks in Practice
Given this description of neural networks and how they work, what real world applications are they suited for? Neural networks have broad applicability to real world business problems. In fact, they have already been successfully applied in many industries.
Since neural networks are best at identifying patterns or trends in data, they are well suited for prediction or forecasting needs including:

  • sales forecasting
  • industrial process control
  • customer research
  • data validation
  • risk management
  • target marketing
But to give some more specific examples, ANNs are also used in the following specific paradigms: recognition of speakers in communications; diagnosis of hepatitis; recovery of telecommunications from faulty software; interpretation of multi-meaning Chinese words; undersea mine detection; texture analysis; three-dimensional object recognition; handwritten word recognition; and facial recognition.

Historical Background of Neural Networks
Neural network simulations appear to be a recent development. However, this field was established before the advent of computers, and has survived at least one major setback and several eras.
Many important advances have been boosted by the use of inexpensive computer emulations. Following an initial period of enthusiasm, the field survived a period of frustration and disrepute. During this period, when funding and professional support were minimal, important advances were made by relatively few researchers. These pioneers were able to develop convincing technology which surpassed the limitations identified by Minsky and Papert, who had published a book (in 1969) summing up a general feeling of frustration against neural networks among researchers, a view that was accepted by most without further analysis. Currently, the neural network field enjoys a resurgence of interest and a corresponding increase in funding.

The history of neural networks that was described above can be divided into several periods:

  1. First Attempts: There were some initial simulations using formal logic. McCulloch and Pitts (1943) developed models of neural networks based on their understanding of neurology. These models made several assumptions about how neurons worked; their networks were based on simple neurons which were considered to be binary devices with fixed thresholds. The results of their model were simple logic functions such as "a or b" and "a and b". Another attempt used computer simulations, by two groups (Farley and Clark, 1954; Rochester, Holland, Haibt and Duda, 1956). The first group (IBM researchers) maintained close contact with neuroscientists at McGill University, so whenever their models did not work, they consulted the neuroscientists. This interaction established a multidisciplinary trend which continues to the present day.
  2. Promising & Emerging Technology: Not only was neuroscience influential in the development of neural networks, but psychologists and engineers also contributed to the progress of neural network simulations. Rosenblatt (1958) stirred considerable interest and activity in the field when he designed and developed the Perceptron. The Perceptron had three layers, with the middle layer known as the association layer. This system could learn to connect or associate a given input to a random output unit.
    Another system was the ADALINE (ADAptive LInear Element), developed in 1960 by Widrow and Hoff (of Stanford University). The ADALINE was an analogue electronic device made from simple components. The method used for learning was different to that of the Perceptron: it employed the Least-Mean-Squares (LMS) learning rule.
  3. Period of Frustration & Disrepute: In 1969 Minsky and Papert wrote a book in which they generalised the limitations of single-layer Perceptrons to multilayered systems. In the book they said: "...our intuitive judgment that the extension (to multilayer systems) is sterile". The significant result of their book was the elimination of funding for research with neural network simulations. The conclusions supported the disenchantment of researchers in the field, and as a result, considerable prejudice against the field developed.
  4. Innovation: Although public interest and available funding were minimal, several researchers continued working to develop neuromorphically based computational methods for problems such as pattern recognition.
    During this period several paradigms were generated which modern work continues to enhance. Grossberg's influence (Steve Grossberg, with Gail Carpenter, in 1988) founded a school of thought which explores resonating algorithms; they developed the ART (Adaptive Resonance Theory) networks based on biologically plausible models. Anderson and Kohonen developed associative techniques independently of each other. Klopf (A. Harry Klopf), in 1972, developed a basis for learning in artificial neurons based on a biological principle for neuronal learning called heterostasis.
    Werbos (Paul Werbos, 1974) developed and used the back-propagation learning method, though several years passed before this approach was popularised. Back-propagation nets are probably the best known and most widely applied of today's neural networks. In essence, the back-propagation net is a Perceptron with multiple layers, a different threshold function in the artificial neuron, and a more robust and capable learning rule.
    Amari (Shun-Ichi Amari, 1967) was involved with theoretical developments: he published a paper which established a mathematical theory for a learning basis (error-correction method) dealing with adaptive pattern classification. Fukushima (Kunihiko Fukushima) developed a stepwise-trained multilayered neural network for the interpretation of handwritten characters; the original network was published in 1975 and was called the Cognitron.
  5. Re-Emergence: Progress during the late 1970s and early 1980s was important to the re-emergence of interest in the neural network field. Several factors influenced this movement. For example, comprehensive books and conferences provided a forum for people in diverse fields with specialised technical languages, and the response to conferences and publications was quite positive. The news media picked up on the increased activity, and tutorials helped disseminate the technology. Academic programs appeared and courses were introduced at most major universities (in the US and Europe). Attention is now focused on funding levels throughout Europe, Japan and the US, and as this funding becomes available, several new commercial ventures with applications in industry and financial institutions are emerging.
  6. Today: Significant progress has been made in the field of neural networks: enough to attract a great deal of attention and fund further research. Advancement beyond current commercial applications appears to be possible, and research is advancing the field on many fronts. Neurally based chips are emerging and applications to complex problems are developing. Clearly, today is a period of transition for neural network technology.

Are there any limits to Neural Networks?
The major issues of concern today are the scalability problem, testing, verification, and integration of neural network systems into the modern environment. Neural network programs sometimes become unstable when applied to larger problems. The defence, nuclear and space industries are concerned about the issue of testing and verification. The mathematical theories used to guarantee the performance of an applied neural network are still under development. The solution for the time being may be to train and test these intelligent systems much as we do for humans. Also there are some more practical problems like:
  • the operational problem encountered when attempting to simulate the parallelism of neural networks: since the majority of neural networks are simulated on sequential machines, processing time requirements rise very rapidly as the size of the problem expands.
    Solution: implement neural networks directly in hardware, but such hardware still needs a lot of development.
  • the inability to explain any results that they obtain: networks function as "black boxes" whose rules of operation are completely unknown.

The Future
Because gazing into the future is somewhat like gazing into a crystal ball, it is better to quote some "predictions". Each prediction rests on some sort of evidence or established trend which, with extrapolation, clearly takes us into a new realm.
Prediction 1:
Neural networks will facilitate user-specific systems for education, information processing, and entertainment. "Alternative realities", produced by comprehensive environments, are attractive in terms of their potential for systems control, education, and entertainment. This is not just a far-out research trend, but something which is becoming an increasing part of our daily existence, as witnessed by the growing interest in comprehensive "entertainment centres" in each home.
This "programming" would require feedback from the user in order to be effective but simple and "passive" sensors (e.g fingertip sensors, gloves, or wristbands to sense pulse, blood pressure, skin ionisation, and so on), could provide effective feedback into a neural control system. This could be achieved, for example, with sensors that would detect pulse, blood pressure, skin ionisation, and other variables which the system could learn to correlate with a person's response state.
Prediction 2:
Neural networks, integrated with other artificial intelligence technologies, methods for direct culture of nervous tissue, and other exotic technologies such as genetic engineering, will allow us to develop radical and exotic life-forms whether man, machine, or hybrid.
Prediction 3:
Neural networks will allow us to explore new realms of human capability, realms previously available only with extensive training and personal discipline. A specific state of consciously induced, neurophysiologically observable awareness would be necessary in order to facilitate such a man-machine system interface.

References:
  • Klimasauskas, C.C. (1989). The 1989 Neuro-Computing Bibliography.
  • Hammerstrom, D. (1986). A Connectionist/Neural Network Bibliography.
  • DARPA Neural Network Study (October 1987 - February 1989). MIT Lincoln Lab.
  • Neural Networks, Eric Davalo and Patrick Naim.
  • Prof. Aleksander, articles and books (from Imperial College).
  • WWW pages throughout the internet.
  • Asimov, I. (1984, 1950), I, Robot, Ballantine, New York.
  • Current news from multimedia services (TV).

Fuzzy Set Operations

Definitions.

Universe of Discourse
The Universe of Discourse is the range of all possible values for an input to a fuzzy system.

Fuzzy Set
A Fuzzy Set is any set that allows its members to have different grades of membership (membership function) in the interval [0,1].

Support
The Support of a fuzzy set F is the crisp set of all points in the Universe of Discourse U such that the membership function of F is non-zero.

Crossover Point
The Crossover point of a fuzzy set is the element in U at which its membership function is 0.5.

Fuzzy Singleton
A Fuzzy Singleton is a fuzzy set whose support is a single point in U with a membership function of one.

Fuzzy Set Operations.

Union

The membership function of the Union of two fuzzy sets A and B with membership functions $\mu_A$ and $\mu_B$ respectively is defined as the maximum of the two individual membership functions:

$\mu_{A \cup B}(x) = \max(\mu_A(x), \mu_B(x))$

The Union operation in Fuzzy set theory is the equivalent of the OR operation in Boolean algebra.

Complement

The membership function of the Complement of a fuzzy set A with membership function $\mu_A$ is defined as:

$\mu_{\bar{A}}(x) = 1 - \mu_A(x)$

The following rules which are common in classical set theory also apply to Fuzzy set theory.

De Morgan's Laws

$\overline{A \cup B} = \bar{A} \cap \bar{B}$ ,  $\overline{A \cap B} = \bar{A} \cup \bar{B}$

Associativity

$(A \cup B) \cup C = A \cup (B \cup C)$ ,  $(A \cap B) \cap C = A \cap (B \cap C)$

Commutativity

$A \cup B = B \cup A$ ,  $A \cap B = B \cap A$

Distributivity

$A \cup (B \cap C) = (A \cup B) \cap (A \cup C)$ ,  $A \cap (B \cup C) = (A \cap B) \cup (A \cap C)$
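
As a quick illustration of these operations, here is a minimal Python sketch over a small discrete universe of discourse; all membership grades are made-up values:

```python
# A small discrete universe of discourse U with illustrative membership grades.
U = range(5)
mu_A = {0: 0.0, 1: 0.3, 2: 0.5, 3: 1.0, 4: 0.7}
mu_B = {0: 0.2, 1: 0.6, 2: 0.4, 3: 0.0, 4: 0.9}

union        = {x: max(mu_A[x], mu_B[x]) for x in U}   # fuzzy OR
intersection = {x: min(mu_A[x], mu_B[x]) for x in U}   # fuzzy AND
complement_A = {x: 1 - mu_A[x] for x in U}             # fuzzy NOT

support_A   = [x for x in U if mu_A[x] > 0]            # Support of A
crossover_A = [x for x in U if mu_A[x] == 0.5]         # Crossover point(s) of A

# De Morgan check: NOT(A OR B) equals NOT(A) AND NOT(B) at every point of U.
assert all(abs((1 - union[x]) - min(1 - mu_A[x], 1 - mu_B[x])) < 1e-9 for x in U)

print(union, intersection, complement_A, support_A, crossover_A, sep="\n")
```

The assertion verifies De Morgan's law pointwise on U under the max/min/1-minus definitions given above.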


REFERENCES
[1] Daniel McNeill and Paul Freiberger, "Fuzzy Logic".
[2] http://www.ortech-engr.com/fuzzy/reservoir.html
[3] http://www.quadralay.com/www/Fuzzy/FAQ/FAQ00.html
[4] http://www.fll.uni.linz.ac.af/pdhome.html
[5] http://soft.amcac.ac.jp/index-e.html
[6] http://www.abo.fi/~rfuller/nfs.html
[7] L.A. Zadeh, "Making computers think like people", IEEE Spectrum, 8/1984, pp. 26-32.
[8] S. Haack, "Do we need fuzzy logic?", Int. Jrnl. of Man-Mach. Stud., vol. 11, 1979, pp. 437-445.