
Artificial intelligence in production problems

Anonymous

This document analyzes artificial intelligence and its main paradigms, the most relevant being neural networks, genetic algorithms, fuzzy logic systems and programmable automata, together with their applications in everyday life and, more specifically, in solving problems related to industrial engineering.

Production today can be strongly supported by new technologies such as artificial intelligence, either as an aid to more effective decision-making or in tasks that demand a great deal of time or represent a high degree of danger to humans.

Keywords: Artificial intelligence, neural networks, genetic algorithms, fuzzy logic systems, production.


INTRODUCTION

Artificial intelligence is an area of research in which algorithms are developed to control processes and make decisions; in 1956 the foundations were laid for it to function as an independent field within computing.

Many studies and applications have grown out of this science, among them: neural networks applied to quality control, where the network evaluates whether or not a certain product meets the demanded specifications, or controls the degree of acidity in a chemical process; genetic algorithms applied to the quadratic assignment problem of allocating N jobs to M machines; and programmable automata used for the optimization of production systems. In short, there is still much to discover regarding the applications of this science.

HISTORY OF ARTIFICIAL INTELLIGENCE

The origins of artificial intelligence can be traced to the definition of the formal neuron given by McCulloch & Pitts in 1943, as a binary device with several inputs and outputs.

In 1956 the subject of artificial intelligence (AI) was formally taken up by John McCarthy at the Dartmouth conference, held at Dartmouth College in Hanover, New Hampshire (United States). At this meeting, McCarthy, Marvin Minsky, Nathaniel Rochester and Claude E. Shannon established the foundations of artificial intelligence as an independent field within computing. Previously, in 1950, Alan M. Turing had published an article in the journal Mind, entitled "Computing Machinery and Intelligence", in which he reflected on the concept of artificial intelligence and established what would later be known as the Turing test, a test that makes it possible to determine whether a computer behaves in a way that can be considered artificially intelligent.

Artificial intelligence as such did not have many successes in the sixties, since it required too much investment for the time and most of the technology was confined to large research centers. During the 1970s and 1980s significant advances were made in one of its branches, Expert Systems, with the introduction of PROLOG and LISP. Basically, what artificial intelligence intends is to create a programmed sequential machine that indefinitely repeats a set of instructions generated by a human being.

At present, much research continues in large educational and private technology laboratories, with notable advances in computer vision systems (applied, for example, to classifying jumbled items such as screws or pieces marked with color codes), autonomous robotic control (Sony, with its robots capable of moving in an almost human way and reacting to pressure just as a person does when walking), fuzzy logic applications (automatic tracking in video recorders, to name one), etc. However, artificial intelligence is still largely limited by its technological complexity, and little of it has reached the final consumer market or industry.

DEFINITIONS OF ARTIFICIAL INTELLIGENCE

With respect to current definitions of artificial intelligence, authors such as Rich & Knight and Stuart Russell define AI, in general terms, as the ability of machines to perform tasks currently performed by human beings; other authors, such as Nebendah and Delgado, provide more complete definitions, defining it as the field of study that focuses on the explanation and emulation of intelligent behavior through computational processes based on experience and continuous knowledge of the environment.

Other authors, such as Marr, Mompín and Rolston, frame their definitions in terms of solving very complex problems.

In the opinion of the authors, the definitions of Delgado and Nebendah are very complete; however, without the support of informed human judgment and emotion, such solutions can lose weight, so a synergistic environment between machine and human must be achieved for solutions to be more effective.

TRENDS IN ARTIFICIAL INTELLIGENCE SYSTEMS

Currently, according to Delgado and Stuart, there are three paradigms in the development of AI:

  • Neural networks
  • Genetic algorithms
  • Fuzzy logic systems

Other paradigms have also been gaining prominence, such as intelligent decision agents and programmable automata. The latter are widely used in industrial processes, according to the needs to be satisfied: reduced space, periodically changing production processes, sequential processes, variable process machinery, etc.

In the authors' opinion, all these developments considerably shorten and optimize decision-making, but great care must be taken to analyze their different impacts, whether environmental, social, political or economic.

Neural networks

Broadly speaking, recall that the human brain is made up of tens of billions of neurons interconnected with each other, forming circuits or networks that perform specific functions.

A typical neuron picks up signals from other neurons through a myriad of delicate structures called dendrites. The neuron emits pulses of electrical activity along a long, thin fiber called the axon, which splits into thousands of branches.

The ends of these branches reach the dendrites of other neurons and establish a connection called a synapse, which transforms the electrical impulse into a neurochemical message by releasing substances called neurotransmitters that excite or inhibit the receiving neuron. In this way information is transmitted from one neuron to others and processed through the synaptic connections, and learning varies according to the effectiveness of the synapses.

Figure 1. Neurons and synaptic connections.

Source: Sandra Patricia Daza, Nueva Granada Military University, 2003.

The psychologist Donald Hebb introduced two fundamental ideas that have decisively influenced the field of neural networks. The Hebb hypothesis, based on psychophysiological research, intuitively presents the way in which neurons memorize information; it is synthetically translated into the famous Hebb learning rule (also known as the product rule). This rule indicates that the connection between two neurons is strengthened when both are activated together. Many current algorithms derive from this psychologist's concepts.
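The product rule just described can be sketched in a few lines; the learning rate and the two-neuron example below are illustrative choices, not values from the original studies.

```python
import numpy as np

def hebb_update(w, x, y, eta=0.1):
    """Hebb (product) rule: the weight between two neurons grows
    in proportion to the product of their activations."""
    return w + eta * np.outer(y, x)

# One output neuron connected to two input neurons, weights at zero.
w = np.zeros((1, 2))
x = np.array([1.0, 0.0])   # only the first input neuron fires
y = np.array([1.0])        # the output neuron fires as well
w = hebb_update(w, x, y)
# Only the connection between the two co-active neurons is strengthened.
```

Repeating the update whenever the two neurons fire together makes that connection progressively stronger, which is how the rule models memorization.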

Widrow published a theory of neural adaptation and two models inspired by it, the Adaline (Adaptive Linear Neuron) and the Madaline (Multiple Adaline). These models were used in numerous applications and allowed a neural network to be used for the first time on a major real-world problem: adaptive filters that remove echoes on telephone lines.

Hopfield elaborated a network model consisting of interconnected processing units that reach energy minima, applying the stability principles developed by Grossberg. The model was very illustrative of the mechanisms of memory storage and retrieval. His enthusiasm and clarity of presentation gave new impetus to the field and caused an increase in research.

Other notable developments of this decade are the Boltzmann machine and the BAM (Bidirectional Associative Memory) models.

Biological and artificial neural network analogy

According to Herrera Fernández:

Neurons are modeled by processing units, characterized by an activity function that converts the total input received from other units into an output value, which acts as the firing rate of the neuron.

Synaptic connections are simulated by weighted connections, the strength or weight of the connection plays the role of the effectiveness of the synapse. The connections determine whether it is possible for one unit to influence another.

A process unit receives several inputs from the outputs of other process units; the total input of a process unit is usually calculated as the sum of all weighted inputs, that is, each input multiplied by the weight of its connection. The inhibitory or excitatory effect of the synapse is achieved by using negative or positive weights, respectively.
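As a minimal sketch of such a processing unit (the logistic activation and the example weights are illustrative assumptions, not taken from the cited work):

```python
import math

def unit_output(inputs, weights, bias=0.0):
    """Total input is the weighted sum of the inputs; a positive
    weight excites the unit, a negative weight inhibits it."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A sigmoid converts the total input into a firing-rate-like output.
    return 1.0 / (1.0 + math.exp(-total))

out = unit_output([1.0, 0.5], [0.8, -0.4])  # one excitatory, one inhibitory connection
```

The weighted sum plays the role of the total stimulation in Table 1, and the activation function plays the role of the firing rate.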

Table 1. Comparison between real neurons and the processing units used in the computational models.

Source: Francisco Herrera Fernández

Biological neural networks         Artificial neural networks
Neurons                            Process units
Synaptic connections               Weighted connections
Synapse effectiveness              Connection weight
Excitatory or inhibitory effect    Sign of the connection weight
Total stimulation                  Weighted total input
Trigger (firing rate)              Trigger function (output)

Neural networks are structured in several layers: the input layer acts as a buffer, storing the raw information supplied to the network or carrying out simple pre-processing of it; the output layer acts as an interface or output buffer that stores the response of the network so that it can be read; and the intermediate layers, the main ones in charge of extracting, processing and memorizing the information, are called hidden layers.
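The layered structure can be sketched as a simple forward pass; the layer sizes, the tanh activation and the random weights below are illustrative assumptions:

```python
import numpy as np

def forward(x, layers):
    """Propagate an input vector through the hidden and output layers,
    each represented as a (weights, bias) pair."""
    a = np.asarray(x, dtype=float)   # input layer: raw input buffer
    for W, b in layers:
        a = np.tanh(W @ a + b)       # hidden/output layers transform the signal
    return a

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((3, 2)), np.zeros(3)),  # one hidden layer
          (rng.standard_normal((1, 3)), np.zeros(1))]  # output layer
y = forward([0.5, -0.2], layers)
```

Training would then adjust the weights in `layers`; here only the flow of information from input buffer to output buffer is shown.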

Figure 2. Multi-layer cascade network model.

Source: Sandra Patricia Daza, Nueva Granada Military University, 2003.

Fuzzy logic systems

According to Delgado, fuzzy logic is the second tool that allows us to emulate human reasoning. Human beings think and reason through words, in degrees between two states, for example black and white or hot and cold. Fuzzy logic systems are an improvement on traditional expert systems in the sense that they allow us to use human language as we reason.
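The graded "hot/cold" reasoning mentioned above can be sketched with fuzzy membership functions; the temperature breakpoints below are invented for illustration:

```python
def cold(t):
    """Degree (0..1) to which temperature t is 'cold':
    fully cold at or below 10, not cold at all at or above 25."""
    if t <= 10:
        return 1.0
    if t >= 25:
        return 0.0
    return (25 - t) / 15.0

def hot(t):
    """Fully hot at or above 30, not hot at all at or below 20."""
    if t >= 30:
        return 1.0
    if t <= 20:
        return 0.0
    return (t - 20) / 10.0

# At 22 degrees the temperature is partly cold AND partly hot,
# a graded judgement classical two-valued logic cannot express.
c, h = cold(22), hot(22)
```

A fuzzy controller combines such degrees through linguistic rules ("if hot then cool strongly") instead of crisp thresholds.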

Traditional expert systems try to reproduce human reasoning symbolically. An expert system is a type of computer application that makes decisions or solves problems in a certain field, such as production systems, finance or medicine, using knowledge and analytical rules defined by experts in that field. Experts solve problems using a combination of fact-based knowledge and reasoning skills. In expert systems, these two basic elements are contained in two separate but related components: a knowledge base and a deduction, or inference, engine. The knowledge base provides objective facts and rules about the subject, while the inference engine provides the reasoning capacity that allows the expert system to draw conclusions.

Expert systems also provide additional tools in the form of user interfaces and explanation mechanisms. User interfaces, as in any other application, allow the user to formulate queries, provide information and interact with the system in other ways. Explanation mechanisms, the most fascinating part of expert systems, allow the systems to explain or justify their conclusions, and also enable programmers to verify the operation of the systems themselves. Expert systems began to appear in the 1960s. Their fields of application include chemistry, geology, medicine, banking and investment, and insurance.
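The knowledge base / inference engine split can be sketched as a minimal forward-chaining loop; the production-line rule and fact names below are invented for illustration:

```python
# Knowledge base: known facts plus rules of the form (premises, conclusion).
facts = {"motor_hot", "low_oil"}
rules = [({"motor_hot", "low_oil"}, "bearing_wear"),
         ({"bearing_wear"}, "stop_line")]

def infer(facts, rules):
    """Inference engine: fire every rule whose premises all hold,
    repeating until no new conclusions can be deduced."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

conclusions = infer(facts, rules)
```

An explanation mechanism would simply record which rule fired to add each conclusion, allowing the chain of reasoning to be replayed for the user.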

In the experience of one of the authors, the hardware on which these systems run, digital integrated circuits, is very effective and durable if used correctly.

Genetic algorithms

According to Delgado, genetic algorithms are a technique inspired by biology: the process of evolution that Charles Darwin described can be applied to optimize control devices, robots, or any other optimizable system, such as production lines.

It is generally accepted that any genetic algorithm to solve a problem must have five basic components, as will be seen below:

  • A coding or representation of the problem that is adequate to it.
  • A way to create an initial population of solutions.
  • An adjustment or adaptation function, also called an evaluation function, which assigns a real number to each possible solution.
  • Genetic operators: during the execution of the algorithm, two parents (individuals of the current population that are feasible solutions to the problem) are selected for reproduction; the selected parents are then crossed, generating two children (new solutions to the problem), on each of which a mutation operator acts with a certain probability. The resulting set of individuals (possible solutions to the problem) forms the following population as the genetic algorithm evolves.
  • Values for the parameters: population size, probability of application of the genetic operators.
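The five components listed above can be sketched in a toy genetic algorithm; the bit-string encoding, the count-the-ones fitness and the parameter values are illustrative choices, not part of the original text:

```python
import random

random.seed(1)

def fitness(ind):               # 3) evaluation function
    return sum(ind)             # toy objective: count the 1-bits

def ga(n_bits=20, pop_size=30, p_mut=0.02, generations=60):
    # 1) coding: a solution is a list of bits; 2) initial population
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # 4) selection of two parents (tournament of three) ...
            p1 = max(random.sample(pop, 3), key=fitness)
            p2 = max(random.sample(pop, 3), key=fitness)
            cut = random.randrange(1, n_bits)   # ... one-point crossover ...
            for child in (p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]):
                # ... and bit-flip mutation with probability p_mut
                nxt.append([b ^ (random.random() < p_mut) for b in child])
        pop = nxt[:pop_size]    # the children form the following population
    return max(pop, key=fitness)

best = ga()                     # 5) parameter values set in the signature
```

Replacing `fitness` with a cost model of a production line turns the same skeleton into the kind of optimizer the text describes.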

APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE TECHNIQUES THEY USE

Within the engineering approach to Artificial Intelligence, the techniques that can be used as tools to solve problems are classified into the following categories:

  1. Basic techniques, so called because they underlie various AI applications. Among others are Heuristic Search for Solutions, Knowledge Representation, Automatic Deduction, Symbolic Programming (LISP) and Neural Networks. These techniques are the foundation of the applications. For the most part they need not be known by the end user, but by the professionals dedicated to applying them and generating commercial applications.

  2. Technologies, or combinations of several basic techniques, aimed at solving families of problems. Technologies are more specialized than the basic techniques and are closer to the final applications; Robotics and Vision, Natural Language, and Expert Systems can be mentioned.

  3. Classes or types of applications: Diagnosis, Prediction (self-control systems of atomic reactors), Sequencing of operations ("scheduling"), Design, Interpretation of data. All of these are families of problem types. For example, diagnosis refers to finding the causes of failures, whether failures in a production line or an illness in a person.

  4. Fields of application: Engineering, Medicine, Manufacturing Systems, Administration, Management Decision Support, etc. All fall within the area of computer systems, but are considered clients of Artificial Intelligence.

APPLICATION OF ARTIFICIAL INTELLIGENCE IN PRODUCTIVE SYSTEMS

The incorporation of intelligent decision agents, neural networks, expert systems, genetic algorithms and programmable automata for the optimization of production systems is an active trend in the industrial environment of countries with high technological development and large investment in research and development. The main function of these Artificial Intelligence components is to control, independently and in coordination with other agents, industrial components such as manufacturing or assembly cells and maintenance operations, among others.

There is a growing trend towards the implementation of more autonomous and intelligent manufacturing and assembly systems, due to market demands for products with very high levels of quality; this is complicated to achieve with manual operations and keeps underdeveloped countries like ours from reaching competitive levels worldwide. When designing a computer-integrated production system, importance should be given to the supervision, planning, sequencing, cooperation and execution of operation tasks in work centers, together with the control of inventory levels and the quality and reliability characteristics of the system. These factors determine the structure of the system, and their coordination represents one of the most important functions in the management and control of production.

Very often, the reason for building a simulation model is to answer questions such as: what are the optimal parameters to maximize or minimize a certain objective function? In recent years there have been great advances in the field of optimization of production systems; however, progress in developing tools to analyze simulation model results has been very slow. Many traditional optimization techniques exist, but only individuals with deep knowledge of statistics and simulation concepts have been able to make significant contributions in the area.

Due to the rise of meta-heuristic search algorithms, a new field has opened in the area of optimization with simulation. New software packages, such as OptQuest (Optimal Technologies), SIMRUNNER (Promodel Corporation) and Evolver (Palisade Software), have come onto the market providing user-friendly solutions for optimizing systems; these require no internal control over the built model, only over the results that the model produces under different conditions. In addition, new artificial intelligence techniques applied to stochastic optimization problems have demonstrated their efficiency and their computational and approximation capacity.

Reinforcement learning is a set of techniques designed to solve problems based on Markovian decision processes. Markovian processes are stochastic decision processes based on the idea that the action to be taken in a given state, at a given instant, depends only on the state of the system at the time the decision is made.
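A minimal value-iteration sketch on an invented two-state machine-maintenance decision process illustrates this state-only dependence; all transition probabilities and rewards below are illustrative numbers, not data from the text.

```python
# States: 0 = machine OK, 1 = machine degraded.
# Actions: 0 = run, 1 = maintain. P[a][s][t]: probability of moving
# from state s to state t under action a; R[a][s]: immediate reward.
P = {0: [[0.9, 0.1], [0.0, 1.0]],   # run: an OK machine may degrade
     1: [[1.0, 0.0], [0.8, 0.2]]}   # maintain: repairs with prob. 0.8
R = {0: [10.0, 2.0],                # running earns more when OK
     1: [5.0, 0.0]}                 # maintenance sacrifices production
gamma = 0.9                         # discount factor

def q(s, a, V):
    # The value of an action depends only on the current state s.
    return R[a][s] + gamma * sum(P[a][s][t] * V[t] for t in (0, 1))

V = [0.0, 0.0]
for _ in range(200):                # value iteration to convergence
    V = [max(q(s, a, V) for a in (0, 1)) for s in (0, 1)]

policy = [max((0, 1), key=lambda a: q(s, a, V)) for s in (0, 1)]
```

The resulting policy (run while the machine is OK, maintain once it degrades) is exactly the kind of state-dependent operating rule the paragraph describes.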

One of the areas that may have the greatest direct impact on the production processes of the industry worldwide is the design of support systems for decision-making based on the optimization of the system's operating parameters. For this purpose, the use of intelligent parametric and non-parametric techniques for data analysis is of great interest.

However, in the opinion of the authors, most of the architectures proposed so far for computer-integrated manufacturing lack a fundamental integration factor. Communication between the various hierarchical levels of a production plant is very limited, since each department confines itself to carrying out its own function without seeking integration of the entire production plant, with the exception of companies such as ABB with its Baan software, etc.

APPLICATIONS OF ARTIFICIAL INTELLIGENCE IN THE SOLUTION OF SPECIFIC PRODUCTION PROBLEMS

Automatic quality control operation using a computer vision system (Royman López Beltrán, Edgar Sotter Solano, Eduardo Zurek Varela. Robotics and Automatic Production Laboratory. Universidad del Norte)

Every industrial process is evaluated by the quality of its final product, which makes quality control a crucial phase of the process. The mechanisms used to establish the quality of a product vary depending on the parameters relevant to it. When the relevant parameter is the geometry or shape of the manufactured object, inspection and verification are usually left to an operator; however, there may be errors in the geometry of an object that escape the sight of an operator and later prevent the proper functioning of that object. In such a case, a good alternative is to use an artificial vision system capable of detecting the errors an operator could overlook. The Robot Vision PRO artificial vision system is capable of performing object identification and quality control tasks fully automatically.

The Robot Vision PRO system is a vision software package that enables image acquisition, pre-processing and segmentation. It also performs high-level data processing that provides image filtering, clustering, pattern extraction and object identification. The system has a video camera and a monitor in charge of identifying each of the relevant parts of the process and comparing them with parts of 100% quality, to later determine whether the packaging can be released to the market or should be discarded.

Below are some images supplied by the Robot Vision PRO system during the execution of the quality control operation. The packages were arranged so that their geometries were fully captured by the program, and quality control was subsequently carried out individually for each package.

Figure 3: good packaging with 100% quality

The two subsequent figures show defective packaging because it does not meet the necessary specifications and therefore the quality system rejects the product.

Figure 4. Packaging rejected due to poor quality

Figure 5. Packaging rejected due to poor quality

After being evaluated in the company, the Robot Vision PRO computer vision system proved efficient for detecting geometric defects in the packaging of centrifugal compressors, since the flexibility of the software made it possible to adjust the process conditions to the quality system required for proper measurement of the packaging. The system is also didactic enough to develop expressions that allow measurement, recognition and quality control tasks to be carried out fully automatically.

The authors believe that this technology is very suitable in companies where the surface finish of a part is very demanding or where tolerances are tight, such as spare parts for cars, industrial instrumentation, etc.

  • Projects under development by the line of research and development of artificial intelligence (research group of the University of Manizales)

JAT (Intelligent Dispatch and Control System for Public Transportation): its main idea is to improve the urban transport service of the city of Manizales through intelligent dispatch and control that improves the quality of service and reduces operating costs. The intelligent part is in charge of scheduling the dispatch of routes, seeking to have all the buses cover them equally.

Intelligent Remote Monitoring and Surveillance System: the aim is to implement closed-circuit TV systems with the capacity for remote monitoring through a computer and a telephone line, from anywhere in the world, via the Internet.

  • Recognition of environments in mobile robotics through neural networks

This study focuses on the global identification of environments carried out by a mobile robot, based on the training of a neural network that receives the information captured from the environment by the robot's sensory system (ultrasound). The robot, through the neural network, has the sole task of maximizing its knowledge of the environment presented to it. In this way it models and explores the environment efficiently while executing obstacle-avoidance algorithms.

The result of this study is of great importance in the field of mobile robotics because the robot acquires greater autonomy of movement, the use of ultrasound as an obstacle detector is optimized, and it is an important tool for the development of trajectory planners and "intelligent" drivers.

The network uses a 2 - 2 - 1 architecture:

Nih: number of input neurons (2).

Nhid: number of neurons in the intermediate layer (2).

Nout: number of output neurons (1).

One of the examples with which the network was trained is shown in broad terms below (for more details, consult the research by Rivera & Gauthier, Universidad de los Andes).

The parameters used in the training were a learning constant of 0.2 and a momentum constant of 0.9. Source: Claudia Rivera, 1995.
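Those two constants appear in the standard momentum weight update, sketched below on a single scalar weight (the gradient value is illustrative, not taken from the cited training runs):

```python
eta, alpha = 0.2, 0.9   # learning constant and momentum constant from the text

def momentum_step(w, grad, prev_delta):
    """Backpropagation-style update: the new weight change combines the
    current gradient with a fraction alpha of the previous change."""
    delta = -eta * grad + alpha * prev_delta
    return w + delta, delta

w, d = momentum_step(1.0, 0.5, 0.0)   # first step: pure gradient term
w2, d2 = momentum_step(w, 0.5, d)     # second step: momentum accelerates the change
```

A high momentum constant such as 0.9 smooths the weight trajectory and speeds up learning along consistent gradient directions.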

Figure 6. Three-obstacle training environment.

The robot was located in eight different positions, and in each of these a sweep was made, forming eight files with which the network was trained; once it recognizes the environment, the robot will not crash into any obstacle.

As the internal capacity of the neural network is increased, it gains more capacity and speed to learn different environments.

In the authors' view, mobile robotics is very important in production processes where humans cannot withstand very high or very low temperatures for long periods of time, such as at MEALS, where a robot could first be trained by a person and, as its training is perfected, later be prepared as a cargo transporter.

  • Genetic algorithms applied to the quadratic assignment problem (QAP) of facility allocation (Department of Operations Research, School of Industrial Engineering, University of Carabobo, Valencia, Venezuela. Ninoska Maneiro, Genetic Algorithm Applied to Facility Location Problems, 2001, cemisid.ing.ula.ve / area3).

The QAP is a combinatorial problem, considered by some authors to be NP-complete. The objective of the QAP is to find an allocation of facilities to sites that minimizes a function expressing costs or distances.
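That objective function can be written down directly; the 3x3 flow and distance matrices below are invented for illustration, and the exhaustive search is only feasible for tiny instances, which is why heuristics such as genetic algorithms are used for realistic sizes.

```python
from itertools import permutations

flow = [[0, 3, 1],   # flow[i][j]: material flow between facilities i and j
        [3, 0, 2],
        [1, 2, 0]]
dist = [[0, 1, 4],   # dist[a][b]: distance between sites a and b
        [1, 0, 2],
        [4, 2, 0]]

def cost(assign):
    """QAP objective: assign[i] is the site of facility i; the cost
    sums flow * distance over every ordered pair of facilities."""
    n = len(assign)
    return sum(flow[i][j] * dist[assign[i]][assign[j]]
               for i in range(n) for j in range(n))

best = min(permutations(range(3)), key=cost)   # brute force, N = 3 only
```

With N facilities there are N! assignments, so a genetic algorithm would replace the `min` over `permutations` with evolved populations of permutations.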

The location and distribution of facilities is one of the most important topics in the training of professionals in the area of Industrial Engineering, and of all those professionals responsible for the planning, organization and systematic growth of cities. In the daily and professional life of every individual, a great variety of facility location problems arise.

The problems of location and distribution of facilities are strategic for the success of any manufacturing operation, mainly because materials-handling costs comprise between 30% and 75% of total manufacturing costs. A good solution to the facility allocation problem contributes to the overall efficiency of operations, while a poor distribution can lead to accumulation of work-in-process inventory, overload of material-handling systems, inefficient set-ups and long queues. Within this broad class of problems that can be classified as QAP is the generalized flow line problem: a flow line in which operations flow forward but are not necessarily processed on all machines in the line. A job on such a line can begin and complete its process on any machine, always moving forward (downstream) through successive operations according to the process work sequence. When the sequence of operations for a job does not specify a machine positioned ahead of its current location, the job has to travel in the opposite direction (upstream) to complete the required operation. This "reverse trip" of operations is called backtracking; it deviates from an ideal flow line for a specific job and results in a less efficient work structure, as shown in the following figure.

In the authors' view, this quadratic assignment problem should be addressed in production courses because of its relevance when analyzing the sequencing of N jobs on M machines.

Fig. 7. A generalized flow line. Source: Ninoska Maneiro, 2001.

CONCLUSIONS

  • At the National University, Manizales campus, the industrial engineering program should work more on computer science, in order to delve into areas of artificial intelligence applied to industrial engineering.
  • With the development of this work, satisfactory results were obtained at the level of theoretical research, since the documentation gathered revealed advances in computer science that in some cases were unknown to the authors. The great advances in AI applied to production systems have helped industry, in its constant search to improve competitiveness, achieve this objective; but in many cases they displace a large amount of labor, carrying with it a social deterioration reflected in global indicators of unemployment and poverty.

BIBLIOGRAPHY

  • Rich, Elaine; Knight, Kevin. Artificial Intelligence. Second edition. McGraw-Hill, Mexico, 1994.
  • Russell, Stuart; Norvig, Peter. Artificial Intelligence: A Modern Approach. Prentice Hall, Mexico, 1996.
  • La Ventana Informática Magazine. Issue No. 09. University of Manizales, pages 56-57, May 2003.
  • Delgado, Alberto. Artificial Intelligence and Mini Robots. Second edition. Ecoe Editions, July 1998.
  • Delgado, Alberto. Artificial Intelligence and Mini Robots. VII National Congress of Students of Industrial, Administrative and Production Engineering, National University, Manizales campus. Congress proceedings, October 4-10, 1998.
  • Computer and Computing Encyclopedia. Software Engineering and Artificial Intelligence. July 1992.
  • Nebendah, Dieter. Expert Systems. Engineering and Communication. Marcombo Publishers, Barcelona, 1988.
  • Marr, D. C. Artificial Intelligence: a Personal View. Artificial Intelligence, USA, 1977.
  • Rolston, David W. Principles of Artificial Intelligence and Expert Systems. McGraw-Hill, Mexico, 1992.
  • Mompín P., José. Artificial Intelligence: Concepts, Techniques and Applications. Marcombo S.A. Editions, Spain, 1987.
  • Ibero-American Journal of Artificial Intelligence. Application of Artificial Intelligence in Automated Production Systems. Llata, J. R.; Sarabia, E. G.; Fernández, D.; Arce, J.; Oria, J. P. Number 10, pages 100-110. Available at http://www.aepia.org/.

Herrera Fernández, Francisco, Ph.D. Professor of the Department of Automatic Control, Universidad Central de las Villas, Santa Clara, Cuba. Article: Control based on neural networks for a non-linear dynamic process, pages 42-44.

Ninoska Maneiro. Genetic Algorithm Applied to Facility Location Problems. Master's Thesis. Faculty of Engineering. University of Carabobo, 2001.

Rivera, Claudia; Gauthier, Alain. Universidad de los Andes, January 1995.

