Artificial Intelligence | Definition, Examples, Types & more


Artificial intelligence (AI) is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The project of building systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience, is sometimes referred to as artificial general intelligence (AGI). Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks, such as discovering proofs for mathematical theorems or playing chess, with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition.

What is Artificial Intelligence?

All but the simplest human behaviors are ascribed to intelligence, while even the most complicated insect behaviors are never taken as evidence of intelligence. What exactly is the distinction? Consider the behavior of the digger wasp, Sphex ichneumoneus. When the female wasp returns to her burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only then, if the coast is clear, carries her food inside. The true nature of the wasp’s instinctive behavior is revealed if the food is moved a few inches away from the entrance to her burrow while she is inside: on emerging, she will repeat the whole procedure as often as the food is displaced. The ability to adapt one’s behavior to changing circumstances is an essential component of intelligence, and it is conspicuously absent in Sphex.

The term “intelligence” is rarely used by psychologists to refer to a single quality of human beings; rather, they focus on the sum of a person’s many different skills. The majority of attention in the field of artificial intelligence research has been paid to the following aspects of intelligence: learning, reasoning, problem-solving, perception, and the use of language.


Learning

Learning takes a number of different forms as applied to artificial intelligence. The simplest is learning by trial and error. For example, a simple computer program for solving mate-in-one chess problems might try moves at random until mate is found. The program might then store the solution along with the position, so that the next time the computer encountered the same position it would recall the solution. This simple memorizing of individual items and procedures, known as rote learning, is relatively easy to implement on a computer. More challenging is the problem of implementing what is called generalization. Generalizing involves applying past experience to analogous new situations. For example, a program that learns the past tense of regular English verbs by rote will not be able to produce the past tense of a word such as jump unless it has previously been presented with jumped, whereas a program that is able to generalize can learn the “add -ed” rule and so form the past tense of jump on the basis of its experience with similar verbs.
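The difference between rote learning and generalization can be made concrete with a small sketch. The following Python fragment (an illustration only, not drawn from any program mentioned above) memorizes the verb/past-tense pairs it has already seen and falls back on the learned “add -ed” rule for an unfamiliar regular verb such as jump:

```python
# A minimal sketch contrasting rote learning with generalization.

# Rote learning: store each verb/past-tense pair exactly as it was encountered.
rote_memory = {"look": "looked", "walk": "walked"}

def past_tense_rote(verb):
    """Return a memorized past tense, or None if the verb has never been seen."""
    return rote_memory.get(verb)

def past_tense_generalized(verb):
    """Use memory if possible; otherwise apply the generalized 'add -ed' rule."""
    memorized = rote_memory.get(verb)
    if memorized is not None:
        return memorized
    return verb + "ed"  # the rule generalized from verbs such as 'look' and 'walk'

print(past_tense_rote("jump"))         # None -- rote learning fails on a new verb
print(past_tense_generalized("jump"))  # 'jumped' -- the rule extends to new cases
```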

Reasoning

To reason is to draw inferences appropriate to the situation. Inferences are classified as either deductive or inductive. An example of the former is, “Fred must be in either the museum or the café. He is not in the café; therefore he is in the museum,” and of the latter, “Previous accidents of this sort were caused by instrument failure; therefore this accident was probably caused by instrument failure.” The most significant difference between these forms of reasoning is that in the deductive case the truth of the premises guarantees the truth of the conclusion, whereas in the inductive case the truth of the premises lends support to the conclusion without giving absolute assurance. Inductive reasoning is common in science, where data are collected and tentative models are developed to describe and predict future behavior, until the appearance of anomalous data forces the model to be revised. Deductive reasoning is common in mathematics and logic, where elaborate structures of irrefutable theorems are built up from a small set of basic axioms and rules.
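The deductive example above can be reduced to a few lines of code. The sketch below (illustrative only) encodes the two premises—Fred is in the museum or the café, and Fred is not in the café—and eliminates the excluded alternative, which is all that this particular deduction requires:

```python
# Disjunctive syllogism: "museum or cafe" plus "not cafe" yields "museum".
possible_locations = {"museum", "cafe"}   # premise 1: Fred is in one of these
ruled_out = {"cafe"}                      # premise 2: Fred is not in the cafe

remaining = possible_locations - ruled_out
if len(remaining) == 1:
    print("Deduction: Fred must be in the", remaining.pop())
```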

 

There has been considerable success in programming computers to draw inferences, especially deductive inferences. However, true reasoning involves more than just drawing inferences; it involves drawing inferences relevant to the solution of the particular problem or situation at hand. This is one of the hardest problems confronting AI.

Problem-solving

Problem solving, particularly in artificial intelligence, may be characterized as a systematic search through a range of possible actions in order to reach some predefined goal or solution. Problem-solving methods divide into special purpose and general purpose.

A special-purpose method is tailor-made for a particular problem and often exploits very specific features of the situation in which the problem is embedded. In contrast, a general-purpose method is applicable to a wide variety of problems. One general-purpose technique used in AI is means-end analysis, a step-by-step, or incremental, reduction of the difference between the current state and the final goal. The program selects actions from a list of means until the goal is reached. In the case of a simple robot, for example, this list might consist of PICKUP, PUTDOWN, MOVEFORWARD, MOVEBACK, MOVELEFT, and MOVERIGHT.
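As a rough illustration of means-end analysis (a simplified sketch, not the robot described above), the following Python fragment plans moves for a robot on a grid by repeatedly choosing, from the list of available actions, the one that most reduces the remaining distance to the goal. Obstacles are ignored, so the purely greedy strategy always succeeds here:

```python
# Greedy means-end analysis on an obstacle-free grid.
ACTIONS = {
    "MOVEFORWARD": (0, 1),
    "MOVEBACK":    (0, -1),
    "MOVELEFT":    (-1, 0),
    "MOVERIGHT":   (1, 0),
}

def distance(state, goal):
    """The 'difference' to be reduced: Manhattan distance to the goal."""
    return abs(goal[0] - state[0]) + abs(goal[1] - state[1])

def means_end_plan(start, goal):
    state, plan = start, []
    while state != goal:
        # choose the action whose resulting state is closest to the goal
        name, delta = min(
            ACTIONS.items(),
            key=lambda item: distance((state[0] + item[1][0], state[1] + item[1][1]), goal),
        )
        state = (state[0] + delta[0], state[1] + delta[1])
        plan.append(name)
    return plan

print(means_end_plan((0, 0), (2, 1)))
# ['MOVEFORWARD', 'MOVERIGHT', 'MOVERIGHT']
```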

Programs using artificial intelligence have proven successful in solving a wide variety of issues. Creating mathematical proofs, determining the winning move (or sequence of moves) in a board game, and manipulating “virtual objects” in a computer-generated world are some examples of computationally intensive tasks.

Perception

During the process of perception, the surrounding environment is analyzed using a variety of sensory organs, which may be natural or artificial, and the scene is broken down into distinct objects arranged in a variety of spatial relationships. The fact that an object’s appearance might change depending on the angle from which it is viewed, the direction and intensity of the illumination in the scene, and how much the object contrasts with the field around it makes analysis more difficult.

At present, artificial perception has advanced to the point that optical sensors can identify individuals, autonomous vehicles can drive at moderate speeds on the open road, and robots can roam through buildings collecting empty drink cans. One of the earliest systems to integrate perception and action was FREDDY, a stationary robot with a moving television eye and a pincer hand, constructed at the University of Edinburgh in Scotland between 1966 and 1973 under the direction of Donald Michie. FREDDY was able to recognize a variety of objects and could be instructed to assemble simple artifacts, such as a toy car, from a random heap of components.

Language

A language is a system of signs having meaning by convention. In this sense, language need not be confined to the spoken word. Traffic signs, for example, form a minilanguage, it being a matter of convention that a particular hazard symbol means “danger ahead” in some countries. It is distinctive of languages that linguistic units possess meaning by convention, and linguistic meaning is very different from what is called natural meaning, exemplified in statements such as “Those clouds mean rain” and “The fall in pressure means the valve is malfunctioning.”

In contrast to birdcalls and the symbols used on traffic signs, fully developed human languages convey far more information; in particular, they are productive, capable of formulating an unlimited variety of sentences.

It is relatively easy to write computer programs that seem able, in severely restricted contexts, to respond fluently in a human language to questions and statements. Although none of these programs actually understands language, they may, in principle, reach the point where their command of a language is indistinguishable from that of a normal human. What, then, is involved in genuine understanding, if even a computer that uses language like a native human speaker is not acknowledged to understand? There is no universally agreed-upon answer to this difficult question. According to one theory, whether or not one understands depends not only on one’s behavior but also on one’s history: in order to be said to understand, one must have learned the language and have been trained to take one’s place in the linguistic community by means of interaction with other language users.


Methods and goals in AI

Symbolic vs. connectionist approaches

The symbolic approach, also known as the “top-down” approach, and the connectionist approach, sometimes known as the “bottom-up” approach, are the two main methodologies that are followed in the field of artificial intelligence research. The top-down approach attempts to reproduce intelligence by investigating cognition in terms of the processing of symbols, which is where the name “symbolic” gets its origin. This approach is independent of the biological anatomy of the brain. The bottom-up approach, on the other hand, entails the creation of artificial neural networks that are an imitation of the structure of the brain, which is where the connectionist name comes from.

To illustrate the difference between these approaches, consider the task of building a system, equipped with an optical scanner, that recognizes the letters of the alphabet. A bottom-up approach typically involves training an artificial neural network by presenting letters to it one by one, gradually improving performance by “tuning” the network. (Tuning adjusts the responsiveness of different neural pathways to different stimuli.) In contrast, a top-down approach typically involves writing a computer program that compares each letter with geometric descriptions. Simply put, neural activities are the building blocks of the bottom-up approach, while symbolic descriptions are the building blocks of the top-down approach.

 

In The Fundamentals of Learning (1932), Edward Thorndike, a psychologist at Columbia University, New York City, first suggested that human learning consists of some unknown property of connections between neurons in the brain. In The Organization of Behavior (1949), Donald Hebb, a psychologist at McGill University, Montreal, Canada, suggested that learning specifically involves strengthening certain patterns of neural activity by increasing the probability (weight) of induced neuron firing between the associated connections. The notion of weighted connections is described further in the section devoted to connectionism.

 

In 1957 two vigorous advocates of symbolic AI—Allen Newell, a researcher at the RAND Corporation in Santa Monica, California, and Herbert Simon, a psychologist and computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania—summed up the top-down approach in what they called the physical symbol system hypothesis. This hypothesis states that processing structures of symbols is sufficient, in principle, to produce artificial intelligence in a digital computer and that, moreover, human intelligence is the result of the same type of symbolic manipulations.

Both the top-down and bottom-up approaches were pursued simultaneously during the 1950s and 1960s, and both achieved notable, albeit limited, outcomes. During this time period, the top-down strategy was more prevalent. Bottom-up AI, on the other hand, was mostly ignored during the 1970s, and it wasn’t until the 1980s that this methodology once again gained widespread attention. Both strategies are used in today’s world, and it is well accepted that both are fraught with challenges. Symbolic techniques are effective in simplified domains but often fail when confronted with the real world; in the meantime, bottom-up researchers have been unable to recreate the neural systems of even the most basic living beings. Caenorhabditis elegans, a worm that has been the subject of extensive research, has roughly 300 neurons, and the structure of their interconnections is completely understood. However, connectionist models have not even been successful in imitating this worm. It should come as no surprise that the neurons described by the connectionist theory are a severe simplification of the actual phenomenon.

Strong AI, applied AI, and cognitive simulation

Employing the methods outlined above, AI research attempts to reach one of three goals: strong AI, applied AI, or cognitive simulation. Strong AI aims to build machines that think. (The term strong AI was introduced for this category of research in 1980 by the philosopher John Searle of the University of California, Berkeley.) The ultimate ambition of strong AI is to produce a machine whose overall intellectual ability is indistinguishable from that of a human being. This goal generated great interest in the 1950s and 1960s, as described in the section on early milestones in AI, but such optimism has since given way to an appreciation of the extreme difficulties involved. To date, progress has been meagre. Some critics doubt whether research will produce even a system with the overall intellectual ability of an ant in the foreseeable future. Indeed, some researchers working in AI’s other two branches view strong AI as not worth pursuing.

The goal of applied artificial intelligence, also known as advanced information processing, is to develop “smart” systems that may be used profitably in business, such as “expert” medical diagnosis systems and stock-trading systems. As may be seen in the section titled “Expert systems,” applied artificial intelligence has been quite successful.

Computers are employed in the field of cognitive simulation to test hypotheses regarding the operation of the human mind, such as theories concerning the manner in which individuals remember faces or recall past experiences. The fields of neuroscience and cognitive psychology already make extensive use of the potent instrument of cognitive simulation.

Alan Turing and the beginning of AI

Theoretical work

The earliest substantial work in the field of artificial intelligence was done in the mid-20th century by the British logician and computer pioneer Alan Mathison Turing. In 1935 Turing described an abstract computing machine consisting of a limitless memory and a scanner that moves back and forth through the memory, symbol by symbol, reading what it finds and writing further symbols. The actions of the scanner are dictated by a program of instructions that is also stored in the memory in the form of symbols. This is Turing’s stored-program concept, and implicit in it is the possibility of the machine operating on, and so modifying or improving, its own program. Turing’s conception is now known simply as the universal Turing machine. All modern computers are in essence universal Turing machines.

 

During World War II, Turing was a leading cryptanalyst at the Government Code and Cypher School in Bletchley Park, Buckinghamshire, England. He could not return to the project of building a stored-program electronic computing machine until the end of hostilities in Europe in 1945. Nevertheless, during the war he gave considerable thought to the problem of machine intelligence. One of Turing’s colleagues at Bletchley Park, Donald Michie (who later founded the Department of Machine Intelligence and Perception at the University of Edinburgh), recalled that Turing often discussed how computers could learn from experience as well as solve new problems through the use of guiding principles—a process now known as heuristic problem solving.

Turing gave quite possibly the earliest public lecture to mention computer intelligence (London, 1947), saying, “What we want is a machine that can learn from experience,” and that “the possibility of letting the machine alter its own instructions provides the mechanism for this.” In 1948 he introduced many of the central concepts of AI in a report entitled “Intelligent Machinery.” However, Turing did not publish this report, and many of his ideas were later reinvented by others. For instance, one of Turing’s original ideas was to train a network of artificial neurons to perform specific tasks, an approach described further in the section on connectionism.

Chess

At Bletchley Park, Turing illustrated his ideas on machine intelligence by reference to chess—a useful source of challenging and clearly defined problems against which proposed methods for problem solving could be tested. In principle, a chess-playing computer could play by searching exhaustively through all the available moves, but in practice this is impossible because it would involve examining an astronomically large number of moves. Heuristics are necessary to guide a narrower, more discriminating search. Although Turing experimented with designing chess programs, he had to content himself with theory in the absence of a computer capable of running his chess program. The first true AI programs had to await the arrival of stored-program electronic digital computers.

In 1945 Turing predicted that computers would one day play very good chess, and just over 50 years later, in 1997, Deep Blue, a chess computer built by the International Business Machines Corporation (IBM), beat the reigning world champion, Garry Kasparov, in a six-game match. While Turing’s prediction came true, his expectation that chess programming would contribute to the understanding of how human beings think did not. The huge improvement in computer chess since Turing’s day owes much to advances in computer engineering: Deep Blue’s 256 parallel processors enabled it to examine 200 million possible moves per second and to look ahead as many as 14 turns of play. Advances in artificial intelligence have not kept pace with these gains in hardware. Many agree with Noam Chomsky, a linguist at the Massachusetts Institute of Technology (MIT), who opined that a computer beating a grandmaster at chess is about as interesting as a bulldozer winning an Olympic weightlifting competition.

The Turing test

In 1950 Turing sidestepped the traditional debate concerning the definition of intelligence by introducing a practical test for computer intelligence that is now known simply as the Turing test. The Turing test involves three participants: a computer, a human interrogator, and a human foil. The interrogator attempts to determine, by asking questions of the other two participants, which is the computer. All communication is via keyboard and display screen. The interrogator may ask questions as penetrating and wide-ranging as desired, and the computer is permitted to do everything possible to force a wrong identification. (For instance, the computer might answer “No” in response to “Are you a computer?” and might follow a request to multiply one large number by another with a long pause and an incorrect answer.) The foil must help the interrogator to make a correct identification. A number of different people play the roles of interrogator and foil, and if a sufficient proportion of the interrogators are unable to distinguish the computer from the human being, then (according to proponents of Turing’s test) the computer is considered an intelligent, thinking entity.

In 1991 the American philanthropist Hugh Loebner started the annual Loebner Prize competition, promising $100,000 to the first computer to pass the Turing test and awarding $2,000 each year to the best effort. However, no AI program has come close to passing an undiluted Turing test.

Early milestones in AI

The first AI programs

The earliest successful AI program was written in 1951 by Christopher Strachey, later director of the Programming Research Group at the University of Oxford. Strachey’s checkers (draughts) program ran on the Ferranti Mark I computer at the University of Manchester, England. By the summer of 1952 this program could play a complete game of checkers at a reasonable speed.

Information about the earliest successful demonstration of machine learning was published in 1952. Shopper, written by Anthony Oettinger at the University of Cambridge, ran on the EDSAC computer. Shopper’s simulated world was a shopping centre of eight stores. When instructed to purchase an item, Shopper would search for it, visiting stores at random until the item was found. While searching, Shopper would memorize a few of the items stocked in each store it visited (just as a human shopper might). The next time Shopper was sent out for the same item, or for some other item that it had already located, it would go straight to the right store. This simple form of learning is the rote learning mentioned in the introductory section What is Artificial Intelligence?

 

The first AI program to run in the United States was also a checkers program, written in 1952 by Arthur Samuel for the prototype of the IBM 701. Samuel took over the essentials of Strachey’s checkers program and over a period of years considerably extended it. In 1955 he added features that enabled the program to learn from experience. Samuel included mechanisms for both rote learning and generalization, enhancements that eventually led to his program winning one game against a former Connecticut checkers champion in 1962.

Evolutionary computing

Samuel’s checkers program was also notable for being one of the first attempts at evolutionary computing. (His program “evolved” by pitting a modified copy against the current best version, with the winner becoming the new standard.) Evolutionary computing typically involves the use of some automatic method of generating and evaluating successive “generations” of a program, until a highly proficient solution evolves.
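The champion-versus-challenger idea can be sketched in a few lines of Python. The example below is purely illustrative (Samuel’s actual program evolved evaluation functions for checkers); here “skill” is stood in for by how closely a vector of parameters matches a hypothetical ideal setting:

```python
import random

random.seed(0)
IDEAL = [1, 0, 1, 1, 0, 1, 0, 0]   # stands in for 'perfect play'

def fitness(params):
    """Score a candidate by how many parameters match the ideal."""
    return sum(p == t for p, t in zip(params, IDEAL))

def mutate(params):
    """Produce a modified copy by flipping one randomly chosen parameter."""
    copy = params[:]
    i = random.randrange(len(copy))
    copy[i] = 1 - copy[i]
    return copy

champion = [0] * len(IDEAL)
for generation in range(100):
    challenger = mutate(champion)
    if fitness(challenger) > fitness(champion):
        champion = challenger          # the winner becomes the new standard

print(champion, fitness(champion))
```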

A leading proponent of evolutionary computing, John Holland, also wrote test software for the prototype of the IBM 701 computer. In particular, he helped design a neural-network “virtual” rat that could be trained to run a maze. This work convinced Holland of the efficacy of the bottom-up approach. While continuing to consult for IBM, Holland moved to the University of Michigan in 1952 to pursue a doctorate in mathematics. He soon switched, however, to a new interdisciplinary program in computers and information processing (later known as communications science) created by Arthur Burks, one of the builders of ENIAC and its successor EDVAC. In his 1959 dissertation, for most likely the world’s first computer science Ph.D., Holland proposed a new type of computer—a multiprocessor computer—that would assign each artificial neuron in a network to a separate processor. (In 1985 Daniel Hillis solved the engineering difficulties to build the first such computer, the 65,536-processor Thinking Machines Corporation supercomputer.)

After graduating, Holland joined the faculty at the University of Michigan, where over the next four decades he directed much of the research into methods of automating evolutionary computing, a process now known by the term genetic algorithms. Systems implemented in Holland’s laboratory included a chess program, models of single-cell biological organisms, and a classifier system for controlling a simulated gas-pipeline network. Genetic algorithms are no longer restricted to academic demonstrations, however; in one important practical application, a genetic algorithm cooperates with a witness to a crime in order to generate a portrait of the perpetrator.

Logical reasoning and problem solving

The ability to reason logically is an important aspect of intelligence and has always been a major focus of AI research. An important landmark in this area was a theorem-proving program written in 1955–56 by Allen Newell and J. Clifford Shaw of the RAND Corporation and Herbert Simon of Carnegie Mellon University. The Logic Theorist, as the program became known, was designed to prove theorems from Principia Mathematica (1910–13), a three-volume work by the British philosopher-mathematicians Alfred North Whitehead and Bertrand Russell. In one instance, a proof devised by the program was more elegant than the proof given in the books.

Newell, Simon, and Shaw went on to write a more powerful program, the General Problem Solver, or GPS. The first version of GPS ran in 1957, and work continued on the project for about a decade. GPS could solve an impressive variety of puzzles using a trial-and-error approach. However, one criticism of GPS, and of similar programs that lack any learning capability, is that the program’s intelligence is entirely secondhand, coming only from whatever information the programmer explicitly includes.

English dialogue

Two of the best-known early AI programs, Eliza and Parry, gave an eerie semblance of intelligent conversation. (Details of both were first published in 1966.) Eliza, written by Joseph Weizenbaum of the MIT AI Laboratory, simulated a human counselor. Parry, written by the Stanford University psychiatrist Kenneth Colby, simulated a human experiencing paranoia.

Psychiatrists who were asked to decide whether they were communicating with Parry or a human paranoiac were often unable to tell. Nevertheless, neither Parry nor Eliza could reasonably be described as intelligent. Parry’s contributions to the conversation were canned—constructed in advance by the programmer and stored away in the computer’s memory. Eliza, too, relied on canned sentences and simple programming tricks.

AI programming languages

In the course of their work on the Logic Theorist and GPS, Newell, Simon, and Shaw developed their Information Processing Language (IPL), a computer language tailored for AI programming. At the heart of IPL was a highly flexible data structure that they called a list. A list is simply an ordered sequence of items of data. Some or all of the items in a list may themselves be lists. This scheme leads to richly branching structures.

In 1960, John McCarthy produced the programming language known as LISP (List Processor) by combining aspects of IPL with the lambda calculus, which is a formal mathematical-logical system. LISP continues to be the primary language used for artificial intelligence development in the United States. (The lambda calculus itself was developed in 1936 by Alonzo Church, a logician at Princeton. At the time, Church was investigating the abstract Entscheidungsproblem, also known as the “decision problem,” for predicate logic. This was the same problem that Turing had been attempting to solve when he developed the universal Turing machine.)

 

The logic programming language PROLOG (Programmation en Logique) was conceived by Alain Colmerauer at the University of Aix-Marseille, France, where the language was first implemented in 1973. PROLOG was further developed by the logician Robert Kowalski, a member of the AI group at the University of Edinburgh. This language makes use of a powerful theorem-proving technique known as resolution, developed in 1963 by the British logician Alan Robinson at the Argonne National Laboratory in Illinois for the United States Atomic Energy Commission. PROLOG can determine whether or not a given statement follows logically from other given statements. For example, given the statements “All logicians are rational” and “Robinson is a logician,” a PROLOG program responds in the affirmative to the query “Robinson is rational?” PROLOG is widely used for AI work, especially in Europe and Japan.
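The logician example can be mimicked outside PROLOG as well. The following Python sketch (a rough analogue, not actual PROLOG syntax) stores the fact that Robinson is a logician and the rule that every logician is rational, and answers the query by simple backward chaining:

```python
# Facts and rules corresponding to the example in the text.
facts = {("logician", "Robinson")}
rules = [("rational", "logician")]   # rational(X) :- logician(X)

def holds(predicate, individual):
    """True if the statement is a known fact or follows from a rule."""
    if (predicate, individual) in facts:
        return True
    return any(head == predicate and holds(body, individual)
               for head, body in rules)

print(holds("rational", "Robinson"))   # True
```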

Researchers at the Institute for New Generation Computer Technology in Tokyo have used PROLOG as the basis for sophisticated logic programming languages. Known as fifth-generation languages, these run on nonnumerical parallel computers developed at the Institute.

Other recent work includes the development of languages for reasoning about time-dependent data such as “the account was paid yesterday.” These languages are based on tense logic, which permits statements to be located in the flow of time. (Tense logic was invented in 1953 by the philosopher Arthur Prior at the University of Canterbury, Christchurch, New Zealand.)

Microworld programs

To cope with the bewildering complexity of the real world, scientists often ignore less relevant details; for instance, physicists often ignore friction and elasticity in their models. In 1970 Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that, likewise, AI research should focus on developing programs capable of intelligent behavior in simpler artificial environments known as microworlds. Much research has focused on the so-called blocks world, which consists of colored blocks of various shapes and sizes arrayed on a flat surface.

An early success of the microworld approach was SHRDLU, written by Terry Winograd of MIT. (Details of the program were published in 1972.) SHRDLU controlled a robot arm that operated above a flat surface strewn with play blocks. Both the arm and the blocks were virtual. SHRDLU would respond to commands typed in natural English, such as “Will you please stack up both of the red blocks and either a green cube or a pyramid.” The program could also answer questions about its own actions. Although SHRDLU was initially hailed as a major breakthrough, Winograd soon announced that the program was, in fact, a dead end. The techniques pioneered in the program proved unsuitable for application in wider, more interesting worlds. Moreover, the appearance that SHRDLU gave of understanding the blocks microworld, and the English statements concerning it, was in fact an illusion. SHRDLU had no idea what a green block was.

 

Another product of the microworld approach was Shakey, a mobile robot developed at the Stanford Research Institute by Bertram Raphael, Nils Nilsson, and others during the period 1968–72. The robot occupied a specially built microworld consisting of walls, doorways, and a few simply shaped wooden blocks. Each wall had a carefully painted baseboard to enable the robot to “see” where the wall met the floor (a simplification of reality that is typical of the microworld approach). Shakey had about a dozen basic abilities, such as TURN, PUSH, and CLIMB-RAMP.

Critics pointed out that Shakey’s environment was extremely simple and stressed that, despite these simplifications, Shakey operated excruciatingly slowly. They noted that a set of acts that a human might plan out and carry out in a matter of minutes took Shakey days to carry out.

The most successful product of the microworld approach to date is a type of program known as an expert system, described in the next section.

Expert systems

Expert systems occupy a type of microworld—for example, a model of a ship’s hold and its cargo—that is self-contained and relatively uncomplicated. For such AI systems, every effort is made to incorporate all the information about some narrow field that an expert (or group of experts) would know, so that a good expert system can often outperform any single human expert. There are many commercial expert systems, including programs for medical diagnosis, chemical analysis, credit authorization, financial management, corporate planning, financial document routing, oil and mineral prospecting, genetic engineering, automobile design and manufacture, camera lens design, computer installation design, airline scheduling, cargo placement, and automatic help services for home computer owners.

Knowledge and inference

The basic components of an expert system are a knowledge base, or KB, and an inference engine. The information to be stored in the KB is obtained by interviewing people who are expert in the area in question. The interviewer, or knowledge engineer, organizes the information elicited from the experts into a collection of rules, typically of an “if-then” structure. Rules of this type are called production rules. The inference engine enables the expert system to draw deductions from the rules in the KB. For example, if the KB contains the production rules “if x, then y” and “if y, then z,” the inference engine is able to deduce “if x, then z.” The expert system might then query its user, “Is x true in the situation that we are considering?” and, if the answer is affirmative, go on to infer z.
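A forward-chaining inference engine of the kind described can be sketched in a few lines. The fragment below (a minimal illustration, not a production-quality engine) repeatedly fires any rule whose conditions are satisfied, so confirming x leads it to conclude y and then z:

```python
# Production rules of the form (set of conditions, conclusion).
rules = [
    ({"x"}, "y"),   # if x, then y
    ({"y"}, "z"),   # if y, then z
]

def forward_chain(known_facts):
    """Apply the rules until no new conclusions can be drawn."""
    facts = set(known_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # fire the rule
                changed = True
    return facts

# If the user confirms that x holds, the engine derives y and then z.
print(forward_chain({"x"}))   # {'x', 'y', 'z'}
```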

Some expert systems use fuzzy logic. In standard logic there are only two truth values, true and false. This absolute precision makes vague attributes or situations difficult to characterize. (When, precisely, does a thinning head of hair become a bald head?) Often the rules that human experts use contain vague expressions, and so it is useful for an expert system’s inference engine to employ fuzzy logic.
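The baldness example suggests how fuzzy membership works in practice: instead of a sharp cutoff, an attribute is assigned a degree between 0 and 1. The sketch below uses made-up numbers purely for illustration:

```python
def degree_bald(hair_count, full_head=100_000):
    """Degree of membership in 'bald': 1.0 at zero hairs, falling linearly to 0.0."""
    return max(0.0, min(1.0, 1.0 - hair_count / full_head))

for hairs in (0, 20_000, 60_000, 120_000):
    print(hairs, round(degree_bald(hairs), 2))
# 0 -> 1.0, 20000 -> 0.8, 60000 -> 0.4, 120000 -> 0.0
```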

DENDRAL

In 1965 the AI researcher Edward Feigenbaum and the geneticist Joshua Lederberg, both of Stanford University, began work on Heuristic DENDRAL (later shortened to DENDRAL), a chemical-analysis expert system. The substance to be analyzed might be, for example, a complicated compound of carbon, hydrogen, and nitrogen. Starting from spectrographic data obtained from the substance, DENDRAL would hypothesize the substance’s molecular structure. DENDRAL’s performance rivaled that of chemists expert at this task, and the program was used in industry and in academia.

MYCIN

Work on MYCIN, an expert system for treating blood infections, began at Stanford University in 1972. MYCIN would attempt to diagnose patients based on reported symptoms and medical test results. The program could request further information concerning the patient, as well as suggest additional laboratory tests, to arrive at a probable diagnosis, after which it would recommend a course of treatment. If requested, MYCIN would explain the reasoning that led to its diagnosis and recommendation. Using about 500 production rules, MYCIN operated at roughly the same level of competence as human specialists in blood infections and rather better than general practitioners.

 

Nevertheless, expert systems have no common sense and no understanding of the limits of their expertise. For instance, if MYCIN were told that a patient who had received a gunshot wound was bleeding to death, the program would attempt to diagnose a bacterial cause for the patient’s symptoms. Expert systems can also act on absurd clerical errors, such as prescribing an obviously incorrect dosage of a drug for a patient whose weight and age data were accidentally transposed.

The CYC project

CYC is a large experiment in symbolic AI. The project began in 1984 under the auspices of the Microelectronics and Computer Technology Corporation, a consortium of computer, semiconductor, and electronics manufacturers. In 1995 Douglas Lenat, the CYC project director, spun off the project as Cycorp, Inc., based in Austin, Texas. The most ambitious goal of Cycorp was to build a KB containing a significant percentage of the commonsense knowledge of a human being. Millions of commonsense assertions, or rules, were coded into CYC. The expectation was that this “critical mass” would allow the system itself to extract further rules directly from ordinary prose and eventually serve as the foundation for future generations of expert systems.

 

With only a fraction of its commonsense KB compiled, CYC could draw inferences that would defeat simpler systems. For example, CYC could infer “Garcia is wet” from the statement “Garcia is finishing a marathon run” by employing its rules that running a marathon entails high exertion, that people sweat at high levels of exertion, and that when something sweats, it is wet. Among the outstanding problems are issues in searching and problem solving—for example, how to search the KB automatically for information that is relevant to a given problem. AI researchers call the problem of updating, searching, and otherwise manipulating a large structure of symbols in realistic amounts of time the frame problem. Some critics of symbolic AI believe that the frame problem is largely unsolvable and so maintain that the symbolic approach will never yield genuinely intelligent systems. It is possible that CYC, for example, will succumb to the frame problem long before the system achieves human levels of knowledge.

Connectionism

Connectionism, or neuronlike computing, developed out of attempts to understand how the human brain works at the neural level and, in particular, how people learn and remember. In 1943 the neurophysiologist Warren McCulloch of the University of Illinois and the mathematician Walter Pitts of the University of Chicago published an influential treatise on neural nets and automata, according to which each neuron in the brain is a simple digital processor and the brain as a whole is a form of computing machine. As McCulloch later put it, “What we believed we were doing (and I think we did quite well) was treating the brain as a Turing machine.”

Creating an artificial neural network

It was not until 1954, however, that Belmont Farley and Wesley Clark of MIT succeeded in running the first artificial neural network—albeit limited by computer memory to no more than 128 neurons. They were able to train their networks to recognize simple patterns. In addition, they discovered that the random destruction of up to 10 percent of the neurons in a trained network did not affect the network’s performance—a feature that is reminiscent of the brain’s ability to tolerate limited damage inflicted by surgery, accident, or disease.

The basic features of connectionism can be illustrated by a simple neural network of five neurons. Four of the neurons handle input, and the fifth, to which each of the others is connected, handles output. Each of the neurons is either firing (1) or not firing (0). Each connection leading to N, the output neuron, has a “weight.” What is called the total weighted input into N is calculated by adding up the weights of all the connections leading to N from neurons that are firing. For example, suppose that only two of the input neurons, X and Y, are firing. Since the weight of the connection from X to N is 1.5 and the weight of the connection from Y to N is 2, the total weighted input to N is 3.5. N has a firing threshold of 4: if its total weighted input equals or exceeds 4, then N fires, and if the total weighted input is less than 4, N does not fire. So, for example, N does not fire if the only input neurons to fire are X and Y, but N does fire if X, Y, and Z all fire (assuming that the weight of the connection from Z to N is at least 0.5).
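The arithmetic in this example is easy to transcribe directly. The short sketch below uses the weights given in the text for X and Y and assumes, purely for illustration, a weight of 1.0 for the fourth input (here called W) and 0.5 for Z:

```python
# Weighted-input rule for the output neuron N (weights for W and Z are assumed).
weights_to_N = {"W": 1.0, "X": 1.5, "Y": 2.0, "Z": 0.5}
THRESHOLD = 4.0

def n_fires(firing_inputs):
    """N fires when the summed weights of its firing inputs reach the threshold."""
    total = sum(weights_to_N[name] for name in firing_inputs)
    return total >= THRESHOLD

print(n_fires({"X", "Y"}))        # False: 1.5 + 2.0 = 3.5 < 4
print(n_fires({"X", "Y", "Z"}))   # True:  1.5 + 2.0 + 0.5 = 4.0 >= 4
```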

 

Training the network involves two steps. First, the external agent inputs a pattern and observes the behavior of N. Second, the agent adjusts the connection weights in accordance with the following rules:

 

If the actual output is 0 and the desired output is 1, increase by a small fixed amount the weight of each connection leading to N from neurons that are firing (thus making it more likely that N will fire the next time the network is given the same pattern);

If the actual output is 1 and the desired output is 0, reduce the weight of each link that leads to the output neuron from neurons that are firing by the same modest amount (thus making it less likely that the output neuron will fire the next time the network is given that pattern as input).

The external agent—actually a computer program—goes through this two-step procedure with each pattern in a training sample, and the whole sample is presented repeatedly. During these many repetitions, a pattern of connection weights is forged that enables the network to respond correctly to each pattern. The striking thing is that the learning process is entirely mechanical and requires no human intervention or adjustment: the connection weights are increased or decreased automatically by a constant amount, and exactly the same learning procedure applies to different tasks.
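Putting the firing rule and the two weight-adjustment rules together gives a complete, if tiny, training loop. The sketch below is illustrative only—the training patterns, the step size, and the starting weights are all invented—but it shows how purely mechanical adjustments settle into weights that answer both patterns correctly:

```python
INPUTS = ["W", "X", "Y", "Z"]
weights = {name: 0.0 for name in INPUTS}   # all connection weights start at zero
THRESHOLD = 4.0
STEP = 0.5                                  # the "small fixed amount"

def output(pattern):
    """pattern maps each input neuron to 1 (firing) or 0 (quiet)."""
    total = sum(weights[name] for name in INPUTS if pattern[name] == 1)
    return 1 if total >= THRESHOLD else 0

# Training sample: N should fire for the first pattern but not for the second.
training_sample = [
    ({"W": 1, "X": 1, "Y": 1, "Z": 1}, 1),
    ({"W": 1, "X": 0, "Y": 0, "Z": 1}, 0),
]

for _ in range(50):                         # repeated presentations of the whole sample
    for pattern, desired in training_sample:
        actual = output(pattern)
        if actual == 0 and desired == 1:    # too quiet: strengthen active connections
            for name in INPUTS:
                if pattern[name] == 1:
                    weights[name] += STEP
        elif actual == 1 and desired == 0:  # too eager: weaken active connections
            for name in INPUTS:
                if pattern[name] == 1:
                    weights[name] -= STEP

print(weights)                                   # e.g. every weight ends up at 1.0
print([output(p) for p, _ in training_sample])   # [1, 0] -- both patterns correct
```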


Perceptrons

In 1957 Frank Rosenblatt of the Cornell Aeronautical Laboratory at Cornell University in Ithaca, New York, began investigating artificial neural networks that he called perceptrons. He made major contributions to the field of AI, both through experimental investigations of the properties of neural networks (using computer simulations) and through detailed mathematical analysis. Rosenblatt was a charismatic communicator, and soon many research groups in the United States were studying perceptrons. Rosenblatt and his followers called their approach connectionist to emphasize the importance, in learning, of the creation and modification of connections between neurons, and modern researchers have adopted this term.

One of Rosenblatt’s contributions was to generalize the training procedure that Farley and Clark had applied only to two-layer networks so that it could also be applied to multilayer networks. Rosenblatt used the phrase “back-propagating error correction” to describe his method. The method, with substantial improvements and extensions by numerous researchers, and the term back-propagation are now in everyday use in connectionism.

Conjugating verbs

In a famous connectionist experiment conducted at the University of California at San Diego (published in 1986), David Rumelhart and James McClelland trained a network of 920 artificial neurons, arranged in two layers of 460 neurons, to form the past tenses of English verbs. Root forms of verbs—such as come, look, and sleep—were presented to one layer of neurons, the input layer. A supervisory computer program observed the difference between the actual response at the layer of output neurons and the desired response—came, say—and then mechanically adjusted the connections throughout the network in accordance with the procedure described above, giving the network a slight push in the direction of the correct response. About 400 different verbs were presented one by one to the network, and the connections were adjusted after each presentation. This whole procedure was repeated about 200 times using the same verbs, after which the network could correctly form the past tense of many unfamiliar verbs as well as of the original verbs. For example, when presented for the first time with guard, the network responded guarded; with weep, wept; with cling, clung; and with drip, dripped (complete with double p). This is a striking example of learning involving generalization. (Sometimes, though, the peculiarities of English were too much for the network, and it formed squawked from squat, shipped from shape, and membled from mail.)

Another name for connectionism is parallel distributed processing, which emphasizes two important features. First, a large number of relatively simple processors—the neurons—operate in parallel. Second, neural networks store information in a distributed fashion, with each individual connection participating in the storage of many different items of information. The know-how that enabled the past-tense network to form wept from weep, for example, was not stored in one specific location in the network but was spread throughout the entire pattern of connection weights that was forged during training. The human brain also appears to store information in a distributed fashion, and connectionist research is contributing to attempts to understand how it does so.

Other neural networks

Other work on computing with a neuron-like structure includes the following:

Visual perception. Networks can recognize faces and other objects from visual data. A neural network designed by John Hummel and Irving Biederman at the University of Minnesota can identify about 10 objects, such as a mug and a frying pan, from simple line drawings, even when the objects are drawn from different angles. Networks investigated by Tomaso Poggio of MIT are able to recognize bent-wire shapes drawn from different angles, faces photographed from different angles and showing different expressions, and objects from cartoon drawings with gray-scale shading indicating depth and orientation.

The processing of language. Neural networks are able to convert handwritten and typewritten material to electronic text; the U.S. Internal Revenue Service has approved the development of a neuronlike system that will read tax returns and correspondence automatically. Neural networks also convert written text to speech and speech to written text.

Financial analysis. Neural networks are being used increasingly for loan risk assessment, real estate valuation, bankruptcy prediction, share price prediction, and other business applications.

Medicine. Medical applications include the detection of lung nodules and heart arrhythmias and the prediction of adverse drug reactions.

Telecommunications. Telecommunications applications of neural networks include control of telephone switching networks and echo cancellation in modems and on satellite links.

Nouvelle AI

New foundations

The approach now known as nouvelle AI was pioneered at the MIT AI Laboratory by the Australian Rodney Brooks during the latter half of the 1980s. Nouvelle AI distances itself from strong AI, with its emphasis on human-level performance, in favor of the relatively modest aim of insect-level performance. At a very fundamental level, nouvelle AI rejects symbolic AI’s reliance on constructing internal models of reality, such as those described in the section Microworld programs. Practitioners of nouvelle AI assert that true intelligence involves the ability to function in a real-world environment.

A central idea of nouvelle AI is that intelligence, as expressed in complex behavior, "emerges" from the interaction of a few simple behaviors. For example, a robot whose simple behaviors are avoiding collisions and moving toward a moving object will appear to stalk the object, stopping whenever it gets too close; a minimal sketch of this idea follows.
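The sketch below is a hypothetical illustration in the spirit of behavior-based control, not Brooks's actual architecture; the distances, thresholds, and step sizes are invented. Each tick the controller reads its "sensor" (the current distance) directly, the higher-priority avoid behavior overrides the approach behavior, and the apparent stalking is never programmed explicitly.

```python
# Illustrative sketch of behavior-based control: two simple behaviors
# ("avoid collisions" and "move toward a moving object") combine to
# produce apparent stalking. All numeric values are hypothetical.

def avoid(distance_to_object):
    """Higher-priority behavior: back off if too close (threshold assumed)."""
    if distance_to_object < 1.0:
        return -0.5          # retreat
    return None              # no opinion; defer to the lower-priority behavior

def approach(distance_to_object):
    """Lower-priority behavior: always move toward the object."""
    return 0.3               # advance

def control_step(distance_to_object):
    """Simple arbitration: 'avoid' suppresses 'approach' when it fires."""
    action = avoid(distance_to_object)
    if action is None:
        action = approach(distance_to_object)
    return action

# Simple simulation: the object drifts away; the robot consults its
# "sensor" (the current distance) each tick instead of a world model.
object_position, robot_position = 5.0, 0.0
for tick in range(20):
    object_position += 0.1                       # the object drifts
    distance = object_position - robot_position  # sensor reading
    robot_position += control_step(distance)
    print(f"tick {tick:2d}: distance = {distance:4.2f}")
```

Running the loop shows the robot closing in on the object and then hovering near the threshold distance, giving the impression of deliberate following even though no "follow" behavior exists anywhere in the code.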

 

One of the best-known examples of nouvelle AI is Brooks's robot Herbert (named after Herbert Simon), whose environment is the bustling offices of the MIT AI Laboratory. Herbert wanders around the offices searching for empty soda cans, which it picks up and carries away on its little cart. The robot's seemingly goal-directed behavior emerges from the interaction of about 15 simple behaviors. More recently, Brooks has constructed prototypes of mobile robots for exploring the surface of Mars.

Nouvelle AI sidesteps the frame problem, discussed in the section The CYC project. Nouvelle systems do not contain a complicated symbolic model of their environment, as older systems do. Instead, information is left "out in the world" until the system needs it. A nouvelle system refers continuously to its sensors rather than to an internal model of the world: it "reads off" from the external world whatever information it needs at precisely the moment it needs it.

The situated approach

Traditional AI has, for the most part, attempted to build disembodied intelligences whose only interaction with the world is indirect (CYC, for example). Nouvelle AI, by contrast, attempts to build embodied intelligences situated in the real world, a method now known as the situated approach. Brooks quoted approvingly from the brief sketches of the situated approach that Turing gave in 1948 and 1950. Turing suggested that a computer equipped "with the best sense organs that money can buy" could be taught "to understand and speak English" by a process that would "follow the normal teaching of a child." Turing contrasted this with an approach to AI that focuses on abstract activities, such as playing chess. He advocated that both approaches be pursued, but until recently the situated approach received little attention.

 

The situated approach was also anticipated in the writings of the philosopher Hubert Dreyfus of the University of California, Berkeley. Beginning in the early 1960s, Dreyfus opposed the physical symbol system hypothesis, arguing that intelligent behavior cannot be fully captured by symbolic descriptions, a position he maintained throughout his career. As an alternative, Dreyfus advocated a view of intelligence that stresses the need for a body that can move about and interact directly with tangible physical objects. Once derided by advocates of AI, Dreyfus is now regarded as a prophet of the situated approach.

Critics of nouvelle AI point out that researchers have failed to build a system exhibiting anything like the complexity of behavior found in real insects. Suggestions by researchers that their nouvelle systems may soon be conscious and possess language appear entirely premature.

Is strong AI possible?

As the earlier sections of this article suggest, applied artificial intelligence and cognitive simulation seem likely to continue to succeed in the years to come. Strong AI, however, the attempt to replicate human intellectual abilities, remains controversial. Its reputation has been damaged by exaggerated claims of success that have appeared both in professional journals and in the popular press. At present, even an embodied system displaying the overall intelligence of a cockroach is proving elusive, let alone one that can rival a human being. The difficulty of scaling up AI's modest achievements cannot be overstated. Half a century of research in symbolic AI has failed to produce any firm evidence that a symbol system can manifest human levels of general intelligence; connectionists are unable to model the nervous systems of even the simplest invertebrates; and critics of nouvelle AI regard as merely mystical the view that high-level behaviors such as language understanding, planning, and reasoning will somehow emerge from the interaction of basic behaviors such as obstacle avoidance.

 

However, this lack of substantial progress may simply testify to how difficult strong AI is, not to its impossibility. Let us turn to the very idea of strong artificial intelligence. Can a machine think? Noam Chomsky has suggested that debating this question is pointless, because it is essentially an arbitrary decision whether to extend the ordinary usage of the word think to machines. In his view, there is no factual question as to whether any such decision is right or wrong, just as there is no question of whether our decision to say that airplanes fly is right, or our decision not to say that ships swim is wrong. This, however, seems to oversimplify matters. The important question is: Could it ever be appropriate to say that computers think, and, if so, what conditions must a computer satisfy in order to be described in that way?

 

Some authors have offered the Turing test as a definition of intelligence. However, Turing himself pointed out that a computer that ought to be described as intelligent might still fail his test if it were incapable of successfully imitating a human being. For example, why should an intelligent robot designed to oversee mining on the Moon need to be able to pass itself off in conversation as a human being? If an intelligent entity can fail the test, then the test cannot serve as a definition of intelligence. It is even questionable whether passing the test would show that a computer is intelligent, as the information theorist Claude Shannon and the AI pioneer John McCarthy pointed out as early as 1956. Shannon and McCarthy argued that it is possible, in principle, to design a machine containing a complete set of canned responses to every question that an interrogator could ask during the fixed time span of the test. Like Parry, this machine would produce answers to the interviewer's questions by looking up appropriate responses in a giant table. This objection seems to show that, in principle, the Turing test can be passed by a machine with no intelligence at all.
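To make the Shannon-McCarthy objection concrete, here is a minimal, hypothetical sketch of such a "canned response" machine. The table below is obviously far too small; the argument requires an astronomically large table covering every conversation possible within the test's time limit. The point is simply that the mechanism involves nothing more than looking up a stored reply.

```python
# Minimal sketch of the Shannon-McCarthy "canned responses" machine:
# every anticipated interrogator question maps to a prestored reply.
# The table here is a tiny, invented stand-in for the enormous table
# the argument actually requires.

CANNED_RESPONSES = {
    "how are you today?": "Quite well, thank you. And you?",
    "what is your favorite color?": "Blue, although it changes with my mood.",
    "can machines think?": "That is a question I often wonder about myself.",
}

def lookup_machine(question: str) -> str:
    """Answer by pure table lookup; no reasoning of any kind takes place."""
    key = question.strip().lower()
    return CANNED_RESPONSES.get(key, "I'd rather not answer that.")

if __name__ == "__main__":
    for q in ["How are you today?", "Can machines think?"]:
        print(f"Interrogator: {q}")
        print(f"Machine:      {lookup_machine(q)}")
```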

In fact, AI has no real definition of intelligence to offer, not even for subhuman intelligence. Rats are intelligent, but exactly what must an artificial intelligence achieve before its developers may claim a comparable success? In the absence of a reasonably precise criterion for when an artificial system counts as intelligent, there is no objective way of telling whether an AI research program has succeeded or failed. One result of AI's failure to produce a satisfactory criterion of intelligence is that, whenever researchers achieve one of AI's goals, such as a program that can summarize newspaper articles or beat the world chess champion, critics can say, "That's not intelligence!" Marvin Minsky's response to the problem of defining intelligence is to maintain, as Turing did before him, that intelligence is simply our name for any problem-solving mental process that we do not yet understand. Minsky likens intelligence to the concept of "unexplored regions of Africa": it disappears as soon as it is discovered.

FAQS

What is the impact of artificial intelligence (AI) on society?

The impact of artificial intelligence on society is widely debated. Many argue that AI improves the quality of life by performing routine and even complicated tasks better than humans can, making life simpler, safer, and more efficient. Others argue that AI threatens people's privacy, perpetuates racism by reducing individuals to stereotypes, and costs workers their jobs, leading to greater unemployment. Visit ProCon.org to learn more about the arguments for and against the use of artificial intelligence.

Did You Know?

  • Both Siri and Google Translate use artificial intelligence programs modeled on the learning processes of human neural networks.
  • As of 2016, American businesses controlled almost two-thirds of the world’s total investment in artificial intelligence.
