Artificial intelligence (AI): history, goals, and future

                                      Artificial intelligence


Artificial intelligence is a branch of computer science that seeks to reproduce human intelligence and thinking ability in computers. Artificial intelligence has now become an academic field of study that teaches how to create computers and software capable of intelligent behavior.


The Japanese robot ASIMO

Artificial intelligence is the effort to reproduce human intelligence and thinking ability through technology.[1] Computers are made to mimic human cognition so that they can carry out tasks such as learning and problem-solving. Artificial intelligence (AI) is intelligence displayed by a machine. In computer science, the field of AI research defines itself as the study of "intelligent agents": any device that can perceive its environment and take actions that maximize its chance of success at some goal. The term "artificial intelligence" is applied when a machine performs "cognitive" functions that humans associate with other human minds, such as "learning" and "problem-solving". Andreas Kaplan and Michael Haenlein define artificial intelligence as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation".





As machines become increasingly capable, tasks once considered to require "intelligence" are often removed from the definition of AI. For example, optical character recognition is no longer perceived as an example of "artificial intelligence", having become a routine technology. Capabilities currently classified as AI include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go), autonomous driving, military simulations, and interpreting complex data.



AI research is divided into a number of subfields that focus on specific problems, particular approaches, the use of specialized tools, or the pursuit of particular applications.


History

Thinking artificial beings first appeared as storytelling devices; the idea of actually building a device capable of effective reasoning probably began with Ramon Llull (c. 1300 AD). With his calculus ratiocinator, Gottfried Leibniz extended the concept of the calculating machine (which Wilhelm Schickard had first engineered around 1623) to perform operations on concepts rather than numbers. Artificial beings have been common in fiction since the nineteenth century, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).


The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing's theory of computation, which showed that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis. Discoveries in neuroscience, information theory, and cybernetics led researchers to consider the possibility of building an electronic brain. The first work now generally recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons".


The field of AI research was founded at a workshop at Dartmouth College in 1956. Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT), and Arthur Samuel (IBM) became the founders and leaders of AI research. They and their students produced programs that the press described as "astonishing": computers were winning at checkers, solving word problems in algebra, proving logical theorems, and speaking English. By the mid-1960s, the United States Department of Defense was funding AI research heavily, and laboratories had been established around the world. The founders of AI were optimistic about the future: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do". Marvin Minsky agreed: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."



They had failed to recognize the difficulty of some of the remaining tasks. Progress slowed, and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress, the British and US governments cut off funding for undirected AI research. The years that followed, when funding for AI projects was hard to find, would later be called the "AI winter".


In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulates the knowledge and analytical skills of human experts. By 1985, the market for AI had reached more than a billion dollars. At the same time, Japan's fifth-generation computer project inspired the US and British governments to restore funding for academic research. However, beginning with the collapse of the Lisp machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.


In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis, and other areas. The success was due to increasing computational power (see Moore's law), greater emphasis on solving specific problems, new ties between AI and other fields, and a commitment by researchers to mathematical methods and scientific standards. Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.


Advanced statistical techniques (loosely known as deep learning), access to large amounts of data, and faster computers enabled advances in machine learning and perception. By the mid-2010s, machine learning applications were used throughout the world. In a Jeopardy! quiz show exhibition match, IBM's question answering system Watson defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. The Kinect, which provides a 3D body-motion interface for the Xbox 360 and Xbox One, uses algorithms that emerged from lengthy AI research, as do intelligent personal assistants on smartphones. In March 2016, AlphaGo won 4 of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps. At the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie, who had continuously held the world No. 1 ranking for two years.


According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from "sporadic usage" in 2012 to more than 2,700 projects. Clark also pointed out that error rates in image processing tasks have fallen significantly since 2011. He attributed this to the rise of cloud computing infrastructure and research tools.

With the help of AI, we can animate an old photo.



He also emphasized the growth of affordable neural networks, driven by increases in computing power and datasets. Other examples he cited include Microsoft's development of a Skype system that can automatically translate from one language to another, and Facebook's system that can describe images to blind people.


Goals

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.



Erik Sandewall emphasizes planning and learning that is relevant and applicable to a given situation.

Reasoning and problem solving

Early researchers developed algorithms that imitated the step-by-step reasoning humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.


For difficult problems, these algorithms can require enormous computational resources; most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical once problems exceed a certain size. The search for more efficient problem-solving algorithms became a high priority.
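The combinatorial explosion can be illustrated with a small sketch. The function `count_leaves` and the numbers below are purely illustrative (not from any particular AI system): exhaustively enumerating every sequence of `depth` decisions, with `branching` choices at each step, forces a brute-force search to examine `branching ** depth` possibilities, so the work grows exponentially with problem size.

```python
# Illustrative sketch of the combinatorial explosion in brute-force search.

def count_leaves(branching: int, depth: int) -> int:
    """Count the leaf nodes an exhaustive search must examine
    when there are `branching` choices at each of `depth` steps."""
    if depth == 0:
        return 1
    # Recurse once per available choice at this step.
    return sum(count_leaves(branching, depth - 1) for _ in range(branching))

# Even with only 3 choices per step, the count grows exponentially:
for depth in range(1, 6):
    print(depth, count_leaves(3, depth))  # 3, 9, 27, 81, 243
```

This is why practical systems rely on heuristics and pruning to avoid visiting most of the search space rather than enumerating it outright.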



Human beings ordinarily use fast, intuitive judgments rather than the step-by-step deduction that early AI research was able to model. AI has progressed using "sub-symbolic" problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside the brain that give rise to this skill; and statistical approaches to AI mimic the human ability to guess.


Knowledge representation

Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things AI needs to represent are: objects, properties, categories, and relations between objects; situations, events, states, and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well-researched domains. A representation of "what exists" is an ontology: the set of objects, relations, concepts, and properties the machine knows about, formally described so that software agents can interpret them. The most general ontologies, called upper ontologies, attempt to provide a foundation for all other knowledge.
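As a toy sketch of the idea (this is an invented illustration, not the design of any real knowledge base such as Cyc), facts about objects, categories, and relations can be stored as subject/relation/object triples and retrieved by pattern matching:

```python
# Toy knowledge base: facts as (subject, relation, object) triples.
facts = {
    ("Tweety", "is_a", "bird"),
    ("bird", "is_a", "animal"),
    ("bird", "can", "fly"),
}

def query(subject=None, relation=None, obj=None):
    """Return every triple matching the given pattern;
    None in any position acts as a wildcard."""
    return [
        (s, r, o)
        for (s, r, o) in facts
        if subject in (None, s) and relation in (None, r) and obj in (None, o)
    ]

print(query(relation="is_a"))        # all category-membership facts
print(query(subject="bird"))         # everything known about birds
```

Real knowledge representation systems add inference on top of storage (for example, deriving that Tweety can fly from the triples above), which is where the difficulties discussed next arise.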


Among the most difficult problems in knowledge representation are:


Default reasoning and the qualification problem

Many of the things people know take the form of "working assumptions". For example, if a bird comes up in conversation, people typically picture an animal that has a particular shape, sings, and can fly. None of these things are true of all birds. John McCarthy identified this in 1969 as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.


The breadth of commonsense knowledge

The number of atomic facts that the average person knows is very large. Research projects that attempt to build a complete knowledge base of commonsense knowledge (e.g., Cyc) require enormous amounts of laborious ontological engineering; such knowledge bases must be built, by hand, one complicated concept at a time. A major goal is for the computer to understand enough concepts to be able to learn by reading from sources like the internet, and thus to add to its own ontology.


The subsymbolic form of some commonsense knowledge

Much of what people know is not represented as "facts" or "statements" that they could express verbally. For example, a chess master will avoid a particular chess position because it "feels too exposed", or an art critic can take one look at a statue and realize that it is a fake. These are non-conscious and sub-symbolic intuitions or tendencies in the human brain. Such knowledge informs, supports, and provides a context for symbolic, conscious knowledge. As with the related problem of sub-symbolic reasoning, it is hoped that situated AI, computational intelligence, or statistical AI will provide ways to represent this kind of knowledge.



Planning

Intelligent agents must be able to set goals and achieve them. They need a way to visualize the future (a representation of the state of the world, and the ability to make predictions about how their actions will change it) and to be able to make choices that maximize the utility (or "value") of the available options.


In classical planning problems, the agent can assume that it is the only system acting in the world, which allows it to be certain of the consequences of its actions. However, if the agent is not the only actor, it must reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions, but also evaluate those predictions and adapt based on its assessment.
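Under the classical-planning assumptions just described (a single actor and deterministic actions), a planner can simply search the space of world states. The following minimal sketch is invented for illustration; the `plan` function and the toy "door" domain are not drawn from any planning library:

```python
from collections import deque

def plan(start, goal, actions):
    """Breadth-first search for a sequence of action names from start to goal.

    `actions` maps an action name to a deterministic effect: state -> new state.
    States must be hashable (here, frozensets of facts). Returns None if no
    plan exists."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, effect in actions.items():
            nxt = effect(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

# Hypothetical toy domain: a door that must be unlocked before it can open.
actions = {
    "unlock": lambda s: s | {"unlocked"},
    "open":   lambda s: s | {"open"} if "unlocked" in s else s,
}
print(plan(frozenset(), frozenset({"unlocked", "open"}), actions))
# prints ['unlock', 'open']
```

The determinism assumption is what makes this search valid: every action's outcome is known in advance, so the planner never needs to reconsider a prediction.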


Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.


Learning

Machine learning, a fundamental concept of AI research since the field's inception, is the study of computer algorithms that improve automatically through experience.


Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs to, after seeing a number of examples from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. In reinforcement learning, the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space. These three types of learning can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is known as computational learning theory, a branch of theoretical computer science.
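As a minimal sketch of supervised learning by regression (plain Python with no libraries; the function `fit_line` and the sample data are invented for illustration), a line can be fitted to labeled examples with ordinary least squares and then used to predict outputs for unseen inputs:

```python
def fit_line(xs, ys):
    """Fit y = slope * x + intercept by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training examples where the underlying relationship is y = 2x + 1.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
a, b = fit_line(xs, ys)
print(a, b)        # 2.0 1.0
print(a * 10 + b)  # 21.0 -- the prediction for the unseen input 10
```

The same supervised framing carries over to classification, where the predicted output is a category label rather than a number.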






References