
What is Artificial Intelligence in Computer

In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and animals.

Leading AI textbooks define the field as the study of intelligent agents.

Less than a decade after breaking the Nazi encryption machine Enigma and helping the Allied Forces win World War II, mathematician Alan Turing changed history a second time with a simple question: “Can machines think?”

Turing’s 1950 paper “Computing Machinery and Intelligence,” and the Turing Test it proposed, established the fundamental goal and vision of artificial intelligence.


Artificial Intelligence Definition

At its core, AI is the branch of computer science that aims to answer Turing’s question in the affirmative. It is an endeavour to replicate or simulate human intelligence in machines.

The expansive goal of artificial intelligence has given rise to many questions and debates. So much so that no single definition of the field is universally accepted.

The major limitation in defining AI as simply “building machines that are intelligent” is that it doesn’t actually explain what artificial intelligence is. What makes a machine intelligent?

In their groundbreaking textbook Artificial Intelligence: A Modern Approach, authors Stuart Russell and Peter Norvig approach the question by unifying their work around the theme of intelligent agents in machines. With this in mind, AI is “the study of agents that receive percepts from the environment and perform actions.”
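Russell and Norvig’s agent framing maps cleanly onto code. Below is a minimal, illustrative Python sketch of that abstraction, using the simple two-square vacuum world often used to teach it; the class and names are invented for illustration and are not taken from any particular library.

```python
# A minimal sketch of the "intelligent agent" abstraction: the agent receives
# a percept from its environment and returns an action. Names are illustrative.
from dataclasses import dataclass


@dataclass
class Percept:
    location: str  # which square the agent is in, e.g. "A" or "B"
    dirty: bool    # whether that square is dirty


class ReflexVacuumAgent:
    """A simple reflex agent: maps the current percept directly to an action."""

    def act(self, percept: Percept) -> str:
        if percept.dirty:
            return "Suck"
        return "Right" if percept.location == "A" else "Left"


agent = ReflexVacuumAgent()
print(agent.act(Percept(location="A", dirty=True)))   # -> Suck
print(agent.act(Percept(location="A", dirty=False)))  # -> Right
```

Even this trivial agent fits the definition above: it perceives its environment and acts on it. What separates research systems from this sketch is how the mapping from percepts to actions is built.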

Artificial Intelligence Future

Norvig and Russell go on to explore four approaches that have historically defined the field: thinking humanly, thinking rationally, acting humanly and acting rationally. The first two ideas concern thought processes and reasoning, while the others deal with behavior. Norvig and Russell focus particularly on rational agents that act to achieve the best outcome, noting “all the skills needed for the Turing Test also allow an agent to act rationally” (Russell and Norvig 4).

Patrick Winston, the Ford professor of artificial intelligence and computer science at MIT, defines AI as “algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together.”

The field was founded on the assumption that human intelligence “can be so precisely described that a machine can be made to simulate it.” This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues that have been explored by myth, fiction and philosophy since antiquity. Some people also consider AI to be a danger to humanity if it progresses unabated. Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.

Artificial Intelligence Examples

  • Narrow AI: Sometimes referred to as “Weak AI,” this kind of artificial intelligence operates within a limited context and is a simulation of human intelligence. Narrow AI is often focused on performing a single task extremely well, and while these machines may seem intelligent, they are operating under far more constraints and limitations than even the most basic human intelligence.
     
  • Artificial General Intelligence (AGI): AGI, sometimes referred to as “Strong AI,” is the kind of artificial intelligence we see in the movies, like the robots from Westworld or Data from Star Trek: The Next Generation. AGI is a machine with general intelligence and, much like a human being, it can apply that intelligence to solve any problem. 
  • Smart assistants (like Siri and Alexa)
  • Disease mapping and prediction tools
  • Manufacturing and drone robots
  • Optimized, personalized healthcare treatment recommendations
  • Conversational bots for marketing and customer service
  • Robo-advisors for stock trading
  • Spam filters on email
  • Social media monitoring tools for dangerous content or false news
  • Song or TV show recommendations from Spotify and Netflix

Narrow Artificial Intelligence

Narrow AI is all around us and is easily the most successful realization of artificial intelligence to date. With its focus on performing specific tasks, Narrow AI has experienced numerous breakthroughs in the last decade that have had “significant societal benefits and have contributed to the economic vitality of the nation,” according to “Preparing for the Future of Artificial Intelligence,” a 2016 report released by the Obama Administration. 

A few examples of Narrow AI include: 

  • Google search
  • Image recognition software
  • Siri, Alexa and other personal assistants
  • Self-driving cars
  • IBM’s Watson 

Machine Learning & Deep Learning 

Much of Narrow AI is powered by breakthroughs in machine learning and deep learning. Understanding the difference between artificial intelligence, machine learning and deep learning can be confusing. Venture capitalist Frank Chen provides a good overview of how to distinguish between them, noting that machine learning is one of many approaches to achieving artificial intelligence, and deep learning is one technique within machine learning.
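To make the distinction concrete, here is a small, self-contained Python sketch of machine learning in its simplest form: a single perceptron whose decision rule is learned from labelled examples rather than hand-coded. The data and function names are invented for illustration; deep learning, in turn, stacks many such learned units into layers.

```python
# Illustrative only: a one-unit "machine learning" model trained from examples.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights for a single linear unit using the perceptron rule."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b


# Toy data: the rule "is x0 + x1 greater than 1?" is learned, not hand-written.
X = [(0.1, 0.2), (0.9, 0.8), (0.2, 0.1), (0.7, 0.9)]
y = [0, 1, 0, 1]
w, b = train_perceptron(X, y)
print("learned weights:", w, "bias:", b)
```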

BENEFITS & RISKS OF ARTIFICIAL INTELLIGENCE

WHY RESEARCH AI SAFETY?

In the near term, the goal of keeping AI’s impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security and control. Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.

In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.

There are some who question whether strong AI will ever be achieved, and others who insist that the creation of superintelligent AI is guaranteed to be beneficial. At FLI we recognize both of these possibilities, but also recognize the potential for an artificial intelligence system to intentionally or unintentionally cause great harm. We believe research today will help us better prepare for and prevent such potentially negative consequences in the future, thus enjoying the benefits of AI while avoiding pitfalls.

HOW CAN AI BE DANGEROUS?

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think two scenarios most likely:

  1. The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is one that’s present even with narrow AI but grows as levels of AI intelligence and autonomy increase.
  2. The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.

As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.

Reasoning, problem solving

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.

These algorithms proved to be insufficient for solving large reasoning problems because they experienced a “combinatorial explosion”: they became exponentially slower as the problems grew larger. In fact, even humans rarely use the step-by-step deduction that early AI research was able to model. They solve most of their problems using fast, intuitive judgments.
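A tiny, hypothetical example makes the combinatorial explosion concrete: exhaustively searching every ordering of a small travelling-salesman tour is easy, but the number of orderings grows factorially, so the same step-by-step approach becomes hopeless as the problem grows. The city coordinates below are made up for illustration.

```python
# Illustrative brute-force search showing combinatorial explosion.
from itertools import permutations
from math import dist, factorial

cities = [(0, 0), (1, 5), (4, 2), (6, 6), (3, 1), (7, 3)]  # made-up coordinates


def tour_length(order):
    """Total length of visiting the cities in the given order."""
    return sum(dist(cities[a], cities[b]) for a, b in zip(order, order[1:]))


# Exact, exhaustive search: fine for 6 cities, hopeless for 60.
best = min(permutations(range(len(cities))), key=tour_length)
print("best tour:", best, "length:", round(tour_length(best), 2))

for n in (6, 10, 20):
    print(f"{n} cities -> {factorial(n):,} orderings to check")
```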


History of Artificial Intelligence

Intelligent robots and artificial beings first appeared in ancient Greek myths. Aristotle’s development of the syllogism and its use of deductive reasoning was a key moment in mankind’s quest to understand its own intelligence. While the roots are long and deep, the history of artificial intelligence as we think of it today spans less than a century. The following is a quick look at some of the most important events in AI.

1943
  • Warren McCulloch and Walter Pitts publish “A Logical Calculus of the Ideas Immanent in Nervous Activity.” The paper proposes the first mathematical model for building a neural network.
1949
  • In his book The Organization of Behavior: A Neuropsychological Theory, Donald Hebb proposes the theory that neural pathways are created from experiences and that connections between neurons become stronger the more frequently they’re used. Hebbian learning continues to be an important model in AI.
1950
  • Alan Turing publishes “Computing Machinery and Intelligence,” proposing what is now known as the Turing Test, a method for determining if a machine is intelligent.
  • Harvard undergraduates Marvin Minsky and Dean Edmonds build SNARC, the first neural network computer.
  • Claude Shannon publishes the paper “Programming a Computer for Playing Chess.”
  • Isaac Asimov publishes the “Three Laws of Robotics.”  
1952
  • Arthur Samuel develops a self-learning program to play checkers. 
1954
  • The Georgetown-IBM machine translation experiment automatically translates 60 carefully selected Russian sentences into English. 
1956
  • The phrase artificial intelligence is coined at the “Dartmouth Summer Research Project on Artificial Intelligence.” Led by John McCarthy, the conference, which defined the scope and goals of AI, is widely considered to be the birth of artificial intelligence as we know it today. 
  • Allen Newell and Herbert Simon demonstrate Logic Theorist (LT), the first reasoning program. 
1958
  • John McCarthy develops the AI programming language Lisp and publishes the paper “Programs with Common Sense.” The paper proposed the hypothetical Advice Taker, a complete AI system with the ability to learn from experience as effectively as humans do.  
1959
  • Allen Newell, Herbert Simon and J.C. Shaw develop the General Problem Solver (GPS), a program designed to imitate human problem-solving. 
  • Herbert Gelernter develops the Geometry Theorem Prover program.
  • Arthur Samuel coins the term machine learning while at IBM.
  • John McCarthy and Marvin Minsky found the MIT Artificial Intelligence Project.
1963
  • John McCarthy starts the AI Lab at Stanford.
1966
  • The Automatic Language Processing Advisory Committee (ALPAC) report by the U.S. government details the lack of progress in machine translation research, a major Cold War initiative with the promise of automatic and instantaneous translation of Russian. The ALPAC report leads to the cancellation of all government-funded MT projects.
1969
  • The first successful expert systems, DENDRAL, a chemical-analysis program, and MYCIN, designed to diagnose blood infections, are created at Stanford.
1972
  • The logic programming language PROLOG is created.
1973
  • The “Lighthill Report,” detailing the disappointments in AI research, is released by the British government and leads to severe cuts in funding for artificial intelligence projects. 
1974-1980
  • Frustration with the progress of AI development leads to major DARPA cutbacks in academic grants. Combined with the earlier ALPAC report and the previous year’s “Lighthill Report,” artificial intelligence funding dries up and research stalls. This period is known as the “First AI Winter.” 
1980
  • Digital Equipment Corporation develops R1 (also known as XCON), the first successful commercial expert system. Designed to configure orders for new computer systems, R1 kicks off an investment boom in expert systems that will last for much of the decade, effectively ending the first “AI Winter.”
1982
  • Japan’s Ministry of International Trade and Industry launches the ambitious Fifth Generation Computer Systems project. The goal of FGCS is to develop supercomputer-like performance and a platform for AI development.
1983
  • In response to Japan’s FGCS, the U.S. government launches the Strategic Computing Initiative to provide DARPA-funded research in advanced computing and artificial intelligence.
1985
  • Companies are spending more than a billion dollars a year on expert systems, and an entire industry known as the Lisp machine market springs up to support them. Companies like Symbolics and Lisp Machines Inc. build specialized computers to run the AI programming language Lisp.
1987-1993
  • As computing technology improves, cheaper alternatives emerge and the Lisp machine market collapses in 1987, ushering in the “Second AI Winter.” During this period, expert systems prove too expensive to maintain and update, eventually falling out of favor.
  • Japan terminates the FGCS project in 1992, citing failure in meeting the ambitious goals outlined a decade earlier.
  • DARPA ends the Strategic Computing Initiative in 1993 after spending nearly $1 billion and falling far short of expectations. 
1991
  • U.S. forces deploy DART, an automated logistics planning and scheduling tool, during the Gulf War.
1997
  • IBM’s Deep Blue beats world chess champion Garry Kasparov.
2005
  • STANLEY, a self-driving car, wins the DARPA Grand Challenge.
  • The U.S. military begins investing in autonomous robots like Boston Dynamics’ “Big Dog” and iRobot’s “PackBot.”
2008
  • Google makes breakthroughs in speech recognition and introduces the feature in its iPhone app. 
2011
  • IBM’s Watson trounces the competition on Jeopardy!. 
2012
  • Andrew Ng, founder of the Google Brain Deep Learning project, feeds 10 million YouTube videos to a neural network using deep learning algorithms. The neural network learns to recognize a cat without being told what a cat is, ushering in a breakthrough era for neural networks and deep learning funding.
2014
  • Google makes the first self-driving car to pass a state driving test.
2016
  • Google DeepMind’s AlphaGo defeats world champion Go player Lee Sedol. The complexity of the ancient Chinese game was seen as a major hurdle to clear in AI.
