Chapter 01/07
What Is Artificial Intelligence?
We hear about AI all the time, whether in a news story about the latest information technology or in a science fiction movie about a near-future dystopia.
What the term really refers to is the theory and development of computer systems that are able to perform tasks that would typically require human intelligence. This could include visual perception, speech recognition and decision-making.
If it sounds like computers are getting better at doing the kinds of things that people do, that’s the idea.
AI is the core technology that enables route planning and turn-by-turn navigation.[1] It’s the secret ingredient in better weather forecasting,[2] more relevant internet searches[3] and smarter digital assistants that understand verbal commands even when your roommates or kids are talking in the background.[4]
Scientists are trying to develop computers that can understand the intrinsic natures of objects and ideas, time and space. They’re attempting to create computers that can correctly perceive and respond to human emotion—machines that can learn and remember with even more ease and agility than the human brain.
That’s where our discomfort comes in. Most of us aren’t sure how advanced AI would fit into everyday life. What do we do with a machine that can read emotions? A therapy device for people on the autism spectrum could be one possibility. But what dangers could such technology pose? We’re simply not sure yet.[5]
On the other hand, the benefits of AI are easy to see. Data has never been easier to collect and store, and gains in processing power have made it possible for computers to perceive patterns and connections within that data that are essentially invisible to humans.[6] That’s where the strength of today’s AI applications lies, enabling researchers to refine AI processes to be smarter, work faster and deliver more useful results.
Defining AI
AI exists in many different forms, making it difficult to define, but we do have an explanation for what it does: AI processes large, complex datasets to answer a question or perform a task.
The key is “data.” AI is very good at finding connections between seemingly disparate, unrelated data points and using that information to make inferences about a particular situation or activity.
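As a toy illustration of that kind of inference, here is a minimal nearest-neighbor sketch that labels a new situation by its closest known data point. The observations, features and labels below are invented for this example, not drawn from any real system:

```python
import math

# Invented example data: (hour of day, travel speed in mph) -> activity label.
observations = [
    ((8, 3), "walking"),
    ((8, 28), "commuting by car"),
    ((13, 2), "walking"),
    ((18, 30), "commuting by car"),
]

def classify(sample):
    """Label a new sample with the label of its closest known observation."""
    return min(observations, key=lambda o: math.dist(o[0], sample))[1]

print(classify((17, 26)))  # closest to an evening drive -> "commuting by car"
```

Real AI systems work the same way in spirit, but over millions of data points with many more dimensions than two.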
The “artificial” part is fairly straightforward, generally understood to mean something human-made that doesn’t independently exist in nature. “Intelligence” gums things up, however: Does it mean an exceptionally efficient, capable computer? A program that can understand what makes a joke funny and then write its own punch lines?
Scientists are trying to marry those two characteristics. Two current thrusts in AI research today are improving autonomy (the ability to perform tasks without constant direction from a human user) and learning (using current and previous experiences to improve future performance).
Another definition of AI comes from David Poole, Alan Mackworth and Randy Goebel in their 1998 text Computational Intelligence: AI perceives its environment, translates that environment into data it can use and takes actions that maximize its chances of achieving its goals, learning and adapting as it goes.
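That perceive-translate-act loop can be sketched with a deliberately trivial, invented example: a thermostat agent whose goal is to hold a target temperature.

```python
# A trivial agent that perceives its environment (a temperature reading),
# translates it into usable data (an error value), and acts toward its goal.
def thermostat_agent(perceived_temp, target=20.0):
    error = perceived_temp - target  # perception translated into data
    if error > 0.5:
        return "cool"                # act to move toward the goal
    if error < -0.5:
        return "heat"
    return "idle"

readings = [17.0, 19.8, 22.5]
print([thermostat_agent(t) for t in readings])  # ['heat', 'idle', 'cool']
```

A learning agent would additionally adjust its behavior based on how well past actions worked; this sketch leaves that part out.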
AI is such a new field that much of the important work in it remains at the theoretical level, speculating on what the capabilities of AI could be if the right technological breakthroughs take place. For now, AI falls into three main categories—one in existence today, and two yet to come: narrow AI, artificial general intelligence and artificial superintelligence.
Narrow AI
Many of the early failures in AI occurred because the systems were not constrained enough. Eager to demonstrate AI’s smarts and usefulness, researchers overpromised and underdelivered on capability. A lack of data to inform computations played an important role in those failures.
Now, with the ubiquity of mobile phones, connected devices and sensors of all types pumping constant streams of information into inexpensive storage servers, the computing world is awash with data. Researchers dusted off decades-old ideas, like artificial neural networks, and were pleasantly surprised to find that AI works best when it can use a lot of information.
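One of those decades-old ideas is the perceptron, a single artificial neuron dating to the 1950s. A minimal sketch of one learning the logical AND function from examples, with the training data and learning rate chosen purely for illustration:

```python
# A single perceptron: weighted inputs plus a bias, nudged whenever the
# prediction is wrong. With enough passes over the data, it fits the pattern.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

for _ in range(20):  # repeated passes over the training data
    for (x1, x2), label in data:
        predicted = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        error = label - predicted
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        bias += lr * error

predictions = [1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0 for (x1, x2), _ in data]
print(predictions)  # [0, 0, 0, 1]
```

Modern neural networks stack millions of such units, which is why they only began to shine once data and computing power became abundant.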
But the processes still need to be clearly defined—i.e., narrow—and most AI being used today falls into this category: systems or processes designed to focus on and solve one narrowly defined task.[7] Narrow AI is also known as “weak AI” in the sense that it’s not as broad or adaptable as human intelligence.
There are also two main divisions within narrow AI.[8] Symbolic AI, also called “Good Old-Fashioned AI,” is a logical, reason-based approach that uses rules and relatively static data to make predictions and generate answers to queries. IBM’s Deep Blue chess-playing computer is one famous example.[9] Nonsymbolic AI, the focus of much of today’s AI research and development, is a learning-based approach that attempts to mimic the function of the human brain. The areas of machine learning, neural networks and deep learning are all types of nonsymbolic AI.
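The symbolic approach can be pictured as hand-written facts plus explicit if-then rules. A toy sketch (the facts and the rule below are invented for illustration, not drawn from Deep Blue):

```python
# Symbolic AI in miniature: hand-encoded facts and an inference rule.
# Nothing is learned; answers follow logically from what was written down.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def grandparent(a, c):
    """Rule: a is c's grandparent if a is a parent of some b who is a parent of c."""
    return any(
        ("parent", a, b) in facts and ("parent", b, c) in facts
        for (_, _, b) in facts
    )

print(grandparent("alice", "carol"))  # True: alice -> bob -> carol
print(grandparent("bob", "carol"))    # False: only one generation apart
```

Contrast this with the nonsymbolic, learning-based approach, where the system infers its own rules from many examples instead of having them written in by hand.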
We all know of at least a handful of applications whose usefulness is driven by narrow AI: internet search, digital assistants like Siri, nonplayer characters in video games and self-driving cars. IBM’s Jeopardy!-winning Watson supercomputer relied on a narrow AI. Though endowed with a clever natural-language processing capability, it answered questions simply by combing quickly through a deep catalog of information.
Narrow AIs don’t think and are not able to reason outside the scope of their designated problem, although they’re getting much more sophisticated (listen to Google’s Duplex AI make a salon appointment). Even Google’s highly adaptable AlphaGo Zero system, which learned how to play Go from scratch simply by playing itself—instead of studying millions of human-played games, as its predecessor, AlphaGo, did—is merely a powerful predictor.
Eventually, developers hope to apply these types of algorithms to more significant matters, such as predicting new kinds of protein-folding mechanisms[10] (necessary for developing new pharmaceuticals), finding ways to reduce energy consumption[11] or even developing new materials that don’t yet exist.[12]
Artificial General Intelligence
One of the primary goals of AI all along has been to develop machines that can think and reason as well as humans. Scientists and engineers call this capability artificial general intelligence (AGI), or “strong AI,” because these forms of intelligence would be able to tackle multiple complex problems without guidance or training. This is the AI of science fiction and fearmongering—and it doesn’t exist yet.
Realistic assistants like Duplex dazzle us with their utility and capability, but Gigaom CEO Byron Reese writes in his book The Fourth Age that true AGI must not only be factually clever; it must also be socially and emotionally intelligent, as well as original and creative.
“Is an AGI really something different than just better narrow AI? Could we get an AGI just by bolting together more and more narrow AIs until we had covered the entire realm of human experience, creating an AI that is at least as smart and versatile as a person?” Reese writes.
No, he concludes. Intelligence is not about being able to do a multitude of things; it’s about combining those abilities into new configurations. It’s true that the algorithms are getting more elegant, but they’re still a long way from being able to do it all, even when you patch them together.
Futurist Peter Voss writes that AGI should be able to learn new skills in real time, reason abstractly, explain its conclusions, understand its own context and others, and apply previously learned skills to novel situations—and do it all with limited prior knowledge.
Voss points to research in cognitive architectures—a reasoning framework for computers modeled on the structure and function of the human mind[13]—as a path toward the development of AGI.
Artificial Superintelligence
Still firmly in the realm of sci-fi, artificial superintelligence is a term for computers that have surpassed human capacity.
Thinkers like Australian philosopher David Chalmers say that once we achieve AGI, it’s a very short step to superintelligence—a moment sometimes referred to as the “technological singularity,” beyond which the future ramifications cannot be seen or predicted.[14] Prominent AI researcher Ray Kurzweil, a director of engineering at Google, predicts that computers will have human-level intelligence by 2029, that we will reach the singularity by 2045 and that superintelligent machines will exist by 2049.[15]
But to get there, computational algorithms must become more efficient and accurate, hardware must continue to improve and we must demystify the workings of the human mind. The singularity, Kurzweil has said, is inevitable—but not undesirable.
Other thought leaders such as Tesla’s Elon Musk, Microsoft’s Bill Gates and physicist Stephen Hawking have argued otherwise. Such powerful AI should not be allowed to evolve without careful planning and study, they’ve stated. To that end, Musk and other tech figures have helped establish and fund groups including the Future of Life Institute and OpenAI, whose primary missions are to support the development of safe, ethical AGI—and any other technologies that may follow.
References
1. “AI is Google’s secret weapon for remaking its oldest and most popular apps.” The Verge. May 2018.
2. “AI Might Be the Future for Weather Forecasting.” Interesting Engineering. March 2019.
3. “Artificial Intelligence applications in search engines.” AI Marketing Magazine via Medium. December 2018.
4. “Solutions.” Vocalize.ai.
5. “The American public is already worried about AI catastrophe.” Vox. January 2019.
6. “Forget algorithms. The future of AI is hardware!” HuffPo. January 2018.
7. “Why the leap to general AI still can’t happen yet.” The Next Web. May 2018.
8. “Artificial Intelligence: Definition, Types, Examples, Technologies.” Chethan Kumar GN via Medium. August 2018.
9. “General vs Narrow AI.” Hackernoon. June 2018.
10. “How one scientist coped when AI beat him at his life’s work.” Vox. February 2019.
11. “How artificial intelligence will affect the future of energy and climate.” Brookings Institution. January 2019.
12. “How AI is helping us discover materials faster than ever.” The Verge. April 2018.
13. “Cognitive Architecture.” Institute for Creative Technologies, University of Southern California.
14. “‘There’s Just No Doubt That It Will Change the World’: David Chalmers on V.R. and A.I.” The New York Times. June 2019.
15. “The Dawn of the Singularity: A Visual Timeline of Ray Kurzweil’s Predictions.” Futurism.com.
Next Section
The Impact of Artificial Intelligence
Chapter 02 of 07
Explore the many ways in which artificial intelligence (AI) permeates daily life. Learn how businesses, governments and schools are using AI to streamline processes, save money and enhance outcomes.