Wednesday, October 16, 2013

Divining the Future: Special Report: The Rapid Advance of Artificial Intelligence

The jubilant and occasionally squealing attendees appeared to have no idea that next door a group of real-world wizards was demonstrating technology that only a few years ago might have seemed as magical.

The scientists and engineers at the Computer Vision and Pattern Recognition conference are creating a world in which cars drive themselves, machines recognize people and “understand” their emotions, and humanoid robots travel unattended, performing everything from mundane factory tasks to emergency rescues.

C.V.P.R., as it is known, is an annual gathering of computer vision scientists, students, roboticists, software hackers — and increasingly in recent years, business and entrepreneurial types looking for another great technological leap forward.

The growing power of computer vision is a crucial first step for the next generation of computing, robotic and artificial intelligence systems. Once machines can identify objects and understand their environments, they can be freed to move around in the world. And once robots become mobile they will be increasingly capable of extending the reach of humans or replacing them.

Self-driving cars, factory robots and a new class of farm hands known as ag-robots are already demonstrating what increasingly mobile machines can do. Indeed, the rapid advance of computer vision is just one of a set of artificial intelligence-oriented technologies — others include speech recognition, dexterous manipulation and navigation — that underscore a sea change beyond personal computing and the Internet, the technologies that have defined the last three decades of the computing world.

“During the next decade we’re going to see smarts put into everything,” said Ed Lazowska, a computer scientist at the University of Washington who is a specialist in Big Data. “Smart homes, smart cars, smart health, smart robots, smart science, smart crowds and smart computer-human interactions.”

The enormous amount of data generated by inexpensive sensors has been a significant factor in shifting the center of gravity of the computing world, he said, making it possible to use centralized computers in data centers, referred to as the cloud, to apply artificial intelligence technologies like machine learning and spread computer intelligence far beyond desktop computers.

Apple was the most successful early innovator in popularizing what is today described as ubiquitous computing. The idea, first proposed by Mark Weiser, a computer scientist at Xerox PARC, involves embedding powerful microprocessor chips in everyday objects.

Steve Jobs, during his second tenure at Apple, was quick to understand the implications of the falling cost of computer intelligence. Taking advantage of it, he first created a digital music player, the iPod, and then transformed mobile communication with the iPhone. Now such innovation is rapidly accelerating into all consumer products.

“The most important new computer maker in Silicon Valley isn’t a computer maker at all, it’s Tesla,” said Paul Saffo, a managing director at Discern Analytics, a research firm based in San Francisco, referring to the electric car manufacturer. “The car has become a node in the network and a computer in its own right. It’s a primitive robot that wraps around you.”

Here are several areas in which next-generation computing systems and more powerful software algorithms could transform the world in the next half-decade.

Artificial Intelligence

With increasing frequency, the voice on the other end of the line is a computer.

It has been two years since Watson, the artificial intelligence program created by I.B.M., beat two of the world’s best “Jeopardy” players. Watson, which has access to roughly 200 million pages of information, is able to understand natural language queries and answer questions.

The computer maker had initially planned to test the system as an expert adviser to doctors; the idea was that Watson’s encyclopedic knowledge of medical conditions could aid a human expert in diagnosing illnesses, and could be applied elsewhere in medicine as well.
