Google's Project Glass
Google's Project Glass demo is certainly the coolest hardware demo so far this year. Behind the scenes is something equally intriguing: artificial-intelligence software.
The augmented-reality glasses, which Google co-founder Sergey Brin was spotted wearing yesterday, created a huge buzz Wednesday when Google released a video showing, from the wearer's perspective, how they could be used.
In the video, the small screen on the glasses flashes information right on cue, allowing the wearer to set up meetings with friends, get directions in the city, find a book in a store, and even videoconference with a friend. The device itself has a small screen above the right eye on wrap-around glasses that have no lenses.
For the most part, the augmented-reality glasses do what a person could do with a smartphone, such as look up information and socialize. But the demo also shows glimpses of an artificial-intelligence (AI) system working behind the scenes. It's the AI system that could make mobile devices, including wearable computers, far more powerful and take on more complex tasks, according to an expert.
"The new thing that Google was showing was the interaction model using new hardware, rather than truly showing the potential of such a device," said Lars Hard, the chief technology officer of AI software company Expertmaker. "AI can actually enhance and improve different decision situations."
Although there isn't a precise, agreed-upon definition, artificial intelligence describes computer systems that exhibit human-like behaviors, through features such as speech and gesture recognition, and that mimic human thinking. Working with a mobile device, artificial-intelligence systems can perform tasks in the background and bring highly relevant information to users, Hard said.
The Project Glass hardware was operated primarily by voice commands, a sign of Google's work on voice recognition for mobile devices, similar to Apple's Siri. Siri, which has been well received, translates spoken commands into actions for the iPhone, such as looking up information or making appointments. Google is reportedly working on voice-recognition software for Android.
The makers of Project Glass said the hardware is designed to help "you explore and share your world, putting you back in the moment," according to a Google Plus post.
"We think technology should work for you--to be there when you need it and get out of your way when you don't," said Babak Parviz, Steve Lee, and Sebastian Thrun, three employees from Google's secretive Google X Labs, in the post introducing Project Glass.
In one scene of the video, for example, the wearer takes a picture of a poster by pressing a button on the glasses and sends it to himself. This new type of user interaction is quicker than, say, pulling a phone or camera out of a pocket.
The demo also shows that the software operating the glasses is location aware. A notification tells the wearer that the No. 6 subway is shut down as he approaches the station, and the system suggests an alternate route to his destination.
To have a wearable computer aware of its physical surroundings and present personalized information to the user requires artificial intelligence and machine-learning software in the background, noted Hard. It turns out Thrun, a Google fellow and member of the Project Glass team, is an artificial intelligence and robotics expert who is instrumental in another Google X project, the driverless car.
"This puts Google out in front of Apple; they are a long ways ahead at this point," Michael Liebhold, a senior researcher specializing in wearable computing at the Institute for the Future, told The New York Times. "In addition to having a superstar team of scientists who specialize in wearable computing, they also have the needed data elements, including Google Maps."
AI in the cloud
A more sophisticated AI platform with a wearable computer could do much more than find friends online and provide maps, said Hard.
Wearable screens could help doctors make diagnoses and could be used in business negotiations or in service industries such as retail, Hard said. Although an augmented-reality screen is smaller than a smartphone, it has the potential to present the "right information at the right time" and show complex data such as diagrams, he said.
The hope for AI software is that it will process information in the background and present targeted information as needed, he said. In shopping, for example, the AI system would sift through lots of data to come up with very granular and personalized recommendations, rather than recommendations based on past purchases as computers do today.
"Even though the technologies today deliver this type of service, they are relatively crude and boring in many respects," Hard said. "We're going to see lots of changes to that, using big data and machine learning."
Another Project Glass contributor, Babak Parviz, is a bionanotechnology expert at the University of Washington who foresees wearable devices used for medical diagnostics. In 2009, he wrote an essay in IEEE Spectrum describing how augmented-reality contact lenses could be equipped with biosensors to detect and communicate information on blood sugar levels from eye fluids.
Judging from the enthusiastic reception of Project Glass, wearing augmented-reality glasses may become the ultimate fashion statement for technology fans in the near future.
But there are plenty of skeptics who fear what a poorly done system would look like.
A video released yesterday by Tom Scott, called "Google Glasses: A New Way to Hurt Yourself," showed a steady stream of information distracting the wearer and the voice recognition backfiring. Another video, from Rebellious Pixels, superimposes ads based on Google searches popping up incessantly over Google's demo video.
Apple's Siri has given millions of people their first taste of the artificial-intelligence concept, in which a digital personal assistant performs a few tasks and provides an alternative to touch or typing. Now, with Google's Project Glass, we get a hint of the potential of bringing that AI to a wearable device.