INVITED TALK
Symbiotic AI: an approach to artificial intelligence through first-person sensing
Thad Starner
Georgia Institute of Technology
Abstract
If we make a wearable computer, like Google's Glass, that sees as we see and hears as we hear, might it provide new insight into our daily lives? Going further, suppose we have the computer monitor the motions and manipulations of our hands and listen to our speech. Perhaps with enough data it can infer how we interact with the world. Might we create a symbiotic arrangement, where an intelligent computer assistant lives alongside us, providing useful functionality in exchange for occasional tips on the meaning of patterns and correlations it observes?
For over a decade at Georgia Tech, we have been capturing "first person" views of everyday human interactions with others and with objects in the world, using wearable computers equipped with cameras, microphones, and gesture sensors. Our goal is to automatically cluster large databases of time-varying signals into groups of actions (e.g. reaching into a pocket, pressing a button, opening a door, turning a key in a lock, shifting gears, steering, braking, etc.) and then reveal higher-level patterns by discovering grammars of lower-level actions with these objects through time (e.g. driving to work at 9am every day). By asking the user of the wearable computer to name these grammars (e.g. morning coffee, buying groceries, driving home), the wearable computer can begin to communicate with its user in more human terms and provide useful information and suggestions ("if you are about to drive home, do you need to buy groceries for your morning coffee?"). By watching the wearable computer user, we can gain a new perspective on difficult computer vision and robotics problems, identifying objects by how they are used (turning pages indicates a book) rather than by how they appear (the cover of Foley and van Dam versus the cover of Wired magazine). By creating increasingly observant and useful intelligent assistants, we encourage wearable computer use and a cooperative framework for creating intelligence grounded in everyday interactions.
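To make the pipeline sketched in the abstract concrete, the short Python example below shows one plausible (hypothetical, not the speaker's actual system) way to go from wearable-sensor streams to action labels and candidate routines: windows of the signal are summarized with simple statistics, clustered into unsupervised "action" labels, and frequent label sequences are surfaced as routines that the wearer could then be asked to name. All function names, window sizes, and cluster counts are illustrative assumptions.

# Illustrative sketch (assumptions, not the talk's actual pipeline): cluster
# windows of wearable-sensor data into unsupervised "action" labels, then
# mine frequent label sequences as candidate routines a user could name.
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def window_features(signal, window=50, step=25):
    # Slice a (time, channels) array into overlapping windows and summarize
    # each window with its per-channel mean and standard deviation.
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.array(feats)

def label_actions(signal, n_actions=8, seed=0):
    # Assign each window to one of n_actions clusters (a stand-in "action").
    feats = window_features(signal)
    return KMeans(n_clusters=n_actions, random_state=seed, n_init=10).fit_predict(feats)

def frequent_routines(labels, length=3, top=5):
    # Recurring action n-grams are candidate higher-level routines
    # ("grammars") that the wearer could later be asked to name.
    grams = [tuple(int(x) for x in labels[i:i + length])
             for i in range(len(labels) - length + 1)]
    return Counter(grams).most_common(top)

if __name__ == "__main__":
    # Synthetic 3-channel signal standing in for accelerometer/gesture data.
    rng = np.random.default_rng(0)
    signal = rng.normal(size=(2000, 3))
    labels = label_actions(signal)
    print(frequent_routines(labels))

A deployed system would more plausibly use temporal models over richer features in place of the simple clustering here, but the overall structure is the same: segment the signals, label recurring actions, find recurring sequences of actions, and ask the user to name them.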
Short bio
Thad Starner is a wearable computing pioneer and a Professor in the School of Interactive Computing at the Georgia Institute of Technology. He is also a Technical Lead on Google's Glass, a self-contained wearable computer.
Thad received a PhD from the MIT Media Laboratory, where he founded the MIT Wearable Computing Project. Starner was perhaps the first to integrate a wearable computer into his everyday life as a personal assistant, and he coined the term "augmented reality" in 1990 to describe the types of interfaces he envisioned at the time. His group's prototypes on mobile context-based search, gesture-based interfaces, mobile MP3 players, and mobile instant messaging foreshadowed now commonplace devices and services.
Thad has authored over 130 peer-reviewed scientific publications with over 100 co-authors on mobile Human-Computer Interaction (HCI), machine learning, energy harvesting for mobile devices, and gesture recognition. He is listed as an inventor on over 80 United States patents awarded or in process. Thad is a founder of the annual International Symposium on Wearable Computers, and his work has been discussed in many forums including CNN, NPR, the BBC, CBS's 60 Minutes, ABC's 48 Hours, the New York Times, and the Wall Street Journal.