Invited talks
 
Andreas Herzig
IRIT, University of Toulouse and National Center for Scientific Research (CNRS)
 
Abstract
 
Formal models for multi-agent systems (MAS) have been introduced and studied in various areas, not only in distributed systems and AI, but also in economics and the social sciences. I will focus on logical models of the central concepts of knowledge, belief, time, and action, and will give an overview of the existing logics for MAS from a knowledge representation point of view. I will classify various MAS logics according to the epistemic and the action dimension, distinguishing individual and group action and individual and group knowledge. I will highlight problematic aspects of each of the standard accounts, including frame axioms, strategy contexts and uniform strategies.
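For reference, the standard textbook distinction between individual, group, and common knowledge in epistemic logic (background notation only, not specific to the talk) can be written as:

```latex
% Standard epistemic-logic background (textbook definitions, not from the talk):
% K_i \varphi : agent i knows \varphi;  E_G \varphi : everyone in group G knows \varphi;
% C_G \varphi : \varphi is common knowledge in group G.
\[
  E_G \varphi \;\equiv\; \bigwedge_{i \in G} K_i \varphi ,
  \qquad
  C_G \varphi \;\equiv\; E_G \varphi \,\wedge\, E_G E_G \varphi \,\wedge\, E_G E_G E_G \varphi \,\wedge\, \cdots
\]
```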
 
Short bio
 
Andreas Herzig is a senior researcher at the French National Center for Scientific Research (CNRS) and works at the Toulouse Computer Science Research Institute (IRIT). He studied computer science in Darmstadt and Toulouse and obtained a Ph.D. (1989) and a habilitation (1999) in Computer Science, both from Paul Sabatier University in Toulouse. He has worked at CNRS since 1990.
 
Andreas's main research topic is the investigation of logical models of interaction, with a focus on logics for reasoning about knowledge, belief, time, action, intention and obligation, and the development of theorem proving methods for them. He currently investigates the integration of logics of belief and group belief with theories of action, in particular in the framework of dynamic epistemic logics.
 
 
Jon Kleinberg
Cornell University
 
Abstract
 
The growth of social media and on-line social networks has opened up a set of fascinating new challenges and directions for researchers in both computing and the social sciences, and an active interface is growing between these areas. We discuss a set of basic questions that arise in the design and analysis of systems supporting on-line social interactions, focusing on two main issues: the role of network structure in the adoption of social media sites, and the analysis of textual data as a way to study properties of on-line social interaction.
 
Short bio
 
Jon Kleinberg is the Tisch University Professor in the Departments of Computer Science and Information Science at Cornell University. His research focuses on issues at the interface of networks and information, with an emphasis on the social and information networks that underpin the Web and other on-line media. He is a member of the U.S. National Academy of Sciences, the U.S. National Academy of Engineering, and the American Academy of Arts and Sciences; he is the recipient of research fellowships from the MacArthur, Packard, Simons, and Sloan Foundations, as well as awards including the Nevanlinna Prize from the International Mathematical Union and the ACM-Infosys Foundation Award in the Computing Sciences.
 
 
Wei Li
Beihang University, China
 
Abstract
 
R-calculus is an inference system in first-order logic for theory revision. It was developed to identify and refute propositions of formal theories that are inconsistent with given evidence, and it is capable of deriving all maximal revisions. R-calculus consists of one axiom, one cut rule, and rules for the logical connectives and quantifiers. It has been proved sound and complete.
As examples of the application of R-calculus, we show how the processes of discovery of Einstein’s special theory of relativity and of Darwin’s theory of the evolution of species can be verified by R-calculus. It is demonstrated that the special theory of relativity is the only correct choice for mechanics, given the experimental evidence available at that time. For the biology of Darwin’s time, however, R-calculus derives three logically correct yet different theories, all of which are consistent with the empirical evidence provided by Darwin and with his principle of natural selection. Darwin’s theory of evolution matches one of the derived theories. The existence of the other two theories may be a reason for the controversy surrounding Darwin’s theory.
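As a schematic illustration only (the exact axiom and rules are the subject of the talk and of Li's papers), revision rules in a calculus of this kind can be pictured as sequent-style rules over configurations Δ | Γ, where Δ is the evidence and Γ the theory under revision:

```latex
% Schematic sketch, not the exact R-calculus rules: a rule deleting a refuted
% proposition, and a rule decomposing a conjunction, over configurations
% \Delta \mid \Gamma (evidence \Delta, theory \Gamma).
\[
  \frac{\Delta \vdash \neg A}
       {\Delta \mid A,\, \Gamma \;\Longrightarrow\; \Delta \mid \Gamma}
  \;(\text{refute an untenable } A)
  \qquad
  \frac{\Delta \mid A,\, B,\, \Gamma \;\Longrightarrow\; \Delta \mid \Gamma'}
       {\Delta \mid A \wedge B,\, \Gamma \;\Longrightarrow\; \Delta \mid \Gamma'}
  \;(\wedge\text{-decomposition})
\]
```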
 
Short bio
 
Wei Li is Professor at Beihang University, China and Director of the State Key Laboratory of Software Development Environment. He obtained his Ph.D. degree in computer science from the University of Edinburgh in 1983. He is a member and a director of the Standing Committee of the Academic Division of Information Technological Sciences of the Chinese Academy of Sciences. He was Senior Visiting Research Fellow at the Universities of Edinburgh, Newcastle, Bremen, Saarland, and Minnesota. He served as President of Beihang University from 2002 to 2009.
Professor Li’s current research interests include logical and computational foundations for scientific discovery, automated program debugging, crowdsourcing approaches to software development, and massive open online courses.
 
 
Rolf Pfeifer
Artificial Intelligence Laboratory, University of Zurich, Switzerland
National Competence Center Research in Robotics, Switzerland
 
Abstract
 
Researchers in robotics and artificial intelligence increasingly agree that ideas from biology and self-organization can strongly benefit the design of autonomous robots. Biological organisms have evolved to perform and survive in a world characterized by rapid change, high uncertainty, indefinite richness, and limited availability of information. The term "Soft Robotics" designates a new generation of robots capable of functioning in the real world by capitalizing on "soft" designs at various levels: surface (skin), movement mechanisms (muscles, tendons), and interaction with other agents (smooth, friendly interaction). Industrial robots, in contrast, operate in highly controlled environments with little or no uncertainty. By "outsourcing" functionality to morphological and material characteristics - e.g. to the elasticity of the muscle-tendon system - the distinction between the controller and the to-be-controlled, which is at the heart of manufacturing and control theory, breaks down, and entirely new concepts will be required. In this lecture I will argue that the next generation of intelligent machines - robots - will be of the "soft" kind, and I will explore the theoretical and practical implications, whose importance can hardly be overestimated. I will use many examples and case studies; in particular, I will introduce the tendon-driven "soft" robot "Roboy" that we have been developing in our laboratory over the last few months. (Hopefully, we will manage to put on a nice demonstration.) Although many challenges remain, concepts from biologically inspired "soft" robotics will eventually enable researchers to engineer machines for the real world that possess at least some of the desirable properties of biological organisms, such as adaptivity, robustness, and versatility.
 
Short bio
 
Master’s degree in physics and mathematics and Ph.D. in computer science (1979) from the Swiss Federal Institute of Technology (ETH) in Zurich, Switzerland. Three years as a postdoctoral fellow at Carnegie Mellon and at Yale University in the US. Since 1987: professor of computer science at the Department of Informatics, University of Zurich, and director of the Artificial Intelligence Laboratory.
Visiting professor and research fellow at the Free University of Brussels, the MIT Artificial Intelligence Laboratory in Cambridge, Mass., the Neurosciences Institute (NSI) in San Diego, the Beijing Open Laboratory for Cognitive Science, and the Sony Computer Science Laboratory in Paris. Elected "21st Century COE Professor, Information Science and Technology" at the University of Tokyo in 2004. In 2009: visiting professor at the Scuola Superiore Sant'Anna in Pisa and at Shanghai Jiao Tong University in China; appointed "Fellow of the School of Engineering" at the University of Tokyo. Currently: Deputy Director of NCCR Robotics, the "National Competence Center for Research in Robotics" in Switzerland.
Research interests: embodied intelligence, biorobotics, morphological computation, modular robotics, self-assembly, and educational technology.
Authored books: "Understanding Intelligence" (with C. Scheier), MIT Press, 1999; "How the body shapes the way we think: a new view of intelligence" (with Josh Bongard), MIT Press, 2007 (popular science style); "Designing intelligence - why brains aren't enough" (short version; with Josh Bongard and Don Berry; e-book); and "La révolution de l'intelligence du corps" ("The revolution of embodied intelligence"; with Alexandre Pitti), 2012 (in French).
Lecture series: "The ShanghAI Lectures", a global mixed-reality lecture series on embodied intelligence, broadcast in 2012 from the University of Zurich and Shanghai Jiao Tong University, China, in cooperation with other universities from around the globe.
World exhibition: ROBOTS ON TOUR - World Congress and Exhibition of Robots, Humanoids, Cyborgs, and more; 9 March 2013, Zurich (Puls 5): robotsontour.org
Recent project: Roboy, a "soft" tendon-driven small humanoid: roboy.org
Lab website: ailab.ifi.uzh.ch
 
 
Steven Phillips
AT&T Labs-Research
 
Abstract
 
We are in the midst of a "sixth extinction" -- during our and our children's lifetimes, habitat destruction and other human-induced threats to biodiversity are on track to extirpate species on a similar scale to the five previous major extinction events, including the end of the dinosaurs. The situation is only exacerbated by climate change. I will describe two general ways in which computational learning and optimization methods can be used to help reduce the loss of biodiversity. First, it is critical to know where species live and why in order to conserve them. Given the limited information available for the vast majority of species, sophisticated modeling methods are required to infer species distributions from the available data. Second, funding for conservation is limited, so optimization methods are essential for designing and managing protected areas that most efficiently preserve biodiversity. Optimization methods must incorporate spatial and temporal aspects of species movement and distribution, for example when designing corridors through which species can migrate.
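To make the species-distribution modeling setup concrete, here is a minimal sketch of the presence/background formulation that tools such as Maxent address, with logistic regression as a simplified stand-in for the maximum-entropy model; the covariates and data below are hypothetical.

```python
# Minimal sketch of presence/background species distribution modeling.
# Logistic regression stands in here for Maxent, which fits a maximum-entropy
# distribution over environmental feature space; data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical environmental covariates (e.g. temperature, annual rainfall)
# at 200 known presence locations and 1000 random background locations.
presence = rng.normal(loc=[22.0, 1200.0], scale=[2.0, 150.0], size=(200, 2))
background = rng.normal(loc=[15.0, 800.0], scale=[6.0, 400.0], size=(1000, 2))

X = np.vstack([presence, background])
y = np.concatenate([np.ones(len(presence)), np.zeros(len(background))])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Relative habitat suitability at two new sites (higher = more suitable).
sites = np.array([[21.5, 1150.0], [10.0, 400.0]])
print(model.predict_proba(sites)[:, 1])
```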
 
Short bio
 
Steven Phillips obtained a Ph.D. in Computer Science from Stanford University in 1993, and has worked ever since as a researcher at AT&T Bell Labs, now AT&T Labs-Research. His research focuses on computational sustainability, with particular applications in conservation biology and ecology. He pioneered the use of machine learning methods for modeling species geographic distributions, and co-authored the widely-used Maxent species distribution modeling software. He has applied integer optimization and network flow methods to plan protected corridors for species whose range is shifting due to climate change and to optimize the spatial configuration of wetland restoration and management.
 
 
Thad Starner
Georgia Institute of Technology
 
Abstract
 
If we make a wearable computer, like Google's Glass, that sees as we see and hears as we hear, might it provide new insight into our daily lives? Going further, suppose we have the computer monitor the motion and manipulations of our hands and listen to our speech. Perhaps with enough data it can infer how we interact with the world. Might we create a symbiotic arrangement, where an intelligent computer assistant lives alongside our lives, providing useful functionality in exchange for occasional tips on the meaning of patterns and correlations it observes?
 
For over a decade at Georgia Tech, we have been capturing "first person" views of everyday human interactions with others and with objects in the world, using wearable computers equipped with cameras, microphones, and gesture sensors. Our goal is to automatically cluster large databases of time-varying signals into groups of actions (e.g. reaching into a pocket, pressing a button, opening a door, turning a key in a lock, shifting gears, steering, braking, etc.) and then to reveal higher-level patterns by discovering grammars of lower-level actions with these objects through time (e.g. driving to work at 9am every day). By asking the user of the wearable computer to name these grammars (e.g. morning coffee, buying groceries, driving home), the wearable computer can begin to communicate with its user in more human terms and provide useful information and suggestions ("if you are about to drive home, do you need to buy groceries for your morning coffee?"). Through watching the wearable computer user, we can gain a new perspective on difficult computer vision and robotics problems by identifying objects by how they are used (turning pages indicates a book), not how they appear (the cover of Foley and van Dam versus the cover of Wired magazine). By creating increasingly observant and useful intelligent assistants, we encourage wearable computer use and a cooperative framework for creating intelligence grounded in everyday interactions.
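A minimal sketch of the two-stage idea described above (clustering windows of sensor data into low-level "actions", then mining frequent action sequences as candidate higher-level routines); the sensor features, window length, and cluster count are hypothetical stand-ins, not the group's actual pipeline.

```python
# Stage 1: cluster fixed-length windows of wearable-sensor features into
# low-level "actions". Stage 2: count action trigrams; frequent ones are
# candidate routines a user could later name ("morning coffee", ...).
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical stream of 500 feature windows (e.g. from accelerometer/camera).
windows = rng.normal(size=(500, 16))

# Stage 1: unsupervised clustering assigns each window an action label.
actions = KMeans(n_clusters=8, n_init=10, random_state=1).fit_predict(windows)

# Stage 2: frequent trigrams of actions as candidate higher-level patterns.
trigrams = Counter(tuple(actions[i:i + 3]) for i in range(len(actions) - 2))
for seq, count in trigrams.most_common(5):
    print(seq, count)
```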
 
Short bio
 
Thad Starner is a wearable computing pioneer and a Professor in the School of Interactive Computing at the Georgia Institute of Technology. He is also a Technical Lead on Google's Glass, a self-contained wearable computer.
 
Thad received a PhD from the MIT Media Laboratory, where he founded the MIT Wearable Computing Project. Starner was perhaps the first to integrate a wearable computer into his everyday life as a personal assistant, and he coined the term "augmented reality" in 1990 to describe the types of interfaces he envisioned at the time. His group's prototypes of mobile context-based search, gesture-based interfaces, mobile MP3 players, and mobile instant messaging foreshadowed now-commonplace devices and services.
 
Thad has authored over 130 peer-reviewed scientific publications with over 100 co-authors on mobile Human-Computer Interaction (HCI), machine learning, energy harvesting for mobile devices, and gesture recognition. He is listed as an inventor on over 80 United States patents awarded or in process. Thad is a founder of the annual International Symposium on Wearable Computers, and his work has been discussed in many forums including CNN, NPR, the BBC, CBS's 60 Minutes, ABC's 48 Hours, the New York Times, and the Wall Street Journal.
 
 
Josh Tenenbaum
Massachusetts Institute of Technology
 
Abstract
 
What do people know about the world, and how do they come to know it? I will talk about recent work in cognitive science attempting to answer these questions in computational terms -- terms suitable for both reverse-engineering human intelligence and also building more human-like AI systems. This work follows in a long tradition of viewing knowledge as some kind of program, where learning then becomes a kind of program induction or program synthesis. By formalizing this classic idea using new tools for probabilistic programming, it becomes more powerful both as a machine-learning approach and as a framework for describing and explaining human learning. I will discuss working models in several domains, including learning word meanings, learning the structural form of a data set or a covariance kernel for nonparametric regression, learning motor programs, learning physical laws, and bootstrap learning (or "learning to learn").
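As a toy illustration of the "learning as program induction" idea, the sketch below samples small arithmetic programs from a probabilistic grammar and keeps those consistent with observed examples (rejection sampling as a crude posterior approximation); the grammar and data are illustrative, not taken from the talk.

```python
# Toy program induction: sample expression trees from a probabilistic grammar
# and keep those that reproduce all observed input/output examples.
import random

random.seed(0)

def sample_expr(depth=0):
    """Sample a small expression tree over x, constants 1-3, and +, *."""
    if depth >= 2 or random.random() < 0.4:
        return random.choice(["x", "1", "2", "3"])
    op = random.choice(["+", "*"])
    return (op, sample_expr(depth + 1), sample_expr(depth + 1))

def evaluate(expr, x):
    """Evaluate an expression tree at input x."""
    if expr == "x":
        return x
    if isinstance(expr, str):
        return int(expr)
    op, left, right = expr
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == "+" else a * b

# Observed examples, generated by the hidden program f(x) = 2*x + 1.
examples = [(0, 1), (1, 3), (2, 5)]

# Rejection sampling: sampled programs consistent with every example
# approximate the posterior over programs given the data.
consistent = []
for _ in range(20000):
    expr = sample_expr()
    if all(evaluate(expr, x) == y for x, y in examples):
        consistent.append(expr)

print(len(consistent), consistent[:3])
```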
 
Short bio
 
Josh Tenenbaum studies learning, reasoning and perception in humans and machines, with the twin goals of understanding human intelligence in computational terms and bringing computers closer to human capacities. His current work focuses on building probabilistic models to explain how people come to be able to learn new concepts from very sparse data, how we learn to learn, and the nature and origins of people's intuitive theories about the physical and social worlds. He is Professor of Computational Cognitive Science in the Department of Brain and Cognitive Sciences at MIT, and is a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his Ph.D. from MIT in 1999, and was a member of the Stanford University faculty in Psychology and (by courtesy) Computer Science from 1999 to 2002. His papers have received awards at numerous conferences, including CVPR (the IEEE Computer Vision and Pattern Recognition conference), ICDL (the International Conference on Development and Learning), NIPS, UAI, IJCAI and the Annual Conference of the Cognitive Science Society. He is the recipient of early career awards from the Society for Mathematical Psychology (2005), the Society of Experimental Psychologists, and the American Psychological Association (2008), and the Troland Research Award from the National Academy of Sciences (2011).
 
 
Pascal Van Hentenryck
NICTA and University of Melbourne
 
Abstract
 
The frequency and intensity of natural disasters have significantly increased over the past decades, and this trend is predicted to continue. Natural disasters have dramatic impacts on human lives and on the socio-economic welfare of entire regions. They are identified as one of the major risks of the East Asia and Pacific region, which accounts for 85 percent of all people affected since 2007. Moreover, this exposure will likely double by 2050 due to rapid urbanization and climate change. Dramatic events such as Hurricane Katrina and the Tohoku tsunami have also emphasized the strong need for optimization tools in preparing for, mitigating, responding to, and recovering from disasters, complementing the role of situational awareness that had been the focus in the past. This talk presents some of the progress accomplished in the last 5 years and deployed to assist the response to hurricanes such as Irene and Sandy. It describes the computational challenges ahead in optimization, simulation, and the modeling of complex interdependent infrastructures, and sketches the disaster management platform built at NICTA.
 
Short bio
 
Pascal Van Hentenryck leads the optimization research group at NICTA and is professor of Computing and Information Sciences in the School of Engineering at the University of Melbourne. He is the main designer and implementor of the CHIP programming system, the foundation of all modern constraint programming systems. Van Hentenryck has developed a number of influential optimization systems, including the Numerica system, the optimization programming language OPL, and the programming language Comet, which have been widely used in industry and academia. His research on disaster planning, response, and recovery has also been deployed to help federal agencies in the United States mitigate the effects of hurricanes on coastal areas. Van Hentenryck is the author of five books, all published by the MIT Press, and of over 230 scientific publications. He is the recipient of an NSF Young Investigator award (1993), the 2002 INFORMS Computing Society (ICS) prize, the 2006 ACP award, and honorary doctoral degrees from the University of Louvain (2008) and the University of Nantes (2011). He is a fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a recipient of the Philip J. Bray Award for Teaching Excellence at Brown University, and was a Ulam Fellow at the Center for Nonlinear Studies at Los Alamos in 2011 and 2012.
 
This talk has been cancelled
 
Cristiano Castelfranchi
Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
 
Abstract
 
The current structure and mechanism of computing is "social"; that is why we need to model sociality in intelligence and behavior. First, computation is distributed and collective, based on communication, exchange, and cooperation among different intelligent entities: a multiplicity of competences and local data.
Second, computers mediate and support human cooperation, networking, and organization: could they really mediate among humans and intelligently support their social interactions and structures without any "understanding" of what is happening, of what the humans are working on, of the rules of their "games"? Even information and its relevance is socially grounded.
Third, computer-human interaction has to become more human-like, accessible, and supportive, and it will be understood in a much broader way: interaction with smart environments or with robots cannot be a-social; they have to understand us (and vice versa) and be proactive.
Fourth, we need true emotional interactions, with their appropriate contents and mind-reading, not just faked affective responses. But what does sociality consist in? What are its basic principles? And how should they be procedurally captured and modeled for "social artificial intelligences"? What are the foundational cognitive and pragmatic representations ("mediators") necessary for social relations, institutions, and actions?
Why cooperate, and what is cooperation? What should motivate social actions? How do power relations emerge from interaction? How do norms work? How can trust be built, and why? What is a collective, and how does an organization work? How can agreements be reached among divergent interests? What are the "contents" of social emotions, and what is their function? Is society fully understood and intended by the interacting actors, or are there complex emergent dynamics and structures that are only partially intended but are systematically reproduced and in turn reproduce the supporting behaviors and minds? What is the correct relation between social AI and the theories and models provided by the social sciences?
 
Short bio
 
Born in Rome, Italy, 1944
Full Prof. of Cognitive Science, University of Siena (2001-2011)
Director of the Institute of Cognitive Sciences and Technologies, National Research Council (2002-2011)
Prof. of Psychology and Economics, LUISS Univ., Rome
 
Editorial Boards
EB member of Mind & Society (with K. Arrow, J. Elster, D. Sperber, A. Goldman, E. Shafir, P. Slovic, C. Camerer, A. Cicourel, J. McClelland, J. Fodor, J. Searle, K. Binmore, P. Hogarth); J. of Autonomous Agents and MAS (with J. Rosenschein, M. Wooldridge, C. Sierra); J. of Artificial Societies and Social Simulation (with R. Axtell, N. Gilbert, R. Hegselmann)
  • Program Co-Chair of the First International Conference on Autonomous Agents & Multi-Agent Systems (AAMAS 2002)
  • General Co-Chair of the 8th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2009), Budapest
  • Chair of several conferences and workshops: MAAMAW, ATAL, Trust and Deception, AAAI workshops, ...
  • Member of the board of directors of the Italian Association for Artificial Intelligence (AI*IA); responsible for the Agents interest group.
  • 2003: ECCAI (European Coordination Committee for Artificial Intelligence) fellow, for pioneering work in Artificial Intelligence.
  • 2007: Mind & Brain prize, Univ. of Turin (co-awarded with M. Tomasello), for pioneering the integration of psychology in cognitive science and breakthroughs on autonomous agents and their interactions.
  • Emeritus Member, Board of Directors of the International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS; 2002-present)
 
Invited speaker at many conferences and workshops in AI and Cognitive Science: IJCAI'97; AI&LAW; Artificial Economics; DEON; EuroCogSci2007; national AI conferences in Germany, France, Denmark, Portugal, Brazil, Spain, ...
 
More than 200 papers in journals, books, proceedings of international conferences
12 books with Italian publishers
7 books with international publishers:
  • Cognitive and Social Action (co-author R. Conte), UCL Press (Taylor & Francis), London, 1995
  • Artificial Social Systems (co-editor E. Werner), Springer, LNAI, Berlin, 1994
  • From Reaction to Cognition (co-editor J.P. Muller), Springer, LNAI, Berlin, 1995
  • Trust and Deception in Virtual Societies (co-editor Y-H. Tan), Kluwer, Dordrecht, 2002
  • Intelligent Agents VII: Theories, Architectures, Languages (co-editor Y. Lesperance), Springer, LNAI, Berlin 2002
  • Autonomous Agents and Multi-Agent Systems '09 (co-editors K. Decker, J. Sichman, C. Sierra), Budapest, 2009
  • Trust Theory (co-author R. Falcone) John Wiley & Sons, UK, 2010.
Overall citation analysis (from Publish or Perish, consulted on April 1, 2011)
412 contributions, 7582 citations (per year: 176), h-index=44, g-index=79