Prof. Ana Bazzan
Affiliation: Universidade Federal do Rio Grande do Sul
Website: http://www.inf.ufrgs.br/~bazzan/
Bio: Ana L. C. Bazzan holds a PhD from the University of Karlsruhe in Germany and is a full professor at the Informatics Institute of the Federal University of Rio Grande do Sul (UFRGS) in Brazil. She served as general co-chair of AAMAS 2014 and is serving as one of the PC chairs of PRIMA 2017 and as an area chair of IJCAI 2017. She has served several times on the program committees of AAMAS and other conferences (as PC member or senior PC member) and as an associate editor of the Journal of Autonomous Agents and Multiagent Systems, Advances in Complex Systems, and Multiagent and Grid Systems. She is a member of the IFAAMAS board (2004-2008 and 2014-). She co-organized the Workshop on Synergies between Multiagent Systems, Machine Learning, and Complex Systems (TRI 2015), held together with IJCAI 2015, and the Agents in Traffic and Transportation (ATT) workshop series. Her research interests include multiagent systems, agent-based modeling and simulation, machine learning, multiagent reinforcement learning, evolutionary game theory, swarm intelligence, and complex systems. Her work is mainly applied in domains related to traffic and transportation.
Talk Title: Beyond Reinforcement Learning in Multiagent Systems
Talk Abstract: Learning is an important component of an agent's decision-making process. Despite the diversity of approaches in machine learning, in the multiagent community learning is associated mostly with reinforcement learning. Against this background, this talk has two aims: to revisit the early motivations for multiagent learning, and to describe some of the work at the frontier between multiagent systems and machine learning. The intention of the latter is to motivate people to address the issues involved in applying techniques from multiagent systems to machine learning and vice versa.
Prof. Thore Graepel
Affiliations: DeepMind and University College London
Website: https://deepmind.com/
Bio: Thore Graepel is a research group lead at DeepMind and holds a part-time position as Chair of Machine Learning at University College London. He studied physics at the University of Hamburg, Imperial College London, and Technical University of Berlin, where he also obtained his PhD in machine learning in 2001. He spent time as a postdoctoral researcher at ETH Zurich and Royal Holloway College, University of London, before joining Microsoft Research in Cambridge in 2003, where he co-founded the Online Services and Advertising group. Major applications of Thore's work include Xbox Live's TrueSkill system for ranking and matchmaking, the AdPredictor framework for click-through rate prediction in Bing, and the Matchbox recommender system which inspired the recommendation engine of Xbox Live Marketplace. More recently, Thore's work on the predictability of private attributes from digital records of human behaviour has been the subject of intense discussion among privacy experts and the general public. Thore's research interests are in artificial intelligence and machine learning and include probabilistic graphical models, reinforcement learning, game theory, and multi-agent systems. He has published over one hundred peer-reviewed papers, is a named co-inventor on dozens of patents, serves on the editorial boards of JMLR and MLJ, and is a founding editor of the book series Machine Learning & Pattern Recognition at Chapman & Hall/CRC. At DeepMind, Thore has returned to his original passion of understanding and creating intelligence, and recently contributed to creating AlphaGo, the first computer program to defeat a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
Talk Title: The role of multi-agent learning in artificial intelligence research
Talk Abstract: We consider intelligence to be the ability of an agent to achieve goals in a wide range of environments (Legg & Hutter). This notion motivates an approach to artificial intelligence research in which we progress on two fronts: on one side, we define wider and more difficult sets of tasks or environments; on the other, we develop agents capable of learning to succeed in these ever more challenging environments. Thinking in evolutionary/ecological terms, the richest environments for a given agent are themselves evolving collections of agents, be they biological organisms within their ecological niche or companies within a given market. When thinking about a route towards artificial intelligence, it is therefore crucial to go beyond the reinforcement learning (RL) paradigm of a single agent and its environment and to consider evolving multi-agent systems. In this talk, I will discuss the important role multi-agent learning has to play in artificial intelligence research and the challenges it presents. I will discuss three examples from our work: i) the role of self-play in AlphaGo, ii) the emergence of cooperation among self-interested agents in sequential social dilemmas, and iii) the use of evolutionary principles to channel gradient descent in super neural networks (PathNet).