Keynote Speakers

Virginia Dignum
Umeå University, Sweden

Responsible AI: from principles to action

Every day we see news about advances in AI and its societal impact. AI is changing the way we work, live and solve challenges, but concerns about fairness, transparency and privacy are also growing. Ensuring AI ethics is about more than designing systems whose results can be trusted; it is about the way we design them, why we design them, and who is involved in designing them. To develop and use AI responsibly, we need to work towards technical, societal, institutional and legal methods and tools that provide concrete support to AI practitioners, as well as awareness and training that enable everyone to participate, ensuring the alignment of AI systems with our societies’ principles and values.
Virginia Dignum is Professor of Responsible Artificial Intelligence at Umeå University, Sweden, and is associated with TU Delft in the Netherlands. She is the director of WASP-HS, the Wallenberg Program on Humanities and Society for AI, Autonomous Systems and Software, the largest Swedish national research programme on fundamental multidisciplinary research into the societal and human impact of AI. She is a member of the Royal Swedish Academy of Engineering Sciences and a Fellow of the European Artificial Intelligence Association (EurAI). Her current research focuses on the specification, verification and monitoring of ethical and societal principles for intelligent autonomous systems. She is committed to policy and awareness efforts towards the responsible development and use of AI: as a member of the European Commission’s High-Level Expert Group on Artificial Intelligence, the working group on Responsible AI of the Global Partnership on AI (GPAI), and the World Economic Forum’s Global Artificial Intelligence Council; as lead for UNICEF's guidance on AI and children; on the Executive Committee of the IEEE Initiative on Ethically Aligned Design; and as a founding member of ALLAI, the Dutch AI Alliance. Her book “Responsible Artificial Intelligence: developing and using AI in a responsible way” was published by Springer Nature in 2019.

Shimon Whiteson
University of Oxford, UK

Factored Value Functions for Cooperative Multi-Agent Reinforcement Learning

Cooperative multi-agent reinforcement learning (MARL) considers how teams of agents can coordinate their behaviour to efficiently achieve common goals. A key challenge is how to learn cooperative policies in a centralised fashion that can nonetheless be executed in a decentralised way. In this talk, I will discuss QMIX, a simple but powerful cooperative MARL algorithm that relies on factored value functions both to make learning efficient and to ensure decentralisability. Extensive results on the StarCraft Multi-Agent Challenge (SMAC), a benchmark we have developed, confirm that QMIX outperforms alternative approaches, though further analysis shows that this is not always for the reasons we expected.
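The factoring the abstract refers to can be made concrete. Below is a minimal PyTorch sketch of a QMIX-style monotonic mixing network: hypernetworks map the global state to non-negative mixing weights, so the joint value Q_tot is monotonic in each per-agent value and greedy decentralised execution stays consistent with centralised training. Layer sizes and names here are illustrative assumptions, not taken from the talk or from the SMAC codebase.

```python
import torch
import torch.nn as nn

class MonotonicMixer(nn.Module):
    """Sketch of a QMIX-style mixing network (illustrative sizes/names).

    Hypernetworks condition the mixing weights on the global state s,
    which is available during centralised training. Taking abs() of the
    weights enforces dQ_tot/dQ_a >= 0, so each agent can greedily
    maximise its own Q_a at execution time without access to s.
    """

    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.n_agents, self.embed_dim = n_agents, embed_dim
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(
            nn.Linear(state_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, 1)
        )

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents) Q-values of the actions taken
        # state:    (batch, state_dim) global state (training only)
        b = agent_qs.size(0)
        w1 = torch.abs(self.hyper_w1(state)).view(b, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(b, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(agent_qs.view(b, 1, -1), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(b, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(b, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(b, 1)  # Q_tot
```

Because the mixing weights are constrained to be non-negative, the argmax of Q_tot factorises into per-agent argmaxes, which is what makes decentralised execution possible after centralised training.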
Shimon Whiteson is a Professor of Computer Science at the University of Oxford and the Head of Research at Waymo UK. His research focuses on deep reinforcement learning and learning from demonstration, with applications in robotics and video games. He completed his doctorate at the University of Texas at Austin in 2007. He spent eight years as an Assistant and then an Associate Professor at the University of Amsterdam before joining Oxford as an Associate Professor in 2015. He was awarded a Starting Grant from the European Research Council in 2014, a Google Faculty Research Award in 2017, and a JPMorgan Faculty Award in 2019.

Lucia Specia
Imperial College London, UK

Multimodal Simultaneous Machine Translation

Simultaneous machine translation (SiMT) aims to translate a continuous input text stream into another language with the lowest latency and highest quality possible. Translation therefore has to start from an incomplete source text, which is read progressively, creating the need for anticipation. In this talk I will present work where we seek to understand whether the addition of visual information can compensate for the missing source context. We analyse the impact of different multimodal approaches and visual features on state-of-the-art SiMT frameworks, including fixed and dynamic policy approaches using reinforcement learning. Our results show that visual context is helpful and that visually grounded models based on explicit object region information perform best. Our qualitative analysis illustrates cases where only the multimodal systems are able to translate correctly from English into gender-marked languages, and to handle differences in word order, such as adjective-noun placement between English and French.
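To make the policy distinction concrete, here is a minimal Python sketch of the classic fixed "wait-k" policy (read k source tokens, then alternate WRITE and READ). The `translate_prefix` function is a hypothetical stand-in for whatever underlying (possibly multimodal) translation model is used; a dynamic policy, such as one learned with reinforcement learning, would replace this fixed schedule with a learned READ/WRITE decision at each step.

```python
def wait_k_translate(source_stream, translate_prefix, k=3):
    """Fixed wait-k simultaneous translation policy (illustrative sketch).

    source_stream:    iterable yielding source tokens as they arrive
    translate_prefix: hypothetical model call mapping the source tokens
                      read so far and the current target prefix to the
                      next target token, or None to stop
    """
    read, target = [], []
    stream = iter(source_stream)
    source_done = False
    while True:
        # READ: stay k tokens ahead of what has been written,
        # until the source stream is exhausted.
        while not source_done and len(read) < len(target) + k:
            tok = next(stream, None)
            if tok is None:
                source_done = True
            else:
                read.append(tok)
        # WRITE: emit one target token from the partial source.
        next_tok = translate_prefix(read, target)
        if next_tok is None:  # model signals end of translation
            return target
        target.append(next_tok)
```

A lower k reduces latency but forces the model to anticipate more of the missing source context, which is where the visual grounding studied in this talk can help.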
Lucia Specia is Professor of Natural Language Processing at Imperial College London, with part-time appointments at the University of Sheffield and Dublin City University. Her research focuses on various aspects of data-driven approaches to language processing, with a particular interest in multimodal and multilingual context models and in work at the intersection of language and vision. Her work has been applied to tasks such as machine translation, image captioning, text adaptation and quality estimation. She is also interested in making machine translation useful for end users, where tools such as quality estimation and automatic post-editing play a big role. She is the recipient of the MultiMT ERC Starting Grant on Multimodal Machine Translation (2016-2021) and is currently involved in research projects on in-browser machine translation and quality estimation, as well as multilingual referential grounding. In the past she worked as a Senior Lecturer at the University of Wolverhampton (2010-2011) and as a research engineer at the Xerox Research Centre in France (2008-2009; now Naver Labs). She received her PhD in Computer Science from the University of São Paulo, Brazil, in 2008.

Fredrik Heintz
Linköping University, Sweden

Trustworthy Human-Centric AI -- The European Approach

Europe has taken a clear stand: we want AI, but we do not want just any AI. We want AI that we can trust and that puts people at the centre. This talk presents the European approach to Trustworthy Human-Centric AI, covering the main EU initiatives and the European AI ecosystem, including the major projects and organisations. The talk will also touch upon some of the research challenges related to Trustworthy AI from the H2020 ICT-48 network TAILOR, whose goal is to develop the scientific foundations for Trustworthy AI by integrating learning, optimisation, and reasoning. The talk is based on my involvement in many of the ongoing initiatives, including CLAIRE, EurAI, ADRA, and TAILOR.
Dr. Fredrik Heintz is an Associate Professor of Computer Science at Linköping University, Sweden, where he leads the Reasoning and Learning lab. His research focuses on artificial intelligence, especially Trustworthy AI and the intersection of knowledge representation and machine learning. He is the Director of the Graduate School for the Wallenberg AI, Autonomous Systems and Software Program (WASP), coordinator of the TAILOR ICT-48 network developing the scientific foundations of Trustworthy AI, and President of the Swedish AI Society. He is also very active in education, both at the university level and in promoting AI, computer science and computational thinking in primary, secondary and professional education. He is a Fellow of the Royal Swedish Academy of Engineering Sciences (IVA).