Invited talks


Rules of Social Nature for MAS

“Aula Magna” room
October 29th, 2019 | 14:30-15:30
Chair: Mehdi Dastani

For many years, people have been using norms to create flexible regulations for the interactions between agents in MAS. Just as concepts from human psychology and biology have been inspirational for modelling agent reasoning and the evolution of MAS, we have drawn inspiration from social science to model concepts such as norms. However, these concepts have proven quite resistant to any kind of uniform representation and/or use. Moreover, other social mechanisms (such as conventions, practices, etc.) exist that can regulate social behaviour. In this presentation, I will discuss the reasons for the inherent difficulty of modelling and implementing all these concepts in a uniform way. Based on this analysis, I will indicate in which kinds of situations each of these types of social rules can be most useful, and describe some first steps towards a methodology for using them properly in MAS.

Frank Dignum is Professor of Socially Aware AI at Umeå University, Sweden, and is associated with Utrecht University in The Netherlands and the Czech Technical University in Prague. He is a fellow of the European Association for Artificial Intelligence (EurAI). He has been active in the fields of Multi-Agent Systems and Social Simulation for 30 years. He has not only published in the most important conferences in AI, but has also actively organized major conferences such as AAMAS and ECAI, as well as many specialized workshops. His main research interests lie in the social aspects of MAS. He has made major contributions to the formalization and implementation of norms for MAS, and has also looked at the role of communication in MAS. In recent years he has published on the use of culture, values, and norms in agent-based social simulations.



Law, Ethics, and the Governance of AI

“Aula Magna” room
October 30th, 2019 | 11:30-12:30
Chair: Beishui Liao

In order to boost investment and define ethical guidelines in the field of Artificial Intelligence (AI), the European Commission set up three different high-level expert groups (HLEGs) on 25 April 2018. They focus on (i) the ethics of AI; (ii) possible amendments to the directive on liability for defective products; and (iii) a general framework for liability and new technologies (the 'New Technologies Formation'). In light of the documents and reports of these HLEGs, the aim of this paper is to grasp how their proposals may fit different forms of legal regulation and governance. A first option is to consider the ethical principles of current declarations and guidelines as the principles that will be embraced and enforced through the top-down tools of the law. A second possibility is to conceive the work of such HLEGs as the way in which we should complement and strengthen existing regulations. A third prospect has to do with further forms of governance and legal regulation, such as the EU model of co-regulation in the field of data protection, which should be contemplated in addition, or as an alternative, to the top-down approaches of the previous stances. A final choice regards the context-dependency of the issues at stake, in both applied ethics and legal regulation, so that the focus should be on the specific normative challenges brought about by, say, self-driving cars, drones, robot doctors, and the like. As a result of these multiple options, it follows that current discussions on the regulation of AI do not only concern the set of rules, values, principles, standards, protocols, or guidelines that should govern it. We also have to take a stance on the kind of governance model within which such rules, values, principles, etc. should operate.

A former lawyer, professor of Jurisprudence at the Department of Law of the University of Turin (Italy), and Vice President of the Italian Association of Legal Informatics, Ugo Pagallo is faculty at Georgetown Law School's Center for Transnational Legal Studies (CTLS) in London, UK, and a member of the Expert Group on Liability and New Technologies (New Technologies Formation) set up by the EU Commission. Ugo also chairs the 2019 AI4People project, the first global forum in Europe on the Social Impacts of Artificial Intelligence, set up by the European Institute for Science, Media, and Democracy (Atomium) in Brussels. He is professor in the Joint International Doctoral (PhD) degree programme in Law, Science and Technology, an interdisciplinary integrated doctorate within the EU's Erasmus Mundus Joint Doctorate (EMJD) programmes. Author of eleven monographs and numerous essays in scholarly journals and book chapters, his main interests are Artificial Intelligence & law, network and legal theory, governance, and information technology law (especially data protection law and copyright).