[rede.APPIA] CFP: Thematic Track on MultiAgent Systems: Theory and Applications (MASTA@EPIA 2020)

**Our apologies if you receive multiple copies of this CFP** 
 
MASTA 2020: 11th Thematic Track on MultiAgent Systems: Theory and Applications
Oeiras, Portugal, September 7-9, 2020
Conference website https://epia2020.inesc-id.pt/?page_id=101
Submission deadline April 15, 2020
Paper acceptance notification May 31, 2020
Camera-ready deadline June 15, 2020

Research on Multi-Agent Systems (MAS) has a vigorous, exciting tradition and has led to important theories and systems. However, new trends and concerns are still emerging and form the basis of current and future research. The 11th thematic track on “MultiAgent Systems: Theory and Applications”, to take place at EPIA 2020, will provide a discussion forum on the most recent and innovative work in all areas of MAS.

The unifying focus of the thematic track will be on methodological aspects. Both theoretical and practical research should be situated in the context of existing or new methodologies. This will not preclude any specific topic, but preference will be given to research work that establishes some connection with the methodological aspects or to successful applications built upon some methodology.

List of Topics

Topics of interest include, but are not limited to:

  • Agent theories, architectures and models
  • Agent-based systems Interoperability
  • Agreement technologies
  • Applications of agents and MAS (industrial and commercial)
  • Artificial social systems
  • Automated negotiation and computational argumentation
  • Cognitive models, including emotions and philosophies
  • Communication: languages, semantics, protocols, and conversations
  • Cooperation, coordination and teamwork in MAS
  • Ethical and legal issues raised by autonomous agents and MAS
  • Formal methods for modelling agents and agent-based systems
  • Human-agent interaction
  • Learning in MAS
  • Multiagent evolution, emergent behavior and adaptation
  • Multiagent modelling and simulation
  • Scalability and performance of MAS
  • Societal and ethical issues: organizations, institutions, norms, socio-technical systems
  • Trust, reputation, privacy and security

Submission and Reviewing

All papers should be submitted in PDF format through the EPIA 2020 EasyChair submission page. Prospective authors should select the thematic track to which their paper is to be submitted. Papers should be prepared according to the Springer LNCS format, with a maximum of 12 pages. Submitted papers will be subject to a double-blind review process and will be peer-reviewed by at least three members of the Program Committee. It is the responsibility of the authors to remove names and affiliations from the submitted papers, and to take reasonable care to ensure anonymity during the review process.

Authors should consult Springer’s authors’ guidelines and use their proceedings templates, either for LaTeX or for Word, for the preparation of their papers. Springer encourages authors to include their ORCIDs in their papers. In addition, if the paper is accepted, the corresponding author, acting on behalf of all of the authors of that paper, must complete and sign a Consent-to-Publish form. The corresponding author signing the copyright form should match the corresponding author marked on the paper. Once the files have been sent to Springer, changes relating to the authorship of the papers cannot be made.

Proceedings and Presentations

Accepted papers will be included in the conference proceedings (a volume of Springer’s LNAI – Lecture Notes in Artificial Intelligence), provided that at least one author is registered for EPIA 2020 by the early registration deadline. EPIA 2020 proceedings are indexed in Thomson Reuters ISI Web of Science, Scopus, DBLP and Google Scholar.

Each accepted paper must be presented by one of the authors in the track session.

Committees

Organizing committee

Steering committee

Program Committee

  • Adriana Giret, Universitat Politècnica de València, Spain
  • Alberto Sardinha, University of Lisbon, Portugal
  • Alejandro Guerra-Hernández, Universidad Veracruzana, Mexico
  • Andrea Omicini, Alma Mater Studiorum–Università di Bologna, Italy
  • Antonio J. M. Castro, LIACC, University of Porto, Portugal
  • Carlos Carrascosa, GTI-IA DSIC Universidad Politecnica de Valencia, Spain
  • Carlos Martinho, University of Lisbon, Portugal
  • Daniel Castro Silva, FEUP-DEI / LIACC, Portugal
  • Dave De Jonge, IIIA-CSIC, Spain
  • Diana Adamatti, Universidade Federal do Rio Grande (FURG), Brazil
  • Francisco Grimaldo, Departament d'Informàtica – Universitat de València, Spain
  • Henrique Lopes Cardoso, University of Porto, Portugal
  • Javier Carbo, Univ. Carlos III of Madrid, Spain
  • Joao Leite, Universidade NOVA de Lisboa, Portugal
  • John-Jules Meyer, Utrecht University, The Netherlands
  • Jordi Sabater Mir, IIIA-CSIC, Spain
  • Jorge Gomez-Sanz, Universidad Complutense de Madrid, Spain
  • Juan Carlos Burguillo, University of Vigo, Spain
  • Juan Corchado, University of Salamanca, Spain
  • Lars Braubach, University of Hamburg, Germany
  • Luís Correia, Universidade de Lisboa, Portugal
  • Luis Macedo, University of Coimbra, Portugal
  • Luís Nunes, Iscte – Instituto Universitário de Lisboa, Portugal
  • Marin Lujak, IMT Lille Douai, France
  • Michael Ignaz Schumacher, University of Applied Sciences Western Switzerland, Switzerland
  • Olivier Boissier, Mines Saint-Etienne, Institut Henri Fayol, Laboratoire Hubert Curien, France
  • Paulo Leitão, Polytechnic Institute of Bragança, Portugal
  • Paulo Novais, University of Minho, Portugal
  • Rafael H. Bordini, Pontifícia Universidade Católica do Rio Grande do Sul, Brazil
  • Ramon Hermoso, University of Zaragoza, Spain
  • Reyhan Aydogan, Delft University of Technology, Turkey
  • Rosa Vicari, Universidade Federal do Rio Grande do Sul, Brazil
  • Viviane Silva, IBM Research Brazil, Brazil

Venue

The conference will be held at Instituto Superior Técnico – TagusPark, Oeiras, Portugal.

[rede.APPIA] FROM THE MACHINERY OF MORALITY TO MACHINE MORALS – OTC


FROM THE MACHINERY OF MORALITY TO MACHINE MORALS

Luís Moniz Pereira
Laboratory for Computer Science and Informatics (NOVA-LINCS)
Faculty of Sciences and Technology
NOVA University of Lisbon

Human beings have always been aware of the risks associated with knowledge and the technologies built upon it. Signs and warnings of these dangers appear not only in Greek mythology, but also in the founding myths of the Judeo-Christian religions. However, such warnings and fears have never made more sense than they do today. This results from the emergence of machines capable of performing cognitive functions that until recently were carried out exclusively by human beings. The cognitive revolution brought about by the development of AI, in addition to the technical problems associated with its design and conceptualization, raises social and economic problems with a direct impact on humanity in general, and on its constituent groups. That is why addressing it from the moral point of view is urgent and imperative. The ethical problems to be considered are of two kinds: on the one hand, those associated with the type of society we want to promote through automation, complexification and the power of data processing available today; on the other hand, how to program machines for decision making according to moral principles acceptable to the human beings who share knowledge and action with them.[1]

In the Hellenistic period (323-31 BC), Hero of Alexandria and other brilliant Greek engineers built a variety of machines, powered either hydraulically or pneumatically. The Greeks recognized that automata and other artefacts with natural shapes – imaginary or real – could be both harmless and dangerous. They could be used for work, sex, entertainment or religion, or to inflict pain or death. Clearly, biotechnology, real and imaginary, already fascinated the Ancients[2].

Today’s deep learning algorithms allow AI computers to extract patterns from vast data, extrapolate them to new situations and make decisions without any human guidance. Inevitably, AI entities will develop the ability to question themselves, and will find answers to questions that they themselves discover. Today’s computers have already shown how to develop altruism, but also how to deceive others on their own initiative. The question therefore makes perfect sense: why an ethics for machines?

  • Because computational agents have become more sophisticated, more autonomous, they act in groups, and form populations that include humans.
  • They are being developed in a variety of areas, where complex issues of responsibility require greater attention, namely in situations of ethical choice.
  • As their autonomy is increasing, the requirement that they function responsibly, ethically, and safely, is an increasing concern.

Autonomous and judicious deliberation calls for rules and principles of a moral nature, applicable to the relationships between machines, to the relationships between machines and human beings, and to the consequences and results of the entry of these machines into the world of work and society in general. The present state of development of AI, both in its ability to elucidate the cognitive processes emerging in evolution and in its technological aptitude for the conception and production of computer programs and intelligent artefacts, is the greatest intellectual challenge of our time. The complexity of these issues is summarized in a scheme, shown below, called “The Carousel of the Ethical Machinery”. In summary, we are at a crossroads: that of AI, the ethics of machines, and their social impact.

The topic of morals has two major domains. The first is the “cognitive” domain: the need to clarify how we think in moral terms. In order to behave morally, it is necessary to consider possibilities: should I behave in this way, or rather in another? It is necessary to evaluate the various scenarios, the various hypotheses; to compare them to see which are the most desirable, what their respective consequences are, and what their side effects are. This ability is essential for living in society, for the domain of the “collective”.

I studied certain cognitive abilities to see whether they promote moral cooperation in a population of computational beings, i.e. of computer programs that live together. Here, a program is a set of strategies defined by rules: in a given situation, a program performs a certain action dictated by its current strategy, while the other programs perform actions dictated by their respective strategic rules. It is as if they were agents living together, each with possibly different options and objectives. We study whether, and how, this population can evolve in a good gregarious sense, and whether that sense is stable, i.e., whether it is maintained over time[3].

A very important instrument for this investigation is so-called Evolutionary Game Theory[4], which consists of seeing how, in a game with well-defined rules, a population evolves through social learning. Society is governed by a set of precepts of group functioning, say the rules of a game in which it is permitted to do certain things, but not others. The game indicates the gains or losses of each player in each move, depending on how they play. Social learning means that a player starts to imitate the strategy of another whose results indicate greater success. Having defined certain rules, how does the social game evolve? Here we could enter the field of ideology, but we will not go that far. We are still studying the viability of morals. We assume that morality is evolutionary, that it developed with our species. Over hundreds of thousands of years, as we have changed, we have been improving the rules of coexistence, along with our own intellectual capacities and our ability to use those rules. Not always successfully: social rules should be such that we all benefit, although there is always a temptation for some to want, unfairly and unjustly, more than others – to enjoy the benefits without paying the costs. This is the essential problem of cooperation: how it becomes possible and, at the same time, how those who want to abuse it are kept under control. For our species to get to where it is today, evolution itself had to keep selecting us in terms of a morality of coexistence beneficial to gregariousness.
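To make the social-learning dynamic concrete, here is a minimal sketch, assuming a one-shot Prisoner’s Dilemma and the common Fermi imitation rule; the payoff values, population size and parameters are illustrative assumptions, not the specific models of the author’s work:

```python
import math
import random

# Minimal sketch of social learning in Evolutionary Game Theory.
# Illustrative assumptions throughout: payoffs, population size, parameters.
# Strategies: 0 = cooperate, 1 = defect. PAYOFF[me][opponent] is the one-shot
# Prisoner's Dilemma payoff: reward 3, sucker 0, temptation 5, punishment 1.
PAYOFF = [[3.0, 0.0],
          [5.0, 1.0]]

N = 100        # population size
BETA = 1.0     # intensity of selection of the Fermi imitation rule
STEPS = 50_000

def avg_payoff(strategy: int, n_coop: int) -> float:
    """Expected payoff of `strategy` against a random co-player drawn from
    a population with n_coop cooperators (excluding the focal agent)."""
    others_coop = n_coop - (1 if strategy == 0 else 0)
    p = others_coop / (N - 1)
    return p * PAYOFF[strategy][0] + (1.0 - p) * PAYOFF[strategy][1]

def social_learning_step(pop: list[int]) -> None:
    """A random agent compares itself with a random role model and imitates
    it with a probability that grows with the model's payoff advantage."""
    i, j = random.randrange(N), random.randrange(N)
    if pop[i] == pop[j]:
        return
    n_coop = pop.count(0)
    diff = avg_payoff(pop[j], n_coop) - avg_payoff(pop[i], n_coop)
    if random.random() < 1.0 / (1.0 + math.exp(-BETA * diff)):
        pop[i] = pop[j]  # imitate the more successful strategy

population = [random.choice((0, 1)) for _ in range(N)]
for _ in range(STEPS):
    social_learning_step(population)
print("final fraction of cooperators:", population.count(0) / N)
```

Run as is, defectors typically take over the population: exactly the failure of cooperation whose possible remedies (guilt, commitments, reputation) the text goes on to discuss.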

The problem of progress in cooperation and the emergence of collective behaviour, spanning disciplines as diverse as Economics, Physics, Biology, Psychology, Political Science, Cognitive Science and Computing, is still one of the greatest interdisciplinary challenges that science faces today. Mathematical and simulation techniques from Evolutionary Game Theory have been shown to be useful for studying such subjects. In order to better understand the evolutionary mechanisms that promote and maintain cooperative behaviour in various societies, it is important to take into account the intrinsic complexity of the participating individuals, that is, their intricate cognitive processes in decision making. The outcome of many social and economic interactions is determined not only by the predictions that individuals make about the behaviour and intentions of other individuals, but also by the cognitive mechanisms that those others adopt to make their own decisions.

Research based on abstract mathematical models built for this purpose has shown that the way the decision process is modelled strongly influences the equilibrium reached in the dynamics of collective collaboration. Evidence abounds showing that humans (and many other species) have complex cognitive abilities: theory of mind; recognition of intentions; hypothetical, counterfactual and reactive reasoning; emotional guidance; learning; preferences; commitments; and morality. In order to better understand how all of these mechanisms make cooperation possible, they need to be modelled within the context of evolutionary processes. In other words, we must try to understand how the cognitive systems used to explain human behaviour – successfully developed by Artificial Intelligence and the Cognitive Sciences – fit with Darwin’s evolutionary theory, and in this way understand and justify their appearance in terms of the presence, or absence, of a dynamic of cooperation.

It must be emphasized, however, that we are facing Terra Incognita. There is an entire continent to be explored, the contours of which we can only glimpse. We do not yet know enough about our own morals, nor is our knowledge accurate enough to be programmed into machines. In fact, there are several ethical theories, antagonistic to each other, but which also complement each other. Philosophy and Jurisprudence study Ethics, that is, the problem of defining a system of values articulated in principles. Each particular ethics is the substrate that supports the rules and legislation which justify, in each context, the specific norms that will be applied and used on the ground. As a result, and depending on cultures and circumstances, moral rules are arrived at: in each context, we use abstract ethical principles to arrive at concrete moral rules. In practice, a morality, a set of moral rules, results from a historical, contextual and philosophical combination of ethical theories that have evolved over time.

The Carousel of the Ethical Machinery, below, is a way of summarizing the complexity of the problem of moral machinery. In the central carousel, one wishes to identify the factors that concern what to do, or how to act. “What to do” is surrounded by other carousels, each having to do with one facet of the ethics of machines.

Regarding Ethical Use, we have already heard of fake news and of algorithms that influence elections: these are misuses of machines, which must be subject to moral rules. For example, it is a negative, immoral practice for a program to pretend to be a human being. There are, of course, other examples of immoral uses of machines. Among the darkest is that of drones with the autonomous capacity to kill individuals. It should be remembered, therefore, that the question of misuse increasingly depends on the capacity of the machine itself: precisely because it has growing autonomy, the issues of its moral use are correspondingly amplified.

This allows us to think that machines should also protect us from their unethical use by humans. Suppose someone orders a program to execute an instruction whose action would cause harm to human beings – the program itself could refuse to do so[5]. That is the second reason why we need to introduce morals into machines: so that they do not comply with everything they are merely programmed to do. We do not want the machine to be in the situation of simply stating “I did it because I was told to” – a metaphor for the position of the Nazi war criminals at Nuremberg, who claimed “I only followed orders, did what I was told”, as if they had no critical sense and could not disobey. The challenge arises of building machines capable of disobeying when justified.

Another circle of the carousel is that of Human Values. Basically, we intend to give machines our values, because they will live with us. These will have to be ethically reconcilable with the population where they are active.

[Figure: The Carousel of the Ethical Machinery]

Legislation is highlighted in another circle of the carousel because, at the end of the process, everything will have to be translated into laws, norms and standards about what is allowed or prohibited. Just as cars must comply with regulations on pollution, machines will also have to meet certain criteria, approved by an entity qualified to do so. It is often asked who is responsible if a driverless car runs over a pedestrian when it might have avoided doing so: the owner? the manufacturer? But the Legislator is rarely mentioned. Yet someone had to declare “this driverless car is road-fit”. It will be up to a government to legislate which tests a driverless car must pass. If it turns out that such tests were not sufficiently exhaustive, the entity that authorized the circulation of those vehicles will also be responsible.

Another circle is that of Technical Issues. Everything always involves the effective construction of the machines, for whatever purpose, and not everything is technically possible. For example, we are not yet able to produce computerized proofs that a machine will not do ethically incorrect things. Not to mention the cases in which a hacker can enter the system and force the machine to do wrong things – a security problem to be solved technically.

Finally, and not least, are the Social Impacts of machines with autonomy. When we refer to machines we are talking about both robots and software. The latter is much more dangerous, as it spreads and reproduces easily anywhere in the world. A robot is much more difficult to reproduce, implies a much higher cost, and brings the material limitations inherent to the possession of a volumetric body. With regard to the social impact, it is expected that soon we will have robots cooking hamburgers and serving us at the table, with the resulting implications for the labour market. Although not much intelligence is required for these tasks, the challenges inherent in fine eye-brain-hand coordination are not to be overlooked: something that machines do not yet have to the same degree as humans, but on that front, too, robots are advancing very fast. When it comes to software the issue is more worrying because, deep down, programs are reaching cognitive levels that until now were our monopoly. That is why people feel much more concerned: until now there were things that only a human being knew how to do. However, little by little, machines have started to play chess, to make medical diagnoses, and so on; increasingly they will perform more sophisticated mental activities, and will, successively, learn how to do so.

This door that opens creates, for the first time, an open competition with humans. Competition that could cause – depending on social organization, ideology, and politics – the replacement of humans by machines. Because, to do the same thing, there will be instruments cheaper than the human. Hence, as the human becomes expendable, wages will decrease, and machine owners will become ever wealthier. The present wealth gap, which is increasing, with the rich getting richer and the poor getting poorer, is a gap that AI is already deepening and will widen further; to the point that a new social contract will be required, under penalty of cataclysm. The relationship between capital and labour, and the way the two are balanced, will have to be completely overhauled. There is a risk that, if this does not happen, asymmetries in wealth will cause, sooner or later, great revolt, insurrection, and social breakdown. This will happen when the caste system induced by the advances of AI generates its own implosion.

Currently, we already have robots in hospitals, we have drones that fly by themselves, we have autonomous motorboats, we have driverless cars, and we even have interactive moral games, capable of teaching morals. In a game I developed, a robot aims to save a princess, combining several ethical approaches to do so[6]. This exemplifies that, in practice, morality is not one, but rather a multiple combination. We ourselves do not follow exclusively the knight-errant’s morality, or utilitarian morality, or Kantian morality, or Gandhi’s morality. Our ethics are an evolving mixture of them. This game’s program shows how the robot’s morality evolves. We must, therefore, assume that the programming of morals in machines must allow for its own evolution[7]. There is no fixed, frozen morality. Morality is an evolutionary thing that, throughout the history of the species, both remote and recent, has been developing collectively.
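A minimal sketch of that idea of evolving, override-capable moral rules follows – a hypothetical illustration inspired by note [7], not the actual logic-programming implementation of the game:

```python
# Hypothetical sketch: a moral rule base that can grow over time, where a
# newer rule supplants older ones when their verdicts contradict. All rule
# names and situations here are invented for illustration.
from typing import Callable, Optional

Rule = Callable[[dict], Optional[bool]]  # situation -> permit/forbid/abstain

rules: list[Rule] = []

def add_rule(rule: Rule) -> None:
    """Later rules take precedence: prepend so they are consulted first."""
    rules.insert(0, rule)

def permitted(situation: dict) -> bool:
    """The first rule with an opinion wins; the cautious default is to forbid."""
    for rule in rules:
        verdict = rule(situation)
        if verdict is not None:
            return verdict
    return False

# An initial utilitarian-style rule: act if it saves more lives than it risks.
add_rule(lambda s: s["lives_saved"] > s["lives_risked"]
         if "lives_saved" in s else None)
# A later deontological addition that supplants it whenever lying is involved.
add_rule(lambda s: False if s.get("requires_lying") else None)

print(permitted({"lives_saved": 2, "lives_risked": 1}))  # True
print(permitted({"lives_saved": 2, "lives_risked": 1,
                 "requires_lying": True}))               # False
```

Here the later rule overrides the earlier one only where the two contradict, while the older rule still decides the cases the new one is silent about; the rule base can keep growing in this way, which is one simple sense in which a programmed morality can evolve.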

It is clear that machines are increasingly autonomous, and we have to ensure that they can live with us on our terms and with our rules. There is, therefore, a new ethical paradigm that says that morality must also be computational: we have to be able to program morals. This has a positive side, because when programming morals into machines, we come to understand our own human ethics better.

Consider the following example, from scientific work on guilt. When guilt is introduced into populations of computational agents, they come to partake of this ability: to feel remorse when they do something that harms another, resulting in a kind of self-punishment and a change in behaviour, in order to avoid future blame. It is not guilt in the existential, Freudian sense, but in the more pragmatic sense of not being satisfied with what one did when harming others.

Insert a dose of guilt – neither too much nor too little – into just a few agents in a population interacting inside a computer, in an evolutionary game. Without this guilt component, the majority will tend to play selfishly, each wanting to win more than the others, thus failing to reach a level where everyone could win even more. But this desirable result becomes possible with an initial dose of guilt that changes behaviour and spreads, as a good strategy, to the entire population. We have shown, mathematically, that a certain amount of a guilt component is advantageous and promotes cooperation. But also that one should not feel guilty towards those who feel no guilt in turn, as that would be allowing oneself to be abused[8].
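As a reduced, hedged illustration of that result (not the model of the cited paper), the assumed effect of guilt can be folded into average payoffs: a guilt-prone strategy G will not exploit cooperators, since the self-punishment would outweigh the temptation gain, and, feeling no guilt towards the guiltless, defects against defectors rather than letting itself be abused. Under the same Fermi imitation dynamic as in the earlier sketch:

```python
import math
import random

# Reduced sketch with assumed payoffs (NOT those of Pereira et al., AAMAS 2017):
# C = unconditional cooperator, D = unconditional defector, G = guilt-prone.
# Guilt is folded into the average payoffs of repeated interaction: G
# cooperates with C and with other Gs (exploiting them would trigger costly
# self-punishment) but defects against D, towards whom it feels no guilt.
AVG = {('C', 'C'): 3, ('C', 'G'): 3, ('C', 'D'): 0,
       ('G', 'C'): 3, ('G', 'G'): 3, ('G', 'D'): 1,
       ('D', 'C'): 5, ('D', 'G'): 1, ('D', 'D'): 1}

BETA, STEPS = 1.0, 100_000

def payoff(s: str, counts: dict, n: int) -> float:
    """Average payoff of strategy s against the rest of the population."""
    return sum(AVG[(s, t)] * (c - (1 if t == s else 0))
               for t, c in counts.items()) / (n - 1)

def evolve(pop: list) -> dict:
    """Fermi imitation dynamic, as in the previous sketch."""
    for _ in range(STEPS):
        i, j = random.randrange(len(pop)), random.randrange(len(pop))
        if pop[i] == pop[j]:
            continue
        counts = {s: pop.count(s) for s in 'CGD'}
        diff = payoff(pop[j], counts, len(pop)) - payoff(pop[i], counts, len(pop))
        if random.random() < 1.0 / (1.0 + math.exp(-BETA * diff)):
            pop[i] = pop[j]
    return {s: pop.count(s) for s in 'CGD'}

print("no guilt:        ", evolve(['C'] * 50 + ['D'] * 50))
print("a dose of guilt: ", evolve(['C'] * 35 + ['D'] * 35 + ['G'] * 30))
```

Typically the first population fixates on defection, while in the second the guilt-prone strategy spreads until cooperation prevails; and because G refuses to be exploited by the guiltless, defectors cannot re-invade.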

This is, in fact, the great central abstract problem of morality and gregariousness, which naturally also affects the case of machines: how to avoid the pure egoism of agents who opportunistically want to take advantage of the gregariousness of others without, in turn, contributing to it? In other words, how can we demonstrate, through computational mathematical models, under what circumstances evolutionary gregariousness is possible, stable and advantageous? And how can we use the computer itself to better understand how the machinery of guilt works, within which ranges of which parameters, varying those parameters to understand how best to use them evolutionarily? When creating artificial agents that have a certain amount of guilt, we give, at the same time, arguments for guilt being a useful function, a result of our evolution.

As will have been realized, we are dealing with a subject of an interdisciplinary nature that lives halfway between Philosophy, Jurisprudence, Psychology, Anthropology, Economics, etc., in which inspiration drawn from those various fields is important. It is very important to pay attention to the fact that one of the problems we have is that Jurisprudence is not advancing sufficiently, in view of the urgency to legislate on moral machines. When legislation is made with respect to machines, we will have to start by defining and using new concepts, without which it will be impossible to make laws, as these must perforce appeal to the concepts of Jurisprudence. And it is relevant to recognize that the Legislator is very late in keeping up with technology. This is worrying because a common confusion persists: the notion that technical progress is the same as social progress. In fact, technical progress is not being accompanied by the desirable and concomitant social progress. Technical know-how should be used in the service of human values, and those values must be enjoyed equally by all, with the wealth created being fairly distributed.

History can give us important references and general lines of action. For example, the great progress of Greek civilization in its heyday, in the 5th and 4th centuries BC, was only possible because it was supported by a legion of slaves, without citizenship rights or the possibility of social ascension, drawn from conquered armies and foreigners. Now, similarly, we have the possibility of making ever greater use of the slave machines that are already here, to free us from the effort they can undertake. But we would like everyone to be freed, and to gain equally from it, through a fair distribution of the wealth produced by such machines. However, the opposite is happening. Machines replace people, resulting in increased profits for their owners, while the proper counterpart, a fair distribution of the additional wealth, is ever further from happening. Meanwhile, the universe of situations in which the human has no chance of competing with machines continues to grow. Hence a new social contract is indispensable, in which the relationship between labour and capital is reformulated and updated in light of the social impact of the new technologies, namely the increased sophistication of machines with cognition and autonomy.

If a machine is going to replace me, that must be done completely, even in social obligations. By working, I contribute to the Social Security that supports current retirees; I contribute to the National Health Service; I pay the income tax that makes governance and the development of the country possible; and so on. So, if a machine completely replaces me, eliminating me from a job whose activity it maintains, it must also pay the taxes that I was paying to support the current social contract. To replace has to mean replacing in all these respects!

It is not possible to reduce the problems of machine ethics to a code of ethics that computer engineers must follow, precisely because of the impact they have on human values, on social organization, and on our civilizational path. For this reason, the issue of values is inescapable, and it cannot be reduced to mere technical standards.

There are several concordant study reports from reputable entities, namely McKinsey & Company, the Pew Research Center, the OECD, PricewaterhouseCoopers, and others, which point to an additional 15 to 20% unemployment by 2030 attributable to AI alone. The topic of unemployment caused by AI in the AI superpowers themselves, which elsewhere will be even more serious, is well analysed in the recent book by Kai-Fu Lee[9].

If we do not take the right actions at present, we can imagine the outline of a future that will be far from promising. We must not forget that doctors are now training machines to read X-ray images, interpret analyses, examine symptoms and so on. Around the world, a multitude of highly trained professionals, from medicine to economics to law, are passing human knowledge to machines that will know how to replicate and use it. People are teaching those who are going to replace them.

The dangers of AI do not lie in the possibility of a Terminator appearing. The risks are embodied in the fact that, at this moment, rather simple machines are taking decisions that affect us. However, because we call them “smart machines”, people think they are doing a good job. The current over-selling of AI is pernicious. Moreover, the AI now being sold does not reach one tenth of what AI already is, or may in fact become. Serious AI is yet to come, and it will be much more sophisticated than most current programs, those using so-called deep learning. These are rather simplistic programs, and so much power should not be given to such simple machines. But since they replace humans, and will replace radiologists, car and truck drivers, call-centre employees, security guards in shopping centres, they are sold as a panacea.

The author is part of the project “Incentives for Safety Agreement Compliance in AI Race”[10], sponsored by the Future of Life Institute[11], a non-profit organization. The project, in the area of software safety, addresses the rush to market shown by companies that develop AI products. More specifically, it analyses the consequences of neglecting the safety requirements of these products. The urgency is such that safety is put aside, because it costs money and time, and delays arrival on the market ahead of competitors. The aim of the project is to establish rules of the game so that no one turns a blind eye to safety, as if it were not essential[12]. For this, regulatory and monitoring entities are needed, as well as a “National Ethics Committee for AI”, including Robotics, by analogy with the “National Bioethics Committee”. We cannot accept, as is heard in Europe and the USA, that the companies that make driverless cars are solely responsible, and that if a problem arises then we shall see. Governments would thereby not be responsible for the tests that such cars must undergo, simply delegating to the companies themselves. It is indeed noteworthy that, in the case of the recent accidents with the Boeing 737-Max aircraft[13], the American Federal Aviation Administration (FAA) delegated quality checks to Boeing itself!

In the European Union, responsibility for safety appears to be more disguised. A high-level committee on AI and Ethics was created to provide recommendations[14]. What it proposes, in summary, is to lay down “(…) some recommendations that development firms can follow” and to accredit “private audit firms that will inspect those firms”. We may, perhaps, fall into a scheme of auditors with interests in the entities they examine, since those entities will also commission studies from them. Recall the case of the banks and the 2008 financial crisis.

We will not close this text without briefly summarizing a proposed set of “terms of reference” to be dealt with by the AI scientific community, and not only by it:

  • We need to know more about our own moral facets to be able to pass them on to machines. However, we do not yet know enough about human morality. In this sense, it is important to foster its study by the Humanities and Social Sciences.
  • Morality is not just about avoiding evil, but also about how to work for the good. Greater good for more people. The problem of unemployment is inherent to this consideration.
  • Universities are the appropriate place to address all of these issues, due to their spirit of independence, their reasoning and discussion practice. And they harbour, in their faculties, the necessary interdisciplinarity.
  • We will not have machines with a general moral capacity anytime soon. We will have machines that know how to respect standards in a hospital, in a prison, and even the rules of war. These are even the most well-known and accepted worldwide. As they are well specified, they are less ambiguous and are closer to being programmable.
  • We will start by automating the standards and their exceptions, little by little, expanding the generality and the capacity of a machine to learn new standards, and to expand, evolving, its areas of competence, with all the essential security.

From the point of view of criteria for action, the morals bequeathed from the past, hung on a hook in the sky, are confronted with a new perspective on nascent moral systems, studied in the framework of evolutionary psychology and deepened through testable models in artificial scenarios, as computers now make possible. As the investigation progresses, we can better understand the processes inherent in moral decision, to the point that they can be “taught” to autonomous machines, made capable of manifesting ethical discernment.

In the field of Economics, there is a vast problematic associated with the impact on labour and its inherent dignity, as well as with the creation and distribution of wealth; that is, a whole reconfiguration of economic relations that will result, not only from the automation of routine activities, but fundamentally from the coming on the scene of robots and software that can replace doctors, teachers, or assistants in nursing homes (to single out professions where replacement of humans is not commonly regarded as easily feasible). Acknowledging this question is especially relevant, demanding a stance that will support the need for an updated social morality, and a renewed social contract.

The issue of computational morals thus comes into existence in a context in which the ecosystem of knowledge will be greatly enriched, as it will have to incorporate non-biological agents with the capacity to become active players in dimensions that, until now, were exclusively attributed to humans.

These being very difficult subjects, the sooner we start dealing with them, the better!

Lisbon, 15 February 2020

Acknowledgements

Thanks are due to Frederico Carvalho for the English translation, overseen by the author, from the Portuguese original, which is to be published in the e-book arising from the Diálogos Intergeracionais initiative:  https://www.cnc.pt/dialogos-intergeracionais-mesa-redonda-4/. Author support is acknowledged from grant RFP2-154 of the “Future of Life Institute”, USA; and from project FCT/MEC NOVA LINCS PEst UID/CEC/04516/2019 of the “Fundação para a Ciência e a Tecnologia”, Portugal.

References

[1] The topic is developed in our recent book in Portuguese Máquinas Éticas: Da Moral da Máquina à Maquinaria, by Luís Moniz Pereira & António Barata Lopes, published by NOVA.FCT Editorial, Lisbon: 2020.
Published also in English and titled Machine Ethics: From Machine Morals to the Machinery of Morality, by Luís Moniz Pereira & António Barata Lopes, Springer SAPERE series, vol. 53, Cham: 2020.

[2] We consulted Gods and Robots – Myths, Machines, and Ancient Dreams of Technology by Adrienne Mayor, Princeton, NJ: Princeton U. Press, 2018, and also A Brief Guide to the Greek Myths by Stephen P. Kershaw, London: Constable & Robinson Ltd., 2007.

[3] To deepen the subject, visit the author’s webpage at: https://userweb.fct.unl.pt//~lmp/publications/Biblio.html

[4] https://en.wikipedia.org/wiki/Evolutionary_game_theory

[5] This has to do with the First of the Three Laws of Robotics idealized by Isaac Asimov, condensed in the Zeroth Law: a robot may not cause harm to humanity or, by inaction, allow humanity to come to harm.

[6] Can be seen in the link: https://drive.google.com/file/d/0B9QirqaWp7gPUXBpbmtDYzJpbTQ/view?usp=sharing

Detailed explanation here: https://userweb.fct.unl.pt//~lmp/publications/online-papers/lp_app_mach_ethics.pdf

[7] The robot shows in a balloon what it is thinking, and it is shown how the user gives it new moral rules to add to the previous ones, sometimes supplanting them when there is a contradiction between them.

[8] Technical details here: L. M. Pereira, T. Lenaerts, L. A. Martinez-Vaquero, T. A. Han, Social Manifestation of Guilt Leads to Stable Cooperation in Multi-Agent Systems, in: Procs. 16th Intl. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2017), Das, S. et al. (Eds.), pp. 1422-1430, 8–12 May 2017, São Paulo, Brazil.

Here: https://userweb.fct.unl.pt/~lmp/publications/online-papers/guiltEGT.pdf

[9] Kai-Fu Lee, AI Superpowers – China, Silicon Valley, and the New World Order, NY: Houghton Mifflin Harcourt, 2018.

[10] https://drive.google.com/open?id=1j59rhP7op3nBpvaxpeCdaBVObJAbzWBJ

[11] https://futureoflife.org

[12] Project results here: T. A. Han, L. M. Pereira, T. Lenaerts, Modelling and Influencing the AI Bidding War: A Research Agenda, in: Procs. AAAI/ACM Conference on AI, Ethics, and Society (AIES 2019), 27-28 January 2019, Honolulu, Hawaii, USA. And here: https://userweb.fct.unl.pt/~lmp/publications/online-papers/AI race modelling.pdf

[13] To save costs. See: “Boeing’s 737-Max software outsourced to Rs 620-an-hour engineers”. Here: https://economictimes.indiatimes.com/industry/transportation/airlines-/-aviation/boeings-737-max-software-outsourced-to-rs-620-an-hour-indian-engineers/articleshow/69999513.cms?from=mdr

[14] https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines#Top

______________________________________________________________________________

About the Author


Luís Moniz Pereira is Emeritus Professor of Computer Science at the Universidade Nova de Lisboa (UNL), Portugal. He is a member of the NOVA Laboratory for Computer Science and Informatics (NOVA-LINCS) of the Informatics Department. In 2001 he was elected Fellow of the European Association of Artificial Intelligence (EurAI). In 2006 he was awarded a Doctor Honoris Causa by the Technische Universität Dresden. He has been a member of the Board of Trustees and Scientific Advisory Board of IMDEA, the Madrid Institute of Software Advanced Studies, since 2006. In 1984 he became the founding President of the Portuguese Association for Artificial Intelligence (APPIA). His research focuses on the representation of knowledge and reasoning, logic programming, cognitive sciences and evolutionary game theory. In 2019 he received the National Medal of Scientific Merit. More information, including other awards and his publications at: http://userweb.fct.unl.pt/~lmp/

Luís Moniz Pereira is a member of the Governing Bodies of OTC

[rede.APPIA] [CFP] EPIA 2020 – Artificial Intelligence in Power and Energy Systems

Dear colleagues,

 

We would like to invite you to submit a paper to the Thematic Track on Artificial Intelligence in Power and Energy Systems of EPIA 2020, to be held in Lisbon, Portugal between 7 and 9 September 2020 (https://epia2020.inesc-id.pt).

 

Thematic Track Organizers:
Zita Vale – Polytechnic of Porto (Portugal)
Tiago Pinto – Polytechnic of Porto (Portugal)
Pedro Faria – Polytechnic of Porto (Portugal)
Elena Mocanu – University of Twente (The Netherlands)
Decebal Constantin Mocanu – Technical University of Eindhoven (The Netherlands)

 

Important Deadlines:
Deadline for full paper submission: 15th April, 2020
Notification of acceptance: 31st May, 2020
Camera-Ready papers: 15th June, 2020
Conference: 7-9 September 2020

 

Submissions:

AIPES welcomes full-length papers (of up to 12 pages) and also short papers (up to 6 pages) demonstrating practical applications.

Authors should consult Springer’s authors’ guidelines and use their proceedings templates, either for LaTeX or for Word, for the preparation of their papers. Springer encourages authors to include their ORCIDs in their papers. In addition, the corresponding author of each paper, acting on behalf of all of the authors of that paper, must complete and sign a Consent-to-Publish form. The corresponding author signing the copyright form should match the corresponding author marked on the paper. Once the files have been sent to Springer, changes relating to the authorship of the papers cannot be made.

Accepted papers will be included in EPIA 2020 proceedings (a volume of Springer’s LNAI – Lecture Notes in Artificial Intelligence). EPIA 2020 proceedings are indexed in Thomson Reuters ISI Web of Science, Scopus, DBLP and Google Scholar.

Authors of selected papers will be invited to submit an extended and improved version to a special issue of the international journal Energies (Impact Factor: 2.707). Article processing charges (APC) will be waived for the BEST PAPER. All other papers accepted at AIPES will be offered a 15% discount for inclusion in this special issue.

 

 

Scope:

The Thematic Track on Artificial Intelligence in Power and Energy Systems aims at providing an advanced discussion forum on recent and innovative work on the application of artificial intelligence approaches in the field of power and energy systems, including agent-based systems, data-mining, machine learning methodologies, forecasting and optimization.

 

Submission Topics:

  • Agent-based Smart Grid Simulation
  • Big Data Applications for Energy Systems
  • Coalitions and Aggregations of Smart Grid and Market Players
  • Consumer Profiling
  • Context Aware Systems
  • Data-Mining Approaches in Smart Grids
  • Decision Support Approaches for Smart Grids
  • Demand Response Aggregation
  • Demand Response Integration in the Market
  • Demand Response Remuneration Methods
  • Electric Vehicles
  • Electricity Market Modelling and Simulation
  • Electricity Market Negotiation Strategies
  • Energy Resource Management in Buildings
  • Information Technology Applications
  • Innovative Demand Response Models and Programs
  • Innovative Energy Tariffs
  • Integration of Electric Vehicles in the Power System
  • Intelligent Approaches for Microgrid Management
  • Intelligent Home Management Systems
  • Intelligent Methods for Demand Management
  • Intelligent Resources Scheduling
  • Intelligent Supervisory Control Systems
  • Knowledge-based Approaches for Power and Energy Systems
  • Load Forecast
  • Market Models for Variable Renewable Energy
  • Multi-Agent Applications for Smart Grids
  • Multi-Agent Systems in Power and Energy Systems
  • Other Artificial Intelligence-based Methods for Power and Energy Systems
  • Phasor Measurement Units Applications
  • Real-time Simulation
  • Reliability, Protection and Network Security Methods
  • Renewable Energy Forecast using Computational Intelligence
  • Semantic Communication and Data
  • Smart Sensors and Advanced Metering Infrastructure

 

Best regards

AIPES organizers

[rede.APPIA] CFP: Big Data & Deep Learning in HPC (IEEE Xplore) @Porto, Portugal

Workshop on BIG DATA & DEEP LEARNING in HIGH PERFORMANCE COMPUTING (sbac2020.dcc.fc.up.pt/bdl2020/)
in conjunction with the IEEE 32nd International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD 2020) (sbac2020.dcc.fc.up.pt/)
Porto, Portugal
———————————— WORKSHOP ON BIG DATA & DEEP LEARNING IN HIGH PERFORMANCE COMPUTING ————————————
The number of very large data repositories (big data) is increasing at a rapid pace. Analysing such repositories using “traditional” sequential implementations of ML, and emerging techniques like deep learning that model high-level abstractions in data by using multiple processing layers, requires expensive computational resources and long running times. Parallel and distributed computing are approaches that can make the analysis of very large repositories and the exploration of high-level representations feasible. Taking advantage of a parallel or distributed execution, an ML/statistical system may: i) increase its speed; ii) learn hidden representations; iii) search a larger space and reach a better solution; or iv) increase the range of applications where it can be used (because it can process more data, for example). Parallel and distributed computing is therefore of high importance for extracting knowledge from massive amounts of data and learning hidden representations.
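As a concrete, toy-scale illustration of the data-parallel pattern behind frameworks such as Hadoop, MapReduce and Spark, the split/map/reduce structure can be sketched with nothing more than the Python standard library (the corpus and worker count are assumptions for the sketch, not a recommended production setup):

```python
# Minimal map-reduce sketch: partition the data, count in parallel, merge.
# A real deployment would use a distributed framework over a cluster.
from collections import Counter
from multiprocessing import Pool

def map_count(lines: list) -> Counter:
    """Map step: count words in one partition of the data."""
    c = Counter()
    for line in lines:
        c.update(line.lower().split())
    return c

def reduce_counts(parts: list) -> Counter:
    """Reduce step: merge the partial counts from all workers."""
    total = Counter()
    for p in parts:
        total.update(p)
    return total

if __name__ == "__main__":
    # Toy corpus standing in for a big-data repository (assumption).
    corpus = ["big data needs parallel computing",
              "deep learning models high-level abstractions",
              "parallel computing makes big data analysis feasible"] * 10_000
    n_workers = 4
    chunk = len(corpus) // n_workers
    partitions = [corpus[i * chunk:(i + 1) * chunk] for i in range(n_workers)]
    with Pool(n_workers) as pool:          # map work runs in parallel
        partials = pool.map(map_count, partitions)
    print(reduce_counts(partials).most_common(3))
```

The same three-stage shape is what cluster frameworks scale out: the map work is embarrassingly parallel, and only the small partial counts need to be shuffled to the reducer.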
The workshop will be concerned with the exchange of experience among academics, researchers and industry practitioners whose work on big data and deep learning requires high performance computing to achieve its goals. Participants will present recently developed algorithms/systems, ongoing work, and applications that take advantage of such parallel or distributed environments.
———————————— LIST OF TOPICS ————————————
All novel data-intensive computing techniques, data storage and integration schemes, and algorithms for cutting-edge high performance computing architectures targeting Big Data and Deep Learning are of interest to the workshop. Examples of topics include, but are not limited to:
– parallel algorithms for data-intensive applications;
– scalable data and text mining and information retrieval;
– using Hadoop, MapReduce, Spark, Storm, Streaming to analyze Big Data;
– energy-efficient data-intensive computing;
– deep learning with massive-scale datasets;
– querying and visualization of large network datasets;
– processing large-scale datasets on clusters of multicore and manycore processors, and accelerators;
– heterogeneous computing for Big Data architectures;
– Big Data in the Cloud;
– processing and analyzing high-resolution images using high-performance computing;
– using hybrid infrastructures for Big Data analysis;
– new algorithms for parallel/distributed execution of ML systems;
– applications of big data and deep learning to real-life problems.
———————————— KEY DATES ————————————
Deadline for paper submission: May 25, 2020
Author notification: July 1, 2020
Camera-ready version of papers: July 25, 2020
———————————— SUBMISSION ————————————
We invite authors to submit original work to BDL. All papers will be peer reviewed and accepted papers will be published in IEEE Xplore.
Submissions must be in English, limited to 8 pages in the IEEE conference format (see www.ieee.org/conferences/publishing/templates.html)
All submissions should be made electronically through the EasyChair system: easychair.org/conferences/?conf=bdl2020
———————————— REGISTRATION ————————————
Full registration for the workshop, and presentation of the paper, are required for your paper to be included in the workshop proceedings.
The Workshop fee is 300 euros.
Registration system available in sbac2020.dcc.fc.up.pt/bdl2020/registration.html
———————————— VENUE ————————————
Department of Computer Science, Faculty of Sciences, University of Porto
Rua do Campo Alegre 1021/1055 4169-007 Porto, Portugal
The city of Porto is famous for its Port wine and beautiful scenery, architecture and cultural events.
Portugal has again been awarded the best European Tourist Destination by the World Travel Awards, the Oscars equivalent in the field of tourism.
———————————— ORGANIZATION ————————————
Carlos Ferreira (LIAAD – INESC TEC LA and Polytechnic Institute of Porto)
João Gama (LIAAD – INESC TEC LA and University of Porto)
Albert Bifet (Telecom ParisTech)
Miguel Areias (CRACS – INESC TEC LA and University of Porto)
Rui Camacho (LIAAD – INESC TEC LA and University of Porto)

Carlos Ferreira
ISEP | Instituto Superior de Engenharia do Porto
Rua Dr. António Bernardino de Almeida, 431, 4249-015 Porto – PORTUGAL
tel. +351 228 340 500 | fax +351 228 321 159
mail@isep.ipp.pt | www.isep.ipp.pt

[rede.APPIA] Summer School on Machine Learning and Big Data with Quantum Computing @ Porto (Portugal), 7-8 September 2020

Summer School on Machine Learning and Big Data with Quantum Computing (SMBQ 2020)
Porto, Portugal, September 7-8, 2020
#####################################
Machine Learning (ML), a branch of Artificial Intelligence (AI), is about teaching computers how to learn from data in order to make decisions or predictions. Deep Learning (DL) is part of a broader family of ML algorithms, based on artificial neural networks. Arguably, DL techniques demand big amounts of data and, as such, require huge computational resources and advanced processing techniques.
Cloud Computing is a well-known alternative for dealing with big amounts of data, since its elasticity allows for efficient scaling of huge computational resources, such as data storage and processing power. Quantum Computing, on the other hand, is an advanced processing technique that uses the fundamentals of quantum mechanics to accelerate the solving of highly complex problems.
SMBQ 2020 addresses the current trends in AI and in the computational techniques that deal with big data demands, together with a powerful processing technique that will shape the future of computation.
During two days, 7-8 September 2020, we will introduce concepts, discuss current trends and provide direct practical experience in hands-on lessons.
For more information visit: smbq2020.dcc.fc.up.pt/
#####################################
Venue:
Department of Computer Science, Faculty of Sciences, University of Porto
Rua do Campo Alegre 1021/1055 4169-007 Porto, Portugal
The city of Porto is famous for its Port wine and beautiful scenery, architecture and cultural events.
Portugal has again been awarded the best European Tourist Destination by the World Travel Awards, the Oscars equivalent in the field of tourism.
#####################################
Registration:
Registration includes attendance at all sessions, coffee breaks, lunches, wi-fi internet and access to a desktop during the hands-on sessions. The number of registrations is limited (60 attendees), so please register early!
Early Fees: 150 euros
#####################################
Contact Persons:
Carlos Ferreira, Polytechnic Institute of Porto, LIAAD – INESC TEC, E-mail: cgf@isep.ipp.pt
Miguel Areias, University of Porto, CRACS – INESC TEC, E-mail: miguel-areias@dcc.fc.up.pt

Carlos Ferreira
ISEP | Instituto Superior de Engenharia do Porto
Rua Dr. António Bernardino de Almeida, 431, 4249-015 Porto – PORTUGAL
tel. +351 228 340 500 | fax +351 228 321 159
mail@isep.ipp.pt | www.isep.ipp.pt