FROM THE MACHINERY OF MORALITY TO MACHINE MORALS

Luís Moniz Pereira
Laboratory for Computer Science and Informatics (NOVA-LINCS)
Faculty of Sciences and Technology
NOVA University of Lisbon

Human beings have always been aware of the risks associated with knowledge and with the technologies derived from it. Signs and warnings of these dangers appear not only in Greek mythology, but also in the founding myths of the Judeo-Christian religions. However, such warnings and fears have never made more sense than they do today. This results from the emergence of machines capable of performing cognitive functions that until recently were carried out exclusively by human beings. The cognitive revolution brought about by the development of AI, in addition to the technical problems associated with its design and conceptualization, raises social and economic problems with a direct impact on humanity in general, and on its constituent groups. That is why addressing it from the moral point of view is urgent and imperative. The ethical problems to be considered are of two kinds: on the one hand, those associated with the type of society we want to promote through the automation, complexification and data-processing power available today; on the other, how to program machines to make decisions according to moral principles acceptable to the human beings who share knowledge and action with them.[1]

In the Hellenistic period (323-31 BC), Hero of Alexandria and other brilliant Greek engineers devised a variety of machines, powered either hydraulically or pneumatically. The Greeks recognized that automata and other artefacts with natural shapes – imaginary or real – could be both harmless and dangerous. They could be used for work, sex, entertainment or religion, or to inflict pain or death. Clearly, biotechnology, real and imagined, already fascinated the Ancients[2].

Today’s deep learning algorithms allow AI computers to extract patterns from vast amounts of data, extrapolate them to new situations, and make decisions without any human guidance. Inevitably, AI entities will develop the ability to question themselves, and will answer questions that they themselves discover. Today’s computers have already shown that, on their own, they can develop altruism, but also deceive others. The question therefore makes perfect sense: why an ethics for machines?

  • Because computational agents have become more sophisticated and more autonomous; they act in groups and form populations that include humans.
  • Because they are being developed in a variety of areas where complex issues of responsibility require greater attention, namely in situations of ethical choice.
  • Because, as their autonomy increases, the requirement that they function responsibly, ethically and safely becomes an ever greater concern.

Autonomous and judicious deliberation calls for rules and principles of a moral nature applicable to the relationships between machines, the relationships between machines and human beings, and the consequences of the entry of these machines into the world of work and into society in general. The present state of development of AI, both in its ability to elucidate the cognitive processes that emerged in evolution and in its technological aptitude for conceiving and producing computer programs and intelligent artefacts, is the greatest intellectual challenge of our time. The complexity of these issues is summarized in a scheme, shown below, called “The Carousel of the Ethical Machinery”. In summary, we are at a crossroads: that of AI, the ethics of machines, and their social shock.

The topic of morals has two major domains. The first is the “cognitive” domain, that is, the need to clarify how we think in moral terms. In order to behave morally, it is necessary to consider possibilities: should I behave in this way, or rather in another? It is necessary to evaluate the various scenarios, the various hypotheses, and to compare them to see which are the most desirable, what their respective consequences are, and what their side effects are. This ability is essential for living in society, that is, for the second domain, that of the “collective”.

I studied certain cognitive abilities in order to see whether they promote moral cooperation in a population of computational beings, that is, of computer programs that live together. Here a program is a set of strategies defined by rules: in a given situation, a program performs a certain action dictated by its current strategy, while the other programs perform actions dictated by their respective strategic rules. It is as if they were agents living together, each with possibly different objectives. We study whether, and how, this population can evolve in a good gregarious sense, and whether that sense is stable, i.e. whether it is maintained over time[3].

A very important instrument for this investigation is so-called Evolutionary Game Theory[4], which consists of seeing how, in a game with well-defined rules, a population evolves through social learning. Society is governed by a set of precepts of group functioning, say the rules of a game in which it is allowed to do certain things but not others. The game indicates the gains or losses of each player on each move, depending on how he plays. Social learning means that a player starts to imitate the strategy of another whose results indicate that he has been more successful. Having defined certain rules, how does the social game evolve? Here we could enter the field of ideology, but we will not go that far. We are still studying the viability of morals. We assume that morality is evolutionary, that it developed with our species. Over hundreds of thousands of years, we have been improving the rules of coexistence, and improving our own intellectual capacities and our skill in using those rules. Not always conveniently: social rules should be such that we all benefit, although there is always the temptation for some to want, unfairly and unjustly, more than others – to enjoy the benefits without paying the costs. This is the essential problem of cooperation: how it becomes possible and, at the same time, how those who want to abuse it are kept under control. In order for our species to get to where it is today, evolution itself had to keep selecting us in terms of a morality of coexistence beneficial to gregariousness.
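
As a minimal illustration of this kind of social learning, the sketch below simulates a population playing a simple donation game and updating strategies by imitating more successful peers under the Fermi rule. The game, the parameter values and the update scheme are simplifying assumptions chosen for illustration only; they are not the specific models used in the work cited here.

  import random, math

  N, ROUNDS = 100, 5000        # population size and number of imitation steps (assumed values)
  b, c, beta = 3.0, 1.0, 1.0   # benefit of being helped, cost of helping, selection strength

  # Start with a small minority of cooperators ('C') among defectors ('D').
  pop = ['C' if random.random() < 0.1 else 'D' for _ in range(N)]

  def payoff(me, other):
      # Donation game: a cooperator pays cost c to give benefit b to its co-player.
      return (b if other == 'C' else 0.0) - (c if me == 'C' else 0.0)

  def avg_payoff(i):
      # Average payoff of agent i against a few randomly drawn co-players.
      partners = random.sample([k for k in range(N) if k != i], 10)
      return sum(payoff(pop[i], pop[k]) for k in partners) / len(partners)

  for _ in range(ROUNDS):
      i, j = random.sample(range(N), 2)
      # Fermi rule: i imitates j with a probability that grows with j's payoff advantage.
      if random.random() < 1.0 / (1.0 + math.exp(-beta * (avg_payoff(j) - avg_payoff(i)))):
          pop[i] = pop[j]

  print('final fraction of cooperators:', pop.count('C') / N)

In this bare setting, without any supporting mechanism, defection typically takes over the population; it is precisely the additional mechanisms discussed below – intention recognition, commitments, guilt – that can tip such dynamics towards stable cooperation.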

The problem of progress in cooperation and the emergence of collective behaviour, spanning disciplines as diverse as Economics, Physics, Biology, Psychology, Political Science, Cognitive Science and Computing, is still one of the greatest interdisciplinary challenges that science faces today. Mathematical and simulation techniques from Evolutionary Game Theory have been shown to be useful for studying such subjects. In order to better understand the evolutionary mechanisms that promote and maintain cooperative behaviour in various societies, it is important to take into account the intrinsic complexity of the participating individuals, that is, their intricate cognitive processes in decision making. The outcome of many social and economic interactions is defined, not only by the predictions that individuals make about the behaviour and intentions of other individuals, but also by the cognitive mechanism that others adopt to make their own decisions.

Research based on abstract mathematical models built for this purpose has shown that the way the decision process is modelled influences, in various ways, the equilibrium reached in the dynamics of collective collaboration. Evidence abounds showing that humans (and many other species) have complex cognitive abilities: theory of mind; recognition of intentions; hypothetical, counterfactual and reactive reasoning; emotional guidance; learning; preferences; commitments; and morality. In order to better understand how all of these mechanisms make cooperation possible, they need to be modelled within the context of evolutionary processes. In other words, we must try to understand how the cognitive mechanisms used to explain human behaviour – successfully developed by Artificial Intelligence and the Cognitive Sciences – fare under Darwin’s evolutionary theory, and in this way understand and justify their appearance in terms of the existence of a dynamic of cooperation, or the absence of it.

It must be emphasized, however, that we are facing Terra Incognita. There is an entire continent to be explored, whose contours we can only glimpse. We do not yet know enough about our own morals, nor is our knowledge accurate enough to be programmed into machines. In fact, there are several ethical theories, antagonistic to each other, but which also complement each other. Philosophy and Jurisprudence study Ethics, that is, the problem of defining a system of values articulated in principles. Each particular ethics is the substrate that supports the rules and legislation which justify, in each context, the specific rules that will be applied and used on the ground under that ethics. As a result, and depending on cultures and circumstances, moral rules are arrived at: in each context, we use abstract ethical principles to arrive at concrete moral rules. In practice, a morality, a set of moral rules, results from a historical, contextual and philosophical combination of ethical theories that have evolved over time.

The Carousel of the Ethical Machinery, below, is a way of summarizing the complexity of the problem of moral machinery. The central carousel identifies the factors that concern what to do, or how to act. “What to do” is surrounded by other carousels, each having to do with a different facet of the ethics of machines.

Regarding Ethical Use, we have already heard of fake news and of algorithms that influence elections: these are misuses of machines, which must be subject to moral rules. For example, it is a negative, immoral practice for a program to pretend to be a human being. There are, of course, other examples of immoral uses of machines; among the darkest is that of drones with the autonomous capacity to kill individuals. It must be remembered, therefore, that the question of regulating use is increasingly bound up with the capacity of the machine itself, precisely because its autonomy keeps growing, which, consequently, amplifies the issues of its moral use.

This allows us to think that machines should also protect us from their unethical use by humans. Suppose someone orders a program to execute an instruction that would cause harm to human beings – the program itself could refuse to do so[5]. That is the second reason why we need to introduce morals into machines: so that they do not comply with everything they are merely programmed to do. We do not want the machine to be in the position of simply stating “I did it because I was told to”, a metaphor for the position of the Nazi war criminals at Nuremberg, who said “I only followed orders, I did what I was told”, as if they had no critical sense and could not disobey orders. The challenge arises of building machines capable of disobeying when that is justified.
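
As a toy sketch of such a refusal, consider a guard that vetoes any instruction whose estimated harm to humans exceeds a threshold. The action names, the harm estimates and the threshold are hypothetical placeholders; a real system would need a far richer model of actions and their consequences.

  def expected_harm(action):
      # Hypothetical estimate of the harm an action causes to humans (0 = harmless, 1 = lethal).
      harm_table = {'deliver_package': 0.0, 'block_fire_exit': 0.6, 'fire_weapon': 1.0}
      return harm_table.get(action, 0.5)   # unknown actions are treated as risky by default

  def execute(action, harm_threshold=0.3):
      # The agent refuses orders it estimates to be harmful, instead of pleading
      # "I did it because I was told to".
      if expected_harm(action) > harm_threshold:
          return f'REFUSED: {action} is estimated to harm human beings'
      return f'EXECUTED: {action}'

  print(execute('deliver_package'))   # EXECUTED
  print(execute('fire_weapon'))       # REFUSED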

Another circle of the carousel is that of Human Values. Basically, we intend to give machines our values, because they will live with us. Those values will have to be ethically reconcilable with the population among which the machines operate.

[Figure: The Carousel of the Ethical Machinery]

Legislation is highlighted in another circle of the carousel because, at the end of the process, everything will have to be translated into laws, norms and standards about what is allowed or prohibited. Just as cars must meet regulations on pollution, machines will also have to comply with certain criteria, approved by an entity qualified to do so. It is often asked who is responsible if a driverless car runs over a pedestrian when it might not have done so: the owner, the manufacturer? But the Legislator is not mentioned. Yet someone had to say “this driverless car is road-fit”. It will be up to a government to legislate which tests a driverless car must pass. If it turns out that such tests were not sufficiently exhaustive, the entity that authorized the circulation of those vehicles will also be responsible.

Another circle is that of Technical Issues. Everything always involves the effective construction of the machines, for whatever purpose, and not everything is technically possible. For example, we are not yet able to produce computational proofs that a machine will not do ethically incorrect things. Not to mention the cases in which a hacker can enter the system and force the machine to do wrong things – a security problem to be solved technically.

Finally, and not least, are the Social Impacts of machines with autonomy. When we refer to machines we are talking about both robots and software. The latter is much more dangerous, as it spreads and reproduces easily anywhere in the world. A robot, by contrast, is much more difficult to reproduce, implies a much higher cost, and carries the material limitations inherent in possessing a volumetric body. With regard to social impact, it is expected that soon we will have robots cooking hamburgers and serving us at the table, with the resulting implications for the labour market. Although not much intelligence is required for these tasks, the challenges inherent in fine eye-brain-hand coordination are not to be overlooked; machines do not yet have it to the same degree as humans, but on that front too robots are advancing very fast. When it comes to software, the issue is more worrying because, deep down, programs are reaching cognitive levels that until now have been our monopoly. That is why people feel much more concerned. Until now, there were things that only a human being knew how to do; yet, little by little, machines have started to play chess, to make medical diagnoses, and so on, and they will perform ever more sophisticated mental activities, successively learning how to do so.

This door that opens creates, for the first time, open competition with humans. Competition that could cause – depending on social organization, ideology and politics – the replacement of humans by machines, because, to do the same thing, there will be instruments cheaper than the human. Hence, as the human becomes expendable, wages will decrease, and machine owners will become ever wealthier. The present wealth gap, which keeps increasing and shows the rich getting richer and the poor getting poorer, is a gap that AI is already deepening and will widen further, to the point that a new social contract will be required, under penalty of cataclysm. The way we think about capital and labour, and the way the two are balanced, will have to be completely overhauled. There is a risk that, if this does not happen, asymmetries in wealth will cause, sooner or later, a great revolt, insurrection and social breakdown. This will happen when the caste system induced by the advances of AI generates its own implosion.

Currently, we already have robots in hospitals, drones that fly by themselves, autonomous motorboats, driverless cars, and even interactive moral games capable of teaching morals. In a game I developed, a robot aims to save a princess, and for that purpose it combines several ethical approaches[6]. This exemplifies that, in practice, morality is not one, but rather a multiple combination. We ourselves do not follow exclusively the knight-errant’s morality, or utilitarian morality, or Kantian morality, or Gandhi’s morality: our ethics are a mixture of them, and that mixture evolves. The game’s program shows how the robot’s morality evolves. We must, therefore, assume that the programming of morals in machines must allow for its own evolution[7]. There is no fixed, frozen morality. Morality is an evolutionary thing and, throughout the history of the species, both remote and recent, it has been developed collectively.
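
A minimal sketch of such an evolving, programmable morality is given below. It assumes, purely for illustration, that moral rules are condition/verdict pairs and that a more recently added applicable rule supplants an older, contradicting one; the game referred to in [6] and [7] is built on logic programming and is considerably richer.

  class EvolvingMorality:
      # Moral rules are (condition, verdict) pairs; newer rules take precedence.
      def __init__(self):
          self.rules = []                      # most recently added rule first

      def add_rule(self, condition, verdict):
          self.rules.insert(0, (condition, verdict))

      def judge(self, situation):
          # The first applicable rule decides, so a newly added rule supplants
          # an older one whenever the two contradict each other.
          for condition, verdict in self.rules:
              if condition(situation):
                  return verdict
          return 'no applicable rule'

  m = EvolvingMorality()
  m.add_rule(lambda s: s['endangers_a_human'], 'forbidden')   # initial rule
  m.add_rule(lambda s: s['saves_more_lives'], 'permitted')    # later rule, overrides on conflict

  print(m.judge({'endangers_a_human': True, 'saves_more_lives': False}))  # forbidden
  print(m.judge({'endangers_a_human': True, 'saves_more_lives': True}))   # permitted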

It is clear that machines are increasingly autonomous, and we have to ensure that they can live with us on our terms and by our rules. There is, therefore, a new ethical paradigm which says that morality must also be computational; that is, we have to be able to program morals. This has a positive side, because by programming morals into machines we come to understand our own human ethics better.

Consider the following example, from scientific work on guilt. When guilt is introduced into populations of computational agents, they start to partake of this ability: to feel distress when they do something that harms another element, resulting in a kind of self-punishment and a change in behaviour, in order to avoid future blame. It is not guilt in the existential, Freudian sense, but in the more pragmatic sense of not being satisfied with what they did when harming others.

Insert a dose of guilt – neither too much nor too little – into just a few agents of a population interacting inside a computer, in an evolutionary game. Without this guilt component, the majority will tend to play selfishly, each wanting to win more than the others, thus failing to reach a level where everyone could win even more. But this desirable result becomes possible with an initial dose of guilt that changes behaviour and spreads, as a good strategy, to the entire population. We have shown, mathematically, that a certain amount of guilt is advantageous and promotes cooperation, but also that one should not feel guilty towards those who do not feel guilty in turn, as that would be to allow oneself to be abused[8].
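
The following is a behavioural toy sketch of that idea, not the evolutionary analysis carried out in [8]: a guilt-prone agent that defects on a cooperating co-player pays a self-imposed guilt cost and switches to cooperation afterwards, but only when the co-player is itself guilt-prone, so that guilt is not wasted on those who feel none. The payoff values and the 50/50 baseline behaviour are arbitrary choices made for illustration.

  import random

  b, c, g = 3.0, 1.0, 1.5   # benefit received, cost of helping, guilt cost (assumed values)

  class Agent:
      def __init__(self, guilt_prone):
          self.guilt_prone = guilt_prone
          self.payoff = 0.0
          self.feels_guilty = False

      def choose(self):
          # A guilty agent repairs its behaviour: it cooperates after having harmed someone.
          if self.guilt_prone and self.feels_guilty:
              return 'C'
          return 'C' if random.random() < 0.5 else 'D'   # otherwise, no fixed commitment

      def update(self, my_move, other, other_move):
          self.payoff += (b if other_move == 'C' else 0.0) - (c if my_move == 'C' else 0.0)
          if (self.guilt_prone and my_move == 'D' and other_move == 'C'
                  and other.guilt_prone):            # no guilt towards the guilt-free
              self.payoff -= g                       # self-punishment
              self.feels_guilty = True               # behaviour change from now on

  def play(a1, a2, rounds=50):
      for _ in range(rounds):
          m1, m2 = a1.choose(), a2.choose()
          a1.update(m1, a2, m2)
          a2.update(m2, a1, m1)
      return a1.payoff, a2.payoff

  print('two guilt-prone agents:', play(Agent(True), Agent(True)))
  print('guilt-prone vs guilt-free:', play(Agent(True), Agent(False)))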

This is, in fact, the great central abstract problem of morality and gregariousness, which naturally also affects the case of machines: how to avoid the pure egoism of agents who opportunistically want to take advantage of the gregariousness of others without, in turn, contributing to it? In other words, how can we demonstrate, through computational mathematical models, under what circumstances evolutionary gregariousness is possible, stable and advantageous? And how can we use the computer itself to better understand how the machinery of guilt works, between which values of which parameters, varying those parameters to understand how best to use them evolutionarily? By creating artificial agents that have a certain amount of guilt, we also provide arguments for the view that guilt is a useful function, a result of our evolution.

As will have been realized, we are dealing with a subject of an interdisciplinary nature, which lives halfway between Philosophy, Jurisprudence, Psychology, Anthropology, Economics, etc., and in which inspiration drawn from those various fields is important. It is very important to note that one of the problems we face is that Jurisprudence is not advancing fast enough, given the urgency of legislating on moral machines. When legislation is made with respect to machines, we will have to start by defining and using new concepts, without which it will be impossible to make laws, as laws must perforce appeal to the concepts of Jurisprudence. And it is relevant to recognize that the Legislator is very late in keeping up with technology. This is worrying because a common confusion persists: the notion that technical progress is the same as social progress. In fact, technical progress is not being accompanied by the desirable and concomitant social progress. Technical know-how should be used in the service of human values, and those values must be enjoyed equally by all, with the wealth created being fairly distributed.

History can give us important references and general lines of action. For example, if we consider the great progress of Greek civilization in its heyday in the 5th and 4th centuries BC, we will see that it was only possible because it rested on a legion of slaves, without citizenship rights or the possibility of social ascension, drawn from conquered armies and foreign citizens. Now, similarly, we have the possibility of making more and more use of the slave machines that are already here, to free us from effort that can be performed by them. But we would want everyone to be freed, and to gain equally from it, through a fair distribution of the wealth produced by such machines. However, the opposite is happening. Machines replace people, resulting in increased profits for their owners, while the proper counterpart, a fair distribution of the additional wealth, is increasingly far from happening; and the universe of situations in which the human has no chance of competing with machines keeps growing. Hence, a new social contract is indispensable, in which the relationship between work and capital is reformulated and updated, as a result of the social impact of the new technologies, namely the increased sophistication of machines with cognition and autonomy.

If a machine is going to replace me, that replacement must be complete (even in social obligations). By holding a job, I contribute to the Social Security that supports current retirees; I contribute to the National Health Service; I pay income tax (IRS) to make possible the governance and development of the country; and so on. So, if a machine completely replaces me, eliminating me from a job whose activity it maintains, it must also pay the taxes that I was paying to support the current social contract. To replace has to mean replacing in all these respects!
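
As a back-of-the-envelope illustration of what “replacing in all these respects” would mean, the sketch below adds up the yearly contributions a displaced worker was making and treats their sum as the levy owed by the machine’s owner. The salary figure and the rates are invented placeholders, not actual Portuguese rates.

  # Invented yearly figures for the replaced worker (placeholders only).
  gross_salary    = 24000.0
  social_security = 0.30 * gross_salary   # combined employee and employer share (assumed rate)
  income_tax      = 0.20 * gross_salary   # assumed effective income-tax rate

  # If the machine fully replaces the worker, its owner keeps paying these contributions.
  replacement_levy = social_security + income_tax
  print(f'yearly levy owed for the replacing machine: {replacement_levy:.2f}')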

It is not possible to reduce the problems of machine ethics to a code of ethics that computer engineers must follow, precisely because of the impact these problems have on human values and social organization, and on our civilizational path. For this reason, the issue of values is inescapable, and it cannot be reduced to mere technical standards.

There are several repeated and closely agreeing study reports from entities beyond suspicion, namely McKinsey & Company, the Pew Research Center, the OECD, PricewaterhouseCoopers and others, which point towards an additional 15 to 20% of unemployment by 2030 caused by AI alone. The topic of unemployment caused by AI in the AI superpowers themselves, and which elsewhere will be even more serious, is well analysed in the recent book by Kai-Fu Lee[9].

If we do not take the right actions now, we can already imagine the outline of a future that will be far from promising. We must not forget that doctors are currently training machines to read X-ray images, interpret analyses, examine symptoms, and so on. Around the world, a multitude of highly trained professionals, from medicine to economics to law, are passing human knowledge on to machines that will know how to replicate and use it. People are teaching those who are going to replace them.

The dangers of AI do not lie in the possibility of a Terminator appearing. The risks are embodied in the fact that, at this moment, simple machines are taking decisions that affect us; yet, because we call them “smart machines”, people think they are doing a good job. This current over-selling of AI is pernicious. In addition, the AI that is now being sold does not reach one tenth of what AI already is, or may in fact become in the future. Serious AI is yet to come, and it will be much more sophisticated than most current programs, those using so-called deep learning. These are rather simplistic programs, and so much power should not be given to such simple machines. But since they replace humans – and will replace radiologists, car and truck drivers, call-centre employees, security guards in shopping centres – they are sold as a panacea.

The author is part of the project “Incentives for Safety Agreement Compliance in AI Race”[10], sponsored by the Future of Life Institute[11], a non-profit organization. The project, in the area of software safety, addresses the rush to market shown by companies that develop AI products. More specifically, it analyses the consequences of neglecting the safety requirements of these products. The urgency is such that safety is put aside, because it costs money and time, and delays arrival on the market ahead of competitors. The aim of the project is to establish rules of the game so that no one turns a blind eye to safety, as if it were not essential[12]. For this, regulatory and monitoring entities are needed, as well as a “National Ethics Committee for AI”, including Robotics, by analogy with the “National Bioethics Committee”. We cannot accept, as is heard in Europe and the USA, that the companies that make driverless cars are solely responsible, and that if a problem arises then we shall see. Governments would thereby not be responsible for the tests that such cars must undergo; they would simply delegate to the companies themselves. It is indeed noteworthy that, in the case of the recent accidents with the Boeing 737 MAX aircraft[13], the American Federal Aviation Administration (FAA) delegated quality checks to Boeing itself!

In the European Union, responsibility for safety appears to be more disguised. A high-level Committee on AI and Ethics was created to provide recommendations[14]. What they propose, in summary, is to lay down “(…) some recommendations that development firms can follow” and to accredit “private audit firms that will inspect those firms”. We may, perhaps, fall into a scheme of auditors with interests in the entities they examine, since those entities will also commission studies from them. Recall the case of the banks and the 2008 financial crisis.

We will not close this text without briefly summarizing a proposed set of “terms of reference” to be dealt with by the AI scientific community, and not only by it:

  • We need to know more about our own moral facets in order to be able to pass them on to machines. However, we do not yet know enough about human morality. In this sense, it is important to foster its study by the Humanities and Social Sciences.
  • Morality is not just about avoiding evil, but also about how to work for the good: a greater good for more people. The problem of unemployment is inherent to this consideration.
  • Universities are the appropriate place to address all of these issues, due to their spirit of independence and their practice of reasoning and discussion. And they harbour, in their faculties, the necessary interdisciplinarity.
  • We will not have machines with a general moral capacity anytime soon. We will have machines that know how to respect the standards of a hospital, of a prison, or even the rules of war; these are in fact among the best-known and most widely accepted norms. Because they are well specified, they are less ambiguous and closer to being programmable.
  • We will start by automating standards and their exceptions, little by little expanding the generality and the capacity of a machine to learn new standards and, as it evolves, to expand its areas of competence, with all the essential safety (a toy sketch of a norm with exceptions is given after this list).
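
Below is a toy sketch of what automating a standard together with its exceptions can look like: a default prohibition, an exception that overrides it, and an exception to that exception. The hospital rule and its conditions are invented for illustration; norms of this kind are more naturally written with defeasible rules in logic programming.

  def may_administer(drug, patient, prescribed):
      # Exception to the exception: never administer a drug the patient is allergic to.
      if drug in patient.get('allergies', []):
          return False
      # Exception: a doctor's prescription overrides the default prohibition.
      if prescribed:
          return True
      # Default norm: do not administer.
      return False

  print(may_administer('drugX', {'allergies': []}, prescribed=True))         # True
  print(may_administer('drugX', {'allergies': ['drugX']}, prescribed=True))  # False
  print(may_administer('drugX', {'allergies': []}, prescribed=False))        # False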

From the point of view of criteria for action, the morals bequeathed from the past on a hook in the sky are confronted with a new perspective on nascent moral systems, studied in the framework of evolutionary psychology and deepened through testable models in artificial scenarios, as is now made possible with computers. As the investigation progresses, we can better understand the processes inherent in moral decision, to the point that they can be “taught” to autonomous machines, made capable of manifesting ethical discernment.

In the field of Economics, there is a vast set of problems associated with the impact on labour and its inherent dignity, as well as with the creation and distribution of wealth; that is, a whole reconfiguration of economic relations that will result not only from the automation of routine activities, but fundamentally from the arrival on the scene of robots and software that can replace doctors, teachers, or assistants in nursing homes (to single out professions where the replacement of humans is not commonly regarded as easily feasible). Acknowledging this question is especially relevant, and it demands a stance that supports the need for an updated social morality and a renewed social contract.

The issue of computational morals thus comes into existence in a context in which the ecosystem of knowledge will be greatly enriched, as it will have to incorporate non-biological agents with the capacity to become active players in dimensions that, until now, were exclusively attributed to humans.

These being very difficult subjects, the sooner we start dealing with them, the better!

Lisbon, 15 February 2020

Acknowledgements

Thanks are due to Frederico Carvalho for the English translation, overseen by the author, from the Portuguese original, which is to be published in the e-book arising from the Diálogos Intergeracionais initiative:  https://www.cnc.pt/dialogos-intergeracionais-mesa-redonda-4/. Author support is acknowledged from grant RFP2-154 of the “Future of Life Institute”, USA; and from project FCT/MEC NOVA LINCS PEst UID/CEC/04516/2019 of the “Fundação para a Ciência e a Tecnologia”, Portugal.

References

[1] The topic is developed in our recent book in Portuguese Máquinas Éticas: Da Moral da Máquina à Maquinaria, by Luís Moniz Pereira & António Barata Lopes, published by NOVA.FCT Editorial, Lisbon: 2020.
Published also in English and titled Machine Ethics: From Machine Morals to the Machinery of Morality, by Luís Moniz Pereira & António Barata Lopes, Springer SAPERE series, vol. 53, Cham: 2020.

[2] We consulted Gods and Robots – Myths, Machines, and Ancient Dreams of Technology by Adrienne Mayor, Princeton, NJ: Princeton U. Press, 2018, and also A Brief Guide to the Greek Myths by Stephen P. Kershaw, London: Constable & Robinson Ltd., 2007.

[3] To deepen the subject, visit the author’s webpage at: https://userweb.fct.unl.pt//~lmp/publications/Biblio.html

[4] https://en.wikipedia.org/wiki/Evolutionary_game_theory

[5] This has to do with the first of the Three Laws of Robotics conceived by Isaac Asimov, condensed in the Zeroth Law: a robot may not harm humanity or, by inaction, allow humanity to come to harm.

[6] It can be viewed at: https://drive.google.com/file/d/0B9QirqaWp7gPUXBpbmtDYzJpbTQ/view?usp=sharing

Detailed explanation here: https://userweb.fct.unl.pt//~lmp/publications/online-papers/lp_app_mach_ethics.pdf

[7] The robot shows in a balloon what it is thinking, and it is shown how the user gives it new moral rules to add to the previous ones, sometimes supplanting them when there is a contradiction between them.

[8] Technical details here: L. M. Pereira, T. Lenaerts, L. A. Martinez-Vaquero, T. A. Han, “Social Manifestation of Guilt Leads to Stable Cooperation in Multi-Agent Systems”, in: Procs. 16th Intl. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2017), Das, S. et al. (Eds.), pp. 1422-1430, 8–12 May 2017, São Paulo, Brazil.

Here: https://userweb.fct.unl.pt/~lmp/publications/online-papers/guiltEGT.pdf

[9] Kai-Fu Lee, AI Superpowers – China, Silicon Valley, and the New World Order, NY: Houghton Mifflin Harcourt, 2018.

[10] https://drive.google.com/open?id=1j59rhP7op3nBpvaxpeCdaBVObJAbzWBJ

[11] https://futureoflife.org

[12] Project results here: T. A. Han, L. M. Pereira, T. Lenaerts, “Modelling and Influencing the AI Bidding War: A Research Agenda”, in: Procs. of the AAAI/ACM Conference on AI, Ethics, and Society (AIES 2019), 27-28 January 2019, Honolulu, Hawaii, USA. And here: https://userweb.fct.unl.pt/~lmp/publications/online-papers/AI race modelling.pdf

[13] To save costs. See: “Boeing’s 737-Max software outsourced to Rs 620-an-hour engineers”. Here: https://economictimes.indiatimes.com/industry/transportation/airlines-/-aviation/boeings-737-max-software-outsourced-to-rs-620-an-hour-indian-engineers/articleshow/69999513.cms?from=mdr

[14] https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines#Top

______________________________________________________________________________

About the Author


Luís Moniz Pereira is Emeritus Professor of Computer Science at the Universidade Nova de Lisboa (UNL), Portugal. He is a member of the NOVA Laboratory for Computer Science and Informatics (NOVA-LINCS) of the Informatics Department. In 2001 he was elected Fellow of the European Association of Artificial Intelligence (EurAI). In 2006 he was awarded a Doctor Honoris Causa by the Technische Universität Dresden. He has been a member of the Board of Trustees and Scientific Advisory Board of IMDEA, the Madrid Institute of Software Advanced Studies, since 2006. In 1984 he became the founding President of the Portuguese Association for Artificial Intelligence (APPIA). His research focuses on the representation of knowledge and reasoning, logic programming, cognitive sciences and evolutionary game theory. In 2019 he received the National Medal of Scientific Merit. More information, including other awards and his publications at: http://userweb.fct.unl.pt/~lmp/

Luís Moniz Pereira is a member of the Governing Bodies of OTC


