[rede.APPIA] Fwd: 1st International Workshop on New Foundations for Human-Centered AI



Begin forwarded message:


From: Anthony Cohn <A.G.Cohn@LEEDS.AC.UK>
Subject: 1st International Workshop on New Foundations for Human-Centered AI
Date: 9 February 2020 at 02:16:43 WET
Reply-To: Anthony Cohn <A.G.Cohn@LEEDS.AC.UK>


1st International Workshop on New Foundations for Human-Centered AI

 

                    * Second Call for Papers *

 

SYNOPSIS
--------

 

In June 2018, the European Commission appointed an “AI High-Level Expert Group”
(AI-HLEG) to support the implementation of the European Strategy on Artificial
Intelligence.  One of the first results of the AI-HLEG has been to deliver
ethics guidelines on Artificial Intelligence
(https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines).
These guidelines put forward a human-centered approach to AI and list seven
key requirements that human-centered, trustworthy AI systems should meet,
summarized by the following headers:

 

1. Human agency and oversight
2. Technical robustness and safety
3. Privacy and data governance
4. Transparency
5. Diversity, non-discrimination and fairness
6. Societal and environmental wellbeing
7. Accountability

 

Many of today’s most popular AI methods, however, fail to meet these
guidelines: making them compliant is a scientific endeavor that is as crucial
as it is challenging and stimulating.  Systems based on deep learning are a
case in point: while these systems often provide impressive results, their
ability to _explain_ these results to the user is very limited, challenging
requirements 4 and 7; in most cases we lack ways to formally _verify_ their
correctness and assess their boundary conditions, challenging requirement 2;
and we do not yet have methods that allow humans to _collaboratively_
influence or question their decisions, challenging requirement 1.  Similar
shortcomings are present in many other popular AI methods.

 

This full-day workshop will collectively address the fundamental question of
which scientific and technological gaps must be filled in order to make AI
systems _human-centered_ in the sense of the above guidelines.

 

PAPER SUBMISSION
----------------

 

Contributions are sought on new foundations for building Human-Centered AI
systems that are able to comply with the AI-HLEG recommendations.
Contributions may present mature results, but position papers and reports of
relevant ongoing work are also acceptable.  More specific topics include, but
are not limited to:

 

– Explainable AI
– Verifiable AI
– Technical robustness and safety of AI systems
– Collaboration between humans and AI systems
– Integrating model-based and data-driven AI
– Integrating symbolic and sub-symbolic AI
– Mixed-initiative AI-Human systems
– Proactive AI systems in human environments
– Understanding and naturally interacting with humans
– Understanding and interaction in complex social settings
– Reflexivity and expectation management
– Integrating Learning, Reasoning and Acting in AI systems

 

Papers should be formatted according to the ECAI2020 formatting style,
available at the ECAI2020 website (ecai2020.eu), and should not exceed six
pages.  Submissions are not anonymous.

 

Submit your paper by February 25 via EasyChair here:

 

  https://easychair.org/conferences/?conf=nehuai2020

 

ORGANIZERS
----------

 

Alessandro Saffiotti (Orebro University, Sweden)
Luciano Serafini (Fondazione Bruno Kessler, Trento, Italy)
Paul Lukowicz (DFKI, Kaiserslautern, Germany)

 

SUPPORTING ORGANIZATIONS
------------------------

 

This workshop is jointly organized by AI4EU (ai4eu.eu), the EU landmark
project to develop a European AI on-demand platform and ecosystem; and by
Humane-AI (humane-ai.eu), the EU FET preparatory action devoted to designing
a European research agenda for Human-Centered AI.

 

MORE INFORMATION
----------------

 

http://nehuai2020.aass.oru.se/

 

 

 


