[rede.APPIA] Best PhD Thesis in Artificial Intelligence Award 2023: Deadline 15 July 2023

Best PhD Thesis in Artificial Intelligence 2023
Award of the Portuguese Association for Artificial Intelligence (APPIA)

APPIA establishes the Best PhD Thesis in Artificial Intelligence 2023 Award, with the aim of distinguishing doctoral work of high merit in the field of Artificial Intelligence obtained at a Portuguese higher education institution during 2023.

The award regulations are attached; applications must be submitted by completing this form by the deadline: 15 July 2023.


The award carries a symbolic value of 1000 euros. The winning candidate (or their representative) will receive the Best PhD Thesis in Artificial Intelligence Award 2023 certificate in September 2024, during the 23rd EPIA Conference on Artificial Intelligence (EPIA 2024, https://epia2024.pt).


Organisation:
Goreti Marreiros, Instituto Politécnico do Porto
João Leite, Universidade Nova de Lisboa

Goreti Marreiros

ISEP | Instituto Superior de Engenharia do Porto
Rua Dr. António Bernardino de Almeida, 431
4249-015 Porto – PORTUGAL
tel. +351 228 340 500 | fax +351 228 321 159
mail@isep.ipp.pt | www.isep.ipp.pt

[rede.APPIA] Passing of Professor José Maia Neves

Good afternoon,

It is with deep sorrow that we announce the passing of Professor José Maia Neves, Professor Emeritus of the Universidade do Minho.

Professor Maia Neves, one of the founders of APPIA and its member number 5, was a pillar of the academic and scientific community. His invaluable contribution to the advancement of Artificial Intelligence in Portugal, as well as his commitment to the education and training of students and researchers, left a lasting legacy that will continue to influence and inspire many.

The academic and scientific community loses not only an exceptional professor and researcher, but also a dedicated mentor and friend. To his family, friends, and colleagues, we extend our most sincere condolences at this time of grief and loss.

We inform you that the funeral ceremonies will take place tomorrow, 17 June, at the Tanatório do Cemitério Monte de Arcos (Braga), between 2 p.m. and 4 p.m.

May the memory of Professor José Maia Neves remain alive among all those who had the privilege of knowing him and working alongside him.

The APPIA Board


[rede.APPIA] Fwd: LUHME – Language Understanding in the Human-Machine Era | Call for papers | DEADLINE EXTENSION


Dear all, 
SECOND AND LAST DEADLINE EXTENSION
Given the high interest shown in the ECAI workshop LUHME – Language Understanding in the Human-Machine Era, and further to several requests received from interested participants, we have decided to extend the paper submission deadline. The new, extended, and final submission deadline is 29 June 2024.
PROCEEDINGS
We would like to remind you that the workshop proceedings will be published as part of the ACL Anthology. 
We look forward to receiving your submission. 
All best
Rui Sousa-Silva (on behalf of the workshop organisers)
* * * * *
Call for Papers: Language Understanding in the Human-Machine Era (LUHME)
 
The LUHME 2024 workshop, Language Understanding in the Human-Machine Era, is part of the 27th European Conference on Artificial Intelligence, ECAI 2024 (https://www.ecai2024.eu/). The workshop is scheduled for 20 October 2024.
 
 
Workshop description
Large language models (LLMs) have revolutionized the development of interactional artificial intelligence (AI) systems by democratizing their use. These models have shown remarkable advancements in various applications such as conversational AI and machine translation, marking the undeniable advent of the human-machine era. However, despite their significant achievements, state-of-the-art systems still exhibit shortcomings in language understanding, raising questions about their true comprehension of human languages.
The concept of language understanding has always been contentious, as meaning-making depends not only on form and immediate meaning but also on context. Therefore, understanding natural language involves more than just parsing form and meaning; it requires access to grounding for true comprehension. Equipping language models with linguistics-grounded capabilities remains a complex task, given the importance of discourse, pragmatics, and social context in language understanding.
Understanding language is a doubly challenging task as it necessitates not only grasping the intrinsic capabilities of LLMs but also examining their impact and requirements in real-world applications. While LLMs have shown effectiveness in various applications, the lack of supporting theories raises concerns about ethical implications, particularly in applications involving human interaction.
The “Language Understanding in the Human-Machine Era” (LUHME) workshop aims to reignite the debate on the role of understanding in natural language use and its applications. It seeks to explore the necessity of language understanding in computational tasks like machine translation and natural language generation, as well as the contributions of language professionals in enhancing computational language understanding.
 
Topics of Interest
Topics of interest include, but are not limited to:
·       Language understanding in LLMs
·       Language grounding
·       Psycholinguistic approaches to language understanding
·       Discourse, pragmatics and language understanding
·       Evaluation of language understanding
·       Multi-modality and language understanding
·       Socio-cultural aspects in understanding language
·       Effects of language misunderstanding by computational models
·       Manifestations of language understanding
·       Distributional semantics and language understanding
·       Linguistic theory and language understanding by machines
·       Linguistic, world, and common sense knowledge in language understanding
·       Machine translation and/or interpreting and language understanding
·       Human vs. machine language understanding
·       Role of language professionals in the LLMs era
·       Understanding language and explainable AI
 
Ethics Statement
Research reported at ECAI and the LUHME workshop should avoid harm; be honest and trustworthy, fair and non-discriminatory; and respect privacy and intellectual property. Where relevant, authors can include, in the main body of their paper or on the reference page, a short ethics statement that addresses ethical issues regarding the research being reported and the broader ethical impact of the work. Reviewers will be asked to flag possible violations of relevant ethical principles. Such flagged submissions will be reviewed by a senior member of the programme committee. Authors may be required to revise their paper to include a discussion of possible ethical concerns and their mitigation.
 
Submission Instructions
Papers must be written in English, be prepared for double-blind review using the ECAI LaTeX template, and not exceed 7 pages (not including references). The ECAI LaTeX Template can be found at https://ecai2024.eu/download/ecai-template.zip. Papers should be submitted via OpenReview: https://openreview.net/group?id=eurai.org/ECAI/2024/Workshop/LUHME
 
Excessive use of typesetting tricks to make things fit is not permitted. Please do not modify the style files or layout parameters. You can resubmit any number of times until the submission deadline. The workshop papers will be published in the proceedings (further information will be provided soon).
 
Important Dates
·       Paper submission: 29 June 2024 (extended from 31 May 2024)
·       Notification of acceptance: 15 July 2024
·       Camera-ready papers: 31 July 2024
·       LUHME workshop: 20 October 2024
 
Invited Speakers
·       Alexander Koller, Saarland University
·       Anders Søgaard, University of Copenhagen
 
Organisation
This workshop is jointly organised by the chairs of working groups 1 (Computational Linguistics) and 7 (Language Work, Language Professionals) of the COST Action LITHME – Language in the Human-Machine Era (CA19102).
 
Workshop Organisers
·       Rui Sousa-Silva (University of Porto, Portugal)
·       Henrique Lopes Cardoso (University of Porto, Portugal)
·       Maarit Koponen (University of Eastern Finland, Finland)
·       Antonio Pareja-Lora (Universidad de Alcalá, Spain)
·       Márta Seresi (Eötvös Loránd University, Hungary)
 
 
Program Committee
·       Aida Kostikova (Bielefeld University)
·       Alex Lascarides (University of Edinburgh)
·       Alípio Jorge (University of Porto)
·       António Branco (University of Lisbon)
·       Belinda Maia (University of Porto)
·       Caroline Lehr (ZHAW School of Applied Linguistics)
·       Diana Santos (Universitetet i Oslo)
·       Efstathios Stamatatos (University of the Aegean)
·       Ekaterina Lapshinova-Koltunski (University of Hildesheim)
·       Eliot Bytyçi (Universiteti i Prishtinës “Hasan Prishtina”)
·       Hanna Risku (University of Vienna)
·       Jörg Tiedemann (University of Helsinki)
·       Lynne Bowker (University of Ottawa)
·       Nataša Pavlović (University of Zagreb)
·       Paolo Rosso (Universitat Politècnica de València)
·       Ran Zhang (Bielefeld University)
·       Ruslan Mitkov (Lancaster University)
·       Sule Yildirim Yayilgan (Norwegian University of Science and Technology)
·       Tharindu Ranasinghe (Lancaster University)
 
 
For further information, please visit https://luhme.web.uah.es/ or contact rssilva@letras.up.pt


Rui Sousa Silva

Faculdade de Letras, Universidade do Porto
Faculty of Arts and Humanities, University of Porto

www.linguisticaforense.pt https://s.up.pt/qjur http://tinyurl.com/37w2ec6x 
Publicação mais recente / Latest publication: ‘We Attempted to Deliver Your Package’: Forensic Translation in the Fight Against Cross-Border Cybercrime

CONFIDENTIALITY WARNING: This message and its attachments are confidential and exclusively addressed to the recipients above. Should you not be one of the recipients, I kindly ask you not to make use of its contents and delete the message and its attachments. Please reply to this e-mail to warn me about this incident. Thank you.