Publication
Empowering Deaf-hearing communication: exploring synergies between predictive and generative AI-based strategies towards (Portuguese) sign language interpretation
dc.contributor.author | Adão, Telmo | |
dc.contributor.author | Oliveira, João | |
dc.contributor.author | Shahrabadi, Somayeh | |
dc.contributor.author | Jesus, Hugo | |
dc.contributor.author | Fernandes, Marco Paulo Sampaio | |
dc.contributor.author | Costa, Angelo | |
dc.contributor.author | Gonçalves, Martinho Fradeira | |
dc.contributor.author | Lopez, Miguel A.G. | |
dc.contributor.author | Peres, Emanuel | |
dc.contributor.author | Magalhães, Luis G. | |
dc.date.accessioned | 2011-06-03T09:28:52Z | |
dc.date.available | 2011-06-03T09:28:52Z | |
dc.date.issued | 2023 | |
dc.description.abstract | Communication between Deaf and hearing individuals remains a persistent challenge requiring attention to foster inclusivity. Despite notable efforts in the development of digital solutions for sign language recognition (SLR), several issues persist, such as cross-platform interoperability and strategies for tokenizing signs to enable continuous conversations and coherent sentence construction. To address such issues, this paper proposes a non-invasive Portuguese Sign Language (Língua Gestual Portuguesa or LGP) interpretation system-as-a-service, leveraging skeletal posture sequence inference powered by long short-term memory (LSTM) architectures. To tackle the scarcity of examples during machine learning (ML) model training, dataset augmentation strategies are explored. Additionally, a buffer-based interaction technique is introduced to facilitate the tokenization of LGP terms. This technique provides real-time feedback to users, allowing them to gauge the time remaining to complete a sign, which aids in the construction of grammatically coherent sentences based on inferred terms/words. To support human-like conditioning rules for interpretation, a large language model (LLM) service is integrated. Experiments reveal that LSTM-based neural networks, trained with 50 LGP terms and subjected to data augmentation, achieved accuracy levels ranging from 80% to 95.6%. Users unanimously reported that the buffer-based interaction strategy for terms/words tokenization was highly intuitive. Furthermore, tests with an LLM, specifically ChatGPT, demonstrated promising semantic correlation rates in generated sentences, comparable to those of the expected sentences. | por |
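As a rough illustration of the LSTM-based skeletal posture sequence inference described in the abstract, the sketch below builds a small stacked-LSTM classifier over fixed-length windows of pose keypoints. This is not the authors' implementation: the sequence length, per-frame feature size, network depth, and the TensorFlow/Keras stack are assumptions for illustration only; just the 50-term vocabulary follows the abstract.

```python
# Minimal sketch (assumptions noted above) of an LSTM classifier over
# skeletal keypoint sequences for sign language recognition.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_TERMS = 50        # LGP terms/classes reported in the abstract
SEQ_LEN = 30          # assumed number of buffered frames per sign
NUM_FEATURES = 126    # assumed flattened pose-keypoint coordinates per frame

def build_slr_model() -> tf.keras.Model:
    """Stacked LSTM over a fixed-length window of per-frame pose features."""
    model = models.Sequential([
        layers.Input(shape=(SEQ_LEN, NUM_FEATURES)),
        layers.LSTM(64, return_sequences=True),
        layers.LSTM(128),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_TERMS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_slr_model()
    # Dummy batch standing in for (augmented) training sequences.
    x = np.random.rand(8, SEQ_LEN, NUM_FEATURES).astype("float32")
    y = np.random.randint(0, NUM_TERMS, size=(8,))
    model.fit(x, y, epochs=1, verbose=0)
    print(model.predict(x[:1]).argmax(axis=-1))  # predicted term index
```

In the paper's pipeline, each predicted term index would correspond to one tokenized LGP sign collected via the buffer-based interaction, with sentence construction delegated to the LLM service.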
dc.identifier.citation | Adão, Telmo; Oliveira, João; Shahrabadi, Somayeh; Jesus, Hugo; Fernandes, Marco Paulo Sampaio; Costa, Angelo; Gonçalves, Martinho Fradeira; Lopez, Miguel A.G.; Peres, Emanuel; Magalhães, Luis G. (2023). Empowering Deaf-hearing communication: exploring synergies between predictive and generative AI-based strategies towards (Portuguese) sign language interpretation. Journal of Imaging. eISSN 2313-433X. 9:11, p. 1-30 | por |
dc.identifier.doi | 10.3390/jimaging9110235 | |
dc.identifier.eissn | 2313-433X | |
dc.identifier.uri | http://hdl.handle.net/10198/4957 | |
dc.language.iso | eng | por |
dc.publisher | MDPI | |
dc.subject | Sign language recognition (SLR) | por |
dc.subject | Portuguese sign language | por |
dc.subject | Video-based motion analytics | por |
dc.subject | Machine learning (ML) | |
dc.subject | Long short-term memory (LSTM) | |
dc.subject | Large language models (LLM) | |
dc.subject | Generative pre-trained transformer (GPT) | |
dc.subject | Deaf-hearing communication | |
dc.subject | Inclusion | |
dc.title | Empowering Deaf-hearing communication: exploring synergies between predictive and generative AI-based strategies towards (Portuguese) sign language interpretation | por |
dc.type | journal article | |
dspace.entity.type | Publication | |
oaire.citation.title | Journal of Imaging | por |
rcaap.rights | openAccess | por |
rcaap.type | article | por |