Publication

Empowering deaf-hearing communication: exploring synergies between predictive and generative AI-based strategies towards (Portuguese) sign language interpretation

dc.contributor.author: Adão, Telmo
dc.contributor.author: Oliveira, João
dc.contributor.author: Shahrabadi, Somayeh
dc.contributor.author: Jesus, Hugo
dc.contributor.author: Fernandes, Marco Paulo Sampaio
dc.contributor.author: Costa, Angelo
dc.contributor.author: Gonçalves, Martinho Fradeira
dc.contributor.author: Lopez, Miguel A.G.
dc.contributor.author: Peres, Emanuel
dc.contributor.author: Magalhães, Luis G.
dc.date.accessioned: 2011-06-03T09:28:52Z
dc.date.available: 2011-06-03T09:28:52Z
dc.date.issued: 2023
dc.description.abstract: Communication between Deaf and hearing individuals remains a persistent challenge requiring attention to foster inclusivity. Despite notable efforts in the development of digital solutions for sign language recognition (SLR), several issues persist, such as cross-platform interoperability and strategies for tokenizing signs to enable continuous conversations and coherent sentence construction. To address such issues, this paper proposes a non-invasive Portuguese Sign Language (Língua Gestual Portuguesa or LGP) interpretation system-as-a-service, leveraging skeletal posture sequence inference powered by long short-term memory (LSTM) architectures. To address the scarcity of examples during machine learning (ML) model training, dataset augmentation strategies are explored. Additionally, a buffer-based interaction technique is introduced to facilitate LGP terms tokenization. This technique provides real-time feedback to users, allowing them to gauge the time remaining to complete a sign, which aids in the construction of grammatically coherent sentences based on inferred terms/words. To support human-like conditioning rules for interpretation, a large language model (LLM) service is integrated. Experiments reveal that LSTM-based neural networks, trained with 50 LGP terms and subjected to data augmentation, achieved accuracy levels ranging from 80% to 95.6%. Users unanimously reported a high level of intuition when using the buffer-based interaction strategy for terms/words tokenization. Furthermore, tests with an LLM—specifically ChatGPT—demonstrated promising semantic correlation rates in generated sentences, comparable to expected sentences.
dc.identifier.citation: Adão, Telmo; Oliveira, João; Shahrabadi, Somayeh; Jesus, Hugo; Fernandes, Marco Paulo Sampaio; Costa, Angelo; Gonçalves, Martinho Fradeira; Lopez, Miguel A.G.; Peres, Emanuel; Magalhães, Luis G. (2023). Empowering deaf-hearing communication: exploring synergies between predictive and generative AI-based strategies towards (Portuguese) sign language interpretation. Journal of Imaging. eISSN 2313-433X. 9:11, p. 1-30
dc.identifier.doi: 10.3390/jimaging9110235
dc.identifier.eissn: 2313-433X
dc.identifier.uri: http://hdl.handle.net/10198/4957
dc.language.iso: eng
dc.publisher: MDPI
dc.subject: Sign language recognition (SLR)
dc.subject: Portuguese sign language
dc.subject: Video-based motion analytics
dc.subject: Machine learning (ML)
dc.subject: Long short-term memory (LSTM)
dc.subject: Large language models (LLM)
dc.subject: Generative pre-trained transformer (GPT)
dc.subject: Deaf-hearing communication
dc.subject: Inclusion
dc.title: Empowering deaf-hearing communication: exploring synergies between predictive and generative AI-based strategies towards (Portuguese) sign language interpretation
dc.type: journal article
dspace.entity.type: Publication
oaire.citation.title: Journal of Imaging
rcaap.rights: openAccess
rcaap.type: article
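The recognition pipeline described in the abstract (sequences of skeletal postures classified by an LSTM into one of 50 LGP terms) is not reproduced in this record, but the core mechanism can be sketched in a few lines of NumPy. Everything below — feature dimension, hidden size, random weights, and the synthetic 40-frame clip — is an illustrative assumption, not a value or model from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

D = 84   # assumed pose features per frame (e.g. 2 hands x 21 landmarks x (x, y))
H = 32   # assumed hidden units
K = 50   # number of LGP terms, matching the abstract's 50-term vocabulary

# Randomly initialised parameters; a trained model would learn these.
W = rng.normal(scale=0.1, size=(4 * H, D))   # input weights for the 4 gates
U = rng.normal(scale=0.1, size=(4 * H, H))   # recurrent weights
b = np.zeros(4 * H)
W_out = rng.normal(scale=0.1, size=(K, H))   # classification head
b_out = np.zeros(K)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_classify(frames):
    """Run an LSTM over a (T, D) sequence of pose vectors and return
    softmax probabilities over the K sign terms."""
    h = np.zeros(H)  # hidden state
    c = np.zeros(H)  # cell state
    for x in frames:
        z = W @ x + U @ h + b                 # stacked gate pre-activations
        i = sigmoid(z[:H])                    # input gate
        f = sigmoid(z[H:2 * H])               # forget gate
        o = sigmoid(z[2 * H:3 * H])           # output gate
        g = np.tanh(z[3 * H:])                # candidate cell update
        c = f * c + i * g
        h = o * np.tanh(c)
    logits = W_out @ h + b_out                # classify from final hidden state
    e = np.exp(logits - logits.max())
    return e / e.sum()

# A synthetic 40-frame clip stands in for a real sign's pose sequence.
probs = lstm_classify(rng.normal(size=(40, D)))
```

In the paper's buffer-based scheme, such a classifier would be invoked once per buffered window of frames, with the top-scoring term emitted as the next token of the sentence.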

Files

Original bundle
Name: jimaging-09-00235-v2.pdf
Size: 6.85 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon to submission