Browsing by Issue Date, starting with "2025-07-04"
- Exploring indicators to identify bias in artificial intelligence models
  Publication. Saadi, Omar; Pereira, Ana I.; Muñoz, Raquel Ávila
  This thesis details the conception, development, and evaluation of the Bias Detector Tool, an open-source Python application built in the context of this work to assess ethical aspects of artificial intelligence (AI) systems. The tool evaluates machine learning (ML) models and datasets across five key ethical dimensions: fairness, transparency, privacy, robustness, and accountability. It is intended to assist researchers, developers, and policy makers in identifying ethical risks in AI systems. The tool employs a modular pipeline that processes CSV-format datasets, applies established metrics using specialized libraries such as Fairlearn, Diffprivlib, and LIME, and generates comprehensive output reports in TXT, JSON, and CSV formats. It offers a scoring mechanism that rates each ethical indicator on a scale from 0 to 1 and provides both technical results and simplified interpretations for non-expert users. To validate the tool's functionality and reliability, a test scenario was conducted in collaboration with Professor Raquel Ávila Muñoz, an expert in Equality, Diversity, and Inclusion at the Complutense University of Madrid. The evaluation compared datasets with limited ethical integrity (such as those lacking diversity or metadata) against well-structured datasets reflecting inclusive data practices. The tool successfully reflected these differences through its scoring system, confirming its efficacy in identifying ethically problematic datasets. In summary, this work contributes to the field of responsible AI by offering a practical, transparent, and user-friendly approach to ethical assessment. The tool is publicly available via GitHub, encouraging further adaptation and development by the research community.
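  To illustrate the kind of 0-to-1 indicator the abstract describes, the sketch below computes a demographic-parity fairness score by hand: a score of 1.0 means all sensitive groups receive positive predictions at the same rate, and lower values mean larger disparity. This is a minimal, hypothetical sketch in plain Python; the function name `demographic_parity_score` and the `1 - disparity` normalization are assumptions for illustration, as the thesis tool itself relies on libraries such as Fairlearn for its actual metrics.

  ```python
  from collections import defaultdict

  def demographic_parity_score(y_pred, sensitive):
      """Return a 0-1 fairness score (hypothetical normalization):
      1.0 = equal positive-prediction rates across all groups,
      lower = larger gap between the best- and worst-treated group."""
      totals = defaultdict(int)     # predictions seen per group
      positives = defaultdict(int)  # positive predictions per group
      for pred, group in zip(y_pred, sensitive):
          totals[group] += 1
          positives[group] += pred
      rates = [positives[g] / totals[g] for g in totals]
      # Demographic parity difference: max - min selection rate.
      return 1.0 - (max(rates) - min(rates))

  # Example: group "A" is selected 75% of the time, group "B" only 25%,
  # so the disparity is 0.5 and the score is 1 - 0.5 = 0.5.
  y_pred    = [1, 0, 1, 1, 0, 0, 1, 0]
  sensitive = ["A", "A", "A", "A", "B", "B", "B", "B"]
  print(demographic_parity_score(y_pred, sensitive))  # → 0.5
  ```

  A full pipeline like the one described would apply several such indicators (fairness, privacy, robustness, and so on) to a loaded CSV dataset and aggregate the per-indicator scores into its TXT/JSON/CSV reports.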