ÁGATA — Human Rights Violation Signal Detection Model
Overview
ÁGATA is a digital threat detection project in which I designed, from scratch, the experience and interface of an AI-powered automated monitoring system for Colombia's Defensoría del Pueblo. The challenge was not just one of design: it required understanding a completely new domain (digital human rights, natural language processing, risk classification) and translating it into an interface that lets analysts make fast, proportional, and legally grounded decisions about signals that can escalate into a crisis within minutes.
Team
- UX/UI Lead: Jorge Molano
- Data Scientists: 2
- Developers: 3
- Project Manager: Human Rights Specialist
- Client: Defensoría del Pueblo
Time
3 Months (MVP)
Designed For
Analysts from the Dirección para Asuntos de la Libertad de Expresión (DADLE) at the Defensoría del Pueblo, who monitor public digital conversations for signals of human rights violations: child recruitment, political violence, and gender-based digital violence. The core challenge is that harmful content can go viral in minutes, making manual monitoring impossible at scale. The interface had to surface the most critical signals first, reduce cognitive load during triage, and support human decision-making without replacing it.
Crafting the Solution
I led a co-creation process with the Defensoría team using Design Thinking methodology, from empathizing with analysts through testing in a QA environment, so that every design decision was grounded in real operational workflows. This let us prioritize the MVP scope in a structured way: separating what was essential from what was desirable, and documenting what would be addressed in future phases. The result was a three-column interface (signal list, signal detail, and atomic content units) with a configurable 0-to-100 scoring model that automatically triggers the corresponding alert level.
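The score-to-alert mapping described above can be sketched as a small configurable rule. This is an illustrative sketch only: the class name, threshold values, and alert labels are hypothetical and do not reflect the Defensoría's actual configuration; the key idea is that thresholds live in configuration, so analysts can retune alert levels without touching the detection model.

```python
from dataclasses import dataclass, field

@dataclass
class AlertConfig:
    """Maps a 0-100 risk score to a traffic-light alert level.

    Thresholds are lower bounds (inclusive) and are configurable,
    so the mapping can adapt to emerging risks without code changes.
    Values here are hypothetical examples.
    """
    thresholds: dict = field(default_factory=lambda: {
        "red": 80,     # critical: immediate analyst review
        "yellow": 50,  # elevated: triage within the shift
        "green": 0,    # informational: routine queue
    })

    def alert_level(self, score: float) -> str:
        if not 0 <= score <= 100:
            raise ValueError(f"score out of range: {score}")
        # Check levels from the highest threshold down; the first
        # level whose lower bound the score meets is the alert level.
        for level, lower in sorted(self.thresholds.items(),
                                   key=lambda kv: kv[1], reverse=True):
            if score >= lower:
                return level
        return "green"

config = AlertConfig()
print(config.alert_level(91))  # red
print(config.alert_level(62))  # yellow
print(config.alert_level(12))  # green
```

Keeping the thresholds in a plain dictionary (or an external config file) is what makes the same interface reusable across detection models and institutional contexts.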
Results
The first automated human rights signal detection interface built for a Colombian public institution. The design translated the complexity of an NLP model into a traffic-light system that non-technical analysts can act on immediately, built on a configurable architecture that lets the system adapt to emerging risks, new categories, and different institutional contexts without rebuilding the underlying model. The result is a replicable GovTech framework for AI-powered human rights monitoring platforms: scalable by design and applicable across different detection models and institutional needs.