Vector AIXpert: Responsible AI Infrastructure for Fairness, Explainability, and Evaluation¶
Vector Institute's contribution to the AIXpert Project: tools, benchmarks, and research for explainable, accountable, and fair AI.
This project represents the Vector Institute's research contributions to the AIXpert Horizon Europe initiative. It focuses on developing tools, datasets, and evaluation pipelines for fairness-aware generative AI and explainable AI systems.
What we do¶
Vector's contribution to AIXpert spans four core areas:
- Explainable & accountable AI — Tools and benchmarks for interpretability, fairness, and transparency in generative and multimodal AI.
- Trustworthy agentic AI — Transparent, auditable, human-in-the-loop agentic systems with measurable trustworthiness metrics.
- Multimodal evaluation — Benchmarks and datasets for audio-video understanding, vision-language assessment, and fairness across domains and demographics.
- Open, reproducible research — Code, datasets, and documentation shared openly to support governance-ready research.
For the full AIXpert vision, consortium, and funding details, see About.
System Architecture¶
Vector's responsible AI pipeline moves data through five stages — from raw inputs to governed, explainable outputs.
View pipeline
Recent Updates¶
- UnBias-Plus — Bias detection and debiasing for text (CLI, REST API, Python). Project page · Code · PyPI. More on Updates.
- FairSense-AgentiX — Agentic fairness and AI-risk analysis for text, images, and datasets (FastAPI, Web UI, Python). Project page · Code · PyPI. More on Updates.
- HAICON26 — Shaina Raza (PhD) at the Helmholtz AI Conference 2026: AI for Science (8–11 June 2026, Munich). More on Updates.
- Toronto Machine Learning Summit — Ahmed Y. Radwan presents SONIC-O1 (16–19 June 2026). Project page · Code · Dataset · Leaderboard. More on Updates.
- Evaluating and Regulating Agentic AI — Forthcoming in Information Fusion (journal link to follow). arXiv · Project page · Code. More on Updates.
- EU cluster webinar — We took part in AI-Enabled Public Services: Building Resilience and Accountability (20 Apr 2026), hosted by EU projects TANGO, AI4REALNET, HumAIne, THEMIS 5.0, and Peer AI. Details on Updates.
- Model immunization — Accepted at WCCI 2026 (IJCNN). arXiv · Project page · Code. More on Updates.
- F-DPO — ACL 2026 Findings. arXiv · Project page · Code. More on Updates.
- TRiSM for Agentic AI — Accepted at AI Open (Elsevier, 2026). A review of trust, risk, and security management in LLM-based agentic multi-agent systems.
- Remarkable 2026 — We presented AIXpert projects at Remarkable 2026 (photos on Updates).
- SONIC-O1 Multi-Agent — Multi-agent framework for audio-video understanding: planning, chain-of-thought reasoning, self-reflection, and temporal grounding with Qwen3-Omni. Code.
- From Features to Actions — Paper: Explainability in Traditional and Agentic AI Systems (arXiv). Code · Project page.
- Transparency in Agentic AI — Survey: Interpretability, Explainability, and Governance (arXiv). Project page.
- AIXpert news — Our work was highlighted on the AIXpert project website: Advancing Trustworthy, Explainable, and Responsible AI at NeurIPS 2025 (Bias in the Picture, HumaniBench, Carbon Literacy, and more).
- SONIC-O1 — Paper: A Real-World Benchmark for Evaluating MLLMs on Audio-Video Understanding (arXiv). Dataset · Code · Leaderboard.
Related Projects¶
A snapshot of Vector's key contributions within AIXpert. Each project has its own repository, documentation, and quickstart.
- UnBias-Plus — AI-driven toolkit for bias detection and debiasing in text: biased spans, severity, reasoning, neutral replacements, and a full neutral rewrite for more trustworthy workflows. Project page · Code · PyPI
- FairSense-AgentiX — Agentic workflows for bias detection and risk assessment on text, images, and datasets: planning, tool use, self-critique, and telemetry-backed explanations. Project page · Code · PyPI
- SONIC-O1 — Real-world benchmark for evaluating MLLMs on audio-video understanding, with a public leaderboard. Dataset · Code · Leaderboard
- SONIC-O1 Multi-Agent — Multi-agent framework for audio-video understanding with chain-of-thought reasoning, self-reflection, and temporal grounding.
- Explainable Agentic Evaluation Framework — Analyzes reasoning traces and interpretability of agentic AI across static and agentic settings.
- Factual Preference Alignment (F-DPO) — Factuality-aware preference learning to reduce LLM hallucinations without a separate reward model. Paper · Project page · Code
- HumaniBench — Fairness-focused vision-language benchmark evaluating foundation models across human-centric demographics.
- Agentic Transparency — Survey and framework on interpretability, explainability, and governance of agentic AI systems.
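To give a feel for the span-level output that a text-debiasing tool such as UnBias-Plus produces (flagged spans, severity scores, neutral replacements, and a full neutral rewrite), here is a minimal toy sketch. Everything in it is hypothetical: the `BiasSpan` dataclass, the `detect_and_rewrite` function, and the hand-written lexicon are illustrative stand-ins, not the actual UnBias-Plus API or models (which are learned, not rule-based) — see the project's own quickstart for the real interface.

```python
from dataclasses import dataclass

# Hypothetical toy lexicon: biased phrase -> (neutral replacement, severity).
LEXICON = {
    "manpower": ("workforce", 0.6),
    "chairman": ("chairperson", 0.5),
}

@dataclass
class BiasSpan:
    start: int          # character offset where the flagged span begins
    end: int            # character offset where it ends
    text: str           # the flagged span itself
    severity: float     # 0.0 (mild) .. 1.0 (severe)
    replacement: str    # suggested neutral substitute

def detect_and_rewrite(text: str) -> tuple[list[BiasSpan], str]:
    """Return flagged spans plus a fully neutral rewrite of `text`."""
    spans: list[BiasSpan] = []
    rewrite = text
    for term, (neutral, severity) in LEXICON.items():
        idx = rewrite.lower().find(term)
        if idx != -1:
            spans.append(BiasSpan(idx, idx + len(term),
                                  rewrite[idx:idx + len(term)],
                                  severity, neutral))
            # Apply the replacement so later matches see the updated text.
            rewrite = rewrite[:idx] + neutral + rewrite[idx + len(term):]
    return spans, rewrite

spans, rewrite = detect_and_rewrite("The chairman approved extra manpower.")
print(rewrite)  # -> The chairperson approved extra workforce.
```

In a real pipeline each span would also carry model-generated reasoning for why it was flagged; the toy lexicon stands in for that detection step only.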
View all papers · Projects & quickstarts
Citation¶
If you use any of our tools, datasets, or benchmarks, please cite the relevant work. BibTeX entries are available on the Papers page.
Responsible AI Notice
This project may generate synthetic data containing demographic attributes for fairness research. These datasets are designed for controlled bias analysis and responsible AI evaluation only. They are not intended to represent or target real individuals. All data generation follows Vector Institute's responsible AI guidelines and AIXpert's ethical framework.
Have feedback or want to contribute? See the Team section on About and open an issue or pull request.