Publications
HumaniBench: A Human-Centric Framework for Large Multimodal Models Evaluation
Raza, Shaina, Aravind Narayanan, Vahid Reza Khazaie, Ashmal Vayani, Mukund S. Chettiar, Amandeep Singh, Mubarak Shah, and Deval Pandya. arXiv preprint arXiv:2505.11454 (2025).
VLDBench: Vision Language Models Disinformation Detection Benchmark
Raza, Shaina, Ashmal Vayani, Aditya Jain, Aravind Narayanan, Vahid Reza Khazaie, Syed Raza Bashir, Elham Dolatabadi et al. arXiv preprint arXiv:2502.11361 (2025).
FairSense-AI: Responsible AI Meets Sustainability (Project Website)
Raza, Shaina, Mukund Sayeeganesh Chettiar, Matin Yousefabadi, Tahniat Khan, and Marcelo Lotif. arXiv preprint arXiv:2503.02865 (2025).
Perceived Confidence Scoring for Data Annotation with Zero-Shot LLMs
Salimian, Sina, Gias Uddin, Most Husne Jahan, and Shaina Raza. arXiv preprint arXiv:2502.07186 (2025).
Just as Humans Need Vaccines, So Do Models: Model Immunization to Combat Falsehoods
Raza, Shaina, Rizwan Qureshi, Marcelo Lotif, Aman Chadha, Deval Pandya, and Christos Emmanouilidis. arXiv preprint arXiv:2505.17870 (2025).
ViLBias: A Framework for Bias Detection Using Linguistic and Visual Cues
Raza, Shaina, Caesar Saleh, Emrul Hasan, Franklin Ogidi, Maximus Powers, Veronica Chatrath, Marcelo Lotif, Roya Javadi, Anam Zahid, and Vahid Reza Khazaie. arXiv preprint arXiv:2412.17052 (2024).
Fact or Fiction? Can LLMs Be Reliable Annotators for Political Truths?
Chatrath, Veronica, Marcelo Lotif, and Shaina Raza. arXiv preprint arXiv:2411.05775 (2024).
Media Coverage
- New multimodal dataset will help in the development of ethical AI systems
- Neutralizing Bias in AI: Vector Institute’s UNBIAS Framework Revolutionizes Ethical Text Analysis
- Dataset For Disinformation Detection In AI systems
- FairSense: Integrating Responsible AI and Sustainability
- YouTube Presentation: HumaniBench