DAVALab – Digital Audio-Visual Annotation Lab

The Digital Audio-Visual Annotation Lab (DAVALab) develops an easy-to-use, web-based infrastructure for the automated analysis of text, speech, and video data. Researchers can upload interviews, conversations, or audiovisual recordings and receive high-quality annotations – such as transcriptions, speaker segmentation, facial expression and gesture recognition, or sentiment and emotion analysis. These results are delivered in standardized formats and can be used for further research, teaching, or public communication. Because DAVALab is a web application hosted at the University of Zurich (UZH) with a graphical user interface (GUI), no programming knowledge or special hardware is required.
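To give a sense of what such a standardized annotation export might look like downstream, here is a minimal Python sketch that reads a hypothetical speaker-segmented, sentiment-annotated transcript in JSON form. The schema, field names, and label values are illustrative assumptions, not DAVALab's actual export format.

```python
import json

# Hypothetical annotation export: speaker turns with time spans,
# transcribed text, and a sentiment label. This schema is an
# illustrative assumption, not DAVALab's actual output format.
annotation_json = """
{
  "media": "interview_001.mp4",
  "segments": [
    {"start": 0.00, "end": 4.20, "speaker": "S1",
     "text": "Thank you for joining us today.", "sentiment": "positive"},
    {"start": 4.20, "end": 9.75, "speaker": "S2",
     "text": "Happy to be here.", "sentiment": "positive"}
  ]
}
"""

annotation = json.loads(annotation_json)

# Print each speaker turn with its time span and sentiment label.
for seg in annotation["segments"]:
    print(f'[{seg["start"]:6.2f}-{seg["end"]:6.2f}] '
          f'{seg["speaker"]}: {seg["text"]} ({seg["sentiment"]})')
```

Because the output is plain, structured data like this, it can be loaded into any analysis environment or archived alongside the original recordings.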

Researchers across disciplines – from linguistics and psychology to media studies and AI – will be able to use DAVALab to generate standardized, reusable datasets for multimodal research. The platform lowers technical barriers and promotes FAIR data practices, helping to make complex audiovisual analysis accessible and sustainable at UZH and beyond.

DAVALab builds on existing infrastructure initiatives: the project is supported by the DSI, LiRI, and the Department of Computational Linguistics at the University of Zurich (UZH).



Project duration: 01.09.2025 – 31.08.2027

Contact: Dr. Teodora Vuković



Project Team

DAVALab is being built by an interdisciplinary team of experts in speech science, computational linguistics, film studies, data science, and more.