MLD Partners with the MULTIDATA Consortium

18 November 2024
The Multimodal Language Department is delighted to announce its collaboration with MULTIDATA, a European Erasmus+ (KA220) project aimed at revolutionizing multimodal data analysis in higher education.

This strategic partnership is set to bring transformative changes to how video-based multimodal data is processed and utilized in academic and research settings.

Through this partnership, the MULTIDATA platform offers cutting-edge features such as:

  • automatic detection of persons on screen
  • automatic detection of human body, hand, facial, and foot keypoints
  • normalization of the dynamic coordinates of body keypoints for reconstructing motion and gesture trajectories (see the sketch after this list)
  • automatic speech analysis (pitch, intensity, harmonicity and formants, measured frame by frame)
  • time-stamped automatic speech transcription and subtitles
  • ELAN annotation files with the start and end times of each spoken word, ready for further manual annotation
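
To give a concrete sense of what the keypoint normalization step involves, here is a minimal sketch in Python (NumPy only). It is not MULTIDATA's own code: the array shapes, the keypoint indices, and the neck-centred, shoulder-width-scaled normalization scheme are illustrative assumptions about how coordinates from a pose estimator (such as OpenPose or MediaPipe) could be made comparable across speakers and camera setups.

```python
"""Illustrative sketch only; not the MULTIDATA pipeline.

Shows one common way to normalize per-frame body keypoints so that
gesture trajectories become comparable across recordings.
"""

import numpy as np

# Hypothetical keypoint indices (they follow no particular pose model).
NECK, L_SHOULDER, R_SHOULDER, R_WRIST = 0, 1, 2, 3


def normalize_keypoints(frames: np.ndarray) -> np.ndarray:
    """Center each frame on the neck and scale by shoulder width.

    frames: array of shape (n_frames, n_keypoints, 2) in pixel coordinates.
    Returns an array of the same shape in body-relative units.
    """
    origin = frames[:, NECK:NECK + 1, :]                     # (T, 1, 2)
    shoulder_width = np.linalg.norm(
        frames[:, L_SHOULDER, :] - frames[:, R_SHOULDER, :], axis=-1
    )                                                        # (T,)
    # Avoid division by zero on frames where detection failed.
    scale = np.where(shoulder_width > 0, shoulder_width, 1.0)[:, None, None]
    return (frames - origin) / scale


def wrist_trajectory(frames: np.ndarray) -> np.ndarray:
    """Return the normalized right-wrist path, one (x, y) point per frame."""
    return normalize_keypoints(frames)[:, R_WRIST, :]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 100 frames, 4 keypoints, random placeholder pixel coordinates.
    fake_frames = rng.uniform(0, 720, size=(100, 4, 2))
    path = wrist_trajectory(fake_frames)
    print(path.shape)                    # (100, 2): a trajectory over time
    print(np.diff(path, axis=0).shape)   # per-frame displacement vectors
```

In practice, the input would be the keypoint data exported by the platform rather than the random placeholder frames used here, and the resulting trajectories could then be aligned with the time-stamped transcription and speech measurements listed above.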


Resources for Researchers and Learners

As part of this collaboration, MULTIDATA provides free webinars, workshops, and resources designed to empower researchers and educators. These sessions feature leading experts who demonstrate best practices for utilizing the platform’s tools in multimodal analysis.

Recently, MULTIDATA hosted an inspiring webinar titled "Data-driven AI for the Study of Multimodal Communication" by Mark Turner, showcasing live demonstrations and practical applications of the platform.


Get Involved

Don’t miss out on this transformative journey! Visit the MULTIDATA website to:

  • Register for upcoming events.
  • Explore the platform’s diverse offerings.
  • Access valuable research tools and resources.


For inquiries, reach out to the MULTIDATA team at hello [at] multi-data.eu.

Together, MLD and MULTIDATA are paving the way for a future where advanced tools and collaborative education redefine how multimodal communication is studied and understood. Stay tuned for more updates as we continue to shape the future of multimodal research!
