If you missed any of the presentations and demos during the Qualisys Virtual EnTimeMent Event on May 5 and May 19, 2021, you can watch the recordings here.
Thank you to everyone who participated in the event!
Session I: The EnTimeMent project – Goals and Foundations
- The EnTimeMent project | Prof. Antonio Camurri (University of Genoa)
- EnTimeMent Partner Pitch Presentations
  - Cognitive Neuroscience
    - Cortico-motor control orchestrates visual perception | Alice Tomassini (IIT)
    - The microscopic structure of interpersonal synchronization | Alessandro D’Ausilio (IIT)
    - The head department of body orchestration: INTRApersonal motor coordination in INTERpersonal couplings within a musical ensemble | Julien Laroche (IIT)
  - Computational models
    - Mocap technology for chronic pain management | Nadia Berthouze (UCL)
    - Graph convolutional networks for movement analysis and generation | Mårten Björkman (KTH)
  - Movement science
    - Embodied emotional signatures in the context of joint action | Benoît Bardy & Marta Bienkiewicz (EuroMov)
    - Intersecting information encoding and readout at the single movement level | Cristina Becchio (IIT)
    - Studying the movement of Indian singers using OpenPose | Martin Clayton & Jin Li (Durham University)
    - Using computer vision to assess leadership dynamics in musical groups | Peter Keller (Western Sydney University)
  - Technology
    - Motion analysis: Past, present and future | Fredrik Müller (Qualisys)
Session II: Designing time – Temporal scales in interaction design
- Keynote: Media Interaction Design and Creativity Support | Dr. Joseph Malloch (Dalhousie University)
- Research through Design, Temporality, and Flying User Interfaces | Dr. Mehmet Aydin Baytas & Mafalda Gamboa (Chalmers University of Technology)
- Movement qualities in different time frames | Dr. Sofia Dahl (Aalborg University)
- Sound, Fragility, Intrusiveness | Andrea Cera (University of Genoa)
- Panel Discussion
Session III: Grasping time – Capturing, understanding and modeling temporal scales in human perception and action
- Keynote: Robot systems that act, interact and collaborate | Prof. Danica Kragic Jensfelt (Royal Institute of Technology, Stockholm)
- Temporal levels in music: investigating time dilations and performance synchrony | Prof. Clemens Wöllner & Dr. Birgitta Burger (University of Hamburg)
- The experience of time and space in human standstill | Prof. Alexander Refsum Jensenius (University of Oslo)
- The role of computational and subjective features in emotional body expression perception / Representation of perceived body pose in the brain | Marta Poyo Solanas & Giuseppe Marrazzo (Maastricht University)
- Panel Discussion
Session IV: Markerless mocap and machine learning
- Qualisys markerless MoCap solutions | Dr. Nils Betzler (Qualisys)
- Machine learning in MoCap | Sten Remmelg (Qualisys)
- Multiple Temporal Scales in Motion Recognition: from Shallow to Deep Multi Scale Models | Vincenzo Stefano D’Amato (University of Genoa)
- Markerless MoCap demonstration | Dr. Vincent Fohanno (Qualisys)