Traditional multi-view techniques
Video-game characters, digital doubles for sci-fi films and humanoid models used in sports analyses are all developed using photogrammetric or scanning techniques that rely on cameras and image processing, with careful processing governing the dimensions and profiles of the bodies being scanned and modeled. These systems generally rely on body-worn markers, although some recent versions can function without a special marker suit. Multi-view approaches can be highly accurate and sometimes provide dense surface reconstructions.
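To make the classical setup concrete, here is a minimal sketch of how such a system recovers the 3D position of a single body-worn marker from two fixed, pre-calibrated cameras via linear (DLT) triangulation. The camera matrices, intrinsics and marker position below are synthetic stand-ins, not values from any real capture system.

```python
# Minimal sketch of classical marker triangulation with fixed, pre-calibrated cameras.
import numpy as np

def triangulate(P_list, uv_list):
    """Linear (DLT) triangulation of one marker from two or more calibrated views.

    P_list  : list of 3x4 camera projection matrices (from calibration)
    uv_list : list of (u, v) pixel detections of the same marker
    """
    A = []
    for P, (u, v) in zip(P_list, uv_list):
        A.append(u * P[2] - P[0])   # each view contributes two linear constraints
        A.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(A))
    X = vt[-1]                      # null-space direction = homogeneous 3D point
    return X[:3] / X[3]

# Two cameras one metre apart, both looking down the z-axis (toy calibration).
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])
P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known marker position to get synthetic detections, then recover it.
X_true = np.array([0.2, 0.1, 3.0, 1.0])
uvs = []
for P in (P0, P1):
    x = P @ X_true
    uvs.append((x[0] / x[2], x[1] / x[2]))
print(triangulate([P0, P1], uvs))   # ~ [0.2, 0.1, 3.0]
```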
The problem with traditional techniques
While traditional methodologies can be very accurate, most of them require a set of environment-mounted cameras that are calibrated very carefully to record one particular location. As a result, these techniques are highly dependent on the environment in which they operate, which makes them extremely costly for developers and producers and entirely infeasible for certain projects.
Environment-independent approach
Removing the constraints that the environment and its architecture impose on multi-view techniques altogether is a formidable challenge for developers. A research publication titled 'Flycon: Real-time Environment-independent Multi-view Human Pose Estimation with Aerial Vehicles', by researchers at ETH Zurich and Delft University of Technology, addresses this problem: the authors propose an environment-independent approach to multi-view human motion capture that leverages an autonomous swarm of camera-equipped micro aerial vehicles (MAVs), or drones. The approach jointly optimizes the state of the swarm and the skeletal state of the subject, which includes the 3D joint positions and a set of bones.
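The following is a heavily simplified sketch of that joint-optimization idea: unknown drone camera poses, unknown 3D joint positions and unknown bone lengths are refined together so that the joints reproject onto the observed 2D marker detections. The number of drones, joints and bones, the shared camera intrinsics and the use of SciPy's least_squares solver are illustrative assumptions; the actual system runs online, handles occlusions and also plans the drones' trajectories.

```python
# Toy joint swarm/skeleton optimization: drone camera poses, 3D joints and bone
# lengths are estimated together from 2D marker detections. All values are assumptions.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

C, J = 4, 6                                   # drone cameras, skeleton joints
BONES = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
K = np.array([[600., 0., 320.],
              [0., 600., 240.],
              [0., 0., 1.]])                  # assumed shared, known intrinsics

def unpack(x):
    cams = x[:C * 6].reshape(C, 6)            # per camera: axis-angle rotation + translation
    joints = x[C * 6:C * 6 + J * 3].reshape(J, 3)
    bone_len = x[C * 6 + J * 3:]
    return cams, joints, bone_len

def project(cam, pts):
    """Pinhole projection of 3D points into one camera."""
    p_cam = (R.from_rotvec(cam[:3]).as_matrix() @ pts.T).T + cam[3:]
    p_img = (K @ p_cam.T).T
    return p_img[:, :2] / p_img[:, 2:3]

def residuals(x, detections):
    """Marker reprojection errors plus bone-length consistency terms."""
    cams, joints, bone_len = unpack(x)
    terms = [(project(cams[c], joints) - detections[c]).ravel() for c in range(C)]
    terms.append(np.array([np.linalg.norm(joints[a] - joints[b]) - bone_len[k]
                           for k, (a, b) in enumerate(BONES)]))
    return np.concatenate(terms)

# Synthetic scene: cameras spread along x, joints roughly 4 m in front of them.
rng = np.random.default_rng(0)
true_cams = np.zeros((C, 6))
true_cams[:, 3] = np.linspace(-1.5, 1.5, C)
true_joints = rng.normal(scale=0.3, size=(J, 3)) + np.array([0.0, 0.0, 4.0])
detections = np.stack([project(true_cams[c], true_joints) for c in range(C)])

# Warm start near the truth (an online system would reuse the previous frame's estimate).
x0 = np.concatenate([(true_cams + rng.normal(scale=0.05, size=true_cams.shape)).ravel(),
                     (true_joints + rng.normal(scale=0.05, size=true_joints.shape)).ravel(),
                     np.full(len(BONES), 0.5)])
sol = least_squares(residuals, x0, args=(detections,))
_, joints_est, _ = unpack(sol.x)
print(np.abs(joints_est - true_joints).max())   # small error after refinement
```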
The newly developed method allows real-time tracking of the motion of a human subject, for example an athlete, over long time horizons and long distances, in challenging settings and at large scale, where fixed-infrastructure approaches are not applicable.
The proposed algorithm uses active infrared markers, runs in real time, and accurately estimates robot and human pose parameters online, without the need for accurately calibrated or stationary mounted cameras.
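Active-marker systems typically obtain their 2D measurements by extracting bright blobs from each camera's infrared image. The sketch below shows a generic version of that detection step; the paper's own detection front end may differ.

```python
# Generic IR-marker detection: threshold the infrared frame, label connected
# bright regions and return their centroids as 2D detections.
import numpy as np
from scipy import ndimage

def detect_ir_markers(ir_frame, threshold=200, min_pixels=3):
    """Return (u, v) centroids of bright blobs in an 8-bit infrared image."""
    mask = ir_frame >= threshold                       # active LEDs appear as saturated spots
    labels, n = ndimage.label(mask)                    # connected bright regions
    centroids = []
    for blob_id in range(1, n + 1):
        ys, xs = np.nonzero(labels == blob_id)
        if xs.size >= min_pixels:                      # reject single-pixel noise
            centroids.append((xs.mean(), ys.mean()))   # (u, v) in pixel coordinates
    return np.array(centroids)

# Toy frame: dark background with two bright square "markers".
frame = np.zeros((480, 640), dtype=np.uint8)
frame[100:104, 200:204] = 255
frame[300:305, 400:405] = 255
print(detect_ir_markers(frame))    # ~[[201.5, 101.5], [402.0, 302.0]]
```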
What it does
As per the publication, the algorithm jointly estimates the 3D pose of the human subject and the poses of the flying cameras, and it does so online and in real time.
Challenges addressed by this new approach
Moreover, the algorithm addresses a number of challenges inherent to this setting, such as the absence of fixed, pre-calibrated cameras and the fact that the cameras themselves are constantly in motion.
Experiments and demonstrations
In one of the four experiments, a participant performing jumping jacks was tracked by drones controlled by the researchers' algorithm. The frame rate and shutter speed of the cameras limited the limb velocities that could be observed reliably; a back-of-the-envelope illustration of this limit follows.
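The numbers below (frame rate, peak hand speed and the radius within which a marker can still be re-associated from one frame to the next) are assumptions for illustration, not values from the paper.

```python
# Rough illustration of how camera frame rate bounds trackable limb speed.
frame_rate_hz = 30.0      # assumed camera frame rate
hand_speed_ms = 5.0       # assumed peak hand speed during a jumping jack
match_radius_m = 0.15     # assumed frame-to-frame marker association radius

displacement_per_frame = hand_speed_ms / frame_rate_hz
print(f"marker moves {displacement_per_frame:.3f} m between frames")   # ~0.167 m
# Once the per-frame displacement exceeds the association radius, frame-to-frame
# marker matching becomes unreliable, so faster motion needs a higher frame rate.
print("reliably trackable:", displacement_per_frame <= match_radius_m)
```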
Even so, the positions tracked by the algorithm are highly accurate, as is apparent from the reconstructed 3D skeletal model shown in the image.
Citation: Flycon: Real-time Environment-independent Multi-view Human Pose Estimation with Aerial Vehicles – https://ait.ethz.ch/projects/2018/flycon/downloads/flycon.pdf