Control of complex, physically simulated robot groups

David C. Brogan

University of Virginia, Charlottesville, Virginia

1. INTRODUCTION

Animated characters are needed to play the role of teachers or guides, teammates or competitors, or just to provide a source of interesting motion in virtual environments. The characters in a compelling virtual environment must have a wide variety of complex and interesting behaviors and must be responsive to the actions of the user. The difficulty of constructing such synthetic characters currently hinders the development of these environments, particularly when realism is required. In this paper, we describe one approach to populating graphical environments, using dynamic simulation to generate the motion of characters (figure 1).


Figure 1. Images of 105 simulated one-legged robots and 6 simulated bicycle riders.

Motion for characters in virtual environments can be generated with keyframing, motion capture, or dynamic simulation. All three approaches involve a tradeoff between the level of control given to the animator and the automatic nature of the process. Animators require detailed control when creating subtle movements that are unique or highly stylized; generating expressive facial animation usually requires this low level of control. Automatic methods are beneficial because they can interactively produce motion for characters based on the continuously changing state of the user and other characters in the virtual environment.

Further author information: E-mail: dbrogan@cs.virginia.edu

Keyframing requires that the animator specify critical, or key, positions for the animated objects. The computer then fills in the missing frames by smoothly interpolating between those positions. The specification of keyframes for some objects can be partially automated with techniques such as inverse kinematics, but keyframing still requires that the animator possess a detailed understanding of how moving objects should behave over time as well as the talent to express that information through the configuration of the character. A library of many keyframed animations can be generated off-line and subsequently accessed in an interactive environment to provide the motion for a character that interacts with the user.
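To make the interpolation concrete, the sketch below fills in a pose at an arbitrary time by interpolating between the bracketing keys; the data layout and names are illustrative, and production systems typically use smoother spline interpolation rather than the linear form shown here.

    # A minimal sketch of keyframe interpolation. Each keyframe pairs a time
    # with a joint-angle vector; frames between keys are interpolated.

    def interpolate_pose(keyframes, t):
        # keyframes: list of (time, [joint angles]) pairs, sorted by time.
        # Clamp to the first or last key outside the keyed range.
        if t <= keyframes[0][0]:
            return keyframes[0][1]
        if t >= keyframes[-1][0]:
            return keyframes[-1][1]
        # Find the two keys that bracket time t and blend between them.
        for (t0, pose0), (t1, pose1) in zip(keyframes, keyframes[1:]):
            if t0 <= t <= t1:
                alpha = (t - t0) / (t1 - t0)  # fraction of the way to key 1
                return [(1 - alpha) * a + alpha * b
                        for a, b in zip(pose0, pose1)]

    # Example: one elbow angle keyed at 0.0, 0.5, and 1.0 seconds.
    keys = [(0.0, [0.0]), (0.5, [90.0]), (1.0, [45.0])]
    print(interpolate_pose(keys, 0.25))  # halfway to the second key: [45.0]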

Motion capture is one of the most commonly used animation techniques. Magnetic or vision-based sensors are placed on an actor to record the positions of body parts or joint angles as the actor performs a desired action. This recorded motion is then played back through a graphical character. Motion capture is growing in popularity because of the relative ease with which many human actions can be recorded. In particular, sports video games often use motion capture to generate the stylistic movements of athletes in an interactive environment. However, a number of problems prevent motion capture from being an ideal solution for all applications. As with keyframing, recorded motion capture sequences must be skillfully blended together to create realistic movements that change in response to the actions of the user. Discrepancies between the shapes or dimensions of the motion capture subject and the graphical character can also lead to problems. If, for example, the subject was recorded touching a real table, the hands of a shorter graphical character might appear to intersect the table. Finally, the current technology makes it difficult to record certain movements. Magnetic systems often require connecting the subject to a computer by cables, restricting the range of motion, and they produce noisy data when metal objects such as treadmills are nearby. Optical systems have problems with occlusion caused by one body part blocking another from view. Motion capture will become easier to use in interactive environments as researchers develop automatic techniques that reuse captured segments to animate graphical characters of many shapes and sizes, and that increase the variety of character actions by automatically blending two captured movements with a smooth transition.
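Such a smooth transition is commonly implemented as a short cross-fade between the end of one clip and the start of the next. The sketch below shows that idea under simplifying assumptions: both clips are sampled at the same frame rate as lists of joint-angle vectors, and angles are blended componentwise; all names are illustrative.

    # A hedged sketch of blending two captured clips with a smooth transition.

    import math

    def blend_clips(clip_a, clip_b, transition_frames):
        # Cross-fade from the last frames of clip_a into the first frames of
        # clip_b; each clip is a list of per-frame joint-angle vectors.
        blended = clip_a[:-transition_frames]
        tail = clip_a[-transition_frames:]
        head = clip_b[:transition_frames]
        for i, (pose_a, pose_b) in enumerate(zip(tail, head)):
            # Ease-in/ease-out weight rises smoothly from near 0 to near 1.
            w = 0.5 - 0.5 * math.cos(math.pi * (i + 1) / (transition_frames + 1))
            blended.append([(1 - w) * a + w * b
                            for a, b in zip(pose_a, pose_b)])
        return blended + clip_b[transition_frames:]

In practice, rotations would be blended with quaternion interpolation rather than componentwise averaging, and the two clips would first be aligned in root position and facing direction.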

Unlike keyframing and motion capture, simulation uses the laws of physics to generate motion of figures and other objects. Virtual characters are usually represented as a hierarchy of rigid body parts connected by telescoping and rotary joints. The equations of motion for these body parts compute the movements that result from gravity, ground-contact forces during collisions, and torques applied at the joints. Each simulation also contains control algorithms that calculate the appropriate torques at each joint to accomplish such desired behaviors as hopping, riding, and balancing. Higher-level algorithms can use these control algorithms to direct a group of simulations to move as a herd or to navigate along a narrow path.
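A common building block for such control algorithms is a proportional-derivative (PD) servo at each joint, which converts the error between the current and desired joint angles into a torque. The sketch below is a generic version of this idea for a single rotary joint with unit inertia and illustrative gains; it is not the specific controller used in our simulations.

    # A generic PD servo: the proportional term pulls the joint toward the
    # desired angle while the damping term resists joint velocity.

    def pd_torque(theta, theta_dot, theta_desired, kp, kd):
        return kp * (theta_desired - theta) - kd * theta_dot

    # One second of simulation for a single joint with unit inertia.
    dt = 0.001                      # integration step, seconds
    theta, theta_dot = 0.0, 0.0     # joint angle and velocity
    for _ in range(1000):
        torque = pd_torque(theta, theta_dot, theta_desired=1.0,
                           kp=200.0, kd=20.0)
        theta_dot += torque * dt    # acceleration = torque / inertia (inertia = 1)
        theta += theta_dot * dt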

Dynamic simulation offers two potential advantages over other sources of motion for synthetic characters in virtual environments. First, simulation produces physically realistic motion that may be difficult to create using keyframing. While not all environments need or even benefit from physical realism, it is required for a growing set of applications such as sports training, task training, and team-oriented games. Second, because their motion is computed on the fly, dynamically simulated characters offer a more precise form of interactivity than characters animated with a fixed library of precomputed or recorded motion. For example, in football video games, the motion resulting from a collision between opposing players is a function of the magnitude and direction of their velocities as well as their body configurations at the time of impact. Because the number of possible initial conditions is very large, modeling this interaction accurately with a library of fixed motions is difficult. One disadvantage of dynamic simulation is its computational cost.

The bicyclist example presented here requires a modern processor to simulate in real time (such that simulated time passes at the same rate as wall clock time). Dynamic simulation also imposes some limitations on the behavior of synthetic characters. Simulated characters are less maneuverable than those modeled as point-mass systems and those that move along paths specified by animators. For example, although a point-mass model can change direction instantaneously, a legged system can change direction only when a foot is planted on the ground. If the desired direction of travel changes abruptly, the legged system may lose its balance and fall. These limitations are physically realistic and therefore intuitive to the user, but they also make it more difficult to design robust algorithms for group behaviors, obstacle avoidance, and path following.
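As a concrete picture of the real-time constraint, a simulator typically advances the physics in small fixed steps and then waits for the wall clock to catch up with simulated time before rendering the next frame. The loop below sketches that pattern; the step sizes and the simulate_step and render callbacks are illustrative, not our system's actual interface.

    # A sketch of a real-time simulation loop: simulated time advances in
    # fixed steps and is kept in lockstep with the wall clock.

    import time

    SIM_DT = 0.001         # seconds of simulated time per physics step
    FRAME_DT = 1.0 / 30.0  # seconds of simulated time per rendered frame

    def run(simulate_step, render):
        sim_time = 0.0
        start = time.perf_counter()
        while True:
            # Advance the physics until simulated time reaches the next frame.
            frame_deadline = sim_time + FRAME_DT
            while sim_time < frame_deadline:
                simulate_step(SIM_DT)
                sim_time += SIM_DT
            render()
            # If simulation is ahead of the wall clock, sleep; if the
            # processor is too slow, lag is negative and real time is lost.
            lag = sim_time - (time.perf_counter() - start)
            if lag > 0:
                time.sleep(lag)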

To illustrate the use of dynamically simulated characters, we created a group of simulated human bicyclists and a group of alien bicyclists that ride on a bicycle race course (figure 2). Our earlier results indicate¹,² that we can generate algorithms that support characters of different types and groups of varying size; however, manual tuning was required to obtain good performance. In this paper, we describe automatic tuning methods and algorithms that generate improved group performance.


Figure 2. The 13-kilometer race course from the 1996 Olympics. The graphical course captures the elevation, side streets, and surrounding terrain of the Atlanta, Georgia, streets where the race was held.