Press release

Deep Neural Network Generates Realistic Character-Scene Interactions

31 October 2019, Brisbane, Queensland — A key part of bringing 3D animated characters to life is the ability to depict their physical motions naturally in any scene or environment.

Animating characters to interact naturally with objects and their environment requires synthesizing several types of movements in complex sequences, and such motions can differ greatly not only in their postures, but also in their duration, contact patterns, and possible transitions. To date, most machine-learning methods for user-friendly character motion control have been limited to simpler actions or single motions, such as commanding an animated character to move from one point to another.

Computer scientists from the University of Edinburgh and Adobe Research (the company's team of research scientists and engineers shaping early-stage ideas into innovative technologies) have developed a novel, data-driven technique that uses deep neural networks to precisely guide animated characters through a wide variety of motions, including sitting in chairs, picking up objects, running, side-stepping, and climbing over obstacles and through doorways, all from simple, user-friendly control commands.

A selection of results using the researchers’ method to generate scene interaction behaviors.

The researchers will demonstrate their work, Neural State Machine for Character-Scene Interactions, at ACM SIGGRAPH Asia, held 17-20 November in Brisbane, Australia. SIGGRAPH Asia, now in its 12th year, attracts the most respected technical and creative people from around the world in computer graphics, animation, interactivity, gaming, and emerging technologies.

Animating character-scene interactions with objects and the environment involves two main aspects, say the researchers: planning and adaptation. First, to complete a given task such as sitting in a chair or picking up an object, the character needs to plan and transition through a set of different movements: for example, starting to walk, slowing down, turning around while accurately placing its feet, interacting with the object, and finally continuing on to another action. Second, the character needs to adapt its motion naturally to variations in the shape and size of objects, and to avoid obstacles along its path.
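
As a toy illustration of the planning aspect, the movements for a "sit" task can be viewed as an ordered chain of states the character passes through, sketched here in Python. This is purely illustrative and not the authors' system; the point of their work is that the Neural State Machine learns such transitions from data rather than hard-coding them as below.

# Toy illustration of "planning": a task decomposed into an ordered
# chain of movement states. Purely illustrative; the Neural State
# Machine learns such transitions from motion data rather than
# hard-coding them like this.
SIT_TASK = ["walk", "slow_down", "turn", "place_feet", "sit"]

def next_state(current, plan):
    """Advance to the next movement state in the plan (stay at the end)."""
    i = plan.index(current)
    return plan[min(i + 1, len(plan) - 1)]

state = "walk"
while state != "sit":
    state = next_state(state, SIT_TASK)
    print(state)  # prints: slow_down, turn, place_feet, sit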

“Achieving this in production-ready quality is not straightforward and is very time-consuming. Our Neural State Machine instead learns the motion and the required state transitions directly from the scene geometry and a given goal action,” says Sebastian Starke, senior author of the research and a PhD student in Taku Komura’s lab at the University of Edinburgh. “Along with that, our method is able to produce multiple different types of motions and actions in high quality from a single network.”

Using motion capture data, the researchers’ framework learns how to most naturally transition the character from one movement to the next: for example, stepping over an obstacle blocking a doorway and then passing through the doorway, or picking up a box and then carrying it to set down on a nearby table or desk.

The technique infers the character’s next pose in the scene from its previous pose and the surrounding scene geometry. Another key component of the researchers’ framework is that it lets users interactively control and navigate the character with simple control commands. Additionally, the framework does not need to retain all of the originally captured data; the network heavily compresses it while preserving the important content of the animations.
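
To make that interface concrete, here is a minimal, hypothetical sketch in Python using PyTorch. It is not the authors' implementation, and all names and dimensions are illustrative assumptions; the published network is considerably more elaborate. What the sketch mirrors is the autoregressive loop described above: the previous pose, an encoding of the scene geometry, and a goal command go in, and the next pose comes out, frame after frame.

import torch
import torch.nn as nn

# Hypothetical stand-in for the pose-prediction network described above.
# The real Neural State Machine is more complex; this only mirrors its
# autoregressive interface: (previous pose, scene, goal) -> next pose.
class NextPoseNet(nn.Module):
    def __init__(self, pose_dim=78, scene_dim=128, goal_dim=16, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + scene_dim + goal_dim, hidden),
            nn.ELU(),
            nn.Linear(hidden, hidden),
            nn.ELU(),
            nn.Linear(hidden, pose_dim),  # predicted next pose
        )

    def forward(self, prev_pose, scene_code, goal):
        # Condition the prediction on pose, scene geometry, and goal action.
        x = torch.cat([prev_pose, scene_code, goal], dim=-1)
        return self.net(x)

model = NextPoseNet()
pose = torch.zeros(1, 78)    # starting pose (illustrative encoding)
scene = torch.randn(1, 128)  # encoded scene geometry (illustrative)
goal = torch.zeros(1, 16)
goal[0, 0] = 1.0             # one-hot goal command, e.g. "sit"
with torch.no_grad():
    for _ in range(30):                  # roll out 30 frames
        pose = model(pose, scene, goal)  # each output feeds the next step

Once trained on motion capture data, a network with this interface can be steered interactively: changing the goal command mid-rollout sends the character toward a new action.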

“The technique essentially mimics how a human intuitively moves through a scene or environment and interacts with objects, realistically and precisely,” says Komura, co-author and Chair of Computer Graphics at the University of Edinburgh.

Down the road, the researchers intend to tackle other related problems in data-driven character animation, including motions in which multiple actions occur simultaneously, and close-character interactions between two humans or even crowds.

Along with Sebastian Starke and Taku Komura, the researchers behind Neural State Machine for Character-Scene Interactions include He Zhang (University of Edinburgh) and Jun Saito (Adobe Research, USA). For the paper and video, visit the team’s project page.

SIGGRAPH Asia 2019 takes place at the Brisbane Convention and Exhibition Centre from 17-20 November 2019. For more information, please visit https://sa2019.siggraph.org.

###

Video https://youtu.be/7c6oQP1u2eQ

Notes to Editors

1. Keep tabs on updates, download images, and more at the SIGGRAPH Asia virtual newsroom.

2. Media may apply for accreditation to SIGGRAPH Asia at bit.ly/sa19accreditation.

3. Learn more about SIGGRAPH Asia's Technical Papers.

About SIGGRAPH Asia 2019

The 12th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia (SIGGRAPH Asia 2019) will be held in Brisbane, Australia at the Brisbane Convention and Exhibition Centre (BCEC) from 17 – 20 November 2019. The annual event held in Asia attracts the most respected technical and creative people from all over the world who are excited by computer graphics research, science, art, animation, gaming, interactivity, education and emerging technologies.

The four-day conference will include a diverse range of juried programs, such as the Art Gallery / Art Papers, Computer Animation Festival, Courses, Doctoral Consortium, Emerging Technologies, Posters, Technical Briefs, Technical Papers and XR (Extended Reality). Curated programs include the Business & Innovation Symposium, Demoscene and Real-Time Live! A three-day exhibition held from 18-20 November 2019 will offer a business platform for industry players to market their innovative products and services to computer graphics and interactive techniques professionals and enthusiasts from Asia and beyond. For more information, please visit http://sa2019.siggraph.org. Find us on Facebook, Twitter, Instagram and YouTube with the official event hashtags #SIGGRAPHAsia and #SIGGRAPHAsia2019.

About ACM SIGGRAPH

The ACM Special Interest Group on Computer Graphics and Interactive Techniques is an interdisciplinary community interested in research, technology, and applications in computer graphics and interactive techniques. Members include researchers, developers, and users from the technical, academic, business, and art communities. ACM SIGGRAPH enriches the computer graphics and interactive techniques community year-round through its conferences, global network of professional and student chapters, publications, and educational activities. For more information, please visit www.siggraph.org.

About Koelnmesse

Koelnmesse Pte Ltd is one of the world's largest trade fair companies. Its more than 80 trade fairs and exhibitions have the broadest international scope in the industry, as 60 percent of the exhibitors and 40 percent of the visitors come from outside Germany. The Koelnmesse events include leading global trade fairs for 25 sectors, such as Imm Cologne, Anuga, IDS, INTERMOT, Interzum Cologne, Photokina, Gamescom, and the International Hardware Fair Cologne. Koelnmesse has been ACM SIGGRAPH’s event organizer for the last 11 editions of SIGGRAPH Asia. For more information, please visit www.koelnmesse.com.sg.

Media Contacts

Illka Gobius, PINPOINT PR

illka@pinpointpr.sg | Mobile +65 97698370

Jamie Huang, Koelnmesse Pte Ltd

jamie.huang@siggraph.org | Mobile +65 92329738

Contacts

Sheree Tan, Press Contact (Associate) | +65 8313 9472

Hakim Ishak, Press Contact (Client Executive) | +65 8949 3040

Windy Oktaviani, Press Contact (Associate) | +62 811 910 9266

Ramilyn Laysa, Press Contact (Senior Associate) | +63 998 992 4925
