It might seem that when creating animated films or video games, the computer is essential, but only as a tool in human hands. The human is the artist who gives final shape to what we see on the screen, while the computer, although it accompanies the artist every step of the way, serves as a kind of palette and brush. This may soon change, because machine learning already plays a role in the production of virtual entertainment, and it does some of the work better and faster than a human. How exactly?
Until now, characters in animated movies or video games were created in one of two ways. Either 3D models were given invisible skeletons of bones and tendons, so that they would move as naturally as possible, or actors were recorded in so-called motion capture sessions, playing their roles in wetsuit-like outfits with special markers attached to the fabric. Cameras tracked the movement of each marker, and the readings were then transferred to the screen in the form of full-blooded characters, such as the blue-skinned Na’vi race from the film “Avatar”.
Thanks to this, the movements, gestures and even facial expressions of these fantastic creatures looked realistic and familiar to us humans. Likewise, in a sports game, the movements of an athlete playing the ball in a special suit were transferred to the screen, where they were recreated with surgical precision by the footballers or basketball players of a virtual team, controlled by the player or by artificial intelligence.
Today, this is still the industry standard, but video game developers have decided to take it a step further: the latest installment of the popular football simulation, FIFA 22, features machine learning. How does it work, and why is it a breakthrough not only for the virtual entertainment industry?
Let’s imagine that an attacker bears down on the goalkeeper with the ball in a one-on-one situation. They both run toward each other, and suddenly the goalkeeper throws himself at the ball in a long slide, arms stretched forward. At this point, the attacker may still have time to take a shot, lob the ball over the goalkeeper, knock the ball past him and jump over his slide, or run straight into the goalkeeper and waste the chance. All of these outcomes can happen, along with others not mentioned here (a foul or a pass, for example). And what would this look like in a studio recording the players’ movements? Either the team would have to record each sequence separately, implement it in the game manually, and trigger the right animation depending on the player’s decision, or… teach the computer to create its own unique animations in real time, likewise driven by the player’s actions.
Sounds unrealistic? Yet it already exists. Until now, the bottleneck in sports games was that many short animations had to be recorded, because long ones sometimes looked absurd in action. Why? Imagine a player performing a scissors kick, but at the very moment he decided to do it, he lost the ball. Unfortunately, the animation was already running, and the player performed the acrobatic kick even though he no longer had the ball. He literally kicked the air.
Today, the artificial intelligence responsible for “gluing” longer animations together in real time knows, frame by frame, what might happen next. Since modern sports games run at 60 frames per second (fps), machine learning makes it possible to swap in a different animation up to 60 times per second if necessary.
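The frame-by-frame idea can be sketched in a few lines of code. This is only a toy illustration of motion-matching-style selection, not how FIFA 22 is actually implemented: every clip name, feature and number below is invented for the example.

```python
import math

# Toy database of recorded animation frames. Each entry stores a couple of
# pose "features" (distance to the ball, how far the leg is extended) plus
# the clip it belongs to. All names and values are illustrative.
FRAME_DB = [
    {"clip": "sprint",       "ball_dist": 3.0, "leg_ext": 0.2},
    {"clip": "short_pass",   "ball_dist": 0.5, "leg_ext": 0.6},
    {"clip": "scissor_kick", "ball_dist": 0.3, "leg_ext": 1.0},
    {"clip": "recover",      "ball_dist": 5.0, "leg_ext": 0.1},
]

def pick_frame(ball_dist, leg_ext):
    """Once per rendered frame (60 times per second), pick the recorded
    frame whose features best match the current game state."""
    def cost(frame):
        return math.hypot(frame["ball_dist"] - ball_dist,
                          frame["leg_ext"] - leg_ext)
    return min(FRAME_DB, key=cost)

# If the player loses the ball mid-kick, the selector can switch clips on
# the very next frame instead of finishing a scissors kick into thin air.
print(pick_frame(0.3, 0.9)["clip"])  # close to ball, leg raised -> "scissor_kick"
print(pick_frame(4.8, 0.1)["clip"])  # ball is gone -> "recover"
```

Because the choice is re-evaluated every frame, a long animation never has to play to the end once the game state contradicts it.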
What’s more, as the name suggests, the machines learn on their own: based on every previous play and each subsequent one, whether by the player or the computer, they are able to choose on the fly a more appropriate (smoother, more natural) animation of an attack, defense, play, kick, throw, slide, set piece, and so on.
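One simple way to read “learning from every play” is to keep a running quality score per animation clip and prefer the clips that have produced the smoothest transitions so far. The sketch below is a minimal, hypothetical illustration of that idea; the clip names, scores and update rule are all assumptions, not the game’s actual algorithm.

```python
from collections import defaultdict

# Every clip starts with a neutral score of 0.5 (purely an assumption).
scores = defaultdict(lambda: 0.5)

def record_play(clip, smoothness, rate=0.1):
    """Blend the observed transition smoothness (0..1) into the clip's
    running score - an exponential moving average."""
    scores[clip] += rate * (smoothness - scores[clip])

def best_clip(candidates):
    """Among the clips that fit the current situation, pick the one that
    has historically produced the smoothest transitions."""
    return max(candidates, key=lambda clip: scores[clip])

record_play("slide_tackle_a", 0.9)  # this variant blended well
record_play("slide_tackle_b", 0.2)  # this one looked jerky
print(best_clip(["slide_tackle_a", "slide_tackle_b"]))  # "slide_tackle_a"
```

With each recorded play the scores drift toward what actually looked good on screen, so later selections improve without any manual tuning.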
Returning to the example of the striker and goalkeeper in a one-on-one situation: as the goalkeeper slides toward the ball the striker is controlling, many unexpected resolutions can occur, fortunate or not for either team. Machine learning adds 4,000 new animations in the latest installment of the FIFA series. Without such advanced technology, this would not have been possible even over the four years of the game’s production. Probably the only scenario not foreseen is a stray cat or an overexcited fan running onto the pitch. Every other outcome is possible, just like in a real match.
So what is the end game? For players, more variety in the action and a higher level of realism; for developers, automation of processes. In the end, machine learning has replaced humans in analyzing and selecting the appropriate animations during a virtual match. Algorithms can pick the right animation on the fly from 8.7 million recorded frames, which has saved the development team several years of tedious manual work.
Learn more about machine learning tools: https://www.bpxglobal.com/en/solution/altair-en/altair-knowledge-studio/