Google’s Artificial Intelligence Will Play Video Games for You

I don’t know about you, but when I play video games, I really enjoy, you know, playing them. If I wanted to delegate the gameplay to someone else, I would watch a let’s play or a Twitch stream or something like that. But Google is working on an artificial intelligence model that can play video games on your behalf if you tell it what you want it to do: It’s called SIMA, short for Scalable Instructable Multiworld Agent, and it works as advertised. AI may just have taken over your favorite hobby.

Google DeepMind, the company’s artificial intelligence arm, announced the new model on its blog, as well as in a post on X (formerly Twitter). SIMA, according to Google DeepMind, is the first general-purpose AI agent that can follow natural language instructions in 3D environments. In other words, it can play video games at your command: you say “turn left” and SIMA turns the character left.

Google DeepMind worked with eight video game studios to train SIMA, including Hello Games (No Man’s Sky) and Tuxedo Labs (Teardown). The development team wanted to use as many different types of games as possible to train SIMA, since each new game added another skill to the model’s repertoire. Google DeepMind even created a sandbox-like environment where SIMA had to build structures, testing its understanding of physics and its ability to manipulate objects.

What makes SIMA so promising, at least in theory, is that it doesn’t require any technical information about the game itself, such as source code or APIs. It acts based only on the images on screen and your natural-language commands. Google DeepMind claims that SIMA can perform more than 600 “basic skills,” such as turning in a certain direction, interacting with objects, and using game menus. However, Google DeepMind is still working on more complex actions, as well as commands that involve multiple subtasks. It’s one thing to tell an AI to climb the stairs in front of it; it’s another thing entirely to teach it to respond accurately to “gather resources to build a shelter.” The company says this is a limitation of large language models in general: bots will respond to simple commands but will struggle to perform intuitive, multi-step actions on their own.

Meanwhile, Google DeepMind is touting the success of its multi-game training approach, claiming that SIMA outperforms models trained on one specific game at a time. In fact, the company says SIMA performs better in a game it has never seen before than a model trained only on that game.

Although SIMA is not yet available to the general public, it’s easy to imagine some potential uses for the technology. It could be a great accessibility option in the future: for players who have trouble using traditional controllers, telling a bot how to control their character could be a game changer. Of course, Google’s end goal here seems to go beyond that: it wants AI that can play games by itself. That could be useful for skipping repetitive tasks like leveling up or grinding for money, but it also raises the question: why are you even playing the game if you want a robot to play it for you?

This is Google’s second major foray into AI-powered gaming in recent months: last month we learned that the company was working on a model that could generate 2D platformers from natural language commands as well. Perhaps someday in the near future the company will release Google Gaming: just tell the AI what type of game you’d like to see, and it will generate and run it for you in real time. Sounds fun.
