How Distributed Computing Will Transform MMOs

Distributed computing has done some great things for science and technology: it’s used to model climate change, tackle aerospace engineering problems, sequence genomes, and handle many other processing-power-intensive tasks.

Now, a new company called Improbable is exploring what this technology could do for the gaming world with distributed, dynamic, constantly updating virtual worlds—and it could change the way we play games.

Worlds that Exist When You’re Not There

In most video games, when you turn the game off, the world goes into stasis: nothing happens, because you’re not there to simulate it; the bits just sit in memory. Often, the game will even discard your changes and return to its original state. Think of an MMO with NPCs who lip-sync through the same conversations over and over again. But some games go beyond this and use a game world that keeps running even when no one’s around.
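One common way to make a world "keep running" is to fast-forward the simulation over the time that passed while nobody was connected. Here's a minimal sketch of that idea in Python; the class, the crop-growing rule, and the tick length are all made up for illustration, not taken from any particular game:

```python
import time

# Hypothetical sketch: a world that "ages" even while no player is
# connected, by fast-forwarding the simulation over the elapsed
# wall-clock time when someone logs back in.

TICK_SECONDS = 60  # one simulation tick per real-world minute (assumed)

class PersistentWorld:
    def __init__(self):
        self.crops_grown = 0
        self.last_update = time.time()

    def tick(self):
        """Advance the simulation by one step, e.g. grow one crop."""
        self.crops_grown += 1

    def catch_up(self):
        """Replay every tick that elapsed while the world was offline."""
        now = time.time()
        missed_ticks = int((now - self.last_update) // TICK_SECONDS)
        for _ in range(missed_ticks):
            self.tick()
        self.last_update += missed_ticks * TICK_SECONDS
        return missed_ticks

world = PersistentWorld()
world.last_update -= 3600  # pretend the player was away for an hour
ticks = world.catch_up()
print(ticks, world.crops_grown)  # 60 ticks replayed, 60 crops grown
```

The difference with a truly persistent world of the kind discussed here is that nothing is replayed: the simulation actually runs continuously on servers, so other players and systems can react to events as they happen.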

While many games aim simply to provide fun for a while, persistent-world games generally focus on drawing players into the universe of the game and making them feel that they’re actually a part of that universe. Building a world with causal depth and complexity would go a long way toward making these worlds feel alive, and would expand the scope of gameplay enormously.

To create these worlds, hundreds or thousands of computers around the world must cooperate, 24 hours a day, 7 days a week, just to keep the universe ticking. That’s what Improbable is trying to enable.

What’s Different about Improbable?

So if persistent-world games with complex, dynamic worlds have been around for years, what’s so exciting about Improbable getting into the game? The answer is simple: Improbable wants to make all of this complexity simple to use. It wants new technology to make this depth the standard, something any game can easily integrate. To get there, it’s learning from the difficulties faced by previous distributed virtual-world engines.

In an interview with Wired, Mark Ferlatte, a long-time overseer of Second Life, pointed out that Second Life’s architecture could leave some machines overloaded, which slowed down the network and failed to take full advantage of distributed computing.

Improbable’s new tech is designed to handle many backend tasks on behalf of developers, automatically shifting computation and bandwidth around to prevent any part of the system from overloading. Developers don’t have to think about it; they can just build rich, complex, detailed worlds and let the software work out the details.
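To make the idea concrete, here's a toy sketch of that kind of rebalancing: simulation work is split across workers, and when one worker carries too much load, pieces of the world migrate to the least-loaded worker. All names and numbers here are invented for illustration; this is not Improbable's actual API or algorithm:

```python
# Toy sketch (assumed names, not Improbable's code): each worker
# simulates some entities, each with a simulation cost. When a worker's
# total load exceeds a threshold, its most expensive entities migrate
# to the least-loaded worker.

THRESHOLD = 100  # max load a single worker should carry (assumed)

class Worker:
    def __init__(self, name):
        self.name = name
        self.entities = {}  # entity id -> simulation cost

    @property
    def load(self):
        return sum(self.entities.values())

def rebalance(workers):
    """Move entities off overloaded workers until all fit under THRESHOLD."""
    for w in workers:
        while w.load > THRESHOLD and len(w.entities) > 1:
            target = min(workers, key=lambda x: x.load)
            if target is w:
                break  # nowhere less loaded to send work
            # Migrate the costliest entity to the least-loaded worker.
            eid, cost = max(w.entities.items(), key=lambda kv: kv[1])
            del w.entities[eid]
            target.entities[eid] = cost

a, b = Worker("a"), Worker("b")
a.entities = {"dragon": 90, "town": 60, "forest": 30}  # load 180: overloaded
rebalance([a, b])
print(a.load, b.load)  # 90 90: the dragon moved to worker b
```

The point of the design is the one made above: the game developer never calls `rebalance` themselves; the platform watches load and moves work around behind the scenes.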

Of course, this is no small task; distributed computing is complicated, persistent-world games are complicated, and creating a blueprint that different designers can build on to quickly deploy their games is a colossal undertaking.

Moving Beyond Gaming

Interestingly, while this technology may be the future of gaming, it could also play a significant role in the types of distributed computing projects discussed earlier. If Improbable’s architecture can be used to quickly deploy distributed systems for scientists as well as game designers, we could see a big increase in the number of such projects.

Boundaries Disappearing

As the computing power available to game designers increases, the worlds that we spend our time playing in will become bigger, more complicated, and even more unpredictable as they grow without our constant input. In short, they’ll become more like the real world. That was the idea behind Second Life, after all.