I’m here to talk about the future of VR and games, but I think the things that the people in this room are working on are going to be so fundamental to the development of the entire computer industry that this might as well be a talk about the future of human-computer interaction. Now my thoughts here are based on my experience with Epic Games, so it might be useful to start out with a brief overview of what we do. We develop games like Unreal Tournament, an old school shooter for PC, and Paragon, our new free-to-play MOBA action game that’s now on PC and PlayStation 4, and we make the Unreal Engine. And in the course of making this leading edge technology we work very closely with all of the hardware vendors, all of the platform companies, and researchers around the industry.
So we try to bring together the best of all of their knowledge and our own insight from building the technology, to build an understanding of where the industry is going in the future, and that’s what I’m here to share today. Now we’re already in a very good position, because on all platforms across the industry we have very high performance GPUs with a common feature set of highly programmable shaders, as well as low-level APIs that eliminate the performance bottlenecks of earlier driver models. So we have a really great lineup of hardware available.
On PC we have DirectX 12 and Vulkan, and we have Vulkan also on Android and Linux, leading all of the open platforms forward. And the really interesting observation is that the real innovation driving the entire technology forward is driven by personal computers. PCs are platforms that are open to everyone, and releasing products doesn’t require permission from anybody. As a result, software and hardware innovation are completely unfettered by gatekeepers. Early Apple computers drove the video game industry before the consoles were even invented, and after the console crash of 1983, the IBM PC drove game development forward.
Now these early games began with abstract representations of environments and characters because we just didn’t have enough computing power to draw anything roughly approximating reality. But that changed in the early 1990s with John Carmack’s amazing work on Wolfenstein 3D and Doom, and then Epic’s work on the Unreal games. It finally became possible to render rough approximations of reality because we had enough computing performance to do that. And of course for the past 18 years now we’ve been pushing forward, ever advancing the state of the art in the industry, and now we’re just approaching the cusp of photorealism. And this has come just in time.
It’s come in time for the advent of virtual reality, where it’s most required. And of course the PC is leading the VR revolution. VR requires leading-edge hardware and rapid evolution, and the modular nature of the PC, with separation between CPUs, GPUs, headsets, software and distribution platforms, means that all of these different system components can evolve without any constraints or bottlenecks on the overall system. It’s really a fountain of innovation in a way that the closed platforms are not.
Companies across the industry are all contributing to this aim of achieving photorealism, and we’re approaching it now. High-end graphics are vital to VR in a way that was not true of any prior platform, because our brains expect reality. In a VR environment our brains are expecting a sense of presence, and we find it very disconcerting when even a few cues are wrong. And even when we’re looking at stylized graphics (this is Oculus’ Henry VR short film), a huge set of details has to be right, or else your brain just doesn’t accept the image, whether it’s realistic or stylized. And so the problem of rendering for VR is much harder than the problem of rendering for Doom or Pac-Man or today’s leading-edge games.
It’s not just rendering. It’s also about sensor technology, about physiology and psychology, and about weaving all of these different details together into something that really feels like magic to the user. I’d like to review some of the core aspects of graphics that form the state of the art right now. We have physically based rendering as the starting point for everything. The goal is to realistically simulate the interaction between light and the microscopic structure of surfaces using macroscopic calculations.
So objects have bumps on the nanometer scale and above, and these new rendering models and advanced BRDFs all are aimed at simulating those calculations accurately. We’re really at this point able to press into photo realistic territory on a lot of different fronts, not all of them, but many. And for a wide variety of real world surfaces it’s becoming very hard to distinguish what you’re seeing on the computer screen versus what you see in reality.
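As a concrete illustration, here is a minimal Python sketch (not Unreal's actual shader code) of the kind of microfacet specular term these advanced BRDFs build on, assuming the widely used GGX distribution, Schlick's Fresnel approximation, and a Smith-Schlick geometry term:

```python
import math

def ggx_specular(n_dot_l, n_dot_v, n_dot_h, v_dot_h, roughness, f0):
    """Cook-Torrance style specular: GGX normal distribution,
    Schlick Fresnel, and Smith-Schlick shadowing/masking.
    All dot products are assumed pre-clamped to (0, 1]."""
    a = roughness * roughness
    a2 = a * a
    # GGX / Trowbridge-Reitz normal distribution function
    d_denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    d = a2 / (math.pi * d_denom * d_denom)
    # Schlick's approximation to the Fresnel reflectance
    f = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5
    # Smith-Schlick geometry (shadowing/masking) term
    k = a / 2.0
    g = (n_dot_l / (n_dot_l * (1.0 - k) + k)) * \
        (n_dot_v / (n_dot_v * (1.0 - k) + k))
    return d * f * g / (4.0 * n_dot_l * n_dot_v)
```

The key property: as roughness falls, the specular peak tightens and brightens, which is exactly the microscopic-bump behavior the macroscopic formula is approximating.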
The big challenge is to integrate lighting and reflections, transparency, shadows and anti-aliasing to achieve realism through smart sampling. The key observation, and the challenge facing every graphics programmer in the industry right now, is that display resolutions are increasing much faster than underlying GPU performance. 4K televisions and monitors are now available cheaply, and manufacturers are starting to demonstrate 8K monitors.
So we have 16x growth in pixel counts on the desktop, and in VR we’re somewhere on a 10-year roadmap from our current low resolutions all the way up to at least 8K per eye, which will be required for complete realism in a VR environment. And so graphics developers are going to have to invest enormous effort into achieving realism at these very high pixel counts, without commensurate increases in GPU performance. As we strive for photorealism, the content pipeline has to change, too.
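The scale of that gap is easy to make concrete with back-of-envelope arithmetic (the 90 Hz refresh rate below is an assumption typical of current headsets):

```python
def pixels(width, height):
    return width * height

# Desktop growth: 1080p today versus the 8K panels being demonstrated.
p_1080 = pixels(1920, 1080)     # ~2.1 million pixels
p_8k = pixels(7680, 4320)       # ~33.2 million pixels
print(p_8k / p_1080)            # 16.0 -- the 16x growth in pixel counts

# A hypothetical 8K-per-eye headset refreshing at 90 Hz:
throughput = p_8k * 2 * 90
print(throughput / 1e9)         # ~5.97 billion shaded pixels per second
```

Nearly six billion shaded pixels per second, every frame of it latency-critical, is why realism at these resolutions demands smart sampling rather than brute force.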
In the early days we’d build all of the art for our game in Photoshop or Deluxe Paint, but the effort required to do that goes up exponentially with the resolution of the artwork and the resolution of the displays. And so now, to capture realistic materials cost-effectively, we need to study and sample real-world objects using photogrammetry, and this has been a key part of our content development pipeline for the past three years. For one project, the Kite demo in Unreal, we went out to New Zealand and sampled hundreds of real-world objects, then placed them in a procedurally generated environment. Here you can see all of the different material layers, from the color channels to the height channels, all derived by automated processing of images sampled from the real world. Putting these together into a big environment requires proceduralism, and both of these efforts, sampling real-world objects and then procedurally distributing them through game worlds, are essential because they’re productivity multipliers.
As realism in games increases, we cannot have our team sizes increasing in direct proportion to that. It would be an exponential increase in game budgets. And so everything we can do to multiply the productivity of individual team members is going to be a very, very high value improvement to the game development pipeline. It’s funny, if you look at how game developers build content today, it’s not very much different than it was 20 years ago.
This is the first Unreal Editor, a program I wrote in Visual Basic of all things, back in 1996. (applause) Who’s used it? (laughs) Now computers and graphics are 100,000 times faster. We have a vastly larger set of display capabilities available, yet the tools have remained largely the same. But this is about to change dramatically.
The next step is to build content within VR, in the fully immersive environment. Instead of using CAD-software idioms of really complex mouse movements and menus to select and modify objects, you’ll reach out, grab them, and move them around. And this is a technology that we’ve proven out in the Unreal Editor. It’s shipping today.
You can download it and use it. Unity has also done some really cool things in this area. But this is going to be one of the most profound changes to the content development pipeline we’ve seen in our lifetimes. And we’re not just talking about some features for some aspects of VR editing. With this effort you can actually go into VR mode, be immersed and bring up the full Unreal Engine menus as if you have a little virtual iPad in this VR world, and interact with all of the 2D menus and options so that you can do absolutely anything that’s part of the content pipeline.
This will be really interesting because it means that all of these legacy programs, 3D Studio MAX, Maya, all of these different tools we use, are going to have to be significantly upgraded and rethought, or replaced, in order for us to gain these awesome new productivity tools. Within a few years we’re going to be doing all of our painting, sculpting and modeling of objects, and our playtesting of 3D environments, within the VR medium directly, and not on an old computer monitor. This is really interesting because it’s going to open up graphics to a much larger set of users and content creators. We’ve already seen over the past two years a pretty surprising trend.
It turns out that while we were off developing games, the real-time graphics possible with modern game engines passed a threshold where they suddenly became interesting to companies that had been using offline rendering techniques to build content for corporate applications, and those companies are now adopting real-time game engines. Architects are using the Unreal Engine, for example, to build really complex environments. And working with these folks has been a very interesting experience, because it really highlights how, as game developers, we have enormous opportunity to cheat. When we’re trying to build a 3D environment, if the hardware can’t handle what we want to build, we can just stylize it, limit it or build some sort of clever workaround to hide the limitations of the technology. But that does not work for architects or enterprise customers building models of 3D objects. They absolutely need everything in reality to be recreated accurately in real time.
And it’s a complete failure if the technology cannot support that. And so these new applications are going to push engine developers, all of them, in really interesting new ways, so that we can expand our repertoire of features to support everything required for representing reality. Now it’s being used in the automotive industry, for example. I kind of look at all of these applications as non-fiction gaming, because they are doing all of the things that game developers do in terms of creating realistic interactive environments and objects you can interact with, everything but the gameplay itself. And so it’s kind of an interesting microcosm of technology. Now this has really pushed engine technology a lot, for example to create realistic anisotropic paint.
It required changing the rendering model, implementing new BRDF support, and creating realistic carbon fiber materials, and all of these different materials are pushing rendering efforts, and the game industry is benefiting from it. The level of quality has actually gotten pretty high over the past several years. Here we have a rendered object and a real-world object side by side. It’s getting harder and harder to tell which is which, and that’s a very interesting and exciting thing for the future of the technology, because it means we’ll be able to bring all of these technologies to mainstream consumers.
Instead of going on Amazon.com to buy a product and seeing a crappy little thumbnail screenshot, you wanna be able to pull up the entire object fullscreen, move around it, even customize parts of it, all in real time, all on a computer, and all using engine technology. And it’s going to become even better as VR and augmented reality technology becomes more widely available. Now another major challenge for the whole industry is recreating realistic digital humans. Our experience building Paragon has really pushed our artists and engineers to stay on top of this.
Our goal was to create an online game that’s faithful to the MOBA genre, but brings really advanced, realistic, immersive 3D graphics into the fold. So this has been a big challenge, and it’s pushed the engine in a lot of different ways. We’ve had to start with photo references of real humans to calibrate brightness, color and surface properties. We measure all of these properties of people on a light stage to calibrate the local and global responses to different lighting inputs, and to provide parameters for subsurface scattering algorithms and other things. Of course, people are always wearing interesting kinds of clothes, so this has pushed cloth systems, the animation aspects, the rendering and the physical simulation, for everything from velvet to canvas.
And very high fidelity data needs to be captured to achieve realism. You cannot do this at low resolution. Details down to the level of pores in human skin are really important, and the anisotropic reflections in hair are super important to generating a realistic appearance of a human.
And also the interaction of all these different elements with light, including the back scattering of light through hair and all of the interaction of subsurface scattering with a layer of skin. Even when the light isn’t hitting them directly you see color. Now realistic eyes are a hugely important aspect of digital humans, especially if you’re trying to create the appearance of social interaction between characters, either interacting with the player or interacting with each other in a cinematic environment. There’s an enormous number of details, and it turns out our brains have so much special purpose circuitry just for determining a person’s intent from what you’re seeing of their face.
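That behavior, color showing up even where light isn’t hitting directly, can be approximated in many ways; a classic cheap stand-in for full subsurface scattering (a sketch, not the specific technique Unreal ships) is “wrapped” diffuse lighting:

```python
def wrapped_diffuse(n_dot_l, wrap):
    """'Wrap lighting' approximation to subsurface scattering:
    light bleeds past the terminator instead of cutting off hard
    at N.L = 0. wrap = 0 reproduces plain Lambert; higher wrap
    values soften the falloff further."""
    return max(0.0, (n_dot_l + wrap) / (1.0 + wrap))

# Plain Lambert is black just past the terminator...
print(max(0.0, -0.1))               # 0.0
# ...while wrapped lighting still shows color there:
print(wrapped_diffuse(-0.1, 0.5))   # ~0.267
```

Production skin shading layers far more on top of this (screen-space diffusion, measured profiles), but the principle is the same: simulate light that has traveled through the surface rather than bounced off it.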
It’s very important to get it right. When everything comes together, here’s the sort of result that’s possible today running in real time. This is not perfect photorealism yet, but it’s getting closer every day. Humans are the hardest problem to solve, but I feel this problem will be solved over the next decade, and we’re going to have astonishingly realistic characters present in all sorts of virtual experiences. Now that’s just the static rendering aspect of digital humans.
There are a lot of other challenges centering around animation, facial interaction, determining intent and artificial intelligence control of these things. There’s a lot more research to do, but even today it’s very astonishing what we can do with nine teraflops of GPU performance on the highest end PCs that are available today. And it’s even more astonishing to think about what happens as these GPUs increase in performance significantly every year.
The console platforms are getting upgrades, which is a very welcome thing to see, but there is no doubt that the PC is leading the way in this revolution. What lies beyond nine teraflops? The next big problems to solve are completely realistic human motion, which means going beyond our skeletal animation approaches to fully physically simulate the movement of a human body. That means simulating muscle contraction and bulges, and the movement of body tissue, fat and tendons, all contributing to the appearance of a character.
And then we need to bake those technologies down into a highly usable form, so that you don’t have to put man-years into creating each human; you can start with a generic human and customize it to meet your needs, hopefully with automated tools that match real-world reference. There’s a lot of work to do here, and the game industry really lags behind state-of-the-art technology companies like Google and Facebook in adopting techniques like deep learning to automate the solutions to a lot of these problems, which we’re still solving by brute force. But these are areas where we need to invest, and we will invest, and the industry’s going to make really great strides over the next few years. Of course, camera models are also really important, both for games and even more so for film.
The same goes for cross-genre uses of VR that create film-like experiences with some level of interactivity. We have very realistic camera models now which accurately produce effects like depth of field, not using little tweaky, programmer-oriented parameters, but using the sort of camera models that actual cameramen in Hollywood expect, so that the things you would do with a real-world physical camera can be replicated in a game environment in real time. This is really pushing VR technology into Hollywood.
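As a sketch of what such a camera model computes (a thin-lens approximation, not the Unreal implementation), here is the blur-circle diameter derived from the parameters a cinematographer actually sets:

```python
def coc_diameter(focal_length, f_number, focus_dist, subject_dist):
    """Thin-lens circle-of-confusion diameter on the sensor (all
    distances in meters) for a point at subject_dist when the lens
    is focused at focus_dist. Inputs are the physical parameters a
    cameraman uses: focal length, f-stop, and focus distance."""
    f = focal_length
    return (f * f / (f_number * (focus_dist - f))) * \
           abs(subject_dist - focus_dist) / subject_dist

# A 50 mm lens at f/1.8 focused at 2 m: a point 4 m away blurs to
# roughly a 0.36 mm circle on the sensor, clearly out of focus.
print(coc_diameter(0.050, 1.8, 2.0, 4.0) * 1000)
```

The renderer then scatters or gathers each pixel over a disk of that diameter, so stopping down the virtual aperture or pulling focus behaves exactly as it would on a physical camera rig.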
Every major movie studio and every major movie producer is thinking heavily about VR. Many are investing in projects. Some have been announced.
Many more are in the works. Weta Digital, for example, took all of the non-real time rendered computer assets from The Hobbit film, imported them into the Unreal Engine, and built a full scene which they showed at the Game Developers Conference a couple years ago to demonstrate the possibilities. But this is just the very leading edge of what’s happening there.
All the studios are adopting it. Lucasfilm has been experimenting with it using real-time game engines. And Disney Imagineering is maybe the coolest example of how the game industry’s technologies, techniques and expertise are finding their way more widely into the outside world: Disney is building expansions to its Disneyland and Disney World theme parks that are based on VR.
They’re basically Star Wars VR expansions, all running in real time using PC-based computer graphics. And the experiences are going to be just completely awesome; they’re kind of a hint of the direction of everything to come. A funny thing happens in technology revolutions: sometimes they move in unexpected directions. Just as the PC was getting to the point of crossing the one-teraflops boundary, smartphones came out. They were funny little devices that we always thought of as underpowered, but over the next 10 years smartphones grew into a much, much larger industry than the PC, with almost three billion units adopted worldwide, all capable of playing games.
That’s led the industry in a way that we hardcore technology enthusiasts thought of as backwards for a while, because it caused gaming to move from super high-end 3D back to 2D and highly stylized rendering scenarios. But I think it’s very important for us to identify what made the smartphone revolution so successful, so that we do not misapply those lessons to VR and augmented reality as they develop. Smartphones became successful because they’re incredibly convenient. They fit in your pocket, you can go anywhere, you have access to all of the internet, your email and communication with people, and that drove adoption. And then game developers realized that since you have this little device in your pocket, we can just stick some games on it.
When you’re sitting there playing a smartphone game, the screen fills about 15 degrees of your field of view, a really, really tiny number. And so developers quickly discovered that the games that worked best on smartphones were very simple games with very simple graphics and iconic representations of things in the world, not immersive 3D graphics, because immersive 3D is pretty limited when it’s fit within 15 degrees of your field of view. And so it was not mobility or this mobile technology per se that revolutionized the industry.
It was the convenience of these devices that led to the emergence of a game industry around them. But virtual reality is going to be the complete opposite, because VR puts you in a completely immersive environment. It fills 120 degrees or more of your field of view and attempts to recreate the sensation of presence.
It’s going to require games that push technology far harder than anything before: with realism, with high frame rates, with interaction. And as VR evolves and is miniaturized over time, eventually we’re going to see this develop into something with the form factor of Oakley sunglasses. We’re going to need vastly more GPU performance to achieve that. And I’m fairly skeptical of the success potential of mobile VR solutions running on low-end GPUs in the meantime, because they are just not going to be able to match the quality of what’s happening in high-end VR today. And so I think the major developments, the leading-edge games, and the profits are going to be in the high-end VR space for at least the next five years.
The funny thing that’s happened over the last 20 years in the industry is multiplayer gaming. High-end multiplayer 3D gaming was invented with Doom 20 years ago, but multiplayer gaming hasn’t actually changed much since then. We have better graphics now, and we have some bigger games, but the mechanics of players moving around these environments, shooting and engaging in very low input bandwidth experiences, have remained unchanged throughout that time. I think this is going to change more in the next two years than it has in the past 20, because we’ll have fully interactive VR social experiences.
Pool Nation was a cool pioneer of this: you get a completely different sensation in these immersive VR experiences than you ever had in a multiplayer game when you can actually see other people’s heads and hands move around. And as additional forms of input and higher fidelity sensors come online, it’s going to get more and more realistic and interesting. Right now we’re up to six degrees of freedom times three different input devices, but that’s going to grow and grow.
Just imagine as you have outward facing cameras attached to your VR device, it can capture your arm movement and hand movement and finger movement. And then inward facing cameras that can capture your facial motion and translate that into movement of faces on 3D models. It’s really going to change everything. Now I’d like to show a little kind of extrapolation of what this might look like in the future. I’m going to show the Senua demo that was built by Ninja Theory.
We showed this at GDC for the first time running in real time. The interesting thing about this is that, in partnership with a bunch of leading-edge motion capture companies, the team built a general motion capture rig for capturing body motion and facial motion simultaneously. Now this is a huge, highly expensive rig. There’s one of them in the world, so you’re not gonna find it in Best Buy yet. But I do believe that the technologies powering this are going to be reduced into consumer form incredibly rapidly.
And thanks to improved, consumerized sensors and deep learning technology powering the translation of camera inputs into high-level motion data for faces and bodies, this is going to come online very quickly. And so I’d like to show kind of the state of the art of what’s possible with completely cost-unconstrained technology right now. Let’s cue the movie. – [Disembodied Voice] (whispering) Stay still, stay quiet. Hide, don’t turn around. – [Ominous Voice] Their gods consumed your mind.
They will use this power to destroy you. – They won’t stop me. I can still feel him. – [Ominous Voice] What is left of him.
They will never let him go. – I’m not gonna let him rot here. – [Ominous Voice] You’re the one rotting here. – Leave me alone.
– [Ominous Voice] You will die here. – No. – [Ominous Voice] And all your suffering will have been for nothing! – Shut up! (applause) – Oh, thanks. So that was all captured in real time, and I think in just a few years this is all going to be consumerized and available to everybody.
And it’s going to be very easy to start building multiplayer experiences that take advantage of this. And so instead of running around giant game environments at 20 miles an hour shooting everything you see, there’s going to be a much more intimate experience there, and it’s going to lead to entirely new types of VR experiences. And I think the ground rules for these new genres of games have not even been invented yet. We saw pretty early on that putting people in a room just talking to each other gets really boring really fast, but Pool Nation and these other games show that when you have some really cool, diversionary, low-brain-bandwidth experience to frame these discussions, you can spend hours in these VR spaces, just hanging out with your friends and doing interesting things.
So we’re going to be really watching this very closely. The next topic is how this comes to develop over the next 10 years or so. Right now all the real innovation that’s being consumerized and shipped to users is in VR. Over time this technology will also come to AR.
Right now these AR headsets are pretty big and have some pretty significant limitations, but over time they’ll be reduced to something with the form factor of your Oakley sunglasses, which immerses you in a full field-of-view experience with 8K pixels per eye. It becomes nearly indistinguishable from reality, with the ability to seamlessly splice together your actual view of the real world with computer-generated images on top, given some work being done to develop occlusion technology there. This is going to be a very interesting transition, because these AR devices will be cheaper than any television, being much smaller and made of much less material, and they’re going to grow to the size of today’s smartphone market. There’s going to be a worldwide installed base of four billion of them, say, within 15 years. And even though there’s a relatively small number of VR headsets in existence today, less than one million high-end VR headsets, it’s going to grow exponentially, I bet by a factor of three or four every year until we hit billions of users. And this is going to be a great opportunity for everybody.
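The 8K-per-eye figure above isn’t arbitrary; a rough sanity check, assuming the commonly cited limit of about 60 pixels per degree for 20/20 human visual acuity:

```python
# Commonly cited acuity limit for 20/20 vision: ~60 pixels per degree.
ACUITY_PPD = 60
# Assumed horizontal field of view of a sunglasses-style display.
FOV_DEGREES = 120

horizontal_pixels = ACUITY_PPD * FOV_DEGREES
print(horizontal_pixels)  # 7200 -- close to the 7680 columns of an 8K panel
```

So a 120-degree display needs on the order of 7,200 horizontal pixels before individual pixels stop being resolvable, which is almost exactly the 7,680 columns of an 8K panel.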
With augmented reality, mobility will be part of the experience because you want the convenience of a smartphone, a technology that’s always on and works everywhere, but there’s no doubt that when you want the premium high-quality game experience, you’re going to link it to a PC through some sort of wireless technology and engage in realistic experiences that are far, far beyond the capabilities of any mobile GPU. That’s just the technology and market forecast. I think the human implications of this are much more interesting. Once we’re able to sample and replicate humans, with human emotions, in 3D across the internet, in real time and at the scale of billions of people, it’s basically teleportation technology at that point. Elon Musk is out building the Hyperloop, but I feel like we’re probably not gonna need that if we can have the same kind of experience, for the most part, without leaving our homes.
And virtual worlds will be the center of it, and most importantly the people in this room are going to be in the center of it. We game developers are the only people in the industry who know all of the different components of this pipeline to make these interactive worlds work. I’m not just speaking of the VR developers in the audience today, but game developers in general have the knowledge and the mindset, and they’re going to be at the center of digitizing the entire real world. This idea has been brewing for a very long time in the industry.
It started in science fiction, and it’s been called the metaverse. You’ve read about it in Snow Crash and The Diamond Age. These are funny books because they make amazingly accurate predictions about the future, but then in Snow Crash users have to coordinate how they’re going to enter the metaverse, and so in one scene a guy runs into a phone booth to call the other guy. So that was their prediction of the technology of this era. The ultimate goal of all this is to create a social experience in a virtual world in which humans feel like they’re really present. And it’s partly about gaming, but it’s also much larger than that.
I think it’s really the next stage of mankind’s development: from the caveman days, through the agricultural revolution and the industrial age, the next stage is the pervasive ability of anybody to connect with anybody else and live a life that’s unconstrained by physical goods, because anything that you might possess in the real world can be recreated in the virtual world. So I think I owe it to everybody, after making all these wild prognostications, to talk a little more concretely about what this might actually entail. Here we’re seeing one of the early 3D games that put multiple players together, but the metaverse is going to be a shared virtual space where people can meet in real time and engage in all kinds of human activities, from gaming to social, creative and actual work, study, shopping, anything.
And it’s going to be a parallel universe that in some ways mirrors the real world, but in some ways diverges from it. And it’s going to be largely created by users because it’s going to be vastly larger than any one content company would create. And if you look at the most active platforms today like Steam or Facebook or Twitter, there is a proprietor at the center of it that does some content creation work, whether it’s Valve or Mark Zuckerberg posting his views on Facebook.
But the far, far majority of the content comes from users, and I believe that this will be a critical aspect of the metaverse. Creating this world means it’s going to inherit all of the controversies of the real world itself. There’s going to be porn, there’s going to be crime, there’s going to be vandalism, there’s going to be harassment, and all of the creators of this new medium are going to have to think really deeply about how we’re going to deal with the challenges as well as the opportunities. Before going further I feel like I should technically define what this metaverse I postulate really is, because many of the building blocks exist today and have existed for decades, but they haven’t been fully integrated into anything that truly approaches the ultimate experience. We’ve gone through the development of the internet, which has now enabled everybody in the world to connect to everybody else. Then there’s the web, integrating that with persistent content: web pages that you can traverse using hyperlinks.
And then social networks realized that, hey, creating web pages is only for nerds. What we really need is a platform where anybody can participate and post their interesting stuff without being an expert, and can share their stuff with others in a way that creates interesting social interactions. So we see social networks as kind of the ultimate realization of that idea.
And that is really just the starting point for the metaverse. If you take those layers and add on top virtual worlds, immersive 3D environments, the sense of human presence through avatars which accurately and faithfully represent the user’s face and emotions, and the digital human technology that powers all of that, I think that’s kind of the minimum viable requirement of the metaverse. If we have that, plus the ability for users to create their own digital content on top of it, then I think we’re going to have the birth of a new medium that’s really going to change the world. One thing that really needs to change with the metaverse is today’s app model, because it treats apps as completely separate software packages, isolated into separately distributed, downloadable things. This model has carried through from the mainframe all the way to the smartphone for 50 years, but it’s about to break, because these experiences are going to generally be the product of collaboration between a lot of different users working together.
Every user is not only going to have their avatar and all of their customization, but they’re going to have their rooms, the areas they’ve created, and they’re going to want to piece it all together into a seamless environment that other people participate in. This might sound like I’m describing something like Minecraft, but it actually has to be quite a lot more sophisticated than that, because one of the most interesting things in this is going to be gaming. And games are going to require very advanced custom game logic and all these other components. And so we’re going to have to figure out how to weave all of this software together in new ways that are much more modular and much more connected, as opposed to completely compartmentalized applications.
But the really exciting thing about this is that everybody is going to be a creator at some level. There are actually 50 million digital content creators in the world today. Everybody who plays Minecraft is to some extent a content creator. They're starting there with this really simple application and its low-fidelity worlds, but they're going to move up over time, and the best of them are going to rise to be stars in this new industry.
But whether you're going to be a serious builder or not, everybody's going to be participating: laying out your room, taking virtual photos or movies of interactions in the metaverse, because there's no real difference between a selfie in a virtual environment and a selfie in the real world. And they're going to be served by a lot of different tools, ranging from the Unreal Editor for serious high-end game developers, to tools that more resemble Minecraft and extensions of it for the casual experience. This metaverse will also have an economy, and it's going to be very different from the real one, because there's no scarcity with digital goods. You won't need to dig holes in the ground and pull up iron ore, or drill oil wells, in order to build stuff.
It's really only limited by creativity and imagination. So in many ways this is going to be a freeing experience. And we might expect, for those of us who are young enough, that within our lifetimes we could actually see the virtual economy grow to the same scale as the real economy, because buying and creating and selling and trading these virtual things is going to come to have the same meaning to actual people as owning a physical thing. So it's going to be very interesting for the future of economics. Now for the fire and brimstone part of the talk: I really want to stress that the metaverse can either be a utopia or it can be a dystopia. And we're very fortunate that we live in a world where the PC, the number one platform for serious computing today, and the internet underlying it are completely open technologies.
Anybody can build software. Anybody can distribute it. Anybody can do anything without getting anybody's permission. But I feel we've seen a major retreat from that great state of affairs over the past decade, as closed platforms like iOS, Android, and the consoles have grown to the forefront of the industry.
And the social networks have come in and established completely closed ecosystems. I want to point out that this was not inevitable. This could have gone a completely different way, and I believe we'd be in a better position as an industry if it had. It's probably too late to change those things, but as we're creating a new medium together, it would be really tragic if the future metaverse that binds all humanity together into shared online environments were a closed platform controlled by a giant corporation.
As always, they'd use it to spam you with advertising. They'd use it to gather information about your private life and sell it to the highest bidder, and they'd act as the universal intermediary between all users, content creators, and transactions, ensuring that everything has to be approved by them to some extent, and taking a cut of everything. We really should fight to keep these things open. Even today's platforms like Windows are under attack. I guess you've heard me rant about this in the past, but there's a real danger of Windows going in a closed direction. And as game developers we're going to need to stand up to this, or else they will succeed with it.
I think, though, with augmented reality and VR it's going to be even more important, because this medium is going to be so central to our lives, shaping our social interactions and our experience of the world, that whoever controls it is going to be more powerful than any company or government that exists today. And if we can ensure that this system is built in an open way, then we'll be free of that. So I'm really arguing that what we need is a protocol and a code base, not a company.
It needs to be an open platform, not a walled garden. And I don’t have one, by the way. I’m not trying to sell you one.
I'm just suggesting what ought to exist. I think this would be a protocol, like the internet protocols, so that all individuals and companies can participate together. They can all create servers. They can all create content. They can all share it. There's some mechanism for sharing the data back and forth that works.
And this kind of protocol doesn't exist yet, but I just want to sketch some of the broad outlines of what it might look like, because they're becoming clearer every day. We need a real-time data storage and transfer protocol, because the metaverse is going to start at petabytes of data, quickly grow beyond exabytes, and keep growing from there. There are centralized systems like Amazon Web Services or Steam, but there are also decentralized approaches that are really interesting. BitTorrent enables users to share literally X bytes of data every day. Most of it is kind of in a gray area, not necessarily completely illegal, but it's a system that works and that scales. And there's this really interesting project, the InterPlanetary File System, that builds on that swarm protocol to create a shared, distributed file system layer on top of it, where anybody can create directories and have their data automatically replicated across participants' computers, in a way that really works and scales to even larger sizes.
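The property that lets a system like IPFS replicate data across untrusted computers is content addressing: a piece of data is named by the hash of its own bytes, so any participant holding a copy can serve it and the requester can verify it wasn't tampered with. Here's a minimal sketch of that idea in Python (the `ContentStore` class and its methods are my own hypothetical illustration, not IPFS's actual API):

```python
import hashlib


class ContentStore:
    """Toy content-addressed store: data is keyed by the hash of its
    own bytes, so the address doubles as a tamper-proof fingerprint."""

    def __init__(self):
        self.blocks = {}  # hash (hex string) -> bytes

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self.blocks[key] = data
        return key  # the address IS the content's fingerprint

    def get(self, key: str) -> bytes:
        data = self.blocks[key]
        # Any node can verify the block matches its address:
        assert hashlib.sha256(data).hexdigest() == key
        return data


store = ContentStore()
addr = store.put(b"a room layout shared in the metaverse")
assert store.get(addr) == b"a room layout shared in the metaverse"
```

Because the key is derived from the content itself, it doesn't matter which computer in the swarm answers a request; the answer is self-verifying, which is what makes storage on "all participants' computers" trustworthy.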
Now we need engines to power this, not one engine but many, because we want all of the game engines to be able to participate in these shared environments without being forced into any single vendor. Of course Unreal would be one, and of course Unity, Crytek, and all of the other engine providers ought to be a part of it.
And then we need a lot of new features to support this, but everybody's working on it, and it's going in interesting directions. You also need layers for e-commerce, and there are traditional systems for that. Steam has an awesome transaction system, but there are also distributed, decentralized approaches like Bitcoin and the blockchain as a way of framing transactions, which don't require trusting any one central party, and are really kind of ideal from the point of view of an open system that nobody can control or censor. And I think it's important to realize that as this new medium develops, not everything is going to be immersive 3D content. There's also going to be traditional communication, like chatting through text, because there are going to be people in the metaverse and people outside of it, and they're going to need to talk and communicate with each other.
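The "no trusted central party" property of a blockchain comes from chaining transactions together by hash: each record includes the hash of the one before it, so rewriting any past transaction invalidates everything after it. Here's a toy sketch of just that linking idea in Python (the function names are mine; a real blockchain adds signatures, consensus, and proof-of-work on top):

```python
import hashlib
import json


def chain_transactions(transactions):
    """Link each transaction record to the hash of the previous one,
    producing a list of blocks whose integrity is mutually dependent."""
    chain = []
    prev_hash = "0" * 64  # genesis placeholder
    for tx in transactions:
        record = {"tx": tx, "prev": prev_hash}
        prev_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        chain.append({**record, "hash": prev_hash})
    return chain


def verify(chain):
    """Recompute every hash; any edit to an earlier block breaks the chain."""
    prev_hash = "0" * 64
    for block in chain:
        record = {"tx": block["tx"], "prev": prev_hash}
        h = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        if block["prev"] != prev_hash or block["hash"] != h:
            return False
        prev_hash = h
    return True


ledger = chain_transactions(
    [{"buyer": "a", "item": "hat"}, {"buyer": "b", "item": "car"}]
)
assert verify(ledger)
```

Anyone can run `verify` without asking a central authority, which is the sense in which such a system is open and censorship-resistant: trust comes from the structure of the data, not from an intermediary.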
And the really interesting interactions will be how people get in and how people coordinate, and this is all going to require some really innovative new forms of communication. I think many companies are going to have to work on all of the different pieces of this together. The problem is so broad that no one company can possibly solve it, so we're going to have to build a set of products shared by all of the different developers in an open ecosystem. Some of these components will be commercial software and some will not be, some will be open source, but together the entire system will be open for anybody to provide. And we all really need to be thinking about what role we can play in ensuring that this is a utopia and not a dystopia. But those are a lot of speculative predictions about the future.
I think it's interesting to summarize it: when this revolution is complete, you'll be able to hang out with anyone in the world at any time in a shared virtual environment and feel a really compelling sense of presence. All of your emotions will be replicated faithfully across the internet.
And all of these disparate industries that are currently engaged in content creation will be combined. Those architectural models are going to be real spaces where you can interact. McLaren's CAD models of their cars are going to be drivable virtual cars. Every type of item that exists in the real world is going to be recreated and mirrored in the virtual world. And the most exciting thing about this: everybody in this new world is going to be a creator, and the people in this room are going to drive this revolution.
So it’s very exciting. I’m glad to be a part of it, and thank you very much for listening. (applause) (jazzy theme music)