After Google’s Stadia announcement at the GDC, I was swamped with people asking me what I thought about it, and what I thought about cloud gaming in general.
(I’m assuming here that you know what cloud gaming is. If not, have a look at https://en.wikipedia.org/wiki/Cloud_gaming )
If, like me, you were at GDC, you saw the demo and maybe tried it. As impressive as the numbers given by Google’s CEO were (7,500 edge sites, really?), allow me to raise some concerns about them. Let’s start with this Twitter post:
This was taken at GDC, in a closed environment, in downtown San Francisco, probably only a few blocks away from the computing power. Still, latency between the player and the “renderer” was 100+ms. Leaving aside the fact that the video quality was not on par with a gaming PC/console (I’m sure they will work hard at optimizing this in the coming years), this kind of latency is going to be a tough sell to gamers, even non-professional ones.
Folks at Google are smart, but bending the laws of physics to make the speed of light faster than it is will probably not be possible (!). Latency is a physics problem. Communications go through fiber, and people are not necessarily close to each other or to data centers; you get the point.
There will be workarounds: guaranteed paths to minimize hops in the network, degrading quality when it isn’t needed, trying to guess what the player will do, adding latency to put everyone on par, and so on. The real fix is to be closer to the players, to attack the root of the problem: distance. Their 7,500 edge sites will most likely be key here. Still…
Let’s assume they deploy that many edge locations, mitigate the latency, and fix the display issues; we are still only looking at a single-player game. What about the 47% of games today that are multiplayer-based? (source)
See, most multiplayer games today use a server that acts as an arbitrator and defines what “really” happened. The arbitrator receives communications from every player and decides who was where, when, and what happened. Latency creates situations where a player sees his enemy in a given location and shoots at him when he is no longer there. See the picture below.
Player A shoots player B, thinking he is at the green location. In reality, player B, being closer to the arbitrator, is able to report that he moved faster than player A is able to report that he shot in a given direction.
This situation holds for any fast-paced game: MOBA, racing, FPS, RTS.
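To make the arbitration concrete, here is a minimal Python sketch (all names, actions, and latency figures are hypothetical, not from any real game server) of an authoritative server that processes events in arrival order, so a closer player’s move can beat a farther player’s shot:

```python
# Hypothetical sketch of an authoritative server. Each event is processed in
# the order it *arrives*, i.e. send time plus that player's network delay.
def arbitrate(events):
    """events: list of (sent_at_ms, delay_ms, player, action)."""
    arrivals = sorted((sent + delay, player, action)
                      for sent, delay, player, action in events)
    b_at_green = True  # where the server currently believes player B is
    for _arrival_ms, player, action in arrivals:
        if player == "B" and action == "move":
            b_at_green = False
        elif player == "A" and action == "shoot_at_green":
            return b_at_green  # a hit only if B is still at the green spot
    return False

# A shoots at t=0 but has 80ms of lag; B moves at t=10 with only 20ms of lag.
# B's move arrives first (t=30 vs t=80), so the server rules A's shot a miss.
print(arbitrate([(0, 80, "A", "shoot_at_green"),
                 (10, 20, "B", "move")]))  # False
```

With the lags reversed, the shot arrives before the move and registers as a hit, which is exactly the asymmetry described above.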
If you have latency between each player and their renderer, and each renderer and the arbitrator, you get the following:
You get four areas where lag is introduced. Assuming the cloud gaming lag is around 100ms per player, and the multiplayer server adds around 60-100ms per player (yes, 100ms, looking at you, Sea of Thieves), you get a total of 300+ms between players’ reactions. Again, you may mitigate this with tricks and points of presence (PoPs) in your network, but if we compare it to the perfect experience of two computers connected to the same switch (i.e. e-sports tournaments), let’s say there is a lot to do.
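As a back-of-the-envelope check, the four lag areas can be summed directly (using the assumed figures above: 100ms per player-renderer round trip and 60ms at the low end for each renderer-server round trip):

```python
# Assumed round-trip figures from the text (illustrative, not measured)
cloud_rtt = 100   # ms, player <-> renderer, per player
server_rtt = 60   # ms, renderer <-> multiplayer server, per player (60-100ms)

# Two players each contribute one cloud leg and one server leg
total_ms = 2 * cloud_rtt + 2 * server_rtt
print(total_ms)  # 320 -> already past the 300ms mark at the low end
```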
Let’s take this example and transpose it into a real-world scenario: two players, one hiding behind a wall, the other waiting for his head to show up to shoot. Since the latency figures above represent round trips, we split each leg in half, giving 50ms for the cloud gaming leg and 30ms for the multiplayer server leg. This gives the following:
Player 2, who is hiding behind a wall, pops his head out to see what’s out there. He hits the joystick, sending a move command to his renderer (50ms). His renderer tells the server that player 2 is now in plain sight (30ms). The server tells renderer 1 that player 2 is now exposed (30ms). Renderer 1 shows player 2 exposed to player 1 (50ms). Between the time player 2 exposed himself and the time player 1 sees him, 160ms have passed. As soon as he sees player 2, player 1 hits the trigger to shoot and kill. The joystick tells renderer 1 that a shot was made (50ms), and renderer 1 tells the server that a shot is headed in a certain direction (30ms). The server now decides whether player 2 has been hit. A total of 240ms has passed between the moment player 2 decided to move and the moment the server decides whether player 2 is dead. Meanwhile, right after moving away from the wall, player 2’s renderer shows him player 1 taking aim (50ms). He can send another move command (50ms), which is relayed to the server (30ms), letting player 2 get back behind cover before player 1 is able to kill him.
That is plenty of time for player 2 to move.
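The whole timeline above can be replayed as a short script (the 50ms/30ms one-way figures are the halved round trips assumed in the text):

```python
PLAYER_TO_RENDERER = 50  # ms, one way (half of the 100ms round trip)
RENDERER_TO_SERVER = 30  # ms, one way (half of the 60ms round trip)

timeline = [
    ("player 2 input: peek out",             PLAYER_TO_RENDERER),
    ("renderer 2 -> server: P2 in the open", RENDERER_TO_SERVER),
    ("server -> renderer 1: P2 exposed",     RENDERER_TO_SERVER),
    ("renderer 1 draws P2 for player 1",     PLAYER_TO_RENDERER),
    ("player 1 input: shoot",                PLAYER_TO_RENDERER),
    ("renderer 1 -> server: shot fired",     RENDERER_TO_SERVER),
]

t = 0
for step, cost in timeline:
    t += cost
    print(f"{t:3d} ms  {step}")

# Player 1 first sees player 2 at 160ms; the server adjudicates at 240ms.
# Player 2 only needs 50+50+30 = 130ms to see the threat and duck back,
# so his second move can reach the server before the shot is resolved.
```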
That’s also a lot of time for both players to feel lag. The issue is that lag is not something you see; there won’t be any error message saying “there is lag.” Players will just feel like they are not good at the game and will look for something else to play.
You can greatly reduce the latency around the server by having the multiplayer server instance in the same DC as the renderers. Yes, I agree. But this raises questions:
-Will you have enough players for a given game around the same edge location?
-Will each game/matchmaker be modified to take players’ proximity into account?
-Will you force studios to use the same DC/cloud/edge provider for everything?
-What if you want to play with your friend in a different city?
-What if players are far from any DC/edge location?
-How will cross-platform multiplayer work?
…and so on. Many elements have to fall into place for this major issue not to surface.
Google Stadia will work for some use cases/contexts (i.e. you want to try a game; you play a game by yourself and you are close to a high-density DC/edge; you want to play smaller, slower-paced games like puzzles). There are still a lot of technical challenges to solve before they can do to the gaming industry what Netflix did to the video one.
Some niche markets will probably have more success (i.e. cloud gaming for mobile, like Hatch) due to the nature of their offering (lower hardware requirements, ad hoc-style games, easier for studios, an alternative to app stores, etc.).
Google is not the first to try cloud gaming, and there are many players in that market already. VDI is not new (convince me cloud gaming is NOT VDI and I’m offering you a beer). Many parties have to agree on a common goal (service providers who own the last mile, studios creating the content, and so on). This is a tough sell. The real value will be in how things are orchestrated in the backend. Gaming is NOT streaming: gaming use cases call for bidirectional, low-latency communication, while CDN-based use cases are a quite different beast.
Each match is a different story.
Edgegap’s software prevents this problem by taking each match and creating the best environment for players to have fun. In the end, if the fun is not there, players will go elsewhere.