Are gaming studios listening to gamers?

I’ve been meeting with many studios, distributors, and console manufacturers over the last year, and getting them to admit that latency is a serious problem has always been a challenge. I’ve always wondered whether it was because they really believed lag was not a problem, or if they just refused to see it because they have no control over it.

This is quite challenging for someone like me who’s trying to solve an issue for a market that seems to say it doesn’t need such a solution. But still…
Google the latest #1 game, or browse your favorite gaming forum, and you will find many, many players complaining about lag. A single post in a deep, dark forum may not be catastrophic for a AAA studio, but now even streamers are quite vocal about the poor player experience. The hype of the moment, Apex Legends, was violently slapped by one of the top streamers, Shroud, during a session where over 100k viewers were watching him play. Keep in mind that those sessions are recorded, and this particular one had been watched over 400k times last time I checked.

For the non-gamers out there, a lot of multiplayer games today work as follows: everybody starts on the same level, for every match. Everybody has an equal chance of winning, as there are no initial bonuses (no stronger weapons, faster cars, more health, etc.). Better players are ranked and matched together before each match, but the mechanics within the game always remain the same. Now you see why latency can be so critical in giving you an edge over others. A few weeks ago, I was playing a game from Microsoft, Sea of Thieves, with friends based in the US. This game’s mechanics are as described above; everybody starts on the same level. But probably because I was in Canada and my friends were in the US, my server was located far from me. I had latency over 200ms. Fighting others was much harder because of this, ruining the fun I was having. So much for the public cloud solving this problem.

So, why are gaming studios not seeing lag as their number one priority?
I believe the reason lies below. A hosting company surveyed over 200 gamers and developers during GDC 2019. The answer is shown in the first 2 bars: developers believe that gameplay and mechanics are more important than lower latency, while players see low latency as more important than gameplay. For studios, it’s all about the game mechanics.

With the large number of games on the market, gamers switch quickly, and for a studio not to address its customers’ main concern can only lead to lost revenue. You will not see any PR about this in the news, but ask any netcode developer or network engineer at those studios and they will tell you.
With cloud gaming starting to get traction (hype-wise…), this problem will be amplified 3 to 4 times. I made a post on our company’s blog a few weeks ago about lag in cloud gaming. The lag we see today in video games is the tip of the iceberg, and it is not even the same as the one we saw at Stadia’s booth during the last GDC. Those lags will all add up, and guess what, gamers will not like that. Any fast-paced multiplayer game will face serious problems in this kind of environment.

Gamers are buying WIRED mice and keyboards. Do you know why? They want to avoid latency between their devices and their PC. This market is filled with examples like that, so go ahead and tell them that latency is not a huge deal.
Using edge computing to distribute gaming servers is key to solving this problem. Latency is a distance problem; getting closer is the answer. You may combine this with other things like a guaranteed path through a VPN (hello Haste!), better netcode in the game “guessing” what players will do, higher QoS in the network, shorter access time (hello 5G…), and so on. Those are all good options, but there is no single silver bullet. Using all of those solutions together will give gamers the best possible player experience.
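To put rough numbers on “latency is a distance problem”: light in fiber travels at roughly two-thirds of its speed in a vacuum, about 200,000 km/s, so a round trip costs about 1ms per 100km of path before any routing, queuing, or processing delay is added. Here is a minimal back-of-the-envelope sketch in Python; the distances are made-up examples, not measurements:

    # Back-of-the-envelope fiber propagation delay (illustrative only).
    SPEED_IN_FIBER_KM_PER_MS = 200.0  # ~2/3 of c; real paths are longer than straight lines

    def min_rtt_ms(distance_km: float) -> float:
        """Best-case round-trip time from propagation alone (no routing/queuing)."""
        return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

    # Hypothetical player-to-server distances
    for label, km in [("same-metro edge site", 50), ("regional data center", 800), ("cross-continent", 4000)]:
        print(f"{label:>22}: >= {min_rtt_ms(km):.1f} ms RTT")

Even in the best case, distance alone puts a hard floor under your ping; everything else on the list above only shaves what sits on top of that floor.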

We at Edgegap can help studios by getting gaming instances closer to players. Reach out to get a live demo where we’ll show you how we can lower latency in your game (and how bad a player I am!).

Source: https://www.inap.com/blog/top-reason-gamers-quit-playing-online-multiplayer-game/

Cloud Gaming comparison table

Last updated: May 21st, 2019

Thinking of jumping on the cloud gaming bandwagon? Well, you are not alone; so are a few (!) companies. While trying to list the players, we figured we could put together a table, considering the sheer number of them on the market today. We tried to compare each service based on what we knew and what was publicly available. We have not included “pure virtual desktop” players, even though some of the folks below could be considered simple virtual desktops. We added them when they had a “gaming” twist or made your life easier from a gamer’s perspective. Many details were left out for now; we will update the table as more information makes its way to the public. We are planning to do a full benchmark of each service using our internal tools to measure latency, probably this summer. Stay tuned.

What we can conclude already is that they all try to make you believe they are different, but in the end they all do the same thing. They stream video and audio content to you while carrying your inputs in the other direction. They mainly rely on the same types of encoders/decoders. If you are familiar with codecs like H.264, you know they can be expensive and heavy from a licensing and patent perspective. Writing your own is possible (though some of those companies are fairly small), but getting your software encoder as fast as its hardware equivalent would take a lot of effort, if it can be done at all. As we pointed out in a previous post on this blog, there are many areas where lag appears throughout this flow, and there are only a few places where these companies can differentiate themselves from the others. Those differences are things like where they capture the feed on the renderer, how they relay control information, and how the feed is displayed on your screen. One area of interest is the network protocol used, but even there, the number of things you can do to “be better than your competitors” is fairly limited.

Those companies mainly run servers or virtual machines in data centers and offer you a fancy website to access those resources. Their golden image will contain pre-installed games and maybe some distribution software like Steam, GOG, and such. You will still have to own the game for most of those services. The devil is in the details, and that’s where it gets hard to differentiate them. Some use a certain type of GPU, a certain model of CPU, etc. We’ve even seen cases where a given cloud gaming service provided a different model of CPU each time you connected. You could get a 3 GHz CPU at one point, and later in the day only get a 2.1 GHz CPU.

We used the publicly available information to fill in the table below. We used the highest possible resolution/framerate for pricing, as that is what everyone is looking for (or else why would you not simply get an old PC?).

We broke those services down into 3 groups:

-GaaS: Gaming as a Service. You get the virtual machine, and games are included for you to play. Some will give you the full VM (i.e. you can do non-game-related stuff), others will only allow you to play their games.

-CaaS: Computer as a Service. You get a VM, mainly Windows, which may have some games pre-installed. You have to pay for the VM, but you need to own the game. You type in your distributor account (i.e. Steam) and they let you play whichever game you own. If the game is not pre-installed, you need to download it first. In most cases, your VM will disappear when you don’t use it, so a re-install may be needed.

-Self-streaming: Slightly different beast here. You provide the rendering hardware in this case. Their software allows you to play games remotely. Either your console or your PC at home does the rendering while you play the game (which you need to own, through a subscription or a direct license) on your mobile device (slower PC, TV, phone, tablet, etc.). They may allow interesting features like split-screen and such, which would not initially be supported in the game.

There are 2 tables: one with the URLs you can click and the other with some details from our report. We will try to update them frequently, and if you feel something should be updated, drop us an email at info@edgegap.com

Service / URL
Amazon https://www.theinformation.com/articles/amazon-developing-game-streaming-service
Blacknut https://www.blacknut.com/en
Disney n/a
dixper https://dixper.gg
EA Project Atlas https://www.ea.com/news/announcing-project-atlas
Gameclub http://www.gamecloud.club/
Geforce Now https://www.nvidia.com/en-us/geforce/products/geforce-now/
Google Stadia https://store.google.com/magazine/stadia
Jump https://playonjump.com/
LiquidSky https://liquidsky.com/
Microsoft xCloud https://news.xbox.com/en-us/2019/03/12/project-xcloud-choice-for-how-and-when-you-play/
moonlight https://moonlight-stream.org/
nvidia stream https://www.nvidia.com/en-us/shield/
Paperspace https://www.paperspace.com/gaming
Parsec https://parsecgaming.com/downloads
Playgiga https://playgiga.com
Playhatch https://playhatch.com
Playkey https://playkey.net
Playstation Now https://www.playstation.com/en-ca/explore/playstation-now/
rainway https://rainway.com
Redfinger https://www.cloudemulator.net/
Shadow https://shadow.tech/
steam in-home stream / link https://store.steampowered.com/streaming/
Tencent Start https://technode.com/2019/03/21/tencent-to-begin-closed-beta-for-cloud-gaming-service-start/
Utomik https://www.utomik.com/
Vectordash https://vectordash.com/
Verizon https://www.gamespot.com/articles/verizon-is-reportedly-working-on-a-video-game-stre/1100-6464369/
Vortex https://vortex.gg
Walmart https://www.theverge.com/2019/3/21/18276235/walmart-cloud-gaming-service-google-stadia-competitor

Building your cloud gaming stack 101

In order to provide a cloud gaming service, multiple elements have to be put together for the service to work. Many players are jumping on the bandwagon. I’ll dissect what I believe are the key technical layers of this offering, how they interact, and where the culprits are.
As you will see, while many people compare this to “video streaming”, it is quite a complex scenario that can go sideways in many places. I may have missed a few things; feel free to reach out and let me know, and I will be happy to correct any statement, which, by the way, represents my personal view on how things are moving forward.
Let’s start easy. The first diagram shows the 3 main pillars of cloud gaming. You have the player on the left. She sits at home or on a bus, and she plays games with her friends through this new trendy service. In the middle we have the cloud gaming infrastructure. That’s where the actual game is rendered, where the 3D scenes are calculated, and where the number crunching happens. The last one, on the right, is where game-specific communication services happen. As most games call for communication between players, this is where players are matched, where you store stateful data, content, etc. While the last two could be seen as one, they are split from a technical perspective since cloud gaming services are not making games and have different views/needs. As I’ve shown in a previous post, there are technical reasons why merging them would not make much sense either.
If we go one step further, we see that each of those is made of a mix of hardware and software. Each rectangle below represents a high level component needed for the service to work.
Starting from the left, the player uses a controller in the form of a joystick or a mouse/keyboard (or a Kinect, if Microsoft weren’t killing it) to tell the system what to do. In most cases, this controller is connected to a local device. Stadia came up with a controller that goes directly over Wi-Fi to skip that layer (even though the savings can’t be overly great considering Wi-Fi latency). The controller could also be the touch screen of a device (i.e. tablet/smartphone). Next comes the client. This “thin” client can be accessed through an app, a browser, and such. Its role is to display what happens in the cloud gaming infrastructure. It is tweaked to decode encrypted/encoded video streams quickly and most likely has very few other capabilities. The only other one we could think of would be synchronizing what’s displayed vs. what will be rendered (i.e. v-sync). Next comes whatever runs this thin client: a phone, a smart TV, a PC, and even a console. I’ve seen people use a Raspberry Pi for the Shadow service. Those devices have to be connected to the internet. That’s done through a personal network (home Wi-Fi router, wired, phone tethering). This private network will have access to a broader network through standard access like RF (i.e. LTE), fiber, DSL, cable, GPON, etc.
We are now on the internet, where anything goes: MPLS, guaranteed routes, QoS, traffic shaping, throttling (!).
Next step, the cloud gaming infrastructure. From the top, the actual game runs in an OS environment, the same way it would run at home on your PC/console. Most offerings today use Windows to support existing games. Stadia is going against the grain: they use a Linux-based OS, so they are asking studios to recompile their games to support this infrastructure. The Linux community will be happy about this, but it may limit Stadia’s offering initially. The next stop is the OS, which runs in a virtualized environment. Most, if not all, use Windows virtual machines today. I suspect Stadia is pushing for Linux to go the containerized route, which IMHO makes a lot of sense (leaving aside the security questions here…). Those virtual environments/clients access the network through a virtual network. Depending on which infrastructure is used, pick your poison; in every case, you’ve added another layer, and some may be better than others feature-wise. Games have different needs than web services, and not everyone seems to understand that. Next layer: the hypervisor. Again, pick your poison, but since the open-source offerings are quite mature and the corporate ones are expensive, if you build your own cloud gaming infrastructure you will most likely go with OpenStack (unless you use a public cloud). Below that is the hardware layer, where CPUs and GPUs provide the service. This service calls for super low latency, so you are better off with multiple smaller sites closer to players vs. a large server farm. In that case, density is key, so high-core-count CPUs and GPU cards more expensive than my car are the way to go. Some GPU vendors (ahem, hi NVIDIA) require special software to limit how many VMs can leverage each GPU. I’m not super familiar with how it works, but it’s definitely something to consider. We then get into the network, where your private network is connected to the internet. As fast as the fiber connection may be, most boxes will add hops here and there.
Last but not least, gaming-specific services. Those are matchmakers, multiplayer game servers, chat/communications, interconnections, stats tools, etc. Each studio has a plethora of tools they use, and most of the time those are needed for a game to work. Every modern multiplayer game uses a server to interconnect players, and connecting them is done through a matchmaking service. Without going into lengthy detail about the specifics of those services, let’s just remember that some of them are critical for in-game activities, well beyond offering you a pink polka-dotted shirt for your character. Those services, like any cloud application, go through the same hoops of virtual layers over virtual layers over physical layers, and so on. You get the point.
Alright, we’ve listed the high (maybe not so high anymore) level layers. There are variants for some of them, but I’m pretty sure we’re hitting 95+% of the examples out there. Who does what? I don’t want to list every actor for every layer here, but I regrouped them in 4 buckets: manufacturers, connectivity/cloud infrastructure, OS/software folks, and last but not least cloud gaming folks. I’m including gaming studios in the last group, and while we can debate what’s going to happen in the next few years, last time I checked, studios don’t spend much time on infrastructure; their concern is the game itself.
Now to my favorite part: where exactly will the train derail? Below is the same set of stacks, with every place where lag can be introduced. What’s worse? My last layer at the bottom is way more complex than I’m showing.
Some items add more latency than others. Milliseconds vs. microseconds makes a huge difference, but when you look at it from this point of view, they all add up. One comment I’ve seen about those cloud gaming services is that they will offer this worldwide. Truth is, it will probably be bearable only if you’re located close to a data center or in a major metropolitan area. In which case, you most likely have a higher household income than those far from metro areas, a faster internet connection, and enough money to get yourself a console or a decent PC. This service would be useful for the other type of player, those without enough money to purchase a high-end PC. The issue is that they won’t have the connectivity to get this service.
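To illustrate how “they all add up”, here is a toy tally across the layers described above. Every value is a made-up, order-of-magnitude placeholder, not a benchmark of any particular service:

    # Toy tally of where delay accumulates across the cloud gaming stack.
    # Every value is a made-up order-of-magnitude placeholder, not a benchmark.
    layers_ms = [
        ("controller -> device (wired/BT)",        2.0),
        ("thin client decode + display",          10.0),
        ("home Wi-Fi / access network",            8.0),
        ("ISP + internet transit",                25.0),
        ("virtual network / hypervisor overhead",  1.0),
        ("game render + capture + encode",        25.0),
        ("gaming services (matchmaking, state)",   5.0),
    ]

    total = sum(ms for _, ms in layers_ms)
    for name, ms in layers_ms:
        print(f"{name:<40}{ms:>6.1f} ms")
    print(f"{'total added delay':<40}{total:>6.1f} ms")

No single line looks scary on its own; the sum is what players actually feel.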
I’m not saying cloud gaming will not happen, I’m just showing that there are many places where innovation and improvement will be needed. The main one is around dynamically orchestrating all of this. Each scenario will be different, and getting it right will have to be done case by case. That’s most likely why Google Stadia went for a Linux-based core and is asking studios to convert their games. This will allow them to use many existing tools (hello Kubernetes). That’s also why they are investing in open-source projects around this (see Agones).
My last diagram goes a bit into what type of stack could be used to quickly build a cloud gaming offering. This would not be really innovative, but you could have something up within a day or two. From the bottom up: you get multiple locations connected straight into a given ISP (hello edge computing!), and deploy x86 servers with high-density CPUs and GPUs. Make sure you get high GHz, as core count is not everything in gaming! You install your preferred virtualization stack; a good choice to start with is probably OpenStack. While not easy (oh dear) to install, it has many large users and I’d hope the project is mature enough by now. At this stage I’m hoping you have the whole stack running. You create a virtual image based on Windows with some gaming client/store pre-installed (my number 1 is the Steam store). At this point you only need a website that talks to the OpenStack API and takes credit cards to allow your users to create their PC. Voila. Add salt and pepper, and send me a check if you make money. As silly as it sounds, that’s really all there is to it. Now, as I’ve spent the whole article above pointing out, the devil is in the details. Tweaking every red line I’ve shown above is what will make this successful.
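For the curious, the “website that talks to the OpenStack API” part really is only a few calls. Here is a minimal sketch using the openstacksdk Python client; the cloud name, image, flavor, and network names are hypothetical placeholders you would swap for your own:

    # Minimal sketch: create a "gaming PC" VM through the OpenStack API (openstacksdk).
    # The cloud name, image, flavor and network below are hypothetical placeholders.
    import openstack

    conn = openstack.connect(cloud="gaming-cloud")  # credentials come from clouds.yaml

    image = conn.compute.find_image("windows10-steam-golden")  # golden image with Steam pre-installed
    flavor = conn.compute.find_flavor("g1.xlarge-gpu")         # high-GHz CPU + GPU flavor
    network = conn.network.find_network("player-net")

    server = conn.compute.create_server(
        name="player-42-gaming-pc",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print("Point your streaming client at", server.access_ipv4)

Wrap that behind a signup page and a billing hook and you have the skeleton of a CaaS offering; everything hard lives in the red lines around it.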
Edgegap can help you build and improve your solution to get the most out of edge computing and reduce the number of red lines as much as possible!
Mathieu Duperre
mathieu@edgegap.com

Stop saying lag is not a problem!

For some reason I can’t explain, most people I’ve met in the video game industry in the last year or so seem to think lag does not exist or is not a problem. Take 20 seconds and watch this clip:
To put some context around it, this player, Shroud, is one of the most popular e-sports streamers. He has over 4 million subscribers and he constantly gets over 100k viewers at any given time. This video was watched 300k+ times, and that does not include live views and replays elsewhere. If that’s not bad advertising, I wonder what is.
Now just go on Google and search for the word “lag” along with your preferred game title and tell me what you get. CoD, Apex, Rainbow Six, PUBG, LoL, Fortnite, Overwatch, etc. They all suffer from lag. As in the clip above, players stop playing! Studios are losing money! The extracts below go into lengthy detail about lag and its impact on gameplay. They also show that edge computing servers will double the number of players getting 45ms and below, therefore potentially cutting in half the number of players unhappy due to lag! We also cover game download speeds, which edge computing also solves. Happy reading!
Sources for each metric and report are provided in each section.
This document illustrates how network latency affects games and players’ performance by showcasing various extracts of studies, experiments, surveys, and articles regarding this aspect. These issues are present in all online video games and could be resolved or reduced with Arbitrium.
As the following survey results show, the aspect of a video game that seems to be the most important for gamers is performance. Many things can affect how a game performs; in this document we will see how the network affects the performance of video games.
Studies have shown that players’ tolerance for latency varies depending on the type of game. Fast-paced games requiring reaction time, aiming, shooting, etc. generally require a low-latency network. The tolerance to latency is obviously not the same for every individual, but past a certain amount of latency, games are just unplayable.
The horizontal gray area around 0.75 is a visual indicator of typical player tolerances for latency. Generally game performance above this threshold is acceptable while game performance below this threshold is unacceptable.
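As a rough illustration of how that tolerance differs by game type, the thresholds below are the rule-of-thumb values often cited in the games-and-latency literature; they are approximations for illustration, not figures taken from the extracts quoted here:

    # Rule-of-thumb latency tolerance by game perspective, as often cited in the
    # games-and-latency literature. Approximate, illustrative values only.
    tolerance_ms = {
        "first-person (FPS, racing; precise aim/timing)": 100,
        "third-person (MMO, adventure)":                  500,
        "omnipresent view (RTS, simulation)":            1000,
    }

    def feels_playable(game_type: str, measured_rtt_ms: float) -> bool:
        """Very crude check: is the measured RTT under the genre's rough tolerance?"""
        return measured_rtt_ms <= tolerance_ms[game_type]

    print(feels_playable("first-person (FPS, racing; precise aim/timing)", 200))  # False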
When a game’s network performance is not optimal, it introduces unfair advantages or disadvantages among players.
In general, as the average latency increases, the unfairness (standard deviation) of the server also increases. Moreover, this unfairness increases at a faster rate, with a 2x increase in latency resulting in a 2.5x increase in unfairness.
This unfairness can cause frustration amongst players as we can see in the video clip below where a famous Twitch streamer known as “Shroud” is a victim of unfairness caused by lag.
“We literally lost to these guys because of f***ing lag. I’m gonna break something!” “Holy s***, I hate when stuff isn’t in my control… I can’t play until it’s fixed. I can’t play in this lag.”
This other clip shows the result of lag, packet drops, and jitter. Watch this Call of Duty: Black Ops 3 player going back and forth like he’s dancing:
As Shroud said, “he can’t play until it’s fixed,” and he’s not the only person who stops playing a game because of latency. The extract below is from a study showing that players with bad network conditions leave games prematurely.
Do game players leave a game prematurely due to unfavourable network conditions? Yes. Generally speaking, the worse the network quality, the earlier players will leave the game. For example, sessions with a low packet loss rate ( ≤ 1%) have an average duration of 160 minutes, while those with a high packet loss rate ( > 1%) have an average duration of 70 minutes. If we only observe whether players quit in the first 10 minutes of a game, only 3% of players who experience a low loss rate leave in that time, compared to 20% of players who experience a high loss rate.
On average, the degrees of players’ “intolerance” to delay, delay jitter, client packet loss, and server packet loss are in the proportion 1:2:4:3. That is, a player’s decision to leave a game prematurely due to unfavorable network conditions is based on the following levels of intolerance: average RTT (10%), RTT variations (20%), client packet loss (40%), and server packet loss (30%).
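As an illustration only, those 1:2:4:3 proportions could be folded into a crude composite “network badness” score for a session. The weights come from the study; the normalization choices and sample values below are my own:

    # Crude composite score using the study's 1:2:4:3 intolerance proportions.
    # The weights follow the study; reference values and samples are illustrative.
    WEIGHTS = {"rtt": 0.10, "jitter": 0.20, "client_loss": 0.40, "server_loss": 0.30}
    REFS = {"rtt": 200.0, "jitter": 50.0, "client_loss": 2.0, "server_loss": 2.0}  # "clearly bad" levels

    def badness(rtt_ms, jitter_ms, client_loss_pct, server_loss_pct):
        """0.0 = pristine session, 1.0 = bad on every metric the study weighs."""
        vals = {"rtt": rtt_ms, "jitter": jitter_ms,
                "client_loss": client_loss_pct, "server_loss": server_loss_pct}
        return sum(WEIGHTS[k] * min(vals[k] / REFS[k], 1.0) for k in WEIGHTS)

    print(f"{badness(80, 10, 0.2, 0.1):.2f}")   # decent session, low score
    print(f"{badness(180, 40, 1.5, 1.0):.2f}")  # rough session, closer to 1.0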
Another study demonstrated that the closer a player is to the game server, the bigger the advantage he will have over the other players. On the other hand, the farther a player is from the game server, the bigger the advantage others will have over him.
We can take the case of League of Legends, when they moved their North American servers from Portland, Oregon to Chicago, Illinois. The following section shows the ping before (B) and after (A) the relocation of the servers for most of the states. We can see really good results for states located in the eastern part of North America, but players located in the west weren’t happy because their ping increased.
Now imagine this kind of server relocation, but done for every game instance, based on the specific players in that match, thanks to Arbitrium. That way we would see players with good pings all around.
Before:
After:
The next extract is from a paper that explains how cloud gaming can increase the number of players covered by using edge computing instead of only the few locations provided by a cloud service provider. This is true for cloud gaming, but it can also be applied to standard game network infrastructures that have only a few locations for their servers. By using the edge, traditional game servers could be deployed in many more locations.
User coverage by utilizing edge computing:
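As a rough illustration of that coverage idea: count what fraction of players land under a given threshold (say 45ms) when only a handful of central regions are available vs. a larger set of edge sites. The latency samples below are made-up placeholders, not data from the paper:

    # Illustrative only: share of players under a 45 ms threshold with a few
    # central regions vs. many edge sites. All latency samples are made up.
    THRESHOLD_MS = 45

    def coverage(best_rtt_per_player):
        """Share of players whose best reachable server is under the threshold."""
        return sum(rtt <= THRESHOLD_MS for rtt in best_rtt_per_player) / len(best_rtt_per_player)

    central_only = [30, 80, 120, 44, 40, 150, 90, 35, 110, 70]  # few big regions
    with_edge    = [15, 35, 40, 25, 20, 60, 38, 18, 50, 30]     # many edge sites

    print(f"central regions: {coverage(central_only):.0%} of players under {THRESHOLD_MS} ms")
    print(f"with edge sites: {coverage(with_edge):.0%} of players under {THRESHOLD_MS} ms")

In this toy sample the share under 45ms roughly doubles, which is the kind of improvement the extract describes.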
The last two extracts don’t concern game performance, but rather the process of downloading games and updating them. The following statistics show that the download process is annoying, mainly because of the time it takes. Once again, by using edge computing infrastructure it would be possible to expand the locations of content delivery servers and offer users the closest server from which to download a game.

What’s missing from Cloud Gaming

After Google’s Stadia announcement at the GDC, I was swamped with people asking me what I thought about it, and what I thought about cloud gaming in general.
(I’m assuming here that you know what Cloud Gaming is. If not, have a look at https://en.wikipedia.org/wiki/Cloud_gaming )
If, like me, you were at GDC, you saw the demo and maybe tried it. As great as the numbers given by Google’s CEO were (7500 edge sites, really?), allow me to raise some concerns about them. Let’s start with this Twitter post:
This was taken at GDC, in a closed environment, in downtown San Francisco, probably only a few blocks away from the computing power. Still, latency between the player and the “renderer” was 100+ms. Leaving aside the fact that the video quality was not on par with a gaming PC/console (I’m sure they will work hard at optimizing this in the upcoming years), this kind of latency is going to be a tough sell to gamers, even non-professional ones.
Folks at Google are smart, but bending the laws of physics and making the speed of light faster than it is will probably not be possible (!). Latency is a physics problem. Communications go through fiber, people are not necessarily close to each other or close to data centers, etc. You get the point.
There will be workarounds, i.e. a guaranteed path to minimize hops in the network, degrading quality when not needed, trying to guess what the player will do, adding latency to put everyone at par, and so on. The real fix is to be closer to the players, going at the root of the problem: distance. Their 7500 edge sites will most likely be key to this. Still…
Let’s assume they deploy that many edge locations, they mitigate the latency, and they fix the display issues; we are still only looking at a single-player game. What about the 47% of games today that are multiplayer-based? (source)
See, most multiplayer games today use a server to act as an arbitrator and define what “really” happened. The arbitrator receives communications from every player and decides who was where, when, and what happened. Latency around this creates situations where a player may see his enemy in a given location and try to shoot him when he’s not there anymore. See the picture below.
Player A shoots player B thinking he is at the green location. In reality, player B, being closer to the arbitrator, is able to report that he moved before player A is able to tell the arbitrator he shot in a given direction.
This situation holds for any fast-paced game: MOBA, racing, FPS, RTS.
If you have latency between each player and their renderer, and each renderer and the arbitrator, you get the following:
You get 4 areas where lag is introduced. Assuming lag for cloud gaming is around 100ms per player, and the multiplayer server adds around 60-100ms per player (yes, 100ms, looking at you Sea of Thieves), you get a total of 300+ms between players’ reactions. Again, you may mitigate that with tricks and PoPs in your network, but if we compare this to the perfect experience of 2 computers connected to the same switch (i.e. e-sports tournaments), let’s say there is a lot to do.
Let’s take this example and transpose it into a real-world scenario: 2 players, one hiding behind a wall, the other waiting for his head to show up to shoot. Since latency represents a round trip, we split each leg in half, so we get 50ms for the cloud gaming leg and 30ms for the multiplayer server leg. This gives the following:
Player 2, who’s hiding behind a wall, pops his head out to see what’s out there. He hits the joystick, which sends a trigger to his renderer (50ms) to move. His renderer tells the server that player 2 is now in plain sight (30ms). The server tells renderer 1 that player 2 is now exposed (30ms). Renderer 1 shows player 2 exposed to player 1 (50ms). Between the time player 2 exposed himself and the time player 1 sees him, 160ms have passed. As soon as he sees player 2, player 1 hits the trigger to shoot and kill. The joystick tells renderer 1 (50ms) that a shot was made, and renderer 1 tells the server that a shot is going in a certain direction (30ms). The server now decides whether player 2 has been hit or not. A total of 240ms has passed between the time player 2 decided to move and the time the server decides if player 2 is dead. Meanwhile, after player 2 moved away from the wall, his renderer will show player 1 targeting him (50ms). He can send another move request (50ms), which will be relayed to the server (30ms), resulting in player 2 being back in hiding before player 1 is able to kill him.
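Tallying those legs as a quick sanity check (50ms per cloud gaming leg and 30ms per server leg, one way, as assumed above):

    # Tally of the peek-and-shoot walkthrough above (one-way legs, in milliseconds).
    CLOUD_LEG_MS = 50   # controller/screen <-> cloud renderer (one way)
    SERVER_LEG_MS = 30  # cloud renderer <-> multiplayer arbitrator (one way)

    # t = 0: player 2 pops out of cover.
    p1_sees_p2 = 2 * CLOUD_LEG_MS + 2 * SERVER_LEG_MS           # 160 ms until player 1 sees him
    shot_at_server = p1_sees_p2 + CLOUD_LEG_MS + SERVER_LEG_MS  # 240 ms until the shot reaches the arbitrator

    # Player 2 ducks back as soon as he sees he is targeted; his move only needs
    # one cloud leg plus one server leg to reach the arbitrator.
    duck_travel = CLOUD_LEG_MS + SERVER_LEG_MS                  # 80 ms

    print(f"player 1 sees player 2 after       {p1_sees_p2} ms")
    print(f"player 1's shot reaches server at  {shot_at_server} ms")
    print(f"player 2's duck only needs         {duck_travel} ms to reach the server")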
That is plenty of time for player 2 to move.
That’s also a lot of time for both players to feel lag. The issue here is that lag is not something you see; there won’t be any error message saying there is lag. Players will just feel like they are not good at the game and will look for something else to play.
You can greatly reduce the latency around the server by having the multiplayer server instance in the same DC as the renderers. Yes, I agree. But this assumes a lot:
-Will you have enough players for a given game playing around the same edge location?
-Will each game/matchmaker be modified to take players’ proximity into account?
-Will you force studios to use the same DC/cloud/edge provider for everything?
-What if you want to play with your friend in a different city?
-What if players are far from some DC/edge locations?
-How will cross-platform multiplayer work?
…and so on. Many elements have to line up for this major issue not to appear.
Google Stadia will work for some use cases/contexts (i.e. you want to try a game; you play by yourself and you are close to a high-density DC/edge; you want to play smaller, slower-paced games like puzzles and such). There are still a lot of technical challenges before they can do to the gaming industry what Netflix did to video.
Some niche markets will probably have more success (i.e. cloud gaming for mobile, like Hatch) due to the nature of their offering (lower hardware requirements, ad-hoc-style games, easier for studios, an alternative to the app stores, etc.).
Google is not the first to try cloud gaming, and there are many players in that market already. VDI is not new (convince me cloud gaming is NOT VDI and I’m offering you a beer). Many players have to agree on a common goal (service providers who own the last mile, studios creating the content, and so on…). This is a tough sell. The real value will be in how things are orchestrated in the backend. Gaming is NOT streaming. Gaming use cases call for bi-directional communication and low latency; CDN-based use cases are a quite different beast.
Each match is a different story.
Edgegap’s software prevents this problem by taking each match and creating the best environment for players to have fun. In the end, if the fun is not there, players will go elsewhere.
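To make “each match is a different story” a little more concrete, a naive per-match placement could simply pick, among the edge sites available for that match, the one that minimizes the worst player’s latency. This is only an illustrative sketch under that assumption, not Edgegap’s actual algorithm; the site names and numbers are made up:

    # Naive per-match placement: pick the site that minimizes the worst player's RTT.
    # Illustrative sketch only; site names and latencies are made up.
    match_latencies_ms = {            # site -> measured RTT for each player in this match
        "montreal-edge":  [12, 45, 60],
        "toronto-edge":   [25, 30, 38],
        "virginia-cloud": [55, 40, 22],
    }

    def pick_site(latencies_by_site):
        """Return the site whose worst-case (max) player RTT is lowest."""
        return min(latencies_by_site, key=lambda site: max(latencies_by_site[site]))

    best = pick_site(match_latencies_ms)
    print(best, "->", max(match_latencies_ms[best]), "ms worst-case")  # toronto-edge -> 38 ms worst-case

A real placement engine would weigh much more than raw RTT (capacity, cost, jitter, cross-play constraints), but the per-match nature of the decision is the point.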
Mathieu Duperre
Edgegap Founder