• Aceticon@lemmy.world

    At some point in my career I’ve actually designed mission-critical, high-performance distributed server systems for a living, so I’m well aware of that.

    You can still pack thousands of users per server and get very low latency as long as you use the right architecture for it (it’s mainly done with in-memory caching and load balancing), and that’s when you’re accessing gigantic datasets which far exceed the data space of a game. In a game the actual shared data space is minuscule, since all clients hold a local copy of most of the data space - i.e. the game level they’re playing in. Even with the most insane anti-cheat logic that checks every piece of data coming in from the client side against a server-side copy of the “game level data space”, it’s still but a fraction of the shared data space in equivalent situations in the corporate world. Plus it tends to be easily partitionable data: even in an MMORPG with a single fully open massive playing space, players only affect limited areas of the entire game space, so you don’t really need to check the actions of a player against the data of all other players.
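
    To make the partitioning point concrete, here’s a minimal sketch in Go - entirely my own illustration with made-up names (World, Cell, ValidateMove) and arbitrary numbers, not how any actual game server does it - of validating a player action against only the small in-memory partition it touches:

    ```go
    // A minimal sketch, assuming a grid-partitioned level: the authoritative
    // level state is kept in memory and split into cells, so a player action is
    // only checked against the tiny cell it touches, never the whole world.
    package main

    import (
        "fmt"
        "math"
    )

    const (
        cellSize = 64.0 // world units per partition cell (arbitrary choice)
        maxStep  = 5.0  // max plausible movement per tick (arbitrary choice)
    )

    // Position of an entity in the shared game space.
    type Position struct{ X, Y float64 }

    // Cell holds the authoritative state for one small region of the level.
    type Cell struct {
        Walls []Position // immutable level geometry, loaded into memory once
    }

    // World is the server-side, in-memory copy of the level, partitioned into cells.
    type World struct {
        cells map[[2]int]*Cell
    }

    // cellKey maps a world position to the grid cell that owns it.
    func cellKey(p Position) [2]int {
        return [2]int{int(p.X / cellSize), int(p.Y / cellSize)}
    }

    // ValidateMove is the "anti-cheat" check: a proposed move is validated using
    // only the data of the cell it lands in, a tiny fraction of the full data space.
    func (w *World) ValidateMove(from, to Position) bool {
        // Reject impossible jumps without touching any other player's data.
        if math.Hypot(to.X-from.X, to.Y-from.Y) > maxStep {
            return false
        }
        cell, ok := w.cells[cellKey(to)]
        if !ok {
            return false // moving outside the known level is rejected
        }
        for _, wall := range cell.Walls {
            if wall == to {
                return false // can't move into level geometry
            }
        }
        return true
    }

    func main() {
        w := &World{cells: map[[2]int]*Cell{
            {0, 0}: {Walls: []Position{{X: 2, Y: 2}}},
        }}
        fmt.Println(w.ValidateMove(Position{X: 1, Y: 1}, Position{X: 3, Y: 3})) // true
        fmt.Println(w.ValidateMove(Position{X: 1, Y: 1}, Position{X: 2, Y: 2})) // false: blocked by level geometry
    }
    ```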

    Also keep in mind that all the static data (never-changing or slow-changing stuff like achievements or immutable level configuration) can still be served with “normal” latencies.
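
    As a very rough illustration (assumed endpoint and data, nothing from any real PSN API), that kind of immutable data can be served from a plain HTTP handler with long cache lifetimes, completely outside the low-latency game path:

    ```go
    // Sketch of serving immutable data (e.g. achievement definitions) with
    // aggressive caching, so it never competes with latency-sensitive traffic.
    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    // achievementDefs stands in for static, never-changing configuration.
    var achievementDefs = map[string]string{
        "first_blood": "Get your first kill",
        "explorer":    "Visit every region of the map",
    }

    func main() {
        http.HandleFunc("/achievements", func(w http.ResponseWriter, r *http.Request) {
            // Immutable data: let clients and CDNs cache it for a day, so the
            // origin server's latency barely matters for this traffic.
            w.Header().Set("Cache-Control", "public, max-age=86400, immutable")
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode(achievementDefs)
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }
    ```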

    Further, the kind of Tier 1 ISP that provides network access for companies like Sony, which service millions of users, already has more than good enough latency in its normal offering, hence Sony need not pay extra for “low latency”.

    Anyways, you do make a good and valid point, it’s just that IMHO that’s the kind of thing that pushes the running costs per-player-month from one dollar cents or less to, at most (and this is likely quite a large overestimation), a dollar per-player-month unless they only have tens of players per-server (which would be insane and they should fire their systems designers if that’s the case).