I’d like to thank the admins for being so open and direct about the issues that they’re facing.

  • albert180@feddit.de · 165 points · 1 year ago

    I don’t get why anyone hosts servers running 24/7 on AWS/GCloud/Azure. The pricing is just outrageous; practically everyone else will be cheaper.

    • thelastknowngod · 90 points · 1 year ago

      To be fair, with a proper autoscaling scheme in place these services should scale down significantly when not in use.

      That being said, a big reason for using AWS/GCP is all the additional services that are available on the platform… If the workload being run isn’t that complicated, the hyperscalers are probably overkill. Even DO or Linode would be a better option under those circumstances.
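
      For example, here’s a rough sketch of what I mean using the AWS CDK in TypeScript (the instance sizes and names are placeholders, nothing Lemmy-specific):

      import { App, Stack } from 'aws-cdk-lib';
      import * as ec2 from 'aws-cdk-lib/aws-ec2';
      import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';

      const app = new App();
      const stack = new Stack(app, 'AppStack');
      const vpc = new ec2.Vpc(stack, 'Vpc', { maxAzs: 2 });

      // Auto Scaling group that can shrink to a single small instance off-peak.
      const asg = new autoscaling.AutoScalingGroup(stack, 'AppAsg', {
        vpc,
        instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.SMALL),
        machineImage: ec2.MachineImage.latestAmazonLinux2(),
        minCapacity: 1,
        maxCapacity: 6,
      });

      // Target tracking: add instances when average CPU goes above 50%,
      // and drop back down again when the load goes away.
      asg.scaleOnCpuUtilization('CpuTracking', { targetUtilizationPercent: 50 });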

      • Overmind@lemmy.sdf.org · 65 points · 1 year ago

        This. AWS architect here. There are a lot of ways to reduce costs in AWS, like horizontal scaling, serverless functions, and reserved instances. Most people aren’t aware of them, and if you’re going to dive head first into something like the cloud, you’ll have to bear the consequences and learn eventually.

        • Greyscale@lemmy.sdf.org · 24 points · 1 year ago

          Even with ASGs, EC2 costs a bomb for the performance you get.

          And “serverless” functions are a trap.

          If you’re gonna commit to reserved instances, just buy hardware, for goodness’ sake; it’s a three-year commitment with a huge upfront spend.

            • whoisearth@lemmy.ca · 6 points · 1 year ago

              Mark my words, the loop is coming back around. I look forward to when my work migrates the datacenter off AWS and back on-prem because of ballooning costs.

              You work in IT long enough, you see it for the joke it is. We get paid obscene amounts of money to do what amounts to nothing.

              • msage@programming.dev · 2 points · 1 year ago

                That’s just because rotating managers always come in with the ‘new current thing everyone is doing’.

                Like no, 99% of companies can just do what they’ve always done. No need to rebuild everything from scratch.

              • Greyscale@lemmy.sdf.org · 1 point · 1 year ago

                I’m already in the middle of that. Everything non-public-facing is going to cheap leased boxes running workloads in Docker. idgaf if the machine underneath lives or dies; it’s three lines of config in a Terraform script to replace.

          • masterspace@lemmy.ca · 2 points · 1 year ago (edited)

            And “serverless” functions are a trap.

            How are serverless functions a trap? They seem like a great cheap option for simple CRUD / client > server > db apps (what most apps end up being).

            • Greyscale@lemmy.sdf.org · 1 point · 1 year ago

              Anything that is “cheap” to do on serverless is cheaper to do on a $5 droplet, especially once it starts to grow.

              Serverless gets you to buy into a vendor’s lock-in.

              • masterspace@lemmy.ca · 1 point · 1 year ago

                Interesting, I’ll check out droplets, but in my experience with Azure Functions there’s not much vendor lock-in. My API was just a normal Node.js / Express server; the only part that was tied to Azure Functions was the format of the endpoint definitions, and those could be adjusted to anything else in about an hour’s worth of work.
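
                To illustrate, here’s a minimal sketch of what that wrapper looks like with the v4 Node.js programming model (not my actual code; the names are made up). The handler body is plain logic you could drop straight into an Express route:

                import { app, HttpRequest, HttpResponseInit, InvocationContext } from '@azure/functions';

                // Plain business logic, no Azure types involved: this part ports anywhere.
                async function getPost(id: string): Promise<{ id: string; title: string }> {
                  return { id, title: 'hello world' }; // imagine a real DB call here
                }

                // The only Azure Functions-specific bit: how the endpoint is registered.
                app.http('getPost', {
                  methods: ['GET'],
                  authLevel: 'anonymous',
                  route: 'posts/{id}',
                  handler: async (req: HttpRequest, _ctx: InvocationContext): Promise<HttpResponseInit> => {
                    const post = await getPost(req.params.id);
                    return { status: 200, jsonBody: post };
                  },
                });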

        • thelastknowngod · 13 points · 1 year ago

          Yep. And if you want to really save some cash and don’t mind getting a little crazy, use an EKS node orchestrator that supports spot instances. I’m starting to do a serious dive into Harness at the moment, actually.

          Google recently released a white paper on cost savings in Kubernetes as well.

          • Toribor@corndog.social · 1 point · 1 year ago

            If you’ve got a Kubernetes cluster running on 10 different spot instances, isn’t there a risk that all ten could be revoked at the same time, even if they’re spread across regions and availability zones?

            • Zalack@startrek.website · 1 point · 1 year ago

              Counterargument: I don’t need Lemmy to have 100% uptime. It’s not a corporate service and while – obviously – if it’s down all the time I would eventually move on, I’m not going to fault a not-for-profit entity for periodic failures.

            • thelastknowngod · 1 point · 1 year ago

              Ideally you’d have a baseline node group of traditional instances and use spot instances only for scale-up.

              I think that’s unlikely though. PDBs (pod disruption budgets) and affinity rules should cover most cases. I’m just starting to dig into this, though, so I may be mistaken.

            • Phoenixbouncing@lemmy.world · 1 point · 1 year ago

              Got my AWS architect cert 2 weeks ago.

              What you can do is set up a spot fleet so it fills up with spot instances and only uses on-demand if the spot price goes above the on-demand price.

              You could also have a pure spot fleet plus a reserved instance, and use a load balancer with health checks to route traffic (roughly like the sketch below).

              The one thing you shouldn’t do with cloud providers is lift and shift your existing instances; that’s what leads to the crazy prices some people are seeing.

              Renting an EC2 instance on demand and installing your software on it is almost always the wrong way to do it.
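
              Roughly what that second setup looks like in the AWS CDK (TypeScript), as a sketch only; the instance types, bid price, and health-check path are placeholders:

              import { App, Stack } from 'aws-cdk-lib';
              import * as ec2 from 'aws-cdk-lib/aws-ec2';
              import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';
              import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';

              const app = new App();
              const stack = new Stack(app, 'FleetStack');
              const vpc = new ec2.Vpc(stack, 'Vpc', { maxAzs: 2 });

              const instanceType = ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MEDIUM);
              const machineImage = ec2.MachineImage.latestAmazonLinux2();

              // Baseline capacity on regular instances (cover this with a reserved instance or savings plan).
              const baseline = new autoscaling.AutoScalingGroup(stack, 'Baseline', {
                vpc, instanceType, machineImage, minCapacity: 1, maxCapacity: 1,
              });

              // Burst capacity on spot; these instances may be reclaimed, which is fine.
              const spot = new autoscaling.AutoScalingGroup(stack, 'SpotFleet', {
                vpc, instanceType, machineImage, minCapacity: 0, maxCapacity: 10,
                spotPrice: '0.02', // max hourly bid in USD (placeholder)
              });
              spot.scaleOnCpuUtilization('SpotCpu', { targetUtilizationPercent: 60 });

              // Load balancer with health checks routes around unhealthy or reclaimed instances.
              const alb = new elbv2.ApplicationLoadBalancer(stack, 'Alb', { vpc, internetFacing: true });
              const listener = alb.addListener('Http', { port: 80 });
              listener.addTargets('Fleet', {
                port: 80,
                targets: [baseline, spot],
                healthCheck: { path: '/healthz' }, // placeholder path
              });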

        • Toribor@corndog.social · 4 points · 1 year ago

          I’m in a similar boat. I’m a sysadmin supporting a legacy application running on AWS EC2 instances as well as a new ‘serverless’, microservice-based platform. It’s really, really hard to scale and optimize anything running on EC2s unless you really know what you’re doing or the application is designed with clustering in mind.

          You tend to end up sizing instances based on peak load and then wasting capacity 90% of the time (and burning through cash like crazy). I can imagine a lot of Lemmy admins are overspending so fast they give up before they figure it out.

          • Dasnap@lemmy.world · 6 points · 1 year ago

            Nowadays I feel like EC2 is mostly used for legacy support or testing. Most production workloads should probably be built on some kind of container solution so you can scale more easily.

    • penguin@sh.itjust.works · 27 points (1 down) · 1 year ago

      AWS is perfect for large operations that value stability and elasticity over anything else.

      It’s very easy to just spin up a thousand extra servers for momentary demand or some new exciting project. It’s also easy to locate multiple instances all over the world for low latency with your users.

      If you know you’re going to need a couple of servers for years and have the hardware know-how, then it’s cheaper to do it yourself for sure.

      It’s also possible to use AWS more efficiently if you know all of their services. I ran a small utils website for my friends and me on it a while ago, and it was essentially free, since the static files were tiny and served from S3 and the backend was Lambda, which gives you quite a few free calls before it starts charging.
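
      For a sense of how small that backend was: it was just a handful of handlers along the lines of this sketch (types from @types/aws-lambda, names made up). Lambda’s always-free tier covers roughly a million requests a month, so a site like that never gets near a bill:

      import type { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from 'aws-lambda';

      // One small Lambda behind API Gateway; you only pay per invocation.
      export const handler = async (
        event: APIGatewayProxyEventV2,
      ): Promise<APIGatewayProxyResultV2> => {
        const name = event.queryStringParameters?.name ?? 'world';
        return {
          statusCode: 200,
          headers: { 'content-type': 'application/json' },
          body: JSON.stringify({ greeting: `hello, ${name}` }),
        };
      };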

    • fmstrat@lemmy.nowsci.com · 14 points · 1 year ago

      Habit (my guess). It’s what’s used professionally, despite it being proven over and over that the price-to-performance is terrible compared to lesser-known providers.

      • masterspace@lemmy.ca · 7 points · 1 year ago

        If the average salary of a web engineer capable of running a site like this is ~$180,000, then a $30,000 difference in cost is only about two months’ salary. Learning and dealing with a new hosting environment can easily exceed that.

          • masterspace@lemmy.ca · 2 points · 1 year ago

            Maybe. Or maybe that hosting provider doesn’t exist in the long term, maybe it crashes more often or makes sudden API changes that cause more ongoing work and headaches which chew up more time and salary, or maybe you end up needing a more complex service that they don’t offer and have to go to AWS / Azure anyway.

        • fmstrat@lemmy.nowsci.com · 1 point · 1 year ago

          What’s that? Taxes? And no way do I agree with this. $30k is a lot, no matter how much you make. Learning a new environment is not THAT hard.

          • masterspace@lemmy.ca · 1 point · 1 year ago

            It is, but learning a new environment and then dealing with any down-the-line troubleshooting or instability can easily add up to $30,000 if you actually track where salaried employees’ time is going.

      • virtualbriefcase · 6 points · 1 year ago

        That, and, like others mentioned, their flexibility, plus the fact that they’re fairly reliable (maybe less so than some good IaaS providers, but a fair bit more than your consumer VPS places). Moments ago I went to the Hetzner site to check them out and got:

        Status Code 504 Gateway Timeout

        The upstream server failed to send a request in the time allowed by the server. If you are the Administrator of the Upstream, check your server logs for errors.

        Annoying if it’s your Nextcloud instance that’s down for a few minutes, but a worthy trade-off if you’re paying 1/4 of the price. It’s extremely costly for a big business, though, and even risks people’s lives for a few very important systems.

        • barsoap · 2 points · 1 year ago

          Hetzner has four nines of availability, usually higher. AWS claims five nines, but chances are you’ll mess something up on your end and end up at three or two nines anyway (rough downtime numbers below). If you really need five nines you should probably colocate and only use the likes of AWS as a spike backup.

          And I guess “messed something up on your end” is what happened in that case: I don’t think Hetzner is necessarily in the habit of maximising the availability of their homepage at all costs (as opposed to the hosting infrastructure); you probably caught them in the middle of pushing a new version.

          …speaking of spike backups: That is what AWS is actually good for. Quickly spinning up stuff and shutting it down again before it eats all your money.
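
          For reference, those nines work out to roughly this much allowed downtime per year:

          // Allowed downtime per year for a given availability target.
          const MINUTES_PER_YEAR = 365.25 * 24 * 60; // ~525,960

          function downtimePerYear(availability: number): string {
            const minutes = MINUTES_PER_YEAR * (1 - availability);
            return `${minutes.toFixed(1)} min/year`;
          }

          console.log(downtimePerYear(0.999));   // three nines: ~526 min (~8.8 hours)
          console.log(downtimePerYear(0.9999));  // four nines:  ~53 min
          console.log(downtimePerYear(0.99999)); // five nines:  ~5 min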

      • pomodoro_longbreak@sh.itjust.works · 1 point · 1 year ago

        I’m not a server admin, but I am a dev, and for many of us it’s just what we know because it’s what our employers use. So sadly, when it comes to setting up infrastructure on our own time, the path of least resistance is just to use what we’re already used to.

        Personally I’m off AWS now though, but it definitely took some extra work (which was worth it, to be clear).

    • c1177johuk@lemmy.world · 6 points · 1 year ago

      AWS is mostly only useful for large companies that need one hosting provider for all their needs, with every single product tightly integrated into the others.

          • barsoap · 2 points · 1 year ago (edited)

            It does, but that comes with the territory. SAP is the IBM mainframe of business software. You’ll be hard-pressed to find a large multinational that doesn’t run SAP… or have a couple of IBM mainframes to run it on. The kind of “large” that means they don’t have IT departments but IT subsidiaries, probably created by buying up a couple of tech consultancies. You know, like Samsung buying Joyent and saying “never mind your public platform, you’ll be busy enough hosting all our data; we’re the only customer you’ll ever need.”

      • AnonymousDeity@beehaw.org · 1 point · 1 year ago (edited)

        I mean, I’m sure Lemmy’s server process is stateless, so it could use Cloud Run/ECS pretty efficiently, and that wouldn’t really require a rewrite (unless the process is stateful for some reason).

        • steal_your_face@lemmy.ml · 2 points · 1 year ago

          It’s possible to run Lemmy on Kubernetes, so I assume you could on ECS as well. I’m pretty sure the Postgres DB manages the state, not the process.
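
          If the process really is stateless, the ECS version isn’t much code either. A sketch with the AWS CDK (TypeScript); the image name, port, and environment variable are from memory, so check the Lemmy docs before trusting them:

          import { App, Stack } from 'aws-cdk-lib';
          import * as ec2 from 'aws-cdk-lib/aws-ec2';
          import * as ecs from 'aws-cdk-lib/aws-ecs';
          import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';

          const app = new App();
          const stack = new Stack(app, 'LemmyStack');
          const vpc = new ec2.Vpc(stack, 'Vpc', { maxAzs: 2 });
          const cluster = new ecs.Cluster(stack, 'Cluster', { vpc });

          // Stateless Lemmy backend on Fargate behind a load balancer;
          // Postgres lives elsewhere (RDS, or a box you manage) and holds all the state.
          new ecsPatterns.ApplicationLoadBalancedFargateService(stack, 'LemmyApi', {
            cluster,
            cpu: 256,
            memoryLimitMiB: 512,
            desiredCount: 2,
            taskImageOptions: {
              image: ecs.ContainerImage.fromRegistry('dessalines/lemmy'), // image name from memory
              containerPort: 8536, // Lemmy's default backend port, if I remember right
              environment: { LEMMY_DATABASE_URL: 'postgres://user:pass@db-host:5432/lemmy' }, // placeholder
            },
          });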

    • hawkwind@lemmy.management · 3 points · 1 year ago

      The pricing scheme here is designed to gouge businesses for as much as, or more than, the traditional non-cloud equivalent, which happens to be completely unaffordable for individuals. Imagine buying a new enterprise-grade server for your home setup.

    • Merlin · 2 points · 1 year ago

      Can you point to the “everyone else”? Just out of curiosity. I know there’s DigitalOcean, but I’m not quite sure they’re cheaper than Azure/AWS.

  • MooseBoys@lemmy.world · 70 points · 1 year ago

    Figure 1: Human discovers that hosting a web service for hundreds of thousands of users is expensive.

  • filister@lemmy.world · 25 points · 1 year ago (edited)

    Why don’t you migrate to cheaper providers like Hetzner? I mean, AWS is extremely expensive for what they offer, and I’m pretty sure there are hundreds of people out here who would willingly help you set it up.

  • franglais · 25 points · 1 year ago

    Oracle free tier: 4 Arm cores, 200 GB storage, 24 GB RAM, zero money spent.

    • mplewis@lemmy.globe.pub · 29 points · 1 year ago

      Oracle is all fun and games until they lose your instance’s IP or data and don’t give it back because you’re a free tier freeloader.

      • franglais · 2 points · 1 year ago

        It’s a great deal if you stay small. The idea is a loss leader: they tempt you in and you set up your service, then when you need to scale up, they charge extra.