• Eager Eagle@lemmy.world · 7 months ago

    xfinity will advertise 100 Tbps lines with the abysmal 1.5 TB/mo data cap anyway

    “you can drive this super sport car for $ per month - but only for 10 miles”

        • runefehay@kbin.social · 7 months ago

          Isn’t the phrase they use “up to” the promised speed? So if it is 300 bps, that is still not above the promised 5 Mbps, so they technically met their promise.

      • Zorque@kbin.social · 7 months ago

        Aren’t fiber lines typically symmetrical? At least that’s how I’ve usually seen them advertised.

        • wildbus8979@sh.itjust.works · 7 months ago

          You underestimate the fuckery that ISPs will go through to offer the least amount of services for the most possible money.

          • sugar_in_your_tea@sh.itjust.works · 7 months ago

            Mine at least lets me adjust the upload and download ratio for my plan. I’m currently on 50/20, but I could upgrade my plan and get 100/20, 70/50, or whatever I want. But 50/20 has been plenty for me, and we’re getting municipal fiber soon so I’ll have more options as well.

            AFAIK, cable doesn’t offer that: you get 5 Mbps upload on pretty much every plan, or you upgrade to some ridiculous tier to get faster upload.

            • wildbus8979@sh.itjust.works · 7 months ago

              5 Mbps is absolutely bonkers. I had 30/5 back in like 2006. And TCP has an overhead of about 5%, some protocols even more.

              I lost my symmetric gigabit fiber recently after moving and I miss it dearly. Might have upgraded to 3Gbps by now 😭
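A quick sketch of what that ~5% protocol overhead means in practice (the 5% figure is the rule of thumb from the comment above, not an exact constant):

```python
def goodput_mbps(link_mbps: float, overhead: float = 0.05) -> float:
    """Usable application throughput after protocol overhead.

    The ~5% figure is a rough rule of thumb for TCP/IP header
    overhead with typical 1500-byte Ethernet frames; real
    numbers vary with packet size and protocol.
    """
    return link_mbps * (1 - overhead)

# A nominal 5 Mbps upload leaves roughly 4.75 Mbps of goodput.
print(goodput_mbps(5))
```

On a small upload pipe, that overhead comes straight out of an already tight budget.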

              • sugar_in_your_tea@sh.itjust.works · 7 months ago

                I had gigabit a long time ago, and while it was nice, I’m unwilling to pay for it. 50/25 is good enough for me, and it costs less than half what gigabit would cost ($55/month vs $125/month). I just checked, and apparently all plans have half the upload vs download, so that’s nice.

                Our new service promises to be $60-70 for 250 symmetric, and that would only get me 100/50 at my current ISP, so I’ll probably be getting that upgrade when it’s available.

    • doublejay1999@lemmy.world · 7 months ago

      Don’t be silly son, the free market will signal there is opportunity and prices will drop and quality will go up.

    • floridaman@lemmy.blahaj.zone · 7 months ago

      I hate Comcast as much as the next guy but I feel like 1.5TB a month would be reasonable. Even at those speeds you probably wouldn’t be downloading more, just downloading whatever you do now but faster.

      E: I was gonna ask why this was so controversial, but I just checked my router’s stats and, oh yeah, I’ve only downloaded around half a terabyte over 3 segregated VLANs in the past 2 months. I’ve uploaded almost double that, which is baffling to me though. Even still, I don’t see why anyone would be downloading anything more than a terabyte in a month unless you’re one of those data hoarders, which, fair, but… I’ll stop my rambling.

      • Sneezycat@sopuli.xyz · 7 months ago

        Why the fuck would I want that speed if I can only fully use it for less than a second before hitting the data cap? I’d rather have 100 times less speed with 100 times more cap, so I can actually fully use it however I want.

        Also it’s just ridiculous anyway because I don’t even think hard drive write speeds are that fast.
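The mismatch is easy to quantify; a small sketch using the exaggerated numbers from this thread (a 100 Tbps line against a 1.5 TB/month cap):

```python
def seconds_to_cap(cap_tb: float, speed_tbps: float) -> float:
    """Seconds of full-rate transfer before a monthly data cap is gone.

    cap_tb is in terabytes, speed_tbps in terabits per second,
    so convert the cap to terabits (x8) first.
    """
    return cap_tb * 8 / speed_tbps

# At the joked-about 100 Tbps, a 1.5 TB cap lasts 0.12 seconds.
print(seconds_to_cap(1.5, 100))
```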

        • mb_ · 7 months ago

          I think you meant no data cap.

        • RippleEffect · 7 months ago

          Data caps should only come into play when there’s abnormal congestion.

        • RonSijm@programming.dev · 7 months ago

          There should be, that’s just how fiber works. If they lay a 10 Gb line in the street, they’ll probably sell a 1 Gb connection to 100 households. (Margins vary by provider and location.)

          If they give you an uncapped connection to the entire wire, you’ll DoS the rest of the neighborhood

          That’s why people are complaining “I bought 1Gb internet, but I’m only getting 100Mb!” - They oversold bandwidth in a busy area. 1Gb would probably be the max speed if everyone else was idle. If they gave everyone uncapped connections the problem would get even worse
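The oversubscription argument above boils down to simple division; a sketch with the hypothetical 10 Gb trunk shared by 100 subscribers:

```python
def worst_case_share_gbps(trunk_gbps: float, subscribers: int,
                          sold_gbps: float) -> float:
    """Per-subscriber share of a shared trunk if everyone transmits at once.

    With a 10 Gb trunk sold as 1 Gb to 100 households (10:1
    oversubscription), each subscriber's worst case is 0.1 Gbps;
    the full 1 Gbps is only available when neighbors are idle.
    """
    fair_share = trunk_gbps / subscribers
    return min(fair_share, sold_gbps)

print(worst_case_share_gbps(10, 100, 1.0))  # 0.1
```

The ratio is a business choice; the replies below point out this is a bandwidth limit, not what “data cap” usually means.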

          • ArchAengelus@lemmy.dbzer0.com · 7 months ago

            You’re talking about data rates here, measured in bits per second.

            Data caps have to do with the total amount of data you are allocated over a longer period of time. Usually per month. In the case of Comcast, it’s 1.5 TB/month.

            If the customer exceeds that allotment during the month, they will be charged an additional “overage fee” per arbitrary unit, usually by the gigabyte.

            It has nothing to do with the speed they advertise on a line, but rather a way to charge “heavy users” more.
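As a sketch of how such overage billing works (the $10-per-50 GB block schedule here is purely illustrative, not Comcast’s actual tariff; real ISPs also typically cap the total fee):

```python
import math

def overage_fee(used_tb: float, cap_tb: float = 1.5,
                fee_per_block: float = 10.0, block_gb: float = 50.0) -> float:
    """Monthly overage charge for usage beyond the cap.

    Hypothetical schedule: a flat fee per started block of
    block_gb gigabytes over the cap. Rounding guards against
    float noise in the TB -> GB conversion.
    """
    excess_gb = round(max(0.0, used_tb - cap_tb) * 1000, 3)
    return math.ceil(excess_gb / block_gb) * fee_per_block

print(overage_fee(1.8))  # 300 GB over -> 6 blocks -> 60.0
```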

          • crystenn@lemmy.ml · 7 months ago

            you’re talking about a bandwidth cap, not a data cap. data caps are when you get throttled after downloading a certain amount of data or get charged extra. think phone data plans where you have 10 or 20gb or whatever per month

      • 4am · 7 months ago

        Florida man fails math, yet again

      • repungnant_canary@lemmy.world · 7 months ago

        Data caps are simply false advertising - if your infrastructure can only handle X Tb/s then sell lower client speeds or implement some clever QoS.

        There are plenty of users for whom 1.5 TB is quite or very restrictive: multi-member households, video/photo editors working with raw data, scientists working with raw data, Flatpak users with Nvidia GPUs, or people who self-host their data or do frequent backups, etc.

        With the popularity of WFH and our dependence on online services the internet is virtually as vital as water or electricity, and you wouldn’t want to be restricted to having no electricity until the end of the month just because you used the angle grinder for a few afternoons.

      • edric · 7 months ago

        I’m on pace for 0.60 TB this month and I’m no heavy user. I only have one 4K TV and a laptop for work that I use all day. My wife is mostly on her phone but is a heavy TV user in the evening. I can imagine people who download and/or torrent most of the content they consume can easily hit 1.5 TB.
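A rough estimate of how streaming alone adds up (the ~7 GB/hour figure for 4K is a commonly cited approximation; actual bitrates vary by service and codec):

```python
def monthly_streaming_tb(hours_per_day: float, gb_per_hour: float = 7.0,
                         days: int = 30) -> float:
    """Rough monthly data use from streaming alone, in terabytes.

    gb_per_hour defaults to ~7 GB/hour, a common ballpark for
    4K streaming; 1080p is closer to 3 GB/hour.
    """
    return hours_per_day * gb_per_hour * days / 1000

# Four hours of 4K a night is ~0.84 TB/month before anything else.
print(monthly_streaming_tb(4))
```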

      • pirat@lemmy.world · 7 months ago

        Faster than “[…] the bandwidth of a station wagon full of tapes hurtling down the highway”?

        (Quoted: Tanenbaum, 1981)
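The station-wagon comparison still holds up; a sketch with purely illustrative numbers (100 TB of drives on a 10-hour drive):

```python
def sneakernet_gbps(payload_tb: float, trip_hours: float) -> float:
    """Effective average bandwidth of physically shipping storage.

    payload_tb terabytes delivered over trip_hours hours,
    expressed in gigabits per second (TB -> Tb -> Gb).
    """
    return payload_tb * 8 * 1000 / (trip_hours * 3600)

# 100 TB of drives on a 10-hour drive averages ~22 Gbps.
print(round(sneakernet_gbps(100, 10), 1))
```

Never underestimate the bandwidth; the latency, of course, is terrible.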

    • KillingTimeItself@lemmy.dbzer0.com · 7 months ago

      according to the FCC (or the FTC, whichever one it was), they recently raised the defined speed of a broadband connection.

      It’s not symmetrical yet though. Which is weird.

      • KamikazeRusher · 7 months ago

        It’s not symmetrical yet though. Which is weird.

        Eh, I would say it’s to be expected. A lot of infrastructure still relies on coax/DOCSIS which has its limitations in comparison to an all-fiber backbone. (This post has some good explanations.) However it wouldn’t surprise me if some ISPs argue that “nobody needs that much uplink” and “it helps restrict piracy” when really it’s just them holding out against performing upgrades.

        • KillingTimeItself@lemmy.dbzer0.com · 7 months ago

          it really shouldn’t be though; this is going to be in effect for, like, the next decade or two. FTTH is literally fresh off the presses for most suburbanites and city dwellers; I see no reason this standard should be so antiquated anymore.

          Literally only incentivizes ISPs to keep rolling out shitty infra that’s slow as balls everywhere that isn’t suburbia.

    • Squizzy@lemmy.world · 7 months ago

      There are limitations to the technology, similar to saying something is “3 times faster than sound”.

      Also broadband as a regulated term would have speeds tied to that definition.

  • jordanlund@lemmy.world · 7 months ago

    Distances though? I’ve seen similar breakthroughs in the past but it was only good for networking within the same room.

    • Blue_Morpho@lemmy.world · 7 months ago

      It’s optical fiber, so it’s good for miles. It’s unlikely to reach homes for decades, but telcos will use it for connecting networks.

      Optical fiber is already 100 gigabit so the article comparing it to your home connection is stupid.

      So the scientists improved current fiber speed by 10x, not 1.2 million x.

      • credo@lemmy.world · 7 months ago

        Note they did not say 1.2 million times faster than fiber. Instead they compared it to the broadband definition; an obvious choice of clickbait terminology.

        • 4am · 7 months ago

          No one said “always”; original comment is correct that fiber can literally go miles

      • blarth@thelemmy.club · 7 months ago

        It’s much more than just 100Gb/s.

        A single fiber can carry over 90 channels of 400G each. The public is misled by articles like this. It’s like saying that scientists have figured out how to deliver the power of the sun, but that technology would be reserved for the power company’s generation facilities, not your house.

        • Kazumara@discuss.tchncs.de · 7 months ago

          over 90 channels of 400G each

          You mean with 50 GHz channels in the C-band? That would put you at something like 42 Gbaud with DP-QAM64 modulation. It probably works, but your reach is going to be pretty shitty because your OSNR requirements will be high, so you can’t amplify often. I would think that 58 channels at 75 GHz or even 44 channels at 100 GHz are the more likely deployment scenarios.
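The channel counts and line rates above fall out of simple arithmetic; this sketch assumes roughly 4.4 THz of usable extended C-band spectrum, which is an approximation on my part:

```python
def channel_count(band_ghz: float, spacing_ghz: float) -> int:
    """How many DWDM channels fit in a band at a given grid spacing."""
    return int(band_ghz // spacing_ghz)

def raw_rate_gbps(gbaud: float, bits_per_symbol: int,
                  polarizations: int = 2) -> float:
    """Line rate before FEC overhead:
    symbol rate x bits/symbol x polarizations."""
    return gbaud * bits_per_symbol * polarizations

# Assuming ~4400 GHz of usable (extended) C-band spectrum:
print(channel_count(4400, 75))   # 58 channels at 75 GHz spacing
print(channel_count(4400, 100))  # 44 channels at 100 GHz spacing
# DP-QAM64 carries 6 bits/symbol per polarization:
print(raw_rate_gbps(42, 6))      # 504 Gb/s raw, leaving ~400G net after FEC
```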

          On the other hand, we aren’t struggling for spectrum yet, so I haven’t really had to make that call.

      • AstralPath@lemmy.ca · 7 months ago

        It’s not stupid at all. “Broadband” speed is a term that laypeople across the country can at least conceptualize. Articles like this aren’t necessarily written exclusively for industry folks. If the population can’t relate to the information well, how can they hope to pressure telcos for better services?

        • Blue_Morpho@lemmy.world · 7 months ago

          So it’s fine if an article says SpaceX develops a new rocket that travels 100x faster than a car?

          Because that implies a breakthrough when it’s actually not significantly faster than other rockets: it’s the speed needed to reach the ISS.

          10x faster than existing fiber would be accurate reporting, especially given that there are labs that have already transmitted at petabit speeds over optical fiber. So the terabit figure isn’t significant, only their method is.

            • Blue_Morpho@lemmy.world · 7 months ago

              Then give me a related analogy you would accept and I’ll easily twist it into a misleading comparison, exactly like the article did.

              How about this, “British Telecom develops high speed internet 1700x faster than previous Internet service technology. Availability is today!”

              The above statement is completely true.

              Comparing to home Internet when it isn’t home Internet technology is misleading. Ignoring that there are already faster optical Internet speeds in other labs around the world is misleading.

                • aStonedSanta · 7 months ago

                  Except that isn’t the case here. It’s completely different technology that transfers the data. So it’s comparing a train to a car.

      • 9point6@lemmy.world · 7 months ago

        I wonder what non-telco applications will use this

        I wonder if something like a sports stadium has video requirements that would get close with HFR 8K video?

        • Justin@lemmy.jlh.name · 7 months ago

          To be fair, it all trickles down to home users eventually. We’re starting to see 10+ Gbps fiber in enthusiast home networks and internet connections. Small offices are widely adopting 100 Gbps fiber. It wasn’t that long ago that we were adopting 1 gigabit ethernet in home networks, and it won’t be long before we see widespread 800+ gigabit fiber.

          Streaming video is definitely a big application where more bandwidth will come in handy, I think also transferring large AI models in the 100s of gigabytes may also become a large amount of traffic in the near future.

          • sugar_in_your_tea@sh.itjust.works · 7 months ago

            Yup, my city has historically had mediocre internet, and now they’re rolling out fiber and advertising 10 Gbps at a relatively reasonable $200/month.

            I’m probably not getting it anytime soon (I’m happy with my 50/20 service), but I know a few enthusiasts who will. I’ll see what the final pricing looks like and decide if it’s worth upgrading my infrastructure (I only have wireless AC, so there’s no point in going above 300 Mbps or so).

          • aStonedSanta · 7 months ago

            Man, the tech is so pricey though. 10G switches are scary lol

        • fruitycoder@sh.itjust.works · 7 months ago

          Disaggregated compute might be able to leverage this in the data center. I could use this to get my server, gaming PC and home theater to share memory bandwidth on top of storage, heck maybe some direct memory access between distributed accelerators.

          Gotta eat those PCIe lanes somehow

          • Kazumara@discuss.tchncs.de · 7 months ago

            Disaggregated compute might be able to leverage this in the data center.

            I don’t think people would fuck with amplifiers in a DC environment. Just using more fiber would be so much cheaper and easier to maintain. At least I haven’t heard of any current Datacenters even using conventional DWDM in the C-band.

            At best Google was using Bidir Optics, which I suppose is a minimal form of wavelength division multiplexing.

    • Phoenixz@lemmy.ca · 7 months ago

      Also 1.2 million times less likely to leave the research stage, because even if this is true (a very big if already) it’s still “new and exciting and revolutionary improvement #3626462” this week alone. Revolutionary new battery tech comes out twice a week if you believe the pop-sci tech sites; it’s 99.9% crap.

      • frezik@midwest.social · 7 months ago

        Battery advancements aren’t crap. We’ve gotten 5-8% improvement in capacity per year, which compounds to a doubling every 10 to 15 years. Every advancement covered by over sensationalized pop sci articles you’ve ever heard has contributed to that. It’s important not to let sensationalism make you jaded to actual advancements.
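The doubling claim is just compound growth; a quick check:

```python
import math

def doubling_years(annual_growth: float) -> float:
    """Years for capacity to double at a steady compound growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

# 5%/yr doubles in ~14 years; 8%/yr in ~9 years, matching
# the "doubling every 10 to 15 years" range above.
print(round(doubling_years(0.05)))  # 14
print(round(doubling_years(0.08)))  # 9
```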

        Now, as for broadband, we haven’t pushed out to the last mile the technologies we already have. However, this sort of thing is useful for the backbone and for universities. Universities sometimes have to transfer massive amounts of data, and one of the most efficient ways to do that is a van full of hard drives.

        • Phoenixz@lemmy.ca · 7 months ago

          That’s not what I said though. I meant that 99.9% of the “revolutionary new battery technology” articles on blogs, magazines and whatnot are clickbait crap. I’ve seen these articles for at least the last 25 years, and beyond Li-ion batteries not much revolutionary has happened on the battery front. My point was more against the clickbait science and tech news that regurgitates the same dumb crap all the time.

      • n3m37h@lemmy.dbzer0.com · 7 months ago

        Stuff like this is a bit more believable. It will still be more than a decade before we see any benefit. First all of the sea cables would get the upgrade, then private companies (banks mainly), then governments (military and such); ISPs will prolly not touch it for as long as possible till governments force ’em.

    • kent_eh@lemmy.ca · 7 months ago

      No normal consumer user would have any reasonable use case for this kind of bandwidth.

      This is data center and backbone network stuff.

      • KillingTimeItself@lemmy.dbzer0.com · 7 months ago

        ultimately the end consumer’s connection is going to run through it somewhere, or through something very similar, more than likely.

        It’s not going to be FTTH levels of connectivity, but for interconnects between ISPs it very well could be.

  • Aceticon@lemmy.world · 7 months ago

    It’s compared to the average broadband speed in the UK, so it’s not quite as exciting as it might sound…

  • thbb@lemmy.world · 7 months ago

    I remember the early 90’s when fiber connection was being developed in research centers.

    Researchers had found a way to transmit all of a country’s phone calls’ bandwidth through a simple fiber cable. Then, they wondered: what could we use this for?

    This was a few years before the explosion of the internet…

    • Kazumara@discuss.tchncs.de · 7 months ago

      TAT-8, the first transatlantic fiber-optic connection, already went into productive use in 1988, so the lab work must have happened in the ’80s already.

  • wrekone@lemmyf.uk · 7 months ago

    With further refinement and scaling, internet providers could ramp up standard speeds without overhauling current fiber optic infrastructures.

    Don’t worry. They’ll find some way to use this to justify massive rate increases.

    • tocopherol@lemmy.dbzer0.com · 7 months ago

      We must make ISPs a public service owned by the people. Who can argue that internet isn’t essential to being a regular member of society? These companies rob us and use their monopolies to manipulate us.

  • humbletightband@lemmy.dbzer0.com · 7 months ago

    I’m highly suspicious about group dispersion over long distances. Today’s infrastructure was developed for a certain range of frequencies. Broadening it wouldn’t be that easy right away - we would need to introduce error correction, which compromises the speed multiplier.

    Too lazy to get the original paper though

    • blarth@thelemmy.club · 7 months ago

      We already have transceivers that perform forward error correction. That technology is a decade+ old.

        • blarth@thelemmy.club · 7 months ago

          Dispersion compensation and FEC are separate layers of the cake, and work hand in hand.

          • humbletightband@lemmy.dbzer0.com · 7 months ago

            I don’t understand why, though I don’t have any kind of expertise here.

            I suspect (haven’t read it) that this paper proposes sending much denser, broadened signals around one carrier frequency (they use single mode). Due to dispersion they:

            1. Start to overlap with one another. If you put in more frequencies, you would have more overlaps, and I fail to see how that won’t lead to errors.

            2. All arrive in a broader time window, which again could be mitigated either by error correction or by extending the time window.

            • CileTheSane@lemmy.ca · 7 months ago

              “I haven’t read it, but I assume these are things they didn’t take into account.”

              Okay then.

              • humbletightband@lemmy.dbzer0.com · 7 months ago

                Okay, let’s read and find out whether we can find something that we don’t know.

                1. There’s no paper, there’s no letter; it’s a simple statement on the institute’s page. The way science is communicated nowadays is frustrating.

                2. From the statement:

                However, alongside the commercially available C and L-bands, we used two additional spectral bands called E-band and S-band. Such bands traditionally haven’t been required because the C- and L-bands could deliver the required capacity to meet consumer needs.

                So they indeed broadened the frequency range.

                3. They also did not say anything about limitations. They just pushed this bizarre number everywhere 🤷🏼‍♂️
    • Kazumara@discuss.tchncs.de · 7 months ago

      The zero dispersion wavelength of G.652.D fiber is between 1302 nm and 1322 nm, in the O-band.

      Dispersion pretty much linearly increases as you move away from its zero dispersion wavelength.

      Typical current DWDM systems operate in the range of 1528.38 nm to 1563.86 nm, in the C-band.

      Group dispersion in the E-band and S-band is lower than at current DWDM wavelengths, because these bands sit between the O-band and the C-band.
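This can be checked with the standard ITU-T G.652 dispersion approximation, here using the G.652.D maximum slope S0 = 0.092 ps/(nm²·km) and a zero-dispersion wavelength of 1310 nm (a value inside the quoted 1302-1322 nm range):

```python
def g652_dispersion(wavelength_nm: float, lambda0_nm: float = 1310.0,
                    s0: float = 0.092) -> float:
    """Chromatic dispersion of G.652 fiber, in ps/(nm km).

    ITU-T G.652 approximation: D(l) = (S0/4) * (l - l0**4 / l**3),
    with the slope S0 in ps/(nm^2 km) and wavelengths in nm.
    """
    return (s0 / 4) * (wavelength_nm - lambda0_nm**4 / wavelength_nm**3)

# Dispersion grows with distance from the ~1310 nm zero:
print(round(g652_dispersion(1410), 1))  # E-band, ~8.3
print(round(g652_dispersion(1490), 1))  # S-band, ~13.8
print(round(g652_dispersion(1550), 1))  # C-band, ~17.5
```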

  • Kazumara@discuss.tchncs.de · 7 months ago

    First of all some corrections:

    By constructing a device called an optical processor, however, researchers could access the never-before-used E- and S-bands.

    It’s called an amplifier, not a processor; the Aston University page has it correct. And at least the S-band has seen plenty of use in ordinary CWDM systems, just not amplified. We have at least 20 operational S-band links at 1470 and 1490 nm in our backbone right now. The E-band maybe less so, because the optical absorption peak of water in conventional fiber sits somewhere in the middle of it. You could use it with low-water-peak fiber, but for most people it hasn’t been attractive trying to rent spans of only the correct type of fiber.

    the E-band, which sits adjacent to the C-band in the electromagnetic spectrum

    No, it does not, the S-band is between them. It goes O-band, E-band, S-band, C-band, L-band, for “original” and “extended” on the left side, and “conventional”, flanked by “short” and “long” on the right side.

    Now to the actual meat: This is a cool material science achievement. However in my professional opinion this is not going to matter much for conventional terrestrial data networks. We already have the option of adding more spectrum to current C-band deployments in our networks, by using filters and additional L-band amplifiers. But I am not aware of any network around ours (AS559) that actually did so. Because fundamentally the question is this:

    Which is cheaper:

    • renting a second pair of fiber in an existing cable, and deploying the usual C-band equipment on the second pair,
    • keeping just one pair, and deploying filters and the more expensive, rarer L-band equipment, or
    • keeping just one pair, and using the available C-band spectrum more efficiently with incremental upgrades to new optics?

    Currently, for us, there is enough spectrum still open in the C-band. And our hardware supplier is only just starting to introduce some L-band equipment. I’m currently leaning towards renting another pair being cheaper if we ever get there, but that really depends on where the big buying volume of the market will move.

    Now let’s say people do end up extending to the L-band. Even then I’m not so sure that extending into the E- and S- bands as the next further step is going to be even equally attractive, for the simple reason that attenuation is much lower at the C-band and L-band wavelengths.

    Maybe for subsea cables the economics shake out differently, but the way I understand it, their primary engineering constraint is getting enough power for amplifiers to the middle of the ocean, so more amps and higher attenuation are probably not their favourite things to develop towards either. This is hearsay though; I am not very familiar with their world.