• barsoap · 1 year ago

    I mean, interest groups lobby, news at 11, and I don’t really see anything wrong with what they lobbied for.

    The actual watering down that happened is states carving out exceptions for state use, think public surveillance and stuff.

    Overall it’s still too early to tell what the act will contain in the end. This was only the first reading in parliament; there’s still going to be back and forth with the council.

    • geissi@feddit.de · 1 year ago

      That’s exactly what I was about to write.
      Interest groups representing their interests is completely normal.

      What the reporting and headline should focus on is how much legislators let themselves be influenced by lobbyists.

      Still, OpenAI’s lobbying effort appears to have been a success: the final draft of the Act approved by E.U. lawmakers did not contain wording present in earlier drafts suggesting that general purpose AI systems should be considered inherently high risk.

      • barsoap · 1 year ago

        Define “general purpose”. There’s no AGI yet and if there was it’d be able to lobby for its own human rights.

        As to ChatGPT in particular: no, it’s not inherently high risk. Me asking it to rewrite my cookie recipe and add some sentimental fluff might lead to it messing up the instructions, or sounding silly, or both, but that’s about it. The high-risk category is for stuff like CV scanners, where you have to make sure the model isn’t applying discriminatory bias inferred from its training data, such as “has a foreign-sounding name -> don’t hire”.

        You can use ChatGPT as an ingredient in such a scanner, and if you do, then ChatGPT has to get certified for high-risk uses; if you don’t, it doesn’t.
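
        To make that distinction concrete, here’s a minimal sketch of what “ChatGPT as an ingredient in a CV scanner” could look like. Every name in it is made up for illustration; the point is that the screening pipeline around the model, not the chat model on its own, is what does the high-risk thing of feeding a hiring decision:

        ```python
        # Hypothetical sketch, not any real product's API: an LLM used as one
        # ingredient in a CV-screening pipeline. The pipeline, not the LLM alone,
        # makes the hiring decision, so it is the part that would need
        # high-risk certification and bias auditing.

        def llm_summarize(cv_text: str) -> str:
            """Stand-in for a call to a general-purpose LLM."""
            return cv_text[:200]  # placeholder; a real call would return a summary

        def score_candidate(cv_text: str) -> float:
            """High-risk step: its output feeds a hiring decision, so this is
            where bias like "foreign-sounding name -> don't hire" gets audited."""
            summary = llm_summarize(cv_text)
            keywords = ["python", "testing", "teamwork"]  # naive scoring for the sketch
            return sum(kw in summary.lower() for kw in keywords) / len(keywords)

        if __name__ == "__main__":
            print(score_candidate("Experienced Python developer, strong on teamwork."))
        ```

        In that setup the certification burden lands on whoever operates score_candidate, which is the point: the risk class follows the use, not the model.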

        • geissi@feddit.de · 1 year ago

          My comment was less about dangers of ‘AI’ and more about the influence of lobbyists on legislators.

          That said, I do see a risk in chat bots that sound convincing.
          People are inherently stupid and gullible; they have a natural tendency to take what they are told as fact if they don’t already know better.
          We already have a massive misinformation problem, and it could be further exacerbated if chat bots can generate massive volumes of convincing-sounding nonsense.

          • barsoap · 1 year ago

            That’s why anything with human interaction is classed as “limited risk” and comes with transparency requirements, that is, ChatGPT needs to tell you that it’s an LLM.

            Then, well, humans are already plenty capable of generating massive volumes of convincing sounding nonsense. Whether you hire ChatGPT or morally flexible Indian call centres doesn’t really make much of a difference.

    • Perry@kbin.social · 1 year ago

      I think lobbying as a term has been heavily tainted by American politics, where it appears to be borderline bribery.

      Interest groups arguing their cause is exactly how I expect it to work.

      I find it ironic how many people seem to be complaining about lobbying while simultaneously complaining about the EU not listening to the industry enough.

      • barsoap · 1 year ago

        …and?

        I can be pro regulation of the apple farming industry and still grow apples and argue that green ones shouldn’t be outlawed.

  • bad_alloc@feddit.de · 1 year ago

    I suspect their main goal is to legally mandate that people need a “license” to develop AI systems. Aside from the obvious practical issues, the main reason behind this is to build a moat around their product: they know open-source AI is a danger to their business model, so this is an attempt to squash it. Current systems do have dangerous abilities, but those concern things like privacy. Laws should prohibit certain uses for both human and non-human actors (facial recognition at protests, for example).