• @NeoNachtwaechter@lemmy.world
    English
    5
    5 months ago

    Once we finally have some rules/laws that AIs need to adhere to, won’t we someday also need to define what to do with AIs that do not adhere to those laws?

    Shoot them?

    Delete them?

    Put them in jail?

    Forbid them from entering our country?

    Take away their money?

    • @Send_me_nude_girls@feddit.de
      English
      1
      5 months ago

      None of that is possible with FOSS AI code once it’s out there on the web. There will only be guidelines for AI that is available to the public and for companies using AI in their products; the more tech-savvy people will be unaffected.

      • @NeoNachtwaechter@lemmy.world
        English
        1
        5 months ago

        None of that is possible

        That is not enough. Think harder.

        Today’s existing AIs are child’s play, but it’s not going to stay that way for long.

        One day it will be necessary to do something for real, when some AI is causing harm to the public (regardless of whether a person intended it or not), and we need to decide what to do then.

        • @Send_me_nude_girls@feddit.de
          English
          0
          edit-2
          5 months ago

          We already have trouble stopping people from believing fake news in written form. I don’t see how we can stop people from believing well-made fake news with audio and video.

          Personally, I think every country needs some form of government-independent news media, to at least have some source of information available that is broadly trustworthy.

          Everything profit-oriented will end up propagating misinformation as long as it generates clicks.

          Oh, and don’t let AI control weapons; that’s the worst mistake one can make. We can’t even manage self-driving cars, let alone a drone with mass-casualty weapons.

          Punishment won’t reflect the complexity anymore. Say some 14-year-old creates a fake video of the president declaring war, a real war breaks out because it goes viral, and millions die. Is this 14-year-old now going to prison for life? Would a 16- or 18-year-old? What I’m trying to say is that the barrier is totally different from picking up a gun and shooting someone. A simple bad day or a stupid childish joke will soon have the power of a well-planned and expensive propaganda campaign.

          Blocking commercial products from allowing certain actions could be a start, but it’s not a complete fix. Say, an AI filter for the faces of public figures, or keyword filters for LLMs/chatbots. Not perfect, but better than nothing.
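          The keyword filter idea could look something like this minimal sketch (purely hypothetical; the blocklist and function names are made up, not any real product’s moderation API):

```python
# Hypothetical keyword filter for a chatbot, as a toy illustration of the
# "better than nothing" commercial safeguards mentioned above.
BLOCKED_TOPICS = {"public figure deepfake", "declare war"}

def filter_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a blocked topic and should be refused."""
    lowered = prompt.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

print(filter_prompt("Make a video of the president to declare war"))  # True
print(filter_prompt("what is the weather today"))                     # False
```

          A trivial substring match like this is easy to evade (misspellings, paraphrases), which is exactly why the comment calls such filters "not perfect but better than nothing."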

          AI is very broad; you could put almost everything involving software under that topic too. It’s also not easy to define what is AI and what isn’t. A rule-based system is already some form of dumb AI. So every law affects pretty much everything else as well.
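          To see why even a rule-based system blurs the definition, here is a toy example (all names invented for illustration): a handful of if/then rules already "decides" things automatically, so a law defining AI by automated decision-making would sweep it in.

```python
# A toy rule-based "dumb AI": ordered condition/action rules for routing
# messages. No learning involved, yet it makes automated decisions.
RULES = [
    (lambda msg: "refund" in msg, "route_to_billing"),
    (lambda msg: "crash" in msg, "route_to_support"),
]

def classify(msg: str) -> str:
    """Apply the first matching rule; fall back to a human otherwise."""
    lowered = msg.lower()
    for condition, action in RULES:
        if condition(lowered):
            return action
    return "route_to_human"
```

          Whether regulation should cover systems like this, or only statistical/learned models, is precisely the definitional problem the comment points at.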

          I’m pretty sure we’ll get a shitload of unprepared governments creating all sorts of surveillance laws. An international organisation could prevent the worst of it.

          We’d better start educating people yesterday on how AI works, the consequences, and the ways to avoid blind actions. Excuse me, we have a climate to save…

  • AutoTL;DR (bot)
    English
    5
    5 months ago

    This is the best summary I could come up with:


    LONDON (AP) — Hailed as a world first, European Union artificial intelligence rules are facing a make-or-break moment as negotiators try to hammer out the final details this week — talks complicated by the sudden rise of generative AI that produces human-like work.

    But the process has been bogged down by a last-minute battle over how to govern systems that underpin general purpose AI services like OpenAI’s ChatGPT and Google’s Bard chatbot.

    Chaos last month at Microsoft-backed OpenAI, which built one of the most famous foundation models, GPT-4, reinforced for some European leaders the dangers of allowing a few dominant AI companies to police themselves.

    “At least things are now clear” that companies like OpenAI defend their businesses and not the public interest, European Commissioner Thierry Breton told an AI conference in France days after the tumult.

    Foundation models, used for a wide range of tasks, are proving the thorniest issue for EU negotiators because regulating them “goes against the logic of the entire law,” which is based on risks posed by specific uses, said Iverna McGowan, director of the Europe office at the digital rights nonprofit Center for Democracy and Technology.

    Countries want an exemption so law enforcement can use it to find missing children or terrorists, but rights groups worry that will effectively create a legal basis for surveillance.


    The original article contains 1,119 words, the summary contains 221 words. Saved 80%. I’m a bot and I’m open source!

    • @GiddyGap (OP)
      English
      1
      5 months ago

      They do have world-leading AI companies like Mistral and Aleph Alpha.

      And, yes, we do need rules.