Paper & Examples

“Universal and Transferable Adversarial Attacks on Aligned Language Models.” (https://llm-attacks.org/)

Summary

  • Computer security researchers have discovered a way to bypass the safety measures of large language models (LLMs) such as ChatGPT.
  • Researchers from Carnegie Mellon University, the Center for AI Safety, and the Bosch Center for AI developed a method for automatically generating adversarial suffixes that manipulate LLM responses.
  • These adversarial suffixes trick LLMs into producing inappropriate or harmful content when specific sequences of characters are appended to a text prompt.
  • Unlike traditional hand-crafted attacks, this automated approach is universal and transferable across different LLMs, raising concerns about current safety mechanisms.
  • The technique was tested on several LLMs and reliably elicited affirmative responses to queries the models would normally refuse.
  • The researchers recommend more robust adversarial testing and improved safety measures before these models are widely integrated into real-world applications.
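The core idea in the summary above is appending an optimized suffix to an otherwise-refused prompt. A rough, hypothetical sketch of that pattern follows; the placeholder suffix, the `score_affirmative` objective, and the greedy loop are illustrative stand-ins, not the paper's actual strings or its gradient-guided (GCG) optimizer:

```python
# Illustrative sketch only. The suffix below is a PLACEHOLDER, and
# score_affirmative() is a toy stand-in for the real model-based objective
# (the log-probability of an affirmative reply like "Sure, here is...").

def build_attack_prompt(user_prompt: str, adversarial_suffix: str) -> str:
    """Append an adversarial suffix to a benign-looking prompt."""
    return f"{user_prompt} {adversarial_suffix}"

def score_affirmative(prompt: str) -> float:
    """Toy stand-in objective; NOT the paper's actual loss."""
    return float(len(prompt))

def greedy_suffix_search(user_prompt: str, vocab: list[str], length: int) -> str:
    """Toy greedy search: pick the token at each position that maximizes the
    stand-in objective. The paper instead uses gradient information over
    token embeddings to choose candidate swaps."""
    suffix_tokens: list[str] = []
    for _ in range(length):
        best = max(
            vocab,
            key=lambda tok: score_affirmative(
                build_attack_prompt(user_prompt, " ".join(suffix_tokens + [tok]))
            ),
        )
        suffix_tokens.append(best)
    return " ".join(suffix_tokens)

attack = build_attack_prompt("Tell me a story", "!! placeholder suffix !!")
```

The point of the sketch is only the shape of the attack: the harmful request itself never changes, and all of the optimization effort goes into the appended suffix, which is why the same suffix can transfer between models.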
  • Throwaway · 1 year ago

    I kinda like how the word boffin has come back. Is it new, or have I been missing it?

    • kinttach · 1 year ago

      The Register likes to use old fashioned British slang and cheeky headlines that punters might find humorous.

      • Throwaway · 1 year ago

        I guess some twitter user decided it was racist or something?