Berlin-based business consultant Matt and his colleague were among the first at their workplace to discover ChatGPT, mere weeks after its release. He says the chatbot transformed their workdays overnight. “It was like discovering a video game cheat,” says Matt. “I asked a really technical question from my PhD thesis, and it provided an answer that no one would be able to find without consulting people with very specific expertise. I knew it would be a game changer.”

Day-to-day tasks in his fast-paced environment – such as researching scientific topics, gathering sources and producing thorough presentations for clients – suddenly became a breeze. The only catch: Matt and his colleague had to keep their use of ChatGPT a closely guarded secret. They accessed the tool covertly, mostly on work-from-home days.

“We had a significant competitive advantage over our colleagues – our output was so much faster and they couldn’t comprehend how. Our manager was very impressed and spoke about our performance with senior management,” he says.

Whether the technology is explicitly banned, strongly frowned upon or quietly giving some workers a covert edge, some employees are searching for ways to keep using generative AI tools discreetly. The technology is increasingly becoming an employee backchannel: in a February 2023 study by professional social network Fishbowl, 68% of the 5,067 respondents who used AI at work said they did not disclose that use to their bosses.

Even in instances without workplace bans, employees may still want to keep their use of AI hidden, or at least guarded, from peers. “We don’t have norms established around AI yet – it can initially look like you’re conceding you’re not actually that good at your job if the machine is doing many of your tasks,” says Johnson. “It’s natural that people would want to conceal that.”

As a result, forums are popping up where workers swap strategies for keeping a low profile. In communities like Reddit, many people look for ways to secretly circumvent workplace bans, either through high-tech solutions (integrating ChatGPT into a native app disguised as a workplace tool) or rudimentary ones that simply obscure usage (adding a privacy screen, or discreetly accessing the technology on their personal phone at their desk).

  • Seraphin 🐬@pawb.social · 8 months ago

    The article: “I asked a really technical question from my PhD thesis, and it provided an answer that no one would be able to find without consulting people with very specific expertise.”

    Me waiting for the part where the AI hallucinated and he got an F on his thesis:

    I mean, it might have worked out, but you have to be very careful when asking LLMs for factual information. I’ve tried it at my work and it gave me info that contradicted what experts had told me.

    • Ranvier@sopuli.xyz · 8 months ago

      What kind of PhD thesis? I’m kind of shocked they would say that. I’m in a scientific field, and it falls flat on its face whenever I start to get relatively subspecialized. Not that I haven’t found some uses for it.

      • lemmyvore@feddit.nl · 8 months ago

        I believe this is a person who already has a PhD they defended successfully (presumably by doing all the work themselves, if they’ve only just discovered ChatGPT). So we’re talking about an advanced person who used their thesis subject matter to check ChatGPT’s abilities, not to produce their thesis.

        Which is exactly the type of user that can benefit the most from ChatGPT because they can verify its output… but YMMV wildly if you’re a beginner in whatever field you’re querying.

    • GenderNeutralBro@lemmy.sdf.org · 8 months ago

      I was very surprised by this part as well. ChatGPT isn’t great when it comes to niche subject matter. I feel like this must be an exaggeration, or perhaps he didn’t really validate the results.

      Or maybe he just got lucky.

    • hayes_@sh.itjust.works · 8 months ago

      It will also contradict itself and make the same mistakes even if you point them out.

      Can be useful as a starting point, but you basically need to fact-check everything it says.