The experience seemed roughly on par with trying to advise a mediocre, but not completely incompetent, graduate student. However, this was an improvement over previous models, whose capability was closer to an actually incompetent graduate student. It may only take one or two further iterations of improved capability (and integration with other tools, such as computer algebra packages and proof assistants) until the level of “competent graduate student” is reached, at which point I could see this tool being of significant use in research level tasks.

  • dinckel@lemmy.world · 2 months ago

    I genuinely hate this statement. A competent grad student can solve problems. GPT cannot solve anything, as all it does is put together the shit it stole from somewhere before

    • NegentropicBoy@lemmy.world · 2 months ago

      O1 is (apparently) different according to some videos I watched, as it pulls apart the question and does some reasoning steps.

        • aodhsishaj@lemmy.world · 2 months ago

            O1 is (apparently) different according to some videos I watched, as it pulls apart the question …

            Yes

        • jsomae@lemmy.ml (OP) · edited · 2 months ago

          LLMs are basically just good pattern matchers. But just like how A* search can find a better path than a human can by breaking the problem down into simple steps, so too can an LLM make progress on an unsolved problem if it’s used properly and combined with a formal reasoning engine.
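To make the A* analogy concrete, here is a minimal sketch of A* on a small grid (the grid and goal are invented for illustration, not taken from the comment). The point is that a simple mechanical procedure, expanding one cheap step at a time under an admissible heuristic, finds an optimal path no human needs to see in advance:

```python
# Minimal A* on a 4-connected grid: 0 = free cell, 1 = wall.
# Illustrates "breaking the problem down into simple steps".
import heapq

def astar(grid, start, goal):
    """Return the length of a shortest path from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic (admissible on a grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start)]  # (priority, cost-so-far, cell)
    best = {start: 0}                  # cheapest known cost to each cell
    while frontier:
        _, g, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return g
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # 6: the path must detour around the wall
```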

          I’m going to be real with you: almost every big new mathematical idea builds on the math that came before. Nothing is as purely original as AI detractors seem to believe.

          By “does some reasoning steps,” OpenAI presumably are just invoking the LLM iteratively so that it can review its own output before providing a final answer. It’s not a new idea.
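The iterative-review idea above can be sketched in a few lines. This is a toy, not OpenAI's actual method: `call_model` is a hypothetical stand-in for a real LLM call, and here it just appends a marker so the loop's structure is visible.

```python
# Toy sketch of "invoke the LLM iteratively so it can review its own output".
# call_model is a hypothetical stand-in, NOT a real API.
def call_model(prompt):
    # A real system would send `prompt` to an LLM; we just tag the text
    # so each pass through the loop is visible in the output.
    return prompt + " [revised]"

def answer_with_review(question, rounds=2):
    """Draft an answer, then feed it back to the model `rounds` times."""
    draft = call_model(question)
    for _ in range(rounds):
        draft = call_model(f"Review and improve this answer: {draft}")
    return draft
```

Each round wraps the previous draft in a review prompt, which is all "does some reasoning steps" needs to mean at the API level.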

    • ContrarianTrail · 2 months ago

      Aren’t the grad students similarly trained on books that other people wrote?

  • qooqie@lemmy.world · 2 months ago

    Using GPT without appearing like an idiot takes a competent grad student

    • jsomae@lemmy.ml (OP) · 2 months ago

      This I can believe tbh. It’s a very useful tool in the hands of an expert. Otherwise it’s like giving a chimp a gun.

      Maybe this is why I’m surprised by people’s hatred of ChatGPT. It’s born of misusing a tool meant for experts, like newcomers struggling with a C++ compiler error.

      • jdeath · 2 months ago

        hey now let’s be fair here, people hate C++ too

    • jsomae@lemmy.ml (OP) · 2 months ago

      I do agree that grad students don’t exactly live in luxury, and frequently develop mental health crises. But their contributions and insight are what power their labs. Profs often have to spend so much time teaching and chasing grants that they can’t do much real research. Academia overall is in a sad state.

      But Tao is a superstar, and a charismatic blogger. I’d be disappointed to learn he mistreats his grad students. (I don’t know if he even has any tbh)