I’m so tired.

  • 15 Posts
  • 163 Comments
Joined 1 year ago
Cake day: July 3rd, 2023

  • Seriously, what kind of reply is this? You ignore everything I said except the literal last thing, and even then it’s weasel words. “Using agential language for LLMs is wrong, but it works.”

    Yes, Curtis, prompting the LLM with language more similar to its training data results in more plausible text prediction in the output. Why is that? Because it’s more natural: there’s not a lot of training data on querying a program about its inner workings, so the response is less like natural language.

    But you’re not actually getting any insight. You’re just improving the verisimilitude of the text prediction.
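
    To make that concrete, here’s a toy sketch (a tiny bigram counter I made up, not any real LLM): a prompt that looks like the training data gets continuations backed by actual counts, while a prompt about the model’s own internals has next to no support, so whatever comes back is just plausible-looking filler.

    ```python
    # Toy bigram "model" (purely illustrative, nothing to do with a real
    # LLM): a prompt that resembles the training text gets continuations
    # backed by actual counts; a prompt about the model's own internals
    # has no support, so there is nothing real for it to condition on.
    from collections import Counter, defaultdict

    corpus = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog . the dog chased the cat ."
    ).split()

    bigrams = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        bigrams[a][b] += 1

    def next_word_distribution(word):
        counts = bigrams.get(word)
        if not counts:
            return {}  # never seen in training: no data-backed continuation
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    print(next_word_distribution("cat"))      # in-distribution: every option is data-backed
    print(next_word_distribution("weights"))  # "explain your weights": {} -- no support
    ```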


  • Got it, because the output you saw from doing this looks really really plausible. Disappointing, but what other answer could it have been?

    Here’s a story for you: a scientist cannot get his papers published. In frustration, he complains to his co-worker, “I have detailed charts on the different types and amounts of offerings to the idol, and the correlations with results on prayers answered. I think this is a really valuable contribution to understanding how to beseech the gods for intervention in our lives; this will help people! Why won’t they publish my work?”

    His co-worker replies, “Certainly! As a large language model I can see how that would be a frustrating experience. Here are five common reasons that research papers are rejected for publication.”


  • You’re not just confident that asking chatGPT to explain its inner workings works exactly like a --verbose flag; you’re so sure that’s what’s happening that it apparently does not occur to you to explain why you think the output is not just more plausible text prediction based on its training weights, with no particular insight into the chatGPT black box.

    Is this confidence from an intimate knowledge of how LLMs work, or because the output you saw from doing this looks really, really plausible? Try and give an explanation without projecting agency onto the LLM, as you did with “explain carefully why it rejects”.
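
    Here’s a toy sketch of the difference (a made-up sampler, not any real LLM API): a --verbose flag prints the program’s actual internal state, while “explain carefully why you rejected it” just hands the model a new prompt and samples more tokens from the same mechanism that produced the first answer.

    ```python
    # Toy sketch (made-up sampler, not any real LLM or API): the "answer"
    # and the "explanation of the answer" are produced by the same
    # next-token loop. Nothing here can read internal state the way a
    # real --verbose flag dumps a program's actual execution path.
    import random

    VOCAB = ["the", "model", "rejects", "prompts", "because", "policy",
             "training", "data", "says", "so", "."]

    def sample_next_token(context):
        # Stand-in for a forward pass: the choice depends only on the
        # visible context, not on any privileged view of "why" earlier
        # tokens were produced.
        rng = random.Random(hash(tuple(context)))
        return rng.choice(VOCAB)

    def generate(prompt, n_tokens=12):
        context = prompt.split()
        for _ in range(n_tokens):
            context.append(sample_next_token(context))
        return " ".join(context)

    # Asking "why" just means running the same generator on a new prompt.
    print(generate("Why was my request rejected?"))
    print(generate("Explain carefully why you rejected my request."))
    ```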


  • These videos are, of course, suspiciously cut to avoid showing all the times it completely fucked up, and they still show the engine completely fucking up:

    • “This door requires a blue key” stays on screen forever
    • the walls randomly get bullet damage for no reason
    • the imp teleports around, getting lost in the warehouse brown
    • the level geometry fucks up and morphs
    • it has no idea how to apply damage floors
    • enemies resurrect randomly because how do you train the model to know about arch-viles and/or Nightmare difficulty
    • finally: it seems like the player cannot die, because I bet it was trained on demos of successful runs of levels and not on the player dying.

    The training data was definitely stolen from https://dsdarchive.com/, right?

    It’s interesting that the only real “hallucination” I can see in the video pops up when the player shoots an enemy, which results in some blurry feedback animations.

    Well, good news for the author, it’s time for him to replay doom because it’s clearly been too long.


  • Calls it a “hostile takeover” even though he literally explains why it wasn’t a hostile takeover: developers were way behind schedule and not making progress, Star Theory leadership tried to hold T2 hostage with the project, and T2 called their bluff and cancelled the contract. They then offered developers a transfer to the new studio. Some developers wanted a pay raise or didn’t transfer for other reasons.

    This seems like bad faith rules lawyering. The publisher didn’t literally buy out the studio; they just withheld funding, made a new studio they owned, and forced everyone over. They took over; it was hostile.