• koreth · 1 year ago

    ChatGPT is certainly no good at a lot of aspects of storytelling, but I wonder how much the author played with different prompts.

    For example, if I go to GPT-4 and say, “Write a short fantasy story about a group of adventurers who challenge a dragon,” it gives me a bog-standard, trope-ridden fantasy story: the standard adventuring party goes into a cave, fights the dragon, kills it, and returns with gold.

    But then if I say, “Do it again, but avoid using fantasy tropes and cliches,” it generates a much more interesting story. Not sure about the etiquette of pasting big blocks of ChatGPT text into Lemmy comments, but the setting turned from generic medieval Europe into more of a weird steampunk-like environment, and the climax of the story was the characters convincing the dragon that it was hurting people and should stop.
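
    For anyone who wants to reproduce that two-step flow outside the web UI, here is a minimal sketch using the pre-1.0 openai Python package; the prompts are the ones quoted above, while the API key and model name are placeholders.

    import openai  # pre-1.0 openai package, contemporary with this thread

    openai.api_key = "sk-..."  # placeholder; assumes an OpenAI API key

    # First pass: the open-ended request that yields the bog-standard story.
    messages = [{
        "role": "user",
        "content": "Write a short fantasy story about a group of adventurers who challenge a dragon.",
    }]
    first = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    story = first.choices[0].message.content

    # Second pass: keep the first story in context and steer away from cliches.
    messages += [
        {"role": "assistant", "content": story},
        {"role": "user", "content": "Do it again, but avoid using fantasy tropes and cliches."},
    ]
    second = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    print(second.choices[0].message.content)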

    • lars@programming.dev · 1 year ago (edited)

      Yeah, I remember when GPT-3 first became available (before ChatGPT) and people found that you could get better results simply by asking it to be better. Someone asked it to predict the end of a story, then tried again but told it to be a super genius instead, and it did a much better job.

      Like, by default it’s predicting the output of an average person, but it also knows how to predict above-average people.
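
      A rough sketch of that trick against the old completions endpoint, again assuming the pre-1.0 openai package; the framing sentence and model name are illustrative, not what that person actually used.

      import openai  # pre-1.0 openai package

      openai.api_key = "sk-..."   # placeholder
      story_so_far = "..."        # the story to be continued (elided here)

      # Same request twice: once with default framing, once as a "super genius".
      for framing in ("", "You are a super genius author. "):
          completion = openai.Completion.create(
              model="text-davinci-003",
              prompt=framing + "Predict how this story ends:\n\n" + story_so_far,
              max_tokens=300,
          )
          print(completion.choices[0].text.strip())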

    • th3raid0r@tucson.social · 1 year ago

      “Not sure about the etiquette of pasting big blocks of ChatGPT text into Lemmy comments”

      Speaking as a server owner of a different instance: text is infinitely less expensive to host than images, so I say go ham! It’s better than posting screenshots of the ChatGPT convo. But I don’t know the Beehaw mod team’s opinion on that.

  • th3raid0r@tucson.social · 1 year ago

    I dunno what this GM is doing, but I find that ChatGPT (GPT-4 particularly) does wonderfully as long as you clearly define what you are doing up front and remember that context can “fall off” in longer threads.

    Anyways, here’s a paraphrasing of my typical prompt template:

    I am running a tabletop RPG game in the {{SYSTEM}} system, in the {{WORLD SETTING}} universe, specifically set before|after|during {{WORLD SETTING DETAILED}}.
    
    The players are a motley crew that includes:
    
    {{ LIST OF PLAYERS AND SHORT DESCRIPTIONS }}
    
    The party is currently at {{ PLACE }} - {{ PLACE DETAILS }}
    
    At present the party is/has {{ GAME CONTEXT / LAST GAMES SUMMARY }}
    
    I need help with:
    
    {{ DETAILED DESCRIPTION OF TASK FOR CHAT GPT }}
    

    It can get pretty long, but it seems to do the trick for the first prompt. Responses can be more conversational after that, at least until it forgets details, which takes a while on GPT-4.
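
    If you wanted to fill that template programmatically instead of by hand, a small helper like the following would do it; the function and every example value below are invented for illustration.

    # Hypothetical helper that renders the template above into the first message
    # of a new GPT-4 conversation. Every field value in the example call is made up.
    def build_first_prompt(system, setting, setting_detail, players, place,
                           place_details, context, task):
        player_lines = "\n".join(f"- {name}: {desc}" for name, desc in players)
        return (
            f"I am running a tabletop RPG game in the {system} system, "
            f"in the {setting} universe, set {setting_detail}.\n\n"
            f"The players are a motley crew that includes:\n{player_lines}\n\n"
            f"The party is currently at {place} - {place_details}\n\n"
            f"At present the party {context}\n\n"
            f"I need help with:\n{task}"
        )

    print(build_first_prompt(
        system="Pathfinder 2e",
        setting="Golarion",
        setting_detail="during the aftermath of a failed rebellion",
        players=[("Ezren", "a human wizard, endlessly curious"),
                 ("Amiri", "a barbarian with a sword far too large for her")],
        place="Otari",
        place_details="a small lumber town on the coast",
        context="has just cleared the first level of a haunted lighthouse",
        task="three rumors the party might overhear at the tavern tonight",
    ))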

  • Pekka@feddit.nl · 1 year ago

    I guess it makes a lot of sense for a bot that predicts the most likely response to generate generic fantasy worlds. I think a bot DM would work a lot better if it had access to tables of tropes, environments, monsters, and other elements, and could roll or pick from those to create the story.
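
    As a toy illustration of the roll-on-tables idea (every table entry here is invented):

    import random

    # Tiny lookup tables the bot DM could roll on instead of free-associating
    # the statistically likeliest setting. Entries are placeholders.
    TROPES = ["reluctant hero", "ancient prophecy", "betrayal by an ally", "talking sword"]
    ENVIRONMENTS = ["sunken library", "steampunk foundry", "floating night market", "glass desert"]
    MONSTERS = ["rust dragon", "clockwork hydra", "mirror wraith", "bog tyrant"]

    def roll(table):
        """Pick one entry, the way a GM rolls a die against a printed table."""
        return random.choice(table)

    prompt = (
        f"Write a short fantasy scene set in a {roll(ENVIRONMENTS)}, "
        f"featuring a {roll(MONSTERS)}, built around the trope of a {roll(TROPES)}."
    )
    print(prompt)  # hand this to the language model instead of an open-ended request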

    In the same way, combat should probably be handled by code written specifically for that purpose, similar to video games. If a bot DM were developed like that, it would probably do much better.
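
    And a bare-bones example of combat resolved in ordinary code rather than by the model; the d20-style numbers are generic placeholders, not any particular system’s rules.

    import random

    def attack(attack_bonus, armor_class, damage_die=8, damage_bonus=2):
        """Resolve one attack deterministically in code, video-game style."""
        roll = random.randint(1, 20)
        if roll == 1:
            return "critical miss"
        if roll < 20 and roll + attack_bonus < armor_class:
            return f"miss (rolled {roll})"
        damage = random.randint(1, damage_die) + damage_bonus
        if roll == 20:
            damage += random.randint(1, damage_die)  # simple crit: add one extra die
        return f"hit for {damage} damage (rolled {roll})"

    print(attack(attack_bonus=5, armor_class=15))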