• Oiconomia@feddit.de (OP)

    Would like to know that as well. I just stole the meme from a non-fediverse meme site.

    • awesomesauce309@midwest.social

      It’s probably Stable Diffusion. I use ComfyUI since you can watch the sausage get made, but there are also other UIs like AUTOMATIC1111. There’s a ControlNet, originally made as a QR-pattern beautifier, that takes a two-tone black-and-white “guide” image, but you can guide it to follow any image you feed it, such as a meme edited to be black and white, or text like “GAY SEX.”
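
      A minimal sketch of that workflow with the Hugging Face diffusers library (the ControlNet checkpoint, prompt, and file names below are placeholders I picked for illustration, not whatever the meme’s creator actually used):

      ```python
      # Rough sketch: Stable Diffusion + a ControlNet guided by a two-tone image.
      # Model IDs, prompt, and file names are placeholders -- swap in the
      # QR-pattern (or canny/scribble) ControlNet checkpoint you actually use.
      import torch
      from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
      from diffusers.utils import load_image

      controlnet = ControlNetModel.from_pretrained(
          "lllyasviel/sd-controlnet-canny",        # placeholder ControlNet
          torch_dtype=torch.float16,
      )
      pipe = StableDiffusionControlNetPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5",        # base Stable Diffusion model
          controlnet=controlnet,
          torch_dtype=torch.float16,
      ).to("cuda")

      guide = load_image("black_and_white_meme.png")   # the two-tone "guide" image
      result = pipe(
          "a lush forest landscape, golden hour, highly detailed",
          image=guide,
          num_inference_steps=30,
          controlnet_conditioning_scale=1.2,       # how strongly to follow the guide
      ).images[0]
      result.save("out.png")
      ```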

      • SzethFriendOfNimi@lemmy.world

        Looks like somebody created an outline mask and then used that in img2img with a prompt for the particular scenery.

        I remember seeing somebody use that technique to generate the same pose and model but with different colored outfits.

        Maybe a Canny map like the one used here (rough sketch after the link):

        https://youtu.be/8cVnooYgpDc?t=13m
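
        If it is a Canny-style guide, getting the edge map is only a few lines with OpenCV; the file name and thresholds here are made up for illustration:

        ```python
        # Illustrative only: build a Canny edge map to use as a ControlNet guide.
        import cv2
        import numpy as np
        from PIL import Image

        img = np.array(Image.open("pose_reference.png").convert("RGB"))
        gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
        edges = cv2.Canny(gray, 100, 200)              # low/high hysteresis thresholds
        edges = np.stack([edges] * 3, axis=-1)         # ControlNet expects 3 channels
        Image.fromarray(edges).save("canny_guide.png") # feed this as the control image
        ```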

      • simplylemons

        I’ve used InvokeAI (Stable Diffusion models) with ControlNets for this too; it’s a bit easier to use than Comfy/A1111 but not as powerful IMHO.

    • CodeInvasion@sh.itjust.works

      This is done by combining a diffusion model with a ControlNet. As long as you have a decently modern Nvidia GPU and familiarity with Python and PyTorch, it’s relatively simple to create your own model.

      The ControlNet paper is here: https://arxiv.org/pdf/2302.05543.pdf

      I implemented this paper back in March. It’s as simple as it is brilliant. By using methods originally intended to adapt large pre-trained language models to a specific application, the authors created a new model architecture that can better control the output of a diffusion model.
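
      A very rough PyTorch sketch of the core trick as I read the paper (my own paraphrase, not the authors’ code; the block and channel names are made up): the pretrained block is frozen, a trainable copy processes the conditioning signal, and zero-initialized 1x1 convolutions gate the copy so training starts from the unmodified model.

      ```python
      # Paraphrase of the ControlNet idea (arXiv:2302.05543), not the authors' code.
      import copy
      import torch
      import torch.nn as nn

      def zero_conv(channels: int) -> nn.Conv2d:
          # 1x1 conv initialized to zero, so the control branch contributes
          # nothing at the start of training.
          conv = nn.Conv2d(channels, channels, kernel_size=1)
          nn.init.zeros_(conv.weight)
          nn.init.zeros_(conv.bias)
          return conv

      class ControlledBlock(nn.Module):
          def __init__(self, pretrained_block: nn.Module, channels: int):
              super().__init__()
              self.frozen = pretrained_block                 # original weights, locked
              for p in self.frozen.parameters():
                  p.requires_grad_(False)
              self.trainable_copy = copy.deepcopy(pretrained_block)
              self.zero_in = zero_conv(channels)             # gates the control signal in
              self.zero_out = zero_conv(channels)            # gates the copy's output

          def forward(self, x: torch.Tensor, control: torch.Tensor) -> torch.Tensor:
              # Both zero convs output zeros at init, so this starts out identical
              # to the frozen pretrained block.
              return self.frozen(x) + self.zero_out(
                  self.trainable_copy(x + self.zero_in(control))
              )
      ```

      In the full model this wrapping is applied to the U-Net’s encoder blocks, and the conditioning image (the edge map, QR pattern, etc.) is first run through a small convolutional encoder.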