I just installed Stable Diffusion on my homelab and didn't change anything, but the outputs are really bad. I tried following tutorials, but my install just outputs really weird stuff.

What am I doing wrong?

    • SweetAIBelle@kbin.social · 1 year ago

      I'd also suggest that using a different model (Deliberate v11, in this case), changing the prompt a bit, and using a negative embedding can help, even at 256x256.

      [AI-generated image: photograph of a woman with brown hair]

      Prompt: photograph of a woman with brown hair, portrait, masterpiece, trending on artstation
      Negative prompt: easynegative
      Steps: 20 | Sampler: Euler a | CFG scale: 7 | Seed: 2156939601 | Size: 256x256 | Model hash: 57d103206a | Model: deliberate_v11-pruned | VAE: vae-ft-mse-840000-ema-pruned | Clip skip: 1 | Version: 8749067 | Token merging ratio: 0.5 | Parser: Full parser

      Used embeddings: easynegative [119b]
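If you're scripting or logging generations, the metadata block above maps onto a simple parameter set. Here's a minimal sketch in plain Python (no Stable Diffusion dependencies) that rebuilds an A1111-style "parameters" text from a dict — the values are copied straight from the comment above, and `format_parameters` is just an illustrative helper, not part of any real tool:

```python
# Generation settings copied from the comment above (A1111 metadata style).
settings = {
    "Steps": 20,
    "Sampler": "Euler a",
    "CFG scale": 7,
    "Seed": 2156939601,
    "Size": "256x256",
    "Model": "deliberate_v11-pruned",
    "VAE": "vae-ft-mse-840000-ema-pruned",
    "Clip skip": 1,
}

def format_parameters(prompt: str, negative_prompt: str, settings: dict) -> str:
    """Render prompt + settings as an A1111-style parameters block.

    Illustrative helper only; real UIs write this format themselves.
    """
    line = " | ".join(f"{k}: {v}" for k, v in settings.items())
    return f"{prompt}\nNegative prompt: {negative_prompt}\n{line}"

params = format_parameters(
    "photograph of a woman with brown hair, portrait, masterpiece, "
    "trending on artstation",
    "easynegative",
    settings,
)
print(params)
```

Keeping settings in a dict like this makes it easy to re-run the exact same generation later (same seed, same sampler) when comparing models or resolutions.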

  • Stampela@startrek.website · 1 year ago

    Too low a resolution and not the best sampler are the main problems. Try again with those exact settings, same seed and same prompt, but at 512x512, and you'll get a much better result.

      • Fubarberry@lemmy.fmhy.ml · 1 year ago

        DDIM is great both for fast drafts and for final results on non-realistic images. You can set DDIM to 12 steps and get pretty solid results, which is great when you want to quickly try prompt variations or hunt for good seeds. I usually use 20 steps for a final image.

        DPM++ is the best I've tried for realistic results, but it's slower per step than DDIM and needs 35-40 steps to look good, imo.
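To make that trade-off concrete, here's a rough back-of-the-envelope comparison in plain Python. Only the step counts come from the comment above; the per-step timings are made-up placeholders to illustrate the arithmetic, not measured benchmarks — actual speed depends entirely on your hardware:

```python
# Step counts from the comment above; per-step times are HYPOTHETICAL
# placeholders to illustrate the trade-off, not real benchmarks.
samplers = {
    "DDIM (draft)":  {"steps": 12, "sec_per_step": 0.5},
    "DDIM (final)":  {"steps": 20, "sec_per_step": 0.5},
    "DPM++ (final)": {"steps": 40, "sec_per_step": 0.7},  # slower per step
}

def total_seconds(cfg: dict) -> float:
    """Total wall time = steps x per-step cost."""
    return cfg["steps"] * cfg["sec_per_step"]

for name, cfg in samplers.items():
    print(f"{name}: ~{total_seconds(cfg):.1f}s per image")
```

Even with rough numbers, the pattern holds: a 12-step DDIM draft pass is several times cheaper per image than a 35-40 step DPM++ final pass, which is why drafting seeds with a fast sampler first saves so much time.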