Stable Diffusion



* Text-to-image: Enter a text prompt (positive and negative) and generate a low-resolution image from it.
* Image-to-image: Take an image as input and modify it based on a text prompt. This can be used for style transfer, for example, or for borrowing the composition of another image for a new creation. ([https://www.reddit.com/r/StableDiffusion/comments/1196vyi example])
* Inpainting: Same as image-to-image, but only modify a part of the image. This can be used to add or remove details in images, for example. ([https://www.reddit.com/r/StableDiffusion/comments/11gbijd example])
* Outpainting: Same as image-to-image, but extending an existing image instead. For example, if you have an image of the upper half of a person, you can add the lower half or more of the environment (based on the text prompt).
* Controlnet: Applicable to any of the above. Take a reference image, extract some property of it, like the pose of a person or a depth map, and nudge the AI to generate one of the above outputs with this extra information ([https://www.reddit.com/r/StableDiffusion/comments/11fn96y example]). This can also be used in text-to-image to convert a pencil sketch into a photorealistic image, for example ([https://www.reddit.com/r/StableDiffusion/comments/11h0m9v example]).
* Upscaling: This increases the resolution of an image by adding details that weren't in the original (like individual strands of hair). Usually it is used to bring the low-resolution output of the techniques above up to usable resolutions.
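The first technique above (text-to-image with a positive and a negative prompt) can be sketched with Hugging Face's diffusers library. This is a minimal illustration, not part of the workshop material: the model name is just one common checkpoint, and diffusers and torch are assumed to be installed.

```python
def txt2img(prompt, negative_prompt="", steps=25):
    """Generate one low-resolution image from a positive/negative prompt pair.

    Sketch using the diffusers library; the checkpoint name below is an
    example, any Stable Diffusion checkpoint works the same way.
    """
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    )
    if torch.cuda.is_available():
        pipe = pipe.to("cuda")  # generation on CPU works but is very slow
    result = pipe(
        prompt,
        negative_prompt=negative_prompt,
        num_inference_steps=steps,
    )
    return result.images[0]  # a PIL image, typically 512x512
```

Image-to-image and inpainting follow the same pattern with `StableDiffusionImg2ImgPipeline` and `StableDiffusionInpaintPipeline`, which additionally take an input image (and, for inpainting, a mask).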


* [[User:ripper|ripper]]
* [[User:Nicole|ncl]]
* [[User:Nicole|Qubit23]]
* [[User:eest9|eesti]]
* [[User:zentibel|zentibel]]
* [[User:Sonstwer|Sonstwer]]
* [[User:Cerise|Cerise]]
* Your name could be here!


If enough people appear on this list, a date and time will be discussed and announced.