**`.github/ISSUE_TEMPLATE/bug_report.yml`** (+3 -3)
```diff
@@ -40,7 +40,7 @@ body:
   - type: dropdown
     id: os
     attributes:
-      label: Where are you running the webui?
+      label: Where are you running the webui?
       multiple: true
       options:
         - Windows
@@ -52,7 +52,7 @@ body:
     attributes:
       label: Custom settings
       description: If you are running the webui with specific settings, please paste them here for reference (like --nitro)
-      render: shell
+      render: shell
   - type: textarea
     id: logs
     attributes:
@@ -66,4 +66,4 @@ body:
       description: By submitting this issue, you agree to follow our [Code of Conduct](https://docs.github.com/en/site-policy/github-terms/github-community-code-of-conduct)
       options:
         - label: I agree to follow this project's Code of Conduct
```
**`README.md`**

```diff
 * K-Diffusion Samplers: A great collection of samplers to use, including:
-
+
   - `k_euler`
   - `k_lms`
   - `k_euler_a`
```
```diff
@@ -95,8 +95,8 @@ An easy way to work with Stable Diffusion right from your browser.
 To give a token (tag recognized by the AI) a specific or increased weight (emphasis), add `:0.##` to the prompt, where `0.##` is a decimal that will specify the weight of all tokens before the colon.
 Ex: `cat:0.30, dog:0.70` or `guy riding a bicycle :0.7, incoming car :0.30`
 
-Negative prompts can be added by using `###`, after which any tokens will be seen as negative.
-Ex: `cat playing with string ### yarn` will negate `yarn` from the generated image.
+Negative prompts can be added by using `###`, after which any tokens will be seen as negative.
+Ex: `cat playing with string ### yarn` will negate `yarn` from the generated image.
 
 Negatives are a very powerful tool to get rid of contextually similar or related topics, but **be careful when adding them since the AI might see connections you can't**, and end up outputting gibberish
```
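To make the prompt syntax in this hunk concrete, here is a small illustrative Python sketch of how a prompt with `:0.##` weights and a `###` negative section might be parsed. The `parse_prompt` helper and the default weight of `1.0` are assumptions for illustration, not the webui's actual parser.

```python
# Illustrative sketch only -- not the webui's actual parser.
# Assumes: comma-separated tokens, an optional ":0.##" weight suffix applying
# to everything before the colon, and "###" splitting positive from negative
# tokens, as described in the README text above.

def parse_prompt(prompt: str):
    """Split a prompt into (token, weight) pairs plus negative tokens."""
    positive_part, _, negative_part = prompt.partition("###")

    def parse_tokens(text: str):
        pairs = []
        for chunk in text.split(","):
            chunk = chunk.strip()
            if not chunk:
                continue
            token, sep, weight = chunk.rpartition(":")
            if sep and weight.replace(".", "", 1).isdigit():
                pairs.append((token.strip(), float(weight)))
            else:
                pairs.append((chunk, 1.0))  # assumed default weight
        return pairs

    return parse_tokens(positive_part), parse_tokens(negative_part)

# Examples from the README:
print(parse_prompt("cat:0.30, dog:0.70"))
# -> ([('cat', 0.3), ('dog', 0.7)], [])
print(parse_prompt("cat playing with string ### yarn"))
# -> ([('cat playing with string', 1.0)], [('yarn', 1.0)])
```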
```diff
@@ -131,7 +131,7 @@ Lets you improve faces in pictures using the GFPGAN model. There is a checkbox i
 
 If you want to use GFPGAN to improve generated faces, you need to install it separately.
 Download [GFPGANv1.4.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth) and put it
-into the `/sygil-webui/models/gfpgan` directory.
+into the `/sygil-webui/models/gfpgan` directory.
 
 ### RealESRGAN
```
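For those who prefer to script this setup step, the download can be automated. The sketch below is a generic convenience helper (the `download_model` function is hypothetical, not part of sygil-webui); the URL and target directory come from the instructions in the hunk above.

```python
# Convenience sketch for fetching model weights -- not part of sygil-webui itself.
import os
import urllib.request

def download_model(url: str, dest_dir: str) -> str:
    """Download url into dest_dir (skipping if present) and return the local path."""
    os.makedirs(dest_dir, exist_ok=True)
    dest_path = os.path.join(dest_dir, os.path.basename(url))
    if not os.path.exists(dest_path):
        print(f"Downloading {url} -> {dest_path}")
        urllib.request.urlretrieve(url, dest_path)
    return dest_path

download_model(
    "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth",
    "sygil-webui/models/gfpgan",
)
```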
```diff
@@ -141,7 +141,7 @@ Lets you double the resolution of generated images. There is a checkbox in every
 There is also a separate tab for using RealESRGAN on any picture.
 
 Download [RealESRGAN_x4plus.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth) and [RealESRGAN_x4plus_anime_6B.pth](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth).
-Put them into the `sygil-webui/models/realesrgan` directory.
+Put them into the `sygil-webui/models/realesrgan` directory.
 
 ### LSDR
```
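The same hypothetical `download_model` helper from the GFPGAN sketch above can fetch both RealESRGAN checkpoints named in this hunk:

```python
# Reusing the download_model sketch from the GFPGAN section above.
for url in [
    "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth",
    "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth",
]:
    download_model(url, "sygil-webui/models/realesrgan")
```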
```diff
@@ -174,8 +174,8 @@ which is available on [GitHub](https://github.com/CompVis/latent-diffusion). PDF
 
 [Stable Diffusion](#stable-diffusion-v1) is a latent text-to-image diffusion
 model.
-Thanks to a generous compute donation from [Stability AI](https://stability.ai/) and support from [LAION](https://laion.ai/), we were able to train a Latent Diffusion Model on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database.
-Similar to Google's [Imagen](https://arxiv.org/abs/2205.11487),
+Thanks to a generous compute donation from [Stability AI](https://stability.ai/) and support from [LAION](https://laion.ai/), we were able to train a Latent Diffusion Model on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database.
+Similar to Google's [Imagen](https://arxiv.org/abs/2205.11487),
 this model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts.
 With its 860M UNet and 123M text encoder, the model is relatively lightweight and runs on a GPU with at least 10GB VRAM.
 See [this section](#stable-diffusion-v1) below and the [model card](https://huggingface.co/CompVis/stable-diffusion).
```
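For readers who want to try Stable Diffusion v1 outside this webui, the checkpoint linked in the model card can also be loaded with Hugging Face's `diffusers` library. This is a generic sketch, not part of sygil-webui; it assumes `diffusers` and `torch` are installed and that the `CompVis/stable-diffusion-v1-4` weights are available to you on the Hub.

```python
# Generic sketch using Hugging Face diffusers -- not part of sygil-webui.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,  # half precision roughly halves VRAM use,
                                # consistent with the ~10GB figure quoted above
)
pipe = pipe.to("cuda")

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```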
````diff
@@ -184,26 +184,26 @@ See [this section](#stable-diffusion-v1) below and the [model card](https://hugg
 
 Stable Diffusion v1 refers to a specific configuration of the model
 architecture that uses a downsampling-factor 8 autoencoder with an 860M UNet
-and CLIP ViT-L/14 text encoder for the diffusion model. The model was pretrained on 256x256 images and
+and CLIP ViT-L/14 text encoder for the diffusion model. The model was pretrained on 256x256 images and
 then finetuned on 512x512 images.
 
 *Note: Stable Diffusion v1 is a general text-to-image diffusion model and therefore mirrors biases and (mis-)conceptions that are present
-in its training data.
+in its training data.
 Details on the training procedure and data, as well as the intended use of the model can be found in the corresponding [model card](https://huggingface.co/CompVis/stable-diffusion).
 
 ## Comments
 
 - Our code base for the diffusion models builds heavily on [OpenAI's ADM codebase](https://github.com/openai/guided-diffusion)
-and [https://github.com/lucidrains/denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch).
+and [https://github.com/lucidrains/denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch).
 Thanks for open-sourcing!
 
-- The implementation of the transformer encoder is from [x-transformers](https://github.com/lucidrains/x-transformers) by [lucidrains](https://github.com/lucidrains?tab=repositories).
+- The implementation of the transformer encoder is from [x-transformers](https://github.com/lucidrains/x-transformers) by [lucidrains](https://github.com/lucidrains?tab=repositories).
 
 ## BibTeX
 
 ```
 @misc{rombach2021highresolution,
-      title={High-Resolution Image Synthesis with Latent Diffusion Models},
+      title={High-Resolution Image Synthesis with Latent Diffusion Models},
       author={Robin Rombach and Andreas Blattmann and Dominik Lorenz and Patrick Esser and Björn Ommer},
````
**`Stable_Diffusion_v1_Model_Card.md`** (+9 -10)
```diff
@@ -21,7 +21,7 @@ This model card focuses on the model associated with the Stable Diffusion model,
 
 # Uses
 
-## Direct Use
+## Direct Use
 The model is intended for research purposes only. Possible research areas and
 tasks include
```
```diff
@@ -68,11 +68,11 @@ Using the model to generate content that is cruel to individuals is a misuse of
 considerations.
 
 ### Bias
-While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
-Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
-which consists of images that are primarily limited to English descriptions.
-Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
-This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
+While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
+Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
+which consists of images that are primarily limited to English descriptions.
+Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
+This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
 ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
```
```diff
@@ -84,7 +84,7 @@ The model developers used the following dataset for training the model:
 - LAION-2B (en) and subsets thereof (see next section)
 
 **Training Procedure**
-Stable Diffusion v1 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
+Stable Diffusion v1 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
 
 - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
 - Text prompts are encoded through a ViT-L/14 text-encoder.
```
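As a concrete check of the shapes in that hunk: with f = 8, a 512 x 512 x 3 image is encoded to a latent of shape 512/8 x 512/8 x 4 = 64 x 64 x 4, so the diffusion model operates on a tensor 48x smaller than the raw pixels (786,432 vs. 16,384 values).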
```diff
@@ -108,12 +108,12 @@ filtered to images with an original size `>= 512x512`, estimated aesthetics scor
 - **Batch:** 32 x 8 x 2 x 4 = 2048
 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
 
-## Evaluation Results
+## Evaluation Results
 Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
 5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
 steps show the relative improvements of the checkpoints:
 
-
+
 
 Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
 ## Environmental Impact
```
```diff
@@ -137,4 +137,3 @@ Based on that information, we estimate the following CO2 emissions using the [Ma
 }
 
 *This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
```