How will these prompts be processed?
Hi, conceptually there are 2 types of prompts: regular prompts and auxiliary prompts.
I'll start by defining what happens without nesting prompts into each other, and only then will I cover how nesting is implemented. I believe this will make things easier to understand.

Single nesting level

Regular prompts are the chunks of the prompt separated by the regular composition keyword.
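As a rough illustration (using the stock AND keyword as the separator purely as a stand-in; the extension's actual keywords may differ):

```python
# Illustrative only: assuming regular chunks are separated by the stock "AND"
# keyword. The extension's actual keywords may differ.
regular_prompt = (
    "a formidable flying castle"
    " AND storm clouds over a formidable flying castle"
)
regular_chunks = [chunk.strip() for chunk in regular_prompt.split("AND")]
print(regular_chunks)
# ['a formidable flying castle', 'storm clouds over a formidable flying castle']
```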
These prompts are processed and combined together before anything else.

Auxiliary prompts are defined with respect to the regular prompts. They are always processed after the regular prompts, because they need to know the regular prompts to work. This is the only feature they all share; otherwise, an auxiliary prompt can be made to do whatever we need it to do. A side effect of this constraint is that different auxiliary prompt keywords do not see each other: each auxiliary keyword only sees the combined regular prompts, never another auxiliary prompt. Now, we can define each keyword however we want, with the only constraint described above that it has to be processed after the regular prompts.
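To make the ordering concrete, here is a minimal self-contained sketch; combine_regular(), apply_auxiliary() and the AND_AUX keyword are made-up stand-ins, not the extension's actual implementation or syntax:

```python
def combine_regular(chunks: list[str]) -> str:
    # Stand-in for "regular addition": regular prompts are processed and
    # combined together before anything else.
    return " AND ".join(chunks)

def apply_auxiliary(keyword: str, combined_regular: str, text: str) -> str:
    # An auxiliary keyword can do whatever it needs to do; its only constraint
    # is that it runs after the regular prompts and gets to see them.
    # It never sees the other auxiliary prompts.
    return f"[{keyword}: {text!r} applied to {combined_regular!r}]"

def process(regular: list[str], auxiliary: list[tuple[str, str]]) -> str:
    combined = combine_regular(regular)                     # 1. regular first
    processed = [apply_auxiliary(keyword, combined, text)   # 2. auxiliary next
                 for keyword, text in auxiliary]
    # 3. Only after every auxiliary keyword has been processed are the results
    #    added back to the main prompt with regular addition.
    return combine_regular([combined, *processed])

print(process(
    ["a formidable flying castle", "oil painting of the castle"],
    [("AND_AUX", "blurry, low quality")],  # AND_AUX is a hypothetical keyword
))
```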
Again, only after all auxiliary keywords have been processed do they finally get added to the main prompt with regular addition, resulting in a final single combined prompt.

Nested prompts

First, an example:
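This is a purely hypothetical sketch, modelling nested prompt groups as nested Python lists; the real syntax and the actual combination step belong to the extension and may look nothing like this:

```python
example = [
    "a formidable flying castle",
    ["storm clouds", "lightning over the castle"],  # a nested prompt group
    "oil painting",
]

def process_group(group) -> str:
    # Leaf (non-nested) groups are processed first: every nested sub-group is
    # recursively collapsed into a single combined prompt before its parent
    # group is combined.
    collapsed = [
        process_group(item) if isinstance(item, list) else item
        for item in group
    ]
    return " AND ".join(collapsed)  # stand-in for the combination step

print(process_group(example))
# a formidable flying castle AND storm clouds AND lightning over the castle AND oil painting
```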
The leaf prompts are processed first to turn each of them into a single combined prompt. In this case, those are the non-nested prompt groups, i.e. the innermost lists in the sketch above. Once a leaf group has been collapsed, it takes part in its parent group like any other prompt chunk.

Hope this helps! Please let me know if anything was unclear or if you have other questions.
As a side note, for the particular prompt you are using: I'm not completely sure what the parser does in this case, as the prompt has extra text at the end. Each prompt separated by a keyword should try to describe as much of the entire image as possible. The keywords split the prompt into chunks that are passed individually to the text encoder, so there is no sharing of concepts or arrangement between the chunks. For example, a prompt that splits into the chunks "a formidable" and "flying castle" is not a good prompt, because the text encoder only sees "a formidable" and "flying castle" individually before each chunk is passed to the model individually, where you might have wanted instead "a formidable …"
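To make the chunking behaviour concrete, here is a rough sketch; the AND separator and the encode() stub are stand-ins for illustration, not the actual parser or text encoder:

```python
def split_prompt(prompt: str, keyword: str = "AND") -> list[str]:
    # Chunks are the pieces of the prompt separated by the keyword.
    return [chunk.strip() for chunk in prompt.split(keyword)]

def encode(chunk: str) -> str:
    # Stand-in for the text encoder: each chunk is encoded on its own and
    # never sees the other chunks.
    return f"<conditioning for {chunk!r}>"

chunks = split_prompt("a formidable AND flying castle")
print(chunks)                       # ['a formidable', 'flying castle']
print([encode(chunk) for chunk in chunks])
# The first chunk only says "a formidable"; the encoder has no idea a castle
# is involved, which is why each chunk should describe the entire image.
```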
If you want to target a specific region of the image, I strongly advise using something like regional prompter with this extension. I'm not sure if the 2 extensions are fully compatible at the moment; let me know if there are any issues when combining these tools.