yownas edited this page Aug 19, 2024 · 16 revisions

Features

Simple prompting.

By default this is what you see when you start RuinedFooocus (RF): just a place for your prompt and an image window where your generated images will show up. RF tries to keep the attention on having fun and exploring prompts rather than pushing you to tweak a myriad of settings to get the "perfect" image. The settings are still there if you need them, but RF tries to remind you that you probably don't.

Just type what you want to see and press Generate. (Or press Ctrl+Enter)

image image
Left: at start up. Right: your first generated image.

Multiple Prompts

RF has a bunch of tricks. One of them is Multiple Prompts: split your prompt with ---, letting you generate several different prompts at a time.

image
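The splitting described above is easy to illustrate; here is a minimal Python sketch (not RF's actual code) of turning one --- separated prompt into several:

```python
# A --- separated prompt, as you would type it in RF's prompt box.
text = """a cat in a hat
---
a dog on a log
---
a fox in a box"""

# Split on the separator and drop surrounding whitespace and empty parts.
prompts = [p.strip() for p in text.split("---") if p.strip()]
for p in prompts:
    print(p)
```

Each entry would then be generated as its own image.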

Hurt me plenty

To get access to more settings, click the checkbox in the lower left corner.

Settings

While other Stable Diffusion user interfaces give you access to lots and lots of settings like steps, cfg, samplers, schedulers, seeds, width and height, RF tries to hide as many of the confusing details as possible. Most of the time the default Performance and Aspect Ratio presets will be more than enough, but if you want to tweak settings to fit your needs you can always add your own.

If you want to try different settings, select Custom.... These settings will be used when generating images, and when you find a combination you like, give it a name and press Save.

image

  • Performance - List of generation presets.
  • Aspect Ratios - Here you can set the size of the images.
  • Styles Selection - Long list of styles that can be added to the prompt to give it more flair.
  • Send Style to Prompt - Apply the styles to the prompt directly instead of during generation. Good if you want to edit the result.
  • Image Number - The number of images to be generated. If you set this to 0 RF will keep generating infinitely.
  • Auto Negative Prompt - Automatically creates a Negative Prompt.
  • Negative Prompt - Here you can specify things you don't want in the resulting images.
  • Random Seed - Every image starts as random noise; this is the seed number that creates that randomness. Unless you want to re-generate the same image you can keep this set to random.
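The Random Seed bullet can be illustrated with a small sketch (plain Python, not RF's actual code): the seed deterministically drives the noise generator, so the same seed reproduces the same starting noise, and with identical settings, the same image.

```python
import random

# Sketch: a seed deterministically creates the initial "noise".
def initial_noise(seed, n=16):
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

a = initial_noise(123)
b = initial_noise(123)
c = initial_noise(456)
print(a == b)  # same seed, same noise
print(a == c)  # different seed, different noise
```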

Models - Model

Here you can select which model to use.

image

Previews are downloaded from CivitAI. If a model can't be found it will show up as a "Warning". If you want to change the image for a model, look in RuinedFooocus\cache\checkpoints or RuinedFooocus\cache\loras. Simply replace the image there and it should show up in RF.

Models - LoRAs

LoRAs (Low-Rank Adaptation) are like patches for your model. They can add styles, characters, poses or items to your result.

image

To add a LoRA, click on the + and select the one you want. Select a LoRA and click - to remove it. If you select a LoRA and drag the slider above the list, you can set its strength, i.e. how much it will affect the model.

At the bottom you will see the trigger words used by the LoRAs. They are collected from CivitAI, and if you wish to change them each LoRA has a .txt file in RuinedFooocus\cache\loras you can edit.

Models - MergeMaker

image

If you have been looking for models to download you have probably seen "checkpoint merges". These are models that, instead of being trained, are a mixture of other models, creating a new one that has the features the creator wants. You can make your own by adding models (and LoRAs) just as you did with LoRAs above. Give it a name and a short comment, and press Save.

The merge will show up under Models like this.

image

Loading merges can take a while since all the added models need to be loaded. If you want to save some time for next time, check Save cached safetensor of merged model and a checkpoint will be saved in RuinedFooocus\cache\merges. The .merge files are small but take longer to load; checkpoint files are bigger (around 6GB) but load faster. So it is up to you which you like better.

If you want to edit or create a merge file, simply open it in a text editor. You need a base model that the other models will be added to. The weights of the models can be any numbers; they are scaled down (or up) to the value in normalize to get a model that works nicely. You probably don't want to change normalize to anything but 1.0, but you are free to experiment. You can also add LoRAs; their weights work just as if you added them in RF's web GUI. Finally, cache controls whether a cached safetensors file is generated or not.

{
  "comment": "test",
  "base": {
    "name": "eldritchPhotography_v10.safetensors",
    "weight": 1.0
  },
  "models": [
    {
      "name": "crystalClearXL_ccxl.safetensors",
      "weight": 1.0
    }
  ],
  "loras": [],
  "normalize": 1.0,
  "cache": false
}
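As a rough illustration of the normalize field (this is an assumption about the scaling, not RF's actual code): if the weights are rescaled so that they sum to normalize, it could look like this:

```python
# Hypothetical sketch: rescale merge weights so they sum to "normalize".
def rescale(weights, normalize=1.0):
    total = sum(weights)
    return [w / total * normalize for w in weights]

# Base model at weight 1.0 plus one added model at 1.0, normalize = 1.0:
print(rescale([1.0, 1.0]))  # -> [0.5, 0.5]
```

This is why any weight values work: only their relative sizes matter after rescaling.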

Random prompt with "One Button"

This will automatically generate a completely random prompt using OneButtonPrompt. It can also be used together with Multiple Prompts: click Add To Prompt and the prompts will be separated with ---, letting you generate lots of random images at once.

OneButtonPrompt style wildcards in RF

There are two types of One Button wildcards in RF. The first set is all around generating a subject. The second set is all about generating artists.

One Button subject wildcards

  • __onebuttonprompt__ --> Executes a random one button prompt with all standard settings. Useful for iterating through many images at once.
  • __onebuttonsubject__ --> Executes a tiny one button prompt: just the subject, no other frills, image types or other stuff. Great for using with styles!
  • __onebuttonhumanoid__, __onebuttonmale__, __onebuttonfemale__ --> Same as above, but for all humanoids, males or females.
  • __onebuttonanimal__, __onebuttonlandscape__, __onebuttonobject__, __onebuttonconcept__ --> Same as above, but for those specific types.

All __onebutton__ wildcards support a subject override, typed like this:

__onebuttonmale:keanu reeves__ Or __onebuttonanimal:cute dog__ etc.

One Button artist wildcard

  • __onebuttonartist__ --> Executes a random one button prompt in artist only mode with all standard settings. Will create results such as (dark art by Lisa Keene:1.1) or (graffiti art designed by Rone:0.9) and ROA

These also work with subject override, to select a certain artist type from One Button Prompt.

Example: __onebuttonartist:fantasy__ or __onebuttonartist:popular__ or __onebuttonartist:fashion__

Here is the list of artist categories supported:

all, wild, popular, greg mode, 3D, abstract, angular, anime, architecture, art nouveau, art deco, baroque, bauhaus, cartoon, character, children's illustration, cityscape, cinema, clean, cloudscape, collage, colorful, comics, cubism, dark, detailed, digital, expressionism, fantasy, fashion, fauvism, figurativism, graffiti, graphic design, high contrast, horror, impressionism, installation, landscape, light, line drawing, low contrast, luminism, magical realism, manga, melanin, messy, monochromatic, nature, photography, pop art, portrait, primitivism, psychedelic, realism, renaissance, romanticism, scene, sci-fi, sculpture, seascape, stained glass, still life, storybook realism, street art, streetscape, surrealism, symbolism, textile, ukiyo-e, vibrant, watercolor, whimsical

Other One Button wildcards

Some wildcards have been ported over and are usable directly:

  • __charactertype__ --> Keanu Reeves as a __charactertype__ character
  • __cardname__ --> mega wildcard; all your favourites such as black lotus, dark magician girl, ragfields helmet, and 20,000 others.
  • __episodetitle__ --> mega wildcard; episode titles from TV shows
  • __poemline__ --> random line from a poem
  • __songline__ --> random line from a song

PowerUp

image

Here you have things like ControlNet, img2img and Upscale. These features allow you to upload a base image and modify it with text prompts.

And just as with Performance, you have a selection of presets. If you want to experiment more, select Custom....

Info

This will show the metadata of the current image, and has some important links.

Metadata or generate prompt from image

The settings used during generation are saved in the images. If you drag-and-drop an image into the main image box, the metadata will be read and shown in the prompt. If the image wasn't generated with RF and is missing metadata, RF will use an AI model to generate a prompt from what it "sees" in the image. (This only happens if your prompt box is empty, so you won't overwrite it by mistake.)

image

Using json metadata as prompt

The json metadata from images can be used directly as a prompt to (hopefully) re-generate an image the same way it was created. Used together with Multiple Prompts you can generate a batch of images with different settings, for example to test different models.

While this might seem like a complicated way to view information from old images, there are a couple of nifty things you can do. Internally, when generating an image, all settings are taken from the web interface and then overwritten by the json data. This means you could give a prompt like {"Prompt": "cat", "Negative": "cartoon", "seed": 123} and control the negative prompt and seed from the main prompt while still using the other settings from the UI. (Note that when using json in a prompt, styles from the UI will be ignored. This is to make things work properly when re-generating images.)
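The override can be sketched in a few lines of plain Python (an illustration, not RF's internals): keys present in the json prompt simply replace the matching UI settings.

```python
import json

# Settings as they would come from the web interface (example values).
ui_settings = {"Prompt": "dog", "Negative": "", "seed": 999, "steps": 30}

# A json prompt typed into the prompt box.
prompt_box = '{"Prompt": "cat", "Negative": "cartoon", "seed": 123}'

# UI settings first, then the json keys overwrite them.
settings = {**ui_settings, **json.loads(prompt_box)}
print(settings)
```

Here the prompt, negative prompt and seed come from the json, while steps still comes from the UI.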

Used together with Multiple Prompts you can generate series of images with different settings, getting close to what a script like XYZ plot in other web UIs would do.

Styles

image

Here you can select different presets to enhance your prompt. These are active as soon as you select them but if you want to see what they look like you can press Send Style to prompt.

Inline styles

You can also add styles directly in the prompt. Example: <style:sai-cinematic>

Advanced prompt editing

RF supports all basic prompt editing and switching that is supported in, for example, A1111, plus some prompt editing techniques not found anywhere else. RF also supports nesting of prompt editing. For example:

[a [norwegian forest cat|red fox] playing in a snowy field::0.75], art by van gogh

Note: Due to the nature of ComfyUI, it is not possible to combine advanced prompt editing with ControlNet. ControlNet will only be applied on the first step.

Basic prompt editing syntax

This logic is great for merging words or concepts together. It is especially effective for combining faces of celebrities.

  • [cat|dog] -> Each step will switch between cat and dog
  • [waterfall|ancient ruin|magical elven city] -> Each step will switch between waterfall, ancient ruin and magical elven city
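The alternation can be sketched like this (an illustration, not RF's internals), assuming the active word is picked by step index modulo the number of options:

```python
# Sketch: with [cat|dog], the active word alternates on every step.
options = ["cat", "dog"]
schedule = [options[step % len(options)] for step in range(6)]
print(schedule)  # ['cat', 'dog', 'cat', 'dog', 'cat', 'dog']
```

With three options, the cycle simply becomes three steps long.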

A1111 style prompt editing syntax

With : and :: it is possible to determine when to start or stop adding words to your prompt. Whole numbers are used as the exact step to start or stop; decimals are interpreted as a fraction of the total steps.

  • [cat:dog:16] --> after step 16, change from cat to dog

  • [cat:dog:0.5] --> after 50% of steps, change from cat to dog

  • [cat:16] --> add cat after step 16

  • [cat:0.5] --> add cat after 50% of steps

  • [cat::16] --> remove cat after step 16

  • [cat::0.5] --> remove cat after 50% of steps

  • [cat:0.5::25] --> add cat after 50% of steps, but remove it after step 25.
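The whole-number versus decimal rule can be sketched with a hypothetical helper (not RF's actual code):

```python
# Sketch: resolve a scheduling value to a concrete step number.
def resolve_step(value, total_steps):
    if isinstance(value, float) and value <= 1.0:
        # Decimals are a fraction of the total step count.
        return round(value * total_steps)
    # Whole numbers are used as the exact step.
    return int(value)

print(resolve_step(16, 30))   # -> 16
print(resolve_step(0.5, 30))  # -> 15
```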

RF style only syntax

  • [cat~dog] --> ~ functions similar to | but in blocks of 10% of the steps. With 30 steps it will switch every 3 steps, so it becomes cat, cat, cat, dog, dog, dog, etc. The lowest it will go is blocks of 2 steps.

  • [cat^dog] --> ^ functions as a slow transformation from cat to dog with a peak in the middle of the steps. It uses a simple linear algorithm. It starts out with cat, then slowly adds more dog until the middle of the steps. Then it switches to mainly dog, and slowly removes cat.

  • [cat/dog] --> / functions as a slow transformation from cat to dog, but from the middle of the steps, it then moves entirely over to dog.

  • [cat\dog] --> \ functions as having cat at the first half of the steps. After the middle of the steps, it will slowly transform into dog. It can be used to give some highlights at the end of the scheduling.

  • [cat?dog] --> ? functions as a randomizer, creating different results each time. It will randomly pick a value on each step. Is mainly for fun and experimentation.
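The ~ schedule in the first bullet can be sketched as follows (an illustrative assumption, not RF's code): blocks of 10% of the steps, with a minimum block size of 2.

```python
# Sketch of the ~ schedule: switch in blocks of 10% of the total steps,
# but never in blocks smaller than 2 steps.
def tilde_schedule(options, total_steps):
    block = max(2, total_steps // 10)
    return [options[(step // block) % len(options)] for step in range(total_steps)]

print(tilde_schedule(["cat", "dog"], 30))  # blocks of 3: cat x3, dog x3, ...
```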

Here are some examples of the effects, with the same seed and model. The base prompt is as follows:

realism, photograph, lsdr, upper body shot, a woman wearing [plate armor|wedding dress], background is [winter forest|skyscrapers|large trees]

For each method, the | is replaced by the relevant function.

prompt_editing_examples