A collection of pre-trained textual inversion embedding files for generating Starsector-themed art assets with Stable Diffusion.
Strict embeddings were trained with parameters that push results to match the training data more closely, yielding more style-consistent outputs at the cost of conceptual adaptability.
Free embeddings were trained with parameters that give the network more freedom to adapt prompt concepts, but produce more results that don't match the target style.
Follow the installation steps for the Stable Diffusion web UI from this repo: https://github.com/AUTOMATIC1111/stable-diffusion-webui
To use the pre-trained embedding files, create a directory called embeddings (in the same place as webui.py) and put the .pt or .bin embedding files into it.
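As a quick sketch of that step, assuming the web UI was cloned into a directory named stable-diffusion-webui and using "planet.pt" as a stand-in for a downloaded embedding file:

```shell
# Directory name is an assumption; adjust to wherever webui.py lives
mkdir -p stable-diffusion-webui/embeddings

# "planet.pt" is a placeholder standing in for a downloaded embedding file
touch planet.pt
cp planet.pt stable-diffusion-webui/embeddings/
```

The web UI picks up files in this directory automatically; no config change is needed.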
The filename (without the .pt/.bin extension) is the term you use in the prompt to invoke that embedding. Generally, asking for "an image of embedding" or "style of embedding" is sufficient, alongside whatever other prompt text you want to include.
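Since the trigger term is just the file stem, a small helper can list whatever terms are available in an embeddings directory. This is only an illustrative sketch (the function name and directory argument are assumptions, not part of the web UI):

```python
from pathlib import Path

def list_trigger_terms(embeddings_dir: str) -> list[str]:
    """Return the prompt terms implied by .pt/.bin files in a directory."""
    return sorted(
        p.stem  # the stem (filename minus extension) is the prompt term
        for p in Path(embeddings_dir).iterdir()
        if p.suffix in {".pt", ".bin"}
    )
```

For example, a directory containing planet.pt and ship.bin yields the terms "planet" and "ship", usable in prompts like "an image of planet".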
A quick example using the planet embedding together with img2img prompting: the simple starting image was fed into the img2img tab of the web UI along with a text prompt describing the planet and requesting the style of the planet embedding file.
Ship generation is still a bit rough around the edges: the downsampling that occurs during training damages the fine details in the training images, as seen in this comparison between an input training image and its reconstruction during training.
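To illustrate why downsampling is destructive for sprite-level detail, here is a minimal sketch. It is not the trainer's actual resize code, just a plain box filter applied to a one-pixel checkerboard, the finest detail a pixel grid can hold:

```python
def box_downsample(img: list[list[float]], factor: int) -> list[list[float]]:
    """Average each factor x factor block of pixels into one output pixel."""
    n = len(img) // factor
    out = []
    for by in range(n):
        row = []
        for bx in range(n):
            block = [
                img[by * factor + y][bx * factor + x]
                for y in range(factor)
                for x in range(factor)
            ]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# One-pixel checkerboard: alternating 0 and 255 values
size = 8
checker = [[255.0 * ((x + y) % 2) for x in range(size)] for y in range(size)]

# Every 2x2 block averages two 0s and two 255s to a uniform 127.5,
# so the pattern collapses to flat grey and no upsampling can recover it
low = box_downsample(checker, 2)
```

The same effect, at larger scale, is what blurs out greebles and panel lines on ship sprites during training.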
An example of a single seed value across different prompt modifiers and sampling methods.