How to use Stable Diffusion on vast.ai: an easy step-by-step guide

This is a guide to setting up Stable Diffusion by renting a server from vast.ai. It is by no means exhaustive and will only show you what is needed to generate the standard astronaut riding a horse.

Step 1. Create a vast.ai account using my link…

Step 2. Sign up at https://huggingface.co/ so you can download the stable-diffusion-v-1-4-original model.

Step 3. Set up the system you want to rent.
After loading some credits on vast.ai, go to the console/client section and search for your preferred system.

Change the filters to show a single RTX 3090 system with 24 GB of GPU RAM and 100 GB of storage, and sort the listings by price. Getting this to work on more than one GPU takes a bit more effort and is beyond the scope of this article.
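
If you prefer the command line, vast.ai also has a CLI that can run the same search. A rough sketch, assuming the vastai pip package and my best guess at the query fields, so verify them against vastai search offers --help before relying on it:

pip install vastai
# the query fields below are an assumption; check: vastai search offers --help
vastai search offers 'num_gpus=1 gpu_name=RTX_3090 disk_space>=100 verified=true' -o 'dph'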

It is wise to avoid systems that are not verified, though some gems can be found among them. We want an on-demand instance, as we do not want to be interrupted while generating images.

Click the Edit Image button, use the docker image nvidia/cuda:11.7.0-devel-ubuntu20.04 as the base, and tick the boxes as shown above. Click Select and Save. There are other docker images you could use, but I am starting from scratch.

After finding the system you want, go to the Instances section and wait for the Open button to become active.

When the JupyterLab interface opens, change the terminal theme to dark; otherwise the steps below will be hard to follow.

Run the following commands, then close the terminal and reopen it. Say yes to all prompts.

apt update && apt -y upgrade 
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
chmod +x Miniconda3-latest-Linux-x86_64.sh
./Miniconda3-latest-Linux-x86_64.sh
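
If you would rather skip the installer prompts, Miniconda also supports an unattended install. A minimal sketch, assuming you are the root user on this image (-b runs the installer in batch mode, -p sets the install prefix):

# unattended install; -b accepts the license non-interactively
./Miniconda3-latest-Linux-x86_64.sh -b -p /root/miniconda3
# make conda available in new shells, then close and reopen the terminal
/root/miniconda3/bin/conda init bash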

After reopening the terminal, run the commands below:

git clone https://github.com/CompVis/stable-diffusion.git
cd stable-diffusion
conda env create -f environment.yaml
conda activate ldm
pip3 install --upgrade pip
pip3 install --upgrade diffusers transformers scipy
apt install -y libsm6 libxext6 libxrender-dev libgl1-mesa-glx ffmpeg
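
Before downloading the roughly 4 GB checkpoint, it is worth confirming that PyTorch inside the ldm environment can actually see the rented GPU. A quick sanity check (with the ldm environment still active) that should print True and the GPU name:

python -c "import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0))"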

At this stage you will need the verified https://huggingface.co/ account from Step 2, as the model download below requires authentication.

mkdir -p /root/stable-diffusion/models/ldm/stable-diffusion-v1/
wget --user=YourHuggingfaceUserName --password=YourHuggingfacePassword https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
ln -s /root/stable-diffusion/sd-v1-4.ckpt models/ldm/stable-diffusion-v1/model.ckpt
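
Note that Hugging Face has been phasing out password authentication for downloads. If the --user/--password pair is rejected, passing an access token as a header should work instead. A sketch, where hf_YourToken is a placeholder for a token created under your Hugging Face account settings:

# hf_YourToken is a placeholder; create a real token in your account settings
wget --header="Authorization: Bearer hf_YourToken" https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt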

And now the horse magic!

python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms 
Outputs can be found under /root/stable-diffusion/outputs/txt2img-samples.
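
txt2img.py accepts more options than just --prompt and --plms. A sketch of a more customized run using flags from the CompVis repo, with values that are just examples to tweak:

python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms --n_samples 2 --n_iter 2 --ddim_steps 50 --scale 7.5 --H 512 --W 512 --seed 42

Here --scale controls how strongly the image follows the prompt, --ddim_steps sets the number of sampling steps, and --seed makes the run reproducible.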

All credit goes to https://github.com/CompVis/stable-diffusion#weights and https://huggingface.co/CompVis/stable-diffusion-v1-4 for providing this incredibly powerful tool.