Inpainting In Stable Diffusion – Online And For Free


If you click on a link and make a purchase, I may receive a small commission. As an Amazon affiliate partner, I may earn from qualifying purchases.
Read our disclosure.

What Is Inpainting?

Inpainting is a technique used in Stable Diffusion image editing to restore and edit missing or damaged portions of a picture. It has long been used to reconstruct deteriorated photographs, and in AI-generated images it can eliminate imperfections like cracks, scratches, disfigured limbs, dust spots, or red-eye effects.

Beyond repairing missing portions of a generated image, the technique also lets you create entirely new content within any desired area of an existing picture.

How to Do Inpainting In Stable Diffusion

After generating an image, use the ‘Send to inpaint’ button.

To get started with inpainting, download the required model files.

When you have the needed files and have generated an image with Stable Diffusion, click the ‘Send to inpaint’ button below the generated image(s) to start the inpainting process.

The green squares show where I want new elements to be created in the image. In this example, I wanted the masked area to be filled with a crystal.

In the inpaint area, use your mouse or touchpad to paint over the parts that you want to inpaint (modify). You can also change the brush size used for inpainting. Use the text prompt to define what you want to create in the area you have painted (masked).

In the image above, the green squares show the area I’ve masked (inpainted) and the text prompt of what I would like to see in the area.

When I had the crystal in place, I further wanted to modify the image with an eye patch.

If you want to continue modifying the image, click the ‘Send to inpaint’ button to continue with the image you modified with the inpainting.

Image showing the evolution of the image by doing inpainting. The left side shows the original image, the middle image shows the added crystal, and the right side, the added eye patch.

The best part about inpainting with Stable Diffusion and AUTOMATIC1111 WebUI is that you can use the same diffusion model or even change it during the inpainting process. The faster your computer is, the more enjoyable the inpainting process is.

The left-side image was done with GhostMix, the crystals in the middle image were made with the Dark Sushi 2.5D model, and the eyes in the right-side image with the Counterfeit diffusion model.

Inpainting in Photoshop with Stable Diffusion

You can do inpainting straight from Adobe Photoshop with a Stable Diffusion plugin. Scroll down the plugin’s GitHub page for installation instructions. Remember to uncheck the ‘Artboards’ option when you create a new file in Photoshop.

The easiest way to get started is to open a generated image in Photoshop and continue the inpainting process from there.

You need to have AUTOMATIC1111 WebUI running locally, and you also need to modify the webui-user file so that the command-line arguments enable the API: COMMANDLINE_ARGS=--api
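In practice, the edit goes into webui-user.sh on Linux/macOS or webui-user.bat on Windows. A minimal sketch (any other arguments you already have there should be kept alongside the flag):

```shell
# webui-user.sh (Linux/macOS) – enable the WebUI's built-in REST API
export COMMANDLINE_ARGS="--api"

# On Windows, the equivalent line in webui-user.bat would be:
#   set COMMANDLINE_ARGS=--api
```

After restarting the WebUI, the API endpoints become available at the same address the WebUI runs on (by default http://127.0.0.1:7860).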


With the Stable Diffusion plugin, you can access all of your downloaded diffusion models straight from Photoshop, do inpainting and outpainting, and even use ControlNet to fine-tune your images.
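The --api flag mentioned above is what lets external tools like the Photoshop plugin talk to the WebUI. Under the hood they call the WebUI's img2img endpoint with a mask attached. A minimal sketch of such a request, assuming a locally running WebUI (the file names, prompt, and parameter values are illustrative):

```python
import base64

API_URL = "http://127.0.0.1:7860"  # default local WebUI address

def build_inpaint_payload(image_b64: str, mask_b64: str, prompt: str,
                          denoising_strength: float = 0.75) -> dict:
    """Assemble a request body for AUTOMATIC1111's /sdapi/v1/img2img endpoint.

    White areas of the mask are repainted; black areas are kept.
    """
    return {
        "prompt": prompt,
        "init_images": [image_b64],   # base64-encoded source image
        "mask": mask_b64,             # base64-encoded mask image
        "denoising_strength": denoising_strength,
        "inpainting_fill": 1,         # 1 = "original" fill mode
        "inpaint_full_res": True,     # inpaint the masked area at full resolution
        "steps": 30,
    }

# Example usage (requires the WebUI running with --api):
# import requests
# with open("image.png", "rb") as f:
#     img = base64.b64encode(f.read()).decode()
# with open("mask.png", "rb") as f:
#     mask = base64.b64encode(f.read()).decode()
# payload = build_inpaint_payload(img, mask, "a glowing crystal")
# r = requests.post(f"{API_URL}/sdapi/v1/img2img", json=payload)
# result_b64 = r.json()["images"][0]
```

A lower denoising strength keeps the inpainted area closer to the original pixels; a higher value gives the prompt more freedom over the masked region.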

With a Stable Diffusion plugin, you can do inpainting straight from Photoshop.

The inpainting, outpainting, and ControlNet processes done through Photoshop are VRAM-intensive. Be sure your computer has enough VRAM for Stable Diffusion and enough RAM for Photoshop.

Check out: Stable Diffusion requirements

Inpainting In Playground AI

In Playground AI, you can use the Canvas feature to do inpainting and outpainting. Playground AI gives you 1,000 credits to use every day, so it’s essentially free. The best part about Playground AI is that you don’t need a powerful computer to do the inpainting; the processing is handled by the AI art generator.

You can start the inpainting process by either generating your own image or by editing an image you see on the homepage of Playground AI.

You can edit (inpaint, outpaint, erase) any image you find from Playground AI.

The results might lack quality, so the best approach is to continue evolving the image in outpainting mode (the Generate image area). Send the inpainted image to ‘Image to image,’ set the image strength to around 65, and use the same filters (in the demo, I used Masterpiece) to regenerate the image.

Creating a consistent look by inpainting with AI art generators is harder than inpainting with a local Stable Diffusion model and installation.

While Playground AI offers free online inpainting and outpainting, getting the exact results you are looking for can be time-consuming.

Feature image credits.



