One very cool open source project has taken Stable Diffusion into Blender. This means you can generate AI art, whether it's concept art or a texture, right where you need it.

One epic feature of this project that was just released is a new model that can work with the depth information it receives from Blender. So not only does it generate something new, it can also understand your model and wrap the texture around it. It seems to do this by asking the model for a new flat image while knowing which surfaces it needs to wrap. It makes for a nice addition, but it can drastically reduce the quality of the image if you aren't zoomed in on your Blender viewport.
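Under the hood this is the same idea as Stable Diffusion 2's depth-to-image model. If you want to play with the concept outside Blender, here's a minimal sketch using the Hugging Face diffusers library. To be clear, this isn't the add-on's actual code, and the input filename is just a placeholder; if you don't pass an explicit depth map, the pipeline estimates one from the image for you.

```python
# Minimal depth-to-image sketch with diffusers (not the add-on's code).
# "viewport.png" is a placeholder for any rough render of your mesh.
import torch
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("viewport.png").convert("RGB")

# The depth map keeps the generated texture aligned with the geometry;
# omit depth_map and the pipeline estimates depth from the image itself.
result = pipe(
    prompt="a cereal box, studio product photo",
    image=init_image,
    strength=0.8,  # how far the result may stray from the input image
).images[0]
result.save("cereal_box.png")
```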

Here I've asked for a simple texture of a cereal box on a simple mesh. The output is obviously not perfect; generative AI still isn't great at rendering text nicely, but it's very clearly a cereal box that we've rendered out.

As a distant background object this would be more than enough to quickly fill out a scene, or to make interesting new models on the fly. It can use cloud compute if you have a DreamStudio account, or run locally on your own GPU.

Here it is in action, going from a blank cuboid to a fully textured cereal box!

Blender is using my NVIDIA RTX 3080 to render this nice and quickly locally. The nice bonus here is that running locally makes it completely free!