This page showcases a collection of experimental artworks created with generative AI models. The works were produced by combining latent diffusion models (LDMs) such as SD1.5, SDXL, and FLUX.1 Dev with large language models (LLMs) and vision-language models (VLMs). The primary aim is to explore how prompts interact with these models. Notably, most of the LDM-based pieces also employed additional techniques such as inpainting and ControlNet to improve overall image quality, rather than relying on a single model alone.
I believe that in the near future, designers will need to embrace generative AI as a new design tool and learn to use it effectively. The images below are examples of generative artworks I created; for some of them, the prompts used during their creation are also provided.