Stability AI Releases Stable Diffusion 3.5 Text-to-Image Generation Model
Stability AI, a developer of open-source models focused on text-to-image generation, has released Stable Diffusion 3.5, the latest version of its deep-learning text-to-image model.
This release features three enhanced open-source text-to-image models designed for a diverse range of users, including researchers, enterprise clients, and hobbyists, the company said in a statement.
- Stable Diffusion 3.5 Large: At 8.1 billion parameters, with superior quality and prompt adherence, this base model is the most powerful in the Stable Diffusion family. It is suited to professional use cases at 1-megapixel resolution.
- Stable Diffusion 3.5 Large Turbo: A distilled version of Stable Diffusion 3.5 Large, this model generates high-quality images with exceptional prompt adherence in just four steps, making it considerably faster than Stable Diffusion 3.5 Large.
- Stable Diffusion 3.5 Medium: At 2.5 billion parameters, with an improved MMDiT-X architecture and training methods, this model is designed to run “out of the box” on consumer hardware, striking a balance between quality and ease of customization. It can generate images at resolutions between 0.25 and 2 megapixels.
The release follows the earlier debut of Stable Diffusion 3 Medium in June, which the company acknowledged as falling short of both community and internal expectations.
“We chose to build a solution that could truly transform visual media rather than a quick fix,” the company said. This latest update is aimed at reclaiming Stability AI’s competitive edge amid rising competition from platforms such as OpenAI’s DALL-E and Midjourney.
A key technical feature of the new models is Query-Key Normalization within the models' transformer blocks, which Stability AI said enhances customization and prompt adherence. This modification supports developers and creatives in achieving more consistent results with precise prompts while also allowing broader interpretation with less specific prompts.
“In developing the models, we prioritized customizability to offer a flexible base to build upon,” the company explained. “To achieve this, we integrated Query-Key Normalization into the transformer blocks, stabilizing the model training process and simplifying further fine-tuning and development.”
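To make the idea concrete, the sketch below shows one common way query-key normalization is wired into a transformer attention block: the query and key projections are normalized per head before attention scores are computed, which helps keep the logits in a stable range during training and fine-tuning. This is a generic PyTorch illustration of the technique, not Stability AI's implementation; the class name, the choice of RMSNorm, and the dimensions are assumptions made for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class QKNormAttention(nn.Module):
    """Illustrative self-attention block with query-key normalization.
    Not Stability AI's code; names and sizes are placeholders."""

    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.out = nn.Linear(dim, dim)
        # Normalize queries and keys per head (nn.RMSNorm requires PyTorch 2.4+);
        # this bounds the attention logits and stabilizes training.
        self.q_norm = nn.RMSNorm(self.head_dim)
        self.k_norm = nn.RMSNorm(self.head_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (batch, heads, tokens, head_dim).
        q = q.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        # Query-key normalization happens before the attention scores are computed.
        q, k = self.q_norm(q), self.k_norm(k)
        attn = F.scaled_dot_product_attention(q, k, v)
        return self.out(attn.transpose(1, 2).reshape(b, n, d))
```

The design point is that normalizing queries and keys, rather than the attention output, directly constrains the dot-product magnitudes, which is why the technique is associated with more stable training and easier fine-tuning.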
The new Stable Diffusion models will be available under Stability AI’s Community License, allowing free non-commercial use and free commercial use for entities earning under $1 million annually. Those exceeding this threshold will require an enterprise license. The models, including weights for self-hosting, will be accessible on Hugging Face and via Stability AI’s API. ControlNets, offering advanced image customization options, are expected in the coming days.
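For developers who plan to self-host the weights, a minimal sketch of loading and sampling the models through Hugging Face's diffusers library might look like the following. It assumes the checkpoints are published under the repo IDs shown and are supported by diffusers' StableDiffusion3Pipeline; the step counts and guidance values are illustrative, with the Turbo variant using the four-step sampling the company describes.

```python
# Sketch only: repo IDs and sampling parameters are assumptions.
import torch
from diffusers import StableDiffusion3Pipeline

prompt = "a capybara wearing a lab coat, studio lighting"

# Base model: more sampling steps, classifier-free guidance enabled.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",  # assumed repo ID
    torch_dtype=torch.bfloat16,
).to("cuda")
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("sd35_large.png")

# Distilled Turbo variant: four sampling steps, guidance disabled.
turbo = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large-turbo",  # assumed repo ID
    torch_dtype=torch.bfloat16,
).to("cuda")
image = turbo(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]
image.save("sd35_large_turbo.png")
```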
About the Author
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He has been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he has written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].