
Apple slices its AI image synthesis times in half with new Stable Diffusion fix

Two examples of Stable Diffusion-generated artwork provided by Apple. (credit: Apple)

On Wednesday, Apple released optimizations that allow the Stable Diffusion AI image generator to run on Apple Silicon using Core ML, Apple’s proprietary framework for machine learning models. The optimizations will allow app developers to use Apple Neural Engine hardware to run Stable Diffusion about twice as fast as previous Mac-based methods.

Stable Diffusion (SD), which launched in August, is an open source AI image synthesis model that generates novel images using text input. For example, typing “astronaut on a dragon” into SD will typically create an image of exactly that.

By releasing the new SD optimizations, available as conversion scripts on GitHub, Apple wants to unlock the full potential of image synthesis on its devices, as it notes on its research announcement page: “With the growing number of applications of Stable Diffusion, ensuring that developers can leverage this technology effectively is important for creating apps that creatives everywhere will be able to use.”
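For readers who want to try the scripts themselves, the workflow looks roughly like the following. This is an illustrative sketch based on Apple's ml-stable-diffusion repository; the exact flags and module names are taken from that project's README and may change, and the `<...>` paths are placeholders you supply.

```shell
# Clone Apple's conversion scripts and install their dependencies
git clone https://github.com/apple/ml-stable-diffusion.git
cd ml-stable-diffusion
pip install -e .

# Convert the Stable Diffusion model components to Core ML format
python -m python_coreml_stable_diffusion.torch2coreml \
    --convert-unet --convert-text-encoder --convert-vae-decoder \
    -o <output-model-dir>

# Generate an image from a text prompt using the converted models;
# --compute-unit ALL lets Core ML schedule work across CPU, GPU,
# and the Apple Neural Engine
python -m python_coreml_stable_diffusion.pipeline \
    --prompt "astronaut on a dragon" \
    -i <output-model-dir> -o <image-output-dir> \
    --compute-unit ALL --seed 93
```

The conversion step only needs to run once per model; the resulting Core ML bundle can then be shipped inside a Mac or iOS app and invoked through Apple's Swift pipeline as well.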

