APPLE DESIGN AWARDS
The power of Pixelmator Photo
Discover how 20 million images helped hone this photo-editing app.
Pixelmator Photo: Pro Editor
Enhance, adjust, retouch
Pixelmator Photo is the winner of a 2019 Apple Design Award, which recognizes the creative artistry and technical achievements of developers who reflect the best in design, innovation and technology on Apple platforms.
Back in 2007, Pixelmator was a simple startup with just four people working in Vilnius, Lithuania. Today the company has grown to 20 and evolved into a global photo-editing powerhouse by following a simple strategy: creating products they’d use themselves.
Pixelmator Photo is a perfect example. The iPad-only app delivers impressive photo-editing power in a beautiful, uncluttered interface. For beginners, it’s surprisingly approachable, offering filters based on classic cameras. For experts looking to maximize every last pixel of their iPad screen, it offers a robust toolset and support for RAW images, which (in a big win for the Pixelmator team) can be edited non-destructively.
Most helpful of all, the app offers machine-learning-powered editing tools trained using more than 20 million images. Here lead developer Simonas Bastys tells us how Pixelmator Photo stands out in the rich field of photo-editing apps.
After Apple launched Core Image, the framework that made it easier for indie developers to create image-editing applications, we hopped on that and launched the first Pixelmator in less than a year. We had the idea of building something that we’d all want to use. Twelve years later, we’re still working on that part.
When we were in the middle of developing Pixelmator Photo last year, there was one decision that changed everything. Initially we had all the presets near the adjustment sliders, but they covered a lot of the iPad screen. Then one day [designer] Monika Perlikovskiene came up with the idea to put the presets at the bottom. It’s not a revolutionary idea, but for us it kind of was. It seriously changed the idea of the company.
It took a long, long time to develop the new machine-learning features like ML Enhance. We needed a data set, so the first thing we had to do was build one. And of course, 1,000 or 2,000 photos is not enough. You have to have millions of photos.
Eighty percent of the pictures we do are of people, so the ML has to be very good at that. Even then, it’s not as easy as “bad picture versus good picture.” We had to take good pictures and ruin them in a really natural way, so we could train our machine-learning models to learn how the picture could be fixed. That was the trickiest part.
But there’s really nothing fancier than that. Get a good data set, set up a very good training process, and turn it into knowledge. That’s all the magic.
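The “ruin good pictures naturally” idea Bastys describes is a common way to build training pairs for enhancement models: each pristine photo becomes the target, and a synthetically degraded copy becomes the input. The sketch below is purely illustrative and assumes nothing about Pixelmator’s actual pipeline; it treats an image as a grid of grayscale values and applies a few plausible degradations (flattened contrast, underexposure, sensor-style noise). The function names `degrade` and `build_pairs` are hypothetical.

```python
import random

def degrade(image, seed=0):
    """Return a 'naturally ruined' copy of a grayscale image (values 0-255):
    contrast is flattened, exposure lowered, and mild noise added.
    The (degraded, original) pair is one training example."""
    rng = random.Random(seed)  # seeded so each pair is reproducible
    degraded = []
    for row in image:
        new_row = []
        for px in row:
            v = 128 + (px - 128) * 0.6   # flatten contrast toward mid-gray
            v = v * 0.8                  # underexpose
            v = v + rng.gauss(0, 8)      # add sensor-style Gaussian noise
            new_row.append(max(0, min(255, round(v))))  # clamp to valid range
        degraded.append(new_row)
    return degraded

def build_pairs(good_images):
    """Pair each ruined image with its pristine original for training."""
    return [(degrade(img, seed=i), img) for i, img in enumerate(good_images)]
```

A model trained on such pairs learns the inverse mapping, from a flawed photo back to its well-exposed original, which is why the degradations have to look natural rather than random.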