Prisma, the wildly popular photo editing app, has made a major breakthrough in mobile technology.
Over the past two months, its team of nine developers has made it possible for the app to run offline, without connecting to its servers.
The update, available Tuesday for iOS devices, is a big deal because Prisma previously needed a large amount of computing power in remote servers to process each image. That work can now be performed on individual phones thanks to the improved efficiency of the app.
If you haven’t used Prisma, it works a lot like Instagram, without the social network component: Upload or take a photo, and the free app will transform it into artwork in the style of famous masterpieces. You can make your selfie look like Edvard Munch’s The Scream, or your backyard resemble Katsushika Hokusai’s The Great Wave. Half the styles will work offline now, Prisma said, while the others will be added later.
Since launching in June, Prisma has been downloaded more than 52 million times and has 4 million daily active users. The app has processed over 1.2 billion photos and makes money through sponsored styles.
“The technology behind Prisma — deep learning — is a bridge between your imagination and your digital creation,” CEO Alexey Moiseenkov said in a statement. “Now, people can carry that power in their pockets.”
Deep learning relies on an interconnected web of mathematical operations, called a neural network, that works through an enormous number of small problems simultaneously. It's typically used to help computers interpret visual information the way humans do. When we see a book, we instantly register its title, its color and its size. For a computer to take in all of that information at once, it has to run a huge number of calculations.
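To make the idea concrete, here is a minimal, purely illustrative sketch of how a tiny neural network turns an input into an output through layers of simultaneous arithmetic. The sizes and random weights are made up; Prisma's actual network is far larger and built specifically for images.

```python
import numpy as np

# A toy two-layer network. Each matrix multiplication below performs
# thousands of multiply-adds at once, the "many calculations" a computer
# needs to take in an image all at once. Sizes and weights are made up.

rng = np.random.default_rng(0)

pixels = rng.random(64)                 # a tiny 8x8 image, flattened
w1 = rng.standard_normal((64, 32))      # first layer of weights
w2 = rng.standard_normal((32, 10))      # second layer of weights

hidden = np.maximum(0, pixels @ w1)     # ReLU activation
scores = hidden @ w2                    # e.g. scores for 10 possible answers

print(scores.shape)                     # (10,)
```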
Deep learning requires a computer with 60 times the graphics processing power of a smartphone to edit one photo, according to Prisma. With over 35,000 photos converted each minute, the Prisma team needs thousands of graphics processors, which isn’t scalable.
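Some rough, back-of-the-envelope arithmetic shows why. Assuming each photo ties up a server-class graphics processor for roughly two to three seconds (an assumption, not a figure Prisma has published), 35,000 photos per minute would keep well over a thousand processors busy at once:

```python
# Back-of-the-envelope arithmetic (assumed timings, not Prisma's figures):
# if each photo occupies one server GPU for a few seconds, 35,000 photos a
# minute keeps more than a thousand GPUs busy at any given moment.

photos_per_second = 35_000 / 60                      # roughly 583

for seconds_per_photo in (2, 3):
    gpus_in_use = photos_per_second * seconds_per_photo
    print(f"{seconds_per_photo}s per photo -> about {gpus_in_use:.0f} GPUs busy")
```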
To solve this problem, the Prisma team has, in essence, outsourced that computational process to the smartphone, according to Leon Gatys, whose research and DeepArt.io project inspired the app.
This was accomplished by shrinking the deep learning neural network to "throw away unnecessary parts" while still maintaining its performance on a less powerful machine.
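One common way to throw away unnecessary parts of a network is pruning: zeroing out the weights that contribute least. Whether Prisma uses this exact technique isn't stated; the sketch below is only meant to make the idea of shrinking a network concrete.

```python
import numpy as np

# Magnitude pruning on one toy layer: zero out the weights with the smallest
# absolute values, keeping only the ones that matter most. This is a generic
# illustration, not Prisma's actual method.

rng = np.random.default_rng(1)
weights = rng.standard_normal((256, 256))    # one layer of a toy network

def prune_smallest(w, fraction):
    """Zero out the given fraction of weights with the smallest magnitude."""
    threshold = np.quantile(np.abs(w), fraction)
    return np.where(np.abs(w) < threshold, 0.0, w)

pruned = prune_smallest(weights, fraction=0.8)
print(f"weights kept: {np.count_nonzero(pruned) / weights.size:.0%}")   # about 20%
```

A pruned network has fewer effective operations to run, which is what lets a smartphone's chip keep up.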
“It’s a non-trivial engineering challenge,” Gatys told CNNMoney.
The 27-year-old PhD student from Germany is studying machine learning and computational neuroscience.
By making Prisma efficient enough to run on a smartphone, the team can now serve far more people.
“It’s a lot more scalable if it’s on the device,” said Gatys. “It’s a really cool thing.”
But why does a photo app need so much processing power in the first place?
Prisma’s technology doesn’t merely take a picture and put a filter on it. The app analyzes the pixels in a photo and reconstructs them into entirely new spatial arrangements, producing a new image rendered in a specific style of art, such as a Mondrian.
“You need a powerful graphics processing unit for that,” said Gatys.
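In the style transfer approach from Gatys' research that inspired the app, the new image is adjusted step by step so that its "content" features match the photo while the correlations between its features, so-called Gram matrices, match the artwork. Here is a minimal sketch of those two measurements, using made-up feature maps in place of a real image-recognition network:

```python
import numpy as np

# The two measurements behind neural style transfer: a "content" term that
# compares feature maps directly, and a "style" term that compares Gram
# matrices (correlations between feature channels). Real systems take these
# feature maps from a deep image-recognition network; random arrays stand in
# for them here just to show the arithmetic.

rng = np.random.default_rng(2)
channels, height, width = 16, 32, 32

photo_feats = rng.random((channels, height, width))      # the original photo
art_feats = rng.random((channels, height, width))        # the artwork (style)
generated_feats = rng.random((channels, height, width))  # the image being built

def gram_matrix(feats):
    """Channel-to-channel correlations; they capture texture, not layout."""
    flat = feats.reshape(feats.shape[0], -1)
    return flat @ flat.T

content_loss = np.mean((generated_feats - photo_feats) ** 2)
style_loss = np.mean((gram_matrix(generated_feats) - gram_matrix(art_feats)) ** 2)

# The generated image is adjusted over and over to shrink a weighted mix of both.
total_loss = content_loss + 1e-6 * style_loss
print(content_loss, style_loss, total_loss)
```

Computing those feature maps for every photo is the heavy lifting that, until this update, happened on Prisma's servers.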
Prisma’s update comes at a time when major tech companies are investing heavily in image recognition and enhancement technology.
Twitter recently acquired a machine learning company called Magic Pony Technology for a reported $150 million. The startup’s platform is designed to improve the quality of photos and streaming videos by approximating and filling in missing visual data, such as the texture of a wall. Meanwhile, Google, Microsoft and Facebook have been testing photo recognition technology over the past few years with varying degrees of success.
CNNMoney took the new version of Prisma for a spin with the phone set to airplane mode, and it processed photos in about seven seconds. Prisma says it should take about two or three seconds.
But the real benefit of the update is that it frees up a lot of server capacity, letting Prisma focus on making its technology work for video.
More importantly, it also shows companies of all sizes how neural networks can run on mobile phones.
“This means we will see a lot more new products based on neural networks,” Moiseenkov said. “Thus, the quality of products will soar due to competition.”