Top 11 Tech Sneak Peeks from MAX 2017 That Wowed Us

Even after more than three full days attending Adobe MAX keynotes, sessions and community pavilion “Make it” areas, we’re still ready for more. We want to see further radical, previously unseen innovations from the Adobe Research and product teams, and the MAX Sneaks show is our ticket.

We don’t stream the Sneaks show live, so if you weren’t in Vegas with us, you missed it. Luckily, we’ve got them all right here.

None of these technologies is currently available for purchase or incorporated into an Adobe app, but if you like them, we suggest you share your affection on social media using the corresponding hashtag. Our product teams listen – if certain potential features are popular, there’s a stronger chance they’ll make it into Adobe products.

On to the show!


#SceneStitch

Remember content-aware fill? Scene Stitch is like that, but more. Instead of just searching the image you’re editing for content and updating that image with what it finds, Scene Stitch looks through other images (like those you’d find on Adobe Stock) to find all-new graphic elements. Scene Stitch isn’t just matching image types; it’s looking for what would fit the image well.

How?

Scene Stitch is powered by Adobe Sensei, which is also part of many current Creative Cloud product features. Adobe Sensei is core to a lot of the coolest things coming from the product teams. (You’ll see an Adobe Sensei theme throughout the MAX 2017 Sneaks…)

---

#Puppetron

Turn your selfie into the graphic style of your choice. Simple. Yet amazing.

When would I use this, you ask? How about with Character Animator, one of those products you’ve likely seen in the real world without knowing it. See Homer Simpson live? That was Character Animator. Project Puppetron, powered by Adobe Sensei, will make it easy to create animated characters ready for real-time webcam animation.

---

#ProjectScribbler

Colorize an image with a single click. Turn your black-and-white images into colorful ones. It can be a photo, or it can be a doodle. Adobe Sensei powers Project Scribbler – the researchers working on the technology trained a neural network to make color choices based on what it learned.

Project Scribbler is a collaboration between Adobe researchers, Georgia Tech and UC Berkeley.

---

#PhysicsPak

Physics Pak shows us why a time saver can be a ‘wow’ feature. Instead of you trying to place elements just right in a design outline, the elements can move themselves into perfect placement. What might take hours of design work could take just minutes through the magic of physics.

---

#ProjectDeepFill

Project Deep Fill (also powered by Adobe Sensei) takes the concept behind content-aware fill to the next level.

One of the best parts about Adobe is that innovation comes from anywhere. There is no hierarchy saying which employees can create, and then present great ideas to a MAX audience of 12,000. Project Deep Fill was created with, and presented by, one of our interns (!) and that alone makes this one worth watching.

---

#ProjectCloak

Project Cloak is simple: need to hide something in your video? This Adobe Sensei-driven feature will help you do that. No need to tweak each frame. This is content-aware fill for video.

---

#PlayfulPalette

Designers and artists tend to spend a lot of time creating color palettes. Mixing colors digitally can be difficult because of all the options. Playful Palette reimagines how you can more easily manage the colors you’ve mixed. Digital artists will love it – analog painters can only dream of a feature like this.

---

#SonicScape

One challenge with Virtual Reality (VR) and 360-degree video is how editors can marry the sound to the scene or image they want people to look at. “Ambisonic audio” is what lets us hear things directionally within 360 content. For editors, it’s nearly impossible to line up the audio with the visual in a way that enhances the immersive experience. If people can look anywhere, then the sound becomes integral to the story, and you need to get your audience looking where you want them to look.

SonicScape will allow editors to align sound to the story more easily, by analyzing where the sound is misaligned and adjusting the ambisonic audio toward where the video creators want the audience to focus their attention.

---

#ProjectLincoln

Infographics are still the data-sharing asset of choice on the internets.

Instead of starting with data and creating a graphic around the findings, what if you started with the creative design and had it automatically adjust based on the data you want to present?

Borrowing the “repeat grid” feature from Adobe XD, Project Lincoln automates the design and production of infographics.

It’s much easier to use than spreadsheet software, and you wouldn’t need to build graphics manually in Illustrator or write any code. Plus, you wouldn’t be limited to the fancy-looking bar chart option. Get complex – do radial graphics and more in just minutes.

---

#ProjectQuick3D

Can’t do 3D? With Project Quick 3D you can. Easily convert a simple sketch into a 3D model that can be used in applications like the new Adobe Dimension CC. The system uses machine learning in Adobe Sensei to search Adobe Stock based on nothing more than a doodle. Impressive.

---

#ProjectSidewinder

VR video has a fundamental challenge: the camera sits in just one place, so a scene is only ‘seen’ by the viewer from a single vantage point. Enter Project Sidewinder, which adds more depth of space to VR video. Now when you kneel with the headset on, you kneel in the VR video. When you move your head around a corner, you can see the side of that corner. Sidewinder makes VR video look more like genuine 3D. The future of VR is bright.