February 02, 2014

Ask Me Anything about photo manipulation

And by “me” I mean “someone far smarter & more seasoned in this discipline.”

Dr. Hany Farid “has been called the ‘father’ of digital image forensics by NOVA” & has partnered several times with Adobe. More recently he formed an imaging startup with my old boss, longtime Photoshop VP Kevin Connor. This Thursday at 4pm Eastern time he’ll participate in a Reddit Ask Me Anything event.

[Update: Here's the transcript of the session. Hany's quite a witty dude.]

Incidentally, Kevin tells me that “At one point there were people coming to the Fourandsix site after searching for ‘Abraham Lincoln porn.’” And with that, these guys just won the internet.

Mic Drop

1:52 PM | Permalink | Comments [4]

December 12, 2013

Some rather brilliant imaging tech from Microsoft

The company’s Photosynth technology has been public since 2006, and while it’s been cool (placing photos into 3D space), I haven’t seen it gain traction in its original form or as a free panorama maker. That could now change.

The new version stitches photos into smooth fly-throughs. Per TechCrunch:

[U]sers upload a set of photos to Microsoft’s cloud service, then the technology begins looking for points (“features”) in the successive photos that appear to be the same object. It then determines where each photo was taken from, where in 3D space each of these objects was, and how the camera was oriented. Next, it generates the 3D shapes on a per-photo basis. And finally, the technology calculates a smooth path – like a Steadicam – through the locations for each photo, and then slices the images into multi-resolution pyramids for efficiency.
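Microsoft’s own pipeline isn’t public here, but the first stage – finding points in successive photos that look like the same object – is easy to sketch with off-the-shelf OpenCV (a stand-in for whatever detector Photosynth actually uses):

```python
import cv2

def shared_features(path_a, path_b, keep=100):
    """Find candidate 'same object' points between two successive photos."""
    a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create()                       # fast, patent-free detector
    kp_a, desc_a = orb.detectAndCompute(a, None)
    kp_b, desc_b = orb.detectAndCompute(b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)
    # The strongest matches feed the later stages: camera pose estimation,
    # per-photo 3D proxies, and the smoothed fly-through path.
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in matches[:keep]]
```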

Check this out:

Once you’ve clicked it, try hitting “C” to reveal & interact with the 3D camera path. Here’s an example from photographer David Breashears, who captured Mt. Everest during one of the highest-elevation helicopter flights ever attempted:

So, will we see this become more common? It’s the first presentation I’ve seen that makes me want to don a wearable, lifelogging camera on vacation.

8:08 AM | Permalink | Comments [2]

October 09, 2013

Content-Aware Fill as… animation?

Invisibility cloak, engage!

From the project site:

Since Photoshop introduced the Content-Aware Fill tool, a number of artists have used it to explore different concepts. Everything I had seen until now worked with static images, but in 2012 Zach Nader made optional features shown, a 2:10 video that runs the same tool over car commercials, replacing the text, cars, and people with content-aware-filled background. I find the glitchy movement over a constant, quiet background very interesting.

[Vimeo] [Via]

8:08 AM | Permalink | Comments [4]

May 10, 2013

Sneak Peek: Playing with Lighting in Photoshop and After Effects

Sylvain Paris (creator of previous eye-popping tech demos) presents a sneak peek of a way to transfer the appearance of lighting in one image or video to another. And yes, you’ll find Rainn Wilson’s interjections annoying; I recommend skipping the first 1:30. (Can anyone tell me exactly how to do that with embedded YouTube content, by the way?)
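For what it’s worth, the embedded player does accept a start parameter, in seconds, so skipping 1:30 means appending start=90 to the embed URL – something like this (video ID hypothetical):

```python
def youtube_embed_src(video_id: str, skip_seconds: int = 90) -> str:
    """Build an embed URL that starts playback partway in.
    ("start" is a documented YouTube embedded-player parameter.)"""
    return f"https://www.youtube.com/embed/{video_id}?start={skip_seconds}"

print(youtube_embed_src("abc123XYZ"))  # hypothetical video ID
```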

8:11 AM | Permalink | Comments [16]

May 08, 2013

Sneak Peek: Perspective Warp in Photoshop

If I may echo Rainn Wilson, “Oh my God, that’s ridiculous.”

Note: This is a technology demo, not a feature that’s quite ready to go in Photoshop CC. With the move to subscriptions, however, Photoshop and other teams are moving away from “big bang” releases & towards more continuous deployment of improvements.

[Update: I know that a number of people aren't digging Wilson's schtick. Hats off to Sarah for being such a pro under pressure.]

10:53 AM | Permalink | Comments [33]

April 17, 2013

Photoshop Sneak Peek: New De-Blurring Tech

Here’s a brief look at new image-sharpening technology the Photoshop team has been developing (previously hinted at during the last Adobe MAX):

2:20 PM | Permalink | Comments [16]

March 22, 2013

Who’s got two thumbs & is disappearing from a video?

A. This guuuuuuy!

Fascinating:

Researchers at the Max Planck Institute for Informatics (MPII) have developed video inpainting software that can effectively delete people or objects from high-definition footage. The software analyzes each video frame and calculates what pixels should replace a moving area that has been marked for removal. 

More detail is on Gizmag. [Via Bruce Bullis]
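The MPII system reasons about motion across frames; just to ground the idea, here’s the crude single-frame version using OpenCV’s stock inpainting (no temporal coherence, so expect flicker – which is exactly the problem the research solves):

```python
import cv2

def inpaint_frames(frames, masks):
    """Crudely 'delete' a marked region from each frame independently.
    frames: list of BGR images; masks: uint8 images, 255 where removal is wanted."""
    return [cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)
            for frame, mask in zip(frames, masks)]
```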

Previously: Diminished Reality; Tourist-Zapping in Photoshop.

8:08 AM | Permalink | No Comments

March 06, 2013

New research aids stop motion animation

Begone, hands! New technology out of Hong Kong promises to:

allow animators to perform and capture motions continuously instead of breaking them into small increments… More importantly, it permits direct hand manipulation without resorting to rigs, achieving more natural object control for beginners.

The first half of the demo is pretty dry, but jump ahead to about 2:20 to start seeing the big wow:

[Via David Simons]

8:10 AM | Permalink | Comments [1]

February 19, 2013

In memoriam: Petro Vlahos

I can’t claim to have known his name, but like you I know his work: Petro Vlahos pioneered blue- and green-screen techniques & founded Ultimatte before passing away this past week at the age of 96. The BBC writes,

Mr Vlahos’s breakthrough was to create a complicated laboratory process which involved separating the blue, green and red parts of each frame before combining them back together in a certain order.

He racked up more than 35 movie-related patents and numerous Academy commendations.

By coincidence, I came across the following peek behind the scenes of The Hobbit. It bears out what Robin Shenfield from compositing firm The Mill says of Mr. Vlahos’s work: “It’s the absolute building block of all the visual effects that you see in television and movies.”

[Via]
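Vlahos’s color-difference idea is still recognizable in every toy keyer today. A minimal numpy version in that spirit (green screen, float RGB in [0, 1] – purely illustrative, nowhere near Ultimatte quality):

```python
import numpy as np

def color_difference_matte(rgb):
    """Toy green-screen key: the more green dominates red and blue,
    the more 'background' a pixel is. Returns alpha: 0 = screen, 1 = subject."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.clip(1.0 - (g - np.maximum(r, b)), 0.0, 1.0)
```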

10:29 AM | Permalink | No Comments

Demo: Automated photo insertion, compositing

Alyosha Efros has some ideas on cracking a particularly hard nut:

Apparently there’s a Java demo available, but it seems I long ago disabled Java. [Via Sylvain Paris]

8:20 AM | Permalink | Comments [1]

February 12, 2013

Feedback, please: Voice-driven photo editing

What do you think of PixelTone, an experimental interface from Adobe Research & the University of Michigan?

11:23 AM | Permalink | Comments [19]

February 10, 2013

Augmented-reality app translates newspapers for kids

Color me deeply skeptical, but intrigued: The BBC reports on an app that modifies the paper version of The Tokyo Shimbun in ways kids might appreciate:

“What it’s really about is something that’s been talked about for a long time, about content being presented in different ways depending on who the user is,” he said.

“It means two versions of the content – a grown-up one and the kids one. That has enormous potential. It also tackles a big gap in young readership.”

This makes me oddly wistful: I’m Proust-ing out, almost smelling the newsprint & listening to the “funny papers” rattle as my dad read me cartoons, or as he’d read news & obits with a drink after work. The real obit, of course, is for the paper newspaper: I’m afraid all this will show up as a quaintly hilarious discovery that flits by on some future adult’s in-optic-nerve newsfeed. But whatever; I’m suddenly, and surprisingly, all choked up.

8:46 AM | Permalink | Comments [1]

January 20, 2013

Why Adobe publishes research

Adobe publishes some of its best work (e.g. tech behind Content-Aware Fill) in the academic community, rather than keeping it a trade secret, as some other big software companies do. Dan Goldman, one of the brains behind CAF, writes,

First, by encouraging publication, we make it attractive for the best minds in the business to come work in our labs – we count several former and current University professors among our ranks. Second, our researchers draw on the wealth of knowledge in the academic community as well – a great deal of our research is done in collaboration with graduate students like Connelly. And third, the rigorous demands of peer review keep us motivated to try truly new things – rather than being content to simply do all the old things better.

Check out the rest of Dan’s post for more insights into how the groundbreaking Content-Aware Fill came to be.

8:52 PM | Permalink | Comments [1]

December 15, 2012

Siggraph 2012 highlights

Buckets of inspiration, including a bunch from Adobe researchers & collaborators (HelpingHand & more):

(Now I kinda want to use the pseudonym “Laplacian Eigenfunctions.”)

8:11 AM | Permalink | No Comments

November 19, 2012

Lytro cameras add “perspective shift”

Remember that Wayne’s World “Camera one, camera two!” scene where he opens & closes one eye at a time? (No, you probably weren’t born when that came out; but I digress.) Lytro’s “perspective shift” feature works a bit like that, letting you switch between two subtly different points of view on the same scene:

It’s cool, though my big hope here remains that such technology offers a better way to select elements in a photo by detecting their varying depths. [Via]

10:05 AM | Permalink | Comments [2]

November 01, 2012

New Adobe tech to help video editors

It’s far from the flashiest task, but placing cuts & transitions in interview footage can be crucial to telling a story. Adobe’s Wil Li plus UC Berkeley-based collaborators Maneesh Agrawala and Floraine Berthouzoz have unveiled “a one-click method for seamlessly removing ‘ums’ and repeated words, as well as inserting natural-looking pauses to emphasize semantic content”:

To help place cuts in interview video, our interface links a text transcript of the video to the corresponding locations in the raw footage. It also visualizes the suitability of cut locations… Editors can directly highlight segments of text, check if the endpoints are suitable cut locations and if so, simply delete the text to make the edit. For each cut our system generates visible (e.g. jump-cut, fade, etc.) and seamless, hidden transitions. 

 

Here’s more info about the project.
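The enabling trick is mundane but powerful: every transcript word carries timestamps into the raw footage, so deleting text is deleting time. A sketch of that linkage (my own simplification, not Adobe’s code):

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds into the raw footage
    end: float

def cuts_for_deleted_words(words, deleted_indices):
    """Map deleted transcript words to (start, end) time ranges to cut,
    merging adjacent deletions into one cut."""
    cuts = []
    prev = None
    for i in sorted(set(deleted_indices)):
        if prev is not None and i == prev + 1:
            cuts[-1] = (cuts[-1][0], words[i].end)  # extend the last cut
        else:
            cuts.append((words[i].start, words[i].end))
        prev = i
    return cuts

# words = [Word("so", 0.0, 0.2), Word("um", 0.2, 0.6), Word("yeah", 0.6, 0.9)]
# cuts_for_deleted_words(words, [1]) -> [(0.2, 0.6)]
```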

8:10 AM | Permalink | Comments [9]

September 18, 2012

A new Photoshop extension detects image manipulation

I’m excited to announce that the company founded by my old boss & friend Kevin Connor, working together with image authenticity pioneer Dr. Hany Farid, has released their first product, FourMatch—an extension for Photoshop CS5/CS6 that “instantly distinguishes unmodified digital camera files from those that may have been edited.” From the press release:

FourMatch… appears as a floating panel that automatically and instantly provides an assessment of any open JPEG image. A green light in the panel indicates that the file matches a verified original signature in FourMatch software’s extensive and growing database of more than 70,000 signatures. If a match is not found, the panel displays any relevant information that can aid the investigator in further assessing the photo’s reliability.
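The press release doesn’t spell out the signature recipe, but Farid’s published work derives signatures from encoder-specific JPEG parameters (quantization tables, metadata layout, and the like). As a purely illustrative sketch of the lookup – function and inputs are hypothetical stand-ins, not the product’s actual scheme:

```python
import hashlib

# FourMatch ships a curated database of 70,000+ of these; empty here.
KNOWN_ORIGINAL_SIGNATURES = set()

def jpeg_signature(quant_tables, metadata_tag_order, thumbnail_params):
    """Hypothetical stand-in: hash encoder-specific JPEG parameters into
    a signature. (The real product's recipe isn't public.)"""
    blob = repr((quant_tables, metadata_tag_order, thumbnail_params)).encode()
    return hashlib.sha256(blob).hexdigest()

def green_light(signature):
    """True = file matches a verified camera-original signature."""
    return signature in KNOWN_ORIGINAL_SIGNATURES
```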

Check it out in action, and see also coverage in the NY Times:

One other neat detail:

Fourandsix will donate 2 percent of their proceeds from the sale of this software to the National Center for Missing & Exploited Children (NCMEC). The donation will support NCMEC efforts to find missing children and prevent the abduction and sexual exploitation of children. 

8:42 AM | Permalink | Comments [5]

August 19, 2012

Using Kinect to turn objects into puppets

Fascinating!

KinÊtre is a research project from Microsoft Research Cambridge that allows novice users to scan physical objects and bring them to life in seconds by using their own bodies to animate them. This system has a multitude of potential uses for interactive storytelling, physical gaming, or more immersive communications.

“When we started this,” says creator  Jiawen Chen, “we were thinking of using it as a more effective way of doing set dressing and prop placement in movies for a preview. Studios have large collections of shapes, and it’s pretty tedious to move them into place exactly. We wanted to be able to quickly walk around and grab things and twist them around. Then we realized we can do many more fun things.” I’ll bet.

Pretty darn cool, though if that Kinect dodgeball demo isn’t Centrifugal Bumble-Puppy come to life, I don’t know what is.

Here’s more info on using a Kinect as a 3D scanner:

[Via]

8:39 AM | Permalink | Comments [1]

August 09, 2012

Adobe & MIT team up on Halide, a new imaging language

Last month I broke the somewhat sad news that Adobe’s Pixel Bender language is being retired, but for a good cause: we can now redirect effort & try other ways to achieve similar goals. To that end, Adobe researchers have teamed up with staff at the Massachusetts Institute of Technology to define Halide, a new programming language for imaging. It promises faster, more compact, and more portable code.

According to MIT News,

In tests, the MIT researchers used Halide to rewrite several common image-processing algorithms whose performance had already been optimized by seasoned programmers. The Halide versions were typically about one-third as long but offered significant performance gains — two-, three-, or even six-fold speedups. In one instance, the Halide program was actually longer than the original — but the speedup was 70-fold.

Hot damn. #progress
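For a taste of Halide’s central idea – the algorithm (what to compute) lives apart from the schedule (how to order, vectorize, and parallelize it) – here’s a tiny sketch using Halide’s present-day Python bindings; treat the exact API as version-dependent:

```python
import halide as hl  # pip install halide; API details vary across versions

x, y = hl.Var("x"), hl.Var("y")

# Algorithm: *what* to compute -- a toy gradient image.
f = hl.Func("f")
f[x, y] = x + y

# Schedule: *how* to compute it. Swapping schedules changes speed, not results.
f.vectorize(x, 8)
f.parallel(y)

out = f.realize([1024, 768])  # JIT-compile and run
```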

8:24 AM | Permalink | Comments [2]

June 12, 2012

Exaggerating motion in video

Normally we work so hard to reduce motion in video (e.g. bringing the awesome Warp Stabilizer from After Effects to Premiere Pro CS6). There are cases, though (e.g. monitoring a heartbeat, or the breathing of a baby) where one wants to do just the opposite. Here’s an interesting demo:

[Via Pedro Estarque]

6:57 AM | Permalink | Comments [3]

May 31, 2012

New Adobe tech for making cinemagraphs

You know cinemagraphs, “still photographs in which a minor and repeated movement occurs”?  They can be extremely cool, but creating them is tricky.

Now Adobe researcher Aseem Agarwala & colleagues at UC Berkeley have devised “a semi-automated technique for selectively de-animating video to remove the large-scale motions of one or more objects so that other motions are easier to see.” It’s easier seen than described:

From the project site:

The user draws strokes to indicate the regions of the video that should be immobilized, and our algorithm warps the video to remove the large-scale motion of these regions while leaving finer-scale, relative motions intact. However, such warps may introduce unnatural motions in previously motionless areas, such as background regions. We therefore use a graph-cut-based optimization to composite the warped video regions with still frames from the input video; we also optionally loop the output in a seamless manner.

Our technique enables a number of applications such as clearer motion visualization, simpler creation of artistic cinemagraphs (photos that include looping motions in some regions), and new ways to edit appearance and complicated motion paths in video by manipulating a de-animated representation. We demonstrate the success of our technique with a number of motion visualizations, cinemagraphs and video editing examples created from a variety of short input videos, as well as visual and numerical comparison to previous techniques.
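The warp-plus-graph-cut machinery is what makes the results seamless. For contrast, here’s the crude DIY cinemagraph trick (mask-and-freeze, no warping) – it shows why the research matters, since raw masking tears at the boundary whenever the “live” region drifts:

```python
import numpy as np

def naive_cinemagraph(frames, motion_mask):
    """Hold one still frame everywhere, and let video through only inside
    the mask. The paper adds content-preserving warps + graph cuts so the
    live region doesn't tear at the edges. motion_mask: HxW floats in [0,1]."""
    m = motion_mask[..., None]
    still = frames[0].astype(np.float32)
    return [(m * f.astype(np.float32) + (1 - m) * still).astype(np.uint8)
            for f in frames]
```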

8:42 AM | Permalink | Comments [4]

March 10, 2012

Mercedes makes a real-world Content-Aware Fill

Brilliant use of LEDs & cameras:

[Via Rob Cantor]

8:20 AM | Permalink | Comments [1]

February 21, 2012

Scalado Remove promises handheld tourist-zapping

About five years ago we gave Photoshop the ability to stack multiple images together, then eliminate moving or unwanted details. Similar techniques have appeared in other tools, and now it appears you’ll be able to do all the capture & processing with just your phone. Here’s a quick preview:

The Verge has a bit more detail on the user experience. [Via John Dowdell]
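The underlying stacking trick (as in Photoshop’s stack modes) is delightfully simple: align the shots, then take a per-pixel median so anything that keeps moving drops out. A minimal sketch, assuming the frames are already aligned:

```python
import numpy as np

def remove_movers(aligned_frames):
    """Per-pixel median across aligned shots of the same scene: tourists,
    cars, and other transients are outliers at any given pixel, so the
    median keeps the static background."""
    stack = np.stack(aligned_frames).astype(np.float32)
    return np.median(stack, axis=0).astype(np.uint8)
```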

8:40 AM | Permalink | Comments [11]

November 30, 2011

“Photoshopped or Not? A Tool to Tell”

My longtime boss Kevin Connor left Adobe earlier this year to launch a startup, Fourandsix, aimed at “revealing the truth behind every photograph.” Now his co-founder (and Adobe collaborator) Hany Farid has published some interesting research:

Dr. Farid and Eric Kee, a Ph.D. student in computer science at Dartmouth, are proposing a software tool for measuring how much fashion and beauty photos have been altered, a 1-to-5 scale that distinguishes the infinitesimal from the fantastic. Their research is being published this week in a scholarly journal, The Proceedings of the National Academy of Sciences.

Check out the interactive presentation of before & after images. Details are on the NY Times.

8:12 AM | Permalink | Comments [4]

November 07, 2011

Mobile facial recognition promises clever new apps

Check it out:

Petapixel writes,

The video at the top of this post is a Polar Rose demo of an app called “Recognizr”, which recognizes people’s faces and provides you with links to their social media accounts.

Imagine a world where every person on the street can be identified by simply pointing your phone at their face. Curious about a stranger? Point your camera at them to pull up their Facebook profile. People who had concerns over facial recognition in Facebook photos are going to have a fit about this one…

I remain eager to see what developers can do in terms of building photography & design apps. If you see anything cool, give a shout.

10:56 AM | Permalink | Comments [6]

November 05, 2011

Demo: Microsoft’s 3D “Holodesk”

This project leverages a Kinect sensor to let you manipulate 3D virtual objects with your hands:

[Via]

8:19 AM | Permalink | Comments [2]

November 03, 2011

Sneak peek: Adobe image recognition technology

Researcher Jon Brandt demos a potential new feature for searching through a large library of images by identifying images that contain the same people, backgrounds, landmarks, etc.:

11:11 AM | Permalink | Comments [5]

October 25, 2011

Amazing tech for turning video into 3D

I will never get over how lucky I am to work with people like this:

In this video, Sylvain Paris will show you a sneak peek of a potential feature for editing videos, including the ability to create 3D fly-throughs of 2D videos and change focus and depth of field.

8:08 AM | Permalink | Comments [8]

October 24, 2011

Research: Auto-selecting good stills from a video

A couple of years ago, Esquire shot a magazine cover using not a still camera but a high-res RED video camera. What was groundbreaking becomes commonplace, and as video capture resolution increases, so does the possibility of pulling stills from footage to use as photos.

To make that easier, Adobe engineers & University of Washington researchers are collaborating on a method of automatically finding the best candid shots in a video clip. Check it out:

Very cool–though I continue to suspect there’s a market for auto-selecting the most ridiculous, unflattering images of one’s friends…
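The research scores frames on much richer criteria, but even a naive proxy – ranking frames by a cheap blur measure – shows the shape of the problem:

```python
import cv2

def sharpness(frame):
    """Variance of the Laplacian: a standard quick-and-dirty focus score."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def best_stills(frames, k=5):
    # The real system also weighs faces, pose, and composition;
    # this is only the 'not blurry' part.
    return sorted(frames, key=sharpness, reverse=True)[:k]
```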

8:25 AM | Permalink | Comments [3]

October 18, 2011

Eye-popping tech for inserting 3D objects into photos

“With a single image and a small amount of annotation,” writes researcher Kevin Karsch, “our method creates a physical model of the scene that is suitable for realistically rendering synthetic objects.” Fascinating:

Check out the project site for much more detailed info. [Via Zorana Gee]

8:17 AM | Permalink | Comments [3]

October 17, 2011

Adobe demos amazing deblurring tech (new video)

Last week over a million people (!) watched a handheld recording of this demo. Here’s a far clearer version*:

And here’s a before/after image (click for higher resolution):

Now, here’s the thing: This is just a technology demo, not a pre-announced feature. It’s very exciting, but much hard work remains to be done. Check out details right from the researchers via the Photoshop.com team blog. [Update: Yes, it's real. See the researchers' update at the bottom of the post.]

* Downside of this version: Bachman Turner Overdrive. Upside: Rainn Wilson.

1:26 PM | Permalink | Comments [17]

October 16, 2011

The Throwable Panoramic Ball Camera

Interesting concept:

The Throwable Panoramic Ball Camera captures a full spherical panorama when thrown into the air. At the peak of its flight, which is determined using an accelerometer, a full panoramic image is captured by 36 mobile phone camera modules.

[Via Jeff Tranberry]
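The apex trick is tidy physics: integrate the accelerometer reading during the throw to get launch velocity v, and the peak arrives v/g seconds after release. A sketch of my reading of the idea (release detection and filtering omitted):

```python
def time_to_apex(throw_accels_mps2, dt, g=9.81):
    """Predict seconds from release to the peak of flight.
    throw_accels_mps2: vertical acceleration samples during the throw,
    taken every dt seconds; after release the ball is in free fall."""
    launch_velocity = sum(a * dt for a in throw_accels_mps2)
    return launch_velocity / g

# e.g. a 0.2 s throw at a steady 50 m/s^2 -> 10 m/s -> apex ~1.02 s later
print(time_to_apex([50.0] * 20, 0.01))
```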

8:26 AM | Permalink | Comments [8]

September 21, 2011

Video: Bizarre face-substitution technology

God that’s creepy. I want to look away… but I cannot.

Creator Kyle McDonald credits Jason Saragih’s FaceTracker library, the ofxFaceTracker addon, and openFrameworks. [Via]

10:03 AM | Permalink | Comments [5]

August 22, 2011

“Really Being John Malkovich”

Adobe researcher Eli Shechtman & collaborators have created this excellent bit of madness:

Given a photo of person A, we seek a photo of person B with similar pose and expression. Solving this problem enables a form of puppetry, in which one person appears to control the face of another.

Now, let’s see if we can pry a webcam version out of them… [Via]

8:02 AM | Permalink | Comments [2]

July 28, 2011

Exhibit: Photo Tampering Throughout History

You can build a business manipulating photos; how about building one by detecting those manipulations?

My longtime boss Kevin Connor was instrumental in building Photoshop, Lightroom, and PS Elements into the successes they are today, and he taught me the ropes of product management. After 15 years he was ready to try starting his own company, so this spring he teamed up with Dr. Hany Farid (“the father of digital image forensics,” said NOVA). Together they’ve started forensics company Fourandsix (get the pun?), aimed at “revealing the truth behind every photograph.”

Now they’ve put up Photo Tampering Throughout History, an interesting collection of famous (and infamous) forgeries & manipulations from Abraham Lincoln’s day to the present. Numerous examples include before & after images plus brief histories of what happened.

I wish Kevin & Hany great success in this new endeavor, and I can’t wait to see the tools & services they introduce.

Related/previous:

10:08 AM | Permalink | Comments [1]

June 30, 2011

Google adds “Search by Image”

Ah–I’d been wondering what that little camera icon in the Google Images search field meant. As the company explains,

You might have an old vacation photo, but forgot the name of that beautiful beach. Typing [guy on a rocky path on a cliff with an island behind him] isn’t exactly specific enough to help find your answer. So when words aren’t as descriptive as the image, you can now search using the image itself.

Or, “Any sufficiently advanced technology is indistinguishable from magic.” — Arthur C. Clarke

2:08 PM | Permalink | Comments [6]

April 13, 2011

Photography: Shallow depth of field on iPhone

Stanford professor & occasional Photoshop team collaborator Marc Levoy has created SynthCam, an interesting tool for simulating large-aperture photo effects using a tiny-aperture cell phone camera:

For more examples, tutorials, etc., see Marc’s site. [Via]

11:22 AM | Permalink | Comments [4]

January 29, 2011

Lovely fractal animation

A little amuse l’oeil for Saturday morning:

[Via]

7:44 AM | Permalink | Comments [5]

January 07, 2011

Trimensional: 3-D Object Scanning for iPhone

Fascinating:

Here’s more info. [Via] As more and more devices can capture and display 3D data, I think it’ll become clearer why we’ve invested in giving Photoshop a 3D infrastructure.

11:52 AM | Permalink | Comments [8]

Video: Show your bones

Crafty German folks + gaming hardware = Creepy good times.

“The cross section isn’t actually the user’s skeleton but a volume visualization of a medical data set,” notes PCWorld. Here’s more info on the Medical Augmented Reality project.

9:01 AM | Permalink | Comments [1]

November 29, 2010

Sneak peeks: New Adobe digital imaging tech

At Adobe MAX last month, digital imaging researcher Sylvain Paris showed off some tech he & colleagues are cooking up in Adobe’s Boston office. Here he touches on color/tone matching between photos; more sophisticated auto-correction of color and tone (based on analyzing thousands of adjustments made by pro photographers); and image de-blurring:

Lots of other really interesting MAX sneaks are collected here.

8:13 AM | Permalink | Comments [9]

November 23, 2010

Video: Body Dysmorphia, Xbox edition

I’ve been a fan of Robert Hodgin’s visual experiments for many years, and now he’s creating some intriguing work by hacking a Microsoft Kinect:

See also his Dueling Kinects demo. (And I’m probably alone in this, but these weird characters are giving me flashbacks of the bad guy in RoboCop 2.) [Via]

12:41 PM | Permalink | Comments [2]

October 19, 2010

Bokode: Using bokeh to transmit info

What an interesting idea: researchers are using bokeh (lens blur), instead of sharpness, to transmit lots of data into cameras (think barcodes/QR codes on steroids). Check out the demo:

Check out the project page for more details. [Via]

1:28 PM | Permalink | Comments [1]

October 18, 2010

“Diminished Reality” removes objects from video in real time

Interesting tech, to say the least:

More info (in German) is on the project site. [Via Tobias Hoellrich]

9:02 AM | Permalink | Comments [5]

October 15, 2010

Todor talks plenoptic imaging

Did you know that the Photoshop team has a resident theoretical physicist? If you’d like to meet him, check out next Thursday’s Silicon Valley ACM SIGGRAPH talk:

Recently we and others have gained deeper understanding of the fundamentals of the plenoptic camera and Lippmann sensor. As a result, we have developed new rendering approaches to improve resolution, remove artifacts, and render in real time. By capturing multiple modalities simultaneously, our camera captures images that are focusable after the fact and which can be displayed in multi view stereo. The camera can also be configured to capture HDR, polarization, multispectral color and other modalities. With superresolution techniques we can even render results that approach full sensor resolution. During our presentation we will demonstrate interactive real time rendering of 3D views with after the fact focusing.

See previous video: Adobe demos refocusable images.

6:42 AM | Permalink | Comments [5]

October 07, 2010

Video: Reshaping human bodies on the move

Built at last, built at last, thank God almighty, I’ll be built at last… According to Popular Science:

Developers at the Max Planck Institute for Informatics in Saarbrücken, Germany compiled 3D scans of 120 men and women of varying sizes, merging them into a single model that can be morphed to any shape and overlaid atop original footage.

The software, called MovieReshape, builds on existing programs that track an actor’s silhouette through a scene, mapping the body into a morphable model. Using the compiled 3D scans, the program can create realistic-looking and moving body parts to the programmer’s specifications.

Check out the project site for more info. [Via Jerry Harris]
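If “morphable model” sounds exotic, the core is PCA over scans in vertex correspondence: new bodies are the mean shape plus weighted combinations of the main modes of variation. A few lines of numpy convey the idea (my simplification, not the MovieReshape code; assumes every scan shares one vertex ordering):

```python
import numpy as np

def build_morphable_model(scans):
    """scans: list of (V, 3) vertex arrays in correspondence.
    Returns a function mapping mode weights -> a new (V, 3) body."""
    X = np.stack([s.ravel() for s in scans])        # (num_scans, 3V)
    mean = X.mean(axis=0)
    _, _, modes = np.linalg.svd(X - mean, full_matrices=False)

    def make_body(weights):
        # Taller/heavier/etc. bodies are reached by sliding along the modes.
        return (mean + np.asarray(weights) @ modes[:len(weights)]).reshape(-1, 3)

    return make_body
```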

10:47 PM | Permalink | Comments [2]

September 25, 2010

Adobe demos refocusable images

At NVIDIA’s technology conference this week, Adobe researcher Todor Georgiev demonstrated GPU-accelerated processing of plenoptic images. As Engadget puts it, “Basically, a plenoptic lens is composed of a litany of tiny “sub-lenses,” which allow those precious photons you’re capturing to be recorded from multiple perspectives.” Plenoptic image capture could open the door to easier object matting/removal (as the scene can be segmented by depth), variable perspective after capture, and more.

This brief demo takes a little while to get going, but I still think it’s interesting enough to share.
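“Focusable after the fact” rests on a simple rendering move, shift-and-add: slide each sub-lens view in proportion to its offset within the main lens, then average; the shift amount selects which depth lands in focus. A minimal numpy sketch (not Adobe’s GPU renderer):

```python
import numpy as np

def refocus(views, alpha):
    """views: dict mapping sub-lens offsets (u, v) -> HxWx3 uint8 images.
    alpha: pixels of shift per unit of lens offset; varying it refocuses."""
    acc = np.zeros_like(next(iter(views.values())), dtype=np.float32)
    for (u, v), img in views.items():
        acc += np.roll(img, (round(alpha * v), round(alpha * u)), axis=(0, 1))
    return (acc / len(views)).astype(np.uint8)
```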

6:57 AM | Permalink | Comments [9]

July 21, 2010

Video: Local layering ideas

Jim McCann is a graphics researcher (you might remember his interesting work with gradient-domain painting), and I’m happy to say he’s joining the Adobe advanced technology staff. He has some ideas about dealing with the limitations of traditional graphical layering models (as seen in Photoshop, After Effects, Flash, etc.):

For more videos & papers on the subject, check out the project page. [Via Jerry Harris]

9:50 AM | Permalink | Comments [10]

July 20, 2010

Video: Technicolor dreamglove controls a computer

When he’s not helping bring Puppet Warp to Photoshop, Jovan Popovic does interesting work at MIT in computer interfaces and… fashion?

More videos & info are on the project team’s site.

[Via]

9:19 AM | Permalink | No Comments

July 16, 2010

Video: HDR relighting technology

Here’s some very cool imaging tech, though it’ll be interesting to see how many people will take the time to create multiple exposures, each with different controlled lighting:

If this is up your alley, check out a paper and video on the subject that some Adobe researchers put together a couple of years back.

6:22 AM | Permalink | Comments [4]

April 07, 2010

Video: Automated reshaping of human bodies

Oh my; how long until we see the Ralph Lauren EmaciatorPro(™) Edition?

Here’s more info on the project. (And no, unlike Puppet Warp, it’s not a CS5 thing.) [Via Jerry Harris]

10:00 AM | Permalink | Comments [8]

March 03, 2010

“Enhance!” Redux

Heh–here’s a nice little satire of phony image enhancement on TV (see previous montage):

Of course, image scientists continue to work on all sorts of new craziness, so it’s all just a matter of time… right?

4:50 PM | Permalink | Comments [3]

February 22, 2010

Video: “A computational model of aesthetics”

People always like to joke about Photoshop eventually adding a big red “Make My Photo Good” button, automatically figuring out what looks good & what adjustments are needed. Of course, researchers are working on just that sort of thing:

As someone who aspires to be creative, I have mixed feelings. The idea of rating images according to precomputed standards of beauty makes me think of the Robin Williams character in Dead Poets Society excoriating a textbook that rated poetry along two axes:

Excrement! That’s what I think of Mr. J. Evans Pritchard! We’re not laying pipe! We’re talking about poetry. How can you describe poetry like American Bandstand? “I like Byron, I give him a 42 but I can’t dance to it!”

And yet, I find I’m intrigued by the idea, wanting to run the algorithm on my images–if only, maybe, to have fun flouting it. I also have to admit that I’d like to see the images taken by certain of my family members (not you, hon) run through such algorithms–if only to crop in on the good stuff.
[Via Jerry Harris]

1:18 PM | Permalink | Comments [21]

January 04, 2010

Photos to sound & back again

  • A technology called Photosounder can treat images as audio (demo). “Sounds, once turned into images,” they say, “can be powerfully modified to achieve effects and results that couldn’t be obtained in any other way, while images of all sorts reveal the infinite kinds of otherworldly sounds they contain.” [Via]
  • In a related vein, scientists have turned dolphin calls into kaleidoscopic patterns. (Note the image gallery navigation controls on the right.) [Via]
5:19 PM | Permalink | Comments [5]

October 20, 2009

Video: New from Adobe Labs, Content-Aware Fill in Photoshop


You like? :-) (Here’s some more background on the technology.) To see higher-res detail, I recommend hitting the full-screen icon or visiting the Facebook page that hosts the video.
As with all such sneak peeks, I have to be really clear in saying that this is just a demo, and as such it’s not a promise that a technology will ship inside a particular version of Photoshop. (As the late Mac columnist Don Crabb told me years ago, “There’s many a slip ‘twixt cup & lip.”) Still, it’s fun to show some of the stuff with which we’re experimenting.

10:56 PM | Permalink | Comments [33]

October 07, 2009

PhotoSketch: Internet Image Montage

Oh, that’s rather cool, then:

I’ve seen various experiments at Adobe that fetch & automatically composite images, but the idea of basing searches on sketches is new to me. Details are in the researchers’ paper (PDF).
Almost completely unrelated, but in the spirit of cool image science, during last night’s sneak peeks at Adobe MAX, Dan Goldman showed a little taste of “PatchMatch” (“content-aware healing”) integrated into Photoshop. (As always, no promises, this is just a test, yadda yadda.)

11:30 AM | Permalink | Comments [7]

July 10, 2009

Wide-angle image correction tech

Adobe researcher Aseem Agarwala, working with Maneesh Agrawala & Robert Carroll at Berkeley, has demonstrated techniques to enable “Content-Preserving Projections for Wide-Angle Images.” That may sound a little dry, but check out the demo video (10MB QT) to see how the work enables extremely wide-angle photography. [Via Dan Goldman]
Aseem contributed the depth-of-field extension feature to Photoshop CS4. For previous entries showing advanced imaging work, check out this blog’s Image Science category.

6:30 AM | Permalink | Comments [1]

July 01, 2009

Super cool video stabilization technology

Adobe researchers Hailin Jin and Aseem Agarwala*, collaborating with U.Wisconsin prof. Michael Gleicher & Feng Liu, have unveiled their work on “Content-Preserving Warps for 3D Video Stabilization.” In other words, their tech can give your (and my) crappy hand-held footage the look of a Steadicam shot.

Check out the demonstration video, shot at & around Adobe’s Seattle office. (Hello, Fremont Lenin!) It compares the new technique to what’s available in iMovie ’09 and other commercial tools.

As with all research papers/demos, I should point out that making technology ready for real-world use can require plenty of additional work & tuning. Still, these developments are encouraging. [Via]

[Previously: Healing Brush & Content-Aware Scaling on (really good) drugs.]

* If you’ve created a panorama using Photoshop, you’ve used Hailin’s (image alignment) and Aseem’s (image blending) work.

6:58 AM | Permalink | Comments [8]

June 01, 2009

Image science radness o’ the day

“This is your Healing Brush.
“This is your Content-Aware Scaling.

“*This* is your Healing Brush & Content-Aware Scaling on (really good) drugs…”

Adobe researchers Eli Shechtman & Dan Goldman, working together with Prof. Adam Finkelstein from Princeton & PhD student Connelly Barnes, have introduced PatchMatch, “A Randomized Correspondence Algorithm for Structural Image Editing.”

No, I wouldn’t know a randomized correspondence algorithm for structural image editing if it bit me on the butt, either, but just check out the very cool video demo. More details are in the paper (one of the 17 papers featuring Adobe collaboration presented at SIGGRAPH this year).
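Actually, the core of PatchMatch is friendlier than its title: start every patch in image A with a random match in image B, then sweep the image, alternately propagating good matches from neighbors and trying random jumps in shrinking windows. A compact, deliberately slow educational version (Barnes et al.’s real code is heavily optimized):

```python
import numpy as np

def patch_dist(A, B, ax, ay, bx, by, p):
    da = A[ay:ay + p, ax:ax + p].astype(np.float32)
    db = B[by:by + p, bx:bx + p].astype(np.float32)
    return float(((da - db) ** 2).sum())

def patchmatch(A, B, p=7, iters=5, seed=0):
    """Approximate nearest-neighbor field: for each pxp patch of grayscale
    image A, the coordinates of a similar patch in B."""
    rng = np.random.default_rng(seed)
    h, w = A.shape[0] - p + 1, A.shape[1] - p + 1
    H, W = B.shape[0] - p + 1, B.shape[1] - p + 1
    nnf = np.stack([rng.integers(0, W, (h, w)),
                    rng.integers(0, H, (h, w))], axis=-1)   # random init
    cost = np.array([[patch_dist(A, B, x, y, *nnf[y, x], p)
                      for x in range(w)] for y in range(h)])
    for it in range(iters):
        d = 1 if it % 2 == 0 else -1            # alternate scan direction
        ys = range(h) if d == 1 else range(h - 1, -1, -1)
        xs = range(w) if d == 1 else range(w - 1, -1, -1)
        for y in ys:
            for x in xs:
                # Propagation: a neighbor's match, shifted over by one,
                # is probably good for us too.
                for nx, ny in ((x - d, y), (x, y - d)):
                    if 0 <= nx < w and 0 <= ny < h:
                        bx = int(np.clip(nnf[ny, nx, 0] + (x - nx), 0, W - 1))
                        by = int(np.clip(nnf[ny, nx, 1] + (y - ny), 0, H - 1))
                        c = patch_dist(A, B, x, y, bx, by, p)
                        if c < cost[y, x]:
                            nnf[y, x], cost[y, x] = (bx, by), c
                # Random search: sample exponentially shrinking windows.
                r = max(W, H)
                while r >= 1:
                    bx = int(np.clip(nnf[y, x, 0] + rng.integers(-r, r + 1), 0, W - 1))
                    by = int(np.clip(nnf[y, x, 1] + rng.integers(-r, r + 1), 0, H - 1))
                    c = patch_dist(A, B, x, y, bx, by, p)
                    if c < cost[y, x]:
                        nnf[y, x], cost[y, x] = (bx, by), c
                    r //= 2
    return nnf
```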

So, what do you think? [Via]

11:57 AM | Permalink | Comments [13]

April 10, 2009

Adobe papers light up SIGGRAPH

I was excited to hear that researchers at Adobe contributed 22% of all papers accepted at SIGGRAPH this year. That’s a pretty incredible accomplishment*. In addition, Wojciech Matusik has been selected as this year’s recipient of the ACM SIGGRAPH Significant New Research Award. Congrats, guys!
The company has been making significant investments & attracting top talent in this area in recent years, and it’s great to see those efforts bearing fruit. It’ll be even better when we start harvesting more of this research as real-world features in Photoshop and other apps–and believe me, we’re working to do just that.
* By way of comparison, Microsoft had 6 papers accepted this year (vs. Adobe’s 17). Microsoft has 90,000 employees; Adobe has 7,000.

10:48 AM | Permalink | Comments [12]

December 13, 2008

Adobe previews “Infinite Images” technology

Remember Shai Avidan, the co-creator of seam carving (Content-Aware Scaling) who joined Adobe last year?  Just as he did at Adobe MAX last year, Shai took to the stage this year with an eye-catching demo.  Collaborating with Prof. Bill Freeman and a team from MIT, Shai has been working on "Infinite Images," "a system for exploring large collections of photos in a virtual 3D space."  The team writes:

 

Our system does not assume the photographs are of a single real 3D location, nor that they were taken at the same time. Instead, we organize the photos in themes, such as city streets or skylines, and let users navigate within each theme using intuitive 3D controls that include pan, zoom and rotate…

We present results on a collection of several million images downloaded from Flickr and broken into themes that consist of a few hundred thousand images each. A byproduct of our system is the ability to construct extremely long panoramas, as well as image taxi, a program that generates a virtual tour between user-supplied start and finish images.

 

To read up on some details, check out the PDF (shared via Acrobat.com):

You could also visit Shai’s site to read up on “Non-Parametric Acceleration, Super-Resolution, and Off-Center Matting,” not to mention “Part Selection with Sparse Eigenvectors”–but I’d recommend being a lot smarter than I am. ;-) (We just may have to name our next child “Eigenvector.”)

9:52 AM | Permalink | Comments [5]

December 04, 2008

Promising video research from Adobe

"Dan Goldman is an old friend of mine from ILM," writes FX pro Stu Maschwitz.  "He now works for Adobe’s top-secret G*d Dammit Put This In A Product Now division."  Check out Dan’s Interactive Video Object Manipulation demo to see if you agree.  (Now that Photoshop Extended can work with video, it’s fun to imagine the possibilities.  No promises, of course.)

10:12 AM | Permalink | Comments [6]

September 09, 2008

Colliding hadrons, sinking subways, & more


9:48 PM | Permalink | No Comments

June 29, 2008

Hot image science o’ the day

Pravin Bhat & friends at the University of Washington have put together a rather eye-popping video that demonstrates Using Photographs to Enhance Videos of a Static Scene.  I think you’ll dig it.  (The removal of the No Parking sign is especially impressive.) [Via Jeff Tranberry]

 

The work builds upon research by Adobe’s Aseem Agarwala (who was instrumental in bringing Auto-Blend to Photoshop CS3).  Adobe Senior Principal Scientist (and UW prof.) David Salesin is helping facilitate more collaboration between Adobe teams & academia, recruiting full-time hires like Aseem & sponsoring visiting researchers like Hany Farid.

(Note: As always, please don’t take my mentioning of various tech demos as a hint about any specific feature showing up in a particular Adobe product. I just post things that I find interesting & inspiring.)

Previously:

1:32 PM | Permalink | Comments [6]

June 08, 2008

Cool painting tech demo o’ the day

Photoshop engineer Jerry Harris is responsible for the application’s painting tools, and he’s always got an eye open for interesting developments in the field of computerized painting.  This morning he passed along a cool demo video of James McCann and Nancy Pollard’s Real-time Gradient-domain Painting technology.

 

In a nutshell, according to the video, "A gradient brush allows me to paint with intensity differences.  When I draw a stroke, I am specifying that one side is lighter than the other."  Uh, okay… And the video is a little ho-hum until the middle.  That’s when things get rather cool.  Check out cloning/duplicating pixels along a path, plus the interesting approach to painting a band of color.
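If “painting with intensity differences” sounds abstract: gradient-domain tools let you edit an image’s gradients, then solve a Poisson equation to get pixels back. A bare-bones (and deliberately slow) reconstruction step, assuming periodic boundaries for brevity – McCann & Pollard use multigrid to make this interactive:

```python
import numpy as np

def integrate_gradients(gx, gy, iters=2000):
    """Recover an image whose gradients best match (gx, gy) by Jacobi
    iteration on the Poisson equation. Educational, not interactive-speed."""
    div = np.zeros_like(gx, dtype=np.float32)   # divergence of target field
    div += gx - np.roll(gx, 1, axis=1)          # d(gx)/dx
    div += gy - np.roll(gy, 1, axis=0)          # d(gy)/dy
    img = np.zeros_like(div)
    for _ in range(iters):
        neighbors = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                     np.roll(img, 1, 1) + np.roll(img, -1, 1))
        img = (neighbors - div) / 4.0
    return img
```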

10:31 AM | Permalink | Comments [13]

August 28, 2007

Imaging heavy hitters join Adobe

A number of rock stars from the world of image science have recently joined Adobe:

Adobe Senior Principal Scientist David Salesin, who manages this crew, notes that "If you count their SIGGRAPH papers as well, you’ll see that current Adobe employees had 11 of the 108 papers in the conference."

Now, let me inject a disclaimer:  Just because a particular researcher has worked on a particular technology in his or her past life, it’s not possible to conclude that a specific feature will show up in a particular Adobe product.  How’s that for non-committal? ;-)  In any case, it’s just exciting that so many smart folks are joining the team (more brains to hijack!).

[Update: Cambridge, MA-based Xconomy provides additional context for this news.]

2:38 PM | Permalink | Comments [19]

August 19, 2007

“Holy crap”-worthy imaging technology

Wow–now this I haven’t seen before: Israeli brainiacs Shai Avidan and Ariel Shamir have created a pretty darn interesting video that demonstrates their technique of "Seam Carving for Content-Aware Image Resizing."  When scaling an image horizontally or vertically (e.g. making a panorama narrower), the technology looks for paths of pixels that can be removed while causing the least visual disruption.  Just as interesting, if not more so, I think, is the way the technology can add pixels when increasing image dimensions.  Seriously, just check out the video; I think you’ll be blown away.  (More info is in a 20MB PDF, in which they cite work by Adobe’s Aseem Agarwala–the creator of Photoshop CS3’s Auto-Blend Layer code.) [Via Geoff Stearns]
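Those “paths of pixels” are seams found by dynamic programming over an energy map – compact enough to sketch in full (grayscale version, gradient magnitude as energy):

```python
import numpy as np

def remove_vertical_seam(gray):
    """One step of seam carving: DP finds the connected top-to-bottom path
    of lowest 'energy' (gradient magnitude), i.e. the path whose removal
    disturbs the image least, then deletes it."""
    gy, gx = np.gradient(gray.astype(np.float32))
    energy = np.abs(gx) + np.abs(gy)
    h, w = energy.shape
    cost = energy.copy()
    for y in range(1, h):                       # accumulate cheapest paths
        left = np.roll(cost[y - 1], 1)
        left[0] = np.inf
        right = np.roll(cost[y - 1], -1)
        right[-1] = np.inf
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
    seam = np.empty(h, dtype=int)               # backtrack from the bottom
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    keep = np.ones((h, w), dtype=bool)
    keep[np.arange(h), seam] = False
    return gray[keep].reshape(h, w - 1)
```

(Adding pixels works the same way in reverse: find low-energy seams and duplicate them.)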

I hope to share more good stuff from SIGGRAPH soon.  While I was being stuffed with ham sandwiches by kindly Irish folks, a number of Adobe engineers were speaking at & exploring the show.  Todor Georgiev, one of the key minds behind the Healing Brush, has been busily gluing together his own cutting edge optical systems.  More on that soon.

11:43 PM | Permalink | Comments [19]

March 05, 2007

Digital imaging goes to court

CNET reported recently on a court case that involved image authentication software as well as human experts, both seeking to distinguish unretouched photographs from those created or altered using digital tools.  After disallowing the software, written by Hany Farid & his team at Dartmouth, the judge ultimately disallowed a human witness, ruling that neither one could adequately distinguish between real & synthetic images.  The story includes some short excerpts from the judge’s rulings, offering some insight into the legal issues at play (e.g. "Protected speech"–manmade imagery–"does not become unprotected merely because it resembles the latter"–illegal pornography, etc.).

As I’ve mentioned previously, Adobe has been collaborating with Dr. Farid & his team for a few years, so we wanted to know his take on the ruling.  He replied,

The news story didn’t quite get it right. Our program correctly classifies about 70% of photographic images while correctly classifying 99.5% of computer-generated images. That is, an error rate of 0.5%. We configured the classifier in this way so as to give the benefit of the doubt to the defendant. The prosecutor decided not to use our testimony because of other reasons, not because of a high error rate.

The defense argues that the lay person cannot tell the difference between photographic and CG images. Following this ruling by Gertner, we performed a study to see just how good human subjects are at distinguishing the two. They turn out to be surprisingly good.  Here is a short abstract describing our results. [Observers correctly classified 83% of the photographic images and 82% of the CG images.]
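That “benefit of the doubt” configuration is just a choice of decision threshold: slide it until almost no computer-generated image gets called a photo, and accept that more real photos go unclassified. In sketch form, with hypothetical classifier scores as input:

```python
import numpy as np

def defendant_friendly_threshold(cg_scores, max_cg_error=0.005):
    """Given scores for known computer-generated images (higher = 'more
    photographic'), pick the cutoff so at most 0.5% of CG images score
    above it -- Farid's 99.5% figure. Real photos below the cutoff go
    uncounted, which is why only ~70% of them are caught."""
    return float(np.quantile(cg_scores, 1.0 - max_cg_error))
```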

Elsewhere in the world of "Fauxtography" and image authenticity:

  • In the wake of last summer’s digital manipulation blow-up, Reuters has posted guidelines on what is–and is not–acceptable to do to an image in Photoshop. [Via]
  • Calling it "’The Most Culturally Significant Feature’ of Canon’s new 1D MkIII," Micah Marty heralds "the embedding of inviolable GPS coordinates into ‘data-verifiable’ raw files."
  • Sort of the Ur-Photoshop: This page depicts disappearing commissars and the like from Russia, documenting the Soviet government’s notorious practice of doctoring photos to remove those who’d fallen from favor. [Via]
  • These practices know no borders, as apparently evidenced by a current Iranian controversy, complete with Flash demo. [Via Tom Hogarty]
  • Of course, if you really want to fake people out, just take a half-naked photo of yourself, mail it to the newspaper, and tell them that it’s a Gucci ad. Seems to work like a charm. [Via]

[Update: PS--Not imaging but audio: Hart Shafer reports on Adobe Audition being used to confirm musical plagiarism.]

3:50 PM | Permalink | Comments [3]