March 05, 2007

Digital imaging goes to court

CNET reported recently on a court case that involved image authentication software as well as human experts, both seeking to distinguish unretouched photographs from those created or altered using digital tools. After disallowing the software, written by Hany Farid & his team at Dartmouth, the judge ultimately disallowed a human witness as well, ruling that neither could adequately distinguish between real & synthetic images. The story includes some short excerpts from the judge’s rulings, offering some insight into the legal issues at play (e.g., "Protected speech" [manmade imagery] "does not become unprotected merely because it resembles the latter" [illegal pornography], etc.).

As I’ve mentioned previously, Adobe has been collaborating with Dr. Farid & his team for a few years, so we wanted to know his take on the ruling.  He replied,

The news story didn’t quite get it right. Our program correctly classifies about 70% of photographic images while correctly classifying 99.5% of computer-generated images. That is, an error rate of only 0.5% on computer-generated images. We configured the classifier in this way so as to give the benefit of the doubt to the defendant. The prosecutor decided not to use our testimony for other reasons, not because of a high error rate.

The defense argues that the lay person cannot tell the difference between photographic and CG images. Following this ruling by Judge Gertner, we performed a study to see just how good human subjects are at distinguishing the two. They turn out to be surprisingly good. Here is a short abstract describing our results. [Observers correctly classified 83% of the photographic images and 82% of the CG images.]
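
To make that trade-off concrete, here is a minimal, purely illustrative sketch in Python. It is not Farid's actual classifier, and the features are random stand-ins; it only shows how a decision threshold can be chosen so that computer-generated images are almost never mislabeled as photographic, at the cost of missing some real photographs:

    # Illustrative only: a generic binary classifier on stand-in features,
    # operated at a threshold chosen so that at most ~0.5% of CG images
    # are (wrongly) called photographic -- the "benefit of the doubt" setting.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def fake_features(n, is_photo):
        # Stand-in feature vectors; a real system would use image statistics.
        return rng.normal(loc=0.6 if is_photo else -0.6, scale=1.0, size=(n, 16))

    # Label 1 = photographic, 0 = computer-generated.
    X_train = np.vstack([fake_features(1000, True), fake_features(1000, False)])
    y_train = np.concatenate([np.ones(1000), np.zeros(1000)])
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Fresh held-out sets for choosing and reporting the operating point.
    photo_scores = clf.predict_proba(fake_features(1000, True))[:, 1]
    cg_scores = clf.predict_proba(fake_features(1000, False))[:, 1]

    # Threshold = 99.5th percentile of CG scores, so only ~0.5% of CG
    # images score high enough to be labeled "photographic."
    threshold = np.quantile(cg_scores, 0.995)
    print("CG mislabeled as photographic:", (cg_scores >= threshold).mean())
    print("Photos labeled as photographic:", (photo_scores >= threshold).mean())

The specific percentages are reproduced here only by construction; the point is that the photo/CG error rates are a tunable trade-off, not a single accuracy number.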

Elsewhere in the world of "Fauxtography" and image authenticity:

[Update: PS–Not imaging but audio: Hart Shafer reports on Adobe Audition being used to confirm musical plagiarism.]

Posted by John Nack at 3:50 PM on March 05, 2007

Comments

  • Alex — 5:28 PM on March 05, 2007

    Now this is a fascinating and important issue. I offer a solution which I think is potentially viable. Every image is already tagged with information such as camera and shutter speed. So why can’t Adobe tag modifications made with Photoshop in the same way, creating a watermark of sorts as part of the raw color data, to identify the image as edited? There must be a way to do so without ‘dropping’ the information after a save and resave. Could you put it into the RGB color values themselves?
    [Photoshop does offer the ability to record a history log inside a file’s metadata. By necessity that’s a voluntary process, so it functions as a way to show one’s work, rather than to defeat tampering. A rough sketch of the pixel-level tagging idea appears after the comments. –J.]

  • Barry Pearson — 3:00 AM on March 06, 2007

    I was once told that a disadvantage of DNG was that it didn’t support such verification, and so could never be used for forensic work, etc.
    Is this true (or could sufficient information be stored in DNGPrivateData), and are changes being considered?
    [Good question; I’ll ask. A sketch of the general verification idea appears after the comments. –J.]

  • Peter Baird — 7:44 AM on March 08, 2007

    This Wired article looks interesting, and related: http://www.wired.com/news/technology/0,72883-0.html?tw=wn_index_1
    Can you comment on it at all, or are they just rehashing what you’ve already said?
    [At the moment I don’t have anything new to offer. –J.]
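
Two of the ideas raised in the comments can be sketched concretely. First, Alex’s suggestion of hiding an "edited" marker inside the pixel data itself: the snippet below is purely illustrative (the tag string and file name are hypothetical, and this is not how Photoshop records edits). It embeds a short tag in the least-significant bits of the RGB values; note that such a mark survives only lossless saves, and a JPEG re-save would destroy it, which is part of why metadata-based approaches like the history log are used instead.

    # Illustrative LSB "tag" embedding -- not a Photoshop or Adobe mechanism.
    import numpy as np
    from PIL import Image

    TAG = b"edited"  # hypothetical marker

    def embed_tag(img, tag):
        bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
        px = np.array(img.convert("RGB"), dtype=np.uint8)
        flat = px.reshape(-1)                                 # view into px
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs
        return Image.fromarray(px)

    def read_tag(img, length):
        flat = np.array(img.convert("RGB"), dtype=np.uint8).reshape(-1)
        return np.packbits(flat[:length * 8] & 1).tobytes()

    marked = embed_tag(Image.new("RGB", (64, 64), "gray"), TAG)
    marked.save("marked.png")                      # lossless; JPEG would break it
    print(read_tag(Image.open("marked.png"), len(TAG)))  # b'edited'

Second, Barry’s question about verification in DNG: whatever the format ultimately stores, the underlying idea is a cryptographic digest recorded at capture or ingest time and re-checked later. The sketch below illustrates only that general concept; the file name and sidecar convention are made up, and it says nothing about what DNG or DNGPrivateData actually contain.

    # Illustrative digest-based verification -- not a statement about DNG itself.
    import hashlib
    from pathlib import Path

    def file_digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    raw = Path("IMG_0001.dng")                   # hypothetical file
    raw.write_bytes(b"\x00" * 1024)              # stand-in bytes so this runs
    Path("IMG_0001.dng.sha256").write_text(file_digest(raw))  # record at ingest

    # Later: any modification to the file changes the digest.
    assert Path("IMG_0001.dng.sha256").read_text() == file_digest(raw)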
