{"id":30537,"date":"2023-07-03T19:06:38","date_gmt":"2023-07-03T19:06:38","guid":{"rendered":"https:\/\/precisebusinesssolutions.net\/how-easy-is-it-to-fool-a\/"},"modified":"2023-07-03T19:06:38","modified_gmt":"2023-07-03T19:06:38","slug":"how-easy-is-it-to-fool-a","status":"publish","type":"post","link":"https:\/\/reactlocal.com\/blog\/how-easy-is-it-to-fool-a\/","title":{"rendered":"How Easy Is It to Fool A"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/static01.nyt.com\/images\/2023\/06\/16\/technology\/ai-detection-midjourney-stable-diffusion-dalle-promo\/ai-detection-midjourney-stable-diffusion-dalle-promo-facebookJumbo-v3.jpg\"><br \/>\n      <\/p>\n<p>  \tThe pope did not wear Balenciaga. And filmmakers did not fake the moon landing. In recent months, however, startlingly lifelike images of these scenes created by artificial intelligence have spread virally online, threatening society\u2019s ability to separate fact from fiction.  <\/p>\n<p>  \tTo sort through the confusion, a fast-burgeoning crop of companies now offer services to detect what is real and what isn\u2019t.  <\/p>\n<p>  \tTheir tools analyze content using sophisticated algorithms, picking up on subtle signals to distinguish the images made with computers from the ones produced by human photographers and artists. But some tech leaders and misinformation experts have expressed concern that advances in A.I. will always stay a step ahead of the tools.  <\/p>\n<p>  \tTo assess the effectiveness of current A.I.-detection technology, The New York Times tested five new services using more than 100 synthetic images and real photos.  The results show that the services are advancing rapidly, but at times fall short.  <\/p>\n<p>  \tConsider this example:  <\/p>\n<p>    \t  \t  \t\tGenerated by A.I.   
\t\t<img decoding=\"async\" src=\"https:\/\/static01.nytimes.com\/newsgraphics\/2023-06-08-disinfo-ai-detector\/7343d4ca746b7965141e230d94dd4f5f564f2bfb\/_assets\/elon.jpg\" \/>  \t      <\/p>\n<p>  \tThis image appears to show the billionaire entrepreneur Elon Musk embracing a lifelike robot. The image was created using Midjourney, the A.I. image generator, by Guerrero Art, an artist who works with A.I. technology.  <\/p>\n<p>  \tDespite the implausibility of the image, it managed to fool several A.I.-image detectors.  <\/p>\n<p>  \t\tTest results from the image of Mr. Musk<\/p>\n<p>  \tThe detectors, including versions that charge for access, such as Sensity, and free ones, such as Umm-maybe\u2019s A.I. Art Detector, are designed to detect difficult-to-spot markers embedded in A.I.-generated images. They look for unusual patterns in how the pixels are arranged, including in their sharpness and contrast. Those signals tend to be generated when A.I. programs create images.  <\/p>\n<p>  \tBut the detectors ignore all context clues, so they do not register that a lifelike automaton posing with Mr. Musk is implausible. That is one shortcoming of relying on the technology to detect fakes.  <\/p>\n<p>  \tSeveral companies, including Sensity, Hive and Inholo, the company behind Illuminarty, did not dispute the results and said their systems were always improving to keep up with the latest advancements in A.I.-image generation. Hive added that its misclassifications may result when it analyzes lower-quality images. Umm-maybe and Optic, the company behind A.I. or Not, did not respond to requests for comment.  <\/p>\n<p>  \tTo conduct the tests, The Times gathered A.I. 
images from artists and researchers familiar with variations of generative tools such as Midjourney, Stable Diffusion and DALL-E, which can create realistic portraits of people and animals and lifelike portrayals of nature, real estate, food and more. The real images used came from The Times\u2019s photo archive.  <\/p>\n<p>  \tHere are seven examples:  <\/p>\n<p>Note: Images cropped from their original size.<\/p>\n<p>  \tDetection technology has been heralded as one way to mitigate the harm from A.I. images.  <\/p>\n<p>  \tA.I. experts like Chenhao Tan, an assistant professor of computer science at the University of Chicago and the director of its Chicago Human+AI research lab, are less convinced.  <\/p>\n<p>  \t\u201cIn general I don\u2019t think they\u2019re great, and I\u2019m not optimistic that they will be,\u201d he said. \u201cIn the short term, it is possible that they will be able to perform with some accuracy, but in the long run, anything special a human does with images, A.I. will be able to re-create as well, and it will be very difficult to distinguish the difference.\u201d  <\/p>\n<p>  \tMost of the concern has centered on lifelike portraits. Gov. Ron DeSantis of Florida, who is also a Republican candidate for president, was criticized after his campaign used A.I.-generated images in a post. Synthetically generated artwork that focuses on scenery has also caused confusion in political races.  <\/p>\n<p>  \tMany of the companies behind A.I. detectors acknowledged that their tools were imperfect and warned of a technological arms race: The detectors must often play catch-up to A.I. systems that seem to be improving by the minute.  
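The pixel-level analysis described above can be sketched numerically. The snippet below (a minimal illustration using numpy, not any vendor's actual method; the function name and cutoff are invented for the example) computes the share of an image's spectral energy at high frequencies, the kind of sharpness-and-contrast statistic a trained detector might consume:

```python
import numpy as np

def high_freq_ratio(gray: np.ndarray, cutoff: float = 0.25) -> float:
    """Share of spectral energy above a cutoff frequency.

    Illustrative only: commercial detectors use trained classifiers,
    but statistics like this capture unusual pixel-level sharpness
    and contrast patterns."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized distance of each frequency bin from the spectrum's center
    dist = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return float(spectrum[dist > cutoff].sum() / spectrum.sum())

# A smooth gradient carries little high-frequency energy; uniform
# noise carries a lot, so the two scores separate cleanly.
smooth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = np.random.default_rng(0).random((64, 64))
print(high_freq_ratio(smooth), high_freq_ratio(noisy))
```

A real product would feed many such statistics into a classifier rather than thresholding one number.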
<\/p>\n<p>  \t\u201cEvery time somebody builds a better generator, people build better discriminators, and then people use the better discriminator to build a better generator,\u201d said Cynthia Rudin, a computer science and engineering professor at Duke University, where she is also the principal investigator at the Interpretable Machine Learning Lab. \u201cThe generators are designed to be able to fool a detector.\u201d  <\/p>\n<p>  \tSometimes, the detectors fail even when an image is obviously fake.  <\/p>\n<p>  \tDan Lytle, an artist who works with A.I. and runs a TikTok account called The_AI_Experiment, asked Midjourney to create a vintage picture of a giant Neanderthal standing among normal men. It produced this aged portrait of a towering, Yeti-like beast next to a quaint couple.  <\/p>\n<p>    \t  \t  \t\tGenerated by A.I.   \t\t<img decoding=\"async\" src=\"https:\/\/static01.nytimes.com\/newsgraphics\/2023-06-08-disinfo-ai-detector\/7343d4ca746b7965141e230d94dd4f5f564f2bfb\/_assets\/yeti.jpeg\" \/>  \t<\/p>\n<p>  \t\tTest results from the image of a giant<\/p>\n<p>  \tEvery service tested got this image wrong, illustrating one drawback of the current A.I. detectors: They tend to struggle with images that have been altered from their original output or are of low quality, according to Kevin Guo, a founder and the chief executive of Hive, which makes an image-detection tool.  <\/p>\n<p>  \tWhen A.I. generators like Midjourney create photorealistic artwork, they pack the image with millions of pixels, each containing clues about its origins. \u201cBut if you distort it, if you resize it, lower the resolution, all that stuff, by definition you\u2019re altering those pixels and that additional digital signal is going away,\u201d Mr. Guo said.  
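Mr. Guo's point can be demonstrated numerically. The sketch below (a hedged illustration in numpy; the block-average resize and the `detail_level` proxy are invented for the example, not Hive's pipeline) shows how shrinking and re-enlarging an image wipes out most of the fine-grained pixel signal:

```python
import numpy as np

def detail_level(img: np.ndarray) -> float:
    # Mean absolute difference between horizontally adjacent pixels:
    # a crude proxy for the fine-grained signal detectors inspect.
    return float(np.abs(np.diff(img, axis=1)).mean())

def downscale_upscale(img: np.ndarray, factor: int) -> np.ndarray:
    # Shrink by block-averaging, then blow back up by pixel repetition:
    # a stand-in for the resaving and resizing a viral image goes through.
    h, w = img.shape
    small = img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

original = np.random.default_rng(1).random((128, 128))
recompressed = downscale_upscale(original, 4)
# The round trip keeps the image size but destroys most of the detail.
print(detail_level(original), detail_level(recompressed))
```

The recompressed array has the same dimensions as the original, yet its pixel-to-pixel variation collapses, which is exactly the signal loss that trips up the detectors.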
<\/p>\n<p>  \tWhen Hive, for example, ran a higher-resolution version of the Yeti artwork, it correctly determined the image was A.I.-generated.  <\/p>\n<p>  \tSuch shortfalls can undermine the potential for A.I. detectors to become a weapon against fake content. As images go viral online, they are often copied, resaved, shrunken or cropped, obscuring the important signals that A.I. detectors rely on. A new tool from Adobe Photoshop, known as generative fill, uses A.I. to expand a photo beyond its borders. (When tested on a photograph that was expanded using generative fill, the technology confused most detection services.)  <\/p>\n<p>  \tThe unusual portrait below, which shows President Biden, has much better resolution. It was taken in Gettysburg, Pa., by Damon Winter, a photographer for The Times.  <\/p>\n<p>  \tMany of the detectors correctly thought the portrait was genuine, but not all did.  <\/p>\n<p>    \t  \t  \t\tReal image   \t\t<img decoding=\"async\" src=\"https:\/\/static01.nytimes.com\/newsgraphics\/2023-06-08-disinfo-ai-detector\/7343d4ca746b7965141e230d94dd4f5f564f2bfb\/_assets\/biden.jpg\" \/>  \t<\/p>\n<p>  \t\tTest results from a photograph of President Biden<\/p>\n<p>  \tFalsely labeling a genuine image as A.I.-generated is a significant risk with A.I. detectors. Sensity was able to correctly label most A.I. images as artificial. But the same tool incorrectly labeled many real photographs as A.I.-generated.  <\/p>\n<p>  \tThose risks could extend to artists, who could be inaccurately accused of using A.I. tools in creating their artwork.  <\/p>\n<p>  \tThis Jackson Pollock painting, called \u201cConvergence,\u201d features the artist\u2019s familiar, colorful paint splatters. Most \u2013 but not all \u2013 of the A.I. detectors determined this was a real image and not an A.I.-generated replica.  
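The false-accusation risk just described comes down to where a detector sets its decision threshold on the confidence score it produces. A minimal sketch (all scores here are invented, purely to illustrate the trade-off; they are not any vendor's real numbers) shows that raising the threshold avoids flagging genuine work at the cost of letting more fakes through:

```python
# Hypothetical detector scores: the probability an image is A.I.-generated.
real_scores = [0.05, 0.20, 0.55, 0.60]   # genuine photos and paintings
fake_scores = [0.52, 0.70, 0.85, 0.97]   # A.I.-generated images

def flag_as_ai(scores, threshold):
    """Count how many images the detector would flag at this threshold."""
    return sum(score >= threshold for score in scores)

for threshold in (0.5, 0.9):
    print(threshold,
          "false accusations:", flag_as_ai(real_scores, threshold),
          "fakes caught:", flag_as_ai(fake_scores, threshold))
```

A cautious detector chooses the higher threshold: fewer artists wrongly accused, but more synthetic images slip past.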
<\/p>\n<p>    \t  \t  \t\tReal image   \t\t<img decoding=\"async\" src=\"https:\/\/static01.nytimes.com\/newsgraphics\/2023-06-08-disinfo-ai-detector\/7343d4ca746b7965141e230d94dd4f5f564f2bfb\/_assets\/pollock.jpg\" \/>  \t<\/p>\n<p>  \t\tTest results from a painting by Pollock<\/p>\n<p>  \tIlluminarty\u2019s creators said they wanted a detector capable of identifying fake artwork, like paintings and drawings.  <\/p>\n<p>  \tIn the tests, Illuminarty correctly assessed most real photos as authentic, but labeled only about half the A.I. images as artificial. The tool, its creators said, is intentionally cautious by design, to avoid falsely accusing artists of using A.I.  <\/p>\n<p>  \tIlluminarty\u2019s tool, along with most other detectors, correctly identified a similar image in the style of Pollock that was created by The New York Times using Midjourney.  <\/p>\n<p>    \t  \t  \t\tGenerated by A.I.   \t\t<img decoding=\"async\" src=\"https:\/\/static01.nytimes.com\/newsgraphics\/2023-06-08-disinfo-ai-detector\/7343d4ca746b7965141e230d94dd4f5f564f2bfb\/_assets\/painting.png\" \/>  \t<\/p>\n<p>  \t\tTest results from the image of a splatter painting<\/p>\n<p>  \tA.I.-detection companies say their services are designed to help promote transparency and accountability, helping to flag misinformation, fraud, nonconsensual pornography, artistic dishonesty and other abuses of the technology. Industry experts warn that financial markets and voters could become vulnerable to A.I. trickery.  <\/p>\n<p>  \tThis image, in the style of a black-and-white portrait, is fairly convincing. It was created with Midjourney by Marc Fibbens, a New Zealand-based artist who works with A.I. Most of the A.I. detectors still managed to correctly identify it as fake.  
<\/p>\n<p>    \t  \t  \t\tGenerated by A.I.   \t\t<img decoding=\"async\" src=\"https:\/\/static01.nytimes.com\/newsgraphics\/2023-06-08-disinfo-ai-detector\/7343d4ca746b7965141e230d94dd4f5f564f2bfb\/_assets\/man.png\" \/>  \t<\/p>\n<p>  \t\tTest results from the image of a man wearing Nike<\/p>\n<p>  \tYet the A.I. detectors struggled after just a bit of grain was introduced. Detectors like Hive suddenly believed the fake images were real photos.  <\/p>\n<p>  \tThe subtle texture, which was nearly invisible to the naked eye, interfered with their ability to analyze the pixels for signs of A.I.-generated content. Some companies are now trying to identify the use of A.I. in images by evaluating perspective or the size of subjects\u2019 limbs, in addition to scrutinizing pixels.  <\/p>\n<p>  \t\t<img decoding=\"async\" alt=\"\" src=\"https:\/\/static01.nytimes.com\/newsgraphics\/2023-06-08-disinfo-ai-detector\/7343d4ca746b7965141e230d94dd4f5f564f2bfb\/_assets\/comparison-Artboard_1.jpg\" \/><\/p>\n<p>3.3% likely to be A.I.-generated<\/p>\n<p>99% likely to be A.I.-generated<\/p>\n<p>  \tArtificial intelligence is capable of generating more than realistic images \u2013 the technology is already creating text, audio and videos that have fooled professors, scammed consumers and been used in attempts to turn the tide of war.  <\/p>\n<p>  \tA.I.-detection tools should not be the only defense, researchers said. Image creators should embed watermarks into their work, said S. Shyam Sundar, the director of the Center for Socially Responsible Artificial Intelligence at Pennsylvania State University. Websites could incorporate detection tools into their backends, he said, so that they can automatically identify A.I. images and serve them more carefully to users with warnings and limitations on how they are shared.  <\/p>\n<p>  \tImages are especially powerful, Mr. Sundar said, because they \u201chave that tendency to cause a visceral response. 
People are much more likely to believe their eyes.\u201d  <\/p>\n<p> <a href=\"https:\/\/www.nytimes.com\/interactive\/2023\/06\/28\/technology\/ai-detection-midjourney-stable-diffusion-dalle.html\" target=\"_blank\" rel=\"noopener\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The pope did not wear Balenciaga. And filmmakers did not fake the moon landing. In recent months, however, startlingly lifelike images of these scenes created by artificial intelligence have spread virally online, threatening society\u2019s ability to separate fact from fiction. To sort through the confusion, a fast-burgeoning crop of companies now offer services to detect&#8230;<\/p>\n","protected":false},"author":2,"featured_media":30538,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"_metasync_otto_title":"","_metasync_otto_description":"","_metasync_otto_keywords":"","_metasync_otto_og_title":"","_metasync_otto_og_description":"","_metasync_otto_twitter_title":"","_metasync_otto_twitter_description":"","rank_math_title":"","rank_math_description":"","_yoast_wpseo_title":"","_yoast_wpseo_metadesc":"","_aioseo_title":"","_aioseo_description":"","footnotes":""},"categories":[85],"tags":[],"class_list":["post-30537","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-it-services"],"_links":{"self":[{"href":"https:\/\/reactlocal.com\/blog\/wp-json\/wp\/v2\/posts\/30537","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/reactlocal.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/reactlocal.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/reactlocal.com\/blog\/wp-json\/wp\/v2\
/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/reactlocal.com\/blog\/wp-json\/wp\/v2\/comments?post=30537"}],"version-history":[{"count":0,"href":"https:\/\/reactlocal.com\/blog\/wp-json\/wp\/v2\/posts\/30537\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/reactlocal.com\/blog\/wp-json\/wp\/v2\/media\/30538"}],"wp:attachment":[{"href":"https:\/\/reactlocal.com\/blog\/wp-json\/wp\/v2\/media?parent=30537"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/reactlocal.com\/blog\/wp-json\/wp\/v2\/categories?post=30537"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/reactlocal.com\/blog\/wp-json\/wp\/v2\/tags?post=30537"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}