Can We Escape GenAI?
Not in the news, certainly
Hi, and thanks for subscribing to Photo AI! As a quick reminder, I’m on Mastodon at @email@example.com and co-host the podcasts PhotoActive and Photocombobulate. I’m also an independent journalist and photographer, so your financial support is always welcome.
At the beginning of January, I wrote about the kerfuffle (love that word) over Adobe using customers’ data to train its Adobe Sensei AI models. Although people were freaking out as if this were a sudden new development, the company has been doing this for years. What’s changed since then?
Generative AI. As the technology has rocketed into our awareness over the past year, it’s raised plenty of questions about whether photographers’ and artists’ work is being repurposed. In response, Adobe has said that, no, it’s not using your photos to train AI image generators. Brody Ford at Bloomberg interviewed Adobe Chief Product Officer Scott Belsky, who said, “When it comes to Generative AI, Adobe does not use any data stored on customers’ Creative Cloud accounts to train its experimental Generative AI features. We are currently reviewing our policy to better define Generative AI use cases.”
Instead, Adobe uses the data to train object recognition and scene detection features that make their way into its products. Adobe is currently working on clarifying the wording on its Privacy and Personal Data page, which is also where you can opt out of having your Creative Cloud images used.
(Hat tip to Aaron Hockley and his Tech Photo Guy newsletter for the link to Petapixel’s article.)
More on Generative AI
I’ll put my cards on the table and admit that I have yet to do more than dabble with Generative AI, because I’ve been focusing on how other AI technologies can help photographers and creators. There will definitely be more on this to come, because the photo AI and GenAI fields are swiftly headed toward more crossover. That said, a few articles taking a wider look at GenAI caught my attention.
Steve Lomas of The Roster is writing a three-part series called The Great AI Content Debate: Tool or Threat? He writes:
AI content generators can produce artwork, music, podcasts and videos and do it with breathtaking efficiency. And whether you’re comfortable with this notion or not, AI does have the potential to be more exciting, unexpected and engaging than traditional content. Improving quality and accuracy is still a work in progress. But by now, it’s undeniable that this evolving technology has become a mainstay in our modern world. And that will only become more true as time passes.
Over at CreativePro, my friend David Blatner looks at GenAI from the perspective of creative professionals who have watched new technologies upend their fields before, going back to traditional publishing.
This kind of upheaval and upset is not new, of course, even in creative fields. I lived through another of these “disruptive technology” moments, back in the late 1980s, when a funny little computer called an Apple Macintosh was combined with an odd-looking desktop laser printer, and suddenly almost anyone could put high-quality type on a page. “Typesetting,” as it was called back then, had previously been accessible to only a few, highly trained artisans. They called this new technology a “toy,” and insisted it would never replace them… until it did, of course. They grumbled, as they shuttered their shops, that it wasn’t as good, that people and quality mattered, that it was unfair.
Where are they now? Some grumbled themselves off into the sunset, I suppose. But others jumped on the bandwagon and helped the new technology grow and flourish. Some typesetters joined or started software companies, others used their skills to teach. Some said, “well, if graphic designers can now be typesetters, then we typesetters can now be graphic designers!”
Because in any period of disruptive tech, the disrupted people who ultimately succeed the greatest understand that change is inevitable, it’s not fair, and the way forward involves two things: learning how to incorporate the new reality into their own lives, and helping others do their jobs better.
Some people aren’t sitting idly by. Getty Images is suing the creators of Stable Diffusion for scraping millions of images from its service and using them in its library of source images. This is definitely one to watch.
Let’s keep in mind that there are people behind all of these AI technologies. A report alleges that ChatGPT, the AI text writing service du jour, was built using exploited Kenyan workers, who were exposed to graphic content in order to clean up the service’s training data.
Nvidia made a splash last week with the release of its Eye Contact feature in the Nvidia Broadcast app, which artificially makes it appear that a person is looking at the camera. With so many of us on video chats over the last three years, we’ve all experienced the phenomenon of talking to someone without making direct eye contact, because we’re both looking at the other person on the screen, not directly at the camera.
The examples Nvidia uses are pretty impressive and, for the most part, not too Uncanny Valley (the term for computer-generated people who look vacant or otherwise just off). Apple also has an Eye Contact feature on the iPhone (find it in Settings > FaceTime > Eye Contact). I wonder, however, how helpful this will be. Have we all become sufficiently accustomed to being just off of eye contact? I will admit, as a non-corporate employee, I don’t spend hours and hours in remote meetings, so let me know if you think this technology would actually be helpful.
Apple’s MacBook Pro M2 Max Processor
This isn’t specifically related to computational photography, but I found it interesting. The first reviews of the new MacBook Pro and Mac mini models with M2 Pro and M2 Max processors are out. Short version: they seem to be nice speed bumps over the previous generation. I own a MacBook Pro with an M1 Max processor, and after a year I’m still amazed at its performance and battery life, so I have no need to upgrade. But if you’re still running a Mac with an Intel processor, moving to an M2 will blow your mind.
What I want to highlight, though, is Ben Bajarin’s deeper look at the M2 Max processor and how “Apple’s relentless focus on performance-per-watt” is making a big difference. It’s a good read.
Thanks again for reading and recommending Photo AI to others who would be interested. Send me any questions, tips, or suggestions for what you’d like to see covered at firstname.lastname@example.org.