(The Center Square) – It’s an increasingly common phenomenon. We see an image or video appear in our social media feed, and we aren’t sure whether it’s real, altered, or generated by artificial intelligence.
The consequences of being unable to tell them apart are far reaching, from political and social implications to the integrity of the information that forms our understanding of the world.
Social media abounds with fake images in the midst of real disasters, sowing confusion for those impacted. Rep. Joanne Stehr, R-Hegins, described a situation in which a child in her district had used the technology to create sexually explicit images of his classmates. At an event in Middletown, Gov. Josh Shapiro was critical of the White House using AI to disparage Democratic leadership. The uses, from benign to criminal, are endless.
The origin and history of an artifact is, in the art world, called provenance, and it’s an issue the House Communications and Technology Committee has taken up as it investigates potential regulation of the ever-evolving tech space.
The committee hosted Santiago Lyon, head of education and advocacy at the Content Authenticity Initiative, or CAI, to better understand the issue. Lyon spent most of his career of more than four decades working as a photographer, and eventually as the director of photography, for the Associated Press. He connects the commitment to authenticity in those roles to his work with the Adobe-founded initiative for media transparency.
The CAI is made up of more than 5,000 media members across the globe. The group has worked to create open-source tools to track the provenance of a digital asset. This means, whether you’re looking at one of Lyon’s photographs or an AI-generated cartoon, you’re ideally able to access information that traces an asset back to its original source.
“The normalization of AI is already happening,” said Lyon. “We’re seeing it creep into our lives. People are using it for different purposes every day.”
He likened the standardization of provenance tools to the use of nutrition labels on food products. He said that while the average consumer may not read every detail of the nutrition label, the fact that it’s there creates a “safety slash compliance event.”
Lyon told the committee that governments should “lead by example” by adopting policies that foster “a broader culture of digital content authenticity.” Beyond the tools being developed for provenance, CAI stresses that education and policy are essential to combating the risks posed by digital content.
Media literacy is already a focus of the Pennsylvania Department of Education. The state’s initiative, headed by First Lady Lori Shapiro, seeks to stay on top of the changing media landscape and teach students which sources to trust and how to verify them.
Critics argue that the government should not be involved in deciding which sources are trustworthy, pointing to the potential – whether intentional or not – to discourage trust in media content that questions a preferred ideology.
Policy remains forthcoming, with neither Lyon nor the legislators knowing exactly who will be held accountable and for what in the Wild West of digital content creation. Lyon said that’s one reason these conversations are so important.
“If I had to summate this all up in one word, it would be safety,” said Lyon. “These are safety tools aimed at making the internet a safer place whoever you are, whether you’re a student, whether you’re a company, whether you’re a journalist, whether you’re a creator, etc.”