Reno, Nev. (News 4 & Fox 11) —
Artificial intelligence is developing at a rapid pace, and a new AI tool is raising concerns about the ethics, uses, and consequences that come with it.
The latest AI technology is called Sora, created by OpenAI, the same company behind ChatGPT. As News 4 has reported, that program can write essays, solve math problems, or come up with a study guide. Schools in Washoe County have started to use it as a tool in the classroom.
Example of Sora video created by text (Courtesy: Sora)
Sora can create images and video from text instructions. The videos are stunning and very detailed.
“Looking at this you wouldn’t really know it was AI generated,” said Janet Lara, who watched some of the videos.
OpenAI has not made Sora available to the public yet. The company said it still needs to take several safety steps first.
“I think it’s great. And also I suppose a bit scary for a lot of people,” said Dr. Emily Hand, Assistant Professor at UNR’s College of Engineering.
She teaches machine learning and AI classes.
“People are obviously afraid of these kind of AI, especially this generative AI, because it just seems so humanlike,” she said. “There is not a creative element to this. It is not humanistic at all. It is very complicated calculus.”
Dr. Paromita Pain is an associate professor of Global Media at UNR’s Reynolds School of Journalism. She said people should be skeptical of the ways we use and view this AI technology.
“It is our critical thinking response to media that will really help us beneficially use these or be fooled and be taken in,” she said.
Dr. Pain said AI content should encourage people to think twice and check other sources for accuracy.
“Just stopping to ask that question ‘why,’ can often go a long way to creating that gap, that space between immediately believing and immediately sharing,” she said.
Dr. Hand agrees that critical thinking is important, especially as AI develops.
“You really have to think about the sources and really be a critical consumer of information, and that’s really hard to do. I think we used to be really critical of the information we were consuming, and then it became so accessible and at our fingertips that we stopped being as critical. And I think we need to go back to being critical,” she said.
So how does someone know what is real and what is created by artificial intelligence?
OpenAI told News 4 that its videos will have a watermark in the corner. A spokesperson said the company is also building tools to help detect misleading content.
“This could be alarming if it was not used properly,” said Michael Allen, who watched the Sora videos.
Given the speed at which this technology is developing, and its potential for misuse, the Federal Trade Commission is looking to make the creators of AI tools liable for deepfakes created with their products.
Dr. Pain said lawmakers should step in.
“Don’t we have laws when it comes to health regulation? Don’t we have laws about how we drive on the roads? The information superhighway as we know it also needs regulation,” she said.
There are ways to tell that the videos are created by AI. People should look closely for imperfections and glitches. Hands and feet may be distorted. People walking may appear to float over a surface. People in the background may be in focus but their faces blurred. Items in the video may appear and disappear, too. It’s the small inconsistencies that give it away.
This new technology is a reminder to double check your source. You just can’t trust your eyes anymore.