Artificial intelligence technology has landed in yet another ethical quandary, the latest being fake explicit images of Taylor Swift generated and distributed by certain immoral parties. The darling of the music industry may be the latest victim, but she is by no means the first: The Star Malaysia, citing AFP, reported that “96% of deepfake videos were pornographic” in 2019.
Then we also have the ubiquitous ChatGPT from OpenAI, trained on copyrighted material because, by the company’s own admission, it would be “impossible” for the technology to reach its current capabilities otherwise. OpenAI has also been hit with a lawsuit alleging that the company obtained personally identifiable information, including medical records; the suit calls this “unprecedented theft” and seeks compensation as well as greater transparency regarding the usage and sources of the material obtained. There’s also the concern that the technology could outright replace the very people it is alleged to have stolen from.
While artificial intelligence can certainly be used for good, are sufficient measures in place to prevent ‘immoral’ use of the technology? Has consent been taken for granted because everything can be considered usable data? And how long will it take before there are standardised regulations for the use of this technology, if they ever arrive at all?