A popular Twitch streamer, Atrioc, was caught visiting a website featuring explicit deepfakes of female streamers such as Pokimane and Maya Higa. On January 30, a clip went viral that purported to show Atrioc watching the deepfakes: it captured his open browser tabs, one of which was believed to be the deepfake website. Atrioc posted an apology video shortly afterward.
Twitch streamer Maya Higa has spoken out on the controversy surrounding fellow streamer Brandon "Atrioc" Ewing, who was found to have viewed deepfake content featuring herself and other popular Twitch streamers, sparking a severe backlash from those involved and the wider streaming community.
On January 30, 2023, Atrioc, joined by his wife, went live on Twitch with a tearful apology after accidentally showing a tab open on his PC that revealed he had been looking at deepfake videos and pictures of some of the most popular women on the platform.
The clip quickly went viral, with thousands of people sharing their thoughts on the video, including other popular Twitch streamers and those involved.
Pokimane and QTCinderella were among those who responded to what had happened, with Poki simply demanding that people "stop sexualizing people without their consent."
The Atrioc Apology Video Came Right After
An apology video was posted right after Atrioc was caught watching the deepfakes. "This is so embarrassing," he said, adding that he had been very interested in AI technology, including AI art.
"But I was on a regular website, and there was an advertisement on every video, and I clicked it, and then I'm in this rabbit hole. I got morbidly curious and I clicked something. It's gross and I'm sorry. It's so embarrassing," he added.
Atrioc said that it wasn't repeated behavior and that he had only viewed the content once. He said he could prove it by showing the receipt for his purchase of access, dated the same day the video was recorded.
He stated, "There's no excuse for it. I'm not defending it in any way; I think this whole category of stuff is wrong."
People quickly expressed their anger and disappointment over the original video, pointing out that it was all the more offensive since he knows some of the female streamers and their partners. After the apology, some became more understanding and backed his assertion that he has always aimed to create a safe environment for women on his stream.
"This feels like one of the more wholehearted apologies I've seen. Doesn't excuse anything, however, and morbid curiosity isn't a good reason," one user stated.
QTCinderella reacted to the controversy and urged people to stop promoting the website: "Being seen 'naked' against your will should NOT BE A PART OF THIS JOB."
What is Deepfake AI?
Deepfake AI is a type of artificial intelligence used to produce convincing fake images, audio, and video. The term describes both the technology and the resulting bogus content, and is a portmanteau of "deep learning" and "fake."
Deepfakes frequently transform existing source content, swapping one person for another. They can also produce entirely original content in which someone is depicted doing or saying something they never did or said.
The top danger posed by deepfakes is their ability to spread false information that appears to come from trusted sources. For example, in 2022 a deepfake video of Ukrainian president Volodymyr Zelenskyy was circulated in which he appeared to ask his troops to surrender.
Concerns have also been raised over the possibility of election meddling and election propaganda. While deepfakes pose serious risks, they also have legitimate uses, such as audio for video games and entertainment, and customer-support applications like call routing and automated receptionist services.
What are They For?
Many are pornographic. In September 2019, the AI firm Deeptrace found 15,000 deepfake videos online, a near doubling over nine months. A staggering 96% were pornographic, and 99% of those mapped faces from female celebrities onto porn stars. As new techniques allow unskilled people to make deepfakes from a handful of photos, fake videos are likely to spread beyond the celebrity world to fuel revenge porn. Danielle Citron, a professor of law at Boston University, stated: "Deepfake technology is being weaponized against women." Beyond the adult content, there is also plenty of satire, spoofing, and mischief.
Are Deepfakes Legal?
Deepfakes are generally legal, and there is little law enforcement can do about them despite the serious risks they pose. Deepfakes are only illegal if they violate existing laws covering, for example, defamation, child pornography, or hate speech.
Three states have laws concerning deepfakes. According to Police Chief Magazine, Texas bans deepfakes that aim to influence elections, Virginia bans the distribution of deepfake pornography, and California has laws against both political deepfakes within 60 days of an election and nonconsensual deepfake pornography.
The lack of laws against deepfakes stems largely from how few people understand the new technology, its uses, and its dangers. As a result, victims of deepfakes go without legal protection in most cases.
How are Deepfakes Dangerous?
Deepfakes pose significant dangers despite being largely legal, including the following:
- Reputational harm and blackmail, which put targets in compromising situations.
- Political misinformation, such as nation-state threat actors using deepfakes for nefarious purposes.
- Election interference, such as fabricating videos of candidates.
- Stock manipulation, where fake content is created to move share prices.
- Fraud, where an individual is impersonated to steal financial account credentials and other personally identifiable information (PII).
What is the Solution?
Ironically, AI may be the answer. AI formerly helps to spot fake videos, but numerous existing detection systems have a serious weakness, they work great for celebrities because they can train on hours of freely available footage. Tech companies are now working on detection systems that aim to flag up fakes whenever they appear. Digital watermarks aren’t reliable, but a blockchain online ledger system could hold a tamper-evidence record of videos, photos, and audio so their origins and any manipulations can always be checked.