Published: 2024-01-02
During the ongoing Russia-Ukraine war, a video circulated showing Ukrainian President Zelensky ordering his troops to surrender. It was later proven to be a deepfake.
In the evolving battle against deepfake technology, industries are increasingly turning to a sophisticated technical arsenal to counter the spread of manipulated content. From content authenticity initiatives to invisible watermarking, algorithmic detection tools, collaborative projects, and platform policy changes, the fight against deepfakes is becoming multifaceted and dynamic.
Content Authenticity and Content Provenance
Content authenticity initiatives led by industry players aim to cryptographically seal attribution information into content, enabling verification from creation to consumption. In parallel, the Coalition for Content Provenance and Authenticity (C2PA) has published open-standard technical specifications focused on providing data about a piece of content's origin, alterations, and contributors.
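The idea of cryptographically sealing attribution data can be illustrated with a short sketch. This is not the real C2PA format (which uses X.509 certificates and CBOR-encoded manifests); it is a toy HMAC scheme with a made-up key, shown only to demonstrate how provenance metadata can be bound to a content hash so that any later edit is detectable.

```python
import hashlib
import hmac
import json

# Hypothetical publisher key -- a real C2PA workflow uses certificate-based
# signatures, not a shared-secret HMAC like this.
SIGNING_KEY = b"publisher-demo-key"

def make_manifest(content: bytes, creator: str, tool: str) -> dict:
    """Bind provenance metadata to a content hash and sign the bundle."""
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "generator_tool": tool,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return claim

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check the signature, and that the content itself is unchanged."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["content_sha256"] == hashlib.sha256(content).hexdigest())

image = b"\x89PNG...demo image bytes"
manifest = make_manifest(image, creator="News Desk", tool="CameraApp 2.1")
assert verify_manifest(image, manifest)             # untouched content passes
assert not verify_manifest(image + b"!", manifest)  # any edit fails verification
```

Because the signature covers the content hash, attribution and integrity are checked together: a manipulated file can no longer present the original's provenance claim.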
What values do we want the tech sector and independent journalism to uphold? The Coalition for Content Provenance and Authenticity is an important milestone in combating disinformation - more from Microsoft's @erichorvitz https://t.co/fDaAggbTrV
— Brad Smith (@BradSmi) March 2, 2021
Microsoft went a step further, announcing "Content Credentials as a Service," which uses C2PA digital watermarking credentials to help election candidates and campaigns maintain control over their content.
Watermarking Technology
Meta introduced Stable Signature, an invisible watermarking technique designed to distinguish content created by open-source generative AI models. The watermark is imperceptible to the human eye but traceable by algorithms, helping to identify manipulated images. Google DeepMind has joined the effort with SynthID, which lets users embed a digital watermark directly into AI-generated images or audio.
Meet Stable Signature, an invisible watermarking technique for AI-generated images that ensures transparency and accountability in the #GenerativeAI space: https://t.co/zYbe5Ap9Ek pic.twitter.com/GWn6MTtdaF
— Meta for Developers (@MetaforDevs) November 6, 2023
The technology enables users to scan content for the watermark, revealing whether it was created or altered using Google's AI models.
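The "imperceptible but machine-readable" property can be sketched with the classic least-significant-bit trick. Real systems like Stable Signature and SynthID embed learned signals that survive cropping and compression, which this toy example does not; the bit pattern and pixel values below are invented for illustration.

```python
# Toy invisible watermark: hide a bit pattern in the least-significant
# bits of grayscale pixel values. Each marked pixel shifts by at most
# one intensity level, invisible to the eye but trivially readable.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit model identifier

def embed(pixels: list, bits: list) -> list:
    """Overwrite the LSB of the first len(bits) pixels with the mark."""
    out = pixels[:]
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b  # clear the low bit, then set it to b
    return out

def extract(pixels: list, n: int) -> list:
    """Read back the low bits; a detector compares them to known marks."""
    return [p & 1 for p in pixels[:n]]

pixels = [200, 201, 199, 180, 181, 179, 150, 151, 149, 120]
marked = embed(pixels, WATERMARK)
assert extract(marked, 8) == WATERMARK
assert max(abs(a - b) for a, b in zip(pixels, marked)) <= 1  # imperceptible change
```

The hard research problem, which the production systems address and this sketch does not, is keeping the mark recoverable after the image is resized, re-encoded, or screenshotted.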
Algorithmic Detection
The industry is deploying automated deepfake detection software that relies on a range of AI-based strategies, such as speaker recognition, voice liveness detection, facial recognition, and temporal analysis. Microsoft's Video Authenticator and Intel-backed FakeCatcher are notable examples.
#Deepfakes are media created by #AI. The content is made to look like something or someone in real life, but it's manipulated or completely fake. FakeCatcher is combatting #misinformation with Intel AI optimizations and OpenVINO AI models: https://t.co/ZnCehiaRm2 #IntelON pic.twitter.com/ZpfSzmc6eH
— Intel News (@intelnews) May 11, 2022
However, the challenge lies in the transient nature of detection tools as evolving deepfake production techniques continually challenge their reliability. A study found varying accuracy levels (30-97%) across datasets, indicating the need for ongoing innovation to keep pace with emerging deepfake technologies.
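Temporal analysis, one of the strategies mentioned above, typically means scoring individual frames and pooling the results into a video-level verdict. The scores, threshold, and pooling rule below are made up for illustration; a system like FakeCatcher derives its frame-level signals from physiological cues such as blood-flow patterns before aggregating them over time.

```python
# Sketch of pooling per-frame detector scores into one video-level decision.
# The 0.5 threshold and the example probabilities are hypothetical.

def video_verdict(frame_scores: list, threshold: float = 0.5) -> str:
    """Average per-frame fake probabilities and compare to a threshold."""
    mean = sum(frame_scores) / len(frame_scores)
    return "likely deepfake" if mean > threshold else "likely authentic"

# Hypothetical per-frame fake probabilities from a detector model
scores = [0.82, 0.77, 0.91, 0.64, 0.88]
print(video_verdict(scores))  # -> likely deepfake
```

Pooling over many frames makes the decision more robust than any single frame, which matters given the 30-97% accuracy spread the study above reports across datasets.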
Project Origin
Media organizations, including the BBC and Microsoft, collaborated on Project Origin. This initiative seeks to establish an engineering approach to synthetic media, providing digitally signed links for verifiable tracing of media content back to the publisher. The project also aims to implement validation checks to ensure that content remains unaltered during distribution.
The BBC is working with @Microsoft @CBC and @nytimes to tackle disinformation – Project Origin. Read more: https://t.co/c6gdMpkNUw and https://t.co/9IrQEkMe1m #MediaOrigin | @aythora | @mariannaspring | @BBCtrending | @NYTimesRD | @MSFTIssues | @MSFTResearch | #IBCShowcase https://t.co/0EqxwvCqSR
— BBC Research & Development (@BBCRD) September 8, 2020
Platform Policy Changes
In response to the high-risk context of political advertising, major platforms like Meta and Google have announced policy changes to enhance transparency. Meta’s updated political ads disclosure policy mandates advertisers to reveal digitally altered content in election-related or political ads.
YouTube requires creators to disclose the use of synthetic media content, and failure to comply may result in penalties, including content takedown or suspension from the YouTube Partner Program. The Google Play Store has also introduced a policy for AI-generated content, focusing on preventing the creation and distribution of harmful or deceptive content through generative AI applications.
As the battle against deepfakes intensifies, industry interventions and technological solutions are proving essential in preserving the authenticity and trustworthiness of digital content. The collaborative efforts of major players across various sectors reflect a commitment to staying ahead of the evolving landscape of synthetic media.
The post 2024 Tech Trends: Industry Leaders Embrace AI to Counter Deepfake Threats appeared first on Metaverse Post.