Smart Living


Google Releases New AI Technology to Identify Child Sexual Abuse Images

Author: 灯塔大数据    Source: 灯塔大数据    Published: 2018-09-05 08:14:32

  【流媒体网】Abstract: On September 3, Google announced new artificial intelligence (AI) technology that identifies online child sexual abuse material (CSAM) and reduces human reviewers' exposure to it.



  Original article

  Google releases AI-powered Content Safety API to identify more child abuse images

  Google has today announced new artificial intelligence (AI) technology designed to help identify online child sexual abuse material (CSAM) and reduce human reviewers’ exposure to the content.

  The move comes as the internet giant faces growing heat over its role in helping offenders spread CSAM across the web. Last week, U.K. Foreign Secretary Jeremy Hunt took to Twitter to criticize Google over its plans to re-enter China with a censored search engine when it reportedly won’t help remove child abuse content elsewhere in the world.

  Earlier today, U.K. Home Secretary Sajid Javid launched a new “call to action” as part of a government push to get technology companies such as Google and Facebook to do more to combat online child sexual abuse. The initiative comes after fresh figures from the National Crime Agency (NCA) found that as many as 80,000 people in the U.K. could pose a threat to children online.

  The timing of Google’s announcement today is, of course, no coincidence.

  Neural networks

  Google’s new tool is built upon deep neural networks (DNN) and will be made available for free to non-governmental organizations (NGOs) and other “industry partners,” including other technology companies, via a new Content Safety API.

  News emerged last year that London’s Metropolitan Police was working on an AI solution that would teach machines how to grade the severity of disturbing images. This is designed to solve two problems: it will help expedite the rate at which CSAM is identified on the internet, and it will also alleviate the psychological trauma suffered by officers who manually trawl through the images.

  Google’s new tool should assist in this broader push. Historically, automated tools have relied on matching images against previously identified CSAM. But with the Content Safety API, Google said that it can effectively “keep up with offenders” by targeting new content that has not previously been confirmed as CSAM, according to a blog post co-authored by engineering lead Nikola Todorovic and product manager Abhi Chaudhuri.
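  The distinction between the two approaches can be sketched in a few lines. The following is a purely illustrative toy in Python, not Google's actual Content Safety API: `matches_known` stands in for hash matching against a database of previously identified material (which by construction can never flag a new image), while `classifier_score` stands in for a trained DNN that can score previously unseen images. All names and the scoring logic are hypothetical.

```python
import hashlib

# Stand-in for a database of hashes of previously identified material.
KNOWN_HASHES = {hashlib.sha256(b"previously-identified-image").hexdigest()}

def matches_known(image_bytes):
    """Hash matching: only catches images already in the database."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

def classifier_score(image_bytes):
    """Stand-in for a DNN classifier that scores how likely an image is
    abusive. A real system would run a trained neural network over the
    decoded image; here we fake a score for demonstration."""
    return 0.95 if b"unseen-abusive" in image_bytes else 0.1

def triage(images, threshold=0.9):
    """Build a prioritized human-review queue: confirmed hash matches
    first, then previously unseen images the classifier flags as high
    risk. This is what lets reviewers 'keep up with' new content."""
    known = [img for img in images if matches_known(img)]
    flagged = [img for img in images
               if not matches_known(img) and classifier_score(img) >= threshold]
    return known + flagged
```

The point of the sketch is the second list comprehension: a hash-only pipeline would return an empty queue for any image not already catalogued, whereas the classifier lets entirely new material reach a reviewer.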

  “Quick identification of new images means that children who are being sexually abused today are much more likely to be identified and protected from further abuse,” they said. “We’re making this available for free to NGOs and industry partners via our Content Safety API, a toolkit to increase the capacity to review content in a way that requires fewer people to be exposed to it.”

  Most of the major technology companies now leverage AI to detect all manner of offensive material, from nudity to abusive comments. But extending its image recognition technology to include new photos should go some way toward helping Google thwart — at scale — one of the most abhorrent forms of abuse imaginable. “This initiative will allow greatly improved speed in review processes of potential CSAM,” Todorovic and Chaudhuri continued. “We’ve seen firsthand that this system can help a reviewer find and take action on 700 percent more CSAM content over the same time period.”

  Among Google‘s partner organizations at launch is U.K.-based charity the Internet Watch Foundation (IWF), which has a mission to “minimize the availability of ‘potentially criminal’ internet content, specifically images of child sexual abuse.”

  “We, and in particular our expert analysts, are excited about the development of an artificial intelligence tool which could help our human experts review material to an even greater scale and keep up with offenders by targeting imagery that hasn’t previously been marked as illegal material,” added IWF CEO Susie Hargreaves. “By sharing this new technology, the identification of images could be speeded up, which in turn could make the internet a safer place for both survivors and users.”

Editor: 侯亚丽
Copyright notice: Articles whose source is marked "流媒体网" are copyrighted by the 流媒体网 site; to reprint them, please credit "流媒体网". Articles from other sources are reposted here for industry reference only and do not represent this site's views.
