Microsoft wants to make sure we don't fall victim to murderous AI

(Image credit: Shutterstock)

Anyone worried about the threat of a Skynet-esque rise of the machines may be able to rest a little easier after the release of new protective measures designed to avoid a potential AI uprising.

The nonprofit MITRE Corporation has teamed up with 12 top technology companies, including Microsoft, IBM and Nvidia, to launch the Adversarial ML Threat Matrix.

The group says the system is an open framework created to help security analysts spot, alert, respond to and address threats targeting machine learning (ML) systems.

ML Matrix

Microsoft says the release was motivated by continued growth in the number of attacks against commercial ML systems around the world. The company surveyed 28 major businesses and found that almost all remain unaware of the threat posed by adversarial machine learning, with 25 of the 28 saying they don't have the right tools in place to secure their ML systems.

To reassure and advise such organizations, the Adversarial ML Threat Matrix aims to empower security teams to defend against attacks on ML systems.

The Matrix catalogues past vulnerabilities and adversary behaviours spotted by Microsoft and MITRE over the years, drawing on Microsoft's broader expertise in the security sector.

"We also found that when attacking an ML system, attackers use a combination of 'traditional techniques' like phishing and lateral movement alongside adversarial ML techniques," Microsoft said in a blog post.
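To give a sense of what an "adversarial ML technique" looks like in practice, the sketch below shows a toy version of one well-known attack, the fast gradient sign method (FGSM), against a hypothetical linear classifier. The weights, input and epsilon value are purely illustrative and are not drawn from the Threat Matrix itself; the point is only that a small, bounded perturbation to an input can flip a model's prediction.

```python
import numpy as np

# Toy linear classifier: predicts class 1 when the score w . x is positive.
# The weights and input below are made up for illustration only.
w = np.array([0.8, -0.5, 0.3])
x = np.array([1.0, 2.0, -1.0])  # benign input; score = -0.5, so class 0

def predict(sample):
    return int(w @ sample > 0)

# FGSM-style perturbation: nudge the input in the direction that increases
# the score, bounded by epsilon in the L-infinity norm. For a linear model
# the gradient of the score with respect to x is simply w.
epsilon = 0.6
x_adv = x + epsilon * np.sign(w)

print(predict(x))      # benign input classified as 0
print(predict(x_adv))  # perturbed input now classified as 1
```

In real attacks the gradient comes from a trained neural network rather than a hand-written linear model, but the principle is the same, which is why frameworks like the Adversarial ML Threat Matrix treat such evasion techniques alongside conventional intrusion tactics.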

The Adversarial ML Threat Matrix GitHub repository is open now for businesses interested in learning more.

“When it comes to Machine Learning security, the barriers between public and private endeavors and responsibilities are blurring; public sector challenges like national security will require the cooperation of private actors as much as public investments," noted Mikel Rodriguez, Director of Machine Learning Research, MITRE.

"So, in order to help address these challenges, we at MITRE are committed to working with organizations like Microsoft and the broader community to identify critical vulnerabilities across the machine learning supply chain. This framework is a first step in helping to bring communities together to enable organizations to think about the emerging challenges in securing machine learning systems more holistically.”