AI Weekly: Microsoft’s new moves in responsible AI

We may be enjoying the first days of summer, but whether it’s Microsoft, Google, Amazon or anything AI-powered, artificial intelligence news never takes a break to sit on the beach, soak up the sun or fire up the BBQ.

In fact, it can be difficult to keep up. In the last few days, for example, all this happened:

  • Amazon’s re:MARS announcements led to a media-wide facepalm over the potential ethical and security concerns (and overall weirdness) of Alexa’s new ability to mimic the voices of the dead.
  • More than 300 researchers signed an open letter condemning the deployment of GPT-4chan.
  • Google introduced another text-to-image model, Parti.
  • I booked my flight to San Francisco to attend VentureBeat’s in-person Executive Summit at Transform on July 19. (Okay, that’s not really news, but I’m looking forward to seeing the AI and data community finally get together IRL.)

But this week, I’m focused on Microsoft’s release of a new version of its Responsible AI Standard, as well as its announcement that it plans to stop selling facial analysis tools in Azure.

Let’s dig in.

Sharon Goldman, Senior Editor and Writer

This week’s AI beat

Responsible AI was front and center in Microsoft’s Build announcements this year. And there’s no doubt that Microsoft has grappled with responsible AI issues since at least 2018, when it pushed for legislation to regulate facial recognition technology.

The release of version 2 of Microsoft’s Responsible AI Standard this week is a solid step forward, AI experts say, though much remains to be done. And while it was barely mentioned in the standard, Microsoft’s widely covered announcement that it will retire public access to facial analysis tools in Azure, due to concerns about bias, invasiveness and reliability, was part of a larger overhaul of Microsoft’s AI ethics policies.

Microsoft’s ‘big step forward’ on specific responsible AI standards

According to computer scientist Ben Shneiderman, author of Human-Centered AI, Microsoft’s new Responsible AI Standard is a big step forward from Microsoft’s 18 Guidelines for Human-AI Interaction.

“The new standards are more specific, shifting from ethical concerns to management practices, software engineering workflows and documentation requirements,” he said.

Abhishek Gupta, senior responsible AI leader at Boston Consulting Group and lead researcher at the Montreal AI Ethics Institute, agreed, calling the new standard “a much-needed breath of fresh air, because it goes a step beyond the high-level principles that have largely been the norm so far.”

Mapping the previously outlined principles to specific sub-goals, stages of the AI lifecycle and types of AI systems makes it an actionable document, he explained, one that helps practitioners and operators get past the tremendous degree of ambiguity they face when trying to put principles into practice.

Unresolved bias and privacy risks

Given the unresolved bias and privacy risks in facial recognition technology, Microsoft’s decision to stop selling its Azure tool is “very responsible,” Gupta added. “It is a first step, in my belief, toward replacing the ‘move fast and break things’ mindset with a ‘responsibly move fast and fix things’ mindset.”

But Annette Zimmermann, VP analyst at Gartner, said she believes Microsoft is eliminating facial demographic and emotion detection because the company has no control over how it is used.

“Detecting demographic attributes such as gender and age, possibly combined with emotion, and using that to make decisions that will impact the individual being assessed, such as a hiring decision or a loan decision, has always been a controversial topic,” she explained. “The key point is that, because these decisions could be biased, Microsoft is eliminating this technology, including emotion detection.”

Products like Microsoft’s, with SDKs or APIs that can be integrated into applications over which Microsoft has no control, are different from end-to-end solutions and dedicated products where there is full transparency, she added.

“Products that detect emotions for the purposes of market research, storytelling or customer experience, all cases where you don’t make a decision other than improving a service, will still thrive in this technology market,” she said.

What’s missing from Microsoft’s Responsible AI Standard

Microsoft still has work to do on responsible AI, experts say.

What’s missing, Shneiderman said, are requirements for things such as audit trails or logging; independent oversight; public incident reporting websites; availability of documents and reports to stakeholders, including journalists, public interest groups and industry professionals; open reporting of problems encountered; and transparency about Microsoft’s process for internal review of projects.

Greater attention to the environmental impact of AI systems is another gap, Gupta said, “especially given the work that Microsoft is doing toward large-scale models.” “My recommendation is to start thinking about environmental considerations as a first-class citizen, alongside business and functional considerations, in the design, development and deployment of AI systems,” he said.

The future of responsible AI

Gupta predicted that Microsoft’s announcements would trigger similar actions from other companies in the next 12 months.

“We may also see the release of more tools and capabilities within the Azure platform that make some of the requirements specified in their Responsible AI Standard more widely accessible to Azure platform customers, thus democratizing RAI capabilities for those who don’t necessarily have the resources to do so themselves,” he said.

Shneiderman said he hoped other companies would up their game in this direction, pointing to IBM’s AI Fairness 360 and related approaches, as well as Google’s People + AI Research (PAIR) guidebook.

“The good news is that large companies and small ones alike are moving from vague ethical principles toward specific business practices by requiring certain forms of documentation, reporting of problems, and sharing of information with certain stakeholders and customers,” he added, in some cases opening these systems up to the public. “I think there is a growing recognition that failed AI systems generate significant negative attention, making reliable, safe and trustworthy AI systems a competitive advantage.”
