
Microsoft’s new safety system can catch hallucinations in its customers’ AI apps

Illustration: The Verge

Sarah Bird, Microsoft’s chief product officer of responsible AI, tells The Verge in an interview that her team has designed several new safety features that will be easy to use for Azure customers who aren’t hiring groups of red teamers to test the AI services they built. Microsoft says these LLM-powered tools can detect potential vulnerabilities, monitor for hallucinations “that are plausible yet unsupported,” and block malicious prompts in real time for Azure AI customers working with any model hosted on the platform.
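For readers curious what the hallucination monitoring might look like from a customer's side, here is a minimal sketch of calling Azure AI Content Safety's groundedness detection from Python. This is not taken from the article: the endpoint path, API version, and request fields are assumptions based on Microsoft's public preview documentation and may differ from what actually ships.

```python
# Minimal sketch (assumed API shape, not confirmed by the article):
# ask Azure AI Content Safety whether a model answer is supported by
# its source documents, i.e. flag "plausible yet unsupported" claims.
import os
import requests

# Assumed environment variables holding an Azure AI Content Safety
# resource endpoint (https://<resource>.cognitiveservices.azure.com)
# and its subscription key.
ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]
API_KEY = os.environ["CONTENT_SAFETY_KEY"]


def check_groundedness(answer: str, sources: list[str], query: str) -> dict:
    """Return the service's verdict on whether `answer` is grounded in `sources`."""
    resp = requests.post(
        f"{ENDPOINT}/contentsafety/text:detectGroundedness",
        params={"api-version": "2024-02-15-preview"},  # assumed preview version
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        json={
            "domain": "Generic",
            "task": "QnA",
            "qna": {"query": query},
            "text": answer,               # the model output to verify
            "groundingSources": sources,  # documents the answer should be based on
        },
        timeout=30,
    )
    resp.raise_for_status()
    # Expected (assumed) response includes fields such as
    # "ungroundedDetected" and "ungroundedPercentage".
    return resp.json()


if __name__ == "__main__":
    verdict = check_groundedness(
        answer="The device ships with a 5,000 mAh battery.",
        sources=["The spec sheet lists a 4,500 mAh battery."],
        query="What battery does the device ship with?",
    )
    print(verdict)
```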

“We know that customers don’t all have deep expertise in prompt injection attacks or hateful content, so the evaluation system generates the prompts needed to simulate these types of attacks. Customers can then get a...


