Earlier this year, South Australia’s Department for Education decided to bring generative AI into its classrooms. But before they opened the doors, one question loomed large: how to do it responsibly?
As different experts, ethicists, and even the government push for increased safety in the development and use of artificial intelligence tools, Microsoft took to the Build stage to announce new ...
Microsoft shipped an Azure AI Content Safety service to help AI developers build safer online environments. In this case, "safety" doesn't refer to cybersecurity concerns, but rather unsafe images and ...
Microsoft has announced a new update for the Azure AI platform. It includes hallucination and malicious prompt attack detection and mitigation systems. Azure AI customers now have access to new ...
New tools for filtering malicious prompts, detecting ungrounded outputs, and evaluating the safety of models will make generative AI safer to use. Both extremely promising and extremely risky, ...
Falling for prompt injection attacks and infringing copyright are just two of the dumb things artificial intelligence may do; Microsoft is offering new tools to help enterprises mitigate such risks in ...
Two vulnerabilities identified by researchers enable attackers to bypass gen AI guardrails to push malicious content onto protected LLM instances. Security researchers at Mindgard have uncovered two ...