Microsoft Corp. said it has identified US- and overseas-based criminal hackers who bypassed guardrails on generative artificial intelligence tools — including the company’s Azure OpenAI services — to generate harmful content, including non-consensual intimate images of celebrities and other sexually explicit material.
The hackers scraped customer logins from public sources and used them to access generative AI services, including Azure OpenAI, the Microsoft cloud product that lets customers use OpenAI’s models, according to the company. The hackers then changed the capabilities of the AI products and sold access to other malicious groups, providing them with instructions on how to create harmful content.