Google Gemini AI Bug Allows Invisible, Malicious Prompts

A prompt-injection vulnerability in Google's Gemini AI assistant lets attackers craft messages that appear to be legitimate Google Security alerts but can instead be used to target users across various Google products with vishing and phishing attacks.

Author: Elizabeth Montalbano, Contributing Writer
