To identify and address security vulnerabilities in its artificial intelligence (AI) systems, Google has launched a new bug bounty programme. The company is offering rewards of up to $30,000 (about INR 26 lakh) to researchers who find significant flaws with the potential to cause real-world harm.
The new AI bug reward programme focuses on "rogue actions": situations in which an AI system is deceived into performing an action it shouldn't. Examples include a hidden command that compels an AI to summarise a user's private emails and forward them to an attacker, or a prompt that could trick Google Home into unlocking a door.
What Is an AI-Related Bug, According to Google?
Google has given precise examples of what constitutes an AI bug. These include any flaw that allows a large language model or other generative AI tool to be exploited to bypass security, alter data, or perform unintended actions. For example, researchers have previously discovered vulnerabilities that made it possible to manipulate smart home equipment through tampered calendar events, opening shutters or turning off lights without authorisation. Keep in mind that not all AI problems will qualify for a reward.
Simply getting Gemini to make a mistake or produce unpleasant text is not enough. Issues of that kind should instead be reported through the regular feedback features in Google's AI products, which allow safety teams to examine and correct model behaviour over time.
CodeMender by Google
In addition to the newly launched bug bounty programme, Google also unveiled CodeMender, an AI agent that automatically fixes security vulnerabilities in code. According to the company, CodeMender has already fixed 72 vulnerabilities in open-source projects, with each fix reviewed by human specialists before submission.
Serious rogue-action defects in Google's main products, including Search, Gemini Apps, Gmail, and Drive, are eligible for the top award of $20,000. With bonuses for exceptionally creative or high-quality reports, the sum can reach $30,000. Smaller problems, or flaws in other products such as NotebookLM or Jules, are eligible for lower awards.
Researchers have already earned over $400,000 from Google's AI-related bug reports in the past two years; the new initiative simply makes the process more formal and more competitive. AI technologies are becoming an ever larger part of daily life. They can be found in home appliances, laptops, phones, and even the tools we use at work, and as these systems grow more powerful, attackers grow more inventive. In essence, Google is promising to pay anyone who can break its AI before the bad actors do.
Quick Takeaways

- Eligible bugs include AI systems being tricked into performing harmful or rogue actions.
- Flaws that allow large language models or other generative AI tools to bypass security also qualify.
- Google's AI agent CodeMender automatically fixes security vulnerabilities in code.
- Serious defects in Search, Gemini, Gmail, and Drive earn the top rewards.

