Connect with us

News

Google reveals government misuse of Gemini AI

Published

on

Google's intelligence unit reports government-linked attempts to exploit its AI chatbot, Gemini. Hackers failed, highlighting AI security risks.

In a paper titled ‘Adversarial Misuse of Generative AI’, Google’s threat intelligence unit outlines how attackers engaged with its AI chatbot, Gemini.

Google has identified instances of government-linked entities attempting to exploit its Gemini AI.

Hackers affiliated with governments tried to misuse Gemini AI for cyber threats but failed to breach its security.

While AI advancements offer new opportunities, they also become potential targets for cyber threats, highlighting the risks of misuse.

These threat actors attempted to exploit the AI by inputting tailored prompts, according to Google’s report.

Furthermore, state-backed cyber threat groups explored ways to manipulate Gemini for unethical activities.

Google confirmed that although hackers attempted to jailbreak Gemini, they did not use sophisticated tactics.

The hackers employed only basic techniques, such as altering the wording or repeatedly submitting the same prompt, but these attempts were unsuccessful.

AI jailbreaks are prompt injection attacks designed to bypass an AI model’s restrictions, potentially exposing private information or creating unsafe content.

In one specific case, Google noted that an APT actor used publicly available prompts to trick Gemini into completing malicious coding tasks.

Google stated that the attempt failed, as Gemini generated a response with built-in safety filters.

Besides the low-effort jailbreak attempts, Google also discussed how government-sponsored APTs interacted with Gemini.

These attackers worked to use Gemini to support their malicious actions.

The attackers focused on tasks such as information gathering, researching publicly disclosed vulnerabilities, and carrying out coding and scripting work.

Google noted that the attackers aimed to enable post-compromise tactics, such as evading security measures.

Google revealed that Iran-based APT actors targeted AI to craft phishing campaigns.

Read also: EigenLayer & Cartesi to drive adoption with AI & DeFi Apps

The AI model helped the attackers gather information on defense experts and organizations through reconnaissance.

Iran-backed APT actors used AI to produce content focused on cybersecurity.

Meanwhile, China-backed APT actors turned to Gemini for troubleshooting coding, scripting, and development processes.

They used AI to research methods to secure deeper access to their target networks.

APT groups in North Korea leveraged Gemini for various phases of their attacks, from gathering information to developing strategies.

The report said:

“They also used Gemini to research topics of strategic interest to the North Korean government,  such as the South Korean military and cryptocurrency.”

0 0 votes
Article Rating
Continue Reading
Advertisement Earnathon.com
Click to comment
0 0 votes
Article Rating
Subscribe
Notify of
guest

0 Comments
Oldest
Newest Most Voted
Inline Feedbacks
View all comments

Crypto News Update

Latest Episode on Inside Blockchain

Crypto Street

Advertisement



Trending

ALL Sections

Recent Posts

0
Would love your thoughts, please comment.x
()
x