June 6, 2023
0306 bing indirect prompt injection threats 960 image by kai greshake

Hackers may combine AI chatbots and open websites to attack users

Bing or ChatGPT can accept users to ask and answer questions, but the researchers found that if combined with third-party websites, these AI chatbots can also be used by hackers to perform indirect attacks, such as sending phishing websites or letting users reveal their identity information. The prompt pane provided by the Large Language Model (LLM) represented by Bing and ChatGPT blurs the boundary between input data and instructions. If combined with cunning prompts, it may become an attack tool. At present, some studies have used command injection (prompt injection, PI) techniques to launch attacks on users, such as generating malicious content or program codes, or overwriting original commands to execute malicious attempts.

Ewen Eagle

I am the founder of Urbantechstory, a Technology based blog. where you find all kinds of trending technology, gaming news, and much more.

View all posts by Ewen Eagle →

Leave a Reply

Your email address will not be published.