Bing or ChatGPT can accept users to ask and answer questions, but the researchers found that if combined with third-party websites, these AI chatbots can also be used by hackers to perform indirect attacks, such as sending phishing websites or letting users reveal their identity information. The prompt pane provided by the Large Language Model (LLM) represented by Bing and ChatGPT blurs the boundary between input data and instructions. If combined with cunning prompts, it may become an attack tool. At present, some studies have used command injection (prompt injection, PI) techniques to launch attacks on users, such as generating malicious content or program codes, or overwriting original commands to execute malicious attempts.
