OP 24 January, 2025 - 01:15 AM
In late 2024, Abnormal Security, a provider of AI-powered email security solutions, uncovered a new AI chatbot built specifically for cybercriminal activity. Dubbed GhostGPT, the malicious tool is readily available through platforms like Telegram and gives cybercriminals capabilities ranging from crafting convincing phishing emails to developing malware.
Unlike traditional AI models constrained by ethical guidelines and safety measures, GhostGPT operates without such restrictions. This unfettered access to powerful AI capabilities lets criminals generate malicious content, such as phishing emails and malicious code, with unprecedented speed and ease.
According to Abnormal Security’s analysis, GhostGPT is likely a wrapper that connects to a jailbroken version of ChatGPT or an open-source LLM with its ethical safeguards removed. This allows GhostGPT to return direct, unfiltered answers to sensitive or harmful queries that mainstream AI systems would block or flag.
This tool significantly lowers the barrier to entry for cybercrime. Because specialized skills and deep technical knowledge are no longer required, even inexperienced actors can harness AI for malicious ends and launch more sophisticated, impactful attacks with greater efficiency.
Furthermore, GhostGPT prioritizes user anonymity, claiming that user activity is not recorded. This feature appeals to cybercriminals seeking to conceal their illegal activities and evade detection.
Continue reading:
Source:
HackRead
https://hackread.com/ghostgpt-malicious-...ime-scams/