The Claude by Anthropic application logo appears on a smartphone screen in Reno, United States, on November 21, 2024. (Photo by Jaque Silva / NurPhoto via Getty Images)
For some time, the business model of the relatively few cybercriminal geniuses who create sophisticated malware such as ransomware has been to offer their malware and services to less sophisticated cyber-crooks on the dark web, even providing delivery systems in exchange for a percentage of the proceeds. However, AI is quickly changing this model, because now even unsophisticated criminals are able to use artificial intelligence to carry out various frauds.
AI tools can harvest huge amounts of data from social media and other publicly available sources to craft spear phishing emails that targeted victims are more likely to trust, often because the messages appear to come from someone they know or reference some personal matter.
Readily available deepfake and voice cloning technology has also supercharged a variety of scams that in the past relied purely on social engineering, most notably business email compromise, in which the victim is deceived into believing that a trusted executive has authorized a payment under some pretense. This type of fraud caused losses around the world of more than $55 billion between October 2013 and December 2023, according to the FBI.
Now, with the advent of AI and the use of deepfake and voice cloning technology, the fraud has reached new heights. In 2024, the engineering firm Arup lost $25 million to cybercriminals who posed as the company's CFO in deepfake video calls and persuaded an employee of the company to transfer the money.
But things aren’t as bad as you think. They are far worse.
Anthropic, the company that developed the Claude chatbot, recently published a report detailing how its chatbot has been used to develop and carry out sophisticated cybercrime. The report described an evolution in how cybercriminals use AI: no longer merely as a tool for developing malware, but as an active operator of cyberattacks, a practice Anthropic calls "vibe-hacking." The report gave the example of one cybercriminal based in the UK, identified as GTG-5004, who used Claude to find companies vulnerable to a ransomware attack by scanning thousands of VPN endpoints, map company networks, create malware with evasion capabilities to steal sensitive data, deliver the malware, exfiltrate the data, and sift through it to determine which material would be most damaging to each company, and even to apply psychology in crafting the messages carrying the ransom demands. Claude was also used to analyze the stolen financial records of targeted companies to determine the amount of bitcoin to demand in exchange for not publishing the stolen material.
In one month, GTG-5004 used Claude to attack 17 organizations across government, health care, emergency services, and religious institutions, making ransom demands ranging from $75,000 to more than $500,000.
GTG-5004 then began selling ransomware to other cybercriminals on the dark web in packages at different price tiers, including encryption and methods designed to help hackers avoid detection. Notably, unlike in the past, when only technologically sophisticated criminals sold or leased malware on the dark web, the report indicated that "this operator does not seem to implement encryption, anti-analysis techniques or Windows internals manipulation" without AI assistance.
The result is that a single cybercriminal can now do what previously required an entire team with expertise in encryption, Windows internals, and anti-analysis techniques: create ransomware, make strategic and tactical decisions about targeting, exploitation, and monetization, and adapt to defensive measures. All of this lowers the barrier to entry for criminals who want to commit cybercrime.
The report also detailed how AI's capabilities were abused by North Korean operatives to obtain remote jobs at tech companies. According to the report, traditional North Korean IT worker operations relied on highly qualified individuals in North Korea; the investigation revealed operators with such limited technical skills that they would previously have failed to infiltrate these companies and keep their positions.
The report described how, through AI, North Korean operators who could not write basic code or communicate in English were able to pass interviews and obtain jobs at technology companies, earning hundreds of millions of dollars a year that help finance North Korea's weapons programs. Making matters even worse, with AI each operator can hold multiple jobs at American technology companies simultaneously, something that would be impossible without it.
Anthropic responded to the threats by banning the accounts tied to these operations, developing a customized classifier to identify this type of activity, and instituting new detection measures within its existing security systems. In addition, Anthropic shared its findings with other companies, as well as with the broader security community, to help them recognize and defend against the threats posed by criminals using AI platforms. Even so, the threat of AI being used for cybercrime remains great.
Anthropic's report is truly a wake-up call for the entire AI industry.