The Dark Side – Native News Online


Technological advancements promise to make our lives better and often make us want something today that we didn’t even know we needed yesterday. The internet promised more efficient work and email that would replace snail mail. It also led to the creation of phishing and online scams—new ways to commit old-fashioned robbery, theft, and deception. Even the invention of trains, which revolutionized transportation, led to the Great Train Robbery.

These are examples of dual-use technologies—tools that can be used for both beneficial and harmful purposes. The federal government even requires special review before certain technologies on this list are exported. Artificial intelligence can be exported if it falls within certain exceptions, including whether it has already been published and the extent to which it has been “trained.”

With the introduction of artificial intelligence to the global community, the darker side of humanity is already adapting it to better commit old crimes and to invent new ones. TRM Labs, a blockchain intelligence and financial crime consulting firm, described this evolution as follows:

At first, these organizations turned to AI to aid their human-led criminal efforts, translating phishing scripts into multiple languages or scanning vast codebases for exploitable vulnerabilities. These early applications mirror legitimate uses of AI, but are weaponized to devastating effect.

The next frontier for criminals is autonomy. Just as corporations strive to automate tasks, cybercriminals can develop AI agents capable of operating completely independently. These agents could identify and exploit vulnerabilities without human oversight—executing complex objectives, such as hacking critical infrastructure (e.g., water treatment plants).

This evolution doesn’t just change the scale of criminal operations; it fundamentally reshapes them—making them faster, more efficient, and alarmingly difficult to detect or counteract.

I am going to narrow this examination to cybercrimes, since artificial intelligence both originates and operates online.

  1. AI-Powered Ransomware
    Previously, creating ransomware required coding talent and three to five years of training. With “vibe” coding, an account holder using Claude.ai was able “to develop, market, and distribute several variants of ransomware, each with advanced evasion capabilities, encryption, and anti-recovery mechanisms.” The ransomware collected personal data and threatened to make it public on the internet, rather than simply encrypting it. Ransom demands reached as much as half a million dollars. The account holder sold these ransomware packages on internet forums for $400 to $1,200. This represents a significant leap, enabling sophisticated ransomware creation by nearly anyone.
  2. Deepfakes and Financial Fraud
    In Hong Kong, an employee of a multinational firm transferred $25 million to fraudsters after being duped into joining a highly realistic video call in which the company’s Chief Financial Officer and several other “colleagues” were all deepfakes. This is a major threat. According to consulting company Tech Advisors, 71% of people globally do not know what a deepfake is, and just 0.1% can accurately identify one.
  3. AI-Generated Phishing Campaigns
    By April 2025, at least 14% of targeted email attacks were generated using large language models (LLMs), up from 7.6% in April 2024. Additionally, 82.6% of phishing emails now use AI technology in some form, and generative AI tools allow hackers to compose phishing emails up to 40% faster.
  4. Post-Exploitation Malware
    Post-exploitation strategies have evolved to use AI tools to generate commands that continue attacks on a victim’s computer. In one example from August 2025, a malicious JavaScript upload hijacked an AI command-line interface (CLI) tool installed on the victim’s machine, such as the Gemini CLI, and directed it to steal authentication codes and cryptocurrency assets.

Prosecuting Crimes Using Artificial Intelligence
Technological advances that aid in the commission of crimes fall into two categories: (1) crimes that still meet the definition of an existing criminal statute or common law offense but are enhanced, accelerated, or made more frequent; or (2) entirely new crimes that do not clearly fit within existing legal definitions and may require new legislation.

Legislators must first understand these crimes, recognize the need for action, and prioritize legislation to address emerging threats. Most prosecutions so far—based on the examples above—have charged perpetrators under existing fraud, computer crime, and extortion laws rather than AI-specific statutes. The fourth example, for instance, might be prosecuted as unauthorized access to a computer, carrying penalties of one to ten years in prison, or up to 20 years depending on the extent of the damage. Still, specific AI cybercrime legislation will likely be necessary.

In February 2024, Deputy Attorney General Lisa Monaco announced a new initiative, Justice AI. The initiative directs DOJ prosecutors to seek stricter sentences for individuals who use artificial intelligence in the commission of crimes.

Internationally, in 2024, the International Criminal Court convened a conference with twelve technology companies to examine AI’s impact on crimes such as foreign influence operations, digital surveillance, and attacks on civilian infrastructure. One proposed solution is the development of a stronger international legal framework, potentially through a global model statute.

Final Thoughts
We are in the early stages of understanding the types of cybercrimes that will emerge from artificial intelligence. We are already seeing how AI enhances the scale, speed, and efficiency of traditional cybercrimes, but we are not far enough along to fully grasp the new crimes that will develop. Perhaps by the end of 2026, we will have a clearer picture of what is possible.

When we let the genie out of the bottle with the release of ChatGPT, we also released a darker force. Like all genies, neither one can be put back into the bottle.
