November 24, 2024


Today’s snake oil salesmen are equipped with military-grade psychological warfare techniques capable of influencing the most sophisticated and technologically advanced people on the planet. Even the savviest among us face great risks. Why? Because human nature is to trust by default. Without trust, we as a species cannot survive.

When you mix any form of media (the media we have grown to trust throughout our lives) into a financial scam, it becomes inevitable that someone, somewhere, will fall for an AI trick.

Artificial intelligence-based “social engineering” fraud is quickly becoming the purest and most effective form of psychological manipulation. In short, we are not ready to deal with the situation that already exists.

Deepfake AI fraud is the digital version of a sociopath, a psychopath, a narcissist, a gaslighter, a violent predator.

OpenAI, the creator of ChatGPT, recently announced the termination of accounts associated with state-affiliated threat actors:

“Our findings show that our models offer only limited, incremental capabilities for malicious cybersecurity tasks. We partnered with Microsoft Threat Intelligence to disrupt five state-affiliated actors that sought to use AI services in support of malicious cyber activity. We also outline our approach to detecting and disrupting such actors in order to promote information sharing and transparency regarding their activities.” — OpenAI

What OpenAI calls “state-affiliated actors” includes Chinese, Iranian, North Korean and Russian hackers. Banning these bad actors is only a temporary fix; it does not solve the underlying problem.

P.T. Barnum was wrong

Artificial intelligence deepfakes are not the root of the problem. We are the problem. As the saying goes, “There’s a sucker born every minute.” In fact, roughly 250 people are born every minute, and by my reckoning, every one of them is a fool, you and me included.

What does that mean? It means we all have the potential to be deceived. I’d bet every one of us has been deceived or “fooled” at some point. That is the danger of trusting by default.

Just look at the evolution of the simple phishing email scam. Over the past 20 years, this subterfuge has evolved from broad, “scammer grammar” communications into advanced, persistent threats that target specific individuals by understanding and exploiting every aspect of their personal and professional lives.

We’ve seen AI-based phishing schemes do serious damage because they are many times more convincing than their predecessors.

In this era of rapid technological advancement and the integration of artificial intelligence, it is important for everyone to stay informed about the latest scams. Last year, we witnessed turmoil in the cybersecurity world, with major businesses falling victim to malware and ransomware attacks, and opportunities for cybercriminals exploding thanks to advances in artificial intelligence.

Unfortunately, forecasts indicate that the sophistication and prevalence of online threats and scams will only escalate, so individuals must remain vigilant and proactive in protecting their digital assets.

Consider the havoc wreaked by deepfake AI

The rapid proliferation of deepfake websites and apps is wreaking havoc, triggering a wave of financial and personal fraud that poses a unique threat to individuals and businesses alike.

The proliferation of deepfakes, driven by the accessibility and sophistication of artificial intelligence technology, is a troubling trend. Even the average technology user has tools that allow them to impersonate an individual given enough video or imagery.

Therefore, we must expect a surge in the use of video and audio deepfakes in online fraud. This has already happened. Scammers use deepfake videos and/or messages to impersonate superiors and request urgent information.

Likewise, in the personal realm, these manipulative tactics may involve impersonating a family member or friend, tricking an individual into revealing sensitive information, or withdrawing funds from a bank account to pay a kidnapping ransom.

As ridiculous as it sounds, if you heard your daughter screaming during a distant cell phone call, you would probably shell out the cash if you thought your loved one was being held captive.

The rise of artificial intelligence deepfakes has created huge challenges in combating financial fraud, providing cybercriminals with unprecedented capabilities. With the help of artificial intelligence, cybercriminal groups can quickly update and enhance traditional wire fraud tactics as well as sophisticated impersonation schemes.

This rapid evolution jeopardizes the reliability of verification and authorization processes within the financial sector, undermining trust and confidence in the financial system.

This is just the beginning

A financial officer at a multinational company fell victim to deepfake technology in a sophisticated scheme that resulted in a payout of up to $25 million to an impostor impersonating the company’s CFO.

The elaborate ruse unfolded during a video conference call, where the unsuspecting employee found himself surrounded by seemingly familiar faces, only to discover later that they were all carefully crafted deepfake replicas. Although initial suspicions were sparked by a suspicious email, the worker’s doubts were quickly put to rest by the convincing resemblance of his supposed co-workers.

The incident highlights the staggering effectiveness of deepfake technology in perpetrating financial fraud on an unprecedented scale.

I predict that such schemes will easily intrude into the real estate mortgage closing process. The closing process, which relies on phone calls, emails and digital signatures, already involves a great deal of anonymity. One way to reduce the “anonymity threat” is to add Zoom video meetings to the process. But who’s to say the participants in a video call weren’t generated by artificial intelligence?

Deepfake marketplaces reach deep into the dark web, becoming a favored resource for cybercriminals seeking synchronized deepfake video and audio for a range of illicit purposes, including cryptocurrency scams, disinformation campaigns, and social engineering attacks aimed at financial theft.

In darknet forums, individuals actively seek out deepfake software or services, highlighting the strong demand for developers proficient in AI and deepfake technology, who often step in to fill these requests.

Protect yourself and your organization

When faced with a video or audio request, the tone of the message must be considered. Does the language and phrasing match what you would expect from your boss or family member? Before taking any action, take a moment to pause and reflect.

If possible, contact the purported sender via a different platform, preferably in person, to verify the authenticity of the request. This simple precaution can help guard against potential deception facilitated by deepfake technology, ensuring you don’t fall victim to an impersonation scam.

  1. Stay informed: Regularly learn about common AI-related scams and tactics used by cybercriminals.
  2. Verify the source: Confirm the sender’s identity through multiple channels before taking any action.
  3. Use trusted platforms: Avoid engaging with unknown or unverified sources, especially on online marketplaces or social media platforms.
  4. Enable security features: Take advantage of security features like multi-factor authentication where possible to add an extra layer of protection to your accounts and sensitive data.
  5. Update software: Regularly check for software updates to reduce vulnerabilities exploited by AI-related scams.
  6. Review requests: Scrutinize unexpected requests carefully; cybercriminals may use AI-generated content to create convincing phishing emails or messages.
  7. Educate others: Share your knowledge and awareness of AI-related scams with friends, family, and colleagues.
  8. Verify identity: Beware of AI-generated deepfake videos or audio impersonating trusted individuals.
  9. Be wary of unrealistic offers: AI-powered scams may promise unrealistic rewards or benefits to lure victims into participating in the scheme.
  10. Report suspicious activity: Prompt reporting can help prevent further exploitation and protect others from falling victim to similar scams.
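Tip No. 4 deserves a concrete illustration. One widely used second factor is the time-based one-time password (TOTP) defined in RFC 6238, the six-digit code an authenticator app derives from a shared secret and the current time. Because a deepfaked voice or face cannot produce that code, it survives exactly the kind of impersonation described above. As a minimal sketch using only Python’s standard library (the secret shown is the published RFC 6238 test key, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30, at=None):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA-1).

    secret_b32 -- shared secret, base32-encoded (as in authenticator QR codes)
    at         -- Unix timestamp to compute the code for (defaults to "now")
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of `period`-second steps since the epoch.
    counter = int((time.time() if at is None else at) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59 yields "94287082".
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", digits=8, at=59))
```

The point is not to roll your own authenticator, but to see why this factor works: both sides independently derive the same short-lived code from a secret that was never spoken aloud, so it cannot be harvested from a cloned voice or a deepfaked video call.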

None of the above alone will solve the problem. I must emphasize this: now more than ever it is important for organizations and their employees to participate in consistent and ongoing security awareness training.

Robert Siciliano is the training director and a security awareness expert at Protect Now, a #1 bestselling Amazon author, a media personality and the architect of the CSI Protection Certification. While “security” or fraud prevention is not the essence of a real estate agent’s business, security is everyone’s business and is in the best interest of you and your clients. Agents should prioritize competencies in the CSI cyber, social, identity and personal protection areas.