A Reckoning in Minneapolis: The Truth About Alex Pretti and the Anatomy of Social Media Hoaxes

Investigating the tragic death of Alex Pretti in Minneapolis and why Facebook hoaxes spread. Learn to spot AI fake news and protect the truth in 2026.

By Divine Magazine

In late January 2026, a national tragedy unfolded in Minneapolis. Federal authorities shot and killed Alex Pretti, a 37-year-old acute care nurse with the Department of Veterans Affairs, during a protest.


Bystanders filmed the shooting, but the turmoil didn’t stop on the street. Almost immediately, a second battle broke out online: a rush of fake reports meant to damage Pretti’s reputation and confuse the public about what had happened.

The event is a clear example of how rapidly false information can spread and how poorly sites like Facebook deal with it when fake stories start to gain traction.

The Reality: What Happened to Alex Pretti?

According to verified reports and bystander footage, Pretti was acting as a peaceful observer and “street medic” during an immigration enforcement operation.

  • The Incident: The footage reveals Pretti trying to protect a woman who federal officers had pushed to the ground.
  • The Shooting: Federal officers pepper-sprayed and tackled Pretti before shooting him multiple times. While federal officials initially claimed he was an “armed disruptor,” video evidence shows him holding only his mobile phone.
  • The Legacy: Pretti was a veteran ICU nurse known for his dedication to caring for others—a fact confirmed by his family and the Minneapolis VA Health Care System.

The Anatomy of a Smear Campaign

Within hours of his death, several highly coordinated hoaxes appeared on Facebook and other platforms. These were not random rumors; they were calculated “rage-bait” stories.

  1. The “Crossdresser” Hoax: Photos of a different individual (later identified as Kyle Wagner from a 2022 Pride event) were circulated to claim Pretti was someone else, aiming to trigger discriminatory backlash.
  2. The “Fired” Fabrications: Articles from “spam factories” based in Vietnam claimed Pretti had been fired for inappropriate conduct months earlier. These claims were debunked as false by the fact-checking outlet Lead Stories.
  3. The AI-Generated Aggression: Videos allegedly showing Pretti “kicking federal vehicles” were scrutinized by forensic experts. While some confrontations were real, the manipulation of many clips made his actions appear more violent than they actually were.

Why Does Facebook Allow Fake News?

The persistence of these stories leads many to ask: Why doesn’t the algorithm stop this? The answer is an architectural blend of law, profit, and technical limitations.

  • Section 230 Protection: Under current U.S. law, platforms are generally not held liable for content posted by third parties. This “shield” lets them avoid the legal responsibility that traditional newspapers face.
  • The Engagement Engine: Facebook’s algorithm puts “meaningful social interaction” ahead of accuracy. Unfortunately, misinformation, which triggers fear and anger, generates more comments, shares, and clicks than dry, factual reporting.
  • The “Whack-a-Mole” Problem: Meta employs thousands of moderators and AI tools, but spam factories use automated systems to generate thousands of new pages as soon as old ones are banned.
  • The Neutrality Trap: Platforms often hesitate to remove political or controversial content for fear of being accused of “censorship.” This allows “gray area” misinformation to linger for days before it is finally flagged.

FAQ: Navigating Information in a Crisis

Q: How can I tell if a story about a current event is fake? A: Look for the source. If the news comes from a website you’ve never heard of (e.g., an obscure .co or look-alike domain), verify it against established outlets like the Associated Press or The Guardian.

Q: Does reporting a post on Facebook actually do anything? A: Yes. While it may not be removed immediately, a high report volume triggers human review and can lead to the post being demoted in people’s feeds.

Q: Are AI-generated “deepfakes” common in these cases? A: Increasingly so. In 2026, AI is used not just to create fake videos but to generate thousands of fake comments to make a lie look like a “popular opinion.”


Digital Forensics: How to Spot an AI “Spam Factory”

In 2026, misinformation is no longer a matter of a few rogue posts; it is an industrial-scale operation. These “spam factories” use sophisticated AI to generate thousands of profiles that look, act, and argue like real people. To protect your feed, you need to look past the surface of a post and examine the behavior behind it.

Here is your 2026 guide to identifying inauthentic AI networks on social media.


1. The Profile Audit: Beyond the Bio

AI-generated accounts often follow a standardized template. Look for these red flags:

  • The “Uncanny” Face: AI-generated profile pictures (often produced by GANs) tend to have perfectly centered eyes, symmetrical lighting, and “fused” jewelry or glasses.
    • Pro Tip: Look at the background; AI often creates nonsensical, blurry patterns behind a clear face.
  • The Generic Bio: Many bot accounts use a formula: [Adjective] [Noun] [Number]. If you see dozens of accounts with bios like “Lover of Truth 2026” or “Proud Patriot 99,” you’re likely looking at a factory.
  • The Engagement Gap: Check the followers-to-following ratio. A “spam” account often follows thousands of people but has only a handful of followers, mostly other bots.
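The bio and follower checks above can even be expressed as a short script. This is a minimal sketch for illustration only; the `Profile` class, field names, and thresholds are assumptions, not any platform's real API.

```python
from dataclasses import dataclass

# Hypothetical profile record; the fields are illustrative, not a real platform API.
@dataclass
class Profile:
    bio: str
    followers: int
    following: int

def profile_red_flags(p: Profile) -> list[str]:
    """Return heuristic red flags suggesting a possible spam account."""
    flags = []
    # Generic templated bio: a few words ending in a number,
    # e.g. "Proud Patriot 99" or "Lover of Truth 2026".
    words = p.bio.split()
    if 2 <= len(words) <= 4 and words[-1].isdigit():
        flags.append("generic templated bio")
    # Engagement gap: mass-follows thousands of people but has almost no followers.
    if p.following > 1000 and p.followers < p.following / 100:
        flags.append("engagement gap (mass-following, few followers)")
    return flags
```

A richly written bio with a balanced follower ratio returns no flags; a templated bio on a mass-following account returns both.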

2. The Content Trap: Identifying “Coordinated Inauthentic Behavior”

Individual bots are easy to ignore; coordinated networks are dangerous. They work in tandem to create the illusion of “consensus.”

  • The “Copy-Paste” Echo: If you see the exact same sentence—word for word—being posted by different accounts under a news story, it’s a bot swarm. They are designed to overwhelm the audience with a specific narrative.
  • The 24/7 Cycle: Humans sleep; bots don’t. If an account is posting consistently every 15 minutes for 48 hours straight, it is automated.
  • The “Rage-Bait” Loop: AI is programmed to trigger your emotions. If a post seems specifically designed to make you feel immediate, intense anger without providing a single verifiable source link, it is likely “engagement-engineered.”
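Of these patterns, the “copy-paste echo” is the easiest to check mechanically: gather the comments under a post and look for identical text coming from different accounts. A minimal sketch, assuming the comments are available as (account, text) pairs; the three-account threshold is an arbitrary illustration.

```python
from collections import defaultdict

def find_copy_paste_echoes(comments, min_accounts=3):
    """comments: iterable of (account_id, text) pairs.
    Returns the texts posted verbatim (ignoring case and spacing)
    by at least min_accounts distinct accounts, mapped to those accounts."""
    by_text = defaultdict(set)
    for account, text in comments:
        # Normalize whitespace and case so trivial edits don't hide the echo.
        by_text[" ".join(text.split()).lower()].add(account)
    return {t: accts for t, accts in by_text.items() if len(accts) >= min_accounts}
```

One organic comment among a swarm of identical ones stands out immediately in the result.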

3. Verification Tools for 2026

You don’t have to rely on your gut alone. Use the modern toolkit to verify information:

  • AI Detectors: Paste suspicious text into tools like GPTZero or Originality.ai. They can often detect the “predictable” sentence structures of large language models.
  • Reverse Image Search: Use Google Lens or TinEye on profile pictures. If a “local citizen” from Minneapolis has a profile picture that appears on a stock photo site or a random blog in Vietnam, the account is fake.
  • Fact-Checking Databases: Sites like Snopes and FactCheck.org are your first line of defense during breaking news events.
| Red Flag          | Human Behavior                      | AI Bot Behavior                          |
| ----------------- | ----------------------------------- | ---------------------------------------- |
| Posting Frequency | Intermittent, irregular             | Rigid, frequent, 24/7                    |
| Tone              | Nuanced, emotional, slang           | Overly formal or hyper-aggressive        |
| Interactions      | Diverse interests, personal replies | Repeats 1-2 slogans, no original replies |
| Account Age       | Years old, clear history            | Recently created (last 3-6 months)       |
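The “Posting Frequency” contrast can be turned into a rough test of its own: humans leave long silent gaps (they sleep), bots often don’t. A minimal sketch, assuming you have an account’s post timestamps; the 2-hour and 48-hour thresholds are illustrative assumptions, not platform rules.

```python
from datetime import datetime, timedelta

def looks_automated(timestamps, max_gap_hours=2.0, min_span_hours=48.0):
    """Flag round-the-clock posting: the posts span at least
    min_span_hours with no silent gap longer than max_gap_hours."""
    ts = sorted(timestamps)
    if len(ts) < 2:
        return False
    span_hours = (ts[-1] - ts[0]).total_seconds() / 3600
    if span_hours < min_span_hours:
        return False
    # Longest quiet stretch between consecutive posts.
    longest_gap = max(
        (later - earlier).total_seconds() / 3600
        for earlier, later in zip(ts, ts[1:])
    )
    return longest_gap <= max_gap_hours
```

An account posting every 15 minutes for two straight days is flagged; a human pattern with overnight silences is not.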

FAQ: Protecting Your Feed

Q: Can I get hacked by just looking at a fake news post? A: No, but clicking on the links within those posts can lead to “phishing” sites. These sites are designed to look like login screens (e.g., a fake Facebook login) to steal your credentials.

Q: Why doesn’t Facebook just delete all bots? A: It’s a “cat and mouse” game. As soon as Meta deletes 100,000 bots, the spam factories use AI to generate 200,000 more with slightly different behavioral patterns.

Q: What should I do if I find a bot network? A: Report, don’t retort. Do not argue with a bot; it only increases the post’s engagement and pushes it to more people. Use the platform’s “Report” tool for “Spam” or “Fake Account.”


Conclusion


The loss of Alex Pretti is a human tragedy long before it becomes a digital one. In times like these, false information doesn’t just change the facts; it also takes away the dignity of someone who can’t defend themselves and makes the grief of those who are still alive harder to bear. Social platforms may be engineered to reward outrage over accuracy, but we’re not powerless passengers in that system. Each of us can slow down, question what we see, and refuse to amplify unverified rage-bait. In doing so, we protect both the truth and the memory of the person who’s gone.

Divine Magazine is your destination for fresh insights on lifestyle, wellness, music, home & garden, and creative trends. Discover empowering stories and practical guides—and become part of our vibrant community by contributing your own inspiration or joining us as a guest writer!