AI, Apps, and Authenticity: Where Consumer Trust Really Stands

Here’s the paradox of 2025: consumers rely on ratings more than ever, yet they’re less sure who (or what) to trust. As apps and discovery platforms embed AI into every click, the review economy feels simultaneously smarter and shakier. Pew Research Center’s latest report captures the mood: the U.S. public remains wary of AI’s broader impact and wants more oversight and personal control, sentiments that spill directly into how people judge what’s “real” online.

Trust hasn’t collapsed, but it has cooled. BrightLocal’s 2025 survey shows a sharp slide in the share of consumers who trust reviews as much as personal recommendations, down to 42% from 79% in 2020, suggesting people now treat reviews as inputs, not verdicts. In other words, users triangulate: they skim the stars, but they also read the text, check the photos, and compare sources.

If skepticism has grown, so have safeguards. In the U.S., the Federal Trade Commission finalized a rule banning fake or purchased reviews (including AI-generated fakes) and authorizing civil penalties against violators; it took effect in October 2024 and remains a defining line in the sand. Across the Atlantic, the U.K.’s Competition and Markets Authority has pushed platforms to crack down, with Google agreeing to tougher measures: removing sham reviews, warning users on offender profiles, and reporting progress to the regulator.

Platforms aren’t waiting for regulators to act first. Google says it’s using upgraded machine learning to spot and remove fake or low-quality contributed content, and it tightened multiple spam policies in 2024–2025 to demote manipulative material and so-called “parasite” tactics (site reputation abuse). Yelp, meanwhile, rolled out AI-powered Review Insights and expanded trust-and-safety systems designed to detect incentivized or policy-violating reviews at scale; its 2024 Trust & Safety Report details those mitigations. Marketplaces like Amazon pair machine learning with legal action, reporting hundreds of enforcement moves against fake-review brokers and, in 2025, court wins enabling broader takedowns.
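
To make the detection idea concrete, here is a minimal sketch of one signal such systems commonly rely on: flagging suspicious bursts in review volume. This is an illustrative heuristic only, not any platform’s actual pipeline; the function name, the z-score approach, and the threshold are all assumptions made for the example.

```python
from collections import Counter
from datetime import date
from statistics import mean, stdev

def flag_review_bursts(review_dates: list[date], z_threshold: float = 3.0) -> list[date]:
    """Return days whose review volume is a statistical outlier.

    A sudden spike in daily volume is one classic symptom of a purchased
    review campaign. Illustrative heuristic only; real systems combine
    many signals and trained models.
    """
    daily_counts = Counter(review_dates)  # reviews per calendar day
    counts = list(daily_counts.values())  # days with zero reviews are ignored (simplification)
    if len(counts) < 2:
        return []  # too little history to establish a baseline
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # perfectly flat history; nothing stands out
    return [day for day, n in daily_counts.items()
            if (n - mu) / sigma > z_threshold]
```

Production systems layer many signals on top of volume, such as account age, reviewer networks, and text similarity, then feed them into trained classifiers; a lone z-score only shows the shape of the problem.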

So, is technology restoring trust—or adding noise? The answer is “both.” AI lowers friction for bad actors to mass-generate plausible-looking reviews, which feeds confusion. But AI also powers countermeasures: anomaly detection, behavior analysis, and cross-signal verification that a human moderation team can’t match at scale. The net effect is a re-baselining of trust: consumers don’t blindly accept star averages; they look for patterns—volume over time, specificity of experience, verified purchasers, platform warning labels, and consistency across multiple sites. That aligns with the broader public’s desire for transparency and control in AI-mediated experiences.
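
As a thought experiment, that layered-signals idea can be sketched as a simple weighted blend. Everything below is hypothetical: the signal names, the weights, and the penalty for a platform warning were chosen purely to illustrate that no single number, star average included, decides the outcome.

```python
from dataclasses import dataclass

@dataclass
class ReviewSignals:
    """Hypothetical per-listing signals a skeptical reader might weigh."""
    verified_purchase_ratio: float  # 0..1, share of reviews from verified buyers
    steady_volume: float            # 0..1, 1.0 = organic pacing, 0.0 = burst-heavy
    avg_specificity: float          # 0..1, crude proxy for detail in review text
    cross_site_consistency: float   # 0..1, rating agreement across platforms
    platform_warning: bool          # True if the platform flags the profile

def layered_trust_score(s: ReviewSignals) -> float:
    """Blend weak signals into one score in [0, 1]. Weights are arbitrary."""
    score = (0.35 * s.verified_purchase_ratio
             + 0.25 * s.steady_volume
             + 0.20 * s.avg_specificity
             + 0.20 * s.cross_site_consistency)
    return score * (0.5 if s.platform_warning else 1.0)  # a warning halves trust

listing = ReviewSignals(0.8, 0.9, 0.6, 0.7, platform_warning=False)
print(f"{layered_trust_score(listing):.2f}")  # roughly 0.77 with these weights
```

The design choice worth noting is the multiplicative gate at the end: a platform warning label doesn’t merely subtract points, it discounts every other signal, which mirrors how a single red flag colors a reader’s whole impression of a listing.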

For cannabis shoppers specifically—where brand discovery happens on Google, Yelp, marketplace menus, and social—this recalibration is healthy. It nudges buyers to read the narrative, not just the number; to weigh detailed, time-stamped experiences; and to cross-reference platform-level safeguards and regulatory context. Stronger rules (FTC/CMA), combined with platform ML and consumer skepticism, point to a future where trust is earned through layered signals rather than one metric.

Bottom line: technology isn’t making trust vanish; it’s making trust harder to fake. As regulators raise the floor and platforms raise the bar, consumers are learning to ask better questions—and that may be the most trustworthy upgrade of all.