Searching for a company’s customer support number used to feel routine. You’d type it in, scan the results, click a link, and call. Now, that same search might put you directly in contact with a scammer, and Google’s own AI-powered summary might be the one handing over the fake number. That’s the uncomfortable reality playing out right now as fraudsters learn to manipulate AI Overviews, the AI-generated answer boxes that appear at the very top of Google Search results.

This isn’t a theoretical risk. Banks and credit unions have already issued warnings to customers. Early incidents have been documented across social media platforms. Security researchers confirm the method is deliberate, repeatable, and spreading.

What Google AI Overviews actually do

Google’s AI Overviews feature pulls information from multiple web sources and synthesizes it into a single, polished answer displayed at the top of the search results page. The intent is to make search faster and more intuitive: rather than clicking through several links and comparing pages, users get one clean summary right away. The experience is genuinely useful in many situations.

But the design comes with a hidden tradeoff. When information is presented in a confident, well-written paragraph at the top of the page, people trust it more. The usual friction that prompted skepticism gets removed: comparing multiple sources, noticing inconsistencies, second-guessing a result. That’s exactly what bad actors are now counting on.

How scammers are getting fake numbers into AI summaries

The attack method is straightforward. Fraudsters publish fake customer service phone numbers on low-profile websites, embedding them alongside the names of real, well-known companies. When Google’s systems crawl those pages and feed the content into AI Overviews, the fabricated details get absorbed directly into the AI-generated summary.
Once that number appears inside a polished, authoritative-looking response at the top of the page, it gains instant credibility through placement alone. Users who call reach impostors rather than company representatives; those impostors then work to extract payment information, account credentials, or other sensitive personal data.

What makes this particularly dangerous is the interface itself. AI Overviews minimize visible source attribution, so users often can’t tell where the summary’s information came from, let alone question whether it’s accurate.

Why even careful people get caught out

Traditional search results still required a degree of active evaluation. You’d see a list of links, notice which domain they came from, and make a judgment call. That process, however brief, created small moments of skepticism that could catch a suspicious result.

AI Overviews collapse that process. The summary is presented as a finished answer, not a list of options to evaluate. Its confident, neutral tone signals accuracy. And because it sits above traditional results, in the position historically reserved for the most authoritative content, users reasonably assume it has already been vetted.

This isn’t a failure of user intelligence. It’s a mismatch between how the tool feels and how it actually works. People are adapting their behavior to a product designed to feel trustworthy, without fully understanding its limitations.

The problem is already showing up in real cases

This isn’t hypothetical. Reporting from WIRED, The Washington Post, and Digital Trends has identified documented instances of scam support numbers appearing inside AI Overviews. In several cases, people searching for legitimate company contact information were connected to fraudsters who walked them through payment or account verification steps.

The incidents span multiple industries, which makes sense: any company with a customer service function is a potential target.
The more people search for a company’s contact details, the more attractive a target that company becomes for this style of attack. Early reports of these incidents spread on Facebook and Reddit before receiving wider coverage, suggesting that by the time the issue was formally documented, real people had already been affected.

How to protect yourself from AI Overview scams

Staying safe isn’t complicated, but it does require a small shift in how you approach AI-generated search results. Here are habits worth building:

- Never use a phone number sourced from an AI Overview alone. Always verify it against the company’s official website by navigating directly to that site, not by clicking a search result.
- Be especially cautious with customer service searches. These are the queries most frequently targeted because they carry urgency. People searching for support are often already stressed and less likely to pause and verify.
- Check the sources behind the summary. AI Overviews typically include a small citation. If you can see where the information came from, visit that source and confirm the details before acting on them.
- Treat unsolicited requests for payment or credentials as red flags. Legitimate customer service representatives will not ask for payment card numbers or account passwords to verify your identity.
- Scroll past the AI Overview. The traditional search results below it are often a more reliable starting point for finding verified contact information.

These steps add maybe thirty seconds to your search. For a situation involving your financial or account information, that’s time well spent.

What this means for the broader security picture

The AI Overview scam fits a well-established pattern in cybersecurity. Attackers don’t break systems; they exploit the trust that systems are designed to create. Phishing works because email looks official. Fake login pages work because they mimic real ones. This works because AI-generated summaries feel authoritative.
What’s new here is the scale and the speed. Seeding fake content across low-profile websites is cheap and fast, and Google’s crawlers are thorough. Once a scam number makes it into an AI Overview, it’s presented to every person who runs that search, potentially thousands of people, with none of the friction that might prompt them to question it.

Google has acknowledged issues with AI Overviews in the past and will likely respond here too. But that response takes time, and the scams are already running. In the meantime, the most reliable protection is user awareness: knowing that the polished paragraph at the top of your search results is not always right, and that verifying information independently is still a worthwhile habit.

AI tools are getting better at assembling and presenting information. So are the people trying to abuse them. The most sensible response isn’t to avoid these tools entirely; it’s to stay clear-eyed about what they are: fast, useful, and imperfect. A quick confirmation check before you dial a customer service number is a small price for a lot of protection.