Last month, Google quietly pulled a single thread, and the internet’s tapestry for AI began to unravel. The company discontinued the num=100 search parameter, an obscure URL trick that let you see 100 search results on a single page. On the surface, it sounds like a minor technical cleanup. In reality, it was a calculated move that effectively erased 90 percent of the indexed web from the view of most AI systems.
The immediate fallout was staggering. According to analysis from Search Engine Land, about 88 percent of websites saw a drop in impressions overnight. Sites that ranked in positions 11 to 100, previously visible to specialized tools and crawlers, simply vanished from the data reports. This wasn’t just an SEO story; it was an AI supply chain crisis. The raw material feeding the world’s most sophisticated language models just got dramatically scarcer.
The Mechanics of a Data Heist
For years, the num=100 parameter was a cornerstone of the web’s data infrastructure. It allowed tools like Semrush and SISTRIX to efficiently pull a comprehensive snapshot of the search landscape with a single query. Instead of making ten separate requests to see the top 100 results, they could get it all in one go. This was efficient, cheap, and formed the backbone of how many AI systems and analytics platforms understood the web.
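From a crawler’s point of view, the change is easy to sketch. A minimal illustration follows; `num` and `start` are the actual Google search URL parameters involved, but the query string itself is a made-up example, not a real crawl:

```python
from urllib.parse import urlencode

BASE = "https://www.google.com/search"
query = "example query"  # hypothetical search term

# Before: the num=100 parameter returned up to 100 results in one page.
single_request = f"{BASE}?{urlencode({'q': query, 'num': 100})}"

# After: results are hard-capped at 10 per page, so covering the same
# top 100 takes ten paginated requests via the `start` offset.
paginated_requests = [
    f"{BASE}?{urlencode({'q': query, 'start': offset})}"
    for offset in range(0, 100, 10)
]

print(single_request)
print(len(paginated_requests))  # 10
```

One URL becomes ten, for every keyword, on every crawl cycle, which is where the economics of the next section come from.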
On September 14, 2025, Google slammed that door shut. The change was implemented without prior notice, hard-capping results at 10 per page. Now, to gather the same amount of data, providers must execute ten times as many queries, multiplying operational complexity and costs roughly tenfold. In practice, the change mostly created paperwork for crawlers and made deep web analysis prohibitively expensive for smaller players. SISTRIX, a major SEO analytics firm, was forced to discontinue its desktop SERP data updates entirely as a direct result of the change.
A common retort is that this is just a minor inconvenience, a simple matter of pagination. Tell the dev to fix the code, right? This view misses the strategic forest for the technical trees. The point isn’t that the data is now inaccessible; it’s that accessing it is now an order of magnitude more expensive and resource-intensive. Google didn’t build a wall; it built a toll booth with an astronomically high fee.
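The toll-booth arithmetic can be sketched directly. The per-request cost and keyword count below are illustrative placeholders, not real prices or figures from any provider; the point is the tenfold multiplier:

```python
def requests_needed(total_results: int, results_per_page: int) -> int:
    """HTTP requests required to cover `total_results` results."""
    return -(-total_results // results_per_page)  # ceiling division

# Covering the top 100 results for one keyword:
before = requests_needed(100, 100)  # num=100 era: 1 request
after = requests_needed(100, 10)    # hard cap of 10 per page: 10 requests

# Hypothetical cost model: flat cost per SERP fetch, purely illustrative.
COST_PER_REQUEST = 0.002   # dollars per request (made-up number)
KEYWORDS_TRACKED = 50_000  # a mid-sized rank tracker's portfolio (made-up)

daily_cost_before = before * KEYWORDS_TRACKED * COST_PER_REQUEST
daily_cost_after = after * KEYWORDS_TRACKED * COST_PER_REQUEST

print(before, after)                        # 1 10
print(daily_cost_before, daily_cost_after)  # same crawl, 10x the bill
```

Whatever the real per-request cost is, it scales linearly with request count, so every line item in a crawl budget is now ten times larger.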
The Real Target: AI’s Information Diet
This move wasn’t really about SEO tools. They were just collateral damage. The primary target was the rapidly advancing AI industry. Most AI systems, from OpenAI’s GPT series and Anthropic’s Claude to answer engines like Perplexity, rely directly or indirectly on Google’s indexed results to feed their retrieval systems and crawlers. By cutting off the long tail of results, Google just reduced what these systems can see by roughly 90 percent.
The web just got shallower not only for humans but for AI as well. This has profound implications for the quality and breadth of AI training data. Models trained on this newly curated, truncated view of the internet will inherently have a narrower understanding of the world. Niche topics, diverse perspectives, and the “long tail” of human knowledge that lives beyond page one are now at risk of being systematically underrepresented in future AI systems.
The timing is deeply suspicious. As SISTRIX founder Johannes Beus pointed out, “ChatGPT has accelerated the dynamics of search massively, political proceedings increase the pressure additionally. That Google makes data collection more difficult right now is unlikely to be a coincidence.” This is a defensive maneuver in the AI wars, a strategic throttling of the data supply chain for competitors.
Welcome to the Algorithmic Gated Community
We’re witnessing a fundamental shift in the architecture of the internet. The era of a chaotic, open, and somewhat-democratic web is giving way to a new era of algorithmic gated communities. Google is no longer just a search engine; it’s the sole gatekeeper of its own walled garden. By making it harder for external AI to access the depth of its index, the company is ensuring its own AI-powered products, like AI Overviews and the forthcoming AI Mode, have a unique and defensible data advantage.
This creates a dangerous feedback loop. Google’s AI will be trained on the most comprehensive data set, while competitors are left with the scraps. This isn’t just a competitive issue; it’s an issue of bias and perspective. When a single corporation controls the primary data source for artificial intelligence, it gains an unprecedented ability to shape what AI knows and, by extension, how the world thinks.
For startups and independent creators, the message is brutal: visibility is now a luxury good. Even if you build a brilliant product or create valuable content, discoverability is harder than ever. Organic discovery, already a myth for many, is now an even more distant dream. If people cannot find you, they will never get to evaluate you.
Google didn’t just tweak a search setting. It reshaped how information flows online and how AI learns from it. They quietly made it harder for anyone else to build a brain as smart as theirs. Welcome to the new era of algorithmic visibility, where the most important information is the information Google decides you’re allowed to see.



