The statistics behind Google’s relentless pursuit of optimization are staggering: annually, over 800,000 experiments and quality tests are conducted to ensure that search results remain relevant and aligned with user expectations. Of these numerous tests, approximately 5,000 improvements are made to the search engine results pages (SERPs) each year, a testament to Google’s dedication to evolution and user satisfaction.
Despite emerging competitors like TikTok, Google’s dominance in the search engine market is undisputed, holding a massive 92 percent market share for all search queries. This figure is not just a number; it represents the immense quantity of data Google collects and analyzes to enhance user interactions on the SERP. Google is more than a search engine; it’s a dynamic platform constantly revolutionizing the way information is presented and accessed.
This relentless innovation is what makes Google’s SERP so valuable to analyze for insights that can improve your organic search strategy. Instead of producing content aimlessly, let’s discuss strategies for analyzing search results, ranked pages, and their attributes to extract insights for your SEO strategy – all with the help of machine learning models!
Identify your search competitors and categorize them based on intent with GPT-4
In the process of competitor analysis, it’s crucial to begin tagging the sites we observe or consider as our competitors, based on the type of site we are competing against. For instance, sites like Wikipedia and Investopedia are primarily wiki-based and have an informational intent. They don’t aim to sell a product but serve as resources for explanatory content. Other examples include educational platforms like Coursera and Udemy, which offer courses and learning opportunities. There are also news websites, review platforms like G2 or TechTarget, and various technology or business competitors. This categorization is important as it paints a clear picture of what is ranking for the topics and keywords being analyzed.
This approach aids in understanding where the opportunities lie and how Google positions different types of websites. With ongoing updates to Google’s search algorithm, a significant factor in Google’s assessment of topical authority is how consistently a website focuses on its primary content or “stays in its lane.”
This context emphasizes why it’s important to analyze SERPs by classifying competitors into their intent category, using a classification algorithm or even building a custom taxonomy of websites and their intent labels. This can help you identify the real competition in the SERP, as some positions might be reserved for certain ‘big players’ in their niches – Google still wants to surface a diverse set of results, particularly for ambiguous queries.
Here’s how to get started:
1. Tagging Websites Based on Intent: In your competitor analysis, it’s essential to tag websites based on their intent categories: informational, commercial (or transactional), or navigational. This provides a holistic view of intent.
2. Understanding Search Intent Analysis: It’s important to note that while these categories are comprehensive, they are not exclusive. Research indicates that about 75% of queries fit a single intent category, while the remaining 25% carry mixed intent.
These initial steps will allow you to plot your ranked websites in buckets, showing which types of websites are given more visibility for your target keywords and at what average position.
3. Calculate Share of Search: Additionally, calculate the share of search per domain, per site category, or per intent category – that is, the proportion of the analyzed SERP results each one captures. This is a straightforward calculation and an integral part of understanding the competitive landscape.
4. Provide custom classification categories with GPT-4: To enhance our competitor analysis, we can use GPT-4 to classify site categories and analyze the search intent of titles and meta descriptions. Special thanks to Danny Richman for developing a useful script that allows for custom categories.
Provide categories for website domains – such as technology, news, reviews, wiki, courses, learning, social media, or others – based on their published content. By providing sample classifications, we can generate accurate labels for these websites.
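As a minimal sketch of this classification step (the category list, few-shot examples, and helper names below are illustrative assumptions, not Danny Richman’s script), the prompt-building and label-parsing logic could look like this:

```python
# Sketch of GPT-4 site-category classification with few-shot examples.
# Categories, sample sites, and function names are illustrative.
CATEGORIES = ["technology", "news", "reviews", "wiki",
              "courses", "social media", "other"]

FEW_SHOT = [
    ("wikipedia.org", "wiki"),
    ("coursera.org", "courses"),
    ("g2.com", "reviews"),
]

def build_prompt(domain: str) -> str:
    """Assemble a few-shot classification prompt for one domain."""
    lines = [f"Classify the website into one of: {', '.join(CATEGORIES)}."]
    for site, label in FEW_SHOT:
        lines.append(f"Website: {site}\nCategory: {label}")
    lines.append(f"Website: {domain}\nCategory:")
    return "\n".join(lines)

def parse_label(reply: str) -> str:
    """Normalize the model's reply to a known category, else 'other'."""
    label = reply.strip().lower()
    return label if label in CATEGORIES else "other"

# To classify for real, send build_prompt(domain) to the OpenAI chat
# completions endpoint (that call needs an API key, so it is omitted
# here) and pass the reply text through parse_label().
```

The few-shot examples anchor the model so it returns one of your labels rather than free-form text, and `parse_label` guards against replies that drift outside the taxonomy.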
With this data, we can then move on to more advanced visualizations, gaining a better understanding of the top categories in terms of the domains within them and the keywords these domains rank for. This approach significantly enriches our SEO analysis and strategy development.
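The bucketing, average-position, and share-of-search steps above can be sketched with pandas. The DataFrame, domains, and column names here are illustrative assumptions, not the author’s template:

```python
import pandas as pd

# Hypothetical SERP export: one row per (keyword, ranked result).
serp = pd.DataFrame({
    "keyword":  ["roi", "roi", "roi", "cac", "cac", "cac"],
    "domain":   ["investopedia.com", "wikipedia.org", "example.com",
                 "investopedia.com", "example.com", "g2.com"],
    "category": ["wiki", "wiki", "business", "wiki", "business", "reviews"],
    "position": [1, 2, 5, 1, 3, 4],
})

# Average position per site category: which buckets get the most visibility?
avg_position = serp.groupby("category")["position"].mean().round(2)

# Share of search per category: the percentage of all analyzed ranked
# results that each bucket captures.
share_of_search = (serp["category"].value_counts(normalize=True) * 100).round(1)

print(avg_position)
print(share_of_search)
```

The same two lines work per domain or per intent category by swapping the grouping column, which gives you the visualization-ready table described above.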
Identify prominent entities from ranking web results with Google’s Natural Language API
Entity extraction using Google’s Natural Language API is an underutilized but highly effective tool in SEO. This process can be applied in SERP analysis to analyze titles and meta descriptions, and it’s straightforward to set up.
Essentially, this analysis surfaces the most prominent entities mentioned across the titles and meta descriptions of the ranked results, which can indicate a content direction for your strategy. Of course, for a more detailed analysis, you can scrape the content from the pages and then analyze the content itself for entities.
If you want to know more ways to incorporate entity analysis in SEO, check out my blog post on the topic.
Here’s how to get started:
Set up the prerequisites: You’ll need an API key, which can be obtained by following Google’s setup guide. Then use the provided template for entity extraction; although not originally created by me, it is based on code available in Google’s documentation for the Natural Language API.
Input Data: Enter URLs, ID columns, and meta descriptions into the tool. The entities and sentiments are then extracted automatically.
Understanding the Output:
- Entity: Recognized thing or concept based on Google’s database and model training.
- Salience: The importance or relevance of the entity in the context of the snippet.
- Sentiment Score and Magnitude: The score indicates whether the expressed emotion is negative or positive (from -1.0 to 1.0), while the magnitude indicates its overall strength.
- Number of Mentions: How often the entity appears in the snippet.
Visualization: Analyze the most common entities mentioned in all meta descriptions for the SERPs collected. This requires careful keyword and topic research beforehand to ensure relevant and targeted analysis.
Data Integration: Blend this data with other sources, like the data from SEO bulk SERP analysis, to get a comprehensive view.
This approach allows for a deeper understanding of the topics and keywords in your analysis, enabling more precise SEO strategies. It can also help you uncover entities that are not captured by popular keyword research tools like Semrush.
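As a sketch of the aggregation you might run across the Natural Language API’s documented `analyzeEntities` JSON responses (the helper name and sample data are illustrative; the actual API request, which needs your key, is not shown):

```python
from collections import defaultdict

def aggregate_entities(responses):
    """Sum salience and mention counts per entity across many
    analyzeEntities JSON responses (one response per snippet)."""
    totals = defaultdict(lambda: {"salience": 0.0, "mentions": 0})
    for resp in responses:
        for ent in resp.get("entities", []):
            name = ent["name"].lower()  # fold case so "ROI" == "roi"
            totals[name]["salience"] += ent.get("salience", 0.0)
            totals[name]["mentions"] += len(ent.get("mentions", []))
    # Rank by cumulative salience: the most prominent entities in the SERP.
    return sorted(totals.items(), key=lambda kv: kv[1]["salience"], reverse=True)

# Illustrative responses following the API's documented shape.
sample = [
    {"entities": [{"name": "ROI", "salience": 0.6, "mentions": [{}, {}]},
                  {"name": "marketing", "salience": 0.4, "mentions": [{}]}]},
    {"entities": [{"name": "roi", "salience": 0.5, "mentions": [{}]}]},
]
ranked = aggregate_entities(sample)
print(ranked[0])  # the top entity by cumulative salience
```

Ranking by cumulative salience rather than raw mention count keeps a frequently named but contextually unimportant entity from dominating the output.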
Analyze the sentiment of the SERP’s results to uncover ranking patterns
Sentiment analysis using Google’s Natural Language API is a valuable tool for brand reputation management. It allows you to analyze the sentiment of titles and meta descriptions for a set of keywords, providing insights into how your brand is perceived. See the detailed blog post for practical ways to implement sentiment analysis in your digital marketing and SEO strategy.
Instructions:
- Setup: Obtain your API key following Google’s setup guide and use the Google Sheets template for sentiment analysis. The template uses an Apps Script to analyze feedback and provide sentiment scores and tags for titles and meta descriptions.
- Input Data: Enter your collected set of keywords into the template. Adjust the sentiment tags based on your understanding of sentiment magnitude if necessary.
Analysis and Visualization:
- Analyze sentiment for titles and meta descriptions.
- Visualize the data using simple graphs, plotting titles based on their sentiment score and rank.
- The size of the graph’s bubbles represents the magnitude of sentiment expressed.
Filter for Brand Insights:
- Filter data for negative or positive sentiments to identify opponents or advocates of your brand in SERPs.
- Analyze the frequency of publishing and the general sentiment towards your organization from different domains.
Keyword and Entity Focus:
- Filter for specific important keywords and entities from your dataset.
- Analyze the predominant sentiment of titles associated with these keywords or entities.
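The sentiment tagging used throughout these steps can be sketched as a small helper that maps the API’s score/magnitude pair to a label. The thresholds below are illustrative assumptions you should tune to your own data:

```python
def sentiment_tag(score: float, magnitude: float,
                  score_threshold: float = 0.25,
                  mixed_magnitude: float = 3.0) -> str:
    """Map a Natural Language API score/magnitude pair to a tag.
    score runs from -1.0 (negative) to 1.0 (positive); magnitude is
    the overall emotional strength. Thresholds here are illustrative."""
    if score >= score_threshold:
        return "positive"
    if score <= -score_threshold:
        return "negative"
    # A near-zero score with high magnitude usually means mixed sentiment
    # (strong positives and negatives cancelling out), not a neutral text.
    return "mixed" if magnitude >= mixed_magnitude else "neutral"

print(sentiment_tag(0.8, 2.1))
```

Separating “mixed” from “neutral” matters for brand monitoring: a forum thread with heated arguments on both sides scores near zero but is anything but neutral.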
When delving into content strategy, particularly in the face of prevalent negative sentiment (often echoed in forum discussions), it becomes imperative to craft content that addresses user concerns and corrects misconceptions. This approach isn’t just about countering negativity; it’s about engaging in a meaningful dialogue with your audience. Digging deeper, analyzing discussions that revolve around pricing or product usage can provide invaluable insights, guiding necessary business conversations and strategic adjustments. But don’t stop at just titles and meta descriptions.
To gain a more rounded and comprehensive understanding, consider a page analysis approach. This involves scraping content from specifically selected pages, be they positively or negatively skewed, to fully grasp the narrative surrounding your brand or product. This holistic view can be instrumental in shaping a robust and responsive content strategy.
This type of analysis, while optional, can greatly influence your content strategy by providing a deeper understanding of public sentiment toward your brand, enabling more targeted and effective communication and business strategies.
Analyze the language structures using ngrams
In exploring language use analysis, consider the power of ngrams – a fuzzy matching method that can reveal much about keyword combinations and their effectiveness. Even if coding isn’t your forte, tools like GPT-4 can be immensely helpful. For instance, using a prompt in OpenAI’s playground, you can create an Apps Script formula in JavaScript to identify the main ngrams (groups of words) in a text. The process might require a few tries to refine the prompt, but the resulting function can precisely perform the intended analysis.
The key here is to analyze bigrams (pairs of words) or trigrams (triplets) in titles or meta descriptions, uncovering the most prevalent keyword combinations. This analysis, especially when refined to exclude common stop words, can yield intriguing results.
For example, in my analysis of a specific set of keywords, the expression “what is” emerged as a frequent occurrence across titles. This insight can be pivotal in understanding the type of content Google favors for your chosen keywords. Extend this analysis to focus on specific aspects like commercial intent or product-related keywords, and you’ll uncover the ngrams that dominate in these contexts.
Such analysis, simple yet profound, should be an integral part of your SERP analysis and can help you formulate the title and meta description structures to generate as part of your content strategy.
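For readers who prefer Python over the Apps Script route described above, here is a minimal sketch of the n-gram counting. The stop-word list is deliberately tiny and illustrative; a real analysis would use a fuller list:

```python
from collections import Counter
import re

# Minimal illustrative stop-word list; extend for real analyses.
STOP_WORDS = {"a", "an", "and", "the", "of", "to", "for", "in", "on", "with"}

def top_ngrams(titles, n=2, k=5):
    """Count the most frequent n-grams across a list of titles,
    skipping n-grams made up entirely of stop words."""
    counts = Counter()
    for title in titles:
        tokens = re.findall(r"[a-z0-9']+", title.lower())
        for i in range(len(tokens) - n + 1):
            gram = tuple(tokens[i:i + n])
            if all(t in STOP_WORDS for t in gram):
                continue
            counts[gram] += 1
    return counts.most_common(k)

titles = ["What Is ROI? Definition and Formula",
          "What Is CAC and Why It Matters",
          "ROI Explained: What Is It"]
print(top_ngrams(titles))
```

Run on a real set of ranked titles, the top bigram (here “what is”) reveals the title structures Google currently rewards for your keywords; set `n=3` for trigrams.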
Key Takeaways
I hope you feel fired up and inspired to try machine learning yourself. One caveat worth repeating: correlation does not equal causation. Just because you see a relationship doesn’t mean it’s the reason that content ranks where it does. Your SERP analysis should help guide your overall strategy, but it does not replace one. In other words, the analysis does not replace keyword research or content strategy.
Here is a summary of what you can utilize machine learning for in your SERP analysis:
- Build custom search intent categories with a classifier or GPT-4
- Understand the search landscape, based on topics and keyword clusters, not just keywords
- Extract the prominent entities with Google’s Natural Language API
- Analyze the sentiment of ranked search results with Google’s Natural Language API
- Analyze language structures with fuzzy matching n-gram approach to identify pattern structures for titles and meta descriptions
Don’t forget to get the Looker Studio dashboard for ML-Enabled Search Engine Results Page (SERP) Analysis ✨