Search Engine Optimization: On Page, Off Page, & Technical SEO
What it is, how to implement it, and how to audit and improve a website's ranking
SEO Techniques
- On-page SEO
- Off-page SEO
- Technical SEO
- Local SEO
When a consumer uses a search engine to search for a topic (a keyword or keyphrase), the search engine takes the phrase and scans the websites in its knowledge base looking for the sites that are most relevant to that particular phrase. The algorithm doesn't pick sites randomly; it looks at many factors, including a website's crawlability, the pages that are indexed on each site, and the popularity of the site, which is determined by how many links point to that site or page. The algorithm also uses several HTML elements on web pages that include words and phrases semantically relevant to the consumer's search query.
On-page SEO
On-page SEO refers to the on-page HTML elements that search engines, and visitors, use to determine the page topic. Below is a list of on-page elements followed by a description of each.
On-Page SEO Elements
- URL
- Title Tag
- Meta Description
- Heading Tags
- Body Tags
- Keyword density
- Keyword variation – latent semantic indexing (LSI)
- Alt Text
- Image File name
- Anchor text (internal to the web page)
- Outbound links on page – to relevant sources
- Schema & Structured Data Markup
URL
The first thing you will do before you start a website is purchase a domain name. The domain name is the "example.com" that is typed into the browser window when visiting a site directly. It is the most important element of a website. When purchasing a domain name, you should abide by the following guidelines:
- Avoid hyphens
- Avoid uncommon top-level domains (.info, .cc, .la)
- Limit length to approximately 15 characters
- Be cautious of permutations
- Always include the brand name
- Do not use too many keywords
(Dover & Dafforn, 2011)
When crafting a post or page, it is okay to include the target phrase or keyword in the URL. It will typically look like this:
www.example.com/keyword-topic/
Including the keyword or topic in the post or page URL will signal to the search engine what the page is about (Dover et al., 2011; Enge, Spencer, & Stricchiola, 2015; Fishkin & Hogenhaven, 2013).
Title Tag
The title tag is the information shown on the tab at the top of each browser window. It is also used as the headline in the search engine results pages (SERPs). Placing the page topic, keywords, or key phrases in the title tag acts as another signal to the search engine about what the page is about. The general guideline is that a title tag should be no longer than 65 characters.
The HTML tag for the title looks like this:
<title>Title goes here</title>
The title tag will be nested within the <head></head> tags.
Meta Description
The Meta description is not used as a ranking factor by search engines, but it serves an important purpose: the search engine will bold the keyword or key phrase the consumer searched for (Fishkin et al., 2013). The Meta description is the textual description of a web page, and it also serves to boost the click-thru rate (CTR) in the SERP (Dover et al., 2011). The Meta description should entice the consumer to click a search result because it describes what they were looking for.
The HTML tag for the Meta description looks like this:
<meta name="description" content="enter the description in this space" />
The Meta description should be nested within the <head></head> tags.
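Putting the two tags together, here is a minimal sketch of how the title and Meta description nest inside the <head> section (the text values are illustrative):

<head>
<title>Keyword Topic | Brand Name</title>
<meta name="description" content="A short, enticing summary of the page that motivates the click." />
</head>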
Heading Tags
The heading tags vary by number, with one being the most important and each higher number progressively less important. The <h1> tag is considered the headline tag. It is the title of the post or page, and if you use a theme that automatically fills the <h1> with the post/page title, you should either adjust the code or use a plugin to control what the <h1> is. The best practice is to have only one <h1> on each page. Although not as prominent as it once was, search engines still use this tag as a ranking factor (Dean, 2018). The <h2> through <h4> (and beyond) tags carry no real weight as a ranking factor, but they do improve the quality of the reader's experience, so they should be used as necessary (Fishkin et al., 2013).
The heading tags look like this:
<h1>Enter your heading here</h1>
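For illustration, a minimal sketch of a sensible heading hierarchy on a single page — one <h1>, with <h2> and <h3> tags structuring the sections beneath it (the heading text is illustrative):

<h1>Main Page Headline</h1>
<h2>First Section Heading</h2>
<h3>Subsection Heading</h3>
<h2>Second Section Heading</h2>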
Body, Keyword Density, and Keyword Variation
In the old days, using keywords to rank a page was easy. SEOs would fill the Meta keywords tag with the keywords they wanted to rank for. They would also include them in all headings, titles, and image alts, and many times throughout the body copy (keyword stuffing). That is no longer the case. There are many differing opinions with regard to keyword density, the ratio of keywords to total words in a document. In the old days 3%–5% was a general rule, but these days the focus must be on the reader (Larson & Draper, 2018). Many of the industry's top SEOs discuss strategies like topical authority, pillar pages, latent semantic indexing (LSI), and long-tail keywords based on the intent of the search. Your keyword strategy should be based on the AIDA model, the consumer purchase decision process, and the variations in intent and method that will be used to search. Will the consumer be using a search engine on their phone? Desktop? Locally? Will they use Alexa or Google Home to conduct a voice-activated search? Will they use Siri? Understanding the nature of the search intent will deliver the topics and tools needed to develop a proper strategy. For now, when discussing on-page SEO for keywords, think topics, think latent semantics, and think broadly about the reader.
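As a quick illustrative calculation: keyword density is simply the number of keyword occurrences divided by the total word count, so under the old 3%–5% rule a 1,000-word post would repeat a phrase 30 to 50 times (30 / 1,000 = 3%). Repetition at that rate reads as stuffing today, which is exactly why the emphasis has shifted to topics and semantic variation.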
Images
On-page SEO for images is simple. Use some form of descriptor of the image, as it relates to your key phrase targeting strategy, in the filename, and include a similar descriptor in the alt text. Several well-known SEOs support this strategy and make strong inferences that the alt text is correlated with improved rankings (Dover et al., 2011; Enge et al., 2015; Fishkin et al., 2013). The reason for including the topics and key phrases in the filename and alt text is that the search engine crawler cannot see images. The filename and, more importantly, the alt text describe to the search engine crawler what the image is. The alt text also informs the viewer what the image is when the image is slow to load.
The HTML tag for images is:
<img src="image-name.jpg" alt="keyword or phrase">
Internal Linking & Outbound Links
Information architecture will be discussed later in this post, but the premise is that the web crawler can only travel to pages on a website if they are linked together. A good analogy is provided courtesy of Dover et al. (2011), who compare website information architecture (IA) to an ant hill. Ant hills have many different pathways, and they expand the deeper the hill travels underground. Good IA is similar. The more ways that pages are linked together on the site, the easier it will be for a visitor and a web crawler to reach every page on the site in few clicks. This means more pages will be indexed, and more content can be found. When crafting any content, the key is to use words and phrases within the content that can link to other relevant pages on the website. The other pages on the site should also link back to the new pages that are created. While it's suggested not to over-optimize the anchor text of the link, the anchor text should be topic relevant. Another item to remember is that internal links should always use syntax that allows the link to be followed, while with outbound links you should set the rel attribute within the tag to "nofollow". When nofollow is not explicitly stated, the link is followed by default.
The HTML tag for links is:
<a href="http://www.exampleURL.com/">this is the anchor text or the hyperlink text</a>
To make the link no follow:
<a href="http://www.exampleURL.com/" rel="nofollow">this is the anchor text or the hyperlink text</a>
Schema & Structured Data Markup
Schema and structured data markup have been around for a few years, and they remain a competitive advantage for improving a website's ranking and CTR relative to other sites. In my experience, many sites are not using them, yet they are an additional layer of SEO that can be used to inform the web crawlers about the specifics of your site. There are many ways to implement schema and structured data. They include:
- HTML coding
- Google Search Console – Structured Data Highlighter
- Plugins
Schema and structured data are the reason you see rich snippets in the SERPs. There are many variations, and you can learn more at schema.org, but for now, here are some of the categories that you can use as an SEO to improve a website's rank and CTR (a minimal markup sketch follows the list).
- Creative works
- Events
- Organization – Business
- People
- Place, local business, restaurant
- Offers / promotions
- Reviews
- Action
(Schema.org, n.d.)
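To make this concrete, below is a minimal sketch of structured data for a local business using the JSON-LD format, placed in a <script> tag on the page (all of the business details are illustrative):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Coffee Shop",
  "url": "https://www.example.com/",
  "telephone": "+1-555-555-0123",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main St",
    "addressLocality": "Grand Rapids",
    "addressRegion": "MI",
    "postalCode": "49503"
  }
}
</script>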
Final Words about On-Page SEO
The web page must be designed so that it is appealing, easy to navigate, and provides good information that adds value to the reader. With that said, understanding how the web crawlers view the page provides insight into why all of the items discussed above are important. Web crawlers cannot parse JavaScript links, cannot view images, and cannot tell if a web page has a beautiful design. The web crawler can read CSS code to determine colors, but it requires the HTML elements discussed above to describe the page topic and what is important on the page. You can compare what the user sees with what a web crawler sees by viewing the source code at any time with "Ctrl + U".
Viewing the source code of the Davenport University homepage, for example, reveals the irony that DU does not have a Meta description. Just by viewing the source code, an SEO can identify areas where improvements can be made to both the optimization of the page and the CTR in the SERPs. It's also worth noting the importance of these elements: if the SERP result were an ad, the HTML title would be the headline, and the Meta description would be the sales copy that motivates consumers to click and differentiates the listing from the other SERP offerings.
On-Page SEO: Keyword Research & Targeting
Selecting a target keyword/keyphrase should be based on four criteria:
- Relevance
- Traffic volume
- Degree of competition
- The webpage’s current ranking
In returning to last week’s discussion surrounding the AIDA model, the consumer decision process, and the inbound methodology, digital marketers need to consider what stage a consumer is in when they are using the search engines. It’s easy to infer they are in the awareness, or problem recognition/information search stage. The intent of the search query can confirm these inferences. The digital marketer needs to place themselves in the shoes of the consumer to understand what has activated a need, and what types of phrases, words, and topics a consumer will query in the search engine to start looking.
There are many tools that can be used to identify keywords and phrases. The digital marketer starts by entering a seed keyword, or topic in any tool, and the tool will generate tons of results. Some tools include:
- Google Suggestions
- Google Trends
- Quora
- Keyword planner
- Google search console
- Bing webmaster tools
- Keywords Everywhere browser extension
- Keyword Shitter
- Ubersuggest
- Keyword finder
- SEMRush
It's important to identify the search volume (traffic) that each keyword, phrase, or topic generates. It's also important to identify the difficulty associated with each. Some phrases and topics will be ultra-competitive and can cost thousands of dollars and take years to rank for. Longer or more localized phrases are often easier. A good workflow is as follows:
- Identify the topic of the page
- Identify the searcher's intent – stage in AIDA and the decision process
- Enter one or a few keywords into any of the tools above
- Filter by search volume and competition
- Conduct a few searches of the terms to view the SERP
- Select a focus topic – range of phrases
- Optimize the page
- Monitor rankings and make adjustments
On-Page SEO: Web page Relevance
How relevant a web page is to a user's search query is one of the factors that determines how the page ranks in the search engine results pages (SERPs) (Larson & Draper, 2018). This again will be determined by the choice of topic and the level of mastery of on-page SEO. Longer posts have been shown to rank better, as has fresh content (Enge, Spencer, & Stricchiola, 2015).
On-Page SEO: Webpage Quality
Many top SEOs suggest a number of criteria that act as indicators of quality (Dover et al., 2011; Enge et al., 2015; Fishkin et al., 2013; Larson et al., 2018). These include:
- Click-thru Rate (CTR) from SERP
- Bounce Rate
- Page Speed
- Structured Data – Schema Markup
- Original content
- Freshness
- Word count
- Content quality
- Mobile Optimized
- Duplicate Content
Local SEO
Local SEO is a niche type of SEO that is important for local businesses. Although the ranking factors for a regular web page still apply, the following are also considered important when trying to rank a site for a localized topic (Moz, 2017).
- Name, Address, Phone (of business – this must be consistent)
- Reviews
- Structured & unstructured citations – directory and NAP listings with and without a URL
- Local listings (Google My Business, Foursquare, Yelp, Yext, etc.)
Technical SEO
Technical SEO is a field within SEO that is often missed. There are several technical methods that can be employed to ensure a website is allowing traffic, is being indexed, and is not being penalized for duplicate content or other server-side errors. Below is a list of technical SEO factors:
- Information Architecture
- Security (HTTPS vs. HTTP)
- Page load speed
- Sitemap
- Robots.txt / Meta Robots (page level directive)
- Canonicalization
- 301 redirects
Information Architecture (IA)
Earlier we discussed IA and compared it to the tunnels in an ant hill (Dover et al., 2011). Visitors to a site want to find what they are looking for quickly and do not want to click many times to find it. Most consumers will quickly bounce from a site if they cannot find what they are looking for. It is of utmost importance that the digital marketer works with the web development team to create easy-to-use navigation and an IA that links down to category and topical pages, which in turn link to deeper pages. Dover (2011) suggests that it should take consumers no more than three clicks to get to any page on the site. Good IA leads to a better consumer experience and allows the web crawlers to access every page on the site, which ensures all pages can be indexed in the search engines.
(Diagram source: Patel, n.d.)
Security – HTTP vs. HTTPS
Google has recently started to place a "not secure" message in the browser bar for all sites served over HTTP instead of HTTPS. Larson et al. (2018) discuss the differences between the two in chapter two. HTTP stands for hypertext transfer protocol, while HTTPS stands for hypertext transfer protocol secure. HTTPS adds a level of encryption not included on HTTP sites. This is why any reputable site you submit your credit card information to will be HTTPS, and you will see a lock symbol in the browser bar. Websites that switch over to HTTPS will enjoy a ranking benefit, and it's been suggested that Google will eventually penalize sites that are not HTTPS (Teh, 2018). A large percentage of sites on the Internet have not yet switched over. As a digital marketer, now is the time to take advantage of this opportunity before everyone catches up.
Page load speed
Web page load time is a major factor in user experience. As Shaun Anderson (2018) writes on the Hobo blog, as page load time increases, so does page abandonment. Consumers are impatient, and this can lead to a negative impact from the view of the search engine algorithm. In addition, John Mueller (2015) of Google suggests that page speed is a ranking factor, and a slow page load speed will negatively impact rankings. Use Google's PageSpeed Insights tool to test pages and learn what is causing long load times.
Sitemap
A sitemap is exactly what it sounds like: a map of all the pages and posts on a site. Digital marketers should use both Google Search Console and Bing Webmaster Tools to submit an up-to-date sitemap regularly so that the search engines are aware of the current map of the site. This leads to rapid indexing of non-indexed pages and ensures that the search engines are aware of all pages on a site. If a page is not indexed, it will not rank in the search engine for any query.
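For reference, a minimal sketch of what a sitemap.xml file looks like (the URLs and dates are illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2018-10-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/keyword-topic/</loc>
    <lastmod>2018-09-15</lastmod>
  </url>
</urlset>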
Robots.txt and Meta Robots
The inclusion of a robots.txt file is a good indication of how much or how little the developer cared about SEO (Dover et al., 2011). The robots.txt file is used to allow and block crawler access to pages on the site. If the robots.txt file is set to disallow, it can prevent all, or some, pages on the site from being indexed. Using the robots.txt file is important to prevent and allow specific pages from being accessed by the web crawlers. You can view the robots.txt file of any website that has one by appending /robots.txt to the end of the root domain; a sketch of a typical file follows.
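A minimal sketch of a typical robots.txt file, assuming a WordPress site (the paths are illustrative):

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Sitemap: https://www.example.com/sitemap.xml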
There is another way to control which pages are indexed or not indexed, and which links are followed or nofollowed. With the Meta robots directive, digital marketers can exercise control at the page level rather than the site level (robots.txt). The benefit of the page-level directive is that digital marketers can "noindex" a page but still allow its links to pass juice to the pages they link to. This is ideal for some instances of duplicate content, and for controlling the rules for links. Below are the three types of syntax for the Meta robots tag. The Meta robots HTML tag is inserted in the <head> section of the HTML. Many plugins automate this if you decide to use WordPress to host your site/blog.
<meta name="robots" content="noindex, follow">
<meta name="robots" content="index, nofollow">
<meta name="robots" content="noindex, nofollow">
Canonicalization
In many instances, duplicate content is caused by multiple versions of a URL. For example, www.example.com, http://example.com, http://www.example.com/, and http://example.com/ are all different pages to the search engines, even if they are the home page of a site with the same title, meta tags, and headings. This is one of the main causes of duplicate content. When a search query is carried out, which URL gets priority? The answer is that they compete with each other, and Google/Bing will not know which one to serve, so they may not serve any of these pages in the SERP. To avoid this, digital marketers can use canonicalization to tell search engines which URL is the main URL and which URLs to ignore. The canonical tag should be implemented in the <head> section of the HTML on the pages that are duplicate content (Dover et al., 2011).
The HTML tag for canonicalization is:
<link rel="canonical" href="http://example.com/" />
301 Redirects
Sometimes a site will have different types of errors. A common one is the 404 error, which occurs for a number of reasons. One is that a URL is changed: the old URL remains active but returns a 404 because the page is no longer at that address. These errors are a signal to Google that the site may not be as high quality as other sites. Digital marketers can create a custom 404 landing page with a link to the new page, but this is a lot of work. Another method is to redirect the old page to the new page. While there are two types of redirects, 301 and 302, the 301 redirect is the preferred method for SEO. A 302 is used when a page is moved temporarily, and it does not pass link juice. The 301 redirect signals that a page has permanently moved, and it allows link juice to be passed. Adding a 301 redirect is fairly simple: digital marketers can do it themselves, or they can ask their developer to add a simple line of code to the .htaccess file (a sketch follows below). Using WordPress, 301 redirects can be implemented simply with a plugin.

It should be noted that the 301 redirect was, and still is, used by "black hat" SEOs to quickly rank pages. Using the black hat method, SEOs purchase domains and redirect them to their "money site". They then purchase thousands of links and point them at the site that is 301 redirected to the money site. The link juice flows through the 301 to the money site, and the money site achieves a bump in rankings. While this is unethical in terms of proper SEO, it does work. The risk is that the money site will be penalized at some point and de-indexed from the search engine. In most cases this strategy is implemented when the money site can be easily recycled. It is not a recommended practice, but it is one that digital marketers should be aware of in case they outsource SEO to a company and want to audit its practices.
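Assuming an Apache server, the .htaccess line mentioned above can be as simple as the following sketch (the paths are illustrative):

# 301 (permanent) redirect from the old URL to the new one
Redirect 301 /old-page/ https://www.example.com/new-page/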
Off-Page SEO
Off-page SEO consists of the link strategies that are used to improve a site's relevance. As reported by many correlational studies, inbound links from authoritative sites have one of the highest correlations with rankings in the search engines (Moz, 2017; Dean, 2018).
Inbound Links
Inbound links are earned and acquired, and the authority of the site that links to another site is very important. A long time ago, Google used a public metric called PageRank. Each site had a PageRank score, and acquiring links from sites with a higher PageRank correlated with significant ranking boosts (Moogan, 2013). Google probably still uses this metric, but it is no longer public. In response to the absence of PageRank, many of the SEO tool sites used for link analysis and research developed their own metrics. Moz uses page authority (PA) and domain authority (DA) to signify the authority of a page and the domain (Moz, n.d.). Ahrefs uses URL rating (UR) to identify the strength of a page's link profile, domain rating (DR) to signify the strength of the domain, and Ahrefs rank (AR) to rank the strength of a website's backlink profile against all other sites in the world (Soulo, 2018). Majestic SEO uses trust flow (TF) and citation flow (CF) to signify the trust of a site and how influential the site is based on its link profile (Schwartz, 2012).
Anatomy of a Link
(Diagram source: Moogan, 2013)
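In HTML terms, the parts of a link break down like this (a minimal annotated sketch):

<!-- href = the destination URL the crawler travels to; rel = follow behavior; the anchor text is the clickable, topic-relevant wording -->
<a href="https://www.example.com/keyword-topic/" rel="nofollow">anchor text the reader sees</a>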
There are two types of outbound links, which at one time made a huge difference. A while back, "dofollow" links were highly sought after by SEOs, especially from pages with a high link metric (e.g., PageRank). The idea with dofollow links was that web crawlers would crawl them and find the linked-to site, which would increase the authority and ranking of that site. Google began suggesting that sites use the rel="nofollow" syntax in the <a href> tag shown above to prevent web crawlers from following outbound links and to stop the passing of large amounts of authority. This was to combat link spam, among other things. Depending on which source you read, it's debatable these days whether there is a difference between dofollow and nofollow links. From an internal linking perspective, all links should be dofollow. If you allow guest bloggers to post on your site, it's suggested to use nofollow links to prevent link auctions, link sales, and your website being associated with a spammy site. Link sales, auctions, and farms are against Google's terms of service. I can tell you links are still being purchased, and the business is still lucrative. I have been contacted in the past by many large corporations (which will remain nameless) to link to their sites in exchange for monetary compensation. In fact, if you look at some very competitive niches on Google, I'm sure you can uncover some massive sites linking to pages and businesses that might not make sense.
Off-page SEO: Link Building
There are many different strategies for digital marketers to conduct outreach and link building. Of course, creating content that adds value will eventually get linked to, but since inbound links from authoritative sites are highly correlated with rankings, digital marketers need to develop an outreach strategy to stimulate inbound links. Some strategies include:
- Content placement (aka guest blogging)
- Company profile listings
- Open conversation opportunities (blog comments, forums, etc.)
- Editorial mentions
- Directory listings
- Resource lists
- Sponsored links (be careful with Google's Webmaster Guidelines and terms violations)
- Article submissions
- Interviews
- Broken link building
- Building a tool or app
- Purchasing (not recommended, but it is still being done)
- Buy established blogs/sites
- Embeddable photos/images
- Copy competitors profiles
- Customers
- Links from copied images
- Discounts / free products
- Infographics
- Industry roundups
- Interactive infographics
- Link bait
- Link exchanges (aka reciprocal links) (violates Google's terms if they catch you; not recommended)
- Link reclamation
- Live blogging at an event
- Monitoring for brand mentions and requesting a link
- Press releases
- Profile pages for people
- Run a competition
- Volunteering
- Live meetup
- Plugin and theme development
- Tutorials
- Embedded code
(Moogan, 2013; Ward & French, 2013).
One item mentioned above is very clever: generating inbound links from copied images. It is suggested that you always use original images on your site whenever you can. Whether it's an infographic or a photo, you can wrap the image in a line of HTML so that if someone copies the embed code, you automatically get a link back to your site. Here is the HTML syntax:
<a href="your website URL"><img src="filename and location" alt="keyword or phrase" /></a>
SEO in Action
Advanced Search Techniques
Search engines provide SEOs and digital marketers with a number of basic tools for investigating any site: advanced search operators. Below are a few advanced search operators you can use to get more out of search results and conduct an SEO analysis of any site.
-keyword — placing a minus symbol before the keyword will eliminate that keyword from the search results.
“keyword” — placing a keyword in quotes will target the search results for that specific word or phrase.
keyword1 OR keyword2 — placing OR between keywords returns results that match either keyword.
site:example.com — the search results will return pages only for this site (good for checking which pages are indexed).
related:example.com — evaluate how relevant a website's neighborhood is.
info:example.com — learn whether the page/site is indexed. Can also alert you to site issues (it is debatable whether this operator still works).
cache:example.com — Google’s text version of the page.
inurl:keyword — returns pages with the target keyword in the URL.
intitle:keyword — returns pages with the target keyword in the title tag.
(Dover et al., 2011; Enge et al., 2015; Ward et al., 2013)
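These operators can also be combined. For example, the following illustrative queries check which blog pages are indexed, and which indexed pages are missing a target word from their title tag:

site:example.com inurl:blog
site:example.com -intitle:"keyword"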
Learn more about advanced search operators
SEO Site Audit – Webpage Audit
Now that you've learned the basics of SEO, it's time to put your knowledge into action with a page/site audit. Digital marketers can take their analyses deep when conducting site audits. In this post, you will learn some basic audit techniques. I encourage you to conduct further research and learn how to conduct a more in-depth audit, as it is through audits that SEOs find issues and opportunities and improve a website's organic rankings.
Dover (2011) provides the simplest and most comprehensive method for conducting a quick scan of a website and identifying on-page, off-page, and technical SEO issues.
The 1,000 Foot View – SEO Audit
- Enter a search query for a phrase that is important to the site you are investigating. Review the search results and make note of the neighborhood. For example, are the sites mature? What are their PA and DA? Are the search results ultra-competitive? What about ads, and who is buying them? This will determine how difficult it will be for the site to improve its organic ranking.
- Identify any competitors – make a note of them so a competitive analysis can be conducted later
The 100 Foot View – SEO Audit
- Is the site's domain name appropriate? Does the domain meet the guidelines mentioned above?
- Visit the site. What is your impression of the design, graphics, colors, user experience (UX)? Make notes.
- Check for canonicalization errors. Enter the URLs listed below and see whether each redirects to the canonicalized version or not. If it doesn't, this is the duplicate content issue discussed above, and the URL will need to be canonicalized using the rel="canonical" syntax or a 301 redirect will need to be implemented (see the sketch after this list).
- http://www.example.com/
- http://www.example.com/index.html
- http://www.example.com/index.php
- http://example.com/
- http://example.com/index.html
- http://example.com/index.php
- http://example.com
- http://www.example.com
- http://example.com/default.asp (or aspx)
- http://www.example.com/default.asp (or aspx)
- Investigate these URLs with capitalization as well
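One quick way to check each variation, assuming you have the curl command-line tool available, is to request just the response headers; a 301 or 302 status with a Location header shows where (and whether) the URL redirects:

curl -I http://example.com/
curl -I http://www.example.com/index.html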
- Check to see if a robots.txt file exists. Is the robots.txt file blocking any important pages? Refer to the discussion above about how to visit any site’s robots.txt file.
- Check to see whether a sitemap.xml file exists. If it does, review it to get an understanding of the website's IA, and make notes of any changes to the IA that should be made. Note: if changes are made to the site's IA, you will need to implement 301 redirects for any pages that move so you don't accumulate a ton of 404 errors. If a sitemap doesn't exist, create one and submit it to Google via Google Search Console and to Bing through Bing Webmaster Tools.
- Verify whether Google Analytics is installed. If it is not, make a note and get the tracking code implemented on the site.
The 10 Foot View – SEO Audit
- Visit the homepage – Does it link to every page on the site? If not, then there will be indexation issues because web crawlers will not be able to access those pages. Neither will consumers.
- Are there category pages? Make a note. If the site has a lot of pages with similar topics, it might make sense to create a category page for better IA.
- Review the content on the website's pages. Is it written for readers or for search engines? If the latter, make a plan to rewrite the content so it adds value to the reader, or remove it.
- Review URL structure. Does the URL structure of pages meet the guidelines mentioned above?
- Review internal linking. Are pages linking to each other?
- Review outbound links. Does the site link to other sites? Make a note of their relevance and quality.
The 1 Foot View – SEO Audit
- Review on-page factors and make a note of what you find. Use the Ctrl + U keyboard shortcut to view the source code, then use Ctrl + F to search for title, meta, h1, h2, h3…, and alt. Make a note of what you find compared to the best practices listed above.
- Review for duplicate content. Copy a sentence on each page and search for it in quotes in the search engine. How many results come up?
- Count the number of pages and make a note of their URLs. Now use the site: search operator with the website URL in Google. Does the same number of pages show up? Do no pages show up? Do many more pages show up? The answers to these questions will indicate whether pages are being indexed, whether there is duplicate content, and, if nothing shows up, whether the site is being penalized by Google.
Prepare a Report & Action Plan
Now that you have completed a very basic audit, it's time to put your findings in writing and prepare an action plan. Use the best practices and the knowledge you've gained to make improvements.
Link Audit
Conducting a link audit is beneficial for two reasons:
- It allows you to identify the metrics for your site and the number of links pointing to it
- It allows you to identify competitors' link strategies – you may be able to take advantage of them or identify how they are ranking their websites organically
There are many tools that can be used to conduct a link audit. Most cost money, but do provide a free trial. Here are some popular ones:
- Majestic
- Open Site Explorer
- Ahrefs
- Google Search Console (free – your site)
Use any one of these, or a combination, to analyze your site and the competitors' sites that you noted earlier in the SEO audit. What are your findings? Make notes.
Once you have identified the type of linkable content on the site, develop a strategy (like the ones listed above), to conduct outreach and start generating inbound links to your site. Include your findings and strategy in your SEO audit report.
Search Engine Optimization Summary
This lecture has attempted to cover the broad array of SEO concepts and techniques that can be used to improve a website's organic ranking in the search engines. It is by no means exhaustive, and I suggest you take what you have learned here and continue your research by reading, learning, and, most importantly, testing your own hypotheses on your own site. To do that, you will need to purchase a domain, purchase hosting, purchase an SSL certificate and set the site up as HTTPS, install a CMS on your host (I recommend WordPress), design and develop your site, install plugins such as Yoast SEO to automate some tasks, and start implementing and testing your SEO strategy.
References
Anderson, S. (2018, March 9). How Fast Should A Website Load in 2018? Retrieved from Hobo: https://www.hobo-web.co.uk/your-website-design-should-load-in-4-seconds/
Dean, B. (2018, May 16). Google's 200 Ranking Factors: The Complete List. Retrieved from Backlinko: https://backlinko.com/google-ranking-factors
Dover, D., & Dafforn, E. (2011). Search Engine Optimization Secrets. Indianapolis, IN, United States: Wiley.
Enge, E., Spencer, S., & Stricchiola, J. C. (2015). The Art of SEO: Mastering Search Engine Optimization (3rd ed.). Sebastopol, CA, United States: O'Reilly.
Fishkin, R., & Hogenhaven, T. (2013). Inbound Marketing & SEO. West Sussex: Wiley.
Larson, J., & Draper, S. (2018). Digital Marketing Essentials. Rexburg, Idaho, United States: Edify.
Moogan, P. (2013). The Linkbuilding Book. Paddy Moogan.
Moz. (2017). 2017 Local Search Ranking Factors. Retrieved from Moz.com: https://moz.com/local-search-ranking-factors
Moz. (n.d.). Domain Authority. Retrieved from Moz.com: https://moz.com/learn/seo/domain-authority
Moz. (n.d.). Page Authority. Retrieved from Moz.com: https://moz.com/learn/seo/page-authority
Mueller, J. (2015, April 25). English Google Webmaster Central office-hours hangout. Retrieved from YouTube: https://www.youtube.com/watch?v=h0thsBnTUyg#t=2760
Patel, N. (n.d.). How to Create a Site Structure That Will Enhance SEO. Retrieved from NEILPATEL: https://neilpatel.com/blog/site-structure-enhance-seo/
Schema.org. (n.d.). Organization of Schemas. Retrieved from Schema.org: https://schema.org/docs/schemas.html
Schwartz, B. (2012, May 14). Majestic SEO Announces New Link Metrics: Trust Flow & Citation Flow. Retrieved from Search Engine Land: https://searchengineland.com/majestic-seo-announces-new-link-metrics-trust-flow-citation-flow-121230
Soulo, T. (2018, October 12). Ahrefs’ SEO Metrics: What They Mean and How to Use Them. Retrieved from Ahrefs.com: https://ahrefs.com/blog/seo-metrics/
Teh, J. (2018, July 27). HTTP vs HTTPS: The Difference And Everything You Need To Know. Retrieved from SEOpressor: https://seopressor.com/blog/http-vs-https/
Ward, E., & French, G. (2013). Ultimate Guide to Link Building. United States: Entrepreneur Press.