Usually people say migrating to HTTPS is pretty straightforward. But quite often my SEO audits reveal issues caused during the move to secure hosting. Google wanted to remove barriers and gave some leeway on the SEO side. But it turns out many people don’t know where that leeway starts and ends. And once you start implementing redirect rules, things can get complicated. Here are 3 common SEO areas where moving to HTTPS goes wrong.
- Google recommends using a 301 (or 302) redirect rule to enforce HTTPS – so don’t try to get away with a canonical. Check whether you or your client is using a canonical tag, which is a much weaker signal. Also, from a user perspective, a canonical does not protect users who land on the non-secure version of a page. A redirect rule ensures that users are always pushed to the secure version of your pages and cannot access the non-secure ones. Cybersecurity awareness is becoming more widespread, and your users may actually look for the HTTPS. If it’s not there, do you think they are more likely to edit the URL and add the S? Or just play it safe and hit the back button? Don’t take your chances. Enforce HTTPS with a 301 redirect.
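As a minimal sketch, assuming an Apache server with mod_rewrite enabled (and .htaccess allowed), a rule enforcing HTTPS with a 301 might look like this:

```apache
# Sketch only: force HTTPS with a permanent (301) redirect
RewriteEngine On
# Match requests that arrived over plain HTTP
RewriteCond %{HTTPS} off
# Send the visitor to the same host and path over HTTPS
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [L,R=301]
```

Other server stacks (Nginx, IIS, CDN-level rules) have their own equivalents; the key point is that the response is a 301, not just a canonical tag.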
- Be sure to consider the WWW and non-WWW versions of the website in the redirect rule used to enforce HTTPS. Check for older redirect rules used to enforce WWW. If those are left in place, a user who lands on any secure HTTPS non-WWW page could be redirected to the non-secure version with WWW inserted. Any old rules should be replaced with a set of rules that covers every variation of the website: HTTPS vs. HTTP in combination with WWW vs. non-WWW.
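To illustrate the pitfall, here is a hypothetical leftover .htaccess rule of the kind described above (example.com is a placeholder domain). It enforces WWW but hard-codes http:// in the target, so a visitor on https://example.com/ gets bounced back to the non-secure WWW version:

```apache
# PROBLEM (hypothetical old rule left in place): enforces WWW,
# but the rewrite target is built with http://, so a request to
# https://example.com/page is 301'd to http://www.example.com/page
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]
```

Rules like this should be removed and replaced with a set that sends every HTTP/HTTPS × WWW/non-WWW combination to the one canonical secure version.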
- Your rules should enforce HTTPS and WWW in as few redirect hops as possible. The thinking here is that individual pages may already have more than one hop now, or may gain them in the future. Redirect rules usually get implemented before individual page redirects, so if Google stops following long redirect chains after the 4th or 5th hop, that last hop is probably the target page. You can use Screaming Frog, DeepCrawl or Moz Crawl Test to find and address long redirect chains. But over the next year, a valuable page could be moved 3 times and have 3 redirect hops added on top of whatever rules are in place. It is better to have more rules and fewer hops. For example, use separate rules to look for HTTP WWW, HTTP non-WWW and HTTPS non-WWW so that each variation needs only one redirect hop to enforce HTTPS with WWW. Sometimes developers want to minimize what is in the .htaccess file (or whatever file houses the redirects) to keep site speed up. However, the bulk is generally caused by a large number of individual page redirects for old pages. Redirect rules are efficient by nature because one line of code covers multiple URLs. You can make everyone happy by doing an exercise to look for long chains and loops, and by eliminating older blogs, press releases, etc. that do not perform well. Trimming the fat and killing these off is essentially the same thing as PageRank sculpting.
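The "more rules, fewer hops" approach could be sketched in .htaccess like this, assuming Apache with mod_rewrite and example.com as a placeholder for your domain. Each variation is matched by its own rule, so every entry point reaches the canonical https://www version in a single hop:

```apache
# Sketch: one rule per variation, each landing on
# https://www.example.com in exactly one 301 hop
RewriteEngine On

# HTTP, non-WWW  ->  HTTPS, WWW
RewriteCond %{HTTPS} off
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [L,R=301]

# HTTP, WWW  ->  HTTPS, WWW
RewriteCond %{HTTPS} off
RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [L,R=301]

# HTTPS, non-WWW  ->  HTTPS, WWW
RewriteCond %{HTTPS} on
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [L,R=301]
```

This is a few more lines than a chained pair of rules (enforce WWW, then enforce HTTPS), but it keeps the hop count at one before any page-level redirects are stacked on top.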