The Verge Makes A Merge & An SEO Fail

EDIT: The error appears to be fixed, but The Verge never replied to any of my attempts to contact them about this. Hopefully they fixed it in time.

Popular online news website The Verge announced via social media today that it is merging the mobile and tablet versions of its site into a new, responsive version of the website. Errors are often committed when large moves like this are made, so I was curious whether a site as large as The Verge had properly tested its URL structure for SEO and UX before finalizing the merger or whether, like so many others, they had missed something.

Please note, I’ve done my best to notify The Verge about this error, so hopefully they clear it up soon. I’m only writing this quick blog post as a warning to others who might want to move URLs or launch a new version of their site.
[Screenshot: my comment on The Verge’s Facebook page]

Prior to the new, responsive version of the site, The Verge appears to have used two sets of URLs. The main, PC- and laptop-friendly site was located at http://www.TheVerge.com, and the mobile site was located at http://mobile.TheVerge.com. Other than the subdomain, the URL structures for pages were identical. For example, the page /2012/3/29/2910536/google-investigating-play-store-error was found on both domains and still appears on the WWW (main) version of the site.

Before a site is launched, an SEO should use a tool to check what status code the old domain returns when seen by a bot. These tools are called “server header checkers,” and SEOBook.com has a good one you can use. If The Verge had done their redirects properly, they might have done a 1-to-1 redirect from each MOBILE URL to the same URL on WWW. This is known as URL mapping and can be a bit tedious, especially for a site the size of The Verge. For large sites running on Apache there are other solutions, such as using regex (regular expression) rules inside an .htaccess file to redirect each URL, which in this case would have been fairly simple since the only thing changing was the subdomain. Either way it’s a pretty easy redirect to make, and even a novice programmer could build it and have it up and running in a day’s work or two.
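A subdomain-only redirect of the kind described could look something like this. This is a hedged sketch of an Apache mod_rewrite rule in an .htaccess file; I have no visibility into The Verge’s actual server setup, so treat the hostnames and configuration context as assumptions:

```apache
# Send every request on the mobile subdomain to the same path on www
# with a permanent (301) redirect. Query strings are preserved by default.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^mobile\.theverge\.com$ [NC]
RewriteRule ^(.*)$ http://www.theverge.com/$1 [R=301,L]
```

Because only the subdomain changes, a single rule like this covers every URL; no per-page mapping file is needed.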

Running a few of the mobile URLs through the server header checker tool mentioned above, it appeared that the 301 redirects were indeed being handled properly. Each MOBILE URL returned a 301 Moved Permanently status pointing to the corresponding WWW URL. I tried this with several pages found by searching Google and Bing for “site:mobile.theverge.com,” making sure to include the main URL, category pages, and story URLs. All seemed to be a go, but merely pasting a URL into a checker tool isn’t enough to ensure URLs are being handled correctly. Bots don’t copy and paste URLs to assign link juice; they follow them from other websites. That’s a very important distinction, and it’s exactly where problems can hide.
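The first-hop check a server header checker performs can also be scripted. A minimal sketch using only the Python standard library (the URLs are illustrative; the actual check above was done with an online tool):

```python
# Inspect the first response for a URL without following redirects,
# then verify it is a 301 to the same path on the www subdomain.
import urllib.request
import urllib.error

class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Stop urllib from following redirects so the raw 301 is visible."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # returning None leaves the 3xx unhandled

def first_response(url):
    """Return (status_code, Location header) for the first hop only."""
    opener = urllib.request.build_opener(NoRedirect)
    try:
        resp = opener.open(url)
        return resp.status, resp.headers.get("Location")
    except urllib.error.HTTPError as e:
        # With redirects disabled, urllib surfaces 3xx responses as HTTPError.
        return e.code, e.headers.get("Location")

def maps_one_to_one(mobile_url, status, location):
    """True only for a 301 pointing at the same path on www."""
    expected = mobile_url.replace("http://mobile.", "http://www.", 1)
    return status == 301 and location == expected
```

Running `maps_one_to_one(url, *first_response(url))` over a list of MOBILE URLs would flag any redirect that lands somewhere other than the matching WWW page.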

I used Moz’s Open Site Explorer to find pages that linked to documents on the MOBILE version of the site and then clicked those links. This is what uncovered the SEO fail made by The Verge: every single MOBILE URL, when clicked from a referring site, is redirected to the WWW homepage. This is a big problem from both an SEO and a UX standpoint. First, users get redirected to a home page that makes it incredibly difficult to find older documents and is designed to funnel traffic to new, fresh content. Second, a search bot following these links receives a different response: a 301 Moved Permanently to the WWW homepage (i.e. http://www.TheVerge.com).
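One way to reproduce the difference between pasting a URL and following a link is to repeat the first-hop request with a Referer header, as a visitor arriving from another site would send. A hedged sketch (the URLs and Referer value are illustrative assumptions; whether The Verge’s servers actually key off the Referer is unknown):

```python
# Fetch only the first response for a URL, optionally sending a Referer
# header to mimic a click from a linking site, and detect homepage dumps.
import http.client
from urllib.parse import urlsplit

def first_hop(url, referer=None):
    """Issue one request without following redirects; return (status, Location)."""
    parts = urlsplit(url)
    conn = http.client.HTTPConnection(parts.netloc)
    headers = {"Referer": referer} if referer else {}
    conn.request("GET", parts.path or "/", headers=headers)
    resp = conn.getresponse()
    return resp.status, resp.getheader("Location")

def redirected_to_homepage(location, homepage="http://www.theverge.com"):
    """True when a deep URL's redirect target is the bare homepage."""
    return location is not None and location.rstrip("/") == homepage.rstrip("/")
```

Comparing `first_hop(url)` against `first_hop(url, referer="http://example.com/some-post")` on the same MOBILE URL would surface exactly the inconsistency described above: the pasted request looks fine while the referred request lands on the homepage.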

The net result for The Verge will be a very quick drop in search rankings for all of the long-tail terms their old news stories ranked for, which will directly impact their advertising revenue and could cause other unforeseen issues. It’s never enough to merely click a link or to pass a link through a server header checker tool. When making a site move or a URL merger like this, you must do both on numerous URLs to ensure everything is being handled properly for the sake of both users and bots.

New Website / Website Update Tips:

  • Catalog all URLs prior to doing work
  • Set up the new or updated site on a test server
  • Ensure the new or updated site is not indexable by search engines (meta robots / robots.txt)
  • Decide how you’ll handle URL redirects
  • Prior to launch, check each URL’s status code via a server header checker tool
  • Prior to launch, check URLs by clicking on them from linking websites
Joe Youngblood


Joe Youngblood is a top Dallas SEO, Digital Marketer, and Marketing Theorist. When he's not working with clients or writing about marketing he spends time supporting local non-profits and taking his dogs to various parks.

2 Comments


  • Great post, and congrats for jumping on this so soon. I have a question, albeit slightly out of context (I hadn’t done any prior research on The Verge before writing this).

    You refer to URL mapping, but if the URL structures are identical as mentioned, why wouldn’t they just 301 all “mobile.” requests to the “www.” version, as with a non-www to www redirect, for example? Wouldn’t URL mapping be more appropriate if they had different URL structures between the mobile and desktop sites and wanted to map out only the most important posts (not that that is necessarily a good idea)?

    • joeyoungblood

      You’re absolutely correct. When you release a new website with a new URL structure, say going from an HTML/CSS template to a CMS like WordPress, you’ll probably have to map the URLs, but for a huge news site that could be cumbersome. The Verge, however, doesn’t appear to have changed URL structures at all; they merged the mobile URLs into the traditional ones. They could still have ‘mapped’ them by dumping the URLs into a spreadsheet, finding and replacing MOBILE with WWW, and generating the appropriate redirect code, but that might take too much time and produce a massive file. So in this case simply using regex rules in an .htaccess file could have solved the problem fairly easily, as I mention at the end of that same paragraph.
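      That spreadsheet-style find-and-replace mapping could also be scripted in a few lines. A rough sketch (the `Redirect 301` directive format and the input URLs are illustrative assumptions about how such a map might be generated):

```python
# Turn a list of MOBILE URLs into one-per-line 'Redirect 301' directives,
# mimicking the spreadsheet find-and-replace approach described above.
from urllib.parse import urlsplit

def redirect_line(mobile_url):
    """Build a 'Redirect 301 /path http://www...' line for one URL."""
    path = urlsplit(mobile_url).path or "/"
    target = mobile_url.replace("http://mobile.", "http://www.", 1)
    return f"Redirect 301 {path} {target}"

def build_map(mobile_urls):
    """Join individual redirect lines into a full map file body."""
    return "\n".join(redirect_line(u) for u in mobile_urls)
```

      For a site with hundreds of thousands of stories the resulting file would indeed be massive, which is why the single regex rule is the better fit here.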

      I am uncertain how they are initiating redirects, so I have no clue where the mistake might be, but it’s pretty clear they are not using either method mentioned in the article.