Although finding expired domains can be pretty easy, finding expired domains with decent metrics can be a pain. A pretty decent side income can be made from selling domains if you can find the right ones.
The best domains for our purposes are those that have a decent link profile, have not been spammed, have expired and are ready to buy. The best metrics to use when evaluating expired domains are Majestic Trust Flow (TF) and Citation Flow (CF). It is best to ignore Moz’s Domain Authority, because it can be very easily manipulated by just building more links.
I personally aim for domains with at least 20 TF and 20 CF.
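Once you have metrics for a batch of candidate domains, applying that threshold is a simple filter. Here is a minimal sketch; the domain names and TF/CF values are made up for illustration, and in practice you would export these metrics from Majestic:

```python
# Sketch: filter candidate domains by Majestic Trust Flow (TF) and
# Citation Flow (CF). The domains and scores below are illustrative;
# real values would come from a Majestic export.
MIN_TF = 20
MIN_CF = 20

candidates = [
    {"domain": "example-garden.com", "tf": 25, "cf": 31},
    {"domain": "old-spammy-links.net", "tf": 4, "cf": 48},
    {"domain": "plant-directory.org", "tf": 22, "cf": 27},
]

def meets_threshold(d):
    """Keep only domains with both TF and CF at or above the minimum."""
    return d["tf"] >= MIN_TF and d["cf"] >= MIN_CF

shortlist = [d["domain"] for d in candidates if meets_threshold(d)]
print(shortlist)  # the domain with TF of 4 is filtered out
```

Requiring both scores to clear the bar, rather than just one, is the point: a domain can have a high CF simply from spammy link volume, while TF is harder to fake.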
There are loads of different ways to find domains. One big myth is that you need to build your own crawler and have it running on multiple servers. This is simply not true; you can use one of the industry-standard tools, such as Screaming Frog, to get the job done.
Let’s use Screaming Frog to crawl some websites and look at the external links for certain response codes.
Finding suitable seed sites
Your success in finding expired domains rests mainly on your choice of seed sites. Seed sites are the sites you crawl to look for errors in their external links.
In this tutorial, we are going to be looking for old web directories in the niche ‘gardening’. The first step is to search for ‘gardening directory’.
While this may seem a good place to start, it is also a big waste of time. We are looking for expired domains, but Google is going to give us the most relevant, up-to-date results, so the chances of finding expired sites are pretty low.
Let’s narrow down our search criteria a little by selecting a custom date range.
Once the custom range drop-down button has been clicked, the search results can be restricted to a set date range.
For this example, we are going to only look at search results from the year 2000 to 2005.
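If you want to script this instead of clicking through the UI, the custom date range maps to Google's `tbs` URL parameter. This parameter is undocumented and may change, so treat the format below as an assumption rather than a stable API:

```python
from urllib.parse import urlencode

# Sketch: build a Google search URL restricted to a custom date range.
# The tbs=cdr parameter mirrors what the 'Custom range' UI sets; it is
# undocumented and may change, so this format is an assumption.
def date_range_search_url(query, start, end):
    params = {
        "q": query,
        "tbs": f"cdr:1,cd_min:{start},cd_max:{end}",
    }
    return "https://www.google.com/search?" + urlencode(params)

url = date_range_search_url("gardening directory", "1/1/2000", "12/31/2005")
print(url)
```

Dates are given in the US `M/D/YYYY` format the search UI uses.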
Now we have a nice selection of potential seed sites that are full of outbound links. The next step involves manually reviewing the sites. We are looking for sites that have not been updated for a while, which increases the chances that we can find some broken links.
Sift through until you have a nice list and pull them all out into an Excel document.
The first result in the above search looks promising because:
- It’s a directory of gardening links;
- The design looks slightly dated.
Configuring Screaming Frog
Configuring Screaming Frog takes less than 30 seconds. Just click the ‘Configuration’ tab in the top navigation and choose ‘Spider’.
Screaming Frog will, by default, crawl everything, so decrease the crawl time by customizing a few options. The ‘Spider Configuration’ settings I use are:
Check Images [UNCHECK]
Check CSS [UNCHECK]
Check SWF [UNCHECK]
Check External Links [CHECK]
Check Links Outside Folder [CHECK]
Follow internal “nofollow” [CHECK]
Follow External “nofollow” [CHECK]
Crawl All Sub-domains [CHECK]
Crawl Outside of Start Folder [UNCHECK]
Crawl Canonicals [UNCHECK]
Ignore robots.txt [CHECK]
Click ‘OK’ and we are ready to start crawling.
- Enter the URL of the site you want to crawl and click ‘Start’
- Click the ‘External’ tab; we are only interested in external links in this crawl, so make sure you are looking at this tab.
- Sort the results by Status. The external links will all have a status (301, 302, etc). We are specifically looking for the ‘DNS lookup failed’ status.
- Wait until the crawl is finished (100%)
- Keep an eye on how many URLs are crawled
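If the crawl is large, it can be easier to export the External tab to CSV and filter the failed lookups with a short script. This is a minimal sketch; the column names (‘Address’, ‘Status’) are assumptions, so check them against your own export:

```python
import csv
import io

# Sketch: pull 'DNS lookup failed' rows out of a Screaming Frog
# external-links CSV export. The column names ('Address', 'Status')
# are assumptions -- verify them against your own export file.
def failed_lookups(csv_text):
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["Address"] for row in reader
            if row["Status"] == "DNS lookup failed"]

# Illustrative export data, not a real crawl result.
sample = """Address,Status Code,Status
http://live-site.com/page,200,OK
http://gone-forever.com/links.html,0,DNS lookup failed
http://moved.com/old,301,Moved Permanently
"""
print(failed_lookups(sample))
```

In real use you would read the exported file with `open(...)` instead of the inline `sample` string.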
Analyzing the results
Once the domain has been crawled and you have a decent list of URLs with the ‘DNS lookup failed’ status, copy the domains out of Screaming Frog and paste them straight into a nice free tool called URL to domain.
This tool is essentially going to trim out all the unneeded information from the links, e.g. sub-folders, leaving just the bare domains.
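The same trimming can be done locally. Here is a rough sketch using Python's standard library; stripping a leading `www.` is a simplification, and this does not handle every multi-part TLD the way a dedicated tool might:

```python
from urllib.parse import urlparse

# Sketch: replicate what a 'URL to domain' tool does -- strip the scheme,
# path and query from each URL, leaving just the host. Removing a leading
# 'www.' is a simplification and multi-part TLDs are not special-cased.
def url_to_domain(url):
    host = urlparse(url).netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    return host

# Illustrative URLs, as they might appear in a crawl export.
urls = [
    "http://www.gone-forever.com/links/page.html?id=3",
    "https://plants.example.org/directory/",
]
domains = sorted(set(url_to_domain(u) for u in urls))
print(domains)
```

Deduplicating with `set` matters here, because a seed site will often link to the same dead domain from many pages.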
Then all that is left to do is check the availability of the domains and check their metrics. Any suitable domains can be bought and sold for a tidy profit.