Without this metric, most PR teams show a publication's entire site-wide readership next to their coverage, e.g. "We got coverage on this website, which has 20 million visits a month." Inferring that all 20 million of a website's monthly readers will see the specific article where you achieved coverage is unrealistic, to say the least.

Publications tend to provide only monthly site-wide visit data to third-party data providers; they never give breakdowns by page. That page-level breakdown is what we're attempting to estimate here with our article-level estimate.

You can read more about the rationale and approach in this blog post: "Why estimate article-level metrics".

A summary of our approach to calculating article-level views:
We take the site-wide data from SimilarWeb, then apply our own research to give a more realistic estimate of how many people are likely to have actually viewed each piece of coverage.

Our estimated number shows how many impressions the article is likely to receive over its lifespan.

The algorithm was built from our own research project into how much actual referral traffic comes from websites of all shapes and sizes.

  1. We reverse-engineered this research, using sensible estimates of how many views of an article were likely to generate that level of clicks, based on industry-standard click-through rates. So for a website of a given popularity and traffic level (taken from SimilarWeb), we apply a % share of that site's total traffic.
  2. We then codified this into a scalable estimator.
  3. We detect whether the coverage is a homepage or an article. From this we assign an expected shelf life, during which the page takes a % share of the site's overall traffic.
  4. We then factor in the overall size and popularity of the website to derive how likely your coverage is to be viewed on a site of that size. E.g. on a blog with 600 visitors a month and 50 pages, you're likely to reach a high % of those 600 readers, whereas on a large site like The Telegraph you'll reach a much smaller % of total traffic, because the site is bigger and more popular. Of course, that small % would likely still yield more views than the smaller blog.
  5. A big part of the research was based on real visit data from websites: we have access to a wide range of Google Analytics accounts. E.g. based on our research, a client featured in The Telegraph online (in an average article) gets around 100-600 referral visits from the coverage.
  6. We can then ask how many views of that coverage it took to generate that many click-throughs to the website. Basic laws of click-through rates dictate the thresholds here. E.g. an advert in The Telegraph probably gets a 0.2-1% click-through rate at best, so we set an estimated % for editorial, then extrapolate out to a coverage-view estimate. We carried out this research on a wide variety of websites and applied our findings to our equation.
  7. We take the website’s total estimated visits and apply an estimated share of traffic using the data above.
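The steps above can be sketched as a toy estimator. Every number here (the CTR, the share bands, the shelf-life multiplier, and the function names) is an illustrative assumption for the sake of the example, not our actual model coefficients:

```python
def views_from_referrals(referral_visits: float, ctr: float) -> float:
    """Back out article views from observed referral clicks (step 6).

    E.g. 300 referral visits at an assumed 0.5% editorial click-through
    rate implies roughly 60,000 views of the coverage.
    """
    return referral_visits / ctr


def estimated_article_views(site_monthly_visits: float,
                            is_homepage: bool) -> float:
    """Apply an assumed share of site-wide traffic (steps 3, 4 and 7).

    One page on a small site takes a larger share of total traffic than
    one page on a large publisher; homepage placements get a longer
    shelf life than articles.
    """
    # Illustrative share-of-traffic bands by site size (step 4).
    if site_monthly_visits < 10_000:
        share = 0.05      # small blog: one page reaches a high % of readers
    elif site_monthly_visits < 1_000_000:
        share = 0.01
    else:
        share = 0.002     # large publisher: tiny % of a huge total

    # Homepage coverage stays visible longer (step 3).
    shelf_life_multiplier = 2.0 if is_homepage else 1.0
    return site_monthly_visits * share * shelf_life_multiplier


# Worked example: 300 referral visits at an assumed 0.5% CTR.
print(views_from_referrals(300, 0.005))        # → 60000.0
# A 600-visits-a-month blog vs a 20M-visits-a-month publisher.
print(estimated_article_views(600, False))     # → 30.0
print(estimated_article_views(20_000_000, False))  # → 40000.0
```

Note how the large publisher's tiny share (0.2%) still yields far more estimated views than the small blog's high share, which is the point made in step 4.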