In the first week of April, reports surfaced of web pages being de-indexed from Google. Many website owners and SEO experts raised concerns about falling page rankings, and Google confirmed that a bug had been triggering the de-indexation.
MozCast showed a multi-day rise in temperature, including a sudden spike to 105 degrees on April 6th. During the de-indexing, pages dropped out of the rankings amid the flux and then reappeared. But SERP-monitoring tools could not distinguish the different causes contributing to that flux.
Is it possible to isolate the de-indexing flux?
Google's own tools are among the best ways to check whether a page is indexed, but using them at this scale is difficult, and once an event has passed, access to the historical data is lost as well. If, however, a set of URLs can be isolated whose presence in the rankings is normally stable, that stability can serve as a baseline and can even be used to detect abnormal patterns. The properties by which such URLs can be isolated include:
These qualifiers help isolate URLs that have shown significant stability and have performed well consistently over time.
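The idea of a stable baseline can be sketched in code. This is a minimal, hypothetical example (the function name, data layout, and 30-day window are assumptions, not the methodology described in the article): given a daily capture of which URLs appeared in tracked SERPs, keep only the URLs present on every single day of the window.

```python
from datetime import date, timedelta

def stable_urls(daily_serps, start, days=30):
    """Return URLs that appeared in the SERP data on every day of the window.

    daily_serps: dict mapping datetime.date -> set of URLs captured that day.
    A URL qualifies as 'stable' only if it was present on all `days` days;
    a single missing day disqualifies it.
    """
    window = [start + timedelta(d) for d in range(days)]
    # Start from the first day's URLs and intersect with each later day.
    stable = set(daily_serps.get(window[0], set()))
    for day in window[1:]:
        stable &= daily_serps.get(day, set())
    return stable

# Toy data: URL "c" drops out on day 2, "d" only shows up on day 3.
d0 = date(2019, 4, 1)
data = {
    d0: {"a", "b", "c"},
    d0 + timedelta(1): {"a", "b"},
    d0 + timedelta(2): {"a", "b", "d"},
}
print(stable_urls(data, d0, days=3))  # only "a" and "b" survive all 3 days
```

A sudden shrinkage of this stable set is exactly the kind of abnormal pattern the article is describing: these URLs should, by construction, almost never disappear.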
Zooming in a bit further
Switching between charts at different scales is disorienting, so we have added a trend line, displayed in purple, to make the pattern easier to follow.
Zoomed in, the dip below the trend between April 5th and April 7th is much easier to see: a 4.0% day-over-day drop on April 5th, followed by a further 4.4% drop the next day. Although the recovery was almost instant and required no special effort to regain the lost rankings, it was still a headache. Note that this metric also shifted slightly during the algorithmic flux in March, which included that month's core update. While there is no way to prove that this drop clearly represents de-indexing, stable URLs are generally unaffected by typical Google algorithm updates.
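The day-over-day drops quoted above are straightforward to compute and flag. Here is a small sketch; the counts and the 3% alert threshold are hypothetical illustrations, not figures from the article (though they are chosen so the first two drops come out near the quoted 4.0% and 4.4%):

```python
def pct_changes(series):
    """Day-over-day percentage change for a list of daily counts."""
    return [
        (curr - prev) / prev * 100.0
        for prev, curr in zip(series, series[1:])
    ]

# Hypothetical daily counts of stable URLs still ranking.
counts = [1000, 960, 918, 955, 1001]
changes = pct_changes(counts)

# Flag any day whose drop exceeds a chosen threshold, e.g. 3%.
flagged = [i + 1 for i, c in enumerate(changes) if c <= -3.0]
print(changes)  # first two entries: -4.0 and roughly -4.4
print(flagged)  # days 1 and 2 are flagged
```

In practice you would compare each day against a smoothed trend line rather than the raw previous day, but the flagging logic is the same.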
Was it systematic or random?
Reaching a definite conclusion about whether the bug was random or systematic is difficult, mainly because the effect was systematic within sites in one way or another. There is also a risk that the analysis skews results by selecting for URL stability. Conversely, trying to measure URL instability would be somewhat nonsensical, since the data set skews toward 'head' terms and lacks the many long-tail queries phrased as natural questions.