Bill Hartzer https://www.billhartzer.com Sun, 28 May 2017 16:24:09 +0000 en-US hourly 1 Google Crawling AMP Pages 20 Percent Less Since Fred Update https://www.billhartzer.com/pages/googlebot-crawling-amp-pages-20-less/ Tue, 23 May 2017 17:44:20 +0000 https://www.billhartzer.com/?p=8719 googlebot crawl activity
I analyzed the past six months of Googlebot’s crawling activity on my website, specifically looking at how often AMP pages, blog posts, and static pages are being crawled by Google on my site. What I found was pretty interesting.

With the help of OnCrawl, I was able to analyze the log files on my site and specifically pull out Googlebot’s crawling activity. During the past six months, prior to around March 9, 2017, Googlebot’s crawl, on average, was made up of about 50 percent AMP pages. In other words, about 50 percent of the pages they crawled on my site were in the AMP format. I have the AMP WordPress plugin installed on my site, and those URLs all end in /amp/. So, if you go to a blog post on my site and add /amp/ to the end of the URL, you’ll see the AMP version of the same page.
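A breakdown like the one OnCrawl produced can be approximated straight from raw access logs. Here’s a minimal Python sketch, assuming combined-format log lines and the /amp/ URL convention described above; the path rules in `classify` are assumptions you’d adjust to your own site’s URL structure:

```python
import re
from collections import Counter

# Pull the request path out of a combined-format access log line.
LOG_LINE = re.compile(r'"(?:GET|POST) (?P<path>\S+) HTTP/[\d.]+"')

def classify(path):
    """Rough page-type buckets; adjust these rules for your own site."""
    if path.rstrip('/').endswith('/amp'):
        return 'amp'
    if path.startswith('/pages/'):   # assumed blog-post path prefix
        return 'blog post'
    return 'other'

def crawl_breakdown(lines):
    """Percentage of Googlebot hits per page type."""
    counts = Counter()
    for line in lines:
        if 'Googlebot' not in line:
            continue
        m = LOG_LINE.search(line)
        if m:
            counts[classify(m.group('path'))] += 1
    total = sum(counts.values()) or 1
    return {k: round(100 * v / total, 1) for k, v in counts.items()}
```

Feed it the lines of your access log and it returns the share of Googlebot hits per page type. For anything serious you’d also want to verify Googlebot by reverse DNS, since the user-agent string is easily spoofed.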

googlebot crawling activity 6 months

After around March 9, 2017, though, there was a big drop in Googlebot’s crawling of AMP pages. It dropped to about 30 percent, as Googlebot started to favor the blog posts themselves over the AMP versions of those posts. Prior to March 9, 2017, Googlebot crawled the blog posts about 30 percent of the time and other URLs about 20 percent of the time:

AMP Pages – 50 percent
Blog Posts – 30 percent
Pages & other URLs – 20 percent

Also with the help of OnCrawl, I analyzed the past month’s worth of log files on my website. I wanted to fully understand Googlebot’s crawling activity: which pages are crawled, what type of pages are crawled, and how often Googlebot crawls those pages. After looking at a month’s worth of data (April 24, 2017 to May 23, 2017), I discovered that Googlebot crawls AMP pages, on average, about 29 percent of the time, as compared to blog post pages, which it crawls about 44 percent of the time.

Googlebot also crawled other URLs, such as static pages and category pages, about 27 percent of the time.

last 30 days googlebot crawling

To put it another way, here’s the makeup of Googlebot’s crawling activity over the last 30-day period, on average:

Blog Posts – 44.48 percent
AMP Pages – 28.82 percent
Pages – 12.2 percent
Other URLs and Category pages – 14.5 percent

What’s interesting to me is this overall shift, since March 9, 2017, in the crawling of AMP pages. According to Marie Haynes, the Google Fred update, a major quality update, rolled out March 7-8, 2017. This was a quality update that apparently rewards sites with good E-A-T (Expertise, Authoritativeness, and Trustworthiness).

Prior to the Google Fred update, I didn’t see any changes in crawling activity related to AMP pages versus regular blog posts or pages. From January through March, though, I did see a rather large increase in crawling of static pages versus blog posts: during that time, Googlebot stepped up its crawling of static pages and didn’t crawl blog posts as often as it had in the past. At the end of March 2017, though, the crawling of static pages went back down.

In comparison, traffic from Google organic search is up 42 percent since the Google Fred update. So the drop in Google’s crawling of AMP pages (down to only about 30 percent of pages crawled) is fine by me, as I don’t really see much traffic at all from Google AMP. I have Google Analytics installed on my AMP pages, and the traffic to them is practically non-existent.

The Google Fred algorithm update definitely had an effect on AMP pages on my site, as Google is crawling them much less than before: down from 50 percent to around 30 percent over the past month. As far as I can tell, visitors much prefer the actual blog posts to the AMP pages.

This is the crawling activity on my own personal site, this website. I recommend that you pull your own site’s log files and analyze them to see how often Google is crawling your own site.

]]>
Google Search Console Shows No Indexed Pages for Website https://www.billhartzer.com/pages/gsc-no-indexed-pages/ Fri, 19 May 2017 00:58:05 +0000 https://www.billhartzer.com/?p=8479 Google Search Console appears to be having another data issue: it is reporting no indexed pages in the Indexed Pages report. One particular website that I have verified in Google Search Console does, in fact, have pages indexed in Google’s search results. Google Search Console, however, shows that no pages are indexed.

Google indexed pages

We know that Google has had data issues in the past, and many of those have been resolved. However, this is the first time that I personally have seen a website reported as having zero pages indexed. It’s simply not true, and it must be a Google Search Console data issue. In this particular case, it’s a brand site that has been around for a while. The site has links, traffic, and no current manual actions. There are some other, unrelated issues we’re working on, but nothing that would affect pages being indexed in Google. The site ranks for its brand name, as well as many other competitive keywords.

This same site is crawled regularly, so I don’t think this is an issue with the site itself.

Google search console crawl stats

I’m not exactly sure what’s causing this site to show no pages indexed, but I can verify that there are pages indexed for this site, roughly 167 pages:

Google pages indexed

In Google’s current status reports, there are no issues listed for the indexed pages report.

]]>
Microsoft Fails to Protect Bing SEO Domain Names https://www.billhartzer.com/pages/microsoft-fails-renew-bing-seo-domain-name/ Thu, 18 May 2017 20:22:51 +0000 https://www.billhartzer.com/?p=8446 search engine penalty

Back in 2011, I reported that Microsoft bought 49 domain names in a single day. What was most interesting to me was that eight of them were SEO domain names, primarily Bing SEO domain names, mainly bought, I suspect, to protect their brand. Here’s the list of SEO-related domain names they bought that day:

searchengineoptimizationforbing.info
searchengineoptimizationforbing.net
searchengineoptimizationforbing.org
seoforbing.info
seoforbing.org
bingsearchengineoptimization.info
bingsearchengineoptimization.org
bingseo.info

Missing from that list, though, was SEOforBing.com. They had bought the .INFO and .ORG versions, but apparently either couldn’t purchase the .com (which is what I suspect happened) or didn’t try to force the owner to surrender it to them. Nonetheless, the SEOforBing.com domain name is available for registration as of the writing of this post. Perhaps MarkMonitor, the service Microsoft uses to protect its brand, should pick up that domain name while it’s available. Apparently Microsoft thinks seoforbing.info and seoforbing.org are worth keeping, as they’ve held onto those domains since 2011.

Other versions of these Bing SEO domains are also unprotected: bingseo.info is owned by Microsoft, for example, but bingseo.com is not.

At one point, according to the Internet Archive, seoforbing.com redirected to a 404 error page on Google. That’s another reason I think Microsoft or MarkMonitor should pick up these domain names: it doesn’t look like they ever owned seoforbing.com, since it redirected to Google.

Microsoft currently owns at least one of the .com, .net, .org, or .info versions of each of these names:
searchengineoptimizationforbing
seoforbing
bingsearchengineoptimization
bingseo

Bing SEO

I checked NameCheap.com and these are currently available:
searchengineoptimizationforbing.com
seoforbing.com
bingsearchengineoptimization.net
bingseo.org
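If you want to run a check like this yourself, enumerating the name/TLD grid is the easy part. A hypothetical Python sketch (the actual availability lookup, via a registrar API or `whois`, is left out):

```python
from itertools import product

# The four base names from Microsoft's 2011 purchase, and the TLDs
# discussed above.
NAMES = [
    "searchengineoptimizationforbing",
    "seoforbing",
    "bingsearchengineoptimization",
    "bingseo",
]
TLDS = [".com", ".net", ".org", ".info"]

def candidate_domains(names=NAMES, tlds=TLDS):
    """Build every name/TLD combination to feed into an availability check."""
    return [name + tld for name, tld in product(names, tlds)]
```

Each of the 16 results would then be checked against a registrar or whois service; a few, as listed above, come back unregistered.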

And for those of you reading this post and thinking, “gee, what a great opportunity to pick up a Bing SEO domain”, I don’t recommend it–you may be hearing from Microsoft’s lawyers.

]]>
How Not to Redirect an Old Blog Post: Courtesy Microsoft https://www.billhartzer.com/pages/not-redirect-old-blog-post-courtesy-microsoft/ Tue, 16 May 2017 19:09:37 +0000 https://www.billhartzer.com/?p=8411 Well, here’s one that has got me scratching my head. Actually, I’m trying to decide if I should be scratching my head or doing a face-palm thing. An old post from Microsoft, which was a pretty important announcement, is being redirected to another URL. But wait, it actually doesn’t redirect. And then when you click the link that they tell you to, because it’s been moved to another location, you get another notice that you’ll be redirected to the new location. Oh wait, it doesn’t redirect you. Confused yet? You should be.

What’s going on? Why all the redirects?


So, here’s the story. Back in 2009, the search engines got together and decided to create a cool new tag that would solve all sorts of crawling issues, as well as the duplicate content issue (which now apparently doesn’t really exist). They called it the Canonical Tag. That day, announcements were posted by Google, by Microsoft, and by others that supported the new tag. That day, I registered CanonicalTag.com and Canonical-Tag.com, put up a WordPress site, and for years and years it ranked in the top 10 in Google for “Canonical Tag”. It’s slipped now, and that’s okay with me, as I haven’t maintained it very well. But I digress.

Fast forward to today, when I decided to update the site and use some new AI technology to auto-generate a blog post called “What is the Canonical Tag?”, which is now posted on that site. While checking the links on the site, I came across Microsoft’s announcement about the canonical tag, which I had linked to in the site’s sidebar because it’s important. Their post was titled “Partnering to help solve duplicate content issues”, and it’s still linked in that sidebar. According to Majestic, there are roughly 5,500 historical links to that blog post, from over 650 domains:

links to canonical tag blog post

Not bad for a blog post: over 5,000 links from over 650 unique domain names. That’s what earning natural links with content looks like. And it’s their announcement of the canonical tag, which is a pretty big deal now (well, it’s used on a lot of sites).


So, when you now click on that link, you get this:

Object Moved
This document may be found here
THE BLOG YOU ARE ATTEMPTING TO ACCESS IS SCHEDULED FOR MIGRATION, PLEASE CHANGE YOUR URL TO GO TO HTTP://BLOGS.MSDN.COM/ OR HTTP://BLOGS.TECHNET.COM/, ONCE THE BLOG IS MIGRATED, YOU WILL AUTOMATICALLY GET REDIRECTED TO THE NEW BLOG SITE

Wait, what?!? OK, so they’ve moved the blog post. Then why didn’t they redirect it with a 301 Permanent Redirect? Well, okay, at least Microsoft wants to tell us it’s moved.

OK, fine. At least they link to it, right?

Nope!

Well, you click on the link and it goes to ANOTHER notice:

THE BLOG YOU ARE ATTEMPTING TO ACCESS IS SCHEDULED FOR MIGRATION, PLEASE CHANGE YOUR URL TO GO TO HTTP://BLOGS.MSDN.COM/ OR HTTP://BLOGS.TECHNET.COM/, ONCE THE BLOG IS MIGRATED, YOU WILL AUTOMATICALLY GET REDIRECTED TO THE NEW BLOG SITE

with a “website unavailable” title tag. I mean, really?

Which is, in fact, a 404 error:

HTTP/1.0 404 Not Found =>
Content-Length => 2926
Content-Type => text/html; charset=UTF-8
X-Pingback => https://blogs.msdn.microsoft.com/webmaster/xmlrpc.php
Link => ; rel=shortlink
Strict-Transport-Security => max-age=31536000; includeSubDomains; preload
Request-Context => appId=cid-v1:1e40abc5-5fc0-4c08-939b-4be4566cca4b
X-XSS-Protection => 1; mode=block
X-Content-Type-Options => nosniff
ARR-Disable-Session-Affinity => true
Date => Tue, 16 May 2017 19:00:37 GMT
Connection => close
Set-Cookie => msdn-blogs-aad-state-parameter=2710EEA1-A9E7-32A2-3DD6-76AFB677596C; path=/; secure; httponly

It ends up being a 404 error, with a somewhat helpful link to the new location, but that link doesn’t work either. Clicking on it just leads to another 404 error.
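If you capture response headers in the `Name => value` dump format shown above, a small Python helper can pull out the status code, making it easy to spot a “migration notice” page that is really a 404. A sketch (`parse_header_dump` is my own name, not a standard function):

```python
def parse_header_dump(raw):
    """Parse a 'Name => value' header dump into (status_code, headers)."""
    lines = [l.strip() for l in raw.strip().splitlines() if l.strip()]
    # The first line looks like: 'HTTP/1.0 404 Not Found =>'
    status = int(lines[0].split()[1])
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition('=>')
        headers[name.strip()] = value.strip()
    return status, headers
```

Anything in the 4xx range here means the page is telling search engines it is gone, no matter what the HTML says about an upcoming migration.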

So, I did some digging around. On Google. And look what I found: it turns out they’ve moved the content to the Bing blog, but they didn’t bother to redirect the old blog content to it. Here’s the old post, now on the Bing blog:

https://blogs.bing.com/webmaster/2009/02/12/partnering-to-help-solve-duplicate-content-issues

Makes total sense, right?!?

Seems to me that all they’d have to do is set up redirects from the old subdomain, blogs.msdn.microsoft.com, to blogs.bing.com. Such an easy fix; it would take only a few minutes to complete. At this point, http://blogs.msdn.microsoft.com/ doesn’t even redirect to blogs.microsoft.com.
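For what it’s worth, a host-level redirect like that is only a few lines of web server configuration. A hypothetical nginx sketch (the server names and mapping are assumptions for illustration; Microsoft’s actual infrastructure is surely different):

```nginx
# Assumed sketch: permanently redirect the old blog host to the new one,
# preserving the request path so deep links land on the matching post.
server {
    listen 80;
    server_name blogs.msdn.microsoft.com;
    return 301 https://blogs.bing.com$request_uri;
}
```

The Apache equivalent is a one-line `Redirect permanent` in the old host’s configuration. Either way, the old URLs pass visitors and link equity to the new ones instead of dead-ending in 404s.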

This is a fine example of the fact that, even though we preach SEO, site migrations, and technical SEO (it’s the Bing Webmaster blog, after all!), sometimes even the largest companies don’t know what they’re doing.

]]>
Beware of Scam Apps in Apple App Store, Links to Innocent Websites https://www.billhartzer.com/pages/beware-scam-apps-apple-app-store-links-innocent-websites/ Fri, 05 May 2017 17:13:53 +0000 https://www.billhartzer.com/?p=8399 Beware of scam apps in the Apple App Store submitted by rogue developers. Dishonest developers are scamming Apple users, making money from apps that don’t work as described, and pointing those apps’ support links at innocent, unrelated websites.

Here’s how the scam works:

1. A developer creates an app with a ‘popular’ name and description. Sometimes this will be copied directly from another popular app.

2. The app gets approved by Apple, despite the fact that the game or app is not working or doesn’t work as described.

3. The app’s App Store description includes support links to contact the developer. But those links typically point to sites that have nothing to do with the developer; they link to a victim’s website instead.

4. You (or someone else) purchase the app and find out that it doesn’t work or doesn’t work as described. As a result, you request a refund from Apple for the $2.99 or $3.99 you paid for the app.

5. When you request the refund, Apple simply tells you to contact the app developer. But the developer anticipated the refund requests: the support link in the app’s description points to an innocent website (not their own), so all the complaints and refund requests go to someone else, not the original app developer.

Take a look at the example below:

scam app from rogue developer

The “support” link in the app’s description points to MiniGames.com, a site that doesn’t create apps, never has, and has no plans to. The scammer developer linked to this innocent website, and people are now complaining to them, asking for refunds.

In this case, the example app shown above has since been removed from the Apple App Store. You may still be able to see it in Google’s or Bing’s cached search results, as I verified just now.

This continues to happen over and over again: rogue developers scam Apple and the Apple customers who purchase and download games or apps from the App Store, and when a refund is requested, an innocent website that is not the original developer gets dragged in.

I thought Apple did a better job of screening the apps in the App Store.

How to Protect Yourself

There are a few things that you can do in order to protect yourself from this scam in the Apple App Store. Before you download an app, make sure that you take a look at a few things:

1. Take a look at the name of the developer. If you’re downloading an app from a company, like the American Airlines app, make sure that the developer is actually the name of the company and not someone else.

2. Look at how many reviews and stars the app has. If it doesn’t have many stars (or has none), and it doesn’t have any reviews, you should be wary of the app.

3. Take a look at any support links in the app’s description. If the link doesn’t match up with the name of the developer, be wary of the app. In the case of these scam apps, the website linked to doesn’t create apps at all, so clicking the link would most likely reveal the mismatch.

The scam may be working (and there may be far more victims than we think) because the amount spent on the app is typically $2.99, or at most something under $10. Many victims may not report the app because it’s such a small amount of money.

]]>
Buy a Domain With a Search Engine Penalty? Do This https://www.billhartzer.com/pages/buy-domain-search-engine-penalty/ Tue, 02 May 2017 18:37:24 +0000 https://www.billhartzer.com/?p=8393 search engine penalty

Have you bought a domain name without doing the proper domain name due diligence, developed it, and then found out that it’s banned in Google or has a search engine penalty? Well, according to a recent post at Search Engine Roundtable, people are discussing this very issue.

The recommendation there is to just get rid of the domain name and forget about it, or use it for spam (the horror!), and register another domain name instead.

Well, you might recall that I wrote two years ago about how ZDNet bought a domain name at GoDaddy’s expired domain name auctions, and the domain name was banned in Google. ZDNet learned of the search engine penalty after moving their existing website to the new domain name and losing traffic from Google organic search. It turned out the domain name already had a manual action against it because the previous owner had used it for search engine spam.

All the new domain name owner had to do was submit a reconsideration request with Google. Google will manually review the domain name and the website associated with it, and verify that the site is, in fact, under new ownership. This can happen within days: you can get rid of a manual action (penalty) from Google even on a domain name that came with the penalty already attached.

The “advice” about giving up on a domain name that has a search engine penalty is bad advice. Granted, if you wanted to stay “under Google’s radar” and not verify the site with Google Search Console, you might not want to file a reconsideration request. But for those of us who do everything in a “white hat” way (cough, cough), simply filing a reconsideration request will take care of the problem.

]]>
Removed a URL from a Disavow File? Here’s How Long it Will Take to See Results https://www.billhartzer.com/pages/removed-url-disavow-file-take-time-see-results/ Thu, 27 Apr 2017 19:32:29 +0000 https://www.billhartzer.com/?p=8387 remove a URL from disavow file

If you remove a URL from a disavow file, it could take some time before you see actual changes in the search results. I asked John Mueller and Gary Illyes from Google about removing a URL from your disavow file, and whether or not you need to wait until the URL is crawled again.

“Hey @JohnMu @methode if you remove a URL from disavow file, do you have to wait til URL is crawled/cached again?”

In other words, if you disavowed a link to your site but later remove that link from the disavow file, how long will it take Google to recognize the change? We know that when you disavow a URL, that is, tell Google to ignore a certain link to your site, Google gives you “credit” for the disavowal once the URL has been crawled and cached again.

But what if you were to disavow a URL and then remove it? What would happen then, and how long would it take to see any results from removing the URL from your disavow?

The final answer, from John Mueller of Google, seems to be: “All indexing changes take an indeterminate amount of time, so I wouldn’t worry about the timing. Do the right thing, let it settle.”

One of the responses, though, was from Gary, who implied that you’d only need to wait until the page was recrawled and recached for the change to take effect.

You can read the entire thread on Twitter or see it below:

Here is my personal experience with recently removing a URL from a disavow:

I recently reviewed a client’s disavow file, and it contained a lot of URLs. The previous file had been uploaded in January 2017, and they hadn’t seen much lift from it. There were URLs in there that needed to be removed, because I felt they just weren’t bad links: questionable, yes, but not link spam the site had created intentionally. So I removed a lot of URLs, hundreds of them.
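For anyone editing a disavow file for the first time, the format Google accepts is a plain UTF-8 text file with one entry per line (the domains below are made-up examples):

```text
# Lines beginning with "#" are comments and are ignored by Google.

# Disavow a single linking page:
http://spam.example.com/some-page.html

# Disavow all links from an entire domain:
domain:spam.example.com
```

Uploading a new file replaces the previous one entirely, so removing a URL simply means re-uploading the file without that line, which is exactly what was done here.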

Five days after the updated disavow file was uploaded, we’re seeing a 2 percent lift in referrals from Google organic search. Better still, it appears to be higher-quality traffic: leads are up 12.5 percent since the updated disavow file was uploaded.

]]>
Don’t Miss the Domain Name Houston Marketing Cruise https://www.billhartzer.com/pages/dont-miss-domain-name-houston-marketing-cruise/ Wed, 26 Apr 2017 23:11:41 +0000 https://www.billhartzer.com/?p=8378

This Friday night, April 28th, 2017, from 7:00pm to 10:00pm in Houston, I’ll be speaking about SEO Audits and Domain Name Due Diligence as part of the DNHouston.com Evening Dinner Cruise. Join me on an evening dinner cruise around Clear Lake, the Kemah Boardwalk and Galveston Bay.

During the cruise, you’ll hear from technology marketing experts like myself, as well as:

Kevin Kopas, Channel Manager, APAC PIR.org
Andee Hill, Director of Business Development, Donuts Inc.
Victor Pitts, Director of Premium Sales, MMX
Jeff Sass, Chief Marketing Officer, Dot Club Domains, LLC

I personally know each of these speakers and they’re great–you won’t want to miss this event. As I mentioned, I’ll be speaking about performing SEO Audits of websites, as well as performing due diligence on domain names.

Here are the details:

As of the writing of this post, there are still a few tickets available. See the DNHouston.com website to order tickets. Or, you can just show up before boarding time, which is at 6:45pm.

Houston Party Boats:
2500 S Shore Blvd,
League City, Texas 77573
www.houstonpartyboats.com

]]>
Can You Submit URL to Google Someone Else’s Page and Get It Indexed? https://www.billhartzer.com/pages/can-submit-google-someone-elses-page-get-indexed/ Fri, 21 Apr 2017 16:07:33 +0000 https://www.billhartzer.com/?p=8365 Can you submit the URL of someone else’s web page, one that’s not indexed yet, to Google and get that page indexed quickly? Well, Marie Haynes asked that question on Twitter, and thus this blog post was born.

marie haynes submit url google

As you might recall, I’ve posted before about the Submit URL to Google feature, which appears in the search results if you search for it. Another way to get a page indexed in Google fairly quickly is to use Google’s Fetch and Render tool. But with the Fetch and Render or Fetch as Google tool, you have to be logged into Google in order to use it. Yesterday, by the way, the Submit URL to Google feature was broken for a while.

In this blog post, we’re going to use the Submit URL to Google feature that shows up in the search results and see whether someone else can submit this URL and get it indexed quickly. I suspect that it will work, just as planned. It will probably also say that I posted this five hours ago, which would show that Google’s timestamp on search results is still broken.

If this feature works as planned, I’ll update this post below.

Update:
After I published this post, Marie Haynes was nice enough to submit its URL using the Submit URL to Google feature, and the post was indexed right away, within minutes.

submit url to google

]]>
Google Thinks AMP Pages & Blog Posts Contain Duplicate Title Tags https://www.billhartzer.com/pages/google-thinks-amp-pages-blog-posts-contain-duplicate-title-tags/ Fri, 21 Apr 2017 11:57:42 +0000 https://www.billhartzer.com/?p=8359 If you’re a website owner, you have the opportunity to verify your website with Google and get access to Google Search Console. Previously called Google Webmaster Tools, GSC’s goal is to provide helpful information about your website. In most cases, you can get helpful insights into areas of your website that could be improved. Sometimes the information Google provides in GSC is helpful, and sometimes it’s not. Let’s look at an example of Google getting it wrong in GSC and providing inaccurate (or misleading) information: the HTML Improvements report.

When you log into Google Search Console and have successfully verified that you’re the website owner, one report you can look at is HTML Improvements. You can navigate to it here:

Google search console html improvements

In this section of GSC, the HTML Improvements report gives you (mostly) helpful information about missing, duplicate, long, short, and non-informative title tags. It also alerts you to any non-indexable content it finds on your website. In the case of this website (billhartzer.com), I only have duplicate title tags listed. This is an area I’m typically concerned about, as duplicate title tags (especially on a WordPress site) can mean there are WordPress settings or other site-wide issues causing them, which can also mean there is duplicate content on the site. It’s very rare on a WordPress site for duplicate content or duplicate title tags to be created manually; usually it’s the way the WordPress site is set up that’s at fault.

So, imagine the horror I experienced when I saw that Google Search Console is pointing out that I have duplicate title tags on my site! (Cue the creepy horror movie audio sounds.)

duplicate title tags

Well, it turns out that it’s not my fault. Google Search Console is reporting that I have duplicate title tags on my website because I have AMP pages on my site.

Yes, apparently my AMP pages’ title tags are duplicates of the title tags found on the corresponding pages of my site. Uh, Google, if you’re reporting these as duplicate title tags, then you have a serious problem in Google Search Console that needs to be fixed.

Here’s an example of a recent blog post that apparently has a duplicate title tag with its very own AMP page:
/pages/google-maps-ms-pacman-cant-play-country/
/pages/google-maps-ms-pacman-cant-play-country/amp/
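For reference, this pair of URLs already carries the standard markup the AMP WordPress plugin generates (per the AMP project’s documentation; the plugin’s exact output may differ slightly), which is how Google is supposed to know these are two variants of one document rather than duplicates:

```html
<!-- On the canonical blog post: -->
<link rel="amphtml"
      href="https://www.billhartzer.com/pages/google-maps-ms-pacman-cant-play-country/amp/">

<!-- On the AMP page, pointing back: -->
<link rel="canonical"
      href="https://www.billhartzer.com/pages/google-maps-ms-pacman-cant-play-country/">
```

With that pairing in place, the identical title tags are expected behavior, which is exactly why flagging them in HTML Improvements is misleading.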

Luckily, I don’t really have duplicate content or duplicate title tags on my website, despite what Google says. But sheesh!

]]>