Day 53 - Sitemaps and Google search indexing progress

by alex 25. March 2013 19:41

Last night, I reconfigured the blog to use the same Google Analytics tracker currently used by the product site.

I also submitted two sitemaps to Google: a map of the pages in the product site and the RSS feed from the blog. After the submission, Google Webmaster Tools was able to make a few suggestions. The only actionable data I have received so far is that the blog's meta description was too short.

Submitting the sitemap directly caused 17 pages to be submitted to Google for scanning, and 1 of them was immediately indexed. An index entry is a record of a page in a search engine's database - you could have the most interesting and useful content in the world, but if your site isn't indexed, it will never appear in search results.
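For readers who haven't built one: a sitemap is just an XML file listing your page URLs. A minimal one can be generated with Python's standard library; the URLs and dates below are placeholders, not my actual pages:

```python
# Minimal sitemap generator using only the standard library.
# The URLs and lastmod dates are placeholders, not real pages.
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    # The root element must declare the sitemaps.org namespace.
    urlset = ET.Element("urlset",
                        xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for loc, lastmod in urls:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "lastmod").text = lastmod
    return ET.tostring(urlset, encoding="unicode")

pages = [
    ("http://example.com/", "2013-03-25"),
    ("http://example.com/features", "2013-03-20"),
]
xml = build_sitemap(pages)
print(xml)
```

The resulting file is what you upload (or point Webmaster Tools at) when you submit a sitemap.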

You can get pages indexed by submitting sitemaps or by earning cross-links from other blogs and sites.

Here's a chart of my index history over the last year:

(I just lost half this blog post because I accidentally closed the tab, and that's very frustrating.)

The product site didn't exist until sometime in December - my first record of traffic from my current tracker is on December 29th. It's interesting to note that I didn't get any pages indexed until almost a week after the site was published.

Was this because I was queued by Google in some way or did I do something to bootstrap the process? I can't remember.

However, if you are starting a new site, you should submit a sitemap and use the Google URL submitter as early as possible, so that Google can begin indexing your pages and building trust.

Early internet traffic for a software startup

by alex 25. March 2013 02:37

This post reviews the internet traffic patterns to my product site and blog for the first three months of their existence.

My traffic from this year, January 1st through March 23rd, is discussed below. My gut says that my current traffic is nowhere near enough to sustain the business, but I don't have enough experience to form a baseline to compare my progress against. Maybe this information will be useful to others.

I have two web applications - one is a homebrew ASP.NET MVC project hosted at and the other is a Blog Engine.NET installation hosted at

Since they're separate applications, I configured them in Google Analytics to track the root site and the blog independently. After creating the reports below, I think this was an error.

Initially I thought that separating the sites would be useful, as I could more easily isolate traffic patterns generated by the blog and by the root product page. However, using two tracking numbers doubles the amount of research I have to perform to draw actionable conclusions from the data, and it makes it much harder to see how visitors travel between the root site and the blog. It also probably makes more sense to use one tracking number because Google and other search engines treat the root site and the blog as the same site, regardless of how the server is configured to serve up the content.

My pattern in the future will be to use one tracking number per domain. I might still use a unique tracking number for each subdomain, depending on the traffic patterns I expect between subdomains.

Product Site

The traffic is illustrated by the following graph:

The first spike, on February 1st, was when I sent my farewell email to the employees of my day job; the email had a link to my product site. The largest spike, four days later, was when I announced to my friends and family on Facebook that I had quit my job and started a software company. The most recent spike was another Facebook announcement, that my site had been redesigned and updated.

Audience summary

  • 145 unique visitors
  • 241 visits
  • 726 page views (3.01 pages per visit)
  • 2 minutes and 38 seconds for the average visit duration
  • 52.70% bounce rate
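As a quick sanity check, the pages-per-visit figure follows directly from the raw counts:

```python
# Sanity-check the derived metric against the raw counts above.
visits = 241
page_views = 726
pages_per_visit = page_views / visits
print(round(pages_per_visit, 2))  # 3.01, matching the report
```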

Traffic source information

  • 4.15% search traffic
  • 34.44% direct traffic
  • 61.41% referral traffic

My top referrers, in order: Facebook, this site, and (I made a comment or two on this blog and got some traffic from trackbacks).

SEO results

  • 320 impressions
  • 5 clicks
  • 280 average position
  • 1.56% click through rate
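Click-through rate is just clicks divided by impressions, and the reported 1.56% checks out against the raw counts:

```python
# CTR = clicks / impressions, expressed as a percentage.
clicks = 5
impressions = 320
ctr = clicks / impressions * 100
print(round(ctr, 2))  # 1.56
```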

I'm trending on a few queries, including "release management", "code review process" and "software build process". Google Analytics reports a click-through rate of 1.56%, all of which is for the term "website build process". I'm inclined to think that this is noise and my CTR is actually 0%.

My product site needs to trend higher for my keywords. Since my site has been recently redesigned, tomorrow I will take a close look to make sure I hit all the basics of SEO, and work on some useful content (blog posts).


Blog

I created my blog about a month after setting up the root product site. The traffic to the blog is illustrated by the following graph:

Some immediate thoughts -

The blog's peak day is not as high as the website root's. I attribute this to the abnormal spikes generated by Facebook: when saying farewell to my old coworkers, and when announcing the formation of the software business and the second site design, I linked to the root website. Those three days had higher-than-normal traffic due to direct links to people outside my target market; each was also marked by a high bounce rate.

However, the blog has a better daily traffic rate. The following graph shows both graphs together, with the blog normalized to fit in scale to the root site (the blog is red):
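I don't know exactly how the graphing tool normalizes, but the idea can be sketched in Python with hypothetical daily visit counts: scale the blog series so its peak lines up with the root site's peak.

```python
# Sketch of the normalization used for the combined graph: scale the blog's
# daily visit counts so its peak matches the root site's peak.
# Both series below are hypothetical, not my actual analytics data.
root = [0, 5, 42, 3, 1, 7]   # daily visits, root site (hypothetical)
blog = [1, 2, 3, 0, 4, 2]    # daily visits, blog (hypothetical)

scale = max(root) / max(blog)
blog_scaled = [v * scale for v in blog]
print(blog_scaled)  # the blog's peak day now matches the root's peak of 42
```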

I attribute the blog's generally higher traffic rate to the quality of its content. The blog has a number of SEO keyword-rich posts and is perhaps mildly helpful. The blog has had its share of idle stretches, but it had fewer zero-traffic days than the root site.

Audience summary


  • 43 unique visitors
  • 131 visits
  • 345 page views (2.63 pages per visit)
  • 6 minutes and 56 seconds for the average visit duration
  • 67.18% bounce rate

The average visit for the blog is almost triple that of the root site. I think this is again due to the content. The bounce rate, however, is higher. I suspect this might be because the blog is young, doesn't have much content yet and I link to external resources in many of my posts.

To decrease my bounce rate, I could do a few things:

  1. Add a page break to every post - this requires the visitor to click on the "read more" button to get to the meat and potatoes of the post. I don't like this solution because it hurts the user experience (more clicks). The copy before the break has to sell the visitor that the post is worth clicking on; if there is no break, the visitor can easily mouse-wheel down to see images or headers that might catch their eye. If this content was hidden behind a break then the bounce rate might actually increase for posts with weak leads.
  2. Stop linking to external websites - this is an interesting idea. Ultimately the posts are designed to be useful to the visitor. If I have my own content to link to, then I should link to it rather than to another website. The trouble is that I don't have much content yet and it's easier to link to a Stack Overflow article. I should make an effort to reduce external links by increasing my own content, but not to the point that it harms the visitor's experience. For example, if the post is about a new JavaScript library, it would be appropriate to link to the external site. But if the post is about password hashing, I should make an effort to document password hashing best practices and pitfalls rather than linking to someone else who already has.

I'm not going to add page breaks, but I will consider creating more content rather than linking to other resources, as appropriate.

Traffic source information


  • 4.58% search traffic
  • 74.81% direct traffic
  • 20.61% referral traffic

My top referrers are, in order, the same as for the root site: Facebook, this site, and the blog I commented on.

SEO results


  • 3000 impressions
  • 5 clicks
  • 1.0 average position
  • 0.17% click through rate

The blog has been active for about half as long as the root site, but it has roughly 10x the impressions. An impression happens when one of my pages is visible to a potential visitor performing a Google search, so this means that a page from the blog was actually visible in Google's search results 3000 times.

There's something wrong with the configuration of the blog's analytics - there isn't any query data, but the supporting data (such as landing pages) is populated. As such, the 1.0 average position is probably just the default value. I have 5 clicks recorded for each of two posts and my author page. It's suspicious that all of my click activity is in 5's; also, the summary reports 5 clicks, but the landing page report shows 15 clicks... all this leads me to believe that something isn't configured correctly. Anyway, the two posts which have reported clicks are ones that I would expect - they are both how-to articles.

Conclusion and what happens next

  • I need to get the blog and root site on to the same tracker.
  • I'm going to move the blog since that is the one that currently has suspicious issues with the reported data.
  • I will make an effort to reduce my external links in blog posts.
  • I will continue efforts to create unique content for the blog - that seems to be getting impressions.
  • I should try to improve the leading copy for my blog posts.
