Day 54 and 55 - Logos, Twitter and wasting time

by alex 26. March 2013 19:34

Last week I settled on Build Keeper.NET for my product's brand name. Yesterday I worked on a few logo ideas based around that name. The next post will include some design samples.

I posted one of my recent traffic assessments to Twitter. Writing to the #SEO and #startup hash tags is kind of like joining a long line of people who are shouting at the same wall.

There's so much traffic on these tags that unless you're already a major player, they won't generate any traffic or interest. So hey, let's tweet all the posts from the last three days or so. At least I'm not in a vacuum!

I'm going to tweet every post to this blog until the Twitter police come and take away my keyboard.

They'll never find me.

Yesterday and today were not very fruitful. I've been editing a long post on security, preventing XSS attacks and rich editing in websites. I think this has stagnated me a little; I've taken a "weekend" but without any relaxing. Sometimes the next step isn't 100% clear.

Tags:

SEO: Canonical URLs and correct redirecting with 301

by alex 25. March 2013 21:05

I just finished setting up my canonical URLs and redirections for this domain and blog. If this is configured incorrectly your various pages may appear as two different sites to search engines.

Canonical URLs - what are they and why should you care?

Is your URL http://example.com or http://www.example.com? Or are these the same site? If they're the same, is http://blog.example.com also the same? To most search engines, URLs with the www subdomain do not refer to the same content as those without it.

That means that if you have links 'all over the internet' to both www.example.com and example.com, then these appear to be two different websites to search engines. Deciding whether or not you will use www, and ensuring that all of your links follow that decision, is part of the effort to make your URLs canonical.

I don't remember ever reading a definition of this term that was worth writing home about, but here's my best shot after reading everything on the internet -

Canonicalization of URLs is the process of deciding the best URL to use for a given resource. A canonical URL, therefore, is the preferred URL to access a resource.

For example, these links could all refer to the same file, index.html stored in the wwwroot folder of a server:

  • www.example.com
  • example.com
  • www.example.com/index.html
  • example.com/index.html
  • www.example.com/index
  • example.com/index
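The canonicalization decision can be sketched in code. Here's an illustrative Python snippet (not part of my site - the `canonicalize` helper and its rules are just my own invention for this example) that collapses all six variants above into a single preferred URL:

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url):
    """Normalize a URL to one preferred form: force the www
    subdomain and fold default-document paths into '/'."""
    scheme, host, path, query, fragment = urlsplit(url)
    if not host.startswith("www."):
        host = "www." + host
    # /index.html, /index and '' all serve the same default document
    if path in ("", "/", "/index", "/index.html"):
        path = "/"
    return urlunsplit((scheme, host, path, query, fragment))

variants = [
    "http://www.example.com", "http://example.com",
    "http://www.example.com/index.html", "http://example.com/index.html",
    "http://www.example.com/index", "http://example.com/index",
]
# all six variants collapse to the same canonical URL
print({canonicalize(u) for u in variants})
```

The real fix belongs on the server (a 301 redirect, as shown later in this post), but the idea is the same: one resource, one URL.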

There are a few negatives to not canonicalizing your URLs:

  1. Webcrawlers will hit your server multiple times for the same content
  2. Search engines will generally attempt to merge the rankings of similar URLs providing the same content (e.g., example.com and example.com/index.html), but if they can't automatically resolve the duplication, the ranking of your content may be distributed between multiple URLs.

That bears repeating. If you don't have your URLs set up properly, search engines may dilute your page's ranking by distributing it across multiple URLs.

Do you add the www or remove it? There are fights on both sides of this argument, but for what it's worth, this decision might as well just come down to aesthetics.

After you make the decision of how links to your site should look, you need to change all of your internal links to use the canonical format. That just means that you have to make sure that you always include the www or you always omit it. If you have any other resources that are available through multiple sources (for example, the Home controller and the Index action for ASP.NET MVC projects), make sure to also always link to these via the same URL.

I've always elected to include the www. This means that otsix.com will return a permanent redirect (301) indicating the content is located at www.otsix.com. To see a sample of this, open otsix.com - your browser will automatically redirect to http://www.otsix.com. Additionally, I don't have anything in the root of this domain at this point, so I want to redirect to my blog subfolder.

To achieve these goals, I added the following snippet to my web.config file, located in the wwwroot folder:

<system.webServer>
	<rewrite>
		<rules>
			<rule name="Redirect non-www to www - loses path info" patternSyntax="ECMAScript" stopProcessing="true">
				<match url=".*" />
				<conditions>
					<add input="{HTTP_HOST}" pattern="^otsix\.com$" />
				</conditions>
				<action type="Redirect" url="http://www.otsix.com/blog/" />
			</rule>
			<rule name="Redirect root to blog" stopProcessing="true">
				<match url="^$" />
				<action type="Redirect" url="http://www.otsix.com/blog/" />
			</rule>
		</rules>
	</rewrite>
</system.webServer>

These rules direct IIS to return permanent (301) redirects for the following URLs. If I navigate to any of these URLs, the browser is automatically redirected to my canonical URL, http://www.otsix.com/blog/.

  • http://otsix.com/
  • http://otsix.com/blog/
  • http://www.otsix.com/

Later, when I have content for the root site, I'll remove the "redirect root to blog" rule.

I'll comment on this later, but this post only dealt with top-level canonicalization. These URLs also need to be canonicalized if they serve the same content:

  • http://www.example.com/browser?category=bags&sortOrder=desc
  • http://www.example.com/browser?sortOrder=desc&category=bags
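As a rough illustration (in Python, since the real fix depends on your server framework), canonicalizing a query string mostly means emitting the parameters in one fixed order:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def canonicalize_query(url):
    """Rebuild the query string with parameters in sorted order,
    so equivalent URLs compare (and get indexed) as one."""
    parts = urlsplit(url)
    sorted_query = urlencode(sorted(parse_qsl(parts.query)))
    return urlunsplit(parts._replace(query=sorted_query))

a = canonicalize_query("http://www.example.com/browser?category=bags&sortOrder=desc")
b = canonicalize_query("http://www.example.com/browser?sortOrder=desc&category=bags")
print(a == b)  # True - both normalize to the same URL
```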

Choosing a page title scheme for higher CTR in search results

by alex 25. March 2013 19:55

When I do a Google search for "site:http://www.eisenhartsoftware.com", I see the following results:

Please note that I kept this as an image because webcrawlers can't read images - I don't want this blog to trend above my product sales site.

The current titles are ineffective and stupid

The titles of the pages in the search results are not click-inspiring. My target audience will be searching for "build management", "component control" and so on. Pages with titles like "Features - Eisenhart Software" are not going to be clicked on when competing against titles like "Wrestle your software components".

The titles are also not good for SEO efforts. "Features - Eisenhart Software" will perhaps be relevant for anyone searching for that exact phrase, for "features", or for "Eisenhart Software". No one will search for that exact phrase, and I don't want to trend high for generic searches like "features". But I do want to be the first result for Eisenhart Software.

Page titles for increased CTR in search results

In the page template file (this is the master page for you ASP.NET MVC people out there) I changed the title formatting to be:

<title>@Constants.ProductName - @ViewBag.Title | .NET Component Management Simplified</title>

ViewBag.Title is the title of the particular page the visitor is currently on - this might be "Features", "Sign in", "Pricing" and so on. Constants.ProductName is a constant string that contains the product name. I created this constant because I couldn't decide on a product name until day 50 or so and this constant made it easy to try out new names. (Note to self: next time, choose a brand name much faster.)

The product name was moved to the front of the title. "BuildKeeper.NET" is a more interesting way to start a title than "Features".

"Eisenhart Software" was removed. This string occurs often enough in the content to make it trend well (and it's fairly unique to this site). It is also unlikely that visitors interested in purchasing the product will find the site by searching for the business name.

I added ".NET Component Management Simplified" to the template because I need to rank high for these keywords. I ordered the keywords by the decisions a visitor might make. 

  • "Will this work for .NET? Yes."
  • "What does it do? Oh, component management."
  • "And look, it's simple to use."

The maximum title length Google will display in results

The longest title in the search results is "Build Keeper.NET - Simplified Dependency Management - Eisenhart Software", which is 72 characters long; it was truncated at the last whole word, to a final length of 51 characters. The second longest was "Eisenhart Software Dev Blog | By Coders For Coders - Ginger", which is 59 characters; it was not truncated. This suggests that there's a title-length threshold somewhere between 60 and 71 characters, and that passing this threshold will cause Google to trim the page title and add an ellipsis.

This new title template is 58 characters long before accounting for page-specific titles. Most of my page titles are less than 10 characters long. If the full title is truncated by Google, I will probably only lose "Simplified" on average; I'm okay with that.
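The truncation behavior I observed can be approximated in a few lines. This Python sketch is my own guess at what Google appears to do, assuming a 60-character budget; it trims a title back to the last whole word and drops any dangling separator:

```python
def truncate_title(title, limit=60):
    """Trim a title to the last whole word that fits in the limit,
    stripping trailing separators, roughly as search results appear to."""
    if len(title) <= limit:
        return title
    cut = title.rfind(" ", 0, limit)
    if cut == -1:  # no space found: fall back to a hard cut
        cut = limit
    return title[:cut].rstrip(" -|")

long_title = "Build Keeper.NET - Simplified Dependency Management - Eisenhart Software"
print(truncate_title(long_title))
# -> "Build Keeper.NET - Simplified Dependency Management" (51 characters)
```

Running it against my 72-character title reproduces the 51-character result I saw in the search results above.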

Conclusions

I feel like this template change is a solid enough idea that I'm going to push it to production right now.

Ultimately, I'm making this change because "BuildKeeper.NET - Features | .NET Component Management Simplified" is more interesting and informative than "Features - Eisenhart Software", and that should increase my click through rate. However, if I want to really grab a potential visitor's attention, I should probably deviate from this templated pattern, or reduce the template to 20-30 characters and put more interesting, unique content at the start. I will probably experiment with this in the future with various landing pages.

Tags:

Advertising

Day 53 - Sitemaps and Google search indexing progress

by alex 25. March 2013 19:41

Last night, I reconfigured the blog to use the same Google Analytics tracker currently used by the product site.

I also submitted two sitemaps to Google: a map of the pages in the product site and the RSS feed from the blog. After submitting the sitemaps, Google Webmaster Tools can make a few suggestions. The only actionable data I have received so far is that the meta description of the blog was too short.

Submitting the sitemap directly caused 17 pages to be submitted to Google for scanning, and 1 of them was immediately indexed. An index is an entry in a search engine's database - you could have the most interesting and useful content in the world, but if your site isn't indexed it will never appear in search results.

You can gain indexes by submitting sitemaps or by generating cross links from other blogs and sites.
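For reference, a sitemap is just an XML file listing the URLs you want crawled. A minimal example looks like this (with a placeholder URL - the real file lists every page of the product site):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2013-03-25</lastmod>
  </url>
</urlset>
```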

Here's a chart of my index history over the last year:

(I just lost half this blog post because I accidentally closed the tab, and that's very frustrating.)

The product site didn't exist until sometime in December - my first record of traffic from my current tracker is on December 29th. It's interesting to note that I didn't get any indexes until almost a week after it was published.

Was this because I was queued by Google in some way or did I do something to bootstrap the process? I can't remember.

However, if you are starting a new site, you should submit a sitemap and use the Google URL submitter as early as possible in order to allow Google to begin building indexes and trust.

Early internet traffic for a software startup

by alex 25. March 2013 02:37

This post reviews the internet traffic patterns to my product site and blog for the first three months of their existence.

My traffic from this year, January 1st through March 23rd, is discussed below. My gut says that my traffic rates for today are certainly not enough to sustain the business, but I don't have enough experience to form a baseline to compare my progress against. Maybe this information will be useful to others.

I have two web applications - one is a homebrew ASP.NET MVC project hosted at EisenhartSoftware.com and the other is a BlogEngine.NET installation hosted at EisenhartSoftware.com/blog.

Since they're separate applications, I configured them in Google Analytics to track the root site and the blog independently. After creating the reports below, I think this was an error.

Initially I thought that separating the sites would be useful, as I could more easily isolate traffic patterns generated by my blog and by the root product page. However, using two tracking numbers doubles the amount of research I have to perform to create actionable conclusions from the data. Also, it makes it much harder to see how visitors travel between the root site and the blog. AND it probably makes more sense to use one tracking number because Google and other search engines view eisenhartsoftware.com and eisenhartsoftware.com/blog as the same site without regard to how the server is configured to serve up the content.

My pattern in the future will be to use one tracking number per domain. I would decide whether to use a unique tracking number for each subdomain based on the traffic patterns I expect between subdomains.

Product Site eisenhartsoftware.com

The traffic is illustrated by the following graph:

The first spike, on February 1st, is when I sent my farewell email to the employees of my day job; the email had a link to my product site. Four days later came the largest spike, when I announced to my friends and family on Facebook that I had quit my job and started a software company. The most recent spike was another announcement to Facebook, that my site was redesigned and updated.

Audience summary

  • 145 unique visitors
  • 241 visits
  • 726 page views (3.01 pages per visit)
  • 2 minutes and 38 seconds for the average visit duration
  • 52.70% bounce rate

Traffic source information

  • 4.15% search traffic
  • 34.44% direct traffic
  • 61.41% referral traffic

My top referrers, in order: Facebook, this site, and simpleprogrammer.com (I made a comment or two on this blog and got some traffic from trackbacks).

SEO results

  • 320 impressions
  • 5 clicks
  • 280 average position
  • 1.56% click through rate

I'm trending on a few queries, including "release management", "code review process" and "software build process". Google Analytics reports a click through rate of 1.56%, all of which is for the term "website build process". I'm inclined to think that this is noise and my CTR is actually 0%.

My product site needs to trend higher for my keywords. Since my site has been recently redesigned, tomorrow I will take a close look to make sure I hit all the basics of SEO, and work on some useful content (blog posts).

Blog eisenhartsoftware.com/blog

I created my blog about a month after setting up the root product site. The traffic to the blog is illustrated by the following graph:

Some immediate thoughts -

The blog's peak day is not as high as the website root's. I attribute this to abnormal traffic spikes generated by Facebook traffic. When saying farewell to my old coworkers and when announcing the formation of the software business and the second site design, I linked to the root website. These three days had higher than normal traffic due to direct links to my non-target market; each day was also marked with high bounce rates.

However, the blog has a better daily traffic rate. The following graph shows both graphs together, with the blog normalized to fit in scale to the root site (the blog is red):

I attribute the blog's general higher traffic rate to the quality of its content. The blog has a number of SEO keyword-rich posts and is perhaps mildly helpful. I've had a fair bit of idle time with the blog, but it had fewer zero-traffic days than the root site.

Audience summary


  • 43 unique visitors
  • 131 visits
  • 345 page views (2.63 pages per visit)
  • 6 minutes and 56 seconds for the average visit duration
  • 67.18% bounce rate

The average visit for the blog is almost triple that of the root site. I think this is again due to the content. The bounce rate, however, is higher. I suspect this might be because the blog is young, doesn't have much content yet and I link to external resources in many of my posts.

To decrease my bounce rate, I could do a few things:

  1. Add a page break to every post - this requires the visitor to click on the "read more" button to get to the meat and potatoes of the post. I don't like this solution because it hurts the user experience (more clicks). The copy before the break has to sell the visitor that the post is worth clicking on; if there is no break, the visitor can easily mouse-wheel down to see images or headers that might catch their eye. If this content was hidden behind a break then the bounce rate might actually increase for posts with weak leads.
  2. Stop linking to external websites - this is an interesting idea. Ultimately the posts are designed to be useful to the visitor. If I have content to link to, then I should link to it rather than to another website. The trouble is that I don't have much content yet and it's easier to link to a Stack Overflow article. I should make an effort to reduce external links by increasing my own content, but not to the point that it is harmful to the visitor's experience. For example, if the post is about a new JavaScript library, it would be appropriate to link to the external site. But if the post is about password hashing, I should make an effort to document password hashing best practices and pitfalls rather than linking to someone else who already has.

I'm not going to add page breaks, but I will consider creating more content rather than linking to other resources, as appropriate.

Traffic source information


  • 4.58% search traffic
  • 74.81% direct traffic
  • 20.61% referral traffic

My top referrers, in order: Facebook, this site, and simpleprogrammer.com. Same as the root site.

SEO results


  • 3000 impressions
  • 5 clicks
  • 1.0 average position
  • 0.17% click through rate

The blog has been active for about half the time the root site has been, but it has 10x the impressions. An impression happens when one of my pages is visible to a potential visitor performing a Google search. So, this means that a page from the blog was actually visible in Google search results 3000 times.

There's something wrong with the configuration of the blog's analytics - there isn't any query data, but the supporting data (such as landing pages) is populated. As such, the 1.0 average position is probably just the default value. I have 5 clicks recorded for each of two posts and my author page. It's suspicious that all of my click activity is in 5's; also, the summary reports 5 clicks, but the landing page report shows 15 clicks... all this leads me to believe that something isn't configured correctly. Anyway, the two posts which have reported clicks are ones that I would expect - they are both how-to articles.

Conclusion and what happens next

  • I need to get the blog and root site on to the same tracker.
  • I'm going to move the blog since that is the one that currently has suspicious issues with the reported data.
  • I will make an effort to reduce my external links in blog posts.
  • I will continue efforts to create unique content for the blog - that seems to be getting impressions.
  • I should try to improve the leading copy for my blog posts.
