The $1000 Case Study Technical Details

Want to see the process and tools we used to build our $1000 case study?

We'll break down our thought process, the testing rubric, and the tools that we used to collect all the information and parse it before it made it into the study.

We encourage you to see what we did, ask questions, and give us feedback about what you would like to see when we do it again.

Installation and Pricing

When we talked about which metrics were the most important to collect and share in the installation and pricing section of our case study, the biggest pain points were the technical side of the "add-ons" that are offered with every host and the TCO (total cost of ownership).

Add-On Technical Details

It isn't always clear what is needed and what is just for show, so we made it a point to go through all of the major add-ons and explain what they do, why someone might want them, and any alternatives that exist for the same behavior.

TCO (Total Cost of Ownership)

Most hosting companies offer a low introductory rate to draw in new customers, and they know that most people will stick with them for multiple years. That means they can increase their renewal price, and customers are left with the decision to either find another host and migrate their site or pay the premium. Turns out... most people just pay more.

So, we calculated the average TCO for a standard site over the 1, 2, and 3 year time periods so that you can see the real cost of hosting, not just the "get them in the door" price.
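To make the math concrete, here is a rough sketch of the calculation (the prices below are made-up examples, not any particular host's rates):

```python
# Minimal sketch of the TCO (total cost of ownership) calculation described
# above. The prices below are hypothetical examples, not any host's real rates.

INTRO_MONTHLY = 3.95      # promotional rate, usually locked for the first term
RENEWAL_MONTHLY = 10.99   # rate you pay after the introductory term ends
INTRO_TERM_MONTHS = 12    # how long the promotional price lasts

def total_cost_of_ownership(months: int) -> float:
    """Total hosting cost over `months`, assuming you stay and renew at the higher rate."""
    intro_months = min(months, INTRO_TERM_MONTHS)
    renewal_months = max(0, months - INTRO_TERM_MONTHS)
    return intro_months * INTRO_MONTHLY + renewal_months * RENEWAL_MONTHLY

for years in (1, 2, 3):
    print(f"{years} year TCO: ${total_cost_of_ownership(years * 12):.2f}")
```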

Performance

When deciding which performance metrics to collect, we wanted to focus on real-life scenarios instead of the "best case" scenarios we found in other popular reviews. That meant testing multiple simultaneous visitors as well as speeds over time, so we could understand what happened during high and low periods of traffic.

Page Load Time

After looking through the most popular reviews of web hosting platforms, we found a common theme: most of them only measure how fast a page loads once, and that leaves a lot of room for error. In reality, websites and blogs have multiple visitors at a time, and visitors don't always arrive consistently. There are high and low periods of traffic, and you need a host that can stand up to the peak times.

So, we set up a testing rig on AWS that would drive traffic to our testing sites at multiple times of the day and record the results. Every few days, we would collect the data, put it into Tableau, and update the case study with the results.

AWS

We used AWS because I (Julian) am familiar with their deployment systems and have a fair amount of Linux management experience. It was easy to set up, and I trusted the results.

Locust

To actually drive the traffic, we used an open source program called Locust, which we configured to run multiple tests simulating 1, 10, and 100 simultaneous visitors to the website we were testing. It is important to understand that our testing makes some assumptions, the biggest being that all of the visitors come from a single source. In practice, your visitors come from around the world, so your results may be slightly worse than our tests. The benefit of using a single source is that we get some semblance of determinism in the results, and we felt it still gave a fair comparison of the hosts under the same environment.
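If you are curious what that looks like, a locustfile along these lines is enough to define the simulated visitor behavior. This is a minimal sketch rather than our exact configuration; the paths, wait times, and URLs are placeholders:

```python
# Minimal Locust sketch of the kind of test described above. The target paths
# and wait times are placeholders, not our exact configuration.
from locust import HttpUser, task, between

class BlogVisitor(HttpUser):
    # Simulated visitors pause 1-5 seconds between page views.
    wait_time = between(1, 5)

    @task(3)
    def view_homepage(self):
        self.client.get("/")

    @task(1)
    def view_blog_post(self):
        self.client.get("/sample-page/")

# Run headless against a test site, e.g. with 10 simulated visitors:
#   locust -f locustfile.py --headless -u 10 -r 2 \
#          --host https://example-test-site.com --run-time 5m --csv results_10
```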

In the next iteration of this case study we may step up to an enterprise-level load generator that can simulate multiple source locations as well as traffic patterns.

Tableau

Collecting all of the data was the easy part. Once you have a truckload of data, you need to figure out how to present it in a way that is actually useful to the people reading the case study. I had to learn how to use this tool, but I had heard plenty of good things about it and it was pretty straightforward once I got the hang of it.
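For context, the prep work before Tableau was mostly just stitching the per-run CSV exports together into one file. A rough sketch of that step (the file names and layout here are illustrative, not our exact setup):

```python
# Hypothetical sketch of combining per-run Locust CSV exports into a single
# file that Tableau can read. File names and columns are illustrative.
import glob
import pandas as pd

frames = []
for path in glob.glob("results/*_stats.csv"):
    df = pd.read_csv(path)
    # Tag each row with the run it came from so Tableau can filter and group by it.
    df["run"] = path
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)
combined.to_csv("tableau_input.csv", index=False)
```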

Demo Data Creator

To mimic an actual website for our tests, we used a WordPress plugin to create a bunch of users, pages, and blog posts so that the site we were testing looked like a real site.
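We went the plugin route, but if you would rather script it, something along these lines against the WordPress REST API does the same job (the site URL, credentials, and post count are placeholders):

```python
# Hypothetical alternative to the plugin approach: create demo posts through
# the WordPress REST API. The site URL, credentials, and counts are placeholders.
import requests

SITE = "https://example-test-site.com"
AUTH = ("demo_user", "application-password-here")  # WordPress application password

for i in range(50):
    response = requests.post(
        f"{SITE}/wp-json/wp/v2/posts",
        auth=AUTH,
        json={
            "title": f"Demo post {i}",
            "content": "Filler content so the test site looks like a real blog.",
            "status": "publish",
        },
        timeout=30,
    )
    response.raise_for_status()
```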

Availability

The problem with using data from the hosting companies themselves is that they have control over the time period and data they want to show. In the beginning, we didn't have a ton of historical data since we had just started our accounts. So, we did our best to track down unbiased third-party data so we could give the best information possible.

Once we get a few months into the testing, we will have enough data from our own tests that we won't need external data any longer. We used New Relic to constantly query our sites and track the uptime history on an hourly basis. It also creates a great graph that we can insert directly into our case study as time progresses.

New Relic

Some may argue that this was overkill for what we needed, but we wanted to use a well-known and credible service to gather the uptime history for our websites. There are free sources such as Pingdom, but we wanted to take it one step further to get the best data possible.
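For anyone curious what an uptime check boils down to conceptually, here is a simplified sketch. This is just to illustrate the idea, not how New Relic works internally; the URL is a placeholder.

```python
# Simplified illustration of an hourly uptime check. New Relic does this
# (and much more) for us; this is only to show the underlying idea.
import time
import requests

SITE = "https://example-test-site.com"  # placeholder URL
results = []  # list of (timestamp, was_up) tuples

while True:
    try:
        up = requests.get(SITE, timeout=10).status_code < 500
    except requests.RequestException:
        up = False
    results.append((time.time(), up))
    uptime_pct = 100 * sum(ok for _, ok in results) / len(results)
    print(f"uptime so far: {uptime_pct:.2f}%")
    time.sleep(3600)  # check once an hour
```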

# Of Websites on Shared Host

It is actually surprising how many people don't realize that shared hosting means they are physically (or, I guess, virtually) sharing the server with other websites. That means the other websites' traffic and performance have a direct impact on your own site.

For many people, this doesn't matter, but if you are having strange performance issues it is usually a good idea to check who else is in your sandbox and whether you need to relocate to get consistent results.

DomainTools Reverse IP Checker

We used DomainTools since they are an established company (with real employees) that constantly maps the internet to find the IP addresses behind websites. It is true that some web hosting companies may share a common IP on a load balancer while running separate servers behind it, but we didn't get any confirmation that this was the case, so we are working under the assumption that there is a 1:1 mapping between IP and physical server in these tests.
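DomainTools does the heavy lifting of mapping this at internet scale, but the basic idea is simple: resolve a set of domains and see which ones land on the same IP address. A rough sketch with placeholder domains:

```python
# Rough sketch of the idea behind a shared-host check: resolve a set of domains
# and group them by IP address. DomainTools maps this across the whole internet;
# the domains below are placeholders.
import socket
from collections import defaultdict

domains = ["example.com", "example.org", "example.net"]

by_ip = defaultdict(list)
for domain in domains:
    by_ip[socket.gethostbyname(domain)].append(domain)

for ip, hosted in by_ip.items():
    print(f"{ip}: {len(hosted)} site(s) -> {', '.join(hosted)}")
```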

Support

In the ideal case, nobody ever has to use support. In many ways it is like insurance for when things go wrong: hard to put a price on, because in the moment, having good support can be priceless. We wanted to cover what support options were available and take it one step further to understand how quickly each host would respond to our requests and how good their answers were.

It was hard to put a "goodness" metric on support, so we share the raw information with you and depend on the comments of long-time users to help fill in the gaps.

Available Support Options

There are really only a few ways you can communicate (live chat, email, and phone), and we expect most 21st-century companies to offer all of them. We collected which options each host provides as well as their hours of service (not all of them are 24 hours a day).

Support Response Time

Getting a response from support is only half the battle; the response also needs to help solve the problem. So, we sent support requests at different times on different days and recorded how long it took them to get back to us as well as their response.