Identifying Convergence and Divergence in Web Design

Every client has an idea of what they would like their website to look like. These opinions can be incredibly helpful to the outcome of a web presence, or incredibly dangerous to it. As strange as it may sound to our new clients, we always open the creative and aesthetic conversation the same way.

This isn’t our project. And it’s not yours either. This project is for the set of eyes on the other side of the screen.

While gathering the general likes and dislikes of the client is important, the onus is on the agency to guide the client toward sound long-term decisions. As is often the case, the client can’t see the forest for the trees. On the other hand, the agency’s view is macro and incomplete. Combining critical thinking with both the inside and the outside perspective will yield a thoughtful and effective presence.

Experimentation

An exercise we repeat for every project is the 20 Second Site Test. Before our first creative meeting, our team assembles a list of 20 highly effective websites at least tangentially related to the industry or vertical the client aims to serve. These 20 websites are always forward-looking, complete, and, for all intents and purposes, quite good.

We use the Likert scale when collecting data. This scale is a widely used approach to scaling responses in survey research. We modify the scale a little, but only in nomenclature. Our scoring system is as follows:

  • Love It (5)
  • Like It (4)
  • Indifferent (3)
  • Dislike It (2)
  • Hate It (1)

Essentially, we have a 5-point scale that we ask each participant to use when grading each site. You can find an example of the Likert scale we used in the Resources section of this article.
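
If you prefer to tally the worksheets in code rather than by hand, a minimal sketch of the scale as a lookup table might look like the following (Python; the label names come from the scale above, everything else is illustrative):

```python
# A minimal sketch of the 5-point scale described above; the labels come
# from the article, the data structure is our own illustration.
SCALE = {
    "Love It": 5,
    "Like It": 4,
    "Indifferent": 3,
    "Dislike It": 2,
    "Hate It": 1,
}

# One participant's worksheet, recorded as labels and converted to scores.
responses = ["Like It", "Hate It", "Indifferent", "Love It"]
scores = [SCALE[label] for label in responses]
print(scores)  # [4, 1, 3, 5]
```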

Establishing Rules Up Front

Ground rules have to be established to maintain the integrity of the data. To do that, we set the following rules.

  • Absolutely no commentary during the 5-minute exercise. Each participant’s grade should not be influenced, especially not by the highest paid person’s opinion (HiPPO).
  • It is human nature to grade each of the sites against each other. While we realize that this may consciously and subconsciously occur, we do warn against it.
  • Always grade each website not according to personal taste or distaste, but through the lens of company, industry, or vertical applicability.

With those rules established, we project or broadcast our screen on a common source and simply scroll through each website for approximately 20 seconds while the participants score each one.

Examining The Findings

We calculate the scores on a common spreadsheet to get a bird’s eye view of what happened. It’s important to analyze the scores as a collective. Find the convergence. Which sites were universally hated? Which were universally liked? Pick a few of these and talk them out with the room.

However, these universal tastes, while valuable, are secondary measures. The scores with more than a 2-point spread between the high and low end show true divergence in opinion and thought. Don’t leave without talking these issues out. Find out why one person liked the site and why another disliked it. Having the participants talk out these issues will quickly put them in a position to solve their own problems before making them yours.
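
If you’d rather script that pass than eyeball the spreadsheet, here is a rough sketch of the check described above. The site names and scores are invented for illustration; the only rule carried over from the exercise is flagging any site with more than a 2-point spread between its highest and lowest score:

```python
# A sketch of the convergence/divergence pass, assuming each site's ratings
# are kept as a list of participant scores (1-5). All names and numbers
# below are illustrative.
site_scores = {
    "site-a.example": [5, 5, 4, 5],   # converges high
    "site-b.example": [2, 1, 2, 2],   # converges low
    "site-c.example": [5, 2, 4, 1],   # diverges: more than a 2-point spread
}

for site, scores in site_scores.items():
    average = sum(scores) / len(scores)
    spread = max(scores) - min(scores)
    verdict = "divergent -- talk it out" if spread > 2 else "convergent"
    print(f"{site}: average {average:.2f}, spread {spread} ({verdict})")
```

The sites flagged as divergent are the ones worth spending discussion time on.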

While the participants may be unaware, you’re collecting much more data than the simple scores. You will uncover preferences around photography, even or uneven blocks, white space, negative space, symmetry and asymmetry, cleanliness versus grunge, and other general creative concepts. On their own, these design decisions are typically not major contributors to, or inhibitors of, a successful presence. What is important is that you give the client a presence they’ll love and that is also loved by the peering and fickle eyes on the other side of the screen.

Keep It Focused But Have Some Fun

We typically see an average score of 3.0 – 3.5 (on the positive side of indifferent). Although it may seem surprising, this is precisely what you want. All 5s or all 1s would give you little or no value at all.

A large group for this exercise is counterproductive. Too many opinions will make it hard to find the golden pieces of data. Invite the point person as well as any other key decision makers in the process, and limit the group to no more than five.

Keep the mood light. An immediate ice breaker after five minutes of silence is to ask the room to vote on which two individuals have the lowest and highest average ratings. We usually provide some sort of superlative. This portion of the exercise is typically light-hearted, with plenty of laughter. Gauge your audience appropriately.

Resources

We’re providing a few free resources cooked up by our staff and used extensively on each of our projects. To successfully expedite a 20 Second Site Test, you’ll need two resources. The first is a worksheet for your participants, and the second is a means to quickly and efficiently derive the necessary data so you can work through the results in real time.

Once you complete the exercise, ask for a few minutes to calculate the results. There are two sheets in the grading matrix, List and Scores. Populate the List sheet with the site names and URLs in the order given on the worksheet. Input the scores, and a few data points will be calculated for you automatically. On the Scores sheet, each person is given their average score (great for picking out immediate outliers – either those who are too cantankerous or too agreeable). On the List sheet, you’ll find each site’s average and standard deviation.
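
If a script suits you better than a spreadsheet, the same summaries are only a few lines of code. This is a rough stand-in for the List and Scores sheets, not the actual grading matrix; every name and score below is a placeholder:

```python
# A rough equivalent of the grading matrix's List and Scores sheets.
# Rows are participants, columns are sites; every name and score is a
# placeholder, not real session data.
from statistics import mean, pstdev

sites = ["site-a.example", "site-b.example", "site-c.example"]
matrix = {
    "Participant 1": [5, 2, 4],
    "Participant 2": [4, 1, 1],
    "Participant 3": [5, 2, 5],
}

# "Scores" sheet: each person's average, handy for spotting the overly
# agreeable or overly cantankerous grader.
for person, scores in matrix.items():
    print(f"{person}: average {mean(scores):.2f}")

# "List" sheet: each site's average and standard deviation (population
# form here; the spreadsheet may well use the sample form instead).
for i, site in enumerate(sites):
    column = [scores[i] for scores in matrix.values()]
    print(f"{site}: average {mean(column):.2f}, std dev {pstdev(column):.2f}")
```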
