Improve conversion rate with emotion tracking: the Coach example

Every webshop knows the problem of conversion rate optimization. How to improve it has been addressed countless times across the web, with endless lists of tips. In the end, though, they all boil down to one thing: A/B testing your adjustments. One post listed 39 things you can adjust. My question to you is: when was the last time you ran 39 A/B tests?

Instead of just giving the standard ‘do this, then that’ outline, I’ll tell you how we analysed the webshop of the popular fashion brand Coach.

Coach reached out to us in July 2018 with the goal of improving the conversion rate of their webshop. They had already run multiple A/B test scenarios, but none of them worked well; in fact, the tests were losing them money.

The most important aspect for us was to ask the question, ‘what is it exactly that you’re after?’. Is it the conversion rate of one particular product that should be improved or something within the conversion funnel itself?

When it comes to conversion rate optimisation, a lot of variables can be tweaked. To narrow it down, it’s always important to ask yourself or your client these questions first.

The answer came quickly: ‘We don’t know, we just want the conversion to go up’.

OK, that was not very helpful, but our goal was to give Coach the biggest bang for their buck. In other words: which changes would result in the highest return on investment (ROI)?

What did we do?

It is never a good idea to start testing anything blindly. To make sure we didn’t take shots in the dark but actually delivered on their request, we asked Coach for their webshop analytics data.

Analytics data

To their credit, they understood that optimization should start with data. They provided us with one week of analytics data, covering a total of 866,130 visitors! That gave us plenty of statistical power; even breaking the data down by landing page would not be a problem. They also noted that conversion percentages stay roughly constant, with only slight fluctuations in visitor numbers. All these details matter when judging how much we can rely on the data itself.

The bad apple

Going through the data, we started looking at the conversion rates based on landing pages. There was one landing page in particular that performed terribly. It was the landing page of the Dreamer handbag. 

The conversion rate from this landing page was 0.05%. Now you might be thinking that this is not so bad, especially compared to average statistics across the web.

Let me add two more details:

  1. Other landing pages had on average 1.5% conversion rate
  2. 10% of the total traffic came in through this landing page

That basically means a lot of potential conversions are lost and advertising money is going down the drain. The next question is, ‘why is the conversion rate so low compared to other pages?’.

Why?

The first, most obvious reason for this conversion rate could be the product itself. But is it? 

To find an answer to that question, we looked again at the analytics data. The answer was quite clear: the product is not the problem!

The Dreamer handbag itself was responsible for 7% of the total revenue. That is massive for a single product, given that other popular products on the Coach webshop accounted for 7.84% of revenue, combined.

Then what is it on that specific landing page?

As you have most likely read in other online articles, now would be the time to go through the list of conversion rate optimisation tips. Oh, and don’t forget to finish your guessing with an A/B test.

Luckily, we don’t have to guess blindly. 

Testing

During our test, we exposed 100 testers to the Coach landing page for the Dreamer handbag. While the testers were viewing the landing page, we recorded their eye movements, their emotions and their heart rate through their webcam. We showed the page for only 10 seconds to capture the initial, implicit reactions and viewing patterns.

This makes it much easier to understand what people are looking at and how they process and react to it emotionally. From this, we can infer the ‘why’ much more easily than by guessing blindly.

Plus, we had all the data collected and analysed within 3 days, generally quicker than preparing all those A/B tests.

Results

From the raw data, we create easy-to-interpret visualisations. For the eye tracking we get a heat map: red = most attention, blue = least attention, no colour = no attention received.

From this it is easy to see that the image within the webshop and the slogan on top receive the most attention.

Now that we know what received attention, let’s have a look at how it was processed. For that, we combine all the emotions we measure (7 in total, you can have a look here) into an easily interpretable Valence score, which ranges from -100 to +100. A negative score means negative emotion; a positive score means positive emotion.
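As a purely illustrative sketch, combining per-emotion scores into a single valence value could look something like the following. The emotion names and weights here are assumptions for the sake of the example, not MindTrace's actual model:

```python
# Purely illustrative: how seven emotion scores (each 0-100) might be
# combined into a single Valence score in [-100, +100]. The emotion
# names and weights are assumptions, not MindTrace's actual model.

POSITIVE = {"happiness": 1.0, "surprise": 0.3}
NEGATIVE = {"anger": 1.0, "sadness": 1.0, "fear": 1.0, "disgust": 1.0, "contempt": 1.0}

def valence(scores):
    """Map per-emotion intensities (0-100) to a value in [-100, +100]."""
    pos = sum(w * scores.get(e, 0) for e, w in POSITIVE.items())
    neg = sum(w * scores.get(e, 0) for e, w in NEGATIVE.items())
    total = pos + neg
    if total == 0:
        return 0.0
    # Normalising by at least 100 keeps mild reactions near zero.
    return 100 * (pos - neg) / max(total, 100)

print(valence({"happiness": 80}))             # clearly positive
print(valence({"anger": 40, "sadness": 20}))  # clearly negative
```

The point is only that a single signed score is far easier to read at a glance than seven separate emotion traces.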

The average Valence score for this landing page was -0.7. You might think that is not so bad considering the range of the score. But keep in mind that this is just a webshop people are looking at, nothing controversial that would stir up emotions much more easily.

This is already quite telling, but let’s dive one step further. We combined the heat map and the Valence score into a Valence map. This makes it simple to see where exactly which type of emotion was present. This way we know exactly what people felt when they looked at each element.

Conclusion and impact

From the Valence map you can easily see that the strongest negative peak is caused by the slogan on the landing page. Looking back at the timeline of the Valence score, that peak comes at ~6 seconds. The image to the left of the products doesn’t get many positive reactions either.

After we saw this reaction to the image, we looked at the other landing pages, and only then did we notice that they have neither an image nor a slogan.

There are now 2 potential routes you can go:

  1. Get rid of the image and the slogan
  2. Tweak image and slogan

If you go down route 1 then, given the analytics data of the other landing pages, the strength of the product itself, and the data we collected, we can say with a fair amount of confidence that the conversion rate should go up. But by how much?

It should move toward the average of the other landing pages (1.5%), but let’s be very conservative and say it only goes up to 0.08%.

The Dreamer handbag costs ~$700. With a conversion rate of 0.05% and ~93,800 visitors, that means around $33k in revenue. Coach is a strong brand, but they do use online advertising. Because the brand is so strong, they could potentially pay as little as $0.10 per visitor arriving via advertising, which comes to $9,380 in online advertising costs. Revenue minus advertising costs: roughly $23,620.

Redoing the calculation with a conversion rate of 0.08% gives the same advertising costs, but revenue minus advertising comes to ~$43k. Oh, and remember that this is per week!
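The arithmetic above fits in a few lines of Python. The visitor count, product price, and the $0.10 cost per visitor are the figures from the text itself:

```python
# Weekly figures from the analysis above.
visitors = 93_800        # weekly visitors on the Dreamer landing page
price = 700              # approximate price of the Dreamer handbag, in $
cost_per_visitor = 0.10  # assumed advertising cost per visitor, in $

def weekly_profit(conversion_rate):
    """Revenue from conversions minus advertising spend, per week."""
    revenue = visitors * conversion_rate * price
    ad_costs = visitors * cost_per_visitor
    return revenue - ad_costs

print(round(weekly_profit(0.0005)))  # at 0.05%: ~23450
print(round(weekly_profit(0.0008)))  # at 0.08%: ~43148
```

Swapping in your own traffic numbers makes it easy to see what a given conversion-rate change is actually worth before committing to any test.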

As you can see, the impact itself is quite strong and we managed to deliver the biggest bang for their buck.

AI Expo Europe: experiences and preparation

Imagine you’re at work and all is going well. You’re working towards your goal and are making progress. All you want is to focus and get things done. Then out of the blue you get an email that offers you something for free. Yeah, right… When did that ever happen?

I still remember when it did happen to us. We got an email from Anna Fry, who was one of the organizers of the AI Expo Europe 2018 that took place at the RAI convention centre in Amsterdam.

They had a startup area, and she offered us a startup stand, free of charge. I have been offered free things before that quickly turned out not to be free at all (more on this in later posts), so I was sceptical, to put it mildly. But in the end it was not a joke or a sales ploy. She genuinely wanted us to be part of the AI Expo 2018. We didn’t think twice about this opportunity and agreed.

I didn’t want to let this opportunity go to waste. It very often happens that people simply don’t value things that they get for free. Behavioural economics backs me up on this. I didn’t want us to fall into this trap. How could we avoid it? Preparation!

Preparation

We’ve been to conferences before, that was nothing new. But at those conferences it was pretty clear what you should do and what you should get out of them. This was a business convention. What do you get out of this one?

That was the first question we needed to answer for ourselves. There are many things you can get out of an expo or convention. You will most likely hear these standard phrases:

  • Exposure – often also called Brand Awareness
  • Recognition – often also called Brand Recognition
  • Feedback
  • Customers – acquisition/retention
  • Close look at the competition
  • Networking

The biggest focus for us was feedback and customer acquisition. Now that we had that clear, we didn’t want to leave it up to chance. There are always two distinct routes when it comes to customers or feedback:

  1. reach a lot of people and have superficial interactions
  2. reach fewer people and have deeper conversations

Our goal was not to get a pat on the back and hear ‘wow, that’s great’. We wanted input and new ideas from within the industry, which means route 2. But who from within the industry?

The good thing is that all speakers are listed on the Expo website. That gives you a good idea of who will be there, what they do, and whether their input could be useful.

No place for shame – you gotta go for it.

After filtering through all the speakers, we simply wrote an email to each to ask if they’d have time to drop by our stand. We could of course also meet anywhere else but people preferred to come to us.

Make sure that your email is genuine and not an obviously mass-sent one. People are generally very willing to help you, but not if they notice that you’re playing a numbers game.

Show and tell

Now that we had taken care of some of the traffic, we needed to make sure we had things to show and give to visitors. I’d say this part falls into 4 categories:

  1. traffic magnet – getting attention
  2. interaction/engagement
  3. retention – make sure to be remembered
  4. tracking – keep track of actions taken

Traffic magnet

We already had some visitors who would come to us for a chat. But what about all the other attendees? To attract them, we needed a traffic magnet. We simply took our laptops and played videos of previous cases on loop. An eye-tracking heat map overlaid on a video is still super interesting, and most people have never seen anything like it before.

Interaction / Engagement

The good thing about the traffic magnet we chose was that it immediately starts a conversation. Most people ask ‘what is that?’, and off you go!

To have even more engagement, we had a demo of our analysis service in combination with the ultimatum game prepared. The ultimatum game is a simple game where you have to interact with someone else. The crucial aspect is that you can predict the behaviour based on facial expressions. More on the game will follow in later posts.

If people wanted their results, they had to give us their email address. This is a nice way to collect email addresses as part of a regular conversation that doesn’t feel like a standard sales pitch.

Retention / tracking

At the end of an interaction, people generally exchange business cards. We prepared special business cards just for this event and made that clear in the text on them. At the same time, we added a dedicated link to the card that you’d only know if you had the card. That allowed us to track its performance online.

We also asked visitors to take a picture of themselves holding our face analysis board. This too had to be prepared and took a lot of time. But keep in mind: do it well once and you can use it again and again.

At the expo

We were lucky enough to get upgraded to a full-size stand. The problem was that it might now look quite empty, as we had prepared for a startup-size stand (half the size). We had been promised 2 chairs and a table; when we got there, we found a very flimsy table and no chairs. The booth looked very, very empty. What do you do?

First, try the official route. There was a help desk to deal with issues like these. I went there, but guess what: it was already super crowded. You have to ask yourself, “where on the priority list will we be?”. Most likely not very high, and the request has to pass through so many ‘people in charge’ that it’s easy to be forgotten.

Steal a bench. Well, I mean borrow a bench.

Second, take matters into your own hands. Right around the corner from our booth stood a few wooden benches. Nobody was sitting there, so we took one and placed it at the side of our booth. Looks cosy, doesn’t it?

Now is the time to shine. All the preparation work will pay off, but only if your behaviour and attitude are right. What do I mean by that?

There are plenty of subtle signals we send that turn people off before we even have a chance to exchange words. Would you approach a stand where the company representative is looking at their phone? NO!

Here are a few things we focused on to make sure we looked easily approachable:

  1. Keep your phone in your pocket.
  2. Don’t sit down!
  3. If you see someone looking, approach them and introduce yourself.
  4. Don’t stand behind the desk; stand next to it.
  5. Put a smile on your face and mean it.
  6. Already in a conversation when new people come by? Simply ask your current visitor whether they would mind if you involved the newcomers. So far, no one has ever said no.

The result

We had a lot of visitors coming to our stand, so many that we periodically caused ‘traffic jams’. We fully reached our goal and ended up with a lot of very valuable feedback and new customers. The greatest thing was that some visitors enjoyed the experience so much that they started advocating for us and brought new people to our stand.

Post expo & lessons learned

After the expo ended, we took the weekend to rest. It was a lot of work, but it paid off: we reached the goals we had set for ourselves. Then it was time to process all the collected emails, phone numbers and contact information, and make sure to act on it!

Here, too, there is no place for shame. People gave you their card; that means they won’t be surprised if you reach out. They might have had a great experience at your stand, but make no mistake: you need to be the one to maintain the connection. That goes for new customers who expressed strong interest during the expo, but also for visitors who gave you a lot of good feedback.

Saying ‘thank you’ matters.

The biggest lesson for us was that the demo needs to be better. Some people really enjoyed playing it, but because we had so many visitors, we didn’t have time to show it to more people. Next time we need a demo that is part of the traffic magnet itself!

Oh, and if you were wondering what the other stands looked like, you can check them out here.


The deep digital twin connects art and technology

It was some time ago now, but we were part of an art piece at the Biennale Istanbul 2018. Plus, we are delighted that it is now located at the Design Museum in London.

You might be wondering, “how does this tech company fit into art?”. Well, I was thinking exactly the same thing when we were approached by Eva Jäger and Guillemette Legrand, who run their own studio and can be found here on Instagram.

They have been fascinated by how technology shapes our society and the speed at which it happens today. More and more of our communication happens via messages, and remote interactions are dominated by the camera. Just think of Skype meetings, FaceTime, Google Hangouts, and countless other options.

Instead of writing or giving long explanatory presentations, Eva and Guillemette focus on interaction: communicating what they want to say by letting people interact with their art.

Their idea was ‘simple’. Create an art piece that simulates a close physical interaction, while at the same time creating distance.

The deep digital twin was born

The people interacting with the deep digital twin sit down and are ‘facing’ each other. But in this scenario they are looking at a screen right above their chair.

On that screen they see their conversational partner and their emotional analysis. That was the part where mindtrace came in.

So close and yet so far.

People loved it! Technology moves at a pace that makes it hard to keep up with what is possible. To many, emotion recognition still sounds like science fiction.

Futuristic picture

Instead of painting a bleak picture of the future, the deep digital twin does exactly the opposite. It engages and sparks conversation and discussion. The goal was to show what fantastic things are possible and also what we should be careful of.

Guillemette and Eva have now been invited to display their deep digital twin in France and Belgium. I’m excited that we are part of this journey!


Setup the MindTrace toolbox

If you haven’t already done so, you can download the toolbox with the button above. It’s also important to install all necessary software for it first. You can do that by checking this post (how to install the toolbox).

If you did all of that already, then great job on installing everything the toolbox needs. The last step is to adjust the MT_server.py file. That file is the heart of the toolbox, and you need to adjust it so that it works with the webcam you want to use.

The most important setting to adjust is which webcam to use. You can also adjust the resolution and the fps (frames per second), which are set to 1280×720 pixels and 30 fps by default. These defaults should work right out of the box, but you can change them to your liking; just make sure everything still works after you do.

Selecting your webcam

These steps differ slightly from operating system to operating system. But don’t worry, we will take you through it step by step. Within the Python subfolder you can find 2 important scripts that help you out:

MT_system_devices.py – This script tells you which devices are available in your system.

MT_device_formats.py – This script helps you figure out the supported formats of a device, in case you want to tinker with it.

You can adjust the webcam, resolution, and the fps (framerate), right at the top of the MT_server.py script.

The default settings for Mac and Linux use the standard integrated webcam. For Windows, you have to check what the name of your webcam is (see below). Next we show you how to find the right webcam if you want to use a different one.
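As an illustration, the settings block at the top of MT_server.py might look roughly like this. Only the variable names mac_webcam, linux_webcam, and windows_webcam come from the per-OS instructions below; the exact layout and example values are assumptions:

```python
# Hypothetical sketch of the settings at the top of MT_server.py.
# Only mac_webcam, linux_webcam, and windows_webcam are named in this
# guide; the real file's layout may differ.

mac_webcam = 0                        # device index from MT_system_devices.py
linux_webcam = "/dev/video0"          # device path from MT_system_devices.py
windows_webcam = "Integrated Camera"  # device name (yours will differ)

width, height = 1280, 720             # default resolution
fps = 30                              # default frame rate
```

Whatever the real file looks like, the workflow is the same: run MT_system_devices.py, copy the index/path/name it reports into the matching variable, and leave resolution and fps at their defaults unless you have a reason to change them.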

Mac


Open the terminal application on your Mac and go to the correct folder that contains the script. As it shows on the image below, simply type in:

python3 MT_system_devices.py

On the example system it was python3, but on yours it could be python; it depends on how you’ve set it up. You will now get a list of the devices accessible on your system. In this example we are going to use the FaceTime HD Camera, which has index 0. This is the default setting in MT_server.py, so if you want to use the FaceTime HD Camera, you don’t have to adjust anything. If you want to use another webcam from the list, assign mac_webcam the index you want to use.

One small issue on Mac is that it does not report the supported formats of the selected device. So if you use the MT_device_formats.py script like this:

python3 MT_device_formats.py 0

The ‘0’ at the end is the device index. In the end it doesn’t matter though, because you’ll get the following:

This means you have to check your system settings or, if you use an external webcam, its specifications.

Run a quick command routine

Now that you have everything set up and have selected the device you want to use, you can run a quick command routine. This is also a script available in the Python subfolder, called MT_connection_example.py. This script does the following:

  1. connects to the MT_server
  2. takes a snapshot through your webcam
  3. starts recording through your webcam
  4. sends information about a calibration point (x, y, duration)
  5. sends information about a fixation cross (x, y, duration)
  6. stops recording
  7. closes the connection to MT_server
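The sequence above could be sketched as a small client routine. Note that the command names and their exact form here are pure assumptions for illustration; the real protocol is whatever MT_connection_example.py actually sends to MT_server:

```python
# Illustrative sketch of the command routine run by MT_connection_example.py.
# The command names mirror the steps listed above; the actual wire protocol
# of MT_server is an assumption and may differ.

def build_routine(cal_point=(640, 360, 2.0), fixation=(640, 360, 1.0)):
    """Return the ordered command sequence of the example routine."""
    cx, cy, cdur = cal_point  # calibration point: x, y, duration
    fx, fy, fdur = fixation   # fixation cross: x, y, duration
    return [
        "connect",
        "snapshot",
        "start_recording",
        f"calibration_point {cx} {cy} {cdur}",
        f"fixation_cross {fx} {fy} {fdur}",
        "stop_recording",
        "disconnect",
    ]

for command in build_routine():
    print(command)
```

The takeaway is simply that your own experiment code talks to the server as a sequence of commands like these, so you can script recordings around whatever stimuli you present.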

You can see an example of this execution in the video below. To do it on your system, open two terminal windows and make sure you are in the correct folder. In one terminal start the MT_server by executing:

python3 MT_server.py

In the other terminal start MT_connection_example by executing:

python3 MT_connection_example.py

You can then find all the recorded data in a subfolder within the Python subfolder.

Linux


Open the terminal and go to the correct folder that contains the script. As it shows on the image below, simply type in:

python3 MT_system_devices.py

On the example system it was python3, but on yours it could be python; it depends on how you’ve set it up. You will now get a list of the devices accessible on your system. In this example, only the internal webcam shows up, which is already the default setting. If you want to use another webcam from the list, assign linux_webcam the /dev/video# you want to use.

In the line right after, you can see that we want to find out what formats are supported by that device. You can use the MT_device_formats.py script like this:

python3 MT_device_formats.py /dev/video0

Run a quick command routine

Now that you have everything set up and have selected the device you want to use, you can run a quick command routine. This is also a script available in the Python subfolder, called MT_connection_example.py. This script does the following:

  1. connects to the MT_server
  2. takes a snapshot through your webcam
  3. starts recording through your webcam
  4. sends information about a calibration point (x, y, duration)
  5. sends information about a fixation cross (x, y, duration)
  6. stops recording
  7. closes the connection to MT_server

You can see an example of this execution in the video below. The video shows it in a Mac terminal but the process is the same. To do it on your system, open two terminal windows and make sure you are in the correct folder. In one terminal start the MT_server by executing:

python3 MT_server.py

In the other terminal start MT_connection_example by executing:

python3 MT_connection_example.py

You can then find all the recorded data in a subfolder within the Python subfolder.

Windows


Open the command line (cmd) and go to the correct folder that contains the script. As it shows on the image below, simply type in:

python MT_system_devices.py

On the example system it was python, but on yours it could be python3; it depends on how you’ve set it up. You will now get a list of the devices accessible on your system. In this example, only the internal webcam shows up, which is already the default setting. If you want to use another webcam from the list, assign windows_webcam the “webcam name” you want to use.

Now you can find out what formats are supported by that device. You can use the MT_device_formats.py script like this:

python MT_device_formats.py "webcam name"

Run a quick command routine

Now that you have everything set up and have selected the device you want to use, you can run a quick command routine. This is also a script available in the Python subfolder, called MT_connection_example.py. This script does the following:

  1. connects to the MT_server
  2. takes a snapshot through your webcam
  3. starts recording through your webcam
  4. sends information about a calibration point (x, y, duration)
  5. sends information about a fixation cross (x, y, duration)
  6. stops recording
  7. closes the connection to MT_server

You can see an example of this execution in the video below. To do it on your system, open two command line windows and make sure you are in the correct folder. In one command line start the MT_server by executing:

python MT_server.py

In the other command line start MT_connection_example by executing:

python MT_connection_example.py

You can then find all the recorded data in a subfolder within the Python subfolder.

How to install the MindTrace toolbox

The MindTrace toolbox functions as an interface between your experiment code and the webcam. After connecting to the toolbox from within your experiment, you can easily send it commands to, for example, start recording via the webcam or take snapshots. Commands for calibration points can be communicated just as easily. Additionally, the toolbox stores all the data in an appropriate format, so that it can be uploaded seamlessly into our system and analysed immediately.


What do you need on your computer to make use of the MindTrace Toolbox?

First of all, you need Python 3 (up to version 3.6.5). Python normally comes pre-installed on Unix-like systems (Mac OS and Linux). However, you need to make sure that you are indeed using Python 3 (up to version 3.6.5). You can easily check your Python version by pasting this line into your terminal:

python -V

If you are already running Python 3, there is no need to install it again. If you execute the commands below in a Mac or Linux environment, the installation command for Python 3 will be ignored, if Python 3 is already installed on your system.
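If you prefer to check from within Python itself, this small sketch tests whether a version tuple falls in the supported range (the range itself, Python 3.0 through 3.6.5, is taken from the requirement above):

```python
import sys

def version_supported(info=sys.version_info[:3]):
    """True if the given (major, minor, micro) tuple is Python 3 up to 3.6.5."""
    return (3, 0, 0) <= tuple(info) <= (3, 6, 5)

# Check the interpreter you are currently running.
print(sys.version.split()[0], "supported:", version_supported())
```

Passing explicit tuples like version_supported((3, 6, 5)) lets you sanity-check the logic without switching interpreters.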

Mac


Mac OS in general has Python 2 pre-installed. Because our Toolbox requires Python 3 and a few additional packages for Python itself and your system, we want to make sure that everything is installed properly. In order to do so, you need to install Homebrew, which is a package manager for Mac OS. Paste the following line into your terminal to install Homebrew:

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Now that we have Homebrew installed, we can continue to install Python 3, ffmpeg (package for handling multimedia data), pip (python package manager), and numpy (scientific computing package).

brew install python3 ffmpeg
sudo easy_install pip3
pip3 install numpy

After you’ve installed everything, you can check here how to set the toolbox up to use the device that you want to use – Setup the toolbox

Linux


Linux already has its own package manager. You can directly continue to install Python 3, ffmpeg (package for handling multimedia data) , pip (python package manager), and numpy (scientific computing package) by executing the following commands:

sudo apt-get install python3 ffmpeg
sudo easy_install pip3
pip3 install numpy

After you’ve installed everything, you can check here how to set the toolbox up to use the device that you want to use – Setup the toolbox

Windows


You can download Python 3 via the following URL: https://www.python.org/downloads/. Select a Python 3 version up to 3.6.5! Make sure that during installation you select the option to add Python to your path, so that you can easily run commands from the terminal. If the installer doesn’t offer that option, it most likely does it automatically.

You can download ffmpeg via the following URL: https://ffmpeg.zeranoe.com/builds/. Unzip ffmpeg directly to your C:\ drive so that the resulting path and folder is C:\ffmpeg. Finally, you need to add the ffmpeg.exe to your environment variables. That will make it easy to call ffmpeg from within the terminal.

In order to do that, open the Start menu and right click on “Computer” and then click on “Properties”.

add ffmpeg windows path step 1

Select “Advanced system settings”:

add ffmpeg windows path step 2

Click on “Environment variables”:

add ffmpeg windows path step 3

Edit the Path variable:

add ffmpeg windows path step 4

Add C:\ffmpeg\bin at the end. Make sure that this path is separated by a semi-colon (;) from the previous folder.

The package manager pip is generally installed together with Python, so there is no need to install it separately. The last step is to install numpy through the command line:

pip install numpy

After you’ve installed everything, you can check here how to set the toolbox up to use the device that you want to use – Setup the toolbox