There’s no ‘eye’ in Google

When we’re creating content, first and foremost we think about user experience. Users love images, so images have always had, and always will have, a place in SEO.

But the fact that Google can’t ‘see’ images is the reason why SEO has – so far – been extremely text-heavy. It’s the reason why we have to spell it out to Google with alt tags. It’s the reason why Captchas stop robots in their tracks. It’s the reason why old-school black-hat SEO involved repeating the same keywords over and over in white text on a white background: Google can ‘read’ words, but it can’t ‘see’ images.

Yet.

The role of images in SEO might be set to change dramatically in the not-too-distant future. Techies are working hard to teach machines to interpret and understand images on a page just as humans do.

You may remember Google’s Image Labeller, which ran from 2006 to 2011. Users were shown a series of random images and asked to label them. This was a win-win: users enjoyed playing the game, while Google used the labels to improve its image search results (it doesn’t always get it right, though).

The idea came from Luis von Ahn, who devised a way of combining human and machine intelligence to perform computational tasks such as image recognition, acquiring complex metadata that neither humans nor machines could produce alone. Von Ahn was also the guy behind Duolingo (translating the web for free), Captcha (anti-robot tests) and reCaptcha (digitising books for free).

But while humans are able, they are not necessarily willing – which is where the idea for a game comes in.

The game worked by pairing users up at random and presenting them with an image to label. The two players couldn’t communicate with each other, and could only pass the level by giving the same answer. This, and other anti-cheat measures, maintained the accuracy of the results (I can only imagine the ‘hilarious’ wrong answers – but that’s another post). A pretty smart way of getting free labour!
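To make the mechanics concrete, here’s a rough sketch of that agreement rule in Python. It isn’t Google’s or von Ahn’s actual code – just an illustration of the idea that a label only counts once two players, working independently, offer the same word (the taboo list stands in, very loosely, for the game’s anti-cheat measures):

```python
# Illustrative ESP-game-style agreement check – a sketch, not the real thing.
# A label is only accepted once both players have typed it independently.

def normalise(word):
    return word.strip().lower()

def agreed_label(guesses_a, guesses_b, taboo=()):
    """Return the first label player A gave that player B also gave.

    guesses_a / guesses_b: labels typed by each player, in order.
    taboo: labels that are off-limits for this image (a stand-in for the
    game's anti-cheat / anti-obvious-answer measures).
    """
    taboo_set = {normalise(w) for w in taboo}
    labels_b = {normalise(w) for w in guesses_b} - taboo_set
    for guess in map(normalise, guesses_a):
        if guess in labels_b:
            return guess  # both players converged on this label
    return None           # no agreement – the pair fails this image

# Two strangers labelling the same photo of a dog on a beach:
print(agreed_label(["animal", "dog", "beach"], ["puppy", "dog"], taboo=["animal"]))
# -> "dog"
```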

Google is clearly giving more and more weight to image recognition. The Google Goggles mobile app was released in 2010, allowing users to search the web based on a photograph. In other words, take a picture of that tall clock tower in London and Google will tell you it’s Big Ben. Since 2010, Goggles has evolved to recognise artwork, brand logos and even wine bottles. (Incidentally, in the same year Facebook developed its group tagging function, which saved users time by recognising their friends’ faces.)

Then in 2011 Google rolled out its ‘search by image’ function, serving up results based on visual similarity.
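Google hasn’t said exactly how ‘search by image’ measures visual similarity, but the basic idea is easy to sketch. The toy Python below (assuming the Pillow imaging library is installed, and nothing like Google’s real pipeline) reduces each image to a crude ‘average hash’ and treats a small Hamming distance between hashes as visual similarity:

```python
# Toy visual-similarity sketch using an 8x8 average hash – purely illustrative.
# Assumes the Pillow package is installed; the filenames below are hypothetical.
from PIL import Image

def average_hash(path, size=8):
    """Shrink to a size x size greyscale image, then mark each pixel as
    brighter (1) or darker (0) than the mean – a crude visual fingerprint."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(hash_a, hash_b):
    """Count the positions where two fingerprints differ."""
    return sum(a != b for a, b in zip(hash_a, hash_b))

if __name__ == "__main__":
    distance = hamming_distance(average_hash("big_ben_1.jpg"),
                                average_hash("big_ben_2.jpg"))
    print(f"Hamming distance: {distance} (lower means more visually similar)")
```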

So where does this leave image recognition today?

Well, while the implications for future SEO are huge, even more important are the potential applications for the blind and people with visual impairments.

I believe that image recognition is going to be a big deal in the next few years. Fortunately, the best SEO advice is simple to follow: pay attention to your relevance signals – not least descriptive alt text – and use images whenever and wherever you can. (A rough way to check your own pages for missing alt text is sketched below.)
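By way of a practical example – and this is only a minimal sketch, assuming Python with the requests and beautifulsoup4 packages installed, not any kind of official audit tool – here’s one way to flag images on a page that are missing descriptive alt text:

```python
# Minimal sketch: list <img> tags with missing or empty alt text on a page.
# Assumes requests and beautifulsoup4 are installed; the URL is a placeholder.
import requests
from bs4 import BeautifulSoup

def images_missing_alt(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    missing = []
    for img in soup.find_all("img"):
        alt = (img.get("alt") or "").strip()
        if not alt:
            missing.append(img.get("src", "<no src>"))
    return missing

if __name__ == "__main__":
    for src in images_missing_alt("https://example.com"):
        print(f"Missing alt text: {src}")
```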

Because users love images. And at the end of the day, however long it takes for Google to grow eyes (and it will), it’s the users you’re working for.

 
