Using Neural Networks to extract meaning from data


Last Tuesday, I attended a meetup of the Society of Data Miners in London. To be honest, I was partly intrigued by the fact that there is a Society of Data Miners, the nascent professional body for the Data Science profession. The talk title, however, was the principal draw: ‘An introduction to deep neural networks’. I’m sorry, I just can’t help wanting to learn.

I was keen to understand more about neural networks and how they are being used commercially. The evening proved educational and thought-provoking. It covered much of the basic theory in a very visual way, and also demonstrated a practical commercial application that brought that theory to life. Interestingly, though, the speaker assumed the audience knew what neural networks were, which I’m sure wasn’t the case for everyone. So let me explain that first…

WHAT IS A NEURAL NETWORK?

If I showed you the following picture, you would undoubtedly be able to describe what you were seeing. You might describe the image slightly differently from me or other people, but you would likely describe its key features in a similar way. Most people viewing the image would understand what’s going on. So what does this have to do with neural networks?

[Image]

Throughout our lives, we have been trained to recognise and interpret what we see. Our guides were our parents and families, our teachers, others we’ve come into contact with, and our own learning and experience. During this process, we have not only learnt to recognise and describe things; our developing brains have also stored a whole variety of information about what we have been taught or discovered. This includes both generic features, e.g. shape and size, and specific features, e.g. faces, colours, and patterns. Importantly, though, it also includes associations with other things, which may involve the other senses, like taste, touch, hearing, and smell, along with timescale, location, who else was present, and so on. This stored information can subsequently be tested, validated, and retrieved as the knowledge underpinning decision-making or action.

A neural network, therefore, describes not only the knowledge we have, but the way we have stored it to aid its subsequent recall, and the process by which it is recalled. Your personal neural network will be very different from mine and from everyone else’s. Fortunately, though, we still share enough common experience to perceive much of the world similarly, irrespective of language.

So, what if we could model neural networks as computer programmes, training them to interpret correctly what they ‘read’ or ‘see’? If that were possible, how could we then use that encoded information to consistently extract the additional implicit knowledge present in the world we see, and what could then be achieved? This is now very much a reality, and a fascinating one at that. Much of it forms the basis for the experiments in Artificial Intelligence that are already starting to impact our lives in a positive way.

AN INTRODUCTION TO LYST

Our speaker for the evening was Eddie Bell (@ejlbell) from Lyst.com, who heads up a small group of eight Data Scientists. Lyst is creating a huge repository of text and images scraped from fashion retailers’ websites. Once retrieved, this data is processed to extract additional implicit knowledge from both the text and the images, using a range of data mining techniques. This implicit knowledge supplements what’s already known about the images and text, and will enable Lyst to provide a range of commercial offerings that benefit consumers, retailers, market trackers, and commentators. The commercial potential is huge. Lyst processes text and images using different approaches.

EXTRACTING SEMANTIC KNOWLEDGE FROM TEXT

Eddie introduced us to the semantic analysis of text using the J. R. Firth quote, ‘You shall know a word by the company it keeps’. Essentially, this states that a word’s meaning is defined by how it occurs in language, based on the words that surround it. Take the word ‘spring’: the surrounding words will likely determine whether you are talking about a water source, the action of leaping forward, a spiral metal device, or a season. Without the surrounding context you are none the wiser, but with it you can make inferences.
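To make the idea of ‘the company a word keeps’ concrete, here is a minimal sketch in Python of collecting the context window around a target word. The sentence, the function name, and the window size are invented for illustration; this isn’t Lyst’s code.

```python
def context_window(tokens, target, size=2):
    """Collect the words within `size` positions of each occurrence of `target`."""
    contexts = []
    for i, token in enumerate(tokens):
        if token == target:
            contexts.append(tokens[max(0, i - size):i] + tokens[i + 1:i + 1 + size])
    return contexts

sentence = "the cold spring water flowed down the hillside".split()
print(context_window(sentence, "spring"))
# [['the', 'cold', 'water', 'flowed']] -- 'water' and 'flowed' suggest a water source
```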

Lyst use the Word2Vec text analysis technique, which works in one of two ways. Either it tries to predict a central word from its known context words, known as continuous bag of words (CBOW), or it tries to predict the context words from a known central word, known as skip-gram. By identifying context in this way, you can build up semantic knowledge, which is then used as additional, very granular descriptive metadata to facilitate discovery.
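As a hedged illustration, the gensim library provides a Word2Vec implementation with both variants. The toy corpus below is invented, and gensim 4.x is assumed (where the dimensionality parameter is vector_size); the talk didn’t specify Lyst’s actual tooling.

```python
from gensim.models import Word2Vec

# A toy corpus of tokenised sentences (real training needs millions of words).
sentences = [
    ["floral", "print", "silk", "dress", "with", "pleated", "skirt"],
    ["leather", "ankle", "boots", "with", "block", "heel"],
    ["pleated", "midi", "skirt", "in", "floral", "silk"],
]

# sg=0 selects CBOW (predict the centre word from its context);
# sg=1 selects skip-gram (predict the context from the centre word).
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1)

# Words that occur in similar contexts end up with similar vectors.
print(model.wv.most_similar("skirt", topn=3))
```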

EXTRACTING SEMANTIC KNOWLEDGE FROM IMAGES USING NEURAL NETWORKS

Images are analysed differently, using industry-standard neural network algorithms for image recognition such as AlexNet and VGG.

The neural networks interrogate images through a series of processing stages known as layers. Each layer analyses a specific aspect of the image, e.g. edges, key edges, colour, placement within the image. Descriptive data about the image is generated at each layer, and the combination of data from these layers provides a unique description of the image. Semantic knowledge can, however, only be derived from this description when it is subsequently compared against image reference data sets, which are constructed from very large bodies of analysed sample images.
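As a sketch of what these layers do in practice, the snippet below loads a pretrained VGG16 from torchvision (one of the industry-standard networks mentioned above) and runs an image through only the first few convolutional layers, which respond to low-level features such as edges and colour. The image file name is hypothetical and torchvision 0.13+ is assumed.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# VGG16 pretrained on the ImageNet reference data set.
weights = models.VGG16_Weights.IMAGENET1K_V1
vgg = models.vgg16(weights=weights).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("dress.jpg")).unsqueeze(0)  # hypothetical image

# Early layers produce low-level feature maps (edges, colour blobs);
# deeper slices of vgg.features produce increasingly abstract descriptions.
with torch.no_grad():
    low_level = vgg.features[:5](image)

print(low_level.shape)  # torch.Size([1, 64, 112, 112])
```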

Image showing the different types of data extracted from an image when it is processed using a neural network

To give an example, we generally know that a cat is a cat, whichever way we are looking at it. This is based on our knowledge of the variety of shapes it normally forms, the range of sizes it normally comes in, and its likely range of colours. Our brain knows these patterns, so when we see a cat, we reference our personal data set defining what a cat should look like and confirm that it is indeed a cat. We can build computer-based neural networks to generate equivalent reference data sets defining the same concept of a cat. These reference data sets are built from large volumes of image data, with unique descriptive information built up for each distinct item that appears within them. When a new image is subsequently analysed against such a reference data set and its description conforms to the concept of a cat, the network can infer that the picture includes a cat, and can likely also define its colour, orientation, and placement, amongst other things.
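Following the cat example, here is a minimal sketch of that comparison step, reusing the same pretrained VGG16, whose reference data set (ImageNet) includes several cat categories. Again, the photo file name is hypothetical, and this illustrates the general technique rather than Lyst’s pipeline.

```python
import torch
from PIL import Image
from torchvision import models

weights = models.VGG16_Weights.IMAGENET1K_V1
vgg = models.vgg16(weights=weights).eval()
preprocess = weights.transforms()  # the preprocessing the weights were trained with

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # hypothetical photo

with torch.no_grad():
    probs = vgg(image).softmax(dim=1)

# Compare the image's description against the reference categories.
top = probs.topk(3)
for p, idx in zip(top.values[0], top.indices[0]):
    print(f"{weights.meta['categories'][idx]}: {p:.1%}")  # e.g. 'tabby: 62.3%'
```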

REFERENCE DATA SETS TO SUPPORT IMAGE ANALYSIS USING NEURAL NETWORKS

Large image reference data sets already exist and can be used directly by these neural network algorithms. Combining the networks with such reference data sets enables Lyst to extract additional semantic knowledge, which can be used to drive their commercial propositions.

This concludes part one of this blog post, which has provided an overview of how semantic knowledge can be extracted from text and images. Part two will go into more technical detail about how the text analysis and neural network techniques work, and will cover how this extracted semantic knowledge can be used within digital products.

If you enjoyed this blog post, please sign up to my monthly newsletter.
