What is deep learning?

Deep learning (DL) is a subfield of machine learning (ML) that uses algorithms inspired by the way neurons work in the human brain. Deep learning builds artificial neural networks with layers modelled on how the human brain works. It is a machine learning technique that teaches computers to do what humans do easily and naturally: learn from the examples we come across.

You can see deep learning technology in driverless cars, which can distinguish a red light from a green one, a pedestrian from a curbside, and even gauge the distance between two cars. It is the technology that enables voice activation on your mobile phone, face recognition on your TV, and gesture control on your personal devices. Deep learning has been getting a significant amount of attention in recent years, and with very good reason.

The history of deep learning

Going back to its origins, deep learning first appeared in 1943, when Warren McCulloch and Walter Pitts used mathematics and algorithms to create a computing system that replicated neural networks. Small advances followed through the 1950s, 1960s, 1970s, and 1980s. The biggest leap came in 1999, with faster computer processing and the arrival of graphics processing units (GPUs). Over the next ten years, clumsy and inefficient systems became a thousand times faster.

It was only in the mid-2000s that deep learning as a term began to crop up regularly in technology conversations. The term became popular after Geoffrey Hinton and Ruslan Salakhutdinov published a paper explaining how a neural network comprising multiple layers could be trained one layer at a time. Google took things to the next level in 2012 with an algorithm that could recognize cats. Known as The Cat Experiment, it used unsupervised learning: shown 10,000,000 images, the system trained itself to recognize cats. It was a partial success, doing better than its forebears but recognizing less than 16 percent of the cats it was shown.

Two years later, Google invested in the UK-based artificial intelligence start-up DeepMind, and in 2016 Google DeepMind's algorithm AlphaGo made history by mastering the complex board game Go and going on to beat a professional human player in a match in Seoul.

Deep learning models constantly learn and improve by iterating over their own algorithms. They are built on artificial neural networks designed to imitate human thinking. Until recently, these neural networks had limited computing power, and therefore limited complexity.

With big data analytics advancing in leaps and bounds, neural networks have become more complex and sophisticated. As a result, computers can now observe, learn, and react to complex situations, sometimes faster than a human mind could. Models continue to be trained with large sets of labelled data and neural networks with many layers. Aided by image classification, translation capabilities, and speech recognition technology, deep learning can even perform pattern recognition with no human aid at all.

What does deep learning achieve?

Deep learning is a part of our everyday lives. For instance, when you upload photographs to Facebook, deep learning helps you by tagging your friends automatically. If you use digital assistants like Siri, Cortana, or Alexa, natural language processing and speech recognition are what help them serve you. When meeting international clients on Skype, you can listen to translations in real time. Your email provider recognizes spam without you needing to do it yourself. The list goes on and on.

A giant like Google has been leveraging deep learning for years and is now working on delivering next-level solutions. Its speech systems can generate speech that mimics the human voice and sounds about as natural as it gets. Google Translate uses deep learning and image recognition to translate voice and written languages. Google's PlaNet can estimate where a photo was taken, and its TensorFlow framework has produced a range of artificial intelligence (AI) applications.

A range of industries now have deep learning at the core of their functioning:

Aerospace and defense

Deep learning is utilized extensively to help satellites identify specific objects or areas of interest and classify them as safe or unsafe for soldiers.

Medical research

The medical research field uses deep learning extensively. In ongoing cancer research, for example, deep learning is used to detect the presence of cancer cells automatically. Researchers at UCLA have created an advanced microscope that produces high-dimensional data used to teach a deep learning application to identify cancer cells with precision. In time, deep learning may allow medical researchers to create personalized medicines tailored to an individual's genome.

Industrial automation

The heavy machinery sector requires a large number of safety measures. Deep learning helps improve worker safety in such environments by detecting any person or object that comes within the unsafe radius of a heavy machine.

Chatbots and service bots

Deep learning drives chatbots and service bots that interact with customers, enabling them to provide intelligent answers to increasingly complex voice- and text-based queries. This capability is constantly evolving.

Image colorization

What was once a slow, manual task can now be entrusted to computers. Black-and-white images can be colored using deep learning algorithms that place the contents of an image in context and recreate them accurately with the right colors.

Facial recognition

Facial recognition powered by deep learning is used not just for a range of security purposes; it will soon enable purchases in stores. It is already widely used in airports to enable seamless, paperless check-ins. Deep learning will take things a step further, allowing facial recognition to serve as a means of payment even when a person has changed their hairstyle or the lighting is less than optimal.

How does deep learning work?

To understand how computers use deep learning, consider a toddler learning to identify a dog. The toddler first learns to associate a picture with the word "dog" as said by an adult. The child then associates the sound of barking with a dog, and begins to say the word, with several variations in pronunciation, until they get it right.

In the same manner, deep learning programs are organized as a hierarchy: the algorithm at each level applies a transformation to its input (this is the learning it does) and builds a statistical model that serves as a reference for the output. Many iterations (just like the child learning to recognize the dog) are factored in until the required level of accuracy is achieved. The several layers, or feature sets, that data must pass through to reach the final level are what led to the technology being called "deep" learning.
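
As an illustrative sketch (not drawn from any particular system), here is what such a stack of layered transformations might look like in PyTorch; the layer sizes and the ten output classes are arbitrary assumptions:

```python
import torch
import torch.nn as nn

# A minimal stack of layered transformations: each layer transforms the
# output of the one before it, which is what makes the network "deep".
model = nn.Sequential(
    nn.Linear(784, 128),  # first layer: raw pixel values -> 128 learned features
    nn.ReLU(),
    nn.Linear(128, 64),   # second layer: features of features
    nn.ReLU(),
    nn.Linear(64, 10),    # final layer: scores for 10 hypothetical classes
)

x = torch.randn(1, 784)   # one flattened 28x28 image's worth of dummy data
scores = model(x)         # the input passes through every layer in turn
print(scores.shape)       # torch.Size([1, 10])
```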

For a child, as in supervised machine learning, every level has to be supervised and the instructions have to be specific: the child depends on a parent, and in ML the result depends on the skills of the programmer or data scientist who defines the set of features that identifies a dog. In deep learning, the program builds the feature sets by itself, without supervision, faster and more accurately.

A child may take months to make the right associations with a dog. A computer program based on deep learning algorithms can achieve this in a matter of minutes, accurately scanning through scores of images and picking out the dogs in them. To achieve and maintain such accuracy, deep learning programs need enormous amounts of training data as well as processing power, neither of which was easily accessible to programmers until cloud computing and big data arrived.

With enough data now available, deep learning programs can create complex hierarchical models with their own iteration-driven output. They can build extremely precise predictive models from huge amounts of unstructured raw data. Going forward, this will play a large role in enabling the Internet of Things (IoT), since most of the data produced by humans and machines is unstructured and thus best handled by deep learning rather than by humans.

Creating strong deep learning methods

There are several approaches to creating strong DL models.

Learning rate decay

The learning rate is a hyperparameter, and perhaps the most important one in deep learning. It determines how much the model changes in response to the estimated error each time its weights are updated. If the learning rate is too high, training becomes unstable; if it is too low, training is likely to take longer than needed. Configuring learning rate decay accurately means the learning rate adapts over the course of training, increasing performance while reducing training time.
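
A minimal sketch of a decay schedule in PyTorch; the initial rate, step size, and decay factor here are illustrative assumptions, not recommendations:

```python
import torch
import torch.nn as nn

# Dummy model and optimizer; lr=0.1 is only a starting point for the sketch.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Halve the learning rate every 10 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

for epoch in range(30):
    # ... one epoch of training (forward pass, loss, backward pass) goes here ...
    optimizer.step()      # update the weights
    scheduler.step()      # then decay the learning rate on schedule
    if epoch % 10 == 0:
        print(epoch, scheduler.get_last_lr())  # watch the rate shrink over time
```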

Transfer learning

This process starts from a model that already knows one task and adapts it to a related one. The existing network is fed new data containing previously unseen classes, and once adjustments are made, new tasks can be performed with better categorization abilities. Because this approach reuses what the network has already learned, the amount of data needed is much smaller, which brings down computing time as well.
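
One common pattern, sketched here in PyTorch with torchvision under the assumption of an ImageNet-pretrained ResNet-18 and a hypothetical five-class dataset, is to freeze the pretrained layers and retrain only a new final layer:

```python
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on ImageNet (an assumed starting point).
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the existing layers so their learned features are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer to fit the new task's categories;
# num_classes = 5 is a placeholder for a hypothetical dataset.
num_classes = 5
model.fc = nn.Linear(model.fc.in_features, num_classes)

# From here, only the new layer is trained on the (much smaller) new dataset.
```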

Training from the bottom up

In this approach, the developer aggregates a large amount of labelled data and then configures a network architecture capable of learning the features and the model from scratch. It works well for new applications, as well as for applications requiring several outputs. However, it remains one of the less-used approaches because of the massive amount of data needed, which adds to the time training takes. A minimal sketch of such from-scratch training follows.
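
This sketch uses synthetic stand-in data; all shapes, sizes, and the epoch count are arbitrary assumptions:

```python
import torch
import torch.nn as nn

# Synthetic stand-ins for a large labelled dataset: 256 examples with
# 20 features each, assigned to one of 3 classes.
inputs = torch.randn(256, 20)
labels = torch.randint(0, 3, (256,))

# A small network whose weights start out random and are learned from scratch.
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 3))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)  # compare predictions to the labels
    loss.backward()                        # compute gradients for every weight
    optimizer.step()                       # nudge every weight toward a better fit
```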

Dropout approach

This method tackles overfitting, a risk in networks with large numbers of parameters. Overfitting is when a model developed on the training data fails to fit the real data. During training, dropout randomly ignores a fraction of the network's units, which prevents the network from depending too heavily on any one of them. The approach has a proven track record of enhancing the performance of neural networks in supervised learning, particularly in speech recognition and document classification, as well as in computational biology.
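
A small sketch in PyTorch; the dropout probability of 0.5 and the layer sizes are illustrative choices:

```python
import torch
import torch.nn as nn

# A small classifier with a dropout layer between its hidden and output layers.
model = nn.Sequential(
    nn.Linear(100, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # randomly zeroes half the activations during training
    nn.Linear(64, 10),
)

model.train()                              # dropout is active in training mode
out_train = model(torch.randn(4, 100))

model.eval()                               # dropout is switched off at inference
out_eval = model(torch.randn(4, 100))
```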

Deep learning has given the fields of AI and ML a massive boost. Its strength lies in breaking tasks down in ways that make it easier for machines to assist us, and in making previously human-only tasks possible. AI is the future, and with the help of deep learning, the things you see in movies may become reality in this lifetime.

Deep learning diagram
