How did we get here so fast?

Last year was the year of AI. 2017 began with the anticipation of organisations jumping on the technological bandwagon to implement nascent technologies as new, innovative solutions to old problems. There was an extraordinary 'Let's look at the art of the possible' attitude, which ended in December with Google’s AutoML AI producing its very own object detection “child” AI, NASNet.

In the final entry of the Journeys into Data Science series, I’ll discuss factors that have brought us to this particular stage in our technical evolution regarding AI, what we can expect from here on in, and how your business can take advantage of such an extraordinary time in human history. (To check out the previous blog in the series, click here. Or if you want to start from the beginning, head over here.)

Data-driven startups, monolithic vendors, Ray Kurzweil and fellow transhumanists aside, few saw the age of AI self-replication arrive this quickly. After a shallow technological climb that lasted centuries – from golems to Babbage and Lovelace – how did we arrive at a future with Skynet?

AI: The child of three parents

So, what drove this growth? Three major factors come to mind, aka the “parents” of AI:

Data - Think back to 2013, when ‘big data’ was the buzzword. According to IBM, 90% of the world's recorded data had been created in the preceding two years! That feeds directly into AI, which needs vast amounts of data to train its algorithms. And the right training data set is absolutely essential for accurate AI-based predictions.

Computing Power – Specifically, cloud. Over the last couple of years, cloud services have become a commodity nearly all industries rely on. Beyond storage and the principal extract, transform, load (ETL) workloads, you now have accessible, affordable and powerful analytics at your disposal.

Algorithms - Most of the algorithms used in AI enable machines to learn – hence the term ‘machine learning’ (ML). While these algorithms have been around for decades, we haven’t had the computational power or the architecture to fully exploit them – until now.

Know the Fundamentals!

Some definitions before we move forward:


Introducing ML Algorithms

There are hundreds of machine learning algorithms available, so below is a list of the 10 most common. Many of these algorithms now form part of commoditised cloud services – and with proprietary algorithms being created daily by a variety of organisations, we have truly entered the algorithmic economy.

10 of the most used ML Algorithms (Follow this link for a more extensive list!)

  • Naïve Bayes Classifier
  • K-Means Clustering
  • Support Vector Machines
  • Apriori
  • Linear Regression
  • Logistic Regression
  • Artificial Neural Networks
  • Random Forests
  • Decision Trees
  • Nearest Neighbours
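To make one of the listed algorithms concrete, here is a minimal pure-Python sketch of a nearest-neighbours classifier – classification by majority vote among the k closest labelled points. The function and data names are illustrative, not from any particular library:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest labelled points.

    `train` is a list of ((x, y), label) pairs; distance is Euclidean.
    """
    by_distance = sorted(train, key=lambda pair: math.dist(pair[0], query))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Two toy clusters: 'a' near the origin, 'b' near (5, 5).
points = [((0, 0), "a"), ((1, 0), "a"), ((0, 1), "a"),
          ((5, 5), "b"), ((6, 5), "b"), ((5, 6), "b")]
print(knn_predict(points, (0.5, 0.5)))  # a
print(knn_predict(points, (5.5, 5.5)))  # b
```

In practice you would reach for a library implementation (these are exactly the algorithms the commoditised cloud services wrap), but the whole idea fits in a dozen lines.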

ML algorithms also fall into three main categories:

Supervised - Algorithms that make predictions on a given set of samples. A supervised machine learning algorithm searches for patterns within the value labels assigned to data points. These algorithms can be used to determine everything from the "trivial" (e.g. whether a picture shows a cat or a dog) to life-changing scenarios (e.g. diagnosing a disease).
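Supervised learning can be sketched in a few lines: given samples with labels attached, fit a model that predicts the label for new samples. This toy example (illustrative names, plain gradient descent, no libraries) trains a one-dimensional logistic regression on hand-labelled points:

```python
import math

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit a 1-D logistic regression (weight w, bias b) by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(w * x + b)))  # predicted probability of class 1
            w -= lr * (p - y) * x                 # gradient step for the weight
            b -= lr * (p - y)                     # gradient step for the bias
    return w, b

def predict(w, b, x):
    """Threshold the predicted probability at 0.5."""
    return 1 if 1 / (1 + math.exp(-(w * x + b))) >= 0.5 else 0

# Labelled samples: values below 3 are class 0, values above 3 are class 1.
xs = [0, 1, 2, 4, 5, 6]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)
print(predict(w, b, 1))  # 0
print(predict(w, b, 5))  # 1
```

The labels are the supervision: the algorithm only learns the cat/dog-style boundary because a human told it which side each training point sits on.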

Unsupervised - Used when dealing with data where no labels are associated with data points. These algorithms organise data into clusters to describe its structure, making complex data simpler to analyse. They are valuable in marketing, where thousands of categories are used to group individuals with similar traits so that specific offerings and services can be targeted at them.
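The clustering idea is easiest to see in code. Below is a minimal k-means sketch: no labels are supplied, yet the algorithm discovers the two groups on its own. For determinism this sketch seeds the centroids with the first k points; real implementations usually pick them at random:

```python
import math

def kmeans(points, k, iters=20):
    """Cluster 2-D points by alternating two steps: assign each point to its
    nearest centroid, then move each centroid to the mean of its cluster."""
    centroids = list(points[:k])  # deterministic seed for this sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Recompute each centroid; keep the old one if its cluster emptied.
        centroids = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

# Six unlabelled points that form two obvious groups.
data = [(0, 0), (1, 1), (0, 1), (10, 10), (11, 10), (10, 11)]
groups = kmeans(data, k=2)
print(groups)
```

Swap the toy coordinates for customer attributes and you have the marketing segmentation described above: similar individuals end up in the same cluster without anyone labelling them first.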

Reinforcement - These algorithms choose an action based on each data point, and later learn how beneficial that decision was. Over time, the algorithm changes its strategy – that is, its mathematical approach – to learn better and maximise the reward. The best-known application of reinforcement learning is self-driving vehicles.
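The act-then-learn loop can be shown with tabular Q-learning, one of the simplest reinforcement learning algorithms. This toy environment (invented for illustration) is a five-cell corridor with a reward for reaching the far end; the agent starts knowing nothing and learns to walk right:

```python
import random

def train_q(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a 5-cell corridor: start at cell 0, reward 1
    for reaching cell 4. Actions: 0 = step left, 1 = step right."""
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(5)]  # q[state][action]: estimated value
    for _ in range(episodes):
        state = 0
        while state != 4:
            # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
            if random.random() < epsilon:
                action = random.randint(0, 1)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            nxt = max(0, state - 1) if action == 0 else state + 1
            reward = 1.0 if nxt == 4 else 0.0
            # Nudge the estimate toward reward plus discounted future value.
            q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
            state = nxt
    return q

q = train_q()
policy = ["left" if qs[0] > qs[1] else "right" for qs in q[:4]]
print(policy)
```

Each step the agent acts, observes how beneficial the action was, and updates its strategy – exactly the loop described above, just on a corridor rather than a road.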

AI as we know it – and beyond!

So, it’s clear that machine learning is a major aspect of AI. Learning from past experiences and using that information to improve future behaviour is the key to success. Stories of successful AI development are as frequent as they are compelling, with the evolution of AI categorised into four stages.

Type 1: Reactive Machines

This is the most basic type of AI. Reactive AI has no capacity to recall past experiences to inform current decisions; it simply perceives the world directly and responds in the moment. Reactive machines perform specific tasks and have no capabilities beyond those duties.

Game-playing AIs, for instance, are reactive. Google’s AlphaGo hit the headlines by beating a top human Go player. To the surprise of many experts, the technology behind AlphaGo isn’t extraordinarily advanced – it uses a neural network to watch developments in the game and responds accordingly.

Type 2: Limited Memory

This type of AI depends on both pre-programmed knowledge and observations of the world around it, whilst simultaneously incorporating new information over time. These machines use past experiences to create responses and are used in autonomous vehicles. But sadly, incoming information is only stored for a limited time – it’s not incorporated into a library of experiences the way human memory is.

So, the next step is to build AI that can maintain full representations and memories of the world and learn to handle new and more difficult situations. The challenge is on!

Type 3: Theory of Mind

In psychology, ‘theory of mind’ refers to an entity’s capacity to form representations not only of the world but also of the other entities within it. AI with a theory of mind would understand that the world contains living creatures with their own motivations and feelings, and that their interactions with each other and with inanimate things can produce both predictable and unpredictable outcomes.

For AI to walk among us, it must function like a human in society. One notable endeavour towards theory of mind AI is Kismet, a robot head built in the late 90s by a Massachusetts Institute of Technology researcher. Kismet can mimic and recognise human emotions. Sadly, while both abilities are key advances towards theory of mind AI, Kismet can’t follow gazes or convey attention to humans – an essential component of natural interaction.

Type 4: Self Aware

Self-aware AI is an extension of theory of mind AI, where machines both understand behavioural cues and have their own self-driven reactions to them. Such systems would form representations of the world and not merely understand their place within it, but be aware of it – in a word, conscious.

While we remain somewhat wary of conscious AI, much of our effort is invested in creating AIs that mimic human capabilities such as memory, learning and the ability to base decisions on past experience. That we have reached this point is telling of who we are as a species (and honestly warrants its own blog). We should consider the complexities of our biological design as well as the motivations that make us human.

So, what does this all mean for you and your business?

The world of AI flows between the fundamental and tangible to the esoteric, just like this article. The progress we’re making on a daily basis is phenomenal – but by the time businesses understand one concept, something new will come along to evolve the landscape. It’s hard to keep up.

How does a business keep up?

Truthfully, nothing beats talking to, interacting with and getting your hands on actual use cases and cutting-edge technologies.

Talk to us at Luxoft. One of the many things we pride ourselves on is our expertise and agnostic attitude towards technology. We have experience in 8 verticals – from healthcare to financial services – and global teams that deploy end-to-end solutions that include high-end technology such as Cloud, Blockchain, DevOps, AI and Deep Learning. Discover what AI & ML can do for your business by speaking to some seriously passionate people!

Check out the following links for a couple of interesting use cases:

And be sure to get in touch with us here!
Maya Dillon
A data scientist with a passion for AI and space, Maya focuses on developing strategic initiatives for businesses, implementing both new technologies and scientific methodologies. She is an experienced and highly acclaimed public speaker, as well as a science and technology evangelist, passionate about creating new initiatives that incorporate cutting-edge AI, machine learning and data science.