Augmented Analytics (1)  :

This includes augmented data preparation, augmented data discovery, and augmented data science and machine learning. Augmented analytics marks the next wave of analytics disruption, moving from semantic-based platforms, through visual-based data discovery platforms, to augmented analytics. A few vendors, both large and small disruptive technology companies, are leading the charge with these technologies. Augmented analytics will also be a key feature of conversational analytics, an emerging paradigm that enables business people to generate queries, explore data, and receive and act on insights in natural language (voice or text) via mobile devices and personal assistants. For example, instead of accessing a daily dashboard, a decision maker with access to Amazon Alexa might say, "Alexa, analyse my sales results for the past three months!" or "Alexa, what are the top three things I can do to improve my close rate today?"

Conversational analytics applications are not yet available "out of the box," and early integrations are immature. Analytics vendors are using APIs and building integrations with the help of partners to make these applications easier to deploy. Gartner expects out-of-the-box and enterprise-ready instances to appear over the next two to five years.

AlphaGo (2)  :

Google’s artificial intelligence group, DeepMind, has unveiled the latest incarnation of its Go-playing program, AlphaGo – an AI so powerful that it derived thousands of years of human knowledge of the game before inventing better moves of its own, all in the space of three days.

Named AlphaGo Zero, the AI program has been hailed as a major advance because it mastered the ancient Chinese board game from scratch, with no human help beyond being told the rules. In games against the 2015 version of AlphaGo – which went on to famously beat the South Korean grandmaster Lee Sedol the following year – AlphaGo Zero won 100 games to 0.

The feat marks a milestone on the road to general-purpose AIs that can do more than thrash humans at board games. Because AlphaGo Zero learns on its own from a blank slate, its talents can now be turned to a host of real-world problems.

Artificial Intelligence (3)  :

Artificial intelligence (AI, also machine intelligence, MI) is intelligent behaviour by machines, rather than the natural intelligence (NI) of humans and other animals. In computer science AI research is defined as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".
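As an illustrative sketch only (not from the cited source), the "intelligent agent" idea above can be expressed as a program that perceives its environment and chooses the action that maximises its chance of success at a goal. The thermostat-style environment, actions and scoring function below are hypothetical.

    # Minimal sketch of an "intelligent agent": perceive the environment,
    # then pick the action that maximises expected success at a goal.
    # The environment, actions and scoring function are made up for illustration.

    def perceive(environment):
        """Read the current state of the (toy) environment."""
        return environment["temperature"]

    def expected_success(action, temperature, target=21.0):
        """Score an action by how close it moves the temperature to the target."""
        effect = {"heat": +1.0, "cool": -1.0, "do_nothing": 0.0}[action]
        return -abs((temperature + effect) - target)   # higher is better

    def act(environment):
        """Choose the action with the highest expected success."""
        temperature = perceive(environment)
        actions = ["heat", "cool", "do_nothing"]
        return max(actions, key=lambda a: expected_success(a, temperature))

    if __name__ == "__main__":
        print(act({"temperature": 18.0}))   # -> "heat"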

The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring "intelligence" are often removed from the definition, a phenomenon known as the AI effect, leading to the quip "AI is whatever hasn't been done yet." For instance, optical character recognition is frequently excluded from "artificial intelligence", having become a routine technology. Capabilities generally classified as AI as of 2017 include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data, including images and videos.

Big Data (4)  :

Big Data is often the fuel of AI: analysing masses of data unlocks knowledge, understanding and wisdom. AI platforms leverage the volume, variety and velocity of information in today’s digital world to learn faster and make increasingly better-informed recommendations, decisions and outcomes.

Access to cheaper and faster real-time data (dynamic Big Data) brings benefits that would not be achievable from static data alone, because Big Data enables real value by supplying relevant information at the point of need and in the context of what the end user is seeking. AI facilitates and delivers the outcomes.

Bots (robot) (5) :

A bot (short for "robot") is a program that operates as an agent for a user or another program or simulates a human activity. On the Internet, the most ubiquitous bots are the programs, also called spiders or crawlers, that access Web sites and gather their content for search engine indexes.

A chatterbot is a program that can simulate talk with a human being. One of the first and most famous chatterbots (prior to the Web) was Eliza, a program that pretended to be a psychotherapist and answered questions with other questions.
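A chatterbot of the Eliza kind can be approximated with simple pattern-and-response rules. The sketch below is a toy illustration of the approach, not Eliza's actual rule set.

    import re

    # Toy Eliza-style chatterbot: match the user's sentence against simple
    # patterns and answer a question with another question.
    RULES = [
        (r"i feel (.*)", "Why do you feel {0}?"),
        (r"i am (.*)",   "How long have you been {0}?"),
        (r"my (.*)",     "Tell me more about your {0}."),
        (r"(.*)",        "Can you elaborate on that?"),   # fallback rule
    ]

    def respond(message):
        text = message.lower().strip(".!?")
        for pattern, template in RULES:
            match = re.fullmatch(pattern, text)
            if match:
                return template.format(*match.groups())

    print(respond("I feel anxious about work"))  # -> "Why do you feel anxious about work?"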

Red and Andrette were names of two early programs that could be customized to answer questions from users seeking service for a product. Such a program is sometimes called a virtual representative or a virtual service agent.

A shopbot is a program that shops around the Web on your behalf and locates the best price for a product you're looking for. There are also bots such as OpenSesame that observe a user's patterns in navigating a Web site and customize the site for that user.

A knowbot is a program that collects knowledge for a user by automatically visiting Internet sites and gathering information that meets certain specified criteria.

Note: Because most bots are based on preloaded algorithms, bots are often considered to sit outside the term AI. It should be noted, however, that advances in so-called smart bots (such as those offered on the Microsoft Bot Framework) mean they are evolving towards interacting in a more human way, using cognitive science approaches rather than prescriptive methodologies.

Cognitive Computing (6)  :

Cognitive computing (CC) describes technology platforms that, broadly speaking, are based on the scientific disciplines of artificial intelligence and signal processing. These platforms encompass machine learning, reasoning, natural language processing, speech recognition and vision (object recognition), human–computer interaction, dialog and narrative generation, among other technologies.

At present, there is no widely agreed upon definition for cognitive computing in either academia or industry.

In general, the term cognitive computing has been used to refer to new hardware and/or software that mimics the functioning of the human brain and helps to improve human decision-making. In this sense, CC is a new type of computing with the goal of more accurate models of how the human brain/mind senses, reasons, and responds to stimulus. CC applications link data analysis and adaptive page displays (AUI) to adjust content for a particular type of audience. As such, CC hardware and applications strive to be more effective and more influential by design.

IBM describes the components used to develop, and behaviours resulting from, "systems that learn at scale, reason with purpose and interact with humans naturally". According to IBM, while cognitive computing shares many attributes with the field of artificial intelligence, it differentiates itself via the complex interplay of disparate components, each of which comprises its own mature discipline.

Some features that cognitive systems may express are:

  • Adaptive: They may learn as information changes, and as goals and requirements evolve. They may resolve ambiguity and tolerate unpredictability. They may be engineered to feed on dynamic data in real time, or near real time.
  • Interactive: They may interact easily with users so that those users can define their needs comfortably. They may also interact with other processors, devices, and Cloud services, as well as with people.
  • Iterative and stateful: They may aid in defining a problem by asking questions or finding additional source input if a problem statement is ambiguous or incomplete. They may "remember" previous interactions in a process and return information that is suitable for the specific application at that point in time.
  • Contextual: They may understand, identify, and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulations, user’s profile, process, task and goal. They may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gestural, auditory, or sensor-provided).

Cognitive Science (7)  :

Cognitive Science is the science that includes Artificial Intelligence as one element, alongside areas such as Linguistics, Anthropology, Psychology, Neuroscience and Philosophy.

Source: Cognitive Science Society

Computer vision (8)  :

Computer vision is an interdisciplinary field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do.

Computer vision tasks include methods for acquiring, processing, analysing and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the forms of decisions. Understanding in this context means the transformation of visual images (the input of the retina) into descriptions of the world that can interface with other thought processes and elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.

As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models for the construction of computer vision systems. Also see: Image Recognition

Data Mining (9)  :

Data mining is the computing process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. It is an essential process in which intelligent methods are applied to extract data patterns, and an interdisciplinary sub-field of computer science. The overall goal of the data mining process is to extract information from a data set and transform it into an understandable structure for further use. Aside from the raw analysis step, it involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating. Data mining is the analysis step of the "knowledge discovery in databases" process, or today, can also be accelerated through Big Data Discovery programs.

The term is a misnomer, because the goal is the extraction of patterns and knowledge from large amounts of data, not the extraction (mining) of data itself. It is also a buzzword, frequently applied to any form of large-scale data or information processing {as part of Big Data programs} – collection, extraction, warehousing, analysis, and statistics – as well as any application of computer decision support systems, including artificial intelligence, machine learning, and business intelligence. Often the more general terms (large-scale) data analysis and analytics – or, when referring to actual methods, artificial intelligence and machine learning – are more appropriate. {Although today it may be said that, because mining delivers value through analytics and outcomes and in support of decision making, data mining is a core part of the Big Data and Analytics process and can also be extended to AI methods at the front end.}
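As a hedged illustration of pattern extraction (not from the cited source), the sketch below counts which pairs of items most often appear together in a set of transactions – a miniature version of the frequent-pattern mining that data mining tools perform at scale. The transaction data is invented.

    from collections import Counter
    from itertools import combinations

    # Hypothetical transaction data: each row is one customer basket.
    transactions = [
        {"bread", "milk", "butter"},
        {"bread", "milk"},
        {"milk", "cereal"},
        {"bread", "butter"},
        {"bread", "milk", "cereal"},
    ]

    # Count how often each pair of items is bought together.
    pair_counts = Counter()
    for basket in transactions:
        for pair in combinations(sorted(basket), 2):
            pair_counts[pair] += 1

    # The most frequent pairs are candidate "patterns" in the data.
    for pair, count in pair_counts.most_common(3):
        print(pair, count)   # e.g. ('bread', 'milk') 3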

Deep Learning (10)  :

Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised or unsupervised.

Some representations are loosely based on interpretation of information processing and communication patterns in a biological nervous system, such as neural coding that attempts to define a relationship between various stimuli and associated neuronal responses in the brain. Research attempts to create efficient systems to learn these representations from large-scale, unlabelled data sets.

Deep learning architectures such as deep neural networks, deep belief networks and recurrent neural networks have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics and drug design where they produced results comparable to and in some cases superior to human experts.
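As a minimal sketch (assuming the PyTorch library is available), a "deep" network is simply a neural network with several stacked layers trained end to end on data. The toy dataset and layer sizes below are arbitrary and for illustration only.

    import torch
    from torch import nn

    # Toy deep network: several stacked layers learn a representation of the
    # input, rather than relying on task-specific hand-coded rules.
    model = nn.Sequential(
        nn.Linear(10, 64), nn.ReLU(),
        nn.Linear(64, 32), nn.ReLU(),
        nn.Linear(32, 2),              # two output classes
    )

    # Made-up training data: 256 examples with 10 features each.
    X = torch.randn(256, 10)
    y = torch.randint(0, 2, (256,))

    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for epoch in range(20):            # a few passes over the data
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()                # backpropagate the error
        optimizer.step()               # adjust the weights

    print(f"final training loss: {loss.item():.3f}")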

Generalised AI (or Artificial General Intelligence - AGI) (11)  :

This is the future concept of a machine that would be able to do any job a human can do, without being managed. It is a primary goal of some artificial intelligence research and a common topic in science fiction and future studies.

The difference today is that the majority of AI research focuses on creating applied or specialised AI devices or processes.

This concept is often linked to the term ‘singularity’, the point at which machines may become far more intelligent than humans. The term post-singularity signifies the period in which super-intelligence would be commonplace, with humans potentially being overtaken intellectually by machines.

Image Recognition (12)  :

The ability for machines to learn to recognise and classify objects visually is an important foundation of AI. It is useful because it allows visual information to be interpreted and can support human thought processes and responses. Using static photos, videos or live feeds, computers are being taught to classify images according to what is shown, using pattern recognition to identify key features. Advances in machine learning, and the ability for computers to teach themselves from vast image databases, increase the accuracy of the results.
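A hedged, minimal example of that idea, using scikit-learn's bundled handwritten-digit images rather than photos or video: the classifier learns visual patterns from labelled examples and is then scored on images it has never seen.

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Small labelled image dataset: 8x8 pixel images of handwritten digits.
    digits = load_digits()
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    # Learn visual patterns from the training images...
    classifier = SVC(gamma=0.001)
    classifier.fit(X_train, y_train)

    # ...then classify images the model has never seen.
    print("accuracy on unseen images:", classifier.score(X_test, y_test))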

Machine Learning (ML) (13) :

Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level AI. At its most basic it is technology designed around the principle that rather than having to teach machines to carry out every task, we should just be able to feed them data and allow them to work out the rules by themselves. This is done through a process of simulated trial-and-error scenarios in which machines crunch datasets through algorithms that can adapt based on what they learn from the data to more efficiently process subsequent data.
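A minimal sketch of that principle, with a made-up dataset: instead of coding a rule such as "large transfers at night are risky" by hand, we hand the algorithm labelled examples and let it work the rule out itself.

    from sklearn.tree import DecisionTreeClassifier

    # Made-up examples: [amount_in_pounds, hour_of_day] -> 1 = risky, 0 = normal.
    X = [[5000, 2], [7000, 3], [6500, 1], [20, 14], [35, 10], [50, 16]]
    y = [1, 1, 1, 0, 0, 0]

    # No rules are programmed; the model works them out from the data.
    model = DecisionTreeClassifier().fit(X, y)

    print(model.predict([[6000, 2], [25, 12]]))   # -> [1 0]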

Natural Language Processing (NLP) (14) :

Natural language processing (NLP) is the ability of a computer program to understand human language as it is spoken. NLP is a component of artificial intelligence (AI). The development of NLP applications is challenging because computers traditionally require humans to "speak" to them in a programming language that is precise, unambiguous and highly structured, or through a limited number of clearly enunciated voice commands. Human speech, however, is not always precise -- it is often ambiguous and the linguistic structure can depend on many complex variables, including slang, regional dialects and social context.
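A minimal, hedged sketch of one common NLP building block – turning free text into numbers a model can work with and then classifying it – assuming scikit-learn is available; the example sentences and labels are invented.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Invented training sentences with sentiment labels.
    texts = ["great service, very happy", "terrible delay, very unhappy",
             "really helpful support", "awful experience, never again"]
    labels = ["positive", "negative", "positive", "negative"]

    # Bag-of-words features plus a simple classifier: the model never "understands"
    # grammar, it learns statistical associations between words and labels.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)

    print(model.predict(["the support was great"]))   # -> ['positive']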

Neural Networks (15)  :

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years.

A neural network operates similarly to the brain’s neural network. A “neuron” in a neural network is a simple mathematical function capturing and organizing information according to an architecture. The network closely resembles statistical methods such as curve fitting and regression analysis. A neural network consists of layers of interconnected nodes. Each node is a perceptron and resembles a multiple linear regression. The perceptron feeds the signal generated by a multiple linear regression into an activation function that may be nonlinear.

In a multi-layered perceptron (MLP), perceptrons are arranged in interconnected layers. The input layer receives input patterns. The output layer contains classifications or output signals to which input patterns may map. For example, the patterns may be a list of quantities for technical indicators regarding a security; potential outputs could be “buy,” “hold” or “sell.” Hidden layers adjust the weightings on the inputs until the error of the neural network is minimal. It is theorized that hidden layers extract salient features in the input data that have predictive power with respect to the outputs. This describes feature extraction, which performs a function similar to statistical techniques such as principal component analysis.
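A minimal NumPy sketch of the multi-layered perceptron just described: an input layer, one hidden layer with a nonlinear activation, and an output layer. The weights here are random rather than trained, since the point is only to show the forward pass; the layer sizes and "buy/hold/sell" outputs are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(0.0, x)        # nonlinear activation function

    # Forward pass of a tiny MLP: 4 inputs -> 8 hidden units -> 3 outputs.
    # Each layer is a weighted sum (like a multiple linear regression)
    # fed through an activation function.
    W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

    def forward(x):
        hidden = relu(x @ W1 + b1)       # hidden layer extracts features
        scores = hidden @ W2 + b2        # output layer: one score per class
        return scores

    x = rng.normal(size=4)               # one input pattern
    print(forward(x))                    # e.g. scores for "buy", "hold", "sell"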

Neural networks are widely used in financial operations, enterprise planning, trading, business analytics and product maintenance. Neural networks are common in business applications such as forecasting and marketing research solutions, fraud detection and risk assessment.

A neural network analyses price data and uncovers opportunities for making trade decisions based on thoroughly analysed data. The networks can detect subtle nonlinear interdependencies and patterns other methods of technical analysis cannot uncover. However, a 10% increase in efficiency is all an investor can expect from a neural network. There will always be data sets and task classes for which previously used algorithms remain superior. The algorithm is not what matters; it is the well-prepared input information on the targeted indicator that determines the success of a neural network.

Robot Process Automation (RPA) (16)  :

RPA is not part of AI, as it is based on the simulation of human tasks, using software robots to carry out those tasks. It emulates the processes of accessing and reading data from different sources in order to carry out pre-determined actions, which may include, for example, putting together complex monthly reports or simply replying to a request. The technology is non-intrusive and usually accesses existing systems. An interesting new approach, however, is research into how these processes can be automated from the start: instead of training the technology to carry out these actions, it may literally be able to learn by copying humans at their work, using AI approaches to make it even less reliant on human intervention. The idea of RPA is that monotonous, complex or time-consuming processes are handled by the technology, leaving humans to do more interesting and valuable work.
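As a hedged illustration only (the file names and columns below are hypothetical), a software "robot" of this kind often amounts to a script that reads data exported from existing systems and assembles a routine report without human involvement.

    import glob
    import pandas as pd

    # Hypothetical RPA-style task: gather CSV exports from existing systems
    # and assemble a monthly summary report, with no human involvement.
    frames = [pd.read_csv(path) for path in glob.glob("exports/sales_*.csv")]
    sales = pd.concat(frames, ignore_index=True)

    # Summarise sales by region for the monthly report.
    report = (sales.groupby("region")["amount"]
                   .agg(total="sum", average="mean")
                   .round(2))

    report.to_csv("monthly_sales_report.csv")   # output ready for distribution
    print(report)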

Semantic Analysis  (17) :

The key to Semantic Analysis is for machines to understand human language, its derived meaning and context, together with the relevant relationships. In today’s business environment, unstructured information has become one of the most valuable resources in an organisation and its rate of growth is staggering. The need to extract information from internal and external sources and integrate it into existing systems for use in key business decisions is critical to maintaining a competitive advantage. The promise was that all information would be available, required facts would be a click away and intelligent agents would locate things. The reality is that the volume and complexity have spiralled out of control, the quality of publicly available information is suspect, and the context in which that information was created is seldom obvious. For those reasons, organisations struggling to extract value from the 80% of information inside the enterprise that is unstructured (where most of the organisation's human intelligence resides) are now overwhelmed by a flood of available, but not very usable, information.

The real problem is description. It is difficult to extract value from information unless it is well described; the context, the topics it pertains to and the relationships between facts must be clearly identified in a way that machines can interpret, and relying on humans alone to determine information context and validity does not scale. Semantic technologies may allow the formation of standards and provide a common framework that lets technology create meaningful relationships between disparate resources and make the meaning of those resources explicit. Model management tools can be web-based and task-centric, and include workflow and life-cycle management capabilities to support information governance and the changing role of information scientists. Ontology mapping and linked data strategies would need to be supported, alongside improved fine-grained classification strategies, to provide the precise, descriptive metadata that allows machines to interpret volumes of information and helps organisations derive value, gain insight and drive business decisions.
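A small, hedged sketch of the underlying idea, assuming the rdflib library (a common Python toolkit for linked data): facts are described as subject–predicate–object triples with explicit, machine-readable relationships that can then be queried. The entities and relationships below are made up.

    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/")   # hypothetical vocabulary
    g = Graph()

    # Describe facts as explicit subject-predicate-object relationships.
    g.add((EX.Alice, EX.worksFor, EX.AcmeCorp))
    g.add((EX.AcmeCorp, EX.locatedIn, EX.London))
    g.add((EX.Alice, EX.hasRole, Literal("Data Scientist")))

    # Because the relationships are explicit, a machine can answer questions
    # such as "who works for which organisation?" without parsing prose.
    for person, _, org in g.triples((None, EX.worksFor, None)):
        print(person, "works for", org)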

Specialist AI (18) :

The form of AI becoming commonplace in business, scientific research and our everyday lives – usually in the form of applications designed to carry out one specific task in an increasingly efficient way. This could be anything from giving you tips on improving your fitness by monitoring exercise patterns, to predicting when machinery will break down on a production line, to spotting genetic indicators of illness in a human gene sequence.

Supervised Learning (19) :

The majority of practical machine learning uses supervised learning.

Supervised learning is where you have input variables (x) and an output variable (Y) and you use an algorithm to learn the mapping function from the input to the output: Y = f(X).

The goal is to approximate the mapping function so well that when you have new input data (x) that you can predict the output variables (Y) for that data.

It is called supervised learning because the process of an algorithm learning from the training dataset can be thought of as a teacher supervising the learning process. We know the correct answers, the algorithm iteratively makes predictions on the training data and is corrected by the teacher. Learning stops when the algorithm achieves an acceptable level of performance.

Supervised learning problems can be further grouped into regression and classification problems.

- Classification: A classification problem is when the output variable is a category, such as “red” or “blue” or “disease” and “no disease”.

- Regression: A regression problem is when the output variable is a real value, such as “dollars” or “weight”.

Some common types of problems built on top of classification and regression include recommendation and time series prediction respectively.

Some popular examples of supervised machine learning algorithms are:

- Linear regression for regression problems.

- Random forest for classification and regression problems.

- Support vector machines for classification problems.
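A minimal sketch of the regression case, assuming scikit-learn and using invented data: the algorithm is shown inputs x and the "correct answers" Y, and learns an approximation of the mapping f so that it can predict Y for new inputs.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Invented training data where the true mapping is roughly Y = 3x + 4 plus noise.
    rng = np.random.default_rng(42)
    x = rng.uniform(0, 10, size=(50, 1))
    Y = 3 * x[:, 0] + 4 + rng.normal(scale=0.5, size=50)

    # The "teacher" is the set of known answers Y; the model approximates f in Y = f(x).
    model = LinearRegression().fit(x, Y)
    print(model.coef_[0], model.intercept_)   # close to 3 and 4

    # Predict the output for new, unseen inputs.
    print(model.predict([[2.0], [7.5]]))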

As a simple example, imagine an AI fraud detection algorithm designed to flag suspicious transactions at a bank. In supervised learning, data is matched against previously labelled outcomes to look for patterns in transactions, such as their point of origin, size or the time of day they take place. These patterns may indicate that some transactions are suspicious. As new suspicious transactions are identified, the algorithm adapts to “learn” that other features of the newly identified suspicious transactions may also be indicators of fraud. In this way, a supervised learning system can learn to identify fraud from characteristics that were not highlighted in its initial training data as indicators of fraud.

Training Data (20)  :

In machine learning, the training data is the data initially given to the program to “learn” and identify patterns. Afterwards, more test data sets are given to the machine learning program to check the patterns for accuracy.
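A small sketch of that split, assuming scikit-learn and one of its bundled datasets: the program "learns" from the training portion, and its patterns are then checked for accuracy on test data it has not seen.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    data = load_iris()

    # Training data: what the program initially learns patterns from.
    # Test data: held-back examples used to check those patterns for accuracy.
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, test_size=0.3, random_state=1)

    model = KNeighborsClassifier().fit(X_train, y_train)
    print("accuracy on unseen test data:", model.score(X_test, y_test))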

In the area of autonomous vehicles, the amount of data needed to train systems to be ‘road ready’ is enormous. There are therefore companies that share or sell this data to enable manufacturers to test their technology in an accelerated way, as opposed to physically doing all the driving themselves. Expecting technology to act like a ‘human’ is a big demand.

Unsupervised Learning (21)  :

Unsupervised learning is where you only have input data (X) and no corresponding output variables.

The goal for unsupervised learning is to model the underlying structure or distribution in the data in order to learn more about the data.

This is called unsupervised learning because, unlike supervised learning above, there are no correct answers and there is no teacher. Algorithms are left to their own devices to discover and present the interesting structure in the data.

Unsupervised learning problems can be further grouped into clustering and association problems.

- Clustering: A clustering problem is where you want to discover the inherent groupings in the data, such as grouping customers by purchasing behaviour.

- Association: An association rule learning problem is where you want to discover rules that describe large portions of your data, such as people that buy X also tend to buy Y.

Some popular examples of unsupervised learning algorithms are:

  • k-means for clustering problems.
  • Apriori algorithm for association rule learning problems.

This approach to the problem of data classification has tremendous potential for developing machines that more closely emulate our own thought and decision-making processes, but it also requires huge amounts of processing power compared with the processing needed for supervised learning.
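A hedged sketch of the clustering case, assuming scikit-learn's k-means implementation and invented, unlabelled customer data: the algorithm is given only the inputs and discovers the groupings on its own.

    import numpy as np
    from sklearn.cluster import KMeans

    # Invented, unlabelled customer data: [annual_spend, visits_per_month].
    rng = np.random.default_rng(7)
    low_spenders  = rng.normal([200, 2],  [30, 1], size=(20, 2))
    high_spenders = rng.normal([900, 10], [50, 2], size=(20, 2))
    X = np.vstack([low_spenders, high_spenders])

    # No correct answers are supplied; k-means discovers the groupings itself.
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

    print(kmeans.cluster_centers_)                 # two discovered customer segments
    print(kmeans.labels_[:5], kmeans.labels_[-5:]) # cluster assignment per customer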

Note: everis has developed a Cognitive Applications Development Platform called everisMoriarty to enable fast adoption of Advanced Analytics and AI programs, allowing solutions to be built much faster than with traditional methods. It can make use of other technologies such as IBM Watson, and languages such as Python and R, and enables near-immediate deployment into production. It allows drag-and-drop methodologies and, in most cases, will not require coding if companies already have the building blocks or advanced technologies in place. It uses both open-source and proprietary software where available to the end user.


(1) Gartner (Licensed for Distribution)

(2) The Guardian – 18th October 2017

(3) Wikipedia – AI

(5) TechTarget – Microservices

(6) Wikipedia – Cognitive Computing

(8) Wikipedia – Computer vision

(9) (Based on) Wikipedia – Data mining.

(10) Wikipedia – Deep learning

(13) Coursera (MIT) and Bernard Marr (blog)

(14) TechTarget – Definition

(15) Investopedia

(17) Smartlogic (White Paper)

(18) Bernard Marr

(19) Machine Learning Mastery – Dr J Brownlee / Bernard Marr (example paragraph)

(21) Machine Learning Mastery – Dr J Brownlee / Bernard Marr