5 artificial intelligence (AI) types, defined
By Stephanie Overby
Do you understand the main types of AI, how they work, and where they add value? Let’s break down machine learning, deep learning, natural language processing, computer vision, and explainable AI
Artificial intelligence (AI) is redefining the enterprise’s notions about extracting insight from data. Indeed, 91 percent of technology executives and 84 percent of the general public believe that AI is the “next technology revolution,” according to Edelman’s 2019 Artificial Intelligence (AI) Survey. PwC has predicted that AI could contribute $15.7 trillion to the global economy by 2030.
AI, in short, is a pretty big deal. However, it’s not a monolithic entity: There are multiple flavors of cognitive capabilities. Understanding the various types of AI, how they work, and where they might add value to the business is critical for both IT and line-of-business leaders.
Five important kinds of AI
Let’s break down five types of AI and sample uses for them:
Machine learning (ML)
ML is perhaps the most relevant subset of AI to the average enterprise today. As explained in the Executive’s guide to real-world AI, our recent research report conducted by Harvard Business Review Analytic Services, ML is a mature technology that has been around for years.
ML is a branch of AI that empowers computers to self-learn from data and apply that learning without human intervention. When facing a situation in which a solution is hidden in a large data set, machine learning is a go-to. “ML excels at processing that data, extracting patterns from it in a fraction of the time a human would take, and producing otherwise inaccessible insight,” says Ingo Mierswa, founder and president of the data science platform RapidMiner.
ML use cases
ML powers risk analysis, fraud detection, and portfolio management in financial services; GPS-based predictions in travel; and targeted marketing campaigns, to list a few examples.
ML can get better at completing tasks over time based on the labeled data it ingests, explains ISG director of cognitive automation and innovation Wayne Butterfield, or it can power the creation of predictive models to improve a plethora of business-critical tasks.
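To make the idea of "learning from labeled data" concrete, here is a deliberately tiny sketch: a nearest-centroid classifier built with only the standard library. The fraud-detection framing and the features (transaction amount, hour of day) are invented for illustration; production fraud models are far more sophisticated.

```python
# Illustrative only: a minimal "learn from labeled examples" classifier.
# It averages the feature vectors for each label (the "training"), then
# assigns new inputs to the label with the closest centroid.

def train(samples):
    """Compute one centroid (mean feature vector) per label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical labeled history: [amount_in_dollars, hour_of_day]
labeled = [
    ([20.0, 14], "legit"), ([35.0, 10], "legit"),
    ([900.0, 3], "fraud"), ([750.0, 2], "fraud"),
]
model = train(labeled)
print(predict(model, [25.0, 12]))   # small daytime purchase -> legit
print(predict(model, [800.0, 4]))   # large overnight purchase -> fraud
```

The point is the workflow, not the algorithm: patterns are extracted from labeled history once, then applied to new data without human intervention.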
Deep learning
An explainer article by AI software company Pathmind offers a useful analogy: Think of a set of Russian dolls nested within each other. “Deep learning is a subset of machine learning, and machine learning is a subset of AI, which is an umbrella term for any computer program that does something smart.”
In our plain English primer on deep learning, we offer this basic definition: the branch of AI that tries to closely mimic the human mind. With deep learning, CompTIA explains, “computers analyze problems at multiple layers in an attempt to simulate how the human brain analyzes problems. Visual images, natural language, or other inputs can be parsed into various components in order to extract meaning and build context, improving the probability of the computer arriving at the correct conclusion.”
Deep learning uses so-called neural networks, which “learn from processing the labeled data supplied during training, and uses this answer key to learn what characteristics of the input are needed to construct the correct output,” according to one explanation provided by DeepAI. “Once a sufficient number of examples have been processed, the neural network can begin to process new, unseen inputs and successfully return accurate results.”
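The training loop described above can be shown at its smallest possible scale: a single sigmoid neuron, trained by gradient descent against an "answer key" (here, the logical AND function). Real deep learning stacks many layers of such units; this standard-library sketch only illustrates the learn-from-labeled-examples mechanic.

```python
# A single artificial neuron trained on labeled data. Weights start at
# zero and are nudged after each example in the direction that reduces
# the error between the neuron's output and the known answer.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Labeled training data: inputs and the "answer key" (targets) for AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = w2 = b = 0.0
lr = 1.0                          # learning rate
for _ in range(5000):             # repeated passes over the examples
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        grad = (out - target) * out * (1 - out)  # error signal
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b  -= lr * grad

# After training, the neuron reproduces AND.
print(round(sigmoid(w1 * 1 + w2 * 1 + b)))  # 1
print(round(sigmoid(w1 * 0 + w2 * 1 + b)))  # 0
```

A deep network repeats this update across millions of weights and many layers, which is what lets it parse "visual images, natural language, or other inputs" into components, as the CompTIA definition above puts it.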
Deep learning use cases
Deep learning powers product and content recommendations for Amazon and Netflix. It works behind the scenes of Google’s voice- and image-recognition algorithms. Its capacity to analyze very large amounts of high-dimensional data makes deep learning ideally suited for supercharging preventive maintenance systems, as McKinsey pointed out in its Notes from the AI frontier: Applications and value of deep learning: “Layering in additional data, such as audio and image data, from other sensors – including relatively cheap ones such as microphones and cameras – neural networks can enhance and possibly replace more traditional methods. AI’s ability to predict failures and allow planned interventions can be used to reduce downtime and operating costs while improving production yield.”
Natural language processing (NLP)
When you take AI and focus it on human linguistics, you get NLP. SAS offers one of the clearest and most basic explanations of the term: “Natural language processing makes it possible for humans to talk to machines.” It’s the branch of AI that enables computers to understand, interpret, and manipulate human language.
NLP itself has a number of subsets, including natural language understanding (NLU), which refers to machine reading comprehension, and natural language generation (NLG), which can transform data into human words.
But, says ISG’s Butterfield, the premise is the same: “Understand language and sew something on the back of that understanding.”
NLP has roots in linguistics, where it emerged to enable computers to literally process natural language, explains Anil Vijayan, vice president at Everest Group. “Over the course of time, it evolved from rule-based to machine-learning infused approaches, thus overlapping with AI,” Vijayan says.
NLP might employ both ML and deep learning methodologies in combination with computational linguistics in order to effectively ingest and process unstructured speech and text datasets, says JP Baritugo, director at business transformation and outsourcing consultancy Pace Harmon.
NLP use cases
Natural language processing makes it possible for computers to extract key words and phrases, understand the intent of language, translate that to another language, or generate a response. “The enterprise literally runs through communication, either the written word or spoken conversation,” says Butterfield. “The ability to analyze this information and either find intent or insight will be absolutely critical to the enterprise of the future.”
Any area of the business where natural language is involved may be fodder for the deployment of NLP capabilities, says Vijayan. Think chatbots, social media feeds, emails, or complex documentation like contracts or claims forms.
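As a toy glimpse of the NLU half of the chatbot use case, here is a rule-based intent detector built with the standard library. The intents and keyword lists are invented for the example; real chatbots train ML models on labeled utterances rather than hand-coding keywords, but the pipeline (tokenize, score, pick an intent, act on it) is the same shape.

```python
# Illustrative only: detect a user's intent from free text by counting
# keyword overlaps. Intent names and keywords are made up.
import re

INTENTS = {
    "check_balance": {"balance", "account", "funds"},
    "file_claim":    {"claim", "damage", "accident"},
    "greeting":      {"hello", "hi", "hey"},
}

def detect_intent(utterance):
    """Tokenize the utterance and pick the intent with the most keyword hits."""
    tokens = set(re.findall(r"[a-z']+", utterance.lower()))
    scores = {intent: len(tokens & kws) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(detect_intent("Hi, what's the balance on my account?"))  # check_balance
print(detect_intent("I need to report accident damage"))       # file_claim
```

Note that even this trivial detector must resolve ambiguity: the first utterance matches both "greeting" and "check_balance," and the scoring step decides which intent to act on.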
Now let’s move on to the last two key types of AI: computer vision and explainable AI.
Computer vision
As NLP is to speech, computer vision is to sight. Per SAS, computer vision is the “field of artificial intelligence that trains computers to interpret and understand the visual world. Using digital images and deep learning models, machines can accurately identify and classify objects – and then react to what they ‘see.’”
While early computer vision efforts date back to 1950, the confluence of hardware and software advances, along with an influx of new visual data from mobile devices and other cameras, is setting off a computer vision renaissance. Indeed, 82 percent of respondents to a 2019 survey by Harvard Business Review Analytic Services said that they would be assessing, piloting, implementing, or would have adopted computer vision capabilities within two years.
Computer vision can learn to view and interpret the visual world in much the same way humans do – and as AI capabilities have advanced, it can enable machines to gauge things people can’t, such as temperature or air quality. Incorporating deep learning, computer vision tools get better at detecting patterns in images or other data over time. Accuracy rates for object identification and classification have gone from 50 percent to 99 percent in less than a decade — and today’s systems are more accurate than humans at quickly detecting and reacting to visual inputs. What’s more, computer vision capabilities can process, categorize, and understand images and video at a scale and speed that would otherwise be impossible.
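The pattern detection described above starts from a simple fact: to a computer, an image is just a grid of numbers. Here is a standard-library sketch that finds a vertical edge in a made-up 5x5 "image" by sliding a small hand-written filter over it. Deep learning-based vision systems learn thousands of such filters from data instead of hand-coding one.

```python
# Illustrative only: a 5x5 grayscale "image" (0 = dark, 9 = bright)
# with a vertical boundary between columns 1 and 2.
image = [
    [0, 0, 9, 9, 9],
    [0, 0, 9, 9, 9],
    [0, 0, 9, 9, 9],
    [0, 0, 9, 9, 9],
    [0, 0, 9, 9, 9],
]
# Hand-written vertical-edge filter (Prewitt-style kernel).
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

def convolve(img, ker):
    """Slide the 3x3 kernel over the image; large responses mark edges."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(h - 2):
        row = []
        for c in range(w - 2):
            row.append(sum(img[r + i][c + j] * ker[i][j]
                           for i in range(3) for j in range(3)))
        out.append(row)
    return out

response = convolve(image, kernel)
print(response[0])  # [27, 27, 0] -- strong response at the dark/bright boundary
```

Stacking many learned filters, plus the training loop sketched in the deep learning section, is roughly how accuracy on object identification climbed as described above.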
Computer vision use cases
By employing NLP as well, computer vision tools may be able to not only capture, index, store, and extract information from visual data, but also to “curate, normalize, and understand” content from images or documents, explains Baritugo.
Computer vision technology helps healthcare providers better classify conditions and powers automated driving solutions like Google’s Waymo and Tesla’s Autopilot. Amazon Web Services invented a programmable deep learning-enabled camera and kits that enterprises can use to develop their own computer vision applications.
Explainable AI (XAI)
AI can seem like a black box: Input, then output – with little transparency on how the machine got from point A to point B. As PubNub CTO and co-founder Stephen Blum explained in our article, What is explainable AI?, that may be acceptable for use cases like chatbots or social sentiment analysis, but for others – autonomous vehicles, aerial navigation and drones, military applications – “being able to understand the decision-making process is mission-critical. As we rely more and more on AI in our everyday lives, we need to be able to understand its ‘thought process’ and make changes and improvements over time.”
Explainable AI use cases
A number of organizations – most notably the Defense Advanced Research Projects Agency (DARPA) – are developing machine learning techniques that make it possible for human users to understand, appropriately trust, and effectively manage AI. DARPA calls the capabilities that will result “third-wave AI systems.”
As Enterprisers’ Kevin Casey recently wrote, “The need for explainable AI rises in sync with the real human impacts.” As such, XAI will be critical in sectors such as healthcare, manufacturing, insurance, and automotive.
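One of the simplest forms the "understand the decision-making process" idea can take is a per-feature breakdown of an interpretable model's output. The sketch below hand-sets a linear insurance risk score (the feature names and weights are invented) and reports how much each input contributed; techniques such as LIME and SHAP extend this contribution-style explanation to black-box models.

```python
# Illustrative only: explain a prediction by decomposing it into
# per-feature contributions. Weights and features are made up.
WEIGHTS = {"claim_amount": 0.004, "prior_claims": 0.8, "policy_age_years": -0.3}
BIAS = -1.0

def score(features):
    """A hand-set linear risk score: higher means riskier."""
    return BIAS + sum(WEIGHTS[k] * v for k, v in features.items())

def explain(features):
    """Return per-feature contributions, largest magnitude first."""
    contribs = {k: WEIGHTS[k] * v for k, v in features.items()}
    return sorted(contribs.items(), key=lambda kv: -abs(kv[1]))

claim = {"claim_amount": 500.0, "prior_claims": 3, "policy_age_years": 2}
print(round(score(claim), 2))          # 2.8
for name, c in explain(claim):         # prior_claims contributes most
    print(f"{name}: {c:+.2f}")
```

An adjuster reviewing this output can see that the history of prior claims, not the claim amount, is what drove the score up, which is exactly the kind of visibility XAI aims to give users of far more complex models.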