At the recent Monetize conference, I had the honor of participating on a panel hosted by Igor Stenmark from MGI Research, in which we discussed artificial intelligence and machine learning (AI/ML). I was joined by co-panelist Nagi Prabhu, representing Icertis, a leader in the contract lifecycle management (CLM) space, who offered his own perspective on the subjects.
Artificial intelligence and machine learning are two exciting topics that have each officially reached “buzzword” status, but I find that many people are wholly confused by them – it can be challenging to understand the difference between the two, their applicability to the monetization space, the timing at which they may be enterprise-ready, and so on. As a result, I thought it would help to offer up my own personal thoughts on the subjects – but also encourage you to seek the opinion of many others (as there are diverse perspectives)!
What are artificial intelligence and machine learning, and how do they differ?
At its most basic, artificial intelligence is the field of computer science that strives to create ever-smarter machines that can solve problems once thought solvable only by humans. More precisely, it seeks to create machines that – when provided a set of inputs – can produce a “correct” output with higher statistical probability, faster and more reliably than a human.
Many misconstrue the term artificial intelligence to mean automation outright, which isn’t a fair comparison. Automation involves repeating the same task over and over – which could be viewed as a small subset of artificial intelligence – but equating the two is akin to claiming that Texas is interchangeable with the United States: it’s simply unfair to do (despite a few passionate folks who may feel otherwise).
The field of artificial intelligence has been around for decades, which contributes to the doubt many people have with respect to its readiness for prime time, as they’ve been “hearing the same old story” about its potential for many years. That’s where machine learning comes in.
You see, artificial intelligence can be accomplished in several ways; machine learning describes one specific approach to achieving artificial intelligence – and it happens to be the one that has yielded the most recent and demonstrable successes. Machine learning flips the traditional artificial intelligence paradigm on its head: whereas AI was traditionally accomplished by building complex, convoluted logic trees to teach a computer how to think its way to a reliable solution, machine learning instead gives the computer a plethora of known answers to a problem and expects the computer to work out for itself the ways of thinking it needs to reproduce those answers reliably and accurately.
Let’s say, for example, that I set out to build an artificially intelligent computer program that can identify whether or not any given picture on the internet contains a dog. Using techniques such as image processing, edge detection, and pattern recognition, I might write a traditional, logic-based program that goes something like this:
Does an object in the image have two eyes?
If yes, does an object in the image have four legs?
If yes, does an object in the image have two ears?
If yes, does an object in the image have fur?
If yes, does an object in the image have a tail?
<…and so on…>
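To make that concrete, here is a minimal sketch of the rule-based approach in Python. It assumes the eye, leg, ear, fur, and tail features have already been extracted by the image-processing steps mentioned above; the function and feature names are hypothetical and purely illustrative.

```python
def is_dog(features: dict) -> bool:
    """Hypothetical rule-based classifier: answers "dog" only if every
    hand-written rule is satisfied. The feature values are assumed to come
    from upstream image processing (edge detection, pattern recognition)."""
    return (
        features.get("eyes") == 2
        and features.get("legs") == 4
        and features.get("ears") == 2
        and features.get("fur") is True
        and features.get("tail") is True
        # ...and so on: every new edge case means another hand-written rule
    )

# A cat satisfies every rule above, so this program confidently answers "dog".
print(is_dog({"eyes": 2, "legs": 4, "ears": 2, "fur": True, "tail": True}))  # True
```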
This program might be pretty good at identifying dogs, and may even identify several correctly in a row, but what happens when the following image gets analyzed?
[Image: Two eyes, two ears, four legs, fur, and a tail – must be a dog!]
Therein lies the problem with the traditional approach to artificial intelligence: it gets complicated. Quickly. The algorithms and equations and nested logic behind the programs have to become very complex in order to increase the probability of success of the outcome (in this case, correctly identifying “dog” or “not dog”).
With the machine learning approach to artificial intelligence, I might instead create my program in such a way that I provide the basic construct of an equation along with placeholder variables for mathematical coefficients. Then, I could provide the software with tens or hundreds of thousands of images labeled by humans as containing a dog or not, and let the computer itself tweak the parameters of the software in whatever ways it needs to in order to maximize the accuracy of its output. Further still, with every new training image provided, the software can “learn” and continue to iterate and improve its own statistical accuracy.
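As a rough illustration of that learning step, the sketch below uses scikit-learn’s small built-in digits dataset as a stand-in for a human-labeled dog / not-dog image collection; the point is the pattern (labeled examples in, fitted coefficients out), not the particular model.

```python
# Sketch of the machine-learning approach: hand the algorithm many labeled
# examples and let it fit its own coefficients, rather than hand-coding rules.
# scikit-learn's built-in digits dataset (small 8x8 labeled images) stands in
# for a human-labeled collection of dog / not-dog photos.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000)   # the "equation with placeholder coefficients"
model.fit(X_train, y_train)                 # the learning step: coefficients are tuned to the labels

# Accuracy is measured on images the model has never seen; retraining with
# more labeled images generally improves this number.
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```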
From an enterprise perspective, how “real” are AI/ML?
At the risk of providing a cliché answer: it depends.
When many think of enterprise-grade AI, they envision what is called artificial general intelligence – or, said very simply, AI that can solve ‘any’ problem (think Rosie the Robot from The Jetsons). The truth is, we’re many, many years away from that.
What is here and now and absolutely ready for enterprise use cases, however, is applied artificial intelligence (sometimes called “narrow” AI). Applied AI, as the name implies, describes the application of artificial intelligence to specialized problems. Recall our dog image identification example: that program may excel at identifying dogs in images but would perform laughably poorly if we asked it to identify, say, giraffes – let alone do something completely unrelated, such as intelligently routing a customer service email to the appropriate support specialist. Similarly, AI designed to identify tax fraud likely couldn’t be used to identify warranty claims fraud, insurance fraud, or money laundering.
AI/ML is technically classified as still being in an early adopter phase, but enterprises are actively incorporating AI, right now, and they’re doing so at an accelerating pace. You may not realize it – and you may not like it – but AI is already all around you. AI-driven cars (pun intended) already cruise our streets, intelligent personal assistants (e.g., Siri) live in our pockets, and even the content curated specifically for us on social media sites such as Facebook and Instagram leverages AI to increase our engagement with the paid promotions thoughtfully sprinkled through our feeds. Even complex and creative tasks once thought protected from AI-based disruption are not immune: algorithms have been written to correctly predict Supreme Court justice decisions (with over 75% accuracy!) and even create visually inspiring abstract art.
[Image: A modern-day look at Geoffrey Moore’s Crossing the Chasm?]
If you aren’t at least thinking about where to apply AI in your own business and how it could help differentiate you among competitors, I would argue that you’re already behind. AI has the potential to be the single most disruptive technology in a generation, much like the internet or smart devices were for generations before. One only has to look at the speed at which devices such as Amazon’s Alexa or Google Home (which leverage a common AI technique called natural language processing, or NLP) have proliferated through the consumer space to appreciate how quickly AI-based technologies are being adopted – and how easily one could be “left behind” by not actively trying to stay ahead of the AI/ML technology curve.
Should we develop in-house AI/ML capabilities, or leverage partners?
AI/ML fundamentally requires three things, some of which make sense to keep in-house and some of which do not:
- Compute Power: Crunching data takes a lot of computational prowess, and there’s no reason to keep this in-house. Using the power of the cloud is secure, scalable, readily available, and economically advantageous thanks to players such as Amazon Web Services (AWS) and Google Cloud Platform (GCP).
- Algorithms: These are the actual brains behind the “intelligence” – the equations and models wherein the magic lies. Whether to develop this capability in-house or rely on external expertise is a decision best made at the strategic level. Building an internal data science team could require significant investment to stand up a capability that leadership or investors may not view as one of your organization’s core competencies (travel giant Expedia, for example, once claimed to have over 700 data scientists on staff worldwide), particularly when a partner could be leveraged. That said, the machine intelligence ecosystem remains confusing to navigate: it is extremely fragmented, with some companies focused on solving niche vertical problems and others attempting to create more holistic platforms.
[Image: Machine Intelligence Landscape 3.0 by Shivon Zilis, founding partner at Bloomberg Beta (and all-around AI/ML powerhouse thinker)]
- Data: Somehow, it always comes back to data in the end. Data is required in two capacities: first, as a training data set with which to train the AI/ML algorithms, and second, data against which to actually apply the algorithms for your chosen use-case. The reality is that AI/ML can only be as effective as the quality and extent of data you have (imagine our success if we trained our aforementioned dog image identification algorithm using only ten blurry images known to contain dogs and you’ll understand the importance here).
When it comes to data, I absolutely recommend keeping this capability in-house. Note that this doesn’t mean you have to store data, say, on-premise versus in the cloud – it just means that your organization alone should be responsible for identifying the data it has the capacity to collect, collecting and securely storing that data (wherever it may reside), and cleansing it as necessary for application with AI/ML. Your organization knows its data better than anyone, and that data should be a differentiator unless strategically decided otherwise (e.g., if shared for the greater good).
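To illustrate why the quantity and quality of training data matter so much, the toy experiment below reuses the digits-as-stand-in setup from the earlier sketch and trains the same model on 10, 100, and then 1,000 labeled examples; the numbers are illustrative only.

```python
# Toy experiment: the same model trained on 10, 100, and 1,000 labeled
# examples. It mirrors the point above: an algorithm trained on ten blurry
# "dog" photos cannot perform like one trained on tens of thousands.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

for n in (10, 100, 1000):
    model = LogisticRegression(max_iter=2000).fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>4} examples -> held-out accuracy {model.score(X_test, y_test):.2f}")
```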
With all of the potential applications of AI/ML in the enterprise, how should we get started?
I’m a big fan of using a pragmatic approach to AI/ML adoption (versus a shotgun approach that says, ‘let’s throw AI/ML at every problem we have and see what sticks’).
The first and most obvious uses for AI/ML in the enterprise often revolve around the bottom line: cost savings and efficiency. For example, AI can be used to improve email spam filters and web protections (thereby safeguarding the enterprise from phishing and other cyber-attacks), serve as the first line of communication for customers via web-based chat bots, automate the completion of tedious and repetitive forms and documents (via robotic process automation), and more.
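As one small, hedged example of the spam-filtering use case, the sketch below trains a Naive Bayes classifier on a handful of hand-labeled messages; a production filter would use vastly more data and many additional signals, but the underlying pattern is the same.

```python
# Toy spam filter: a Naive Bayes classifier trained on a few labeled messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Claim your free prize now",              # spam
    "Urgent: verify your account password",   # spam (phishing)
    "Quarterly revenue report attached",      # ham
    "Lunch meeting moved to noon",            # ham
]
labels = ["spam", "spam", "ham", "ham"]

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(messages, labels)

print(spam_filter.predict(["Verify your password to claim a prize"]))  # likely ['spam']
```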
I’m personally more excited, however, by the ways in which AI/ML can be used to encourage top-line revenue growth, particularly as it applies to differentiated offerings, customer loyalty, or the overall customer experience. How does an organization identify the AI/ML opportunities it has in this realm? Again, our focus must return to the data an organization has within its span of control. Ask yourself: what data does my organization uniquely have access to, and what could AI/ML do with that data that would set us apart from our competition?
An organization like JPMorgan Chase has access to billions of purchase records and knows your own unique credit card buying patterns and behavior. It makes perfect sense, then, that Chase has deployed AI-based algorithms that can almost instantaneously detect anomalous purchase behavior at the point of sale and notify its users of potential fraud, which increases its customers’ confidence in and loyalty to its profitable credit card offerings. Netflix and Spotify have unique viewing and listening data that allows them to know exactly which movies and music you like, so it was a natural step for them to introduce AI to augment their customers’ processes of exploration and discovery with recommendations of new movies and music, respectively. Conversely, Walmart – known for its operational efficiency – probably wouldn’t have nearly as much success with similar offerings but may instead set itself apart by applying AI to its vast supply chain and procurement data to optimize operational decisions.
In short, focus on the data your company has access to, overlay your organization’s value proposition and market differentiation, and the intersection is likely a great place to start with AI/ML.
What uses have you seen for AI/ML?
The unique differentiation that Gotransverse brings to the monetization space is as a billing platform optimized for anything from one-time to subscription to usage- or consumption-based billing. Some of our customers process billions of usage events per month, so we’re no strangers to billing data, particularly as it pertains to usage and consumption. The intersection of our own market differentiation with the data to which we have access has unique implications for how we at Gotransverse could use AI/ML in the future, and we’re excited about what it will mean for Gotransverse customers.
For example, we imagine a world in which artificial intelligence could be used to recommend new product or service offerings based upon existing usage patterns detected by your billing system. Add-on products could be suggested and supplied back to your CPQ based upon your billing system’s knowledge of what products are commonly transacted together. Innovative pricing models could be suggested that incent customers (e.g., with a discount) to consume digital products or services when overall consumption patterns are known to be low and capacity therefore high, which smooths and load-balances demand and thereby optimizes costs-to-serve. Declining usage behavior could even be detected, and a follow-up notification could be supplied to a customer service representative via your CRM, as declining usage may be indicative of an unhappy customer.
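Purely as a hypothetical sketch (and not a description of any existing Gotransverse feature), detecting declining usage could start as simply as fitting a trend line to a customer’s recent monthly usage totals and flagging a sustained downward slope:

```python
# Hypothetical sketch: flag accounts whose monthly usage is trending downward
# so a customer service representative can follow up. Illustration only; this
# is not a description of an actual Gotransverse implementation.
import numpy as np

def usage_is_declining(monthly_usage: list[float], threshold: float = -0.05) -> bool:
    """Fit a least-squares trend line to recent monthly usage totals.

    `threshold` is the per-month slope, expressed as a fraction of average
    usage, below which the account is considered at risk.
    """
    months = np.arange(len(monthly_usage))
    slope = np.polyfit(months, monthly_usage, deg=1)[0]
    return slope / (np.mean(monthly_usage) + 1e-9) < threshold

# Example: billable usage events per month for one customer account.
usage = [12_400, 11_900, 10_750, 9_300, 8_100, 7_200]
if usage_is_declining(usage):
    print("Declining usage detected: open a follow-up task in the CRM.")
```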
The implications for the future of monetization are endless when artificial intelligence meets an intelligent billing platform like Gotransverse. To learn more, request a meeting with the Gotransverse billing experts today.
Derrick Snyder is the Vice President of Partnerships and Alliances at Gotransverse. Bringing his experience from pricing strategy at Deloitte and big data at National Instruments, we're pretty sure he might actually be an artificial intelligence system.