Artificial intelligence for all!?
The other day I was chatting with the owner of my favorite bakery. Out of nowhere, she asked me how artificial intelligence (AI) could drive business for her. Actually, she was more specific: How could AI automate her marketing? I was surprised. I’d never imagined her asking me about this.
By the nature of our business at Aliz, we deal with AI, or, to be accurate, with ML (machine learning), every day. We talk about it, learn about it, and work with it. Yet it was still surprising that the small business I visit almost every day was interested in the topic from a perspective other than self-driving cars and robots. At that point I thought to myself: either something fundamental is about to change, or we have reached the peak of a hype cycle.
Leaving the bakery, I kept wondering why I had been surprised. I tell organizations every day that ML is going to change the way businesses operate, so this really shouldn’t surprise me that much. What really got me thinking was not that ML in business came up in the conversation. It was that her questions, misconceptions, and expectations were so similar to those we hear in discussions with multi-billion-dollar companies like banks, telcos, and airlines.
When I got home, I started to unpack what AI and ML really are. Why are they changing the way we do business rather than being just hype? How can they automate work and drive return on investment (ROI)? Understanding where ML can fit in a business – no matter how big or small it is – is key to preparing for the future. Whether the goal is growth, a better customer experience, or automation, every business can benefit from the technological advances of the past few years.
AI never really made it through the buzz
A common misconception is that AI is some kind of technology. In reality, it’s a buzzword, not a solution. The concept has been around since the 1950s, and it hasn’t changed much since then: we use the term AI to label whatever the most advanced available technology happens to be. This is also not the first time that investors, businesses, and people in general have gone crazy about it. It has been through several hype cycles already – enough that the concept of an ‘AI winter’ was born.
The term is borrowed from ‘nuclear winter’, the severe global cooling that follows a firestorm. Here, the firestorm is the overblown enthusiasm about how AI is changing the world. The cooling is the disappointment people feel when they realize that the fundamental change they were expecting has fallen short yet again. But like most bitter memories, the disappointment fades over time, a new ‘revolutionary technology’ or idea sparks the hype, and it all starts over again.
Falling short of expectations is the result of a mix of factors. The failure of AI resembles that of supersonic passenger flight, which was also once supposed to change the world. But we can narrow it down to one major reason: AI never really seeped into mainstream business processes. It didn’t drive revenue uplift for my local bakery, my telco, or my bank. Most use cases focused on futuristic, long-shot applications like self-driving cars and natural language processing. These may well have a major impact on businesses in the future, but most of us are interested in the next one to two years rather than the next twenty.
AI hasn’t had the chance to become mainstream
The underlying technologies of AI, like ML, were traditionally expensive, work-intensive, and very, very hard to put in place. Even a few years ago, proper customer segmentation using deep learning algorithms was extremely difficult and costly, and even partially successful projects resulted in negative ROI. For the past 40 years, anyone who has tried to do anything with ML has bumped into the same obstacles: not enough computing power, and the difficulty of coding models without proper frameworks or hyperparameter tuning tools. Because of this, ML and AI remained toys used almost exclusively by research institutions, governments, and the military.
But things changed a few years ago
The past few years have brought dramatic changes. With Google taking the lead, major technology companies have started to democratize their technology and computing power under their cloud services.
This has given an enormous boost to the spread of ML into mainstream business. The same technology that predicts our search terms and recognizes people and objects in pictures is now available to everyone. And when I say everyone, I mean everyone, from my bakery to my telco to my bank. We all have access to incredible computing power and to models trained on thousands of petabytes of data, backed by tens of billions of dollars of investment.
What used to be manual trial and error for an army of data scientists is now automated optimization, delivered in a relatively accessible bundle of computing power, coding frameworks, and cloud services.
Google’s TensorFlow became the first massively successful ML framework to democratize algorithms that can run in distributed mode on subsets of the data in parallel, which is essential for scaling ML to today’s data sizes. On the computing side, hardware has caught up with the new paradigm of deep learning: instead of humans handcrafting features from raw data, we can now throw the raw data itself at the computer. The barriers to entry are lower still thanks to hyperparameter tuning services such as Google Cloud ML Engine, which automate the search for the best possible ML model.
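To make that shift concrete, here is a minimal sketch in plain Python – the data and the marketing scenario are made up for illustration, and a real project would of course use a framework like TensorFlow. It contrasts a rule a human hardcodes with a coefficient learned directly from the raw numbers by gradient descent:

```python
# Hypothetical toy data: (advertising spend, observed sales) pairs.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]

# The "old" approach: a human eyeballs the data and hardcodes a rule.
def handcrafted_predict(spend):
    return spend * 2.0  # a guess baked in by a person

# The "ML" approach: let gradient descent fit the coefficient from raw data.
w = 0.0                   # start with no knowledge
for _ in range(1000):     # repeatedly nudge w to reduce the squared error
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.01 * grad      # learning-rate step

def learned_predict(spend):
    return spend * w

print(round(w, 2))  # the fitted coefficient, close to 2.0
```

The point is not the arithmetic but the division of labor: the human supplies raw examples and an objective, and the optimization loop – the part cloud services now automate at scale – finds the parameters.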
AI only works if you have the data organized and available
This all sounds nice, but in most cases, unfortunately, there are still barriers. Either there is not enough data, or the data has not been collected and organized properly, or it is not in the right format for the algorithms to use. Whatever the goal of an ML project, we have to ask a few questions first:
- Is there enough data to teach the machines?
- Has the data been collected properly and stored in the right place?
- Is the data accessible and in the right format?
- What are the first data sources I’d like to use for an ML solution? An internal database, web-tracking data, perhaps a third-party source, or a combination of these? Is this data available in an organized fashion?
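The first three questions can even be asked programmatically. Here is a small, stdlib-only sketch – the file layout, column names, and row threshold are all hypothetical – that turns them into a basic readiness check for one CSV export:

```python
import csv
import io

# Assumed schema and volume threshold for this illustration only.
REQUIRED_COLUMNS = {"customer_id", "purchase_date", "amount"}
MIN_ROWS = 1000  # assumed minimum to have "enough data to teach the machines"

def readiness_report(csv_text):
    """Answer the basic data-readiness questions for one CSV export."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    columns = set(rows[0].keys()) if rows else set()
    empty_cells = sum(1 for row in rows for value in row.values() if not value)
    return {
        "enough_rows": len(rows) >= MIN_ROWS,         # enough data?
        "has_required_columns": REQUIRED_COLUMNS <= columns,  # right format?
        "empty_cells": empty_cells,                   # collected properly?
    }

sample = "customer_id,purchase_date,amount\n1,2018-01-05,12.50\n2,,9.00\n"
print(readiness_report(sample))
# -> {'enough_rows': False, 'has_required_columns': True, 'empty_cells': 1}
```

A real audit would of course span many sources and live in a data warehouse rather than a script, but even a check this simple makes the gaps visible before any model is trained.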
Unfortunately, in most cases the answer is that the data is scattered, not easily accessible, or not relevant because of scalability issues. This chart shows that without proper data discipline, it is impossible to break out of the lower quadrant: the best you can achieve is an understanding of what has happened in the past, but you won’t be able to automate or predict the future. For more on data warehousing and its importance, check out our article: 5 Good Reasons to Move to a Cloud-based Data Warehouse.
The use of ML starts when you are able to go beyond simply understanding what happened in the past. It takes some work to get there, and in most cases it’s a question of dedication to changing the way the organization works. Setting up the right data discipline is the foundation of ML. The good news is that with tools like Google’s BigQuery and Dataflow, it doesn’t have to be a million-dollar project to begin with.
In the next few posts, I’ll walk through actual ML applications and how they can drive ROI for businesses. The focus will be on understanding and segmenting customers to achieve more accurate targeting, ultimately resulting in better customer experiences.
Interested in learning more? Check out our solutions and learn more about our cloud-based machine learning projects.