How to Set Your AI Project Up for Success

To pick the right AI project, it’s essential to understand how AI works — what it’s good at and what it’s not. HBR spoke with Marco Casalaina, head of Salesforce’s Einstein project, who broke down the basic ingredients of a good AI project and the questions leaders need to ask themselves before investing time and resources in one. The most important question, he argues, is, “Can a human do it?”

Picking the right AI project for your company often comes down to having the right ingredients and knowing how to combine them. That, at least, is how Salesforce’s Marco Casalaina tends to think about it. A veteran of artificial intelligence and data science, Casalaina oversees Einstein, Salesforce’s AI technology, and has made a career out of making emerging technologies more intuitive and accessible. With Einstein, he’s working to help Salesforce customers — from small businesses to nonprofits to Fortune 50 companies — realize the full benefits of AI. HBR spoke with Casalaina about what goes into a successful AI project, how to communicate as a data scientist, and the one question you really need to ask before launching an AI pilot.

You’ve been working in AI for a long time now. You worked for Salesforce years ago, then at other companies, and now you’ve come back to lead Einstein. How would you describe what it is you do in this work?

I bring machine learning into the things that people use every day — and I do it in a way that aligns with their intuition. The problem with machine learning and AI — which are two sides of the same coin — is that most people don’t know what either really means. They often have an outsized idea of what AI can do, for example. And of course, AI is always changing, and it is a powerful thing, but its powers are limited. It’s not omniscient.

The point you’re making about how imagination can take hold explains a lot of the issues businesses run into with AI. So, when you’re thinking about the kinds of problems that AI is good at solving, what do you consider?

When I talk to customers, I like to break it down into ingredients. If you think about a fast food taco, there are six main ingredients: meat, cheese, tomatoes, beans, lettuce and tortillas. AI isn’t that different: there’s a menu of things it can do, and once you know what those are, you know what its powers are.

I’m intrigued! So, what are AI’s ingredients? 

The first ingredient is “yes” and “no” questions. If I send you an email, are you going to open it or not? These give you a probability of whether something is going to happen. We get a lot of mileage out of “yes” or “no” questions. They’re like the cheese for us — we kind of put that in everything.

The second ingredient is numeric prediction. How many days is it going to take you to pay your bill? How long is it going to take me to fix this person’s refrigerator?

Then, third, we have classifications. I can take a picture of this meeting that we’re in right now and ask, “Are there people in this picture?” “How many people are in this picture?” There are text classifications, too, which you see if you ever interact with a chatbot.

The fourth ingredient is conversions. That could be voice transcription, or it could be translation. But basically, you’re just taking information and converting it from one format to another.

The tortilla, if we’re sticking to our analogy, is the rules. Almost every functional AI system that exists in the world today works through some manner of rules that are encoded in the system. The rules — like the tortilla — hold everything together.
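To make that menu concrete, here is a minimal sketch in Python, using scikit-learn on made-up toy data, of how a couple of those ingredients and the rules that hold them together might be wired up. Every feature, number, and threshold below is an illustrative assumption, not anything from Einstein.

```python
# A minimal sketch of the "ingredients" above, on invented toy data.
from sklearn.linear_model import LinearRegression, LogisticRegression

# Ingredient 1: a yes/no question -- will this email be opened?
# The model returns a probability, not just a label.
opens = LogisticRegression().fit(
    [[0], [1], [2], [3], [4], [5]],   # e.g., number of prior opens by this contact
    [0, 0, 0, 1, 1, 1],               # 1 = opened, 0 = ignored
)
p_open = opens.predict_proba([[4]])[0][1]

# Ingredient 2: a numeric prediction -- how many days until the bill is paid?
days = LinearRegression().fit(
    [[1], [2], [3], [4]],             # e.g., number of past late payments
    [10, 18, 27, 35],                 # days to pay, from historical invoices
)
d_pred = days.predict([[3]])[0]

# The "tortilla": rules that hold the predictions together and turn them
# into an action the business actually takes.
if p_open > 0.5 and d_pred < 30:
    action = "send the reminder email"
else:
    action = "route to a human for follow-up"

print(round(p_open, 2), round(d_pred, 1), action)
```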

So how do you, personally, apply this in your work at Salesforce? Because I think people often struggle with figuring out where to start with an AI project. 

The questions I ask are, “What data do we have?” And, “What concrete problems can I solve with it?”

In this job at Salesforce, I started with something every salesperson tracks as a natural part of their job: scoring a lead on how likely it is to close.

Data sets like these are a key source of truth from which to develop an AI-based project. People want to do all kinds of things with AI capabilities, but if you don’t have the data, then you have a problem.
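As a rough illustration of that point, the sketch below derives both the features and the labels for a simple lead-scoring model from records a sales team already keeps. The field names, numbers, and model choice are hypothetical, not actual Salesforce objects or Einstein internals.

```python
# A hedged sketch of lead scoring: the training labels come straight from
# records salespeople already maintain as part of their job.
from sklearn.linear_model import LogisticRegression

crm_records = [
    {"emails": 2,  "deal_size_k": 10, "stage": "Closed Lost"},
    {"emails": 9,  "deal_size_k": 30, "stage": "Closed Won"},
    {"emails": 1,  "deal_size_k": 5,  "stage": "Closed Lost"},
    {"emails": 12, "deal_size_k": 45, "stage": "Closed Won"},
    {"emails": 3,  "deal_size_k": 8,  "stage": "Closed Lost"},
    {"emails": 8,  "deal_size_k": 25, "stage": "Closed Won"},
]

# Features and labels fall out of data that already exists.
X = [[r["emails"], r["deal_size_k"]] for r in crm_records]
y = [1 if r["stage"] == "Closed Won" else 0 for r in crm_records]

model = LogisticRegression().fit(X, y)

# Score an open lead (7 emails so far, a $20k opportunity).
print(f"Likelihood to close: {model.predict_proba([[7, 20]])[0][1]:.0%}")
```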

Getting into the next phase of this, let’s talk about the lifecycle of finding a project and deploying it. What are the questions you find yourself asking when thinking about how to get from pilot to rollout?

What problem you’re trying to solve — that’s the first question you need to answer. Am I trying to prioritize people’s time? Am I trying to automate something new? Then, you confirm that you have the data for this project, or that you can get it.

The next question you need to ask is: Is this a reasonable goal? If you’re saying, I want to automate 100% of my customer service queries, it’s not going to happen. You’re setting yourself up for failure. Now, if 25% of your customer service queries are requests to reset a password, and you want to automate that and take it off your agents’ plates, that is a reasonable goal.
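As a sketch of that kind of scoped goal, the snippet below automates only the password-reset slice and escalates everything else to an agent. The keyword matching is a deliberately crude, hypothetical stand-in for a real intent classifier.

```python
# Automate the narrow, well-defined slice; hand everything else to a human.

def route_query(message: str) -> str:
    text = message.lower()
    if "password" in text and ("reset" in text or "forgot" in text):
        return "automated: sent password-reset link"
    return "escalated: queued for a human agent"

queries = [
    "I forgot my password, help!",
    "Please reset my password",
    "My invoice from March looks wrong",
]
for q in queries:
    print(route_query(q), "<-", q)
```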

Another question is: Can a human do it? Most of the time AI can’t do anything that humans can’t do.

Let’s say you’re an insurance company and you want to use a picture of a dented car to find out how much it’s going to cost to fix it. If you can reasonably expect that Joe down at the body shop can look at the picture and say, “This is going to cost $1,500,” then you could probably train an AI to do it too. If he can’t, well, then an AI probably can’t either.

How long do you want to spend in a pilot phase? Because a lot of what you’re doing, other people are trying to do, too.  

AI projects tend to have uncomfortably long pilot periods — and they should. There are two reasons for this.

First, to determine whether it actually works the way it should. Do people trust it? Is it explaining itself sufficiently for the weight of the problem? At one extreme there are things like an AI-driven medical diagnosis, which can have a huge impact on someone’s life. You better tell me exactly why you think I have cancer, right? But if an AI recommends a movie I don’t like, I don’t really care why it’s telling me that. A lot of business problems fall somewhere in between. You need to share just enough explanation so your users will trust it. And you need this pilot period to verify that your users understand it.

Second, you need to measure the value of the AI solution against the baseline: human interaction. Think about automating customer service queries. For customers using the chatbot, how many of those interactions actually answer the right question? If I use the DMV’s chatbot and say, “I lost my license” and it says, “Fill out this form and you’ll get a replacement,” well, that’s what I was asking for. But if your chatbot can’t answer your customers’ questions, you end up with frustrated customers who hate your chatbot and end up talking to a human anyway.
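One simple way to make that comparison concrete is to measure, from conversation logs, how often the bot resolved a query without a human handoff and what that saved against a human-handled baseline. The log format and costs below are assumptions for illustration.

```python
# Back-of-the-envelope value measurement: containment rate and savings
# versus an all-human baseline, computed from (invented) conversation logs.

conversations = [
    {"intent": "password_reset",  "handed_to_human": False},
    {"intent": "password_reset",  "handed_to_human": False},
    {"intent": "billing_dispute", "handed_to_human": True},
    {"intent": "password_reset",  "handed_to_human": False},
    {"intent": "order_status",    "handed_to_human": True},
]

contained = sum(1 for c in conversations if not c["handed_to_human"])
containment_rate = contained / len(conversations)

cost_per_human_ticket = 6.50   # assumed fully loaded cost per agent-handled ticket
savings = contained * cost_per_human_ticket

print(f"Containment rate: {containment_rate:.0%}")
print(f"Estimated savings vs. the human baseline: ${savings:.2f}")
```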

Pivoting for a second here, you’ve been in this job for a few years at this point. What are some of the big things you’ve learned over that time? 

We’ve learned how to find and use data sets to solve problems. Now, we help people understand how the data that they’re putting into their business systems — just by virtue of doing their jobs — can be used to develop machine learning that helps them solve problems more efficiently. But we’ve also learned how important a role intuition plays in that process.

How so?

So, we released a product called Einstein Prediction Builder about two years ago. A lot of customers are using it now, but it didn’t have the same rapid adoption curve as some of the more self-explanatory services like lead scoring.

Einstein Prediction Builder allows you to build a custom prediction for questions like, “Will my customer pay their bill late or not?” We realized that to get to that prediction, people have to make a bit of a mental leap: I would like to know the answer to this question, so I want to make a prediction about that.

That was tough for a lot of customers. Now, we have a new product, a recommendation builder. It’s a little bit more self-explanatory, because we’re also introducing a template system. For example, it will recommend what parts to put on the truck when a field representative is sent out to fix a refrigerator. We’ll lead the horse to water, right, from the Salesforce perspective, by having that automated step there and working with customers to understand what parts they might need for the scenarios they might face.

As data scientists in the AI field, we have a tendency to think about algorithms, or maybe slightly higher-level abstractions. I’ve learned we really need to get into our customers’ heads and express the solution to the problem in terms that they will relate to. So, I’m not just making a recommendation, I am specifically recommending the part that goes into a project; I’m not just making a prediction, I am specifically answering the question: Are you going to pay your bill or not?

And then you have to decide: if I make that prediction and give you a probability that the customer will pay late, what are we going to do about it?
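That last step, turning a probability into an action, is often just a rule. Here is a minimal sketch, with thresholds and actions that are purely assumptions:

```python
# A prediction only becomes useful once a rule turns the probability
# into a concrete next step. Thresholds and actions are illustrative.

def next_step(p_late: float) -> str:
    if p_late > 0.8:
        return "call the customer before the due date"
    if p_late > 0.5:
        return "send an automated payment reminder"
    return "no action needed"

for customer, p in [("Acme Co", 0.91), ("Globex", 0.62), ("Initech", 0.12)]:
    print(customer, "->", next_step(p))
```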

If you’re speaking to leaders who are thinking about this, it sounds like part of what you’re talking about is the need to stay grounded when considering what problems you should try to solve with AI and what you have on hand that can help you do it.

Right, it’s going back to the question of: Can a human do it? If they can, okay, maybe AI is a great way to take that task off a human’s plate to free them up for other magical things.
