What Mark Zuckerberg Can Teach Us About Selling Data Science Initiatives To C-Level Leaders
Mark’s right, but he talks to businesspeople like they’re engineers. He’s a public, real-time lesson in why data science fails to capture the attention of C-level leaders. Like every data initiative, it starts with an insight that flies in the face of conventional wisdom.
Social media is slowly dying, and Facebook’s internal data is probably driving that reality home for Mark Zuckerberg. Apple removed the app’s ability to track customers and measure ad effectiveness. Facebook (I’ll use the app name vs. the company name when I refer to the app as a product) has lost advertising customers as a result.
Mark Zuckerberg is data-driven. He trusts the data enough to make a massive change because he sees the threat on the horizon. Data scientists across businesses see similar opportunities and threats. However, few CEOs are willing to make the difficult decision to change something that’s working before it fails. Meta’s strategy is sound, but its implementation and messaging are flawed.
As a result, investors are not bought in. Rather than rewarding Zuckerberg for being a forward-looking CEO, investors are fleeing the stock and pummeling Meta’s share price.
The same process impacts data teams. If we don’t change how we talk to C-level leaders, the data team’s stock will take a similar hit. Let’s learn from Mark’s example.
Advertisers are increasingly focused on outcomes. They want to see a direct line between ads and growing sales. I learned something early in my time building customer behavioral models: social media advertising isn’t worth the money. However, with customer tracking data, it can look effective.
The missing piece in most advertising effectiveness metrics is the change in behavior caused by the ad or campaign. The tricky part is answering the question, ‘Would this customer have bought if they had not seen the ad?’ Marketing effectiveness metrics look for a connection between the ad and a purchase but bake in the assumption that the purchase would not have happened without the ad.
Mark knows what I and anyone else who has explored this question know: ads usually speed up a purchase whose decision has already been made. The sale would have happened without the ad. I use causal graphs because they are excellent tools for exposing these assumptions. The counterfactual becomes obvious and must be refuted or confirmed.
In this case, the flawed experiment measures a connection between the ad being served to a customer and the customer making a purchase. With Apple’s tracking data, Meta could provide advertisers with a view of what customers did after seeing their ads. If the customer bought something, the connection between the ad served and the purchase was assumed to be validated.
A flawed experimental design casts doubt on the results. One review criterion is, ‘Does this experiment provide complete coverage of all relevant variables to eliminate confounding?’
In an experimental review, the counterfactual would be raised to evaluate this criterion: what would the customer have done if the ad had not been served? This is tricky because we don’t have a twin to test or the ability to go back in time and change the intervention.
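A small simulation makes the flaw concrete. In this invented scenario, every customer has the same purchase intent whether or not they see the ad, and a randomized holdout stands in for the counterfactual. The numbers and rates are illustrative assumptions, not real campaign data.

```python
import random

random.seed(42)

# Hypothetical population: purchase intent is fixed and independent of the ad.
N = 100_000
BASE_PURCHASE_RATE = 0.05  # assumed intent, with or without the ad

exposed_buys = exposed_n = 0
holdout_buys = holdout_n = 0

for _ in range(N):
    sees_ad = random.random() < 0.5               # randomized holdout split
    buys = random.random() < BASE_PURCHASE_RATE   # intent ignores the ad entirely
    if sees_ad:
        exposed_n += 1
        exposed_buys += buys
    else:
        holdout_n += 1
        holdout_buys += buys

# Naive tracking-style metric: purchase rate among customers served the ad.
naive_rate = exposed_buys / exposed_n
# Counterfactual estimate: lift of the exposed group over the holdout group.
lift = naive_rate - holdout_buys / holdout_n

print(f"Naive 'ad worked' rate: {naive_rate:.3f}")
print(f"Causal lift vs. holdout: {lift:.3f}")
```

The naive metric reports roughly the base purchase rate and looks like ad-driven sales, while the lift against the holdout hovers near zero. Tracking data alone can only produce the first number.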
It is easy to sidestep this assumption if there’s customer tracking data. Take that away, and we are forced to perform more complex experiments. We must ask a deeper business question. What caused the purchase? Now we are working at the behavioral level. These experiments are expensive for most, but for a company like Meta, it’s a small investment in understanding their core business.
I have done these experiments for Fortune 500 retailers and smaller luxury brands. I must start by defining the purpose of an ad. Working with several marketing teams, the consistent theme is creating and amplifying demand. Marketing teams want to create advertising campaigns that lead to new sales.
New sales fall into multiple categories. Converting a new customer from a competitor is one. Keeping an existing customer loyal to the brand is another. There are many more, but the commonality is the connection to increases in revenue.
The segmentation is important because each segment corresponds to different KPIs: customer acquisition, customer churn, customer lifetime value, etc. The marketing team doesn’t care about the segments, but my experiment needs them. The connection between segment and KPI lets me explain my results in terms the marketing team does care about.
I now have a connection between business metrics and model metrics. New sales cause increased revenue. Ads are one intervention the business relies on to cause new sales. I must design experiments that generate datasets measuring the rate at which each ad category causes each sales category.
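In code, that segment-to-KPI mapping can be as simple as a lookup table over experimental results. The segment names, counts, and KPI labels below are hypothetical placeholders standing in for real treated-versus-control conversion counts from an experiment like the one described above.

```python
# Hypothetical experiment results per sales segment:
# (treated_buys, treated_n, control_buys, control_n)
results = {
    "new_customer_acquisition": (420, 10_000, 300, 10_000),
    "retention_existing":       (910, 10_000, 880, 10_000),
}

# KPI each segment rolls up to, so results can be explained
# in the marketing team's own terms.
segment_kpi = {
    "new_customer_acquisition": "customer acquisition",
    "retention_existing":       "customer churn",
}

lifts = {}
for segment, (tb, tn, cb, cn) in results.items():
    # Rate at which this ad category causes this sales category.
    lifts[segment] = tb / tn - cb / cn
    print(f"{segment}: lift={lifts[segment]:+.3f} "
          f"-> reported against {segment_kpi[segment]}")
```

The lift per segment is the model metric; the KPI label is how it gets translated back into the marketing team’s language.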
The opportunity size is undetermined. I trust data and know optimizing ad spend cannot be achieved without this dataset. Marketing believes they understand ad effectiveness because they have tracking data connecting an ad being served and that customer making a purchase.
This is the business challenge facing data science teams and Zuckerberg. Critical decision-makers have data or a heuristic that they believe is working. There is contradictory data available and the potential to discover what’s really going on. Critical decision-makers don’t trust our new data enough to change how they make decisions.
Behaviors have triggers, and buying is no different. Social media ads don’t create demand, but they can influence behavior. They can create awareness, but that does not cause new sales in the near term. Combined with other interventions, it is a critical part of the buyer’s journey. However, the cost isn’t justified by the value delivered.
Social media marketing is effective at pulling demand forward, and that’s what most marketing analytics are seeing. It’s taking from tomorrow’s sales and adding them to today’s. Many customers see an ad that triggers a behavior that would have happened on its own.
This creates a vicious cycle. It’s like payday loans: people borrow against tomorrow’s earnings and eventually can’t keep up. More marketing is required to keep pulling demand forward, or sales lag for weeks or months. Social media marketing can look effective on the surface, but with datasets like the ones Facebook has, Meta must know the truth.
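The pull-forward effect is easy to simulate. In this sketch, every customer already has a planned purchase week; seeing an ad moves the purchase up to two weeks earlier but never creates a purchase that wasn’t going to happen. All parameters are invented for illustration.

```python
import random

random.seed(0)

N_WEEKS = 12
# Each customer's planned purchase week (weeks 2 through 11).
customers = [random.randrange(2, N_WEEKS) for _ in range(10_000)]

baseline = [0] * N_WEEKS   # weekly sales with no advertising
with_ads = [0] * N_WEEKS   # weekly sales when half see an ad

for planned in customers:
    baseline[planned] += 1
    if random.random() < 0.5:          # half the customers see the ad
        planned = max(0, planned - 2)  # purchase pulled forward, not created
    with_ads[planned] += 1

print("total purchases, no ads: ", sum(baseline))
print("total purchases, with ads:", sum(with_ads))
print("early-week sales, no ads vs. ads:", baseline[0], with_ads[0])
```

The totals are identical in both worlds; the ad campaign only shifts sales from later weeks into earlier ones. Attribution that looks at early weeks sees a spike and calls the campaign a success, while the later-week dip goes unexamined.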
It’s only a matter of time before more marketing teams make the same discovery. In a bit of irony, most marketing teams haven’t figured it out because they’re unwilling to accept new data.
The loss of tracking data accelerated the timeline for Meta to find a more viable business model. Social media advertising spending will change significantly over the next 2 years as more businesses realize what’s happening. Narratives open people up to new perspectives. The more other marketers publish their findings, the more the community will see this new narrative. That builds trust because multiple sources have come to the same conclusion.
Meta knows that Facebook is dying, and they must innovate now while they have the money to invest. What Zuckerberg is struggling with is getting investors to buy in.
Data scientists make similar realizations that inform a tough choice. Adopting data and model-supported products requires a change from the familiar. Deploying them to customers is even more challenging. There’s a lot of inertia because the typical drivers of significant shifts aren’t there.
Businesses only make significant changes when faced with existential threats. The data warns of a threat, but people are used to the old mantra, ‘Don’t fix what isn’t broken.’ The data tells data scientists there’s something wrong, but senior leaders don’t see it yet.
Mark Zuckerberg is encountering the same problem. Facebook is still profitable but has lost ground. Investors are asking Mark to refocus on Meta’s core business, Facebook. They want him to fix the problem and get back to printing cash. Mark knows he can’t do that. The data shows that the social media model that depends on advertisers for revenue is fundamentally broken. Snap, Pinterest, Twitter, and others all show similar downward trends.
Even now that Zuckerberg has partially capitulated to investors’ demands, he’s still not refocusing on Facebook. This week, he said that Meta’s business messengers are the near-term growth engines. It’s a good pivot. Meta reduced headcount in their devices teams and a few other areas. He hasn’t announced a major slowdown in metaverse spending.
That announcement is probably coming, but I don’t think the reduction will be significant or that Zuckerberg is about to move up the timeline for achieving profitability. Hopefully, investors will be appeased so he can get back to saving Meta. His current approach isn’t convincing them, so he must try something different.
This arc is very similar to what data scientists deal with regularly. Our initiatives ask the business to do something new, but they don’t see what’s wrong with the old way. We show potential growth from new product lines and features.
It’s a hard sell when current lines are thriving. Even product lines with declining revenues are seen as fixable rather than due for replacement. C-level leaders believe that refocusing on the core business will return broken product lines to revenue drivers.
Data science is forward-looking and prescriptive. Businesses are used to reacting to feedback after an event or change has happened. We’re breaking their process, which is one reason we get so much pushback.
The central issue is trust. The business doesn’t trust the data team’s analysis vs. leadership’s gut instincts. The behavioral trigger that makes data scientists act is not the same as the triggers that leadership responds to.
Mark Zuckerberg explained the metaverse investment as a response to Apple’s removal of tracking data. Investors looked at that through the narrow lens of Facebook. Mark’s talking about the bigger picture.
If Meta doesn’t control their platform from end to end, another company can hobble its core business model again. Transforming Facebook into a metaverse app isn’t enough. Mark Zuckerberg knows they must create a complete platform to rebuild Meta as a titan. Mark is using data to see the bigger picture. Investors don’t trust the data or Mark’s story enough to follow him.
Most businesses are at the same point. They do not understand the magnitude of their risks or the growth opportunity they have. If they did, data scientists would have no problem getting buy-in for initiatives. How do we overcome the inertia?
We should use their old behavioral triggers. The business is looking for solutions to problems they feel. Being aware of the problem isn’t enough. It must hurt. Those are deep pain points for users, business units, customers, and senior leaders. They are incentivized to change because what they are doing now is not working. Early data science initiatives should target these use cases.
The data team can build trust with a small number of successful initiatives when they take away those big pains. People will remember the data product worked because they felt the impact. Retraining people in the business to respond to a new behavioral trigger starts with these incremental victories.
I have found this approach effective. After the small wins, I plan a long-term initiative. The whole project may take 12 to 18 months, but I break it down into 2-to-6-week chunks. The goal is to deliver value with each chunk while making steady progress toward the long-term objective.
Meta has tried to show their progress, but what they deliver isn’t connected to a customer pain point. Customers don’t want an expensive headset. They don’t see the value in floating avatars talking to each other online. Few games are developed for the metaverse, and most are underwhelming. There isn’t an amazing experience that customers can’t get anywhere else.
Zuckerberg has implemented an incremental delivery and demo strategy to win customers and investors over. It’s not working because the incremental deliveries don’t target a painful problem or provide an obvious, significant value. I have seen something Meta could quickly deliver that touches a larger pain point.
For home improvement projects, it’s painful to watch instructional videos and then try to apply them to the real world. Once I’ve done a repair 2-3 times, it’s easy. Those first attempts are not. Some automotive companies are providing repair technicians with AR headsets. Rather than delivering an immersive experience, the display gives the technician instructions, arrows, and examples over the real-world task they are working on. It doesn’t sound flashy, but it solves a problem many people performing DIY repairs have.
A partnership with The Home Depot or Lowe’s to create content for DIYers would make for an interesting demo. Ikea furniture can be annoying to piece together, which opens another possible partnership. These demos would showcase the underlying technology and platform and frame them as valuable. A year’s worth of monthly or quarterly demos would change many minds.
For data science teams, this same approach works. If step 1 gathers a new dataset, the demo can show useful reports. An early report can showcase customer awareness of the advertising effectiveness initiative I discussed earlier. The report can show the journey from awareness, which social media advertising is excellent at driving, to a buying decision. It’s a long journey that most marketing teams don’t have visibility into.
During the demo, I’m setting up for the next phase: delivering data about the differences between customers who bought after the ad and customers who bought without seeing an ad. Again, this is data that the marketing team doesn’t have. I am seeding the question: does the ad cause the sale, or is something else going on?
In phase 3, I can use the existing data that the marketing team thinks supports social media ad effectiveness and contrast it with the experimental data. I finish phase 3 by asking, ‘Do you want me to dig into what’s happening to see which dataset is more accurate?’
Then I must do the hard thing. I don’t own the marketing team’s decision to use or ignore data. Zuckerberg doesn’t own the decision he’s trying to influence, but he’s acting like he does. That’s a recipe for failure. I can see him in his office yelling, “Why don’t they get it?” It’s the right question, but he’s focused on controlling the decision, which he doesn’t.
Investors put money into Meta’s stock. If they lose money, it doesn’t hurt Mark Zuckerberg. They suffer the loss and blame. If marketers look at my reports and change their ad mix, it’s their head if those choices don’t pan out. A drop in sales won’t get blamed on the data team. I work with stakeholders and data consumers to tackle this problem from two directions.
First, I explain that data gives them cover. The blame is all theirs when they decide to act based on their gut. It’s harder to fault them if they choose based on the best available data. The business is supposed to be data-driven, right?
Second, I work to understand their process and workflow. When I ask, “Why don’t they get it?” I want to know what I could have done differently to get them to incorporate data into their workflow.
In both approaches, I trust that they will make the right decision if I do my job well. I am sharing the burden of failure and altering my approach to provide data THEY need to make an informed decision. I focus on what I can control vs. influencing what I cannot.
That’s the hard part because it doesn’t always work the first time. Even then, I must trust that I am working with experts. They’ll incorporate data into their workflow if I give them cover and present data that informs their decision-making.
It all starts with trust. If I don’t trust them, why should they trust me?