Advice for Data Scientists and Managers


Over the last decade I have worked in analytics roles at numerous companies, from small German fintech startups to fast-growing pre-IPO scaleups (Rippling) to large tech companies (Uber, Meta).
Every company has its own unique data culture, and every role comes with its own challenges and hard-won lessons. Below are 10 key lessons I’ve learned along the way, many of which apply regardless of company stage, product, or business model.
1. Think about who your audience is.
If you work in a research-focused organization or are presenting to a primarily technical audience (e.g., engineering), an academic “white paper” style analysis may be more appropriate.
But if your audience is non-technical business teams or executives, focus on the key insights, skip the technical details, and connect your findings to the business decisions your work is meant to inform. If you dwell on the technical details of your analysis, you will lose your audience. Communication in the workplace is about telling your audience what they need to hear, not what you want to share.
The best-known approach to this kind of insight-driven, top-down communication is the Pyramid Principle, developed by McKinsey consultant Barbara Minto. Check out this recent TDS article to learn how you can use the Pyramid Principle to communicate better as a DS.
2. Business acumen is what sets you apart.
If you’re a senior DS at a company that sets high standards, you can expect all your colleagues to have strong technical skills.
You won’t stand out by incrementally improving your technical skillset; you’ll stand out by ensuring your work has the greatest possible impact on your stakeholders (product, engineering, business teams, etc.).
This is where business acumen comes in. To maximize your impact, you need to: 1) develop a deep understanding of business priorities and the issues your stakeholders are facing; 2) scope analytical solutions that directly address those priorities; and 3) communicate your insights and recommendations in a way your audience can understand (see #1 above).
Strong business acumen also lets you sanity-check your own work: it gives you the business context and judgment to tell whether your analysis and recommendations actually make sense.
Business acumen isn’t something that’s taught in school or at a DS bootcamp, so how do you cultivate it? Here are some specific things you can do:
- Pay attention in all-hands and other cross-team meetings when strategic priorities are discussed
- Practice connecting these priorities to your team’s work. During planning cycles or when a new project comes up, ask yourself, “How does this relate to high-level business priorities?” If you can’t make the connection, talk to your manager.
- When conducting any analysis, always ask yourself, “So what?” A data point or insight becomes relevant and impactful only when it answers this question: why should anyone care, and what should they do differently based on this data?
The ultimate goal here is to move from taking requests and working through JIRA tickets to acting as a thought partner who collaborates with stakeholders to shape the analytics roadmap.
3. Be a truth seeker, not a cherry-picker.
Many people cherry-pick data to fit their story, which makes sense: most organizations reward people for achieving their goals, not for being the most objective.
Data scientists can afford to resist this: data science teams typically don’t own business metrics directly, so they face less pressure to hit short-term goals than teams like sales do.
Stakeholders may pressure you to find data that supports a narrative they have already settled on. Going along with this might earn you points in the short term, but you will be served better in the long run by being a truth seeker and pushing for the narrative the data actually supports.
Even if it’s uncomfortable in the moment (because you may be surfacing a story people don’t want to hear), it will help you stand out and establish you as the person executives approach when they need an unfiltered, unbiased view of what’s really going on.
4. Embrace anecdotal evidence.
Data people often dislike “anecdotal evidence,” but it is a necessary complement to rigorous quantitative analysis.
Running experiments and analyzing large datasets can provide statistically significant insights, but signals that are too small to show up in the data, or that structured data doesn’t capture at all, are easy to miss.
Sometimes, digging through notes on closed or lost deals, talking to customers, or reading support tickets may be the only way to discover a particular issue (or truly understand the root cause).
For example, say you’re at a B2B SaaS company and you see in your data that win rates for enterprise deals are declining, perhaps concentrated among specific types of customers.
But to really understand what’s going on, you need to talk to sales reps, look at deal notes, talk to prospects, etc. At first, this will seem like random anecdotes or noise, but after a while a pattern will start to emerge — and that pattern most likely never showed up in any of the standardized metrics you’re tracking.
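To make the quantitative half of that example concrete, the starting point might look something like the query below. This is a minimal sketch: the opportunities table and all of its columns are hypothetical stand-ins for whatever your CRM data model actually looks like.
```sql
-- Quarterly enterprise win rates, split by industry, to locate the decline.
-- Table and column names are hypothetical.
SELECT
    DATE_TRUNC('quarter', close_date) AS close_quarter,
    industry,
    COUNT(*) AS closed_deals,
    AVG(CASE WHEN outcome = 'won' THEN 1.0 ELSE 0 END) AS win_rate
FROM opportunities
WHERE segment = 'enterprise'
  AND outcome IN ('won', 'lost')
GROUP BY 1, 2
ORDER BY 1, 2;
```
A query like this tells you where the decline is concentrated; the deal notes and customer conversations are what tell you why.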
5. Be skeptical of suspiciously positive metric movements.
People get excited when they see a sudden increase in a metric and tend to attribute the movement to something they did, such as a recent feature release.
Unfortunately, when a change in a metric seems suspiciously positive, it is often due to a data issue or temporary effect. For example:
- Data for the most recent period is incomplete, and the metric will smooth out once all data points are in (a cheap guardrail for this is sketched below).
- There is a temporary tailwind that will not be sustained (e.g., sales jump in early January not because of a sustained improvement in sales performance, but simply because the holiday backlog is clearing).
Don’t get too excited whenever a metric jumps: avoiding these pitfalls and generating solid insights requires a healthy dose of skepticism, curiosity, and experience.
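For the first pitfall, that guardrail can be as simple as excluding the period that is still filling up with data. A minimal sketch, assuming a hypothetical orders table:
```sql
-- Drop the current, still-incomplete month so the edge of the chart
-- doesn't show an artificial dip or spike that will smooth out later.
SELECT
    DATE_TRUNC('month', created_at) AS order_month,
    COUNT(*) AS orders
FROM orders
WHERE created_at < DATE_TRUNC('month', CURRENT_DATE)
GROUP BY 1
ORDER BY 1;
```
The same idea applies at any reporting grain: weeks, days, or hours.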
6. Be willing to change your mind.
When you work with data, it’s natural for your opinion to change regularly. For example:
- You recommended a course of action to executives, but as more data became available, you’re less convinced it’s the right one.
- You interpreted a metric movement in a certain way, but after additional analysis, you now believe something else is going on.
However, most analytical people hesitate to retract previous statements for fear of appearing incompetent or of offending stakeholders.
That’s understandable: a changed recommendation usually means additional work for stakeholders, who have to adapt to the new reality, and there’s a risk they’ll be unhappy about it.
Still, you shouldn’t stand by a previous recommendation just to save face; doubling down on a call you no longer believe in is what really costs you credibility. Leaders like Jeff Bezos recognize the importance of changing your mind when confronted with new information, or simply when looking at an issue from a different angle. As long as you can clearly explain why your recommendation changed, doing so is a sign of strength and intellectual rigor, not weakness.
It’s very important to change your mind often. Don’t ever let someone trap you because of something you said in the past. — Jeff Bezos
7. Done is better than perfect.
Working in an analytical field can turn you into a perfectionist. You are trained in the scientific method and pride yourself on knowing the ideal way to approach an analysis or experiment.
Unfortunately, the realities of running a business often impose severe constraints: you need answers faster than your experiments can reach statistical significance, you don’t have enough users for a clean, unbiased split, or your dataset doesn’t go back far enough to establish the time-series patterns you want to explore.
Your job is to help the teams that run your business (the teams that ship your product, the teams that close your deals, etc.) get things done. If you obsess over the perfect approach, your business will likely move on without you and your insights.
As with many things, done is more important than perfect.
8. Don’t turn full-stack data scientists into dashboard builders.
Hiring a full-stack data scientist and having them spend their days primarily building dashboards or doing ad-hoc data pulls and investigations is a surefire way to burn them out and drive turnover on your team.
Many companies, especially fast-growing startups, hesitate to hire data analysts or BI professionals who specialize in metric investigations and dashboard building. Because headcount is limited and managers want flexibility in what their teams can tackle, they hire versatile data scientists instead, planning to hand them the occasional dashboard request or metric deep-dive on the side.
In reality, this side work balloons, and the DS ends up spending a disproportionate amount of time on it: they get swamped with Slack requests that pull them out of focused work, “quick asks” (which are never as quick as they seem) pile up and fill entire days, and it becomes difficult to keep larger strategic projects moving in parallel.
Fortunately, there are ways to mitigate this:
- Implement an AI chatbot that can answer simple data questions
- Train relevant teams in basic SQL (at least 1-2 people per team) to make them more independent. With tools like Snowflake’s SQL AI assistant or Gemini assistance in BigQuery, retrieving data and extracting insights no longer requires deep knowledge of SQL syntax (see the sketch after this list for the kind of query this unlocks).
- Roll out self-service BI tools that give users the autonomy and flexibility to get the insights they need. The space has made great strides in recent years, and tools like Omni are bringing true self-service analytics closer to reality.
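To give a sense of scale: the kind of “simple data question” a trained stakeholder can self-serve usually amounts to a handful of lines. The signups table below is hypothetical, and exact date arithmetic varies slightly across warehouses:
```sql
-- How many new signups did each region generate over the last 7 days?
SELECT
    region,
    COUNT(*) AS new_signups
FROM signups
WHERE signup_date >= CURRENT_DATE - 7  -- date syntax varies by warehouse
GROUP BY region
ORDER BY new_signups DESC;
```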
9. Not everything needs to live in your BI tool.
Companies tend to see getting data out of spreadsheets and into a BI solution as a sign of a mature, strong data culture.
Dashboards that many stakeholders across the organization use frequently, and that underpin important, hard-to-reverse decisions, belong in a governed BI tool like Tableau. But in many other cases, a Google Sheet gets you what you need faster, without scoping and building a robust dashboard over days or weeks.
In reality, your teams need to move fast, so they will always lean on spreadsheets and the analytical capabilities of the software they use every day (like Salesforce). Encouraging this kind of agile, decentralized analysis, rather than forcing everything through the bottleneck of your BI tool, will preserve your data science team’s capacity (see number 8 above), and you can support it by giving teams what they need to do it well (basic SQL training, best practices for data modeling and visualization, etc.).
10. Don’t chase perfect data governance.
As mentioned in point 9 above, teams across the company will always hack together analyses outside the BI tool to unblock themselves, which makes it difficult to enforce a shared data model. Especially at a fast-growing startup, perfect governance is impossible if you want your teams to move fast and get things done.
Mismatched metric definitions give many data scientists nightmares, but in reality, they’re not the end of the world: often the differences are small enough that they don’t change the overall narrative or the resulting recommendation.
As long as the reports that matter (those powering production systems, those going to Wall Street, etc.) are rigorously vetted and adhere to standardized definitions, it’s okay (even if it makes you uncomfortable) for the data to be a little messy across the rest of the company.
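One lightweight way to protect those critical reports without governing everything is to centralize the definition of each key metric in one place and point every important report at it. A sketch, reusing the hypothetical opportunities table from earlier:
```sql
-- A single source of truth for one critical metric; important reports
-- query this view instead of re-deriving the logic themselves.
CREATE OR REPLACE VIEW enterprise_win_rate_monthly AS
SELECT
    DATE_TRUNC('month', close_date) AS close_month,
    COUNT(*) AS closed_deals,
    AVG(CASE WHEN outcome = 'won' THEN 1.0 ELSE 0 END) AS win_rate
FROM opportunities
WHERE segment = 'enterprise'
  AND outcome IN ('won', 'lost')
GROUP BY 1;
```
If an ad-hoc spreadsheet number disagrees with this view, the view wins; everything else is allowed to be a little messy.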
Some of the points above might feel uncomfortable at first (e.g., resisting a cherry-picked narrative, or taking a pragmatic approach rather than striving for perfection), but in the long run, you’ll find that they help you stand out and establish yourself as a true thought partner.
For more practical analytics advice, consider following me on Medium, LinkedIn, or Substack.