
Six Crucial Refinements to Conventional Wisdom About Data Strategy

This post shines a spotlight on the difference between what you already know and the more specific guidance you really need to create a successful data strategy.

Kevin Lewis
January 28, 2021 · 7 min read

If you’ve read a few books and articles on devising a data strategy, you’ve almost certainly noticed some repeated themes. Although the guidance conveyed in these themes may be correct, it can also be dangerously vague, even when supported by hundreds of pages of elaboration. Many data management leaders have followed both the spirit and letter of widely repeated principles and veered far off track nonetheless, resulting in data and analytics initiatives that take too long, cost too much, or are severely misaligned with company priorities. For leaders who have followed a strategy based on rigorous study of well-regarded literature, these results can be confusing and disheartening – and extremely costly.
 
However, a handful of seemingly subtle refinements to conventional wisdom can produce a dramatic difference in results. Because you’ve probably encountered the standard advice many times, it’s easy to interpret similar – but crucially different – suggestions as just a repeat of the same old ideas. So, with this article, I’d like to shine a more direct spotlight on the difference between what you (probably) already know and the more specific guidance you really need to create a successful data strategy.

Conventional Wisdom Refinement #1

Conventional Wisdom:
You need to start by identifying the business value of your proposed data and analytics.

Refinement:
You need to support someone else’s funded business initiatives with your data and analytics.
 
Why it makes a difference:
If you support funded business initiatives, you are doing something important for the organization – by definition. If, instead, you propose the business value of your own data and analytics projects independent of other funded initiatives, you will end up competing with those other initiatives instead of contributing directly to their success. Meanwhile, the other initiatives – such as customer experience, manufacturing automation, or supply chain optimization – will be left on their own to collect the data for their analytic needs. Virtually all major initiatives need analytic functionality, whether embedded within special-purpose applications or delivered through general-purpose analytics tools, and they could use your help to collect the data effectively. Your goal should be to provide this help by contributing to and reusing a shared data resource that advances in breadth and quality each time you use it to help someone else’s business initiative.

Conventional Wisdom Refinement #2

Conventional Wisdom:
You need a strong executive sponsor for the data and analytics program.
 
Refinement:
You need an executive sponsor for data and analytics and at least one sponsor of a funded business initiative that is counting on the data and analytics you will deliver.
 
Why it makes a difference:
If there is an executive sponsor responsible for a funded business initiative willing to defend your project, it means that you’ve probably positioned your work appropriately, consistent with the previous principle. If you only have an executive sponsor of the data and analytics initiative itself, it’s an indication that you’re probably not aligned to the strategic value-producing priorities of the company. Having a single, strong executive sponsor can even have a downside: he or she can keep a doomed program alive too long. You’ll discover this quickly when your sponsor is distracted by other interests, takes on a new role, or leaves the company, and no sponsor of a funded business initiative is willing to confirm that the data you plan to deliver is absolutely needed.

Conventional Wisdom Refinement #3

Conventional Wisdom:
You need to regard the program as an ongoing journey, not a destination.
 
Refinement:
You need to institutionalize the program by embedding data and analytics planning, implementation, and operation into the machinery of the organization.
 
Why it makes a difference:
In any large organization, new data and analytics requirements will continue to emerge for as long as there are changes in business and technology. In other words – forever. But to meet these requirements effectively, it’s not enough to have a sponsor, a team, and a series of projects. To make the “journey” more systematic, your program needs to be ensconced in the organization by linking to existing functions outside of the data and analytics program itself. For example, the organization’s strategic planning process should include identification of data needs so that new data initiatives are created in support of and at the same time as strategic business initiatives. The project funding process should examine all projects – not just “data and analytic” projects – to verify or recommend the role of shared data resources to serve the projects, preferably as part of a broader enterprise architecture function that offers shared resources of all kinds, including data resources. Project methodologies should establish standard tasks for proactive data management such as data profiling, logical and physical data modeling, and data quality monitoring, along with the role of the data steward and other business and IT data management roles. With this approach, the data and analytics program becomes an inherent part of the way the organization does business.
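
To make one of these standard methodology tasks concrete, here is a minimal data-profiling sketch of the kind a project methodology might require before any data element is deployed. The sample table, column names, and quality threshold are all hypothetical; the point is simply that profiling produces a reviewable summary for the data steward.

```python
# A minimal, illustrative data-profiling step of the kind a project
# methodology might mandate as a standard task. The sample data and
# the 25% null-rate threshold are hypothetical.
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize each column so a data steward can review it."""
    return pd.DataFrame({
        "null_rate": df.isna().mean(),    # share of missing values
        "distinct_values": df.nunique(),  # cardinality
        "example_value": df.apply(
            lambda col: col.dropna().iloc[0] if col.notna().any() else None
        ),
    })

if __name__ == "__main__":
    customers = pd.DataFrame({
        "customer_id": [101, 102, 103, 104],
        "email": ["a@x.com", None, "c@x.com", None],
        "region": ["EU", "EU", "NA", "NA"],
    })
    report = profile(customers)
    print(report)
    # Flag columns whose null rate exceeds the (hypothetical) threshold,
    # so the steward can decide whether remediation is in scope.
    print(report[report["null_rate"] > 0.25].index.tolist())
```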

Conventional Wisdom Refinement #4

Conventional Wisdom:
You need to build an iterative plan – the “big bang” approach doesn’t work.
 
Refinement:
You need to deploy data just-in-time and just-enough to meet the needs of in-scope applications.
 
Why it makes a difference:
It’s true that the “big bang” approach to data and analytics doesn’t work. It’s also true that nobody does that. There are plenty of projects that are much too large in scope, but those projects are always either one-off projects or part of an even larger iterative plan. So, we need to take this a step further. What’s needed is to scope the iterative projects so that each data element to be deployed and each data issue to be resolved are essential to meet the needs of named, targeted, in-scope application projects within supported business initiatives. Without specific needs identified – in a rigid dependency relationship – there is no basis for controlling scope, even within an iterative block of work delivering just one or a few data subjects. If the objective of your project is to deliver Customer data, for example, how do you decide which of the potentially hundreds of possible attributes to collect? How do you determine the level of data quality needed? Is perfection possible? Should we just ask the end users what they want? Aligning to supported application projects makes scope control much easier and ensures that every action related to data can be traced to a real business need, not just a desire to have “good” data.
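
One way to picture the rigid dependency relationship is as a simple traceability mapping from named application projects to the data elements they actually require. The sketch below is purely illustrative, with hypothetical project names and attributes; the point is that the in-scope attribute list is derived from funded applications, not from a desire for complete Customer data.

```python
# Illustrative sketch: scope is derived from named, in-scope application
# projects rather than from the Customer subject area as a whole.
# All project names and attributes here are hypothetical.
needs = {
    "churn_dashboard":       {"customer_id", "tenure_months", "last_contact_date"},
    "billing_consolidation": {"customer_id", "billing_address", "payment_method"},
}

def in_scope_attributes(funded_projects: list[str]) -> set[str]:
    """Union of data elements actually required by funded projects."""
    return set().union(*(needs[p] for p in funded_projects))

# Only attributes traceable to a funded project are deployed; the other
# potentially hundreds of Customer attributes stay out of scope.
print(sorted(in_scope_attributes(["churn_dashboard"])))
# -> ['customer_id', 'last_contact_date', 'tenure_months']
```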

Conventional Wisdom Refinement #5

Conventional Wisdom:
You need to integrate data with each project, contributing to coherent data resources as you go.
 
Refinement:
You need to architect and design shared data resources using principles of scalability and extensibility.
 
Why it makes a difference:
One reason data management leaders believe they need to deliver entire data subjects one by one, rather than targeting specific applications, is that they are afraid they will miss something and have to do excessive rework when new requirements emerge. But it doesn’t have to be that way. If you apply principles of scalability and extensibility, you can deliver only the data needed by in-scope applications, treating each data delivery as a puzzle piece that fits with all the others in the shared data resource. Example practices that promote scalability and extensibility include modeling data at the lowest level of detail, obtaining data from original sources, and building right-time and adjustable integration processes. Each of these approaches promotes the highest level of reuse and minimizes rework, even if each data delivery project requires at least some new work to meet the needs of new applications. The more you do this, the less work each new application requires, because most of the data is already available, with more being added all the time.
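
To illustrate just one of these practices, the sketch below shows what modeling data at the lowest level of detail can buy you. The tables and figures are hypothetical: because transactions are stored at atomic grain, a later application that needs a different aggregate is served from the same data with no rework.

```python
# Illustrative sketch of "modeling data at the lowest level of detail":
# transactions are kept at atomic grain, and new applications are served
# by deriving aggregates rather than restructuring stored data.
# Table contents are hypothetical.
import pandas as pd

transactions = pd.DataFrame({
    "customer_id": [101, 101, 102],
    "txn_ts": pd.to_datetime(["2021-01-05", "2021-01-20", "2021-01-07"]),
    "amount": [40.0, 60.0, 25.0],
})

# The first application needs monthly revenue per customer...
monthly = (transactions
           .groupby(["customer_id", transactions["txn_ts"].dt.to_period("M")])
           ["amount"].sum())

# ...a later application needs daily transaction counts. No rework:
# the atomic grain already supports the new aggregate.
daily_counts = transactions.groupby(transactions["txn_ts"].dt.date).size()

print(monthly)
print(daily_counts)
```

Had the data been stored pre-aggregated at the monthly grain for the first application, the daily requirement would have forced a redesign; the atomic grain is what makes each delivery a reusable puzzle piece.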

Conventional Wisdom Refinement #6

Conventional Wisdom:
You need to enable self-service for data and analytics as much as possible.

Refinement:
You need to carefully differentiate between experimentation and production and treat each accordingly.

Why it makes a difference:
Qualified end users should have the ability to import raw data, leverage production data, and experiment with analytic methods to test hypotheses, build prototypes, and take immediate business action based on their analysis when there’s an opportunity to do so. But deploying production data is IT’s job – especially data that is needed across many business use cases. There should be a healthy interplay between end user experimentation and production delivery, with a careful articulation of who is responsible for what. In too many cases, end-user-developed systems become de facto production systems, which defeats the purpose of self-service because the end users become burdened with excessive maintenance and support. Worse, many end users perform redundant work managing the same data in slightly different forms across the organization. It’s much better for IT to deploy shared production data so that work is consolidated and shared data resources benefit from the ever-increasing quality and integration requirements of multiple initiatives. If IT has challenges with production data planning and delivery, those should be addressed directly. Getting around the problem by deploying data haphazardly across the enterprise ultimately makes the underlying problem much worse.

Conclusion

When I first started on my own journey to build a data and analytics program, I read all the well-known books on the subject at the time, and I did my best to apply the core principles I learned. Yet, even with all the studying and careful planning, I ended up making significant and sometimes painful course corrections as I refined my understanding of what works in real life. I’ve continued to refine and apply this thinking with dozens of clients in all major industries and sectors. If you carefully consider these more specific ideas, I’m confident you can sidestep common struggles by developing a practical data strategy that works the first time, no matter how large or complex the organization.
 


About Kevin Lewis

Kevin M Lewis is a Director of Data and Architecture Strategy with Teradata Corporation. Kevin shares best practices across all major industries, helping clients transform and modernize data and analytics programs, including organization, process, and architecture. The practice advocates strategies that deliver value quickly while simultaneously contributing to a coherent ecosystem with every project.
