David Loshin is the author of Business Intelligence, Second Edition: The Savvy Manager's Guide, published in 2012 by Morgan Kaufmann, a division of Elsevier BV. In this interview, Loshin, president of BI and data management consultancy Knowledge Integrity Inc., elaborates on designing BI dashboards, defining performance metrics and gaining executive buy-in for business intelligence programs. One key piece of advice: Involve business executives both in defining the key performance indicators (KPIs) to track and in ensuring that BI findings help drive business decision making.
In your book, you explain the way a BI dashboard works to provide a real-time display of KPIs and to connect directly to BI tools for further analysis. Do you have other advice for selecting and making the most of BI dashboards?
David Loshin: You might say that the objective of a dashboard is to summarize a collection of performance measures in a way that is informative yet not intrusive, so that the information consumer can selectively review and absorb specific metrics as necessary. Another goal is to provide updates within a reasonable time frame -- in real time, if possible -- so that the consumer has a current view of the selected measures.
That being said, you might consider whether the proposed dashboard tool's visualization methods are configurable, whether the dashboard's "real estate" can be managed effectively, and whether metrics can be aggregated and produced in real time relative to the processes being monitored. One other thing: If a particular performance measure causes some degree of alarm or concern, and there is interest in further evaluating potential dependencies, the dashboard should provide drill-down capabilities to reach that deeper level of insight.
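The drill-down idea Loshin describes can be sketched in a few lines: a top-level dashboard tile shows one aggregated measure, and an alarming number is broken out by a lower-level dimension to locate its source. The sales figures, column names and attainment metric below are illustrative assumptions, not from the interview.

```python
# Hypothetical sketch: drilling down from a company-wide KPI to regional detail.
# Data, column names and the "attainment" metric are illustrative assumptions.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West", "South"],
    "store":   ["E1", "E2", "W1", "W2", "S1"],
    "revenue": [120_000, 95_000, 60_000, 45_000, 130_000],
    "target":  [100_000, 100_000, 100_000, 100_000, 100_000],
})

# Top-level dashboard tile: one aggregated performance measure.
company_attainment = sales["revenue"].sum() / sales["target"].sum()

# Drill-down: the same measure broken out by region, to locate the
# source of a concerning top-level number.
by_region = (sales.groupby("region")[["revenue", "target"]].sum()
                  .assign(attainment=lambda d: d["revenue"] / d["target"]))

print(f"Company attainment: {company_attainment:.0%}")
print(by_region["attainment"])
```

Here the company-wide figure (90% of target) masks a much weaker West region, which is exactly the kind of dependency a drill-down view is meant to expose.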
How can key stakeholders in business intelligence programs best go about defining "performance" and achievable targets to track to aid in decision making? What if different people have different perceptions about performance? Also, what role do BI managers play in this process?
Loshin: First, performance goals and targets have to be defined in the context of how value is perceived within the business -- whether that focuses on revenue generation, management and reduction of costs or risk mitigation, for example. There will always be different people with different perceptions of performance, so it's worth introducing some type of governance over the process of devising, defining and monitoring performance measures. That governance should ensure that each measure is rooted in the business, is clearly defined, has a commonly understood meaning and has a designated "owner" responsible for it.
The BI manager can help by advising business clients on defining performance measures and objectives, but ownership really should live with the business representatives. In turn, the BI manager can take responsibility for ensuring that the metrics are adequately implemented and made available through the dashboard.
When explaining actionable knowledge, you mention that an organization must be nimble and individuals must be empowered to take action. Aside from the cost considerations you enumerate, are there cultural factors, management factors or other influences that set the stage for BI success?
Loshin: One aspect that I have advocated for a long time is envisioning the desired outcome of reporting and analyses as their results are fed into business processes. For example, it may be important to the CEO to monitor average hourly sales across the company. However, what is the CEO going to do with that information? Is the CEO going to drill down into each region's, state's, county's or specific retail location's data to look for deviations from expectations? Probably not. Rather, for each corporate business objective, there are going to be specific individuals with specific roles and responsibilities in relation to deviations from expected outcomes for any operational measure.
At the same time, the C-level executive might take on the responsibility for monitoring those metrics that are relevant to shareholders, affect perception in the market or feed into forward forecasting and planning. Performance measures are only useful when they inform decision making at the appropriate level. The culture must embrace actionable knowledge as a driver for making better decisions across the corporate hierarchy. That also suggests continuously reviewing the decisions made based on the output of the business intelligence framework and ensuring that the outcomes are improving.
How can senior-level managers be convinced of the need for and value of BI technologies? Any tips for fostering executive buy-in?
Loshin: There is no better argument than success. There is often a chicken-and-egg problem in developing this business case: Executives will want to be convinced of the value before committing to it, but you often need to commit resources to demonstrate that value.
Our BI and analytics advisory practice has been developing some methods for bridging this gap to enable our clients' teams to rapidly experiment with different analyses to find a few that have potential while weeding out the many that probably won't lead to increased value. One example is what we call the "Customer Data Lab" -- a way of collecting customer data and applying a variety of analytical clustering and classification algorithms to look for segmentations that can be exploited for specific marketing campaigns.
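The clustering-for-segmentation idea behind the "Customer Data Lab" can be illustrated with a small sketch. The customer features, toy data and the minimal k-means routine below are assumptions for illustration only -- not Knowledge Integrity's actual method or data.

```python
# Hypothetical sketch of clustering customers into segments for targeted
# marketing. Features, data and the tiny k-means routine are illustrative.
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal k-means: returns (centroids, labels)."""
    # Deterministic init: spread initial centroids across the data order.
    idx = np.linspace(0, len(X) - 1, k).astype(int)
    centroids = X[idx].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each customer to its nearest centroid.
        dists = np.linalg.norm(X[:, None] - centroids[None, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned customers.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Toy customer features: [annual spend in $k, orders per year].
customers = np.array([
    [2.0, 3], [2.5, 4], [1.8, 2],       # low-spend, infrequent buyers
    [20.0, 30], [22.0, 28], [19.5, 35], # high-spend, frequent buyers
], dtype=float)

centroids, segments = kmeans(customers, k=2)
```

In a real lab setting, each candidate segmentation would then be evaluated against a campaign hypothesis, and most would be discarded -- which is the rapid-experimentation point Loshin makes next.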
Loosening some of the traditional requirements and creating a fast experimentation cycle is only doable in an environment in which managers know that most of the experiments will not lead to increased value, but that running many experiments may help find those few models that can be migrated into a more robust "productionalization" process for testing and evaluation.