We recently jotted down our industry predictions for 2008. But more than once during this process, one of us would say, “Naah. That will never happen.” Then we’d move on to more practical predictions.
But our wheels were still turning. We kept brainstorming about what would be cool to see in the world of business intelligence (BI) and data integration, however unrealistic. So we made a list of capabilities, features, and fabulous functions that we (and, more to the point, our clients) would love to see in the coming year. Or the next year, for that matter.
Wouldn’t it be cool if companies really believed in data self-service? Over the last ten years, the concept of self-service within the technology environment has become commonplace. Receptionists have given way to company voicemail. Corporate libraries have become knowledge management repositories and wikis within the firewall. (We call dibs on that WWF acronym!) Reducing headcount through technology innovation is now a de facto business imperative.
Unfortunately, when it comes to using corporate data – financial data, HR data, product information, pricing data, inventory, and all the other data in the enterprise – data self-service isn’t happening. All too often, data access depends on personal relationships. We see it all the time: data’s usability is directly proportional to the skills of the person handing it over.
What’s standing in the way of universal data access? Human nature, for one. Sometimes it’s just easier to fish for someone else than to teach that person how to cast his or her own line. And that trite adage about knowledge being power has come home to roost in business intelligence: people are beholden to information owners. Business users shouldn’t be rewarded for spoon-feeding data to others, but for ensuring that information is disseminated in a timely and repeatable way to everyone who needs it. For instance, success in the retailing world isn’t determined by the size of a distribution center, but by its distribution efficiencies. The value of information isn’t in the stockpiling of that information, but in how streamlined the data supply chain is. It would be cool if the data supply chain got faster and people got their data more quickly, on demand.
Wouldn’t it be cool if application systems were measured on data accessibility and sharing, and not just processing and uptime? If the application system can’t share the data it generates, then should it really be benefiting from enterprise funding? The rest of the company may be relying on that data for operational or strategic purposes.
Until making data available to the enterprise becomes a formal responsibility for application systems developers, a corporate data “black market” will continue to flourish. The data black market is where companies spend untold money and time creating data that people need to get their everyday jobs done. Call it spreadmarts. Call it shadow IT. People aren’t just propagating dirty data; they’re increasing corporate liability. Most companies spend the majority of their time finding, cleaning, and distributing data from person to person – a collection of activities and processes that never shows up on anyone’s budget but nevertheless impacts the company’s bottom line.
Wouldn’t it be cool if data-as-a-service became a reality? In this day of search engines and advanced taxonomies, every BI developer is still a jack-of-all-trades. He needs to know database design and performance tuning. He needs to be a metadata expert. He needs to understand the issues of data quality and correction and of database structures and access paths – as well as the unique naming conventions across every platform, database, table, and data element.
Wouldn’t it be cool if that BI developer could suddenly be unencumbered by the details of database structures, navigation issues, or syntax? While there will always be business demand for linear regression algorithms and other advanced analytics, the majority of reporting and analysis continues to be based on simple, filtered lists. With data as a service, a BI developer could repurpose the countless hours of database navigation and usage – some of our clients cite these activities as 40 percent or more of development time – and focus more time on chipping away at the backlog of reports and user requests.
(Note to business managers: Have you calculated what 40 percent of your application developers’ time is costing you?)
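For managers who want to take that question literally, here is a minimal back-of-envelope sketch. The 40 percent share comes from the client figure cited above; the team size and loaded salary are hypothetical placeholders, not figures from the article – substitute your own.

```python
# Back-of-envelope estimate of what data-wrangling time costs a BI team.
# The 40% share is the client-reported figure from the article; the team
# size and loaded salary below are hypothetical -- plug in your own.

def data_wrangling_cost(developers, loaded_salary, wrangling_share=0.40):
    """Annual cost of developer time spent navigating and cleaning data."""
    return developers * loaded_salary * wrangling_share

# Example: a 10-person BI team at a $120,000 loaded cost per developer.
cost = data_wrangling_cost(developers=10, loaded_salary=120_000)
print(f"${cost:,.0f} per year")  # $480,000 per year
```

Even with conservative inputs, the number tends to be large enough to justify a hard look at data-as-a-service.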
Wouldn’t it be cool if we had simple apples-to-apples comparisons of BI tool vendors? These days, vendors keep their performance numbers close to the vest, industry vendor bake-offs are usually more performance-based than functionally specific, and the industry analyst firms simply can’t shake the stigma that recommended vendors are also paying clients.
Wouldn’t it be cool if executives distinguished data governance from data management? We define data governance as the organizing framework for aligning strategy, defining objectives, and establishing policies for enterprise information. Data management is the tactical execution of data governance. It includes data quality processes, privacy and security administration, metadata management, data modeling and design, and other tactical, skills-rich activities. Using data governance synonymously with data management not only misses the point, but also prevents many companies from leaving the starting block. You can do data management without data governance – that’s where most companies are currently. But you can’t do data governance without data management, and it would be cool if managers recognized the symbiosis between the two and began investing.
Wouldn’t it be cool if the business focused less on data perfection, and more on systemic, process-driven data quality? All too often, we find ourselves in a room full of business users complaining about the data they get from IT. The definition of “good data” isn’t within arm’s reach – everyone has a different opinion of what the right answer is. The typical lack of data management rigor means that there’s no simple answer, or a simple process for defining data success metrics.
If management really wants to attack the data quality problem, it needs to climb down from the high horse of data perfection and ensure that data definitions, content, and acceptance criteria are well defined for individual systems as well as for the enterprise at large. Data management as a discipline helps companies get off the carousel of lousy data, letting IT work toward a set of defined goals. Adopting a business-focused and programmatic approach to data quality, and realizing that data quality automation is simply a means to that end, would be a good start.
Wouldn’t it be cool if business users realized that the only obstacle to data quality was their own lack of engagement and ownership?
Wouldn’t it be cool if we had to justify the continued existence of our systems? We’re all for “pre facto” ROI estimation, but what about measuring the ongoing value of an application, either in terms of cost savings or revenue generation?
We realize that having such a policy could be a double-edged sword for some companies; but, in most cases, it would bolster both the reputation and the funding of data warehouse and BI programs that must regularly, and in many cases unnecessarily, cede to the budget demands of operational systems’ maintenance activities and new release plans.
In the investment world, every mutual fund manager is measured. And the fund either grows or shrinks based upon the fund manager’s performance. When mutual fund giant Janus ran into trouble with regulators, many of its fund managers headed for the hills and its stock got battered. In the last seven years, Janus’ assets have shrunk by 50 percent. An investment isn’t a one-time-only event, but something that people continually monitor. So it should be with IT value measurement.
Wouldn’t it be cool if enterprise search and SOA were linked? Then building applications would involve nothing more than requesting reports from a search engine. This would eradicate the backlog of standard canned reports and remove much of the maintenance burden. The concept of business information without programming would become a reality. All too often, reports are duplicated across systems because people don’t know what’s already out there – and it’s well known that business intelligence tools are nothing more than a delivery mechanism into Excel. By linking search with data services, a user could leverage her personal “palette” of toolsets, simplifying data access and navigation. This would make new reports as simple as Excel and web mash-ups, thus liberating developers from the ongoing font-size, color, and visualization deliberations once and for all.
Wouldn’t it be cool if the business understood the strategic value of IT? If IT could engage the business consistently in new requirements conversations? If we could get our fingers all the way to the bottom of the peanut butter jar? If someone cleaned our garage while we were out to dinner? If movie stars were paid by weight? If “reboot” worked in real life?
‘Til then, we’ll keep on doing our day jobs.