This article originally appeared on the BeyeNETWORK.
In the same way that a farmer knows every inch of his land, a knowledge worker must fully grasp his or her knowledge space. The good farmer who understands his soil can often predict the weather and reach into his shed or barn for the tool the task requires. The good knowledge worker must likewise be able to create a knowledge space, navigate it, manipulate its contents, visualize results and use the right tool, at the right time, for the right task in the knowledge management process.
A knowledge space is a collection of documents—accessible content—that comprises the working environment. This is the land to till for the knowledge farmer of today. It must be both real and virtual, since it will often include documents from internal and external sources, in multiple media, containing both structured and unstructured data. It is the set of all probable sources that can be used to answer questions about a knowledge domain. One can choose a broad domain and create, for example, a national security knowledge space, an agricultural knowledge space or a criminal justice knowledge space. Or one can select a much narrower domain and generate, say, a Colombian narcotics knowledge space, an avian flu knowledge space or an al-Qaeda knowledge space.
While the knowledge space must allow the user to access certain transaction processes, as well as provide the functionality to address work flow and administrative issues, the principal objective of a knowledge space is to assist in obtaining business intelligence. Therefore, a knowledge space must have the ability to expand and adapt with the addition of new sources or links to other domains.
How is a knowledge space created? After a specific domain is chosen, cyberspace is harvested: internal and external sources accessible through the net (intranet, extranet or internet) as well as corporate legacy databases. Using taxonomies, thesauri or simple keyword collections specific to the chosen domain, browsers, spiders and other tools can identify, tag and, if necessary, extract the desired content. Data pumps or extraction tools are appropriate where structured data must be accessed and integrated locally. Similar tools enable the harvesting of unstructured data in the form of text, while other sensors become the capture mechanisms or entry points for unstructured data in other forms (e.g., imagery, audio, video) coming from multiple sources. Good analysis requires that the knowledge worker be able to access all sources of intelligence, hence the need for an easily expandable knowledge space.
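The identify-and-tag step above can be sketched in a few lines. This is only an illustration, not a production harvester: the domain keyword set and the sample documents are invented, and a real system would crawl live intranet and internet sources rather than an in-memory list.

```python
# Illustrative sketch: tagging harvested documents against a domain
# keyword collection (here, a hypothetical "avian flu" domain).
# Keywords and documents are invented for illustration only.

DOMAIN_KEYWORDS = {"avian", "flu", "h5n1", "outbreak", "poultry"}

def tag_document(doc_id: str, text: str) -> dict:
    """Attach simple keyword-based metadata tags to one document."""
    words = {w.strip(".,;:").lower() for w in text.split()}
    hits = sorted(words & DOMAIN_KEYWORDS)
    return {"id": doc_id, "tags": hits, "relevant": bool(hits)}

harvested = [
    ("doc-1", "WHO reports a new H5N1 outbreak affecting poultry farms."),
    ("doc-2", "Quarterly earnings rose on strong retail demand."),
]

knowledge_space = [tag_document(i, t) for i, t in harvested]
for doc in knowledge_space:
    print(doc["id"], doc["relevant"], doc["tags"])
```

A real implementation would replace the keyword set with a taxonomy or thesaurus and the string match with concept extraction, but the flow—harvest, tag with metadata, admit into the knowledge space—is the same.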
The main problem in business intelligence today is that to do the job well—and the stakes are very high in defense, intelligence or homeland security—the analyst must take advantage of all sources of intelligence, in any format, medium or language, and factor that data into his or her analysis. This means, of course, that he or she can no longer rely on the old approaches: selective reading and human-intensive pattern hunting in search of occasional insights. The knowledge worker of today, and even more so of the coming years, must have a workbench that allows the quick and effective collection, navigation and analysis of petabytes or exabytes of data. It must fuse structured and unstructured data and transparently allow operations on the content that will substantially increase the productivity and performance of that knowledge worker.
In the knowledge environment of the coming years, a user or analyst must be able to do efficiently the things discussed above: select a knowledge domain, create the relevant knowledge space, and collect the data using browsers, data pumps, spiders and other sensors. He or she must then be able to cleanse the content as needed, integrate and organize the knowledge space by tagging it with metadata, and navigate and manipulate that data as desired. Since the analyst must know what is in the knowledge space, tools will be necessary to find, select, summarize, abstract, sort, collate, compare, link and perform many other operations. As the user probes deeper into structured or quantitative data, the classic functionality of online analytical processing (OLAP) tools becomes essential: querying, computing, estimating, grouping, ordering, drill-downs, trend analysis and so on. And when faced with large numbers of facts, variables and dimensions, the analyst has little choice but to rely on effective visualization tools and techniques. Here the ability to produce tables, cross-tabulations, graphs and dashboards becomes important, as does the ability to project results onto maps or other composites. The payoff emerges when the data is interpreted correctly; hence the need to improve context, facilitate inference and increase the probability of being right.
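The OLAP-style grouping and drill-down operations mentioned above can be illustrated with a minimal sketch. This assumes nothing about any particular OLAP product: the tiny fact table is invented, and the roll-up is done with the standard library rather than a real cube engine.

```python
# Minimal sketch of OLAP-style grouping, ordering and drill-down
# using only the standard library. The fact table is invented.

from collections import defaultdict

facts = [
    {"region": "East", "product": "A", "sales": 120},
    {"region": "East", "product": "B", "sales": 80},
    {"region": "West", "product": "A", "sales": 200},
]

def rollup(rows, *dims):
    """Aggregate sales along the given dimensions (coarse to fine)."""
    totals = defaultdict(int)
    for r in rows:
        totals[tuple(r[d] for d in dims)] += r["sales"]
    return dict(totals)

# Summary level: total sales by region.
print(rollup(facts, "region"))
# Drill-down: the same facts, one dimension finer.
print(rollup(facts, "region", "product"))
```

The point of the sketch is the shape of the operation: the same fact table answers both the summary query and the drill-down simply by adding a dimension, which is exactly the navigation an analyst's workbench must make effortless.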
This argues for a workbench that includes tools that at the very least can:
- Harvest knowledge from the web or internal sources;
- Do federated searches or clustering;
- Search for concepts through content analysis, not just keywords;
- Classify or categorize documents based on concepts;
- Navigate multiple documents simultaneously;
- Summarize or abstract documents;
- Enable link analysis through developing entity relationships; and
- Facilitate geo-temporal analysis through time and place linking of entities.
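One item on the list above—link analysis through entity relationships—can be sketched very simply: entities that co-occur in the same document are linked, and repeated co-occurrence weights the edge. The entity lists below are invented examples; a real workbench would extract entities automatically from harvested text.

```python
# Hedged sketch of entity link analysis: entities co-occurring in a
# document are linked, with counts as edge weights. The documents'
# entity lists are invented for illustration.

from itertools import combinations
from collections import Counter

doc_entities = [
    ["Cartel X", "Port of Cartagena", "Cartel Y"],
    ["Cartel X", "Port of Cartagena"],
    ["Cartel Y", "Bogota"],
]

edges = Counter()
for ents in doc_entities:
    # Each unordered pair in a document strengthens one edge.
    for a, b in combinations(sorted(set(ents)), 2):
        edges[(a, b)] += 1

for (a, b), weight in edges.most_common(3):
    print(f"{a} -- {b}: {weight}")
```

Adding a timestamp and location to each document turns the same structure into the geo-temporal analysis on the list: the edges can then be filtered or animated by time and projected onto a map.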
This is just a partial list of the tools and functionalities the knowledge worker of the future will need. In addition, all of this must ride on the robust operating systems, enterprise application integration, ERP systems, enterprise portals, database engines and other software necessary for the enterprise architecture (EA) to perform effectively in a mission-critical environment.
After this environment has been designed and implemented, it must be managed. That means there must be metrics to assess its performance, and processes to collect the data that permit the calculation of those metrics. The organization must also be willing to manage the change this environment will generate, and its leaders must understand this and provide the vision that leads to change and to their enterprises becoming learning organizations.
The work environments of today are rapidly becoming knowledge environments. Their impact on organizations will be significant, and it is therefore very important, particularly in areas such as homeland security, that we get it right. This means an effective convergence of enterprise architecture, portals and business intelligence to create the enterprise knowledge spaces that will be harvested by tomorrow's workforce. We must implement the robust workbenches these analysts will need to operate in those knowledge spaces.
Dr. Barquin has been President of Barquin International, a consulting firm, since 1994. He specializes in developing information systems strategies, particularly data warehousing, customer relationship management, business intelligence and knowledge management, for public and private sector enterprises. He has consulted for the U.S. military, many government agencies, and international governments and corporations.
Dr. Barquin is a member of the E-Gov (Electronic Government) Advisory Board, and chair of its knowledge management conference series; member of the Digital Government Institute Advisory Board; and has been the Program Chair for E-Government and Knowledge Management programs at the Brookings Institution. He was also the co-founder and first president of The Data Warehousing Institute, and president of the Computer Ethics Institute. His PhD is from MIT. Dr. Barquin can be reached at email@example.com.