
Components of an information management strategy

David Loshin, President of Knowledge Integrity, Inc.

In my last blog post on financial services, healthcare and government, and the previous post on telecommunications and energy, we looked at two tracks of business drivers for deploying best practices in information management across the enterprise. First, there are common demands shared across many different industries, such as the need for actionable knowledge about customers and products to drive increased revenues and longer customer relationships. Second, there are operational and analytical needs specific to individual industries.

Each of these business drivers points to the need for increased agility and maturity in coupling well-defined information management practices with the technologies that compose an end-to-end information management framework. In the next two posts, we will review the components of an information management strategy.

Data integration
Data integration has become the lifeblood of the enterprise. Organizations continually recognize how critical it is to share data across business functions, and that suggests a continued need for increasing reliability, performance, and access speed for data integration, particularly in these fundamental capabilities:

  • Data accessibility – Organizations must support a vast landscape of legacy data systems, especially due to the desire to scan historical data assets for potential business value. One key aspect of data integration is accessibility, and the information management framework going forward must provide connectors to that wide variety of data sources, including file-based, tree-structured data sets, relational databases, and even streamed data sources.
  • Data transformation, exchange and delivery – Once data sets can be accessed from their original sources, the data integration framework must be able to efficiently move the data from source to target. There must be a capability to transform the data from its original format into one that is suited to the target, with a means of verifying that the data sets are appropriately packaged and delivered.
  • Data replication and change data capture – The need to regulate the accessibility and delivery of ever-growing data volumes within expected time frames is impeded by data delivery bottlenecks, especially in periodic extractions from source systems and loading into data warehouses. Data replication techniques enable rapid bulk transfers of large data sets. You can synchronize the process by trickle-feeding changes using a method known as “change data capture” that monitors system logs and triggers updates to the target systems as changes happen in the source.
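To make the change data capture idea concrete, here is a minimal sketch in Python using SQLite. It assumes a hypothetical change_log table in the source that records each insert, update and delete with a sequence number, and a customers table in the target whose id column is its primary key; those names, and the polling-style design, are illustrative only and not tied to any particular product.

```python
import sqlite3

# Hypothetical change data capture sketch: instead of re-extracting the whole
# source table, read only the rows logged in a change_log table since the last
# checkpoint and replay them on the target. Table and column names
# (change_log, customers, op, seq) are illustrative assumptions.

def apply_changes(source: sqlite3.Connection,
                  target: sqlite3.Connection,
                  last_seq: int) -> int:
    """Replay source changes recorded after last_seq onto the target."""
    rows = source.execute(
        "SELECT seq, op, id, name FROM change_log WHERE seq > ? ORDER BY seq",
        (last_seq,),
    ).fetchall()
    for seq, op, cust_id, name in rows:
        if op == "D":                      # delete captured in the source log
            target.execute("DELETE FROM customers WHERE id = ?", (cust_id,))
        else:                              # treat inserts and updates as an upsert
            target.execute(
                "INSERT INTO customers (id, name) VALUES (?, ?) "
                "ON CONFLICT(id) DO UPDATE SET name = excluded.name",
                (cust_id, name),
            )
        last_seq = seq
    target.commit()
    return last_seq                        # checkpoint for the next polling cycle
```

A scheduler would call apply_changes on a short interval and persist the returned sequence number, so the target trickles along with the source instead of waiting for the next bulk extract.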

Data virtualization
Efficient data integration can address some of the issues associated with increasing demands for accessing data from numerous sources and of varied structure and format. Yet complications remain in populating data warehouses in a timely, consistent manner that meets the performance requirements of consuming systems. When those impediments stem from the complexity of extracting and transforming data synchronously, you run the risk of timing and synchronization issues that lead to inconsistencies between the consumers of data and the original source systems.

One way to address this is by reducing the perceived data latency and asynchrony. Data virtualization techniques have evolved and matured to address these concerns. Data virtualization tools and techniques provide three key capabilities:

  • Federation – They enable federation of heterogeneous sources by mapping a standard or canonical data model to the access methods for the variety of sources comprising the federated model.
  • Caching – By managing accessed and aggregated data within a virtual (“cached”) environment, data virtualization reduces data latency, thereby increasing system performance.
  • Consistency – Together, federation and virtualization abstract the methods for access and combine them with the application of standards for data validation, cleansing and unification.

A virtualized data environment can simplify how end-user applications and business data analysts access data without forcing them to be aware of source data locations, data integration, or the application of business rules.
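As a rough illustration of those three capabilities, the sketch below federates a CSV extract and a relational table behind one canonical “customer” shape and keeps a simple cache of the combined result. The file, table and field names (crm_customers, cust_no, full_name) are assumptions made for the example, not part of any specific virtualization product.

```python
import csv
import sqlite3

# Minimal data virtualization sketch: a facade hides whether customer records
# come from a CSV extract or a relational table, maps both layouts onto one
# canonical model, and caches the combined result to reduce perceived latency.
# All source names and field names here are illustrative assumptions.

class CustomerView:
    def __init__(self, csv_path: str, db: sqlite3.Connection):
        self._csv_path = csv_path
        self._db = db
        self._cache = None                      # simple whole-result cache

    def all_customers(self):
        """Return customers from both sources in the canonical {id, name} form."""
        if self._cache is not None:
            return self._cache                  # serve from the virtual cache
        canonical = []
        with open(self._csv_path, newline="") as f:
            for row in csv.DictReader(f):       # CSV layout: cust_no, full_name
                canonical.append({"id": int(row["cust_no"]),
                                  "name": row["full_name"].strip()})
        for cust_id, name in self._db.execute("SELECT id, name FROM crm_customers"):
            canonical.append({"id": cust_id, "name": name.strip()})
        self._cache = canonical
        return self._cache

    def invalidate(self):
        self._cache = None                      # e.g. after a source refresh
```

Consumers work only with the canonical records returned by all_customers; where the rows live, how they are combined, and when the cache is refreshed stay inside the virtualization layer.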

In the next post, we will examine a few more components of an information management strategy.
