Performance is a Key SLA

In large-scale data virtualization environments, managing the performance of frequently accessed data sources can be a challenge, whether the goal is to minimize impact on operational systems or to meet service level agreements.

Caching, often implemented as materialized views, provides an excellent adjunct to query optimization without the higher cost and longer time to solution of traditional physical consolidation techniques.

Caching Flexibly Persists Data to Meet Service Level Needs

The Cisco Data Virtualization Platform provides a number of caching options and techniques. These let you flexibly persist queried data to meet data delivery service level agreements and protect source system performance.

  • Any View, Any Service, Any Procedure – Any CIS view, service, or procedure may be cached for future use, and all caches may be periodically and automatically refreshed to stay synchronized with their systems of record. Queries are processed against caches just as if you were querying the original data source.
  • Multiple Cache Repository Options – DB2, Microsoft SQL Server, MySQL, Netezza, Oracle, Sybase, Teradata, and Vertica.
  • Event-driven Refresh – Update cache based on defined business rules.
  • Scheduled Refresh – Update cache based on set times.
  • Incremental Refresh – Update partial cache based on triggered changes.
  • Manual Refresh – Update cache on demand as needed.
  • Native Data Source Load – Use target repository native load functions to load and refresh the cache.
  • Parallel Load – Use multiple threads to load the cache in parallel.
  • Centralized Caching – In centralized mode, CIS persists all cached data in a single cache repository such as Oracle, MySQL, Sybase, or Teradata. CIS’ centralized cache refresh is fully configurable, including timed refresh, event-based refresh (CJM or JMS message), incremental refresh, and forced refresh.
  • Distributed Caching – In distributed mode, users dedicate one or more CIS servers as edge servers and configure edge cache policies. Edge cache policies let you control which cache data is replicated from the central cache to the edge location, along with the refresh rules. Refresh can be time-based, event-based, or incremental. The edge cache should also be deployed in a relational database.
  • Clustered Deployment – For clustered deployments, a centralized cache reduces the need for each cluster node to re-fetch data from the source, significantly reducing the impact on production data sources.
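The refresh options above can be illustrated with a small sketch. This is not the CIS API; the class and method names below are hypothetical, and the time-to-live check stands in for a scheduled refresh, assuming the cache holds keyed rows loaded from a system of record.

```python
import time


class MaterializedViewCache:
    """Illustrative view cache supporting manual (forced), scheduled
    (TTL-based), and incremental refresh. All names here are assumptions
    for illustration, not part of any CIS interface."""

    def __init__(self, load_fn, ttl_seconds=300):
        self._load_fn = load_fn      # fetches (key, row) pairs from the system of record
        self._ttl = ttl_seconds      # stand-in for a scheduled refresh interval
        self._rows = {}              # cached rows keyed by primary key
        self._loaded_at = None       # time of the last full load

    def refresh(self):
        """Manual/forced refresh: reload the full result set from source."""
        self._rows = dict(self._load_fn())
        self._loaded_at = time.monotonic()

    def refresh_incremental(self, changes):
        """Incremental refresh: apply only triggered changes.
        `changes` is an iterable of (key, row) pairs; row=None deletes."""
        for key, row in changes:
            if row is None:
                self._rows.pop(key, None)
            else:
                self._rows[key] = row

    def query(self):
        """Serve from the cache, reloading first if the TTL has lapsed
        (a simple substitute for a timed refresh schedule)."""
        if self._loaded_at is None or time.monotonic() - self._loaded_at > self._ttl:
            self.refresh()
        return dict(self._rows)


# Usage: the first query triggers a full load; a later change in the
# system of record is propagated as a delta rather than a full reload.
source = {1: "widget", 2: "gadget"}
cache = MaterializedViewCache(lambda: source.items(), ttl_seconds=60)
cache.query()                               # full load on first access
source[3] = "gizmo"                         # change lands in the source
cache.refresh_incremental([(3, "gizmo")])   # propagate just the delta
```

An event-driven refresh would call `refresh_incremental` from a message listener (for example, on a JMS message) instead of on a timer; the cache logic itself is unchanged.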


Selected Examples

  • Caching Product Master Data to Optimize the Supply Chain – To optimize manufacturing efficiency and customer delivery, this leading PC provider uses CIS to combine product master data with customer demand data from their order entry systems, product inventory data from their distribution systems, and work-in-process data from their manufacturing operations, giving supply chain planners a global view of supply and demand. They improved query performance on the master data hub by caching a subset of the product master data, the minimum required to identify supplies and demands. This allowed CIS to deliver the up-to-the-minute information required to fulfill orders more quickly, accelerate revenue, and increase inventory turns.