Data Warehousing - 34 Kimball Subsystems


About

This page uses the 34 subsystems of the Kimball data warehouse as a table of contents and links each of them to a page on this website.

For Kimball, the “ETL” process has four major components: extracting, cleaning and conforming, delivering, and managing the ETL environment.

Each of these components and all 34 subsystems contained therein are explained below.

Components

Extracting

The initial subsystems interface with the source systems to access the required data. The extract-related ETL subsystems include:

  • Data Quality - Data Profiling (subsystem 1) — Explores a data source to determine its fit for inclusion as a source and the associated cleaning and conforming requirements.
  • change data capture (subsystem 2) — Isolates the changes that occurred in the source system to reduce the ETL processing burden (see the sketch after this list).
  • extract system (subsystem 3) — Extracts and moves source data into the data warehouse environment for further processing.
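
As a minimal illustration of change data capture (subsystem 2), the sketch below pulls only rows modified since the last successful extract, using a stored high-water mark. The table and column names (orders, updated_at, etl_watermark) are hypothetical and assume a last-modified timestamp exists on the source table.

```python
import sqlite3
from datetime import datetime, timezone

def extract_changed_rows(conn: sqlite3.Connection, table: str = "orders"):
    """Watermark-based CDC: extract only rows changed since the last successful run."""
    cur = conn.cursor()

    # Read the last high-water mark recorded for this table (hypothetical control table).
    cur.execute("SELECT last_extracted_at FROM etl_watermark WHERE table_name = ?", (table,))
    row = cur.fetchone()
    watermark = row[0] if row else "1970-01-01T00:00:00"

    # Pull only rows whose last-modified timestamp is newer than the watermark.
    cur.execute(f"SELECT * FROM {table} WHERE updated_at > ? ORDER BY updated_at", (watermark,))
    changed_rows = cur.fetchall()

    # Advance the watermark so the next run skips rows already extracted.
    new_watermark = datetime.now(timezone.utc).isoformat()
    cur.execute(
        "INSERT OR REPLACE INTO etl_watermark (table_name, last_extracted_at) VALUES (?, ?)",
        (table, new_watermark),
    )
    conn.commit()
    return changed_rows
```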

Cleaning and conforming data

These critical steps are where the ETL system adds value to the data. The other activities, extracting and delivering data, are obviously important, but they simply move and load the data. The cleaning and conforming subsystems change data and enhance its value to the organization. In addition, these subsystems should be architected to create metadata used to diagnose source-system problems. Such diagnoses can eventually lead to business process reengineering initiatives to address the root causes of dirty data and to improve data quality over time.

The ETL data cleaning process is often expected to fix dirty data, yet at the same time the data warehouse is expected to provide an accurate picture of the data as it was captured by the organization's production systems (see related article, “Data Stewardship 101: First Step to Quality and Consistency”). It's essential to strike the proper balance between these conflicting goals. The key is to develop an ETL system capable of correcting, rejecting or loading data as is, and then highlighting, with easy-to-use structures, the modifications, standardizations, rules and assumptions of the underlying cleaning apparatus so the system is self-documenting.

The five major subsystems in the cleaning and conforming step include:

  • data cleansing system (subsystem 4) — Implements data quality checks (screens) that test incoming data against defined rules and route violations for correction or rejection.
  • error event tracking (subsystem 5) — Captures all error events that are vital inputs to data quality improvement (a combined sketch of subsystems 4 and 5 follows this list).
  • audit_dimension_creation (subsystem 6) — Attaches metadata to each fact table as a dimension. This metadata is available to BI applications for visibility into data quality.
  • deduplication (subsystem 7) — Eliminates redundant members of core dimensions, such as customers or products. May require integration across multiple sources and application of survivorship rules to identify the most appropriate version of a duplicate row.
  • Data Quality - Metrics (subsystem 8) — Enforces common dimension attributes across conformed master dimensions and common metrics across related fact tables (see related article, “Kimball University: Data Integration for Real People”).
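
To make the cleaning step concrete, here is a minimal sketch of quality screens that check incoming rows and record violations as error events, in the spirit of subsystems 4 and 5. The rule set and the error-event structure are assumptions for illustration, not Kimball's prescribed schema.

```python
from datetime import datetime, timezone

# Hypothetical screens: each rule returns True when the row passes.
SCREENS = {
    "customer_id_present": lambda row: row.get("customer_id") not in (None, ""),
    "amount_non_negative": lambda row: row.get("amount", 0) >= 0,
}

def screen_rows(rows, batch_id):
    """Apply quality screens; return clean rows plus error events for the audit trail."""
    clean, error_events = [], []
    for row in rows:
        failures = [name for name, rule in SCREENS.items() if not rule(row)]
        if failures:
            for rule_name in failures:
                error_events.append({
                    "batch_id": batch_id,
                    "screen": rule_name,
                    "row": row,
                    "logged_at": datetime.now(timezone.utc).isoformat(),
                })
        else:
            clean.append(row)
    return clean, error_events

# Example: one good row, one row that violates both screens.
clean, errors = screen_rows(
    [{"customer_id": "C1", "amount": 10.0}, {"customer_id": "", "amount": -5.0}],
    batch_id=42,
)
```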

Delivering: Prepare for presentation

The primary mission of the ETL system is the handoff of the dimension and fact tables in the delivery step.

There is considerable variation in source data structures and cleaning and conforming logic, but the delivery processing techniques are more defined and disciplined. Careful and consistent use of these techniques is critical to building a successful dimensional data warehouse that is reliable, scalable and maintainable.

Many of these subsystems focus on dimension table processing. Dimension tables are the heart of the data warehouse. They provide the context for the fact tables and hence for all the measurements. For many dimensions, the basic load plan is relatively simple: perform basic transformations to the data to build dimension rows to be loaded into the target presentation table.
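A minimal sketch of such a basic load plan is shown below: incoming source rows are lightly transformed, assigned a surrogate key when the natural key is new, and appended to the target dimension. The column names and the in-memory representation of the dimension table are hypothetical simplifications.

```python
def load_dimension(existing_dim, source_rows):
    """Basic dimension load: transform rows, assign surrogate keys, append new members."""
    # Index existing members by natural key so only genuinely new rows are inserted.
    by_natural_key = {row["customer_nk"]: row for row in existing_dim}
    next_sk = max((row["customer_sk"] for row in existing_dim), default=0) + 1

    for src in source_rows:
        # Basic transformations: trim and standardize attribute values.
        natural_key = src["customer_id"].strip()
        name = src["name"].strip().title()

        if natural_key not in by_natural_key:
            new_row = {"customer_sk": next_sk, "customer_nk": natural_key, "name": name}
            existing_dim.append(new_row)
            by_natural_key[natural_key] = new_row
            next_sk += 1
    return existing_dim
```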

Preparing fact tables is certainly important as they hold the key measurements of the business that users want to see. Fact tables can be very large and time consuming to load. However, preparing fact tables for presentation is typically more straightforward.
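For example, a periodic snapshot fact table (one of the three fact table types listed below) can be prepared by aggregating transaction-grain rows up to the snapshot grain. The sketch below uses hypothetical column names and an account-by-month grain.

```python
from collections import defaultdict

def build_monthly_snapshot(transaction_rows):
    """Aggregate transaction-grain rows to a monthly periodic snapshot (account x month)."""
    totals = defaultdict(lambda: {"transaction_count": 0, "total_amount": 0.0})
    for row in transaction_rows:
        # Snapshot grain: one row per account per month (month derived from the date).
        key = (row["account_sk"], row["date"][:7])  # 'YYYY-MM'
        totals[key]["transaction_count"] += 1
        totals[key]["total_amount"] += row["amount"]

    return [
        {"account_sk": account_sk, "month": month, **measures}
        for (account_sk, month), measures in totals.items()
    ]

# Example usage with two transactions in the same month.
snapshot = build_monthly_snapshot([
    {"account_sk": 1, "date": "2024-01-05", "amount": 100.0},
    {"account_sk": 1, "date": "2024-01-20", "amount": 50.0},
])
```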

The delivery systems in the ETL architecture consist of:

  • Dimensional Data Modeling - Slowly Changing Dimensions (SCD) Manager (subsystem 9) — Implements logic for slowly changing dimension attributes.
  • surrogate_key_generator (subsystem 10) — Produces surrogate keys independently for every dimension.
  • hierarchy_manager (subsystem 11) — Delivers multiple, simultaneous, embedded hierarchical structures in a dimension.
  • special_dimensions_manager (subsystem 12) — Creates placeholders in the ETL architecture for repeatable processes supporting an organization's specific dimensional design characteristics, including standard dimensional design constructs such as junk dimensions, mini-dimensions and behavior tags.
  • fact_table_builders (subsystem 13) — Construct the three primary types of fact tables: transaction grain, periodic snapshot and accumulating snapshot.
  • surrogate_key_pipeline (subsystem 14) — Replaces operational natural keys in the incoming fact table record with the appropriate dimension surrogate keys (see the sketch after this list).
  • multi-valued_bridge_table_builder (subsystem 15) — Builds and maintains bridge tables to support multi-valued (many-to-many) relationships.
  • late_arriving_data_handler (subsystem 16) — Applies special modifications to the standard processing procedures to deal with late-arriving fact and dimension data.
  • dimension_manager (subsystem 17) — Centralized authority who prepares and publishes conformed dimensions to the data warehouse community.
  • fact_table_provider (subsystem 18) — Owns the administration of one or more fact tables and is responsible for their creation, maintenance and use.
  • aggregate_builder (subsystem 19) — Builds and maintains aggregates to be used seamlessly with aggregate navigation technologies for enhanced query performance.
  • olap_cube_builder (subsystem 20) — Feeds data from the relational dimensional schema to populate OLAP cubes.
  • data_propagation_manager (subsystem 21) — Prepares conformed, integrated data from the data warehouse presentation server for delivery to other environments for special purposes.
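
As a small illustration of the surrogate key pipeline (subsystem 14), the sketch below swaps the natural keys on incoming fact rows for dimension surrogate keys and diverts rows whose keys cannot be resolved (for example, for late-arriving dimension handling). The key maps and column names are hypothetical.

```python
def run_surrogate_key_pipeline(fact_rows, customer_key_map, product_key_map):
    """Replace operational natural keys on fact rows with dimension surrogate keys."""
    loadable, rejected = [], []
    for row in fact_rows:
        customer_sk = customer_key_map.get(row["customer_id"])
        product_sk = product_key_map.get(row["product_id"])
        if customer_sk is None or product_sk is None:
            # Unresolvable key: divert the row instead of loading it with a broken reference.
            rejected.append(row)
            continue
        loadable.append({
            "customer_sk": customer_sk,
            "product_sk": product_sk,
            "amount": row["amount"],
        })
    return loadable, rejected

# Example usage with simple natural-key-to-surrogate-key maps.
loadable, rejected = run_surrogate_key_pipeline(
    [{"customer_id": "C1", "product_id": "P9", "amount": 25.0}],
    customer_key_map={"C1": 101},
    product_key_map={"P9": 501},
)
```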

Managing the ETL environment

A data warehouse will not be a success until it can be relied upon as a dependable source for business decision making. To achieve this goal, the ETL system must constantly work toward fulfilling three criteria:

  • System Property - Reliability. The ETL processes must run consistently to provide data on a timely basis that is trustworthy at any level of detail.
  • System - Availability. The data warehouse must meet its service level agreements. The warehouse should be up and available as promised.
  • Manageability. A successful data warehouse is never done; it constantly grows and changes along with the business. Thus, ETL processes need to evolve gracefully as well.

The ETL management subsystems are the key architectural components that help achieve the goals of reliability, availability and manageability. Operating and maintaining a data warehouse in a professional manner is not much different than other systems operations: follow standard best practices, plan for disaster and practice (see related article, “Don't Forget the Owner's Manual”). Many of you will be very familiar with the following requisite management subsystems:

  • job_scheduler (subsystem 22) — Reliably manages the ETL execution strategy, including the relationships and dependencies between ETL jobs (see the sketch after this list).
  • backup_system (subsystem 23) — Backs up the ETL environment for recovery, restart and archival purposes.
  • recovery_and_restart (subsystem 24) — Processes for recovering the ETL environment or restarting a process in the event of failure.
  • Version control (subsystem 25) — Takes snapshots for archiving and recovering all the logic and metadata of the ETL pipeline.
  • version_migration (subsystem 26) — Migrates a complete version of the ETL pipeline from development into test and finally into production.
  • workflow_monitor (subsystem 27) — Ensures that the ETL processes are operating efficiently and that the warehouse is being loaded on a consistently timely basis.
  • sorting (subsystem 28) — Serves the fundamental, high-performance ETL processing role.
  • lineage and dependency (subsystem 29) — Identifies the source of a data element and all intermediate locations and transformations for that data element or, conversely, starts with a specific data element in a source table and reveals all activities performed on that data element.
  • Problem escalation (subsystem 30) — Support structure that elevates ETL problems to appropriate levels for resolution.
  • parallelizing and pipelining (subsystem 31) — Enables the ETL system to automatically leverage multiple processors or grid computing resources to deliver within time constraints.
  • security (subsystem 32) — Ensures authorized access to (and a historical record of access to) all ETL data and metadata by individual and role.
  • compliance manager (subsystem 33) — Supports the organization's compliance requirements typically through maintaining the data's chain of custody and tracking who had authorized access to the data.
  • metadata repository (subsystem 34) — Captures ETL metadata including the process metadata, technical metadata and business metadata, which make up much of the metadata of the total DW/BI environment.
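
To illustrate how a job scheduler (subsystem 22) can honor dependencies between ETL jobs, here is a minimal sketch that runs jobs in topological order. The job names and dependency graph are invented for the example; a real scheduler would also handle retries, alerting and calendars.

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph: each job maps to the set of jobs it depends on.
JOBS = {
    "extract_orders": set(),
    "load_customer_dim": {"extract_orders"},
    "load_product_dim": {"extract_orders"},
    "load_sales_fact": {"load_customer_dim", "load_product_dim"},
    "build_aggregates": {"load_sales_fact"},
}

def run_all(jobs, runner):
    """Execute ETL jobs so that every job runs only after all of its dependencies."""
    for job in TopologicalSorter(jobs).static_order():
        runner(job)

# Example runner that just prints; a real one would launch the corresponding ETL job.
run_all(JOBS, lambda name: print(f"running {name}"))
```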




