Data Quality - Data (cleaning|scrubbing|wrangling)


About

Data cleansing or data scrubbing is the act of detecting and correcting (or removing) corrupt or inaccurate records from a record set, table, or database.

Used mainly in databases, the term refers to identifying incomplete, incorrect, inaccurate, or irrelevant parts of the data and then replacing, modifying, or deleting this dirty data.

The goal is to get data ready for analysis.

After cleansing, a data set will be consistent with other similar data sets in the system. The inconsistencies detected or removed may originally have been caused by different data dictionary definitions of similar entities in different stores, by user entry errors, or by corruption in transmission or storage.
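
For illustration, here is a minimal rule-based cleansing sketch in Python/pandas. The file name, column names, and the reference list of valid states are hypothetical; a real pipeline would apply many more data rules.

```python
# Minimal rule-based cleansing sketch (hypothetical listings.csv with state, city, rent columns).
import pandas as pd

VALID_STATES = {"NY", "CA", "TX"}  # assumed data dictionary of accepted state codes

df = pd.read_csv("listings.csv")

# Normalize entry-error artifacts: stray whitespace and inconsistent casing.
df["state"] = df["state"].str.strip().str.upper()
df["city"] = df["city"].str.strip().str.title()

# Detect dirty records: missing fields, states outside the reference list, non-positive rents.
dirty = df["state"].isna() | ~df["state"].isin(VALID_STATES) | (df["rent"] <= 0)

# Correct where possible, otherwise delete: here the unrepairable records are simply dropped.
cleaned = df[~dirty]
print(f"{dirty.sum()} dirty records detected and removed out of {len(df)}")
```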

Tool

Probabilistic

Probabilistic scripts for automating common-sense tasks - The idea is to clean a data set (state, city, and rent) automatically through a Bayesian probabilistic script that encodes prior domain knowledge declaratively, such as:

  • More populous cities (and states) appear more often
  • Per-bedroom rents in a city and state tend to be similar across listings
  • Typed names of cities may, rarely, contain a typo.
  • Even more rarely, typed names of cities may contain multiple typos.

The process is:

  • the data are fitted to obtain prior knowledge over cities, states, and rents
  • the domain rules are implemented as follows (see the sketch after this list)
    • city and state choices are proportional to their frequency in the data set
    • typos are generated via a sequence of coin flips with a 1/10 chance per flip (a one-letter typo has a probability of 1/10, a two-letter typo a probability of 1/100)
    • rents get a small amount of noise per state and city
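
A minimal Python sketch of this idea is shown below. It is not the actual PClean/Gen script; the candidate cities, their fitted frequencies, and the helper names are hypothetical. It only performs a maximum-a-posteriori correction of a typed city name, combining the frequency prior with the 1/10-per-typo likelihood described above.

```python
# Hypothetical sketch of Bayesian typo correction: prior = city frequency, likelihood = typo model.
from collections import Counter

# Prior knowledge fitted from the data set: more frequent cities are more probable a priori.
city_counts = Counter({"New York": 40, "Newark": 8, "Boston": 12})  # made-up frequencies
total = sum(city_counts.values())

TYPO_P = 0.10  # one typo has probability 1/10, two typos 1/100, per the coin-flip rule above

def edit_distance(a: str, b: str) -> int:
    """Plain dynamic-programming Levenshtein distance (number of typos between two strings)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def score(typed: str, candidate: str) -> float:
    """Unnormalized posterior: frequency prior times (1/10)^(number of typos)."""
    prior = city_counts[candidate] / total
    likelihood = TYPO_P ** edit_distance(typed, candidate)
    return prior * likelihood

def clean_city(typed: str) -> str:
    """Replace the typed city name by its most probable correction."""
    return max(city_counts, key=lambda c: score(typed, c))

print(clean_city("Nwe York"))  # -> "New York": the frequent city wins despite the two-letter typo
```

A fuller script would also model the per-bedroom rent noise and the state/city dependency, and infer corrections jointly across listings, which is what the probabilistic tooling referenced below is designed for.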

Script: Probabilistic Script PClean

More … https://probcomp.github.io/Gen/





