
Data Reduction in Data Mining

Last Updated : 02 Feb, 2023

Prerequisite – Data Mining 
The method of data reduction produces a condensed representation of the original data that is much smaller in volume yet preserves the quality of the original data. 

Introduction:

Data reduction is a technique used in data mining to reduce the size of a dataset while still preserving the most important information. This can be beneficial in situations where the dataset is too large to be processed efficiently, or where the dataset contains a large amount of irrelevant or redundant information.

There are several different data reduction techniques that can be used in data mining, including:

  1. Data Sampling: This technique involves selecting a subset of the data to work with, rather than using the entire dataset. This can be useful for reducing the size of a dataset while still preserving the overall trends and patterns in the data (a minimal sampling sketch follows this list).
  2. Dimensionality Reduction: This technique involves reducing the number of features in the dataset, either by removing features that are not relevant or by combining multiple features into a single feature.
  3. Data Compression: This technique involves applying lossless or lossy compression algorithms to reduce the size of a dataset.
  4. Data Discretization: This technique involves converting continuous data into discrete data by partitioning the range of possible values into intervals or bins.
  5. Feature Selection: This technique involves selecting a subset of features from the dataset that are most relevant to the task at hand.

It is important to note that data reduction involves a trade-off between the size of the data and the accuracy of the results: the more aggressively the data is reduced, the more information may be lost, and the less accurate and generalizable the resulting model can become.
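
As a quick illustration of the first technique, the snippet below sketches simple random sampling with pandas; the file name, its contents, and the 10% sampling fraction are illustrative assumptions, not part of the original article.

import pandas as pd

# Load a (hypothetical) large dataset.
df = pd.read_csv("transactions.csv")

# Simple random sampling without replacement: keep 10% of the rows,
# with a fixed seed for reproducibility.
sample = df.sample(frac=0.10, random_state=42)

print(f"reduced from {len(df)} to {len(sample)} rows")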

In conclusion, data reduction is an important step in data mining, as it can help to improve the efficiency and performance of machine learning algorithms by reducing the size of the dataset. However, it is important to be aware of the trade-off between the size and accuracy of the data, and carefully assess the risks and benefits before implementing it.
Methods of data reduction: 
The main methods are explained below. 

1. Data Cube Aggregation: 
This technique aggregates data into a simpler form. For example, imagine that the information gathered for your analysis for the years 2012 to 2014 includes your company's revenue for every quarter. If the analysis concerns annual sales rather than quarterly figures, the data can be aggregated so that the result records total sales per year instead of per quarter, producing a much smaller summary. 
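
A minimal sketch of this quarterly-to-annual aggregation with pandas; the revenue figures below are made up for illustration.

import pandas as pd

quarterly = pd.DataFrame({
    "year":    [2012, 2012, 2012, 2012, 2013, 2013, 2013, 2013],
    "quarter": ["Q1", "Q2", "Q3", "Q4", "Q1", "Q2", "Q3", "Q4"],
    "sales":   [224, 408, 350, 586, 390, 412, 500, 610],
})

# Aggregate away the quarter dimension: one row per year instead of four.
annual = quarterly.groupby("year", as_index=False)["sales"].sum()
print(annual)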

2. Dimension reduction: 
Whenever we come across data that is only weakly relevant, we keep just the attributes required for our analysis. Dimension reduction shrinks the data size by eliminating outdated or redundant features. 

  • Step-wise Forward Selection – 
    The selection begins with an empty set of attributes. At each step, the best of the remaining original attributes is added to the set, judged by its relevance to the target; in statistics, this relevance is commonly measured with a p-value. 

    Suppose the data set contains the following attributes, a few of which are redundant. 

Initial attribute Set: {X1, X2, X3, X4, X5, X6}
Initial reduced attribute set:  { }

Step-1: {X1}
Step-2: {X1, X2}
Step-3: {X1, X2, X5}

Final reduced attribute set: {X1, X2, X5} 
  • Step-wise Backward Selection – 
    This selection starts with the complete set of attributes in the original data and, at each step, eliminates the worst remaining attribute from the set. 

    Suppose the data set contains the following attributes, a few of which are redundant. 

Initial attribute Set: {X1, X2, X3, X4, X5, X6}
Initial reduced attribute set:  {X1, X2, X3, X4, X5, X6}

Step-1: {X1, X2, X3, X4, X5}
Step-2: {X1, X2, X3, X5}
Step-3: {X1, X2, X5}

Final reduced attribute set: {X1, X2, X5} 
  • Combination of Forward and Backward Selection – 
    This approach combines both strategies, adding the best remaining attribute and removing the worst one at each step, which saves time and makes the process faster. A code sketch of step-wise selection follows this list. 
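
As a minimal sketch of step-wise selection, the snippet below uses scikit-learn's SequentialFeatureSelector in both directions; the synthetic dataset and the logistic-regression estimator are illustrative assumptions, not part of the original example.

# Step-wise attribute selection sketch (assumes scikit-learn is installed).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

# Synthetic data set with 6 attributes, only some of them informative.
X, y = make_classification(n_samples=200, n_features=6,
                           n_informative=3, n_redundant=2, random_state=0)
estimator = LogisticRegression(max_iter=1000)

# Forward selection: start from an empty set and add the best attribute
# at each step until 3 attributes are selected.
forward = SequentialFeatureSelector(estimator, n_features_to_select=3,
                                    direction="forward").fit(X, y)
print("Forward selection kept:", forward.get_support())

# Backward selection: start from all attributes and drop the worst one
# at each step until 3 remain.
backward = SequentialFeatureSelector(estimator, n_features_to_select=3,
                                     direction="backward").fit(X, y)
print("Backward selection kept:", backward.get_support())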

3. Data Compression: 
The data compression technique reduces the size of files using different encoding mechanisms (e.g., Huffman encoding and run-length encoding). Compression methods can be divided into two types: 

  • Lossless Compression – 
    Encoding techniques such as run-length encoding achieve a simple yet effective reduction in data size. Lossless data compression uses algorithms that restore the precise original data from the compressed data (a minimal run-length encoding sketch follows this list). 
  • Lossy Compression – 
    Methods such as the discrete wavelet transform and PCA (principal component analysis) are examples of this kind of compression. For example, the JPEG image format uses lossy compression, yet the decompressed image remains meaningfully equivalent to the original. In lossy data compression, the decompressed data may differ from the original data but is still useful enough to retrieve information from. 
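
Below is a minimal sketch of lossless run-length encoding, one of the encoding schemes named above; the (character, count) pair format is an illustrative choice, not a standard.

from itertools import groupby

def rle_encode(s: str):
    # Collapse each run of identical characters into a (char, count) pair.
    return [(char, len(list(run))) for char, run in groupby(s)]

def rle_decode(pairs):
    # Expand each (char, count) pair back into the original run.
    return "".join(char * count for char, count in pairs)

data = "AAAABBBCCD"
encoded = rle_encode(data)          # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
assert rle_decode(encoded) == data  # lossless: the original is fully restored
print(encoded)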

4. Numerosity Reduction: 
In this reduction technique, the actual data is replaced with a smaller representation. Parametric methods fit a mathematical model to the data, such as a regression model, so that only the model parameters need to be stored rather than the data itself. Non-parametric methods instead store reduced representations such as clusters, histograms, or samples.
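
A sketch of parametric numerosity reduction: fit a simple linear model and store only its two parameters instead of all the raw points. The synthetic data below is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 1_000)
y = 3.0 * x + 5.0 + rng.normal(scale=0.5, size=x.size)  # 1,000 raw values

# Replace the 1,000 (x, y) pairs with just two numbers: slope and intercept.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"stored parameters: slope={slope:.2f}, intercept={intercept:.2f}")

# The model reconstructs approximate y values on demand.
y_approx = slope * x + intercept
print("max reconstruction error:", np.abs(y - y_approx).max())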

5. Discretization & Concept Hierarchy Operation: 
Data discretization techniques divide attributes of a continuous nature into data with intervals, replacing many raw attribute values with labels for a small number of intervals. As a result, mining results can be presented in a concise and easily understandable way. 

  • Top-down discretization – 
    If you first choose one or a few points (so-called breakpoints or split points) to divide the whole range of values, and then repeat this method on the resulting intervals, the process is known as top-down discretization, also called splitting. 
  • Bottom-up discretization – 
    If you first consider all the distinct values as potential split points and then discard some of them by merging neighboring values into intervals, the process is called bottom-up discretization, also called merging. 

Concept Hierarchies: 
A concept hierarchy reduces the data size by collecting low-level concepts (such as the numeric age 43) and replacing them with high-level concepts (categorical labels such as middle-aged or senior). 

For numeric data, the following techniques can be applied: 

  • Binning 
    Binning is the process of converting numerical variables into categorical counterparts. The number of categories depends on the number of bins specified by the user (see the sketch after this list). 
  • Histogram analysis – 
    Like binning, histogram analysis partitions the values of an attribute X into disjoint ranges called buckets (or brackets). There are several partitioning rules: 
    1. Equal-frequency partitioning: each bucket holds roughly the same number of values from the data set. 
    2. Equal-width partitioning: the range of values is divided into intervals of a fixed width based on the number of bins, e.g., 0-20, 20-40, and so on. 
    3. Clustering: Grouping similar data together. 
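
A minimal sketch of equal-width and equal-frequency binning with pandas; the ages and the category labels are illustrative assumptions, echoing the concept-hierarchy example above.

import pandas as pd

ages = pd.Series([5, 13, 22, 29, 38, 43, 51, 60, 74, 88])

# Equal-width partitioning: pd.cut splits the value range into
# fixed-width bins.
equal_width = pd.cut(ages, bins=3, labels=["young", "middle-aged", "senior"])

# Equal-frequency partitioning: pd.qcut puts roughly the same number
# of values in each bin.
equal_freq = pd.qcut(ages, q=3, labels=["low", "mid", "high"])

print(pd.DataFrame({"age": ages,
                    "equal_width": equal_width,
                    "equal_freq": equal_freq}))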
       

Advantages and Disadvantages of Data Reduction in Data Mining: 

Data reduction in data mining can have a number of advantages and disadvantages.

Advantages:

  1. Improved efficiency: Data reduction can help to improve the efficiency of machine learning algorithms by reducing the size of the dataset. This can make it faster and more practical to work with large datasets.
  2. Improved performance: Data reduction can help to improve the performance of machine learning algorithms by removing irrelevant or redundant information from the dataset. This can help to make the model more accurate and robust.
  3. Reduced storage costs: Data reduction can help to reduce the storage costs associated with large datasets by reducing the size of the data.
  4. Improved interpretability: Data reduction can help to improve the interpretability of the results by removing irrelevant or redundant information from the dataset.

Disadvantages:

  1. Loss of information: Data reduction can result in a loss of information if important data is removed during the reduction process.
  2. Impact on accuracy: Data reduction can impact the accuracy of a model, as reducing the size of the dataset can also remove important information that is needed for accurate predictions.
  3. Impact on interpretability: Data reduction can make it harder to interpret the results, as removing irrelevant or redundant information can also remove context that is needed to understand the results.
  4. Additional computational costs: Data reduction can add additional computational costs to the data mining process, as it requires additional processing time to reduce the data.

In conclusion, data reduction has both advantages and disadvantages. It can improve the efficiency and performance of machine learning algorithms by reducing the size of the dataset, but it can also result in a loss of information and make the results harder to interpret. It is important to weigh the pros and cons of data reduction and carefully assess the risks and benefits before implementing it.
     

