Parallel-DT

  • Team members: Eremia Bogdan, Andrei Minca, Alexandru Sorici
  • Project Description: Classification is an important data mining problem. One of the most popular techniques used for classification is the decision tree (DT). Since the datasets used in data mining problems are usually very large, computationally efficient and scalable algorithms are highly desirable. The project's goal is therefore to parallelize the decision tree induction process. A shared-memory programming model using OpenMP is used for this task.
  • Project Implementation Details: Details

Serial DT process

Most of the existing induction-based algorithms, including C4.5, the one analysed in this project, use Hunt's method as the basic algorithm. Here is a recursive description of Hunt's method for constructing a decision tree from a set T of training cases with classes denoted {C1, C2, C3, ..., Ck} (a code sketch of the recursion follows the three cases below):

  • Case 1: T contains cases that all belong to a single class Cj. The decision tree for T is a leaf identifying class Cj.
  • Case 2: T contains cases that belong to a mixture of classes. A test is chosen, based on a single attribute, that has one or more mutually exclusive outcomes {O1, O2, O3, ..., On}. Note that in many implementations n is chosen to be 2, which leads to a binary decision tree. T is partitioned into subsets T1, T2, ..., Tn, where Ti contains all the cases in T that have outcome Oi of the chosen test. The decision tree for T consists of a decision node identifying the test and one branch for each possible outcome. The same tree-building machinery is applied recursively to each subset of training cases.
  • Case 3: T contains no cases. The decision tree for T is a leaf, but the class to be associated with the leaf must be determined from information other than T. For example, C4.5 chooses it to be the most frequent class at the parent of this node.
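
The recursion maps directly onto code. Below is a minimal, self-contained C++ sketch of Hunt's method under simplifying assumptions (categorical integer attributes, plain information gain as the test-selection criterion); the Case and Node types and the helper names are illustrative placeholders, not C4.5's actual data structures:

    #include <cmath>
    #include <map>
    #include <memory>
    #include <vector>

    struct Case { std::vector<int> attrs; int cls; };  // categorical attributes

    struct Node {
        int cls = -1;        // class label if this node is a leaf
        int test_attr = -1;  // attribute tested if this is a decision node
        std::map<int, std::unique_ptr<Node>> children;  // outcome -> subtree
    };

    static double entropy(const std::vector<Case>& T) {
        std::map<int, int> n;
        for (const auto& c : T) n[c.cls]++;
        double h = 0;
        for (const auto& [cls, cnt] : n) {
            double p = double(cnt) / T.size();
            h -= p * std::log2(p);
        }
        return h;
    }

    static int majority_class(const std::vector<Case>& T) {
        std::map<int, int> n;
        for (const auto& c : T) n[c.cls]++;
        int best = -1, best_cnt = -1;
        for (const auto& [cls, cnt] : n)
            if (cnt > best_cnt) { best_cnt = cnt; best = cls; }
        return best;
    }

    std::unique_ptr<Node> hunt(const std::vector<Case>& T, int parent_majority) {
        auto node = std::make_unique<Node>();
        if (T.empty()) { node->cls = parent_majority; return node; }   // Case 3
        bool pure = true;
        for (const auto& c : T) if (c.cls != T[0].cls) pure = false;
        if (pure) { node->cls = T[0].cls; return node; }               // Case 1
        // Case 2: choose the attribute with the highest information gain
        int best = -1; double best_gain = 0;
        for (size_t a = 0; a < T[0].attrs.size(); ++a) {
            std::map<int, std::vector<Case>> parts;
            for (const auto& c : T) parts[c.attrs[a]].push_back(c);
            double rem = 0;
            for (const auto& [v, Ti] : parts)
                rem += Ti.size() * entropy(Ti) / T.size();
            double gain = entropy(T) - rem;
            if (gain > best_gain) { best_gain = gain; best = int(a); }
        }
        if (best < 0) { node->cls = majority_class(T); return node; }  // no useful split
        node->test_attr = best;
        std::map<int, std::vector<Case>> parts;
        for (const auto& c : T) parts[c.attrs[best]].push_back(c);
        for (auto& [v, Ti] : parts)
            node->children[v] = hunt(Ti, majority_class(T));
        return node;
    }

Calling hunt(T, majority_class(T)) on the whole training set returns the root of the tree; the parent_majority argument is what makes Case 3 resolvable.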

a small training data set

steps in creating the decision tree

Outlook and Humidity attributes

Parallel approaches

  • Synchronous Tree Construction - Depth-First Expansion Strategy - the approach that we implemented

In this approach, all processors construct a decision tree synchronously by sending and receiving class distribution information of the local data. The major steps of the approach are (an OpenMP sketch of one expansion step follows the list):

  • select a node to expand according to a decision tree expansion strategy (e.g. depth-first or breadth-first), and call that node the current node; at the beginning, the root node is selected as the current node
  • for each data attribute, collect the class distribution information of the local data at the current node
  • exchange the local class distribution information using a global reduction among processors
  • simultaneously compute the entropy gain of each attribute at each processor and select the best attribute for child node expansion
  • depending on the desired branching factor of the tree, create child nodes for the corresponding partitions of the attribute's values, and split the training cases accordingly
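
In our shared-memory OpenMP setting the "processors" are threads and the global reduction becomes a merge of per-thread tallies. The sketch below illustrates one expansion step under assumed names and data layout (Record, NUM_ATTRS, NUM_VALUES and NUM_CLASSES are placeholders, not the project's actual code): each thread counts class distributions over its share of the cases at the current node, the tallies are merged in a critical section, and the attribute with the highest information gain is selected.

    #include <omp.h>
    #include <cmath>
    #include <vector>

    constexpr int NUM_ATTRS = 4, NUM_VALUES = 3, NUM_CLASSES = 2;  // assumed sizes
    struct Record { int attr[NUM_ATTRS]; int cls; };

    // counts[a][v][c]: cases at the current node that have value v of
    // attribute a and belong to class c
    using Counts = long[NUM_ATTRS][NUM_VALUES][NUM_CLASSES];

    int best_attribute(const std::vector<Record>& T) {
        Counts counts = {};                           // shared, zero-initialised

        #pragma omp parallel
        {
            Counts local = {};                        // thread-private tallies
            #pragma omp for nowait
            for (int i = 0; i < (int)T.size(); ++i)
                for (int a = 0; a < NUM_ATTRS; ++a)
                    local[a][T[i].attr[a]][T[i].cls]++;
            #pragma omp critical                      // the "global reduction"
            for (int a = 0; a < NUM_ATTRS; ++a)
                for (int v = 0; v < NUM_VALUES; ++v)
                    for (int c = 0; c < NUM_CLASSES; ++c)
                        counts[a][v][c] += local[a][v][c];
        }

        // The split that leaves the least weighted class entropy has the
        // highest information gain, so minimising the remainder suffices.
        int best = 0; double best_rem = 1e300;
        for (int a = 0; a < NUM_ATTRS; ++a) {
            double rem = 0;
            for (int v = 0; v < NUM_VALUES; ++v) {
                long nv = 0;
                for (int c = 0; c < NUM_CLASSES; ++c) nv += counts[a][v][c];
                for (int c = 0; c < NUM_CLASSES; ++c)
                    if (counts[a][v][c] > 0)
                        rem -= counts[a][v][c] * std::log2((double)counts[a][v][c] / nv);
            }
            if (rem < best_rem) { best_rem = rem; best = a; }
        }
        return best;
    }

The nowait clause lets each thread start merging as soon as its share is counted; for larger attribute tables the critical section could be replaced by per-attribute locks or an OpenMP array reduction.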

synchronous tree construction

  • Partitioned Tree Construction

In this approach, whenever feasible, different processors work on different parts of the classification tree (a task-based OpenMP sketch follows).
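
With OpenMP this kind of partitioning is naturally expressed with tasks: once a node's cases have been split, each child subtree can be expanded by whichever thread is free. The toy program below shows only the idiom; expand() stands in for the real per-node work and all names are illustrative:

    #include <omp.h>
    #include <cstdio>

    struct Node { int depth; Node* left = nullptr; Node* right = nullptr; };

    // Stand-in for real node expansion: split, then hand each subtree to a task.
    void expand(Node* n, int max_depth) {
        if (n->depth >= max_depth) return;       // leaf: stop expanding
        n->left  = new Node{n->depth + 1};       // cleanup omitted for brevity
        n->right = new Node{n->depth + 1};
        #pragma omp task                         // each subtree is independent work
        expand(n->left, max_depth);
        #pragma omp task
        expand(n->right, max_depth);
        #pragma omp taskwait                     // wait for both children
    }

    int main() {
        Node root{0};
        #pragma omp parallel
        #pragma omp single                       // one thread seeds the root
        expand(&root, 10);
        std::printf("expanded a tree of depth 10 in parallel\n");
    }

Compared with the synchronous scheme there is no per-level communication, but the load can become unbalanced when subtrees have very different sizes, which is why the literature also describes hybrid schemes that switch between the two approaches.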

partitioned tree construction

Project activity

Steps:

  • 12 Nov
    - Project status: still researching :)
    - ToDo: choose the best approach for the serial code that we have
