Changes between Version 35 and Version 36 of Parallel-DT


Timestamp: Jan 18, 2010, 9:23:14 PM
Author: andrei.minca

In this approach, all processors construct the decision tree synchronously by sending and receiving class distribution information about their local data. The major steps of the approach are listed below (a code sketch follows the list):
 * select a node to expand according to a decision tree expansion strategy (e.g. Depth-First or Breadth-First), and call it the current node; at the beginning, the root node is selected as the current node
 * for each data attribute, collect the class distribution information of the local data at the current node
 * exchange the local class distribution information among processors using a global reduction
 * simultaneously compute the entropy gain of each attribute at each processor and select the best attribute for child node expansion
 * depending on the desired branching factor of the tree, create child nodes for the same number of partitions of attribute values, and split the training cases accordingly
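
As a concrete illustration of the collect, exchange, and compute steps, here is a minimal sketch of one expansion step, assuming mpi4py and NumPy, categorical attributes encoded as integers in 0..n_values-1, and class labels in 0..n_classes-1; the names (best_split, local_X, local_y) are illustrative, not from the original source:

{{{#!python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def entropy(counts):
    """Entropy of a vector of class counts."""
    total = counts.sum()
    if total == 0:
        return 0.0
    p = counts[counts > 0] / total
    return float(-(p * np.log2(p)).sum())

def best_split(local_X, local_y, n_values, n_classes):
    """One synchronous expansion step at the current node.

    Every processor tallies the class distribution of its local rows per
    (attribute, value) pair, the tallies are summed across processors with
    a global reduction, and each processor then computes identical entropy
    gains and therefore selects the same best attribute."""
    n_attrs = local_X.shape[1]

    # Collect class distribution information of the local data.
    local_counts = np.zeros((n_attrs, n_values, n_classes))
    for a in range(n_attrs):
        for v, c in zip(local_X[:, a], local_y):
            local_counts[a, v, c] += 1

    # Global reduction: afterwards every processor holds the node's full
    # class distribution, exchanged as small count summaries only.
    global_counts = np.empty_like(local_counts)
    comm.Allreduce(local_counts, global_counts, op=MPI.SUM)

    # Entropy gains, computed redundantly on every processor.
    class_totals = global_counts[0].sum(axis=0)  # class counts at the node
    total = class_totals.sum()
    if total == 0:
        return 0  # empty node: nothing to split on, pick attribute 0
    node_entropy = entropy(class_totals)
    gains = np.empty(n_attrs)
    for a in range(n_attrs):
        split_entropy = sum(
            global_counts[a, v].sum() / total * entropy(global_counts[a, v])
            for v in range(n_values)
        )
        gains[a] = node_entropy - split_entropy
    return int(np.argmax(gains))  # identical result on every processor
}}}

Note that the reduction exchanges only the count summaries, never the training rows themselves, so the per-step communication volume depends on the number of attributes, values, and classes rather than on the size of the local data.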
[[Image(SyncronusTreeConstruction-DepthFirstExpansionStrategy.jpg)]]