Changes between Version 44 and Version 45 of Parallel-DT
- Timestamp:
- Jan 18, 2010, 10:30:41 PM (14 years ago)
Parallel-DT
In this approach, all processors construct a decision tree synchronously by sending and receiving class distribution information of their local data. Major steps of the approach:

 * select a node to expand according to a decision tree expansion strategy (e.g. Depth-First or Breadth-First), and call it the current node. At the beginning, the root node is selected as the current node
 * for each data attribute, collect class distribution information of the local data at the current node
 * exchange the local class distribution information using a global reduction among processors

…

'''Step 1''' processors in ''Pn'' cooperate to expand node ''n''

'''Step 2''' once node ''n'' is expanded into successor nodes ''n1'', ''n2'', ..., ''nk'', the processor group ''Pn'' is also partitioned, and the successor nodes are assigned to processors as follows:

'''Case 1''': if the number of successor nodes is greater than |''Pn''|
 1. partition the successors into |''Pn''| groups such that the total number of training cases corresponding to each node group is roughly equal. Assign each processor to one node group.
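The two key mechanisms above, the global reduction of class distributions and the Case-1 partitioning of successor nodes into roughly equal groups, can be sketched as follows. This is a minimal single-process simulation, not MPI code: the function names, the greedy balancing heuristic, and all data values are illustrative assumptions, since the text does not fix a concrete partitioning algorithm.

```python
import heapq
from collections import Counter

def global_reduce(local_counts):
    """Element-wise sum of per-processor class-distribution counters,
    emulating an all-reduce: afterwards every processor holds the same
    global counts and can evaluate the split criterion identically."""
    total = Counter()
    for c in local_counts:
        total.update(c)
    return total

def partition_successors(case_counts, num_groups):
    """Greedy heuristic (one possible choice): repeatedly assign the
    largest remaining successor node to the currently lightest group,
    so the groups' training-case totals come out roughly equal."""
    heap = [(0, g) for g in range(num_groups)]  # (group total, group index)
    heapq.heapify(heap)
    groups = [[] for _ in range(num_groups)]
    for node, cases in sorted(case_counts.items(), key=lambda kv: -kv[1]):
        total, g = heapq.heappop(heap)
        groups[g].append(node)
        heapq.heappush(heap, (total + cases, g))
    return groups

# Illustrative data: three processors' local class counts at the current node.
counts = global_reduce([Counter(yes=2, no=1), Counter(yes=1, no=3), Counter(no=2)])
print(dict(counts))  # {'yes': 3, 'no': 6}

# Illustrative Case 1: four successor nodes, |Pn| = 2 processor groups.
print(partition_successors({"n1": 40, "n2": 35, "n3": 15, "n4": 10}, 2))
# [['n1', 'n4'], ['n2', 'n3']]  -- group totals 45 and 50, roughly equal
```

After the reduction every processor sees the same global distribution, which is what lets them all choose the same split without sharing the raw training records.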