Version 6 (modified by sinziana.mazilu, 14 years ago)



Overview of VNSim - the VANET simulator

VNSim is a discrete event simulator: the simulation time advances with a fixed time resolution after the simulator code for the current moment of time has been executed. The VANET application consists of an event queue which can hold four types of events: Send, Receive, GPS and Cleanup.

A Send event for a specified node triggers the node's procedure responsible for preparing a message, and the event is then inserted into the event queue. The engine of the simulator checks this queue at a regular, fixed interval and, for each Send event found, creates one Receive event (if the Send event is unicast) or one Receive event for each node within the wireless range of the sender (if it is broadcast). When a Receive event is created, the engine first checks whether another Receive event already exists for the same receiver at the current simulation time; if so, it adds the new event to that receiver's event list. This simplifies the receive procedure: the receiver only has to scan its Receive event list and choose one event according to a receive power threshold and the interference level.
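The Send-to-Receive expansion described above can be sketched as a small discrete-event queue. This is an illustrative reconstruction, not VNSim's actual code: the node model is a one-dimensional position array and the range test is a simple distance check.

```java
import java.util.*;

// Minimal sketch of the event mechanism described above: a time-ordered
// queue in which a SEND event expands into one RECEIVE event per node in
// wireless range. Node positions and the range test are simplified
// placeholders, not VNSim's actual network model.
public class EventQueueSketch {
    enum Type { SEND, RECEIVE }

    static class Event implements Comparable<Event> {
        final long time; final Type type; final int src; final int dst; // dst < 0 = broadcast
        Event(long time, Type type, int src, int dst) {
            this.time = time; this.type = type; this.src = src; this.dst = dst;
        }
        public int compareTo(Event o) { return Long.compare(time, o.time); }
    }

    private final PriorityQueue<Event> queue = new PriorityQueue<>();
    private final double[] nodeX;   // 1-D node positions, for a trivial range test
    private final double range;

    EventQueueSketch(double[] nodeX, double range) { this.nodeX = nodeX; this.range = range; }

    void schedule(Event e) { queue.add(e); }

    // Run until the queue is empty; a SEND expands into RECEIVEs for all
    // neighbors in range; a RECEIVE is simply collected here (a real node
    // would check the power threshold and interference level instead).
    List<Event> run() {
        List<Event> delivered = new ArrayList<>();
        while (!queue.isEmpty()) {
            Event e = queue.poll();
            if (e.type == Type.SEND) {
                for (int n = 0; n < nodeX.length; n++) {
                    if (n != e.src && Math.abs(nodeX[n] - nodeX[e.src]) <= range) {
                        schedule(new Event(e.time + 1, Type.RECEIVE, e.src, n));
                    }
                }
            } else {
                delivered.add(e);
            }
        }
        return delivered;
    }
}
```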

The GPS event is scheduled at a regular time interval for each node, accurately simulating the way a real VANET application collects GPS data periodically. The mobility module periodically updates the position of each vehicle node according to the vehicular mobility model, which takes into account vehicle interactions, traffic rules and various driver behaviors.

The network model takes into account the positions and wireless ranges of the vehicles, medium access, and the propagation of radio waves according to the Two Ray Ground model. The simulator delivers a message to all receiving nodes in the wireless range using an optimized local search through the nodes. This is possible because the map locations of the nodes are indexed with a PeanoKey mechanism, which allows scanning only the geographical area around a node.
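The idea behind such space-filling-curve keys can be illustrated with a Z-order (Morton) key, a close relative of the Peano key: interleaving the bits of a node's grid coordinates yields a single integer, so nodes close on the map tend to be close in sorted key order, which enables a local search over a sorted index. This is an illustration of the technique only; VNSim's actual PeanoKey encoding may differ in detail.

```java
// Sketch of space-filling-curve indexing: encode a (x, y) grid cell as a
// Z-order (Morton) key by interleaving coordinate bits. Sorting nodes by
// this key clusters geographic neighbors, enabling a local search around
// a node. Illustrative only; not VNSim's exact PeanoKey scheme.
public class MortonSketch {
    // Interleave the low 16 bits of x and y into one 32-bit key
    // (x bits go to even positions, y bits to odd positions).
    static int mortonKey(int x, int y) {
        int key = 0;
        for (int i = 0; i < 16; i++) {
            key |= ((x >> i) & 1) << (2 * i);
            key |= ((y >> i) & 1) << (2 * i + 1);
        }
        return key;
    }
}
```

Cells that are adjacent on the map, such as (0,0), (1,0), (0,1) and (1,1), map to the consecutive keys 0, 1, 2 and 3, which is what makes a windowed scan over the sorted keys approximate a geographic neighborhood search.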

For a more accurate network model, the node's protocol stack is taken into account, so the simulator can reproduce the packet encapsulation process by adding the corresponding headers to the message. The transport layer is UDP, the IP network layer is replaced by a geographical routing and addressing scheme, and the MAC layer is that of 802.11b.
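The encapsulation step amounts to each layer prepending its header to the payload it receives from the layer above. The sketch below shows that flow; the UDP header size is the real 8 bytes, but the geographic-routing and MAC header sizes here are illustrative stand-ins, not the actual formats.

```java
// Sketch of layered packet encapsulation as described above: the message
// passes down the stack (UDP -> geographic routing -> 802.11b MAC) and
// each layer prepends its header. Header sizes for the geo and MAC layers
// are illustrative placeholders.
public class EncapsulationSketch {
    static byte[] prepend(byte[] header, byte[] payload) {
        byte[] out = new byte[header.length + payload.length];
        System.arraycopy(header, 0, out, 0, header.length);
        System.arraycopy(payload, 0, out, header.length, payload.length);
        return out;
    }

    static byte[] encapsulate(byte[] message) {
        byte[] udp = new byte[8];    // UDP header is 8 bytes
        byte[] geo = new byte[12];   // illustrative geographic-routing header
        byte[] mac = new byte[24];   // illustrative 802.11-style MAC header
        return prepend(mac, prepend(geo, prepend(udp, message)));
    }
}
```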


The VANET simulator we have developed is a discrete event simulator: the simulation time advances with a fixed time resolution after the application code for the current moment of simulation time has been executed. More specifically, at every moment of the simulation time, all the current events are pulled from an event queue and handled in random order.

The event queue can hold three types of events: send, receive and GPS. A send event for a specified node triggers the node's procedure responsible for preparing a message; it also schedules the corresponding receive event(s) for the receiver(s) the simulator decides to deliver the message to, according to the network module. A receive event is associated either with a single node or with a group of nodes (broadcast), and it calls the appropriate handler in each of the receiving nodes. The GPS event is scheduled at a regular time interval for each node, in order to simulate the way a real VANET application collects GPS data periodically. Besides these three types of events, the mobility module periodically updates the position of each vehicle node according to the vehicular mobility model, which takes into account vehicle interactions (passing, car-following patterns, etc.), traffic rules and various driver behaviors.
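The fixed-resolution main loop described here — pull all events due at the current time and handle them in random order — can be sketched as follows. The class and method names are illustrative, and handlers are modeled as plain Runnables.

```java
import java.util.*;

// Sketch of the fixed-resolution simulation loop described above: at each
// tick, every event due at the current simulation time is pulled from the
// queue and handled in random order. Names are illustrative.
public class SimLoopSketch {
    // simulation time -> handlers due at that time (stand-in for the event queue)
    private final TreeMap<Long, List<Runnable>> byTime = new TreeMap<>();
    private final Random rng = new Random(42);   // seeded for reproducibility

    void schedule(long time, Runnable handler) {
        byTime.computeIfAbsent(time, t -> new ArrayList<>()).add(handler);
    }

    // Advance the clock in fixed steps until no events remain.
    void run(long step) {
        long now = 0;
        while (!byTime.isEmpty()) {
            List<Runnable> due = byTime.remove(now);
            if (due != null) {
                Collections.shuffle(due, rng);   // random order within one tick
                for (Runnable r : due) r.run();
            }
            now += step;
        }
    }
}
```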

The main advantage of this architecture is that the simulator can execute (or emulate) the real application's code without significant changes. In practice, we have managed to simulate the TrafficView application [1] on each node by calling the appropriate methods of the application when the corresponding events occur. Some minor changes were needed, because the original application was multithreaded, which would be a serious limitation for the simulator.

Java Thread Pools

Threads are a very important aspect of Java, but creating large numbers of threads can negatively impact program performance. Discover the advantages of thread pools, which let you limit the total number of running threads while still assigning tasks to each of them.

A thread allows Java to perform more than one task at a time. In much the same way as multitasking allows your computer to run more than one program at a time, multithreading allows your program to run more than one task at a time. Depending on the type of program, multithreading can significantly increase the performance of a program.

When to Use Multithreading

There are two primary cases in which multithreading can increase performance. The first is when the program runs on a multiprocessor computer. A multiprocessor computer works by using its multiple processors to handle threads simultaneously; if your program uses only the single thread that every program begins with, multiple processors do it little good, because the computer has no way to divide the program among the processors.

The second type of program that greatly benefits from multithreading is one that spends a great deal of time waiting for outside events. One example is a Web crawler, which must visit a Web page and then visit all of the links on that page. When crawling a large site, your program must examine a considerable number of pages, and requesting a Web page can take several seconds even on a broadband connection. That is a considerable amount of time for a computer to wait for each page, and if the crawler has many pages to visit, these seconds really add up. It would be much better for the crawler to request a large number of Web pages and then wait for all of them at the same time. For example, the program may use 10 different threads to request 10 different Web pages; it is then waiting for 10 pages rather than just one. Because the time spent waiting is idle, the program can wait for a large number of pages before performance degrades, and because the pages are waited for in parallel, the whole process takes only a fraction of the time it would take if the pages were fetched one by one.
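The parallel-waiting argument can be demonstrated with a short sketch. To keep it self-contained, the network fetch is simulated with Thread.sleep rather than real I/O; the class and method names are illustrative, not from the article's listings.

```java
import java.util.*;

// Sketch of waiting for several slow "pages" in parallel, as described
// above. The fetch is simulated with Thread.sleep; a real crawler would
// block on network I/O instead. Waiting for N pages in N threads takes
// roughly the time of one fetch, not N fetches.
public class ParallelWaitSketch {
    // Simulated fetch: blocks for the given time, then returns a result.
    static String fetch(String url, long millis) throws InterruptedException {
        Thread.sleep(millis);
        return "contents of " + url;
    }

    // Fetch every URL in its own thread and collect the results.
    static List<String> fetchAll(List<String> urls, long millisEach) throws InterruptedException {
        List<String> results = Collections.synchronizedList(new ArrayList<>());
        List<Thread> threads = new ArrayList<>();
        for (String url : urls) {
            Thread t = new Thread(() -> {
                try { results.add(fetch(url, millisEach)); }
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
            t.start();
            threads.add(t);
        }
        for (Thread t : threads) t.join();   // total wait ~ millisEach, not urls.size() * millisEach
        return results;
    }
}
```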

Why a Thread Pool?

When programming the crawler from the previous section, a problem soon presents itself: how many threads to use. A crawler may have to visit tens of thousands of pages, and you certainly do not want to create tens of thousands of threads, because each thread imposes a certain amount of overhead. If the number of threads grows too large, the computer spends all of its time switching between threads rather than executing them. To solve this problem, you create a thread pool. The thread pool is given some fixed number of threads to use and assigns its tasks to those threads; as threads finish old tasks, new ones are assigned. The program thus uses a fixed number of threads instead of continually creating new ones. When this article was written there was no thread-pooling feature built into Java (modern Java provides one in java.util.concurrent), so thread pooling had to be implemented by the programmer. I will now show you my thread pool.
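For context, the fixed-pool idea the article implements by hand is available out of the box in modern Java via java.util.concurrent. This short sketch shows the same pattern with the standard ExecutorService; it is a present-day alternative, not the article's own code.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// The fixed-size-pool idea with the standard library: a pool of poolSize
// threads works through `tasks` submitted Runnables, reusing threads
// instead of creating one per task.
public class BuiltInPoolDemo {
    public static int countWork(int tasks, int poolSize) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> { done.incrementAndGet(); });   // trivial stand-in task
        }
        pool.shutdown();                                      // accept no new tasks
        pool.awaitTermination(10, TimeUnit.SECONDS);          // wait for queued tasks to finish
        return done.get();
    }
}
```

shutdown() plus awaitTermination() plays the role of the article's complete() call: it blocks until every assigned task has run.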

Implementing the Thread Pool

The main class file that makes up my thread pool is the ThreadPool.java source file (this source file can be seen in Listing 1 at the end of the article). The listing is well documented and should allow you to understand the details of the program; I will now explain its general flow. The thread pool contains an array of WorkerThread objects. These objects are the individual threads that make up the pool; they start and stop as work arrives for them. If there is more work than there are WorkerThreads, the work backlogs until a WorkerThread frees up. When you first create a new ThreadPool object, the WorkerThreads are initially paused, waiting for work. You assign work to the ThreadPool using the assign method: any class that implements the Runnable interface can be passed to it. The assign method places the object into the assignments array, from which it is picked up by a waiting thread.
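The structure just described — worker threads blocking on a shared assignments list, an assign method accepting any Runnable — can be sketched from scratch with wait/notify. The names below are illustrative; the article's actual ThreadPool.java is in Listing 1.

```java
import java.util.LinkedList;

// A minimal from-scratch pool in the spirit of the ThreadPool described
// above: a fixed set of worker threads blocks on a shared list (the
// article's "assignments array") and runs whatever Runnable arrives.
// Names are illustrative, not the article's exact code.
public class MiniThreadPool {
    private final LinkedList<Runnable> assignments = new LinkedList<>();
    private volatile boolean stopped = false;

    public MiniThreadPool(int size) {
        for (int i = 0; i < size; i++) {
            Thread worker = new Thread(this::workLoop);
            worker.setDaemon(true);   // don't keep the JVM alive for idle workers
            worker.start();
        }
    }

    // Hand a task to the pool; a waiting worker will pick it up.
    public synchronized void assign(Runnable task) {
        assignments.addLast(task);
        notifyAll();
    }

    private void workLoop() {
        while (true) {
            Runnable task;
            synchronized (this) {
                while (assignments.isEmpty() && !stopped) {
                    try { wait(); } catch (InterruptedException e) { return; }
                }
                if (assignments.isEmpty()) return;   // stopped and drained
                task = assignments.removeFirst();
            }
            task.run();                              // run outside the lock
        }
    }

    public synchronized void stop() { stopped = true; notifyAll(); }
}
```

Running the task outside the synchronized block matters: holding the pool's lock while a task executes would serialize the workers and defeat the purpose of the pool.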

Knowing when the thread pool has completed its task can be complex, and several things must be checked to determine whether the pool is completely done. First, the assignments array must be empty. However, an empty assignments array does not mean that the thread pool is done: there may still be threads inside the pool executing tasks that were previously assigned. To determine whether the thread pool is done, I have provided the Done class (shown in Listing 2), which is used internally by the thread pool. All you have to do to make use of it is assign your tasks to the ThreadPool and call the complete method to wait for the ThreadPool to finish. The Done class determines when no threads are still running. It has two methods that the worker threads call to track their progress: when a worker thread begins, it calls the Done class's workerBegin method, and when a worker thread completes, it calls workerEnd. These two methods allow the Done class to determine when no thread is currently running. Most likely, you will not interact with the Done class directly; you will simply assign your tasks to the ThreadPool and wait for it to complete. In the next section, I will show an example of how the whole thread pool fits together.
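The bookkeeping the Done class performs can be sketched as a small activity counter: workerBegin and workerEnd adjust a count, and complete blocks until work has both started and drained to zero. This is a reconstruction of the described behavior, not the article's Listing 2; today java.util.concurrent's CountDownLatch or Phaser cover the same need.

```java
// Sketch of the Done class's bookkeeping as described above: an activity
// counter adjusted by workerBegin/workerEnd, with complete() blocking
// until at least one worker has started and all have finished.
// Reconstruction only; not the article's Listing 2.
public class DoneSketch {
    private int active = 0;         // workers currently running
    private boolean started = false; // has any worker ever begun?

    public synchronized void workerBegin() { active++; started = true; notifyAll(); }
    public synchronized void workerEnd()   { active--; notifyAll(); }

    // Block the caller until work has started and drained to zero.
    public synchronized void complete() throws InterruptedException {
        while (!started || active > 0) wait();
    }
}
```

The `started` flag guards against the race where complete() is called before any worker has had a chance to call workerBegin, which would otherwise return immediately on a count of zero.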