Bounded lag distributed discrete event simulation.
The problem of distributed event-driven simulation has not yet found a commonly accepted solution. This paper presents another attack on the problem. The proposed simulation algorithm is based on the bounded lag restriction, which means that the difference in simulated time between events processed concurrently is bounded from above by a known constant. The algorithm also exploits the minimal propagation delays and the delays caused by non-preemptive states of the simulated system. The algorithm can be carried out on a special-purpose distributed message-passing computer. Assuming a reasonable timing and spatial distribution of the simulated events, if ν processing elements execute the algorithm in parallel, then an average of O(log ν) instructions of one processing element suffices for processing one event.
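The core of the bounded lag restriction can be illustrated with a small sequential sketch (the names `lag_bound` and `handle` are illustrative, not the paper's notation, and the per-round loop stands in for what the algorithm would execute concurrently on separate processing elements):

```python
import heapq

def bounded_lag_simulate(initial_events, handle, lag_bound):
    """Process timestamped events in rounds. In each round, only events
    whose simulated time lies within lag_bound of the earliest pending
    event are eligible, so events processed "concurrently" (within one
    round) never differ in simulated time by more than lag_bound."""
    queue = list(initial_events)           # entries are (time, event_id)
    heapq.heapify(queue)
    rounds = []
    while queue:
        floor = queue[0][0]                # earliest pending simulated time
        batch = []
        # Collect every event inside the window [floor, floor + lag_bound).
        while queue and queue[0][0] < floor + lag_bound:
            batch.append(heapq.heappop(queue))
        # These events could be handled in parallel; here we simply loop.
        for t, ev in batch:
            for new in handle(t, ev):      # handler may schedule new events
                heapq.heappush(queue, new)
        rounds.append(batch)
    return rounds
```

For example, with events at simulated times 0, 1, 5, 6 and a lag bound of 2, the first round processes the events at times 0 and 1, and the second round processes those at times 5 and 6.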