Low Latency Complex Event Processing on Parallel Hardware

Cugola, Gianpaolo; Margara, Alessandro
2012-01-01

Abstract

Most complex information systems are event-driven: each part of the system reacts to events happening in the other parts, potentially generating new events. Complex Event Processing (CEP) engines, in charge of interpreting, filtering, and combining primitive events to identify higher-level composite events according to a set of rules, are the new breed of Message Oriented Middleware being proposed today to better support event-driven interactions. A key requirement for CEP engines is low-latency processing, even in the presence of complex rules and large numbers of incoming events. In this paper we investigate how parallel hardware may speed up CEP processing. In particular, we consider the most common operators offered by existing rule languages (i.e., sequences, parameters, and aggregates); we consider different algorithms to process rules built from such operators; and we discuss how they can be implemented on a multi-core CPU and on CUDA, a widespread architecture for general-purpose programming on GPUs. Our analysis shows that the use of GPUs can bring impressive speedups in the presence of complex rules. On the other hand, it shows that multi-core CPUs scale better with the number of rules. Our conclusion is that an advanced CEP engine should leverage a multi-core CPU for processing the simplest rules, using the GPU as a coprocessor devoted to the most complex ones.
Keywords: Complex Event Processing; Parallel Hardware; Multi-core CPUs; General Purpose GPU Computing
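The abstract names three classes of rule operators (sequences, parameters, and aggregates) without defining them. The following C++ sketch is not taken from the paper: it uses a hypothetical stock-trading rule ("a Sell of stock X follows a Buy of the same stock X within 5 seconds, and the average Buy price for X in that window exceeds a threshold") to illustrate what each operator class contributes when a rule is evaluated against an event history.

// Minimal, self-contained sketch (not the authors' engine) of evaluating one
// hypothetical CEP rule over an in-memory event history. It marks where the
// three operator classes mentioned in the abstract come into play.
#include <cstdio>
#include <string>
#include <vector>

struct Event {
    std::string type;   // "Buy" or "Sell" (hypothetical primitive events)
    std::string stock;  // parameter shared between the two events
    double price;       // price attribute, used by the aggregate
    long   ts;          // timestamp in milliseconds
};

// Returns true if the composite event fires for the given Sell event.
bool ruleFires(const std::vector<Event>& history, const Event& sell,
               long windowMs, double avgThreshold) {
    double sum = 0.0;
    int count = 0;
    bool seqMatched = false;
    for (const Event& e : history) {
        if (e.type != "Buy") continue;
        if (e.ts > sell.ts || sell.ts - e.ts > windowMs) continue; // time window
        if (e.stock != sell.stock) continue;  // parameter: same stock in both events
        seqMatched = true;                    // sequence: a Buy precedes the Sell
        sum += e.price;                       // aggregate: average Buy price
        ++count;
    }
    return seqMatched && count > 0 && (sum / count) > avgThreshold;
}

int main() {
    std::vector<Event> history = {
        {"Buy", "ACME", 10.0, 1000},
        {"Buy", "ACME", 14.0, 2000},
        {"Buy", "OTHR",  9.0, 2500},
    };
    Event sell{"Sell", "ACME", 15.0, 4000};
    if (ruleFires(history, sell, 5000, 11.0))
        std::puts("composite event detected");
    return 0;
}

Intuitively, it is this per-rule scan over the history of past events that grows expensive for complex rules, and that kind of data-parallel work is what a GPU coprocessor can accelerate, while a multi-core CPU remains better suited to handling many simple rules concurrently.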
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/637114
Citations
  • Scopus: 44
  • ISI (Web of Science): 35