Esper - Event Stream and Complex Event Processing for Java

Reference Documentation

4.5.0


Table of Contents

Preface
1. Technology Overview
1.1. Introduction to CEP and event stream analysis
1.2. CEP and relational databases
1.3. The Esper engine for CEP
1.4. Required 3rd Party Libraries
2. Event Representations
2.1. Event Underlying Java Objects
2.2. Event Properties
2.2.1. Escape Characters
2.3. Dynamic Event Properties
2.4. Fragment and Fragment Type
2.5. Plain-Old Java Object Events
2.5.1. Java Object Event Properties
2.5.2. Property Names
2.5.3. Constants and Enumeration
2.5.4. Parameterized Types
2.5.5. Setter Methods for Indexed and Mapped Properties
2.5.6. Known Limitations
2.6. java.util.Map Events
2.6.1. Overview
2.6.2. Map Properties
2.6.3. Map Supertypes
2.6.4. Advanced Map Property Types
2.6.4.1. Nested Properties
2.6.4.2. Map Event Type Properties
2.6.4.3. One-to-Many Relationships
2.7. org.w3c.dom.Node XML Events
2.7.1. Schema-Provided XML Events
2.7.1.1. Getting Started
2.7.1.2. Property Expressions and Namespaces
2.7.1.3. Property Expression to XPath Rewrite
2.7.1.4. Array Properties
2.7.1.5. Dynamic Properties
2.7.1.6. Transposing Properties
2.7.1.7. Event Sender
2.7.1.8. Limitations
2.7.2. No-Schema-Provided XML Events
2.7.3. Explicitly-Configured Properties
2.7.3.1. Simple Explicit Property
2.7.3.2. Explicit Property Casting and Parsing
2.7.3.3. Node and Nodeset Explicit Property
2.8. Additional Event Representations
2.9. Updating, Merging and Versioning Events
2.10. Coarse-Grained Events
2.11. Event Objects Instantiated and Populated by Insert Into
2.12. Comparing Event Types
3. Processing Model
3.1. Introduction
3.2. Insert Stream
3.3. Insert and Remove Stream
3.4. Filters and Where-clauses
3.5. Time Windows
3.5.1. Time Window
3.5.2. Time Batch
3.6. Batch Windows
3.7. Aggregation and Grouping
3.7.1. Insert and Remove Stream
3.7.2. Output for Aggregation and Group-By
3.7.2.1. Un-aggregated and Un-grouped
3.7.2.2. Fully Aggregated and Un-grouped
3.7.2.3. Aggregated and Un-Grouped
3.7.2.4. Fully Aggregated and Grouped
3.7.2.5. Aggregated and Grouped
3.8. Event Visibility and Current Time
4. Context and Context Partitions
4.1. Introduction
4.2. Context Declaration
4.2.1. Keyed Segmented Context
4.2.1.1. Multiple Stream Definitions
4.2.1.2. Filters
4.2.1.3. Multiple Properties Per Event Type
4.2.1.4. Built-In Context Properties
4.2.1.5. Examples of Joins
4.2.2. Hash Segmented Context
4.2.2.1. Multiple Stream Definitions
4.2.2.2. Filters
4.2.2.3. Built-In Context Properties
4.2.2.4. Performance Considerations
4.2.3. Category Segmented Context
4.2.3.1. Built-In Context Properties
4.2.4. Non-Overlapping Context
4.2.5. Overlapping Context
4.2.5.1. Built-In Context Properties
4.2.6. Context Conditions
4.2.6.1. Filter Context Condition
4.2.6.2. Pattern Context Condition
4.2.6.3. Crontab Context Condition
4.2.6.4. Time Period Context Condition
4.3. Context Nesting
4.3.1. Built-In Nested Context Properties
4.4. Partitioning Without Context Declaration
4.5. Output When Context Partition Ends
4.6. Context and Named Window
4.7. Operations on Specific Context Partitions
5. EPL Reference: Clauses
5.1. EPL Introduction
5.2. EPL Syntax
5.2.1. Specifying Time Periods
5.2.2. Using Comments
5.2.3. Reserved Keywords
5.2.4. Escaping Strings
5.2.5. Data Types
5.2.5.1. Data Type of Constants
5.2.5.2. BigInteger and BigDecimal
5.2.6. Annotation
5.2.6.1. Application-Provided Annotations
5.2.6.2. Built-In Annotations
5.2.6.3. @Name
5.2.6.4. @Description
5.2.6.5. @Tag
5.2.6.6. @Priority
5.2.6.7. @Drop
5.2.6.8. @Hint
5.2.6.9. @Hook
5.2.6.10. @Audit
5.2.7. Expression Declaration
5.2.8. Script Declaration
5.2.9. Referring to a Context
5.3. Choosing Event Properties And Events: the Select Clause
5.3.1. Choosing all event properties: select *
5.3.2. Choosing specific event properties
5.3.3. Expressions
5.3.4. Renaming event properties
5.3.5. Choosing event properties and events in a join
5.3.6. Choosing event properties and events from a pattern
5.3.7. Selecting insert and remove stream events
5.3.8. Qualifying property names and stream names
5.3.9. Select Distinct
5.3.10. Transposing an Expression Result to a Stream
5.4. Specifying Event Streams: the From Clause
5.4.1. Filter-based Event Streams
5.4.1.1. Specifying an Event Type
5.4.1.2. Specifying Filter Criteria
5.4.1.3. Filtering Ranges
5.4.1.4. Filtering Sets of Values
5.4.1.5. Filter Limitations
5.4.2. Pattern-based Event Streams
5.4.3. Specifying Views
5.4.4. Multiple Data Window Views
5.4.5. Using the Stream Name
5.5. Specifying Search Conditions: the Where Clause
5.6. Aggregates and grouping: the Group-by Clause and the Having Clause
5.6.1. Using aggregate functions
5.6.2. Organizing statement results into groups: the Group-by clause
5.6.3. Selecting groups of events: the Having clause
5.6.4. How the stream filter, Where, Group By and Having clauses interact
5.6.5. Comparing Keyed Segmented Context, the Group By clause and the std:groupwin view
5.7. Stabilizing and Controlling Output: the Output Clause
5.7.1. Output Clause Options
5.7.1.1. Controlling Output Using an Expression
5.7.1.2. Suppressing Output With After
5.7.2. Aggregation, Group By, Having and Output clause interaction
5.7.3. Runtime Considerations
5.8. Sorting Output: the Order By Clause
5.9. Limiting Row Count: the Limit Clause
5.10. Merging Streams and Continuous Insertion: the Insert Into Clause
5.10.1. Transposing a Property To a Stream
5.10.2. Merging Streams By Event Type
5.10.3. Merging Disparate Types of Events: Variant Streams
5.10.4. Decorated Events
5.10.5. Event as a Property
5.10.6. Instantiating and Populating an Underlying Event Object
5.10.7. Transposing an Expression Result
5.11. Subqueries
5.11.1. The 'exists' Keyword
5.11.2. The 'in' and 'not in' Keywords
5.11.3. The 'any' and 'some' Keywords
5.11.4. The 'all' Keyword
5.11.5. Multi-Column Selection
5.11.6. Multi-Row Selection
5.11.7. Hints Related to Subqueries
5.12. Joining Event Streams
5.12.1. Introducing Joins
5.12.2. Inner (Default) Joins
5.12.3. Outer, Left and Right Joins
5.12.4. Unidirectional Joins
5.12.5. Hints Related to Joins
5.13. Accessing Relational Data via SQL
5.13.1. Joining SQL Query Results
5.13.2. SQL Query and the EPL Where Clause
5.13.3. Outer Joins With SQL Queries
5.13.4. Using Patterns to Request (Poll) Data
5.13.5. Polling SQL Queries via Iterator
5.13.6. JDBC Implementation Overview
5.13.7. Oracle Drivers and No-Metadata Workaround
5.13.8. SQL Input Parameter and Column Output Conversion
5.13.9. SQL Row POJO Conversion
5.14. Accessing Non-Relational Data via Method Invocation
5.14.1. Joining Method Invocation Results
5.14.2. Polling Method Invocation Results via Iterator
5.14.3. Providing the Method
5.14.4. Using a Map Return Type
5.15. Creating and Using Named Windows
5.15.1. Creating Named Windows: the Create Window clause
5.15.1.1. Creation by Modelling after an Existing Type
5.15.1.2. Creation By Defining Columns Names and Types
5.15.1.3. Dropping or Removing Named Windows
5.15.2. Inserting Into Named Windows
5.15.2.1. Named Windows Holding Decorated Events
5.15.2.2. Named Windows Holding Events As Property
5.15.3. Selecting From Named Windows
5.15.4. Triggered Select on Named Windows: the On Select clause
5.15.5. Triggered Playback from Named Windows: the On Insert clause
5.15.6. Populating a Named Window from an Existing Named Window
5.15.7. Updating Named Windows: the On Update clause
5.15.8. Deleting From Named Windows: the On Delete clause
5.15.8.1. Using Patterns in the On Delete Clause
5.15.9. Triggered Upsert using the On-Merge Clause
5.15.10. Explicitly Indexing Named Windows
5.15.11. Versioning and Revision Event Type Use with Named Windows
5.16. Declaring an Event Type: Create Schema
5.16.1. Declare an Event Type by Providing Names and Types
5.16.2. Declare an Event Type by Providing a Class Name
5.16.3. Declare a Variant Stream
5.17. Splitting and Duplicating Streams
5.18. Variables and Constants
5.18.1. Creating Variables: the Create Variable clause
5.18.2. Setting Variable Values: the On Set clause
5.18.3. Using Variables
5.18.4. Object-Type Variables
5.18.5. Class and Event-Type Variables
5.19. Contained-Event Selection
5.19.1. Select Clause in a Contained-Event Selection
5.19.2. Where Clause in a Contained-Event Selection
5.19.3. Contained-Event Selection and Joins
5.19.4. Sentence and Word Example
5.19.5. More Examples
5.20. Updating an Insert Stream: the Update IStream Clause
5.20.1. Immutability and Updates
5.21. Controlling Event Delivery : The For Clause
6. EPL Reference: Patterns
6.1. Event Pattern Overview
6.2. How to use Patterns
6.2.1. Pattern Syntax
6.2.2. Patterns in EPL
6.2.3. Subscribing to Pattern Events
6.2.4. Pulling Data from Patterns
6.2.5. Pattern Error Reporting
6.3. Operator Precedence
6.4. Filter Expressions In Patterns
6.4.1. Controlling Event Consumption
6.5. Pattern Operators
6.5.1. Every
6.5.1.1. Limiting Subexpression Lifetime
6.5.1.2. Every Operator Example
6.5.1.3. Sensor Example
6.5.2. Every-Distinct
6.5.3. Repeat
6.5.4. Repeat-Until
6.5.4.1. Unbound Repeat
6.5.4.2. Bound Repeat Overview
6.5.4.3. Bound Repeat - Open Ended Range
6.5.4.4. Bound Repeat - High Endpoint Range
6.5.4.5. Bound Repeat - Bounded Range
6.5.4.6. Tags and the Repeat Operator
6.5.5. And
6.5.6. Or
6.5.7. Not
6.5.8. Followed-by
6.5.8.1. Limiting Sub-Expression Count
6.5.8.2. Limiting Engine-wide Sub-Expression Count
6.5.9. Pattern Guards
6.5.9.1. The timer:within Pattern Guard
6.5.9.2. The timer:withinmax Pattern Guard
6.5.9.3. The while Pattern Guard
6.5.9.4. Guard Time Interval Expressions
6.5.9.5. Combining Guard Expressions
6.6. Pattern Atoms
6.6.1. Filter Atoms
6.6.2. Time-based Observer Atoms
6.6.2.1. timer:interval
6.6.2.2. timer:at
7. EPL Reference: Match Recognize
7.1. Overview
7.2. Comparison of Match Recognize and EPL Patterns
7.3. Syntax
7.3.1. Syntax Example
7.4. Pattern and Pattern Operators
7.4.1. Operator Precedence
7.4.2. Concatenation
7.4.3. Alternation
7.4.4. Quantifiers Overview
7.4.5. Variables Can be Singleton or Group
7.4.5.1. Additional Aggregation Functions
7.4.6. Eliminating Duplicate Matches
7.4.7. Greedy Or Reluctant
7.4.8. Quantifier - One Or More (+ and +?)
7.4.9. Quantifier - Zero Or More (* and *?)
7.4.10. Quantifier - Zero Or One (? and ??)
7.5. Define Clause
7.5.1. The Prev Operator
7.6. Measure Clause
7.7. Datawindow-Bound
7.8. Interval
7.9. Use with Different Event Types
7.10. Limitations
8. EPL Reference: Operators
8.1. Arithmetic Operators
8.2. Logical And Comparison Operators
8.2.1. Null-Value Comparison Operators
8.3. Concatenation Operators
8.4. Binary Operators
8.5. Array Definition Operator
8.6. Dot Operator
8.6.1. Duck Typing
8.7. The 'in' Keyword
8.7.1. 'in' for Range Selection
8.8. The 'between' Keyword
8.9. The 'like' Keyword
8.10. The 'regexp' Keyword
8.11. The 'any' and 'some' Keywords
8.12. The 'all' Keyword
8.13. The 'new' Keyword
9. EPL Reference: Functions
9.1. Single-row Function Reference
9.1.1. The Case Control Flow Function
9.1.2. The Cast Function
9.1.3. The Coalesce Function
9.1.4. The Current_Timestamp Function
9.1.5. The Exists Function
9.1.6. The Instance-Of Function
9.1.7. The Min and Max Functions
9.1.8. The Previous Function
9.1.8.1. Restrictions
9.1.8.2. Comparison to the prior Function
9.1.9. The Previous-Tail Function
9.1.9.1. Restrictions
9.1.10. The Previous-Window Function
9.1.10.1. Restrictions
9.1.11. The Previous-Count Function
9.1.11.1. Restrictions
9.1.12. The Prior Function
9.1.13. The Type-Of Function
9.2. Aggregate Functions
9.2.1. SQL-Standard Functions
9.2.2. Data Window Aggregation Functions
9.2.2.1. First Aggregation Function
9.2.2.2. Last Aggregation Function
9.2.2.3. Window Aggregation Function
9.2.3. Additional Aggregation Functions
9.3. User-Defined Functions
9.4. Select-Clause transpose Function
9.4.1. Transpose with Insert-Into
10. EPL Reference: Enumeration Methods
10.1. Overview
10.2. Example Events
10.3. How to Use
10.3.1. Syntax
10.3.2. Introductory Examples
10.3.3. Input, Output and Limitations
10.4. Inputs
10.4.1. Subquery Results
10.4.2. Named Window
10.4.3. Event Property
10.4.4. Data Window Aggregation
10.4.5. prev, prevwindow and prevtail Single-Row Functions as Input
10.4.6. Single-Row Function or UDF
10.4.7. Declared Expression
10.5. Example
10.6. Reference
10.6.1. Aggregate
10.6.2. AllOf
10.6.3. AnyOf
10.6.4. Average
10.6.5. CountOf
10.6.6. Except
10.6.7. FirstOf
10.6.8. GroupBy
10.6.9. Intersect
10.6.10. LastOf
10.6.11. LeastFrequent
10.6.12. Max
10.6.13. MaxBy
10.6.14. Min
10.6.15. MinBy
10.6.16. MostFrequent
10.6.17. OrderBy and OrderByDesc
10.6.18. Reverse
10.6.19. SelectFrom
10.6.20. SequenceEqual
10.6.21. SumOf
10.6.22. Take
10.6.23. TakeLast
10.6.24. TakeWhile
10.6.25. TakeWhileLast
10.6.26. ToMap
10.6.27. Union
10.6.28. Where
11. EPL Reference: Date-Time Methods
11.1. Overview
11.2. How to Use
11.2.1. Syntax
11.3. Calendar and Formatting Reference
11.3.1. Between
11.3.2. Format
11.3.3. Get (By Field)
11.3.4. Get (By Name)
11.3.5. Minus
11.3.6. Plus
11.3.7. RoundCeiling
11.3.8. RoundFloor
11.3.9. RoundHalf
11.3.10. Set (By Field)
11.3.11. WithDate
11.3.12. WithMax
11.3.13. WithMin
11.3.14. WithTime
11.3.15. ToCalendar
11.3.16. ToDate
11.3.17. ToMillisec
11.4. Interval Algebra Reference
11.4.1. After
11.4.2. Before
11.4.3. Coincides
11.4.4. During
11.4.5. Finishes
11.4.6. Finished By
11.4.7. Includes
11.4.8. Meets
11.4.9. Met By
11.4.10. Overlaps
11.4.11. Overlapped By
11.4.12. Starts
11.4.13. Started By
12. EPL Reference: Views
12.1. Window views
12.1.1. Length window (win:length)
12.1.2. Length batch window (win:length_batch)
12.1.3. Time window (win:time)
12.1.4. Externally-timed window (win:ext_timed)
12.1.5. Time batch window (win:time_batch)
12.1.6. Time-Length combination batch window (win:time_length_batch)
12.1.7. Time-Accumulating window (win:time_accum)
12.1.8. Keep-All window (win:keepall)
12.1.9. First Length (win:firstlength)
12.1.10. First Time (win:firsttime)
12.1.11. Expiry Expression (win:expr)
12.1.12. Expiry Expression Batch (win:expr_batch)
12.2. Standard view set
12.2.1. Unique (std:unique)
12.2.2. Grouped Data Window (std:groupwin)
12.2.3. Size (std:size)
12.2.4. Last Event (std:lastevent)
12.2.5. First Event (std:firstevent)
12.2.6. First Unique (std:firstunique)
12.3. Statistics views
12.3.1. Univariate statistics (stat:uni)
12.3.2. Regression (stat:linest)
12.3.3. Correlation (stat:correl)
12.3.4. Weighted average (stat:weighted_avg)
12.4. Extension View Set
12.4.1. Sorted Window View (ext:sort)
12.4.2. Time-Order View (ext:time_order)
13. API Reference
13.1. API Overview
13.2. The Service Provider Interface
13.3. The Administrative Interface
13.3.1. Creating Statements
13.3.2. Receiving Statement Results
13.3.3. Setting a Subscriber Object
13.3.3.1. Row-By-Row Delivery
13.3.3.2. Multi-Row Delivery
13.3.3.3. No-Parameter Update Method
13.3.4. Adding Listeners
13.3.4.1. Subscription Snapshot and Atomic Delivery
13.3.5. Using Iterators
13.3.6. Managing Statements
13.3.7. Runtime Configuration
13.4. The Runtime Interface
13.4.1. Event Sender
13.4.2. Receiving Unmatched Events
13.4.3. On-Demand Snapshot Query Execution
13.4.3.1. On-Demand Query API
13.5. Event and Event Type
13.5.1. Event Type Metadata
13.5.2. Event Object
13.5.3. Query Example
13.5.4. Pattern Example
13.6. Engine Threading and Concurrency
13.6.1. Advanced Threading
13.6.1.1. Inbound Threading
13.6.1.2. Outbound Threading
13.6.1.3. Timer Execution Threading
13.6.1.4. Route Execution Threading
13.6.1.5. Threading Service Provider Interface
13.6.2. Processing Order
13.6.2.1. Competing Statements
13.6.2.2. Competing Events in a Work Queue
13.7. Controlling Time-Keeping
13.7.1. Controlling Time Using Time Span Events
13.7.2. Additional Time-Related APIs
13.8. Time Resolution
13.9. Service Isolation
13.9.1. Overview
13.9.2. Example: Suspending a Statement
13.9.3. Example: Catching up a Statement from Historical Data
13.9.4. Isolation for Insert-Into
13.9.5. Isolation for Named Windows
13.9.6. Runtime Considerations
13.10. Exception Handling
13.11. Condition Handling
13.12. Statement Object Model
13.12.1. Building an Object Model
13.12.2. Building Expressions
13.12.3. Building a Pattern Statement
13.12.4. Building a Select Statement
13.12.5. Building a Create-Variable and On-Set Statement
13.12.6. Building Create-Window, On-Delete and On-Select Statements
13.13. Prepared Statement and Substitution Parameters
13.14. Engine and Statement Metrics Reporting
13.14.1. Engine Metrics
13.14.2. Statement Metrics
13.15. Event Rendering to XML and JSON
13.15.1. JSON Event Rendering Conventions and Options
13.15.2. XML Event Rendering Conventions and Options
13.16. Plug-in Loader
13.17. Interrogating EPL Annotations
13.18. Context Partition Selection
13.18.1. Selectors
13.19. Test and Assertion Support
13.19.1. EPAssertionUtil Summary
13.19.2. SupportUpdateListener Summary
13.19.3. Usage Example
14. Configuration
14.1. Programmatic Configuration
14.2. Configuration via XML File
14.3. XML Configuration File
14.4. Configuration Items
14.4.1. Events represented by Java Classes
14.4.1.1. Package of Java Event Classes
14.4.1.2. Event type name to Java class mapping
14.4.1.3. Non-JavaBean and Legacy Java Event Classes
14.4.1.4. Specifying Event Properties for Java Classes
14.4.1.5. Turning off Code Generation
14.4.1.6. Case Sensitivity and Property Names
14.4.1.7. Factory and Copy Method
14.4.1.8. Start and End Timestamp
14.4.2. Events represented by java.util.Map
14.4.3. Events represented by org.w3c.dom.Node
14.4.3.1. Schema Resource
14.4.3.2. Explicit XPath Property
14.4.3.3. Absolute or Deep Property Resolution
14.4.3.4. XPath Variable and Function Resolver
14.4.3.5. Auto Fragment
14.4.3.6. XPath Property Expression
14.4.3.7. Event Sender Setting
14.4.3.8. Start and End Timestamp
14.4.4. Events represented by Plug-in Event Representations
14.4.4.1. Enabling a Custom Event Representation
14.4.4.2. Adding Plug-in Event Types
14.4.4.3. Setting Resolution URIs
14.4.5. Class and package imports
14.4.6. Cache Settings for From-Clause Method Invocations
14.4.7. Variables
14.4.8. Relational Database Access
14.4.8.1. Connections obtained via DataSource
14.4.8.2. Connections obtained via DataSource Factory
14.4.8.3. Connections obtained via DriverManager
14.4.8.4. Connections-level settings
14.4.8.5. Connections lifecycle settings
14.4.8.6. Cache settings
14.4.8.7. Column Change Case
14.4.8.8. SQL Types Mapping
14.4.8.9. Metadata Origin
14.4.9. Engine Settings related to Concurrency and Threading
14.4.9.1. Preserving the order of events delivered to listeners
14.4.9.2. Preserving the order of events for insert-into streams
14.4.9.3. Internal Timer Settings
14.4.9.4. Advanced Threading Options
14.4.9.5. Engine Fair Locking
14.4.10. Engine Settings related to Event Metadata
14.4.10.1. Java Class Property Names, Case Sensitivity and Accessor Style
14.4.11. Engine Settings related to View Resources
14.4.11.1. Sharing View Resources between Statements
14.4.11.2. Configuring Multi-Expiry Policy Defaults
14.4.12. Engine Settings related to Logging
14.4.12.1. Execution Path Debug Logging
14.4.12.2. Query Plan Logging
14.4.12.3. JDBC Logging
14.4.12.4. Audit Logging
14.4.13. Engine Settings related to Variables
14.4.13.1. Variable Version Release Interval
14.4.14. Engine Settings related to Patterns
14.4.14.1. Followed-By Operator Maximum Subexpression Count
14.4.15. Engine Settings related to Scripts
14.4.16. Engine Settings related to Stream Selection
14.4.16.1. Default Statement Stream Selection
14.4.17. Engine Settings related to Time Source
14.4.17.1. Default Time Source
14.4.18. Engine Settings related to Metrics Reporting
14.4.19. Engine Settings related to Language and Locale
14.4.20. Engine Settings related to Expression Evaluation
14.4.20.1. Integer Division and Division by Zero
14.4.20.2. Subselect Evaluation Order
14.4.20.3. User-Defined Function or Static Method Cache
14.4.20.4. Extended Built-in Aggregation Functions
14.4.20.5. Duck Typing
14.4.21. Engine Settings related to Execution of Statements
14.4.21.1. Prioritized Execution
14.4.21.2. Context Partition Fair Locking
14.4.21.3. Disable Locking
14.4.21.4. Threading Profile
14.4.22. Engine Settings related to Exception Handling
14.4.23. Engine Settings related to Condition Handling
14.4.24. Revision Event Type
14.4.25. Variant Stream
14.5. Type Names
14.6. Runtime Configuration
14.7. Logging Configuration
14.7.1. Log4j Logging Configuration
15. Development Lifecycle
15.1. Authoring
15.2. Testing
15.3. Debugging
15.3.1. @Audit Annotation
15.4. Packaging and Deploying Overview
15.5. EPL Modules
15.6. The Deployment Administrative Interface
15.6.1. Reading Module Content
15.6.2. Ordering Multiple Modules
15.6.3. Deploying and Undeploying
15.6.4. Listing Deployments
15.6.5. State Transitioning a Module
15.6.6. Best Practices
15.7. J2EE Packaging and Deployment
15.7.1. J2EE Deployment Considerations
15.7.2. Servlet Context Listener
16. Integration and Extension
16.1. Overview
16.2. Virtual Data Window
16.2.1. How to Use
16.2.1.1. Query Access Path
16.2.2. Implementing the Factory
16.2.3. Implementing the Virtual Data Window
16.2.4. Implementing the Lookup
16.3. Single-Row Function
16.3.1. Implementing a Single-Row Function
16.3.2. Configuring the Single-Row Function Name
16.3.3. Value Cache
16.3.4. Single-Row Functions in Filter Predicate Expressions
16.4. Derived-value and Data Window View
16.4.1. Implementing a View Factory
16.4.2. Implementing a View
16.4.3. View Contract
16.4.4. Configuring View Namespace and Name
16.4.5. Requirement for Data Window Views
16.4.6. Requirement for Grouped Views
16.5. Aggregation Function
16.5.1. Implementing an Aggregation Function Factory
16.5.2. Implementing an Aggregation Function
16.5.3. Configuring the Aggregation Function Name
16.5.4. Accepting Multiple Parameters
16.6. Pattern Guard
16.6.1. Implementing a Guard Factory
16.6.2. Implementing a Guard Class
16.6.3. Configuring Guard Namespace and Name
16.7. Pattern Observer
16.7.1. Implementing an Observer Factory
16.7.2. Implementing an Observer Class
16.7.3. Configuring Observer Namespace and Name
16.8. Event Type And Event Object
16.8.1. How It Works
16.8.2. Steps
16.8.3. URI-based Resolution
16.8.4. Example
16.8.4.1. Sample Event Type
16.8.4.2. Sample Event Bean
16.8.4.3. Sample Event Representation
16.8.4.4. Sample Event Bean Factory
17. Script Support
17.1. Overview
17.2. Syntax
17.3. Examples
17.4. Built-In EPL Script Attributes
17.5. Performance Notes
17.6. Additional Notes
18. Examples, Tutorials, Case Studies
18.1. Examples Overview
18.2. Running the Examples
18.3. AutoID RFID Reader
18.4. Runtime Configuration
18.5. JMS Server Shell and Client
18.5.1. Overview
18.5.2. JMS Messages as Events
18.5.3. JMX for Remote Dynamic Statement Management
18.6. Market Data Feed Monitor
18.6.1. Input Events
18.6.2. Computing Rates Per Feed
18.6.3. Detecting a Fall-off
18.6.4. Event generator
18.7. OHLC Plug-in View
18.8. Transaction 3-Event Challenge
18.8.1. The Events
18.8.2. Combined event
18.8.3. Real time summary data
18.8.4. Find problems
18.8.5. Event generator
18.9. Self-Service Terminal
18.9.1. Events
18.9.2. Detecting Customer Check-in Issues
18.9.3. Absence of Status Events
18.9.4. Activity Summary Data
18.9.5. Sample Application for J2EE Application Server
18.9.5.1. Running the Example
18.9.5.2. Building the Example
18.9.5.3. Running the Event Simulator and Receiver
18.10. Assets Moving Across Zones - An RFID Example
18.11. StockTicker
18.12. MatchMaker
18.13. Named Window Query
18.14. Sample Virtual Data Window
18.15. Quality of Service
18.16. Trivia Geeks Club
19. Performance
19.1. Performance Results
19.2. Performance Tips
19.2.1. Understand how to tune your Java virtual machine
19.2.2. Compare Esper to other solutions
19.2.3. Input and Output Bottlenecks
19.2.4. Advanced Threading
19.2.5. Select the underlying event rather than individual fields
19.2.6. Prefer stream-level filtering over post-data-window filtering
19.2.7. Reduce the use of arithmetic in expressions
19.2.8. Remove Unnecessary Constructs
19.2.9. End Pattern Sub-Expressions
19.2.10. Consider using EventPropertyGetter for fast access to event properties
19.2.11. Consider casting the underlying event
19.2.12. Turn off logging and audit
19.2.13. Disable view sharing
19.2.14. Tune or disable delivery order guarantees
19.2.15. Use a Subscriber Object to Receive Events
19.2.16. High-Arrival-Rate Streams and Single Statements
19.2.17. Joins And Where-clause And Data Windows
19.2.18. Patterns and Pattern Sub-Expression Instances
19.2.19. Pattern Sub-Expression Instance Versus Data Window Use
19.2.20. The Keep-All Data Window
19.2.21. Statement Design for Reduced Memory Consumption
19.2.22. Performance, JVM, OS and hardware
19.2.23. Consider using Hints
19.2.24. Optimizing Stream Filter Expressions
19.2.25. Statement and Engine Metric Reporting
19.2.26. Expression Evaluation Order and Early Exit
19.2.27. Large Number of Threads
19.2.28. Context Partition Related Information
19.2.29. Prefer Constant Variables over Non-Constant Variables
19.3. Using the performance kit
19.3.1. How to use the performance kit
19.3.2. How we use the performance kit
20. References
20.1. Reference List
A. Output Reference and Samples
A.1. Introduction and Sample Data
A.2. Output for Un-aggregated and Un-grouped Queries
A.2.1. No Output Rate Limiting
A.2.2. Output Rate Limiting - Default
A.2.3. Output Rate Limiting - Last
A.2.4. Output Rate Limiting - First
A.2.5. Output Rate Limiting - Snapshot
A.3. Output for Fully-aggregated and Un-grouped Queries
A.3.1. No Output Rate Limiting
A.3.2. Output Rate Limiting - Default
A.3.3. Output Rate Limiting - Last
A.3.4. Output Rate Limiting - First
A.3.5. Output Rate Limiting - Snapshot
A.4. Output for Aggregated and Un-grouped Queries
A.4.1. No Output Rate Limiting
A.4.2. Output Rate Limiting - Default
A.4.3. Output Rate Limiting - Last
A.4.4. Output Rate Limiting - First
A.4.5. Output Rate Limiting - Snapshot
A.5. Output for Fully-aggregated and Grouped Queries
A.5.1. No Output Rate Limiting
A.5.2. Output Rate Limiting - Default
A.5.3. Output Rate Limiting - All
A.5.4. Output Rate Limiting - Last
A.5.5. Output Rate Limiting - First
A.5.6. Output Rate Limiting - Snapshot
A.6. Output for Aggregated and Grouped Queries
A.6.1. No Output Rate Limiting
A.6.2. Output Rate Limiting - Default
A.6.3. Output Rate Limiting - All
A.6.4. Output Rate Limiting - Last
A.6.5. Output Rate Limiting - First
A.6.6. Output Rate Limiting - Snapshot
B. Reserved Keywords
Index

Preface

Analyzing and reacting to information in real-time oftentimes requires the development of custom applications. Typically these applications must obtain the data to analyze, filter data, derive information and then indicate this information through some form of presentation or communication. Data may arrive with high frequency requiring high throughput processing. And applications may need to be flexible and react to changes in requirements while the data is processed. Esper is an event stream processor that aims to enable a short development cycle from inception to production for these types of applications.

This document is a resource for software developers who develop event driven applications. It also contains information that is useful for business analysts and system architects who are evaluating Esper.

It is assumed that the reader is familiar with the Java programming language.

This document is relevant in all phases of your software development project: from design to deployment and support.

If you are new to Esper, please follow these steps:

  1. Read the tutorials, case studies and solution patterns available on the Esper public web site at http://esper.codehaus.org

  2. Read Section 1.1, “Introduction to CEP and event stream analysis” if you are new to CEP and ESP (complex event processing, event stream processing)

  3. Read Chapter 2, Event Representations that explains the different ways of representing events to Esper

  4. Read Chapter 3, Processing Model to gain insight into EPL continuous query results

  5. Read Section 5.1, “EPL Introduction” for an introduction to event stream processing via EPL

  6. Read Section 6.1, “Event Pattern Overview” for an overview of event patterns

  7. Read Section 7.1, “Overview” for an overview of event pattern matching using the match recognize syntax.

  8. Then glance over the examples Section 18.1, “Examples Overview”

  9. Finally to test drive Esper performance, read Chapter 19, Performance

Chapter 1. Technology Overview

1.1. Introduction to CEP and event stream analysis

The Esper engine has been developed to address the requirements of applications that analyze and react to events. Some typical examples of applications are:

  • Business process management and automation (process monitoring, BAM, reporting exceptions)

  • Finance (algorithmic trading, fraud detection, risk management)

  • Network and application monitoring (intrusion detection, SLA monitoring)

  • Sensor network applications (RFID reading, scheduling and control of fabrication lines, air traffic)

What these applications have in common is the requirement to process events (or messages) in real-time or near real-time. This is sometimes referred to as complex event processing (CEP) and event stream analysis. Key considerations for these types of applications are throughput, latency and the complexity of the logic required.

  • High throughput - applications that process large volumes of messages (from 1,000 to 100,000 messages per second)

  • Low latency - applications that react in real-time to conditions that occur (from a few milliseconds to a few seconds)

  • Complex computations - applications that detect patterns among events (event correlation), filter events, aggregate time or length windows of events, join event streams, trigger based on absence of events etc.

The Esper engine was designed to make it easier to build and extend CEP applications.

1.2. CEP and relational databases

Relational databases and the standard query language (SQL) are designed for applications in which most data is fairly static and complex queries are less frequent. Also, most databases store all data on disks (except for in-memory databases) and are therefore optimized for disk access.

To retrieve data from a database an application must issue a query. If an application needs the data 10 times per second, it must fire the query 10 times per second. This does not scale well to hundreds or thousands of queries per second.

Database triggers can be used to fire in response to database update events. However database triggers tend to be slow and often cannot easily perform complex condition checking and implement logic to react.

In-memory databases may be better suited to CEP applications than traditional relational databases as they generally have good query performance. Yet they are not optimized to provide the immediate, real-time query results required for CEP and event stream analysis.

1.3. The Esper engine for CEP

The Esper engine works a bit like a database turned upside-down. Instead of storing the data and running queries against stored data, the Esper engine allows applications to store queries and run the data through. Response from the Esper engine is real-time when conditions occur that match queries. The execution model is thus continuous rather than only when a query is submitted.

Esper provides two principal methods or mechanisms to process events: event patterns and event stream queries.

Esper offers an event pattern language to specify expression-based event pattern matching. Underlying the pattern matching engine is a state machine implementation. This method of event processing matches expected sequences of presence or absence of events or combinations of events. It includes time-based correlation of events.

Esper also offers event stream queries that address the event stream analysis requirements of CEP applications. Event stream queries provide the windows, aggregation, joining and analysis functions for use with streams of events. These queries follow the EPL syntax. EPL has been designed for similarity with the SQL query language but differs from SQL in its use of views rather than tables. Views represent the different operations needed to structure data in an event stream and to derive data from an event stream.

Esper provides these two methods as alternatives through the same API.
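
For illustration, the following sketch creates one statement of each kind through the same administrative API (a minimal sketch; the OrderEvent event type and its price property are hypothetical and assumed to be configured):

EPServiceProvider epService = EPServiceProviderManager.getDefaultProvider();

// an event stream query expressed in EPL
EPStatement streamQuery = epService.getEPAdministrator().createEPL(
    "select avg(price) from OrderEvent.win:time(30 sec)");

// an event pattern created through the same API
EPStatement pattern = epService.getEPAdministrator().createPattern(
    "every a=OrderEvent -> OrderEvent(price > a.price) where timer:within(10 sec)");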

1.4. Required 3rd Party Libraries

Esper requires the following 3rd-party libraries at runtime:

  • ANTLR is the parser generator used for parsing and parse tree walking of the pattern and EPL syntax. Credit goes to Terence Parr at http://www.antlr.org. The ANTLR license is in the lib directory. The library is required for compile-time only.

  • CGLIB is the code generation library for fast method calls. This open source software is under the Apache license. The Apache 2.0 license is in the lib directory.

  • Apache commons logging is a logging API that works together with LOG4J and other logging APIs. While Apache commons logging is required, the LOG4J log component is not required and can be replaced with SLF4J or other loggers. This open source software is under the Apache license. The Apache 2.0 license is in the lib directory.

Esper requires the following 3rd-party libraries at compile-time and for running the test suite:

  • JUnit is a great unit testing framework. Its license has also been placed in the lib directory. The library is required for build-time only.

  • MySQL connector library is used for testing SQL integration and is required for running the automated test suite.

Chapter 2. Event Representations

This section outlines the different means to model and represent events.

Esper uses the term event type to describe the type information available for an event representation.

Your application may configure predefined event types at startup time or dynamically add event types at runtime via API or EPL syntax. See Section 14.4, “Configuration Items” for startup-time configuration and Section 13.3.7, “Runtime Configuration” for the runtime configuration API.

The EPL create schema syntax allows declaring an event type at runtime using EPL, see Section 5.16, “Declaring an Event Type: Create Schema”.

In Section 13.5, “Event and Event Type” we explain how an event type becomes visible in EPL statements and output events delivered by the engine.

2.1. Event Underlying Java Objects

An event is an immutable record of a past occurrence of an action or state change. Event properties capture the state information for an event.

In Esper, an event can be represented by any of the following underlying Java objects:

Table 2.1. Event Underlying Java Objects

  • java.lang.Object - Any Java POJO (plain-old Java object) with getter methods following JavaBean conventions; legacy Java classes not following JavaBean conventions can also serve as events.

  • java.util.Map - Map events are key-value pairs and can also contain objects, further Maps, and arrays thereof.

  • org.w3c.dom.Node - XML document object model (DOM).

  • org.apache.axiom.om.OMDocument or OMElement - XML - Streaming API for XML (StAX) - Apache Axiom (provided by the EsperIO package).

  • Application classes - Plug-in event representation via the extension API.

Esper provides multiple choices for representing an event. There is no absolute need for you to create new Java classes to represent an event.

Event representations have the following in common:

  • All event representations support nested, indexed and mapped properties (aka. property expression), as explained in more detail below. There is no limitation to the nesting level.

  • All event representations provide event type metadata. This includes type metadata for nested properties.

  • All event representations allow transposing the event itself and parts or all of its property graph into new events. The term transposing refers to selecting the event itself or event properties that are themselves nestable property graphs, and then querying the event's properties or nested property graphs in further statements. The Apache Axiom event representation is an exception and does not currently allow transposing event properties but does allow transposing the event itself.

  • The Java object and Map representations allow supertypes.

The API behavior for all event representations is the same, with minor exceptions noted in this chapter.

The benefits of multiple event representations are:

  • For applications that already have events in one of the supported representations, there is no need to transform events into a Java object before processing.

  • Event representations are exchangeable, reducing or eliminating the need to change statements when the event representation changes.

  • Event representations are interoperable, allowing all event representations to interoperate in same or different statements.

  • The choice makes it possible to consciously trade off performance, ease-of-use, the ability to evolve, and the effort needed to import or externalize events and use existing event type metadata.

2.2. Event Properties

Event properties capture the state information for an event. Event properties can be simple as well as indexed, mapped and nested event properties. The table below outlines the different types of properties and their syntax in an event expression. This syntax allows statements to query deep JavaBean object graphs, XML structures and Map events.

Table 2.2. Types of Event Properties

  • Simple - A property that has a single value that may be retrieved. Syntax: name. Example: sensorId

  • Indexed - An indexed property stores an ordered collection of objects (all of the same type) that can be individually accessed by an integer-valued, non-negative index (or subscript). Syntax: name[index]. Example: sensor[0]

  • Mapped - A mapped property stores a keyed collection of objects (all of the same type). Syntax: name('key'). Example: sensor('light')

  • Nested - A nested property is a property that lives within another property of an event. Syntax: name.nestedname. Example: sensor.value

Combinations are also possible. For example, a valid combination could be person.address('home').street[0].

2.2.1. Escape Characters

If your application uses java.util.Map or XML to represent events, then event property names may themselves contain the dot ('.') character. The backslash ('\') character can be used to escape dot characters in property names, allowing a property name to contain dot characters.

For example, the EPL as shown below expects a property by name part1.part2 to exist on event type MyEvent:

select part1\.part2 from MyEvent

Sometimes your event properties may overlap with EPL language keywords. In this case you may use the backwards apostrophe ` character to escape the property name.

The next example assumes a Quote event that has a property by name order, while order is also a reserved keyword:

select `order`, price as `price.for.goods` from Quote

2.3. Dynamic Event Properties

Dynamic (unchecked) properties are event properties that need not be known at statement compilation time. Such properties are resolved during runtime: they provide duck typing functionality.

The idea behind dynamic properties is that for a given underlying event representation we don't always know all properties in advance. An underlying event may have additional properties that are not known at statement compilation time, that we want to query on. The concept is especially useful for events that represent rich, object-oriented domain models.

The syntax of dynamic properties consists of the property name and a question mark. Indexed, mapped and nested properties can also be dynamic properties:

Table 2.3. Types of Event Properties

  • Dynamic Simple - Syntax: name?

  • Dynamic Indexed - Syntax: name[index]?

  • Dynamic Mapped - Syntax: name('key')?

  • Dynamic Nested - Syntax: name?.nestedPropertyName

Dynamic properties always return the java.lang.Object type. Also, dynamic properties return a null value if the dynamic property does not exist on events processed at runtime.

As an example, consider an OrderEvent event that provides an "item" property. The "item" property is of type Object and holds a reference to an instance of either a Service or Product.

Assume that both Service and Product classes provide a property named "price". Via a dynamic property we can specify a query that obtains the price property from either object (Service or Product):

select item.price? from OrderEvent

As a second example, assume that the Service class contains a "serviceName" property that the Product class does not possess. The following query returns the value of the "serviceName" property for Service objects. It returns a null-value for Product objects that do not have the "serviceName" property:

select item.serviceName? from OrderEvent

Consider the case where OrderEvent has multiple implementation classes, some of which have a "timestamp" property. The next query returns the timestamp property of those implementations of the OrderEvent interface that feature the property:

select timestamp? from OrderEvent

The query as above returns a single column named "timestamp?" of type Object.
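
A listener receiving output from this statement could read that column as follows (a sketch; the statement variable refers to the statement above, assumed to have been created via the administrative API):

UpdateListener listener = new UpdateListener() {
  public void update(EventBean[] newEvents, EventBean[] oldEvents) {
    // the column name includes the question mark; the value is null
    // when the underlying event does not have the property
    Object timestamp = newEvents[0].get("timestamp?");
  }
};
statement.addListener(listener);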

When dynamic properties are nested, then all properties under the dynamic property are also considered dynamic properties. In the below example the query asks for the "direction" property of the object returned by the "detail" dynamic property:

select detail?.direction from OrderEvent

Above is equivalent to:

select detail?.direction? from OrderEvent

The functions that are often useful in conjunction with dynamic properties are:

  • The cast function casts the value of a dynamic property (or the value of an expression) to a given type.

  • The exists function checks whether a dynamic property exists. It returns true if the event has a property of that name, or false if the property does not exist on that event.

  • The instanceof function checks whether the value of a dynamic property (or the value of an expression) is of any of the given types.

  • The typeof function returns the string type name of a dynamic property.

Dynamic event properties work with all event representations outlined next: Java objects, Map-based and XML DOM-based events.
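
For illustration, the sketch below combines these functions with the dynamic item property of the OrderEvent example above (a sketch only; the unqualified class name Service assumes the class has been imported, and epService denotes the engine instance):

String epl = "select exists(item.serviceName?) as hasServiceName, " +
    "cast(item.price?, double) as price, " +
    "typeof(item?) as itemType " +
    "from OrderEvent(instanceof(item?, Service))";
EPStatement statement = epService.getEPAdministrator().createEPL(epl);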

2.4. Fragment and Fragment Type

Sometimes an event can have properties that are themselves events. Esper uses the terms fragment and fragment type for such event pieces. The best example is a pattern that matches two or more events and the output event contains the matching events as fragments. In other words, an output event can be a composite event that consists of further events, the fragments.

Fragments have the same metadata available as their enclosing composite events. The metadata for enclosing composite events contains information about which properties are fragments, or have a property value that can be represented as a fragment and therefore as an event itself.

Fragments and type metadata allow your application to navigate composite events without the need for the Java reflection API, reducing the coupling to the underlying event representation. The API is further described in Section 13.5, “Event and Event Type”.
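
For illustration, a fragment can be obtained from an output event through the EventBean API (a minimal sketch; theEvent stands for an output event received by a listener, and the property names "a" and "somePropertyName" are hypothetical, such as a pattern tag and a property of the tagged event):

// getFragment returns an EventBean or EventBean[] for fragment-typed properties
Object fragment = theEvent.getFragment("a");
if (fragment instanceof EventBean) {
  EventBean nested = (EventBean) fragment;
  nested.getEventType();           // type metadata of the fragment
  nested.get("somePropertyName");  // query a property of the fragment
}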

2.5. Plain-Old Java Object Events

Plain-old Java object events are object instances that expose event properties through JavaBeans-style getter methods. Event classes or interfaces do not have to be fully compliant with the JavaBean specification; however, for the Esper engine to obtain event properties, the required JavaBean getter methods must be present, or an accessor style and accessor methods may be defined via configuration.

Esper supports JavaBeans-style event classes that extend a superclass or implement one or more interfaces. Also, Esper event pattern and EPL statements can refer to Java interface classes and abstract classes.

Classes that represent events should be made immutable. As events are recordings of a state change or action that occurred in the past, the relevant event properties should not be changeable. However this is not a hard requirement and the Esper engine accepts events that are mutable as well.

The hashCode and equals methods do not need to be implemented. The implementation of these methods by a Java event class does not affect the behavior of the engine in any way.

Please see Chapter 14, Configuration on options for naming event types represented by Java object event classes. Java classes that do not follow JavaBean conventions, such as legacy Java classes that expose public fields, or methods not following naming conventions, require additional configuration. Via configuration it is also possible to control case sensitivity in property name resolution. The relevant section in the chapter on configuration is Section 14.4.1.3, “Non-JavaBean and Legacy Java Event Classes”.

2.5.1. Java Object Event Properties

As outlined earlier, some of the different property types are supported by the standard JavaBeans specification, while others are uniquely supported by Esper:

  • Simple properties have a single value that may be retrieved. The underlying property type might be a Java language primitive (such as int), a simple object (such as a java.lang.String), or a more complex object whose class is defined either by the Java language, by the application, or by a class library included with the application.

  • Indexed - An indexed property stores an ordered collection of objects (all of the same type) that can be individually accessed by an integer-valued, non-negative index (or subscript).

  • Mapped - As an extension to standard JavaBeans APIs, Esper considers any property that accepts a String-valued key a mapped property.

  • Nested - A nested property is a property that lives within another Java object which itself is a property of an event.

Assume there is a NewEmployeeEvent event class as shown below. The mapped and indexed properties in this example return Java objects but could also return Java language primitive types (such as int or String). The Address and Employee objects can themselves have properties that are nested within them, such as a street name in the Address object or a name of the employee in the Employee object.

public class NewEmployeeEvent {
	public String getFirstName();
	public Address getAddress(String type);
	public Employee getSubordinate(int index);
	public Employee[] getAllSubordinates();
}

Simple event properties require a getter-method that returns the property value. In this example, the getFirstName getter method returns the firstName event property of type String.

Indexed event properties require one of the following getter methods: a method that takes an integer-typed index and returns the property value, such as the getSubordinate method, or a method that returns an array type or a class that implements Iterable. An example is the getAllSubordinates getter method, which returns an array of Employee but could also return an Iterable. In an EPL or event pattern statement, indexed properties are accessed via the property[index] syntax.

Mapped event properties require a getter-method that takes a String-typed key value and returns the property value, such as the getAddress method. In an EPL or event pattern statement, mapped properties are accessed via the property('key') syntax.

Nested event properties require a getter-method that returns the nesting object. The getAddress and getSubordinate methods are mapped and indexed properties that return a nesting object. In an EPL or event pattern statement, nested properties are accessed via the property.nestedProperty syntax.

All event pattern and EPL statements allow the use of indexed, mapped and nested properties (or a combination of these) anywhere where one or more event property names are expected. The below example shows different combinations of indexed, mapped and nested properties in filters of event pattern expressions (each line is a separate EPL statement):

every NewEmployeeEvent(firstName='myName')
every NewEmployeeEvent(address('home').streetName='Park Avenue')
every NewEmployeeEvent(subordinate[0].name='anotherName')
every NewEmployeeEvent(allSubordinates[1].name='thatName')
every NewEmployeeEvent(subordinate[0].address('home').streetName='Water Street')

Similarly, the syntax can be used in EPL statements in all places where an event property name is expected, such as in select lists, where-clauses or join criteria.

select firstName, address('work'), subordinate[0].name, subordinate[1].name
from NewEmployeeEvent
where address('work').streetName = 'Park Ave'

2.5.2. Property Names

Property names follow Java standards: the java.beans.Introspector class and its getBeanInfo method return the property names as derived from the names of getter methods. In addition, Esper configuration provides a flag to turn off case-sensitive property names. A sample list of getter methods and property names is:

Table 2.4. JavaBeans-style Getter Methods and Property Names

Method           Property Name    Example
getPrice()       price            select price from MyEvent
getNAME()        NAME             select NAME from MyEvent
getItemDesc()    itemDesc         select itemDesc from MyEvent
getQ()           q                select q from MyEvent
getQN()          QN               select QN from MyEvent
getqn()          qn               select qn from MyEvent
gets()           s                select s from MyEvent

2.5.3. Constants and Enumeration

Constants are public static final fields in Java that may also participate in expressions of all kinds, as this example shows:

select * from MyEvent where property=MyConstantClass.FIELD_VALUE

Event properties that are enumeration values can be compared by their enumeration value:

select * from MyEvent where enumProp=EnumClass.ENUM_VALUE_1

Alternatively, a static method may be employed on a class, such as the enumeration class 'EnumClass' as below:

select * from MyEvent where enumProp=EnumClass.valueOf('ENUM_VALUE_1')

If your application does not import, through configuration, the package that contains the enumeration class, then it must also specify the package name of the class. Enumeration classes that are inner classes must be qualified with $ following Java conventions.

For example, the Color enumeration as an inner class to MyEvent in package org.myorg can be referenced as shown:

select * from MyEvent(enumProp=org.myorg.MyEvent$Color.GREEN).std:firstevent()

Instance methods may also be invoked on event instances by specifying a stream name, as shown below:

select myevent.computeSomething() as result from MyEvent as myevent

Chaining instance methods is supported as this example shows:

select myevent.getComputerFor('books', 'movies').calculate() as result 
from MyEvent as myevent

2.5.4. Parameterized Types

When your getter methods or accessor fields return a parameterized type, for example Iterable<MyEventData> for an indexed property or Map<String, MyEventData> for a mapped property, then property expressions may refer to the properties available through the class that is the type parameter.

An example event that has properties that are parameterized types is:

public class NewEmployeeEvent {
  public String getName();
  public Iterable<EducationHistory> getEducation();
  public Map<String, Address> getAddresses();
}

A sample of valid property expressions for this event is shown next:

select name, education, education[0].date, addresses('home').street
from NewEmployeeEvent

2.5.5. Setter Methods for Indexed and Mapped Properties

An EPL statement may update indexed or mapped properties of an event, provided the event class exposes the required setter method.

The setter method for indexed properties must be named setPropertyName and must take two parameters: the int-type index and the Object type new value.

The setter method for mapped properties must be named setPropertyName and must take two parameters: the String-type map key and the Object type new map value.

The following is an example event that features a setter method for the props mapped property and for the array indexed property:

public class MyEvent {
  private Map props = new HashMap();
  private Object[] array = new Object[10];

  public void setProps(String name, Object value) {
    props.put(name, value);
  }

  public void setArray(int index, Object value) {
    array[index] = value;
  }
  // ... also provide regular JavaBean getters and setters for all properties
}

This sample statement updates mapped and indexed property values:

update istream MyEventStream set props('key') = 'abc', array[2] = 100

2.5.6. Known Limitations

Esper employs byte code generation for fast access to event properties. When byte code generation is unsuccessful, the engine logs a warning and uses Java reflection to obtain property values instead.

A known limitation is that when an interface declares a property of a particular type and the actual event bean class returns a subclass of that type for the property, the engine logs a warning and uses reflection for that property.

2.6. java.util.Map Events

2.6.1. Overview

Events can also be represented by objects that implement the java.util.Map interface. Event properties of Map events are the values in the map accessible through the get method exposed by the java.util.Map interface.

The Map event type is a comprehensive type system that can eliminate the need to use Java classes as event types, thereby making it easier to change types at runtime or generate type information from another source.

A given Map event type can have one or more supertypes that must also be Map event types. All properties available on any of the Map supertypes are available on the type itself. In addition, anywhere within EPL that an event type name of a Map supertype is used, any of its Map subtypes and their subtypes match that expression.

Your application can add properties to an existing Map event type during runtime using the configuration operation updateMapEventType. Properties may not be updated or deleted - properties can only be added, and nested properties can be added as well. The runtime configuration also allows removing Map event types and adding them back with new type information.
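
For example, the following sketch adds a property to an existing Map event type at runtime (the AccountUpdate type name matches the example further below; the added property name riskRating is hypothetical):

Map<String, Object> addedProperties = new HashMap<String, Object>();
addedProperties.put("riskRating", String.class);
epService.getEPAdministrator().getConfiguration()
    .updateMapEventType("AccountUpdate", addedProperties);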

After your application configures a Map event type by providing a type name, the type name can be used when defining further Map event types by specifying the type name as a property type or an array property type.

One-to-many relationships in Map event types are represented via arrays. A property in a Map event type may be an array of primitive values, an array of Java objects or an array of Maps.

The engine can process java.util.Map events via the sendEvent(Map map, String eventTypeName) method on the EPRuntime interface. Entries in the Map represent event properties. Keys must be of type java.lang.String for the engine to be able to look up event property names specified by pattern or EPL statements.

The engine does not validate Map event property names or values. Your application should ensure that objects passed in as event properties match the create schema property names and types, or the configured event type information when using runtime or static configuration.

2.6.2. Map Properties

Map event properties can be of any type. Map event properties that are Java application objects or that are of type java.util.Map (or arrays thereof) offer additional power:

  • Properties that are Java application objects can be queried via the nested, indexed, mapped and dynamic property syntax as outlined earlier.

  • Properties that are of type Map allow Maps to be nested arbitrarily deep and thus can be used to represent complex domain information. The nested, indexed, mapped and dynamic property syntax can be used to query Maps within Maps and arrays of Maps within Maps.

In order to use Map events, the event type name and property names and types must be made known to the engine via Configuration. Please see the examples in Section 14.4.2, “Events represented by java.util.Map”.
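
For illustration, the CarLocUpdateEvent type used in the snippets below could be configured as follows (a minimal sketch; the property types String and int are assumptions):

Map<String, Object> carLocUpdateDef = new HashMap<String, Object>();
carLocUpdateDef.put("carId", String.class);
carLocUpdateDef.put("direction", int.class);
epService.getEPAdministrator().getConfiguration()
    .addEventType("CarLocUpdateEvent", carLocUpdateDef);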

The code snippet below creates a Map event and sends it into the engine, using the CarLocUpdateEvent event type name:

Map event = new HashMap();
event.put("carId", carId);
event.put("direction", direction);
epRuntime.sendEvent(event, "CarLocUpdateEvent");

The CarLocUpdateEvent can now be used in a statement:

select carId from CarLocUpdateEvent.win:time(1 min) where direction = 1

The engine can also query Java objects as values in a Map event via the nested property syntax. Thus Map events can be used to aggregate multiple data structures into a single event and query the composite information in a convenient way. The example below demonstrates a Map event with a transaction and an account object.

Map event = new HashMap();
event.put("txn", txn);
event.put("account", account);
epRuntime.sendEvent(event, "TxnEvent");

An example statement could look as follows.

select account.id, account.rate * txn.amount 
from TxnEvent.win:time(60 sec) 
group by account.id

2.6.3. Map Supertypes

Your Map event type may declare one or more supertypes when configuring the type at engine initialization time or at runtime through the administrative interface.

Supertypes of a Map event type must also be Map event types. All property names and types of a supertype are also available on a subtype and override such same-name properties of the subtype. In addition, anywhere within EPL that an event type name of a Map supertype is used, any of its Map subtypes also matches that expression (similar to the concept of interface in Java).

This example assumes that the BaseUpdate event type has been declared and acts as a supertype to the AccountUpdate event type (both Map event types):

epService.getEPAdministrator().getConfiguration().
    addEventType("AccountUpdate", accountUpdateDef, 
    new String[] {"BaseUpdate"});

Your application EPL statements may select BaseUpdate events and receive both BaseUpdate and AccountUpdate events, as well as any other subtypes of BaseUpdate and their subtypes.

// Receive BaseUpdate and any subtypes including subtypes of subtypes
select * from BaseUpdate

Your application Map event type may have multiple supertypes. The multiple inheritance hierarchy between Maps can be arbitrarily deep, however cyclic dependencies are not allowed. If using runtime configuration, supertypes must exist before a subtype to a supertype can be added.

See Section 14.4.2, “Events represented by java.util.Map” for more information on configuring Map event types.

2.6.4. Advanced Map Property Types

2.6.4.1. Nested Properties

Strongly-typed nested Map-within-Map events can be used to build rich, type-safe event types on the fly. Use the addEventType method on Configuration or ConfigurationOperations for initialization-time and runtime type definition.

Noteworthy points are:

  • JavaBean (POJO) objects can appear as properties in Map-within-Map.

  • One may represent Map-within-Map and Map-Array within Map using the name of a previously registered Map event type.

  • There is no limit to the number of nesting levels.

  • Dynamic properties can be used to query Map-within-Map keys that may not be known in advance.

  • The engine returns a null value for properties whose access path into the nested structure cannot be followed because map entries do not exist.

For demonstration, in this example our top-level event type is an AccountUpdate event, which has an UpdatedField structure as a property. Inside the UpdatedField structure the example defines various fields, as well as a property by name 'history' that holds a JavaBean class UpdateHistory to represent the update history for the account. The code snippet to define the event type is thus:

Map<String, Object> updatedFieldDef = new HashMap<String, Object>();
updatedFieldDef.put("name", String.class);
updatedFieldDef.put("addressLine1", String.class);
updatedFieldDef.put("history", UpdateHistory.class);

Map<String, Object> accountUpdateDef = new HashMap<String, Object>();
accountUpdateDef.put("accountId", long.class);
accountUpdateDef.put("fields", updatedFieldDef);

epService.getEPAdministrator().getConfiguration().
    addEventType("AccountUpdate", accountUpdateDef);

The next code snippet populates a sample event and sends the event into the engine:

Map<String, Object> updatedField = new HashMap<String, Object>();
updatedField.put("name", "Joe Doe");
updatedField.put("addressLine1", "40 Popular Street");
updatedField.put("history", new UpdateHistory());

Map<String, Object> accountUpdate = new HashMap<String, Object>();
accountUpdate.put("accountId", 10009901);
accountUpdate.put("fields", updatedField);

epService.getEPRuntime().sendEvent(accountUpdate, "AccountUpdate");

Last, a sample query to interrogate AccountUpdate events is as follows:

select accountId, fields.name, fields.addressLine1, fields.history.lastUpdate
from AccountUpdate

Note that type information for nested maps is only available to the immediately selecting stream. For example, the second select-query does not work:

insert into MyStream select fields from NestedMapEvent
// this does not work ... instead select the individual fields in the insert-into statement
select fields.name from MyStream 
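
A sketch of the suggested workaround, selecting the individual nested properties in the insert-into statement (assuming the nested map provides a name property):

insert into MyStream select fields.name as name from NestedMapEvent
select name from MyStream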

2.6.4.2. Map Event Type Properties

Your application may declare a Map event type for reuse within other Map event types or for one-to-many properties represented by an array of Maps.

This example declares a Map event type by name AmountCurrency with amount and currency properties:

Map<String, Object> amountAndCurr = new HashMap<String, Object>();
amountAndCurr.put("amount", double.class);
amountAndCurr.put("currency", String.class);

epService.getEPAdministrator().getConfiguration().
    addEventType("AmountCurrency", amountAndCurr);

The AmountCurrency type is now available for use as a property type itself. The code snippet below declares OrderItem to hold an item number and an AmountCurrency:

Map<String, Object> orderItem = new HashMap<String, Object>();
orderItem.put("itemNum", int.class);
orderItem.put("price", "AmountCurrency");    // The property type is the name itself

epService.getEPAdministrator().getConfiguration().
    addEventType("OrderItem", orderItem);

2.6.4.3. One-to-Many Relationships

To model repeated properties within a Map, you may use arrays as properties in a Map. You may use an array of primitive types or an array of JavaBean objects or an array of a previously declared Map event type.

When using a previously declared Map event type as an array property, the literal [] must be appended after the event type name.

The following example defines a Map event type by name SaleEvent to hold array properties of the various types. It assumes a SalesPerson Java class exists and a Map event type by name OrderItem was declared:

Map<String, Object> sale = new HashMap<String, Object>();
sale.put("userids", int[].class);
sale.put("salesPersons", SalesPerson[].class);
sale.put("items", "OrderItem[]");	 // The property type is the name itself appended by []

epService.getEPAdministrator().getConfiguration().
    addEventType("SaleEvent", sale);

The three properties that the above example declares are:

  • An integer array of user ids.

  • An array of SalesPerson Java objects.

  • An array of Maps for order items.

The next EPL statement is a sample query asking for property values held by arrays:

select userids[0], salesPersons[1].name, 
    items[1], items[1].price.amount from SaleEvent

2.7. org.w3c.dom.Node XML Events

Events can be represented as org.w3c.dom.Node instances and sent into the engine via the sendEvent method on EPRuntime or via EventSender. Please note that configuration is required so that the event type name and root element name are known. See Chapter 14, Configuration.

If an XML schema document (XSD file) can be made available as part of the configuration, then Esper can read the schema and appropriately present event type metadata and validate statements that use the event type and its properties. See Section 2.7.1, “Schema-Provided XML Events”.

When no XML schema document is provided, XML events can still be queried, however the return type and return values of property expressions are string-only and no event type metadata is available other than for explicitly configured properties. See Section 2.7.2, “No-Schema-Provided XML Events”.

In all cases Esper allows you to configure explicit XPath expressions as event properties. You can specify arbitrary XPath functions or expressions and provide a property name and type by which result values will be available for use in EPL statements. See Section 2.7.3, “Explicitly-Configured Properties”.

Nested, mapped and indexed event properties are also supported in expressions against org.w3c.dom.Node events. Thus XML trees can conveniently be interrogated via the property expression syntax.

Only one event type per root element name may be configured. The engine recognizes each event by its root element name or you may use EventSender to send events.

This section uses the following XML document as an example:

<?xml version="1.0" encoding="UTF-8"?>
<Sensor xmlns="SensorSchema">
  <ID>urn:epc:1:4.16.36</ID>
  <Observation Command="READ_PALLET_TAGS_ONLY">
    <ID>00000001</ID>
    <Tag>
      <ID>urn:epc:1:2.24.400</ID>
    </Tag>
    <Tag>
      <ID>urn:epc:1:2.24.401</ID>
    </Tag>
  </Observation>
</Sensor>

The schema for the example is:

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="SensorSchema"
  xmlns="SensorSchema" elementFormDefault="qualified">

  <xs:element name="Sensor">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="ID" type="xs:string"/>
        <xs:element ref="Observation" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>

  <xs:element name="Observation">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="ID" type="xs:string"/>
        <xs:element ref="Tag" maxOccurs="unbounded" />
      </xs:sequence>
      <xs:attribute name="Command" type="xs:string" use="required" />
    </xs:complexType>
  </xs:element>

  <xs:element name="Tag">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="ID" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>

2.7.1. Schema-Provided XML Events

If you have an XSD schema document available for your XML events, Esper can interrogate the schema. The benefits are:

  • New EPL statements that refer to event properties are validated against the types provided in the schema.

  • Event type metadata becomes available for retrieval as part of the EventType interface.

2.7.1.1. Getting Started

The engine reads an XSD schema file from a URL you provide. Make sure files imported by the XSD schema file can also be resolved.

The configuration accepts a schema URL. This is a sample code snippet to determine a schema URL from a file in classpath:

URL schemaURL = this.getClass().getClassLoader().getResource("sensor.xsd");

Here is a sample use of the runtime configuration API, please see Chapter 14, Configuration for further examples.

epService = EPServiceProviderManager.getDefaultProvider();
ConfigurationEventTypeXMLDOM sensorcfg = new ConfigurationEventTypeXMLDOM();
sensorcfg.setRootElementName("Sensor");
sensorcfg.setSchemaResource(schemaURL.toString());
epService.getEPAdministrator().getConfiguration()
    .addEventType("SensorEvent", sensorcfg);

You must provide a root element name. This name is used to look up the event type for the sendEvent(org.w3c.dom.Node node) method. An EventSender is a useful alternative method for sending events if the type lookup based on the root or document element name is not desired.

After adding the event type, you may create statements and send events. Next is a sample statement:

select ID, Observation.Command, Observation.ID, 
  Observation.Tag[0].ID, Observation.Tag[1].ID
from SensorEvent

As you can see from the example above, property expressions can query property values held in the XML document's elements and attributes.

There are multiple ways to obtain an XML DOM document instance from an XML string. The next code snippet shows how to obtain an org.w3c.dom.Document instance:

InputSource source = new InputSource(new StringReader(xml));
DocumentBuilderFactory builderFactory = DocumentBuilderFactory.newInstance();
builderFactory.setNamespaceAware(true);
Document doc = builderFactory.newDocumentBuilder().parse(source);

Send the org.w3c.dom.Node or Document object into the engine for processing:

epService.getEPRuntime().sendEvent(doc);

2.7.1.2. Property Expressions and Namespaces

By default, property expressions such as Observation.Tag[0].ID are evaluated by a fast DOM-walker implementation provided by Esper. This DOM-walker implementation is not namespace-aware.

Should you require namespace-aware traversal of the DOM document, you must set the xpath-property-expr configuration option to true (default is false). This flag causes Esper to generate namespace-aware XPath expressions from each property expression instead of the DOM-walker, as described next. Setting the xpath-property-expr option to true requires that you also configure namespace prefixes as described below.
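
A sketch of enabling the option on the sensor configuration together with a namespace prefix, assuming the option corresponds to the setXPathPropertyExpr setter:

sensorcfg.setXPathPropertyExpr(true);
sensorcfg.addNamespacePrefix("ss", "SensorSchema");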

When matching up the property names with the XSD schema information, the engine determines whether the attribute or element provides values. The algorithm checks attribute names first followed by element names. It takes the first match to the specified property name.

2.7.1.3. Property Expression to XPath Rewrite

By setting the xpath-property-expr option the engine rewrites each property expression as an XPath expression, effectively handing the evaluation over to the underlying XPath implementation available from classpath. Most JVMs have a built-in XPath implementation and there are also optimized, fast implementations such as Jaxen that can be used as well.

Set the xpath-property-expr option if you need namespace-aware document traversal, such as when your schema mixes several namespaces and element names are overlapping.

The table below shows several sample property expressions and the XPath expression generated for each, without namespace prefixes to keep the example simple:

Table 2.5. Property Expression to XPath Expression

Property Expression        Equivalent XPath
Observation.ID             /Sensor/Observation/ID
Observation.Command        /Sensor/Observation/@Command
Observation.Tag[0].ID      /Sensor/Observation/Tag[position() = 1]/ID

For mapped properties that are specified via the syntax name('key'), the algorithm looks for an attribute by name id and generates an XPath expression as mapped[@id='key'].

Finally, here is an example that includes all different types of properties and their XPath expression equivalent in one property expression:

select nested.mapped('key').indexed[1].attribute from MyEvent

The equivalent XPath expression follows, this time including n0 as a sample namespace prefix:

/n0:rootelement/n0:nested/n0:mapped[@id='key']/n0:indexed[position() = 2]/@attribute

2.7.1.4. Array Properties

All elements that are unbounded or have max occurs greater than 1 in the XSD schema are represented as indexed properties and require an index for resolution.

For example, the following is not a valid property expression in the sample Sensor document: Observation.Tag.ID. As no index is provided for Tag, the property expression is not valid.

Repeated elements within a parent element, in which the repeated element is a simple type, are also represented as an array.

Consider the next XML document:

<item>
  <book sku="8800090">
    <author>Isaac Asimov</author>
    <author>Robert A Heinlein</author>
  </book>
</item>

Here, the result of the expression book.author is an array of type String and the result of book.author[0] is a String value.

2.7.1.5. Dynamic Properties

Dynamic properties are not validated against the XSD schema information and their result value is always org.w3c.dom.Node. You may use a user-defined function to process dynamic properties returning Node. As an alternative consider using an explicit property.

An example dynamic property is Origin?.LocationCode, which looks for an element by name Origin that contains an element or attribute node by name LocationCode:

select Origin?.LocationCode from SensorEvent

2.7.1.6. Transposing Properties

When providing an XSD document, the default configuration allows you to transpose property values that are themselves complex elements, as defined in the XSD schema, into a new stream. This behavior can be controlled via the flag auto-fragment.
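
A sketch of turning the automatic transposing off, assuming the flag corresponds to the setAutoFragment setter on the XML DOM configuration:

sensorcfg.setAutoFragment(false);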

For example, consider the next query:

insert into ObservationStream
select ID, Observation from SensorEvent

The Observation as a property of the SensorEvent gets itself inserted into a new stream by name ObservationStream. The ObservationStream thus consists of a string-typed ID property and a complex-typed property named Observation, as described in the schema.

A further statement can use this stream to query:

select Observation.Command, Observation.Tag[0].ID from ObservationStream

Before continuing the discussion, here is an alternative syntax using the wildcard-select, that is also useful:

insert into TagListStream
select ID as sensorId, Observation.* from SensorEvent

The new TagListStream has a string-typed sensorId property, the Observation's string-typed ID and Command properties, as well as an array of Tag properties that are complex types themselves as defined in the schema.

Next is a sample statement to query the new stream:

select sensorId, Command, Tag[0].ID from TagListStream

Please note the following limitations:

  • The XPath standard prescribes that XPath expressions against org.w3c.Node are evaluated against the owner document of the Node. Therefore XPath is not relative to the current node but absolute against each node's owner document. Since Esper does not create new document instances for transposed nodes, transposing properties is not possible when the xpath-property-expr flag is set.

  • Complex elements that have both simple element values and complex child elements are not transposed. This is to ensure their property value is not hidden. Use an explicit XPath expression to transpose such properties.

Esper automatically registers a new event type for transposed properties. It generates the type name of the new XML event type from the XML event type name and the property names used in the expression. The synopsis is type_name.property_name[.property_name...]. The type name can be looked up, for example for use with EventSender, or can be created in advance.

2.7.1.7. Event Sender

An EventSender sends events into the engine for a given type, saving a type lookup based on element name.

This brief example sends an event via EventSender:

EventSender sender = epRuntime.getEventSender("SensorEvent");
sender.sendEvent(node);

The XML DOM event sender checks the root element name before processing the event. Use the event-sender-validates-root setting to disable validation. This forces the engine to process XML documents according to any predefined type without validation of the root element name.
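
A sketch of disabling the validation, assuming the setting corresponds to the setEventSenderValidatesRoot setter:

sensorcfg.setEventSenderValidatesRoot(false);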

2.7.1.8. Limitations

The engine schema interrogation is based on the Xerces distribution packaged into Sun Java runtimes. Your application may not replace the JRE's Xerces version and use XML schemas, unless your application sets the DOM implementation registry as shown below before loading the engine configuration:

System.setProperty(DOMImplementationRegistry.PROPERTY, 
  "com.sun.org.apache.xerces.internal.dom.DOMXSImplementationSourceImpl");

2.7.2. No-Schema-Provided XML Events

Without a schema document an XML event may still be queried. However, there are important differences in the metadata available without a schema document and therefore in the property expression results. These differences are outlined below.

All property expressions against an XML type without schema are assumed valid. There is no validation of the property expression other than syntax validation. At runtime, property expressions return string-type values or null if the expression did not yield a matching element or attribute result.

When asked for property names or property metadata, a no-schema type returns an empty array.

In all other aspects the type behaves the same as the schema-provided type described earlier.

2.7.3. Explicitly-Configured Properties

Regardless of whether or not you provide an XSD schema for the XML event type, you can always fall back to configuring explicit properties that are backed by XPath expressions.

For further documentation on XPath, please consult the XPath standard or other online material. Consider using Jaxen or Apache Axiom, for example, to provide faster XPath evaluation than your Java VM's built-in XPath provider may offer.

2.7.3.1. Simple Explicit Property

Shown below is an example configuration that adds an explicit property backed by a XPath expression and that defines namespace prefixes:

epService = EPServiceProviderManager.getDefaultProvider();
ConfigurationEventTypeXMLDOM sensorcfg = new ConfigurationEventTypeXMLDOM();
sensorcfg.addXPathProperty("countTags", "count(/ss:Sensor/ss:Observation/ss:Tag)", 
    XPathConstants.NUMBER);
sensorcfg.addNamespacePrefix("ss", "SensorSchema");
sensorcfg.setRootElementName("Sensor");
epService.getEPAdministrator().getConfiguration()
    .addEventType("SensorEvent", sensorcfg);

The countTags property is now available for querying:

select countTags from SensorEvent

The XPath expression count(...) is an XPath built-in function that counts the number of nodes; for the example document the result is 2.

2.7.3.2. Explicit Property Casting and Parsing

Esper can parse or cast the result of your XPath expression to the desired type. Your property configuration provides the type to cast to, like this:

sensorcfg.addXPathProperty("countTags", "count(/ss:Sensor/ss:Observation/ss:Tag)", 
    XPathConstants.NUMBER, "int");

The type supplied to the property configuration must be one of the built-in types. Arrays of built-in type are also possible, requiring the XPathConstants.NODESET type returned by your XPath expression, as follows:

sensorcfg.addXPathProperty("idarray", "//ss:Tag/ss:ID", 
    XPathConstants.NODESET, "String[]");

The XPath expression //ss:Tag/ss:ID returns all ID nodes under a Tag node, regardless of where in the node tree the element is located. For the example document the result is two array elements, urn:epc:1:2.24.400 and urn:epc:1:2.24.401.

2.7.3.3. Node and Nodeset Explicit Property

An explicit property may return XPathConstants.NODE or XPathConstants.NODESET and can provide the event type name of a pre-configured event type for the property. The method name to add such properties is addXPathPropertyFragment.

This code snippet adds two explicit properties and assigns an event type name for each property:

sensorcfg.addXPathPropertyFragment("tagOne", "//ss:Tag[position() = 1]", 
    XPathConstants.NODE, "TagEvent");
sensorcfg.addXPathPropertyFragment("tagArray", "//ss:Tag", 
    XPathConstants.NODESET, "TagEvent");

The configuration above references the TagEvent event type. This type must also be configured. Prefix the root element name with "//" to cause the lookup to search the nested schema elements for the definition of the type:

ConfigurationEventTypeXMLDOM tagcfg = new ConfigurationEventTypeXMLDOM();
tagcfg.setRootElementName("//Tag");
tagcfg.setSchemaResource(schemaURL.toString());
epAdministrator.getConfiguration()
    .addEventType("TagEvent", tagcfg);

The tagOne and tagArray properties are now ready for selection and transposing to further streams:

insert into TagOneStream select tagOne.* from SensorEvent

Select from the new stream:

select ID from TagOneStream

An example with indexed properties is shown next:

insert into TagArrayStream select tagArray as mytags from SensorEvent

Select from the new stream:

select mytags[0].ID from TagArrayStream

2.8. Additional Event Representations

Part of the extension and plug-in features of Esper is an event representation API. This set of classes allows an application to create new event types and event instances based on information available elsewhere, statically or dynamically at runtime when EPL statements are created. Please see Section 16.8, “Event Type And Event Object” for details.

Creating a plug-in event representation can be useful when your application has existing Java classes that carry event metadata and event property values and your application does not want to (or cannot) extract or transform such event metadata and event data into one of the built-in event representations (POJO Java objects, Map or XML DOM).

A further use of a plug-in event representation is to provide a faster or shortcut access path to event data. For example, access to event data stored in an XML format through the Streaming API for XML (StAX) is known to be very efficient. A plug-in event representation can also provide network lookup and dynamic resolution of event types and dynamic sourcing of event instances.

Currently, EsperIO provides the following additional event representations:

  • Apache Axiom: Streaming API for XML (StAX) implementation

Please see the EsperIO documentation for details on the above.

The chapter on Section 16.8, “Event Type And Event Object” explains how to create your own custom event representation.

2.9. Updating, Merging and Versioning Events

To summarize, an event is an immutable record of a past occurrence of an action or state change, and event properties contain useful information about an event.

The length of time an event is of interest to the event processing engine (retention time) depends on your EPL statements, and especially the data window, pattern and output rate limiting clauses of your statements.

During the retention time of an event more information about the event may become available, such as additional properties or changes to existing properties. Esper provides three concepts for handling updates to events.

The first means to handle updating events is the update istream clause as further described in Section 5.20, “Updating an Insert Stream: the Update IStream Clause”. It is useful when you need to update events as they enter a stream, before events are evaluated by any particular consuming statement to that stream.

The second means to update events is the on-merge and on-update clauses, for use with named windows only, as further described in Section 5.15.9, “Triggered Upsert using the On-Merge Clause” and Section 5.15.7, “Updating Named Windows: the On Update clause”. On-merge is similar to the SQL merge clause and provides what is known as an "Upsert" operation: Update existing events or if no existing event(s) are found then insert a new event, all in one atomic operation provided by a single EPL statement. On-update can be used to update individual properties of events held in a named window.

The third means to handle updating events is the revision event types, for use with named windows only, as further described in Section 5.15.11, “Versioning and Revision Event Type Use with Named Windows”. With revision event types one can declare, via configuration only, multiple different event types and then have the engine present a merged event type that contains a superset of properties of all merged types, and have the engine merge events as they arrive without additional EPL statements.

Note that patterns do not reflect changes to past events. Due to the temporal nature of patterns, any changes to events that were observed in the past do not reflect upon current pattern state.

2.10. Coarse-Grained Events

Your application events may consist of fairly comprehensive, coarse-grained structures or documents. For example in business-to-business integration scenarios, XML documents or other event objects can be rich deeply-nested graphs of event properties.

To extract information from a coarse-grained event or to perform bulk operations on the rows of the property graph in an event, Esper provides a convenient syntax: When specifying a filter expression in a pattern or in a select clause, it may contain a contained-event selection syntax, as further described in Section 5.19, “Contained-Event Selection”.

2.11. Event Objects Instantiated and Populated by Insert Into

The insert into clause can instantiate new instances of Java object events and java.util.Map events directly from the results of select clause expressions and populate such instances. Simply use the event type name as the stream name in the insert into clause as described in Section 5.10, “Merging Streams and Continuous Insertion: the Insert Into Clause”.

If instead you have an existing instance of a Java object returned by an expression, such as a single-row function or static method invocation for example, you can transpose that expression result object to a stream. This is described further in Section 5.10.7, “Transposing an Expression Result” and Section 9.4, “Select-Clause transpose Function”.

The column names specified in the select and insert into clause must match available writable properties in the event object to be populated (the target event type). The expression result types of any expressions in the select clause must also be compatible with the property types of the target event type.

If populating a POJO-based event type and the class provides a matching constructor, the expression result types of expressions in the select clause must be compatible with the constructor parameters in the order listed by the constructor. The insert into clause column names are not relevant in this case.

Consider the following example statement:

insert into com.mycompany.NewEmployeeEvent 
select fname as firstName, lname as lastName from HRSystemEvent

The above example specifies the fully-qualified class name of NewEmployeeEvent. The engine instantiates NewEmployeeEvent for each result row and populates the firstName and lastName properties of each instance from the result of select clause expressions. The HRSystemEvent in the example is assumed to have lname and fname properties, and either setter-methods and a default constructor, or a matching constructor.

Note how the example uses the as-keyword to assign column names that match the property names of the NewEmployeeEvent target event. If the property names of the source and target events are the same, the as-keyword is not required.

The next example is an alternate form and specifies property names within the insert into clause instead. The example also assumes that NewEmployeeEvent has been defined or imported via configuration since it does not specify the event class package name:

insert into NewEmployeeEvent(firstName, lastName) 
select fname, lname from HRSystemEvent

Finally, this example populates HRSystemEvent events. It selects events whose type property has the value 'NEW', populates the value 'HIRED' for the type property of each new event object, and copies the fname and lname property values to the new event object:

insert into HRSystemEvent 
select fname, lname, 'HIRED' as type from HRSystemEvent(type='NEW')

The matching of the select or insert into-clause column names to target event type's property names is case-sensitive. It is allowed to only populate a subset of all available columns in the target event type. Wildcard (*) is also allowed and copies all fields of the events or multiple events in a join.

For Java object events, your event class must provide setter-methods according to JavaBean conventions or, alternatively, a matching constructor. If the event class provides setter methods the class should also provide a default constructor taking no parameters. If the event class provides a matching constructor there is no need for setter-methods. If your event class does not have a default constructor and setter methods, or a matching constructor, your application may configure a factory method via ConfigurationEventTypeLegacy.
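
For illustration, a minimal sketch of what a NewEmployeeEvent class relying on a default constructor and JavaBean setter methods could look like (this class is an assumption for the example, not part of the Esper distribution):

package com.mycompany;

public class NewEmployeeEvent {
    private String firstName;
    private String lastName;

    // default no-argument constructor, required when relying on setter methods
    public NewEmployeeEvent() {
    }

    // setter methods following JavaBean conventions, used by insert into to populate properties
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public void setLastName(String lastName) { this.lastName = lastName; }

    // getter methods so the properties can also be read in EPL statements
    public String getFirstName() { return firstName; }
    public String getLastName() { return lastName; }
}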

The engine follows Java standards in terms of widening, performing widening automatically in cases where widening type conversion is allowed without loss of precision, for both boxed and primitive types and including BigInteger and BigDecimal.

When inserting array-typed properties into a Java or Map-type underlying event the event definition should declare the target property as an array.

Please note the following limitations:

  • Event types that utilize XML org.w3c.dom.Node underlying event objects cannot be target of an insert into clause.

2.12. Comparing Event Types

Each of the event representations of Java object, Map and XML document has advantages and disadvantages that are summarized in the table below:

Table 2.6. Comparing Event Representations

Criteria                                          Java Object (POJO/Bean or other)  Map                         XML Document
Performance                                       Good                              Good                        Not comparable; depends on use of XPath
Memory Use                                        Good                              Good                        Depends on DOM and XPath implementation used
Call Method on Event                              Yes                               Yes, if contains Object(s)  No
Nested, Indexed, Mapped and Dynamic Properties    Yes                               Yes                         Yes
Coarse-grained Event Syntax                       Yes                               Yes                         Yes
Insert-into that Representation                   Yes                               Yes                         No
Runtime Type Change                               Reload class, yes                 Yes                         Yes
Create-schema Syntax                              Yes                               Yes                         No; runtime and static configuration

Chapter 3. Processing Model

3.1. Introduction

The Esper processing model is continuous: Update listeners and/or subscribers to statements receive updated data as soon as the engine processes events for that statement, according to the statement's choice of event streams, views, filters and output rates.

As outlined in Chapter 13, API Reference, the interface for listeners is com.espertech.esper.client.UpdateListener. Implementations must provide a single update method that the engine invokes when results become available.
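
A minimal sketch of such an update listener implementation is shown below; the 'amount' property name is an assumption for this example:

public class MyUpdateListener implements com.espertech.esper.client.UpdateListener {
    public void update(com.espertech.esper.client.EventBean[] newEvents,
            com.espertech.esper.client.EventBean[] oldEvents) {
        if (newEvents == null) {
            return;   // only remove stream (old) events were delivered
        }
        // retrieve a result column by property name (assumes the statement selects an 'amount' column)
        Object amount = newEvents[0].get("amount");
        // obtain the underlying event object of the first result row
        Object underlying = newEvents[0].getUnderlying();
        System.out.println("amount=" + amount + " underlying=" + underlying);
    }
}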

A second, strongly-typed and native, highly-performant method of result delivery is provided: A subscriber object is a direct binding of query results to a Java object. The object, a POJO, receives statement results via method invocation. The subscriber class need not implement an interface or extend a superclass. Please see Section 13.3.3, “Setting a Subscriber Object”.

The engine provides statement results to update listeners by placing results in com.espertech.esper.client.EventBean instances. A typical listener implementation queries the EventBean instances via getter methods to obtain the statement-generated results.

The get method on the EventBean interface can be used to retrieve result columns by name. The property name supplied to the get method can also be used to query nested, indexed or array properties of object graphs as discussed in more detail in Chapter 2, Event Representations and Section 13.5, “Event and Event Type”.

The getUnderlying method on the EventBean interface allows update listeners to obtain the underlying event object. For wildcard selects, the underlying event is the event object that was sent into the engine via the sendEvent method. For joins and select clauses with expressions, the underlying object implements java.util.Map.

3.2. Insert Stream

In this section we look at the output of a very simple EPL statement. The statement selects an event stream without using a data window and without applying any filtering, as follows:

select * from Withdrawal

This statement selects all Withdrawal events. Every time the engine processes an event of type Withdrawal or any sub-type of Withdrawal, it invokes all update listeners, handing the new event to each of the statement's listeners.

The term insert stream denotes the new events arriving, and entering a data window or aggregation. The insert stream in this example is the stream of arriving Withdrawal events, and is posted to listeners as new events.

The diagram below shows a series of Withdrawal events 1 to 6 arriving over time. The number in parenthesis is the withdrawal amount, an event property that is used in the examples that discuss filtering.

Output example for a simple statement

Figure 3.1. Output example for a simple statement

The example statement above results in only new events and no old events posted by the engine to the statement's listeners.

3.3. Insert and Remove Stream

A length window instructs the engine to only keep the last N events for a stream. The next statement applies a length window onto the Withdrawal event stream. The statement serves to illustrate the concept of data window and events entering and leaving a data window:

select * from Withdrawal.win:length(5)

The size of this statement's length window is five events. The engine enters all arriving Withdrawal events into the length window. When the length window is full, the oldest Withdrawal event is pushed out the window. The engine indicates to listeners all events entering the window as new events, and all events leaving the window as old events.

While the term insert stream denotes new events arriving, the term remove stream denotes events leaving a data window, or changing aggregation values. In this example, the remove stream is the stream of Withdrawal events that leave the length window, and such events are posted to listeners as old events.

The next diagram illustrates how the length window contents change as events arrive and shows the events posted to an update listener.

Output example for a length window

Figure 3.2. Output example for a length window

As before, all arriving events are posted as new events to listeners. In addition, when event W1 leaves the length window on arrival of event W6, it is posted as an old event to listeners.

Similar to a length window, a time window also keeps the most recent events up to a given time period. A time window of 5 seconds, for example, keeps the last 5 seconds of events. As seconds pass, the time window actively pushes the oldest events out of the window resulting in one or more old events posted to update listeners.

Note: By default the engine only delivers the insert stream to listeners and observers. EPL supports optional istream, irstream and rstream keywords on select-clauses and on insert-into clauses to control which stream to deliver, see Section 5.3.7, “Selecting insert and remove stream events”. There is also a related, engine-wide configuration setting described in Section 14.4.16, “Engine Settings related to Stream Selection”.
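
For example, a statement that explicitly requests both the insert and remove stream could use the irstream keyword, as sketched here:

select irstream * from Withdrawal.win:length(5)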

3.4. Filters and Where-clauses

Filters to event streams allow filtering events out of a given stream before events enter a data window. The statement below shows a filter that selects Withdrawal events with an amount value of 200 or more.

select * from Withdrawal(amount>=200).win:length(5)

With the filter, any Withdrawal events that have an amount of less than 200 do not enter the length window and are therefore not passed to update listeners. Filters are discussed in more detail in Section 5.4.1, “Filter-based Event Streams” and Section 6.4, “Filter Expressions In Patterns”.

Output example for a statement with an event stream filter

Figure 3.3. Output example for a statement with an event stream filter

The where-clause and having-clause in statements eliminate potential result rows at a later stage in processing, after events have been processed into a statement's data window or other views.

The next statement applies a where-clause to Withdrawal events. Where-clauses are discussed in more detail in Section 5.5, “Specifying Search Conditions: the Where Clause”.

select * from Withdrawal.win:length(5) where amount >= 200

The where-clause applies to both new events and old events. As the diagram below shows, arriving events enter the window however only events that pass the where-clause are handed to update listeners. Also, as events leave the data window, only those events that pass the conditions in the where-clause are posted to listeners as old events.

Output example for a statement with where-clause

Figure 3.4. Output example for a statement with where-clause

The where-clause can contain complex conditions while event stream filters are more restrictive in the type of filters that can be specified. The next statement's where-clause applies the ceil function of the java.lang.Math Java library class in the where clause. The insert-into clause makes the results of the first statement available to the second statement:

insert into WithdrawalFiltered select * from Withdrawal where Math.ceil(amount) >= 200
select * from WithdrawalFiltered

3.5. Time Windows

In this section we explain the output model of statements employing a time window view and a time batch view.

3.5.1. Time Window

A time window is a moving window extending to the specified time interval into the past based on the system time. Time windows enable us to limit the number of events considered by a query, as do length windows.

As a practical example, consider the need to determine all accounts where the average withdrawal amount per account for the last 4 seconds of withdrawals is greater than 1000. The statement to solve this problem is shown below.

select account, avg(amount) 
from Withdrawal.win:time(4 sec) 
group by account
having avg(amount) > 1000

The next diagram serves to illustrate the functioning of a time window. For the diagram, we assume a query that simply selects the event itself and does not group or filter events.

select * from Withdrawal.win:time(4 sec)

The diagram starts at a given time t and displays the contents of the time window at t + 4 and t + 5 seconds and so on.

Output example for a statement with a time window

Figure 3.5. Output example for a statement with a time window

The activity as illustrated by the diagram:

  1. At time t + 4 seconds an event W1 arrives and enters the time window. The engine reports the new event to update listeners.

  2. At time t + 5 seconds an event W2 arrives and enters the time window. The engine reports the new event to update listeners.

  3. At time t + 6.5 seconds an event W3 arrives and enters the time window. The engine reports the new event to update listeners.

  4. At time t + 8 seconds event W1 leaves the time window. The engine reports the event as an old event to update listeners.

3.5.2. Time Batch

The time batch view buffers events and releases them every specified time interval in one update. Like the length batch window, it controls when events are released for evaluation.

The next diagram serves to illustrate the functioning of a time batch view. For the diagram, we assume a simple query as below:

select * from Withdrawal.win:time_batch(4 sec)

The diagram starts at a given time t and displays the contents of the time window at t + 4 and t + 5 seconds and so on.

Output example for a statement with a time batch view

Figure 3.6. Output example for a statement with a time batch view

The activity as illustrated by the diagram:

  1. At time t + 1 seconds an event W1 arrives and enters the batch. No call to inform update listeners occurs.

  2. At time t + 3 seconds an event W2 arrives and enters the batch. No call to inform update listeners occurs.

  3. At time t + 4 seconds the engine processes the batched events and starts a new batch. The engine reports events W1 and W2 to update listeners.

  4. At time t + 6.5 seconds an event W3 arrives and enters the batch. No call to inform update listeners occurs.

  5. At time t + 8 seconds the engine processes the batched events and starts a new batch. The engine reports the event W3 as new data to update listeners. The engine reports the events W1 and W2 as old data (prior batch) to update listeners.

3.6. Batch Windows

The built-in data windows that act on batches of events are the win:time_batch and the win:length_batch views. The win:time_batch data window collects events arriving during a given time interval and posts collected events as a batch to listeners at the end of the time interval. The win:length_batch data window collects a given number of events and posts collected events as a batch to listeners when the given number of events has collected.

Related to batch data windows is output rate limiting. While batch data windows retain events the output clause offered by output rate limiting can control or stabilize the rate at which events are output, see Section 5.7, “Stabilizing and Controlling Output: the Output Clause”.
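
For instance, a sketch of a statement that uses output rate limiting to release results every 5 seconds instead of retaining a batch window:

select account, amount from Withdrawal output every 5 seconds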

Let's look at how a time batch window may be used:

select account, amount from Withdrawal.win:time_batch(1 sec)

The above statement collects events arriving during a one-second interval, at the end of which the engine posts the collected events as new events (insert stream) to each listener. The engine posts the events collected during the prior batch as old events (remove stream). The engine starts posting events to listeners one second after it receives the first event and thereon.

For statements containing aggregation functions and/or a group by clause, the engine posts consolidated aggregation results for an event batch. For example, consider the following statement:

select sum(amount) as mysum from Withdrawal.win:time_batch(1 sec)

Note that output rate limiting also generates batches of events following the output model as discussed here.

3.7. Aggregation and Grouping

3.7.1. Insert and Remove Stream

Statements that aggregate events via aggregation functions also post remove stream events as aggregated values change.

Consider the following statement that alerts when 2 Withdrawal events have been received:

select count(*) as mycount from Withdrawal having count(*) = 2

When the engine encounters the second Withdrawal event, the engine posts a new event to update listeners. The value of the "mycount" property on that new event is 2. Additionally, when the engine encounters the third Withdrawal event, it posts an old event to update listeners containing the prior value of the count, if specifying the rstream keyword in the select clause to select the remove stream. The value of the "mycount" property on that old event is also 2.

Note the statement above does not specify a data window and thereby counts all arriving events since statement start. The statement above retains no events and its memory allocation is only the aggregation state, i.e. a single long value to represent count(*).

The istream or rstream keyword can be used to eliminate either new events or old events posted to listeners. The next statement uses the istream keyword causing the engine to call the listener only once when the second Withdrawal event is received:

select istream count(*) as mycount from Withdrawal having count(*) = 2

3.7.2. Output for Aggregation and Group-By

Following SQL (Structured Query Language) standards for queries against relational databases, the presence or absence of aggregation functions and the presence or absence of the group by clause dictates the number of rows posted by the engine to listeners. The next sections outline the output model for batched events under aggregation and grouping. The examples also apply to data windows that don't batch events and post results continuously as events arrive or leave data windows. The examples also apply to patterns providing events when a complete pattern matches.

In summary, as in SQL, if your query only selects aggregation values, the engine provides one row of aggregated values. It provides that row every time the aggregation is updated (insert stream), which is when events arrive or a batch of events gets processed, and when the events leave a data window or a new batch of events arrives. The remove stream then consists of prior aggregation values.

Also as in SQL, if your query selects non-aggregated values along with aggregation values in the select clause, the engine provides a row per event. The insert stream then consists of the aggregation values at the time the event arrives, while the remove stream is the aggregation value at the time the event leaves a data window, if any is defined in your query.

The documentation provides output examples for query types in Appendix A, Output Reference and Samples, and the next sections outlines each query type.

3.7.2.1. Un-aggregated and Un-grouped

An example statement for the un-aggregated and un-grouped case is as follows:

select * from Withdrawal.win:time_batch(1 sec)

At the end of a time interval, the engine posts to listeners one row for each event arriving during the time interval.

The appendix provides a complete example including input and output events over time at Section A.2, “Output for Un-aggregated and Un-grouped Queries”

3.7.2.2. Fully Aggregated and Un-grouped

If your statement only selects aggregation values and does not group, your statement may look as the example below:

select sum(amount) 
from Withdrawal.win:time_batch(1 sec)

At the end of a time interval, the engine posts to listeners a single row indicating the aggregation result. The aggregation result aggregates all events collected during the time interval.

The appendix provides a complete example including input and output events over time at Section A.3, “Output for Fully-aggregated and Un-grouped Queries”

3.7.2.3. Aggregated and Un-Grouped

If your statement selects non-aggregated properties and aggregation values, and does not group, your statement may be similar to this statement:

select account, sum(amount) 
from Withdrawal.win:time_batch(1 sec)

At the end of a time interval, the engine posts to listeners one row per event. The aggregation result aggregates all events collected during the time interval.

The appendix provides a complete example including input and output events over time at Section A.4, “Output for Aggregated and Un-grouped Queries”

3.7.2.4. Fully Aggregated and Grouped

If your statement selects aggregation values and all non-aggregated properties in the select clause are listed in the group by clause, then your statement may look similar to this example:

select account, sum(amount) 
from Withdrawal.win:time_batch(1 sec) 
group by account

At the end of a time interval, the engine posts to listeners one row per unique account number. The aggregation result aggregates per unique account.

The appendix provides a complete example including input and output events over time at Section A.5, “Output for Fully-aggregated and Grouped Queries”

3.7.2.5. Aggregated and Grouped

If your statement selects non-aggregated properties and aggregation values, and groups only some properties using the group by clause, your statement may look as below:

select account, accountName, sum(amount) 
from Withdrawal.win:time_batch(1 sec) 
group by account

At the end of a time interval, the engine posts to listeners one row per event. The aggregation result aggregates per unique account.

The appendix provides a complete example including input and output events over time at Section A.6, “Output for Aggregated and Grouped Queries”

3.8. Event Visibility and Current Time

An event sent by your application or generated by statements is visible to all other statements in the same engine instance. Similarly, current time (the time horizon) moves forward for all statements in the same engine instance. Please see the Chapter 13, API Reference chapter for how to send events and how time moves forward through system time or via simulated time, and the possible threading models.

Within an Esper engine instance you can additionally control event visibility and current time on a statement level, under the term isolated service as described in Section 13.9, “Service Isolation”.

An isolated service provides a dedicated execution environment for one or more statements. Events sent to an isolated service are visible only within that isolated service. In the isolated service you can move time forward at the pace and resolution desired without impacting other statements that reside in the engine runtime or other isolated services. You can move statements between the engine and an isolated service.

Chapter 4. Context and Context Partitions

4.1. Introduction

This section discusses the notion of context and its role in the Esper event processing language (EPL).

When you look up the word context in a dictionary, you may find: Context is the set of circumstances or facts that surround a particular event, situation, etc.

Context-dependent event processing occurs frequently: For example, consider a requirement that monitors banking transactions. For different customers your analysis considers customer-specific aggregations, patterns or data windows. In this example the context of detection is the customer. For a given customer you may want to analyze the banking transactions of that customer by using aggregations, data windows, patterns including other EPL constructs.

In a second example, consider traffic monitoring to detect speed violations. Assume the speed limit must be enforced only between 9 am and 5 pm. The context of detection is of temporal nature.

A context takes a cloud of events and classifies them into one or more sets. These sets are called context partitions. An event processing operation that is associated with a context operates on each of these context partitions independently. (Credit: Taken from the book "Event Processing in Action" by Opher Etzion and Peter Niblett.)

A context is a declaration of dimension and may thus result in one or more context partitions. In the banking transaction example the context dimension is the customer and a context partition exists per customer. In the traffic monitoring example there is a single context partition that exists only between 9 am and 5 pm and does not exist outside of that daily time period.

In an event processing glossary you may find the term event processing agent. An EPL statement is an event processing agent. An alternative term for context partition is event processing agent instance.

Esper EPL allows you to declare contexts explicitly, offering the following benefits:

  1. Context can apply to multiple statements thereby eliminating the need to duplicate context dimensional information between statements.

  2. Context partitions can be temporally overlapping.

  3. Context partitions provide a fine-grained lifecycle that is independent of the statement lifecycle.

  4. Fine-grained lock granularity: The engine locks on the level of context partitions thereby allowing very high concurrency, with a maximum (theoretical) degree of parallelism at 2^31-1 (2,147,483,647) parallel threads working to process a single EPL statement under a hash segmented context.

  5. EPL can become easier to read as common predicate expressions can be factored out into a context.

  6. You may specify a nested context that is composed from two or more contexts. In particular a temporal context type is frequently used in combination with a segmentation-oriented context.

Esper EPL allows you to declare a context explicitly via the create context syntax introduced below.

After you have declared a context, one or more EPL statements can refer to that context by specifying context name. When an EPL statement refers to a context, all EPL-statement related state such as aggregations, patterns or data windows etc. exists once per context partition.

If an EPL statement does not declare a context, it implicitly has a single context partition. The single context partition lives as long as the EPL statement is started and ends when the EPL statement is stopped.

Variables are global state and are visible across context partitions. The same is true for event types and external data.

For more information on locking and threading please see Section 13.6, “Engine Threading and Concurrency”. For performance related information please refer to Chapter 19, Performance.

4.2. Context Declaration

The create context statement declares a context by specifying a context name and context dimension information.

A context declaration by itself does not consume any resources or perform any logic until your application starts at least one statement that refers to that context. Until then the context is inactive and not in use.

When your application creates or starts the first statement that refers to the context, the engine activates the context.

As soon as your application stops or destroys all statements that refer to the context, the context becomes inactive again.

When your application stops or destroys a statement that refers to a context, the context partitions associated to that statement also end (context partitions associated to other started statements live on).

When your application stops or destroys the statement that declared the context and does not also stop or destroy any statements that refer to the context, the context partitions associated to each such statement do not end.

When your application destroys the statement that declared the context and destroys all statements that refer to that context then the engine removes the context declaration entirely.

The create context statement posts no output events to listeners or subscribers and does not return any rows when iterated.

4.2.1. Context-Provided Properties

Each of the context declarations makes available a set of built-in context properties as well as initiating event or pattern properties, as applicable. You may select these context properties for output or use them in any of the statement expressions.

Refer to built-in context properties as context.property_name, wherein property_name refers to the name of the built-in context property.

Refer to initiating event or pattern match event properties as context.stream_name.property_name, wherein stream_name refers to the name assigned to the event or the tag name specified in a pattern and property_name refers to the name of the initiating event or pattern match event property.
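
For example, assuming the keyed segmented context SegmentedByCustomer that is declared in the next section, a statement could select the built-in context id alongside event properties, as sketched here:

context SegmentedByCustomer
select context.id, custId, amount from BankTxn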

4.2.1. Keyed Segmented Context

This context assigns events to context partitions based on the values of one or more event properties, using the value of these property(s) as a key that picks a unique context partition directly. Each event thus belongs to exactly one context partition or zero context partitions if the event does not match the optional filter predicate expression(s). Each context partition handles exactly one set of key values.

The syntax for creating a keyed segmented context is as follows:

create context context_name partition [by]
  event_property [and event_property [and ...]] from stream_def
  [, event_property [...] from stream_def]
  [, ...]

The context_name you assign to the context can be any identifier.

Following the context name is one or more lists of event properties and a stream definition for each entry, separated by comma (,).

The event_property is the name(s) of the event properties that provide the value(s) to pick a unique partition. Multiple event property names are separated by the and keyword.

The stream_def is a stream definition which consists of an event type name optionally followed by parentheses that contain filter expressions. If providing filter expressions, only events matching the provided filter expressions for that event type are considered by context partitions.

You may list multiple event properties for each stream definition. You may list multiple stream definitions. Please refer to usage guidelines below when specifying multiple event properties and/or multiple stream definitions.

The next statement creates a context SegmentedByCustomer that considers the value of the custId property of the BankTxn event type to pick the context partition to assign events to:

create context SegmentedByCustomer partition by custId from BankTxn

The following statement refers to the context created as above to compute a total withdrawal amount per account for each customer:

context SegmentedByCustomer
select custId, account, sum(amount) from BankTxn group by account

The following statement refers to the context created as above and detects a withdrawal of more than 400 followed by a second withdrawal of more than 400 that occurs within 10 minutes of the first withdrawal, all for the same customer:

context SegmentedByCustomer
select * from pattern [
  every a=BankTxn(amount > 400) -> b=BankTxn(amount > 400) where timer:within(10 minutes)
]

The EPL statement that refers to a keyed segmented context must have at least one filter expression, at any place within the EPL statement that looks for events of any of the event types listed in the context declaration.

For example, the following is not valid:

// Neither LoginEvent nor LogoutEvent are listed in the context declaration
context SegmentedByCustomer
select * from pattern [every a=LoginEvent -> b=LogoutEvent where timer:within(10 minutes)]

4.2.1.1. Multiple Stream Definitions

If the context declaration lists multiple streams, each event type must be unrelated: You may not list the same event type twice and you may not list a sub- or super-type of any event type already listed.

The following is not a valid declaration since the BankTxn event type is listed twice:

// Not valid
create context SegmentedByCustomer partition by custId from BankTxn, account from BankTxn

If the context declaration lists multiple streams, the number of event properties provided for each event type must also be the same. The value type returned by event properties of each event type must match within the respective position it is listed in, i.e. the first property listed for each event type must have the same type, the second property listed for each event type must have the same type, and so on.

The following is not a valid declaration since the customer id of BankTxn and the login time of LoginEvent are not the same type:

// Invalid: Type mismatch between properties
create context SegmentedByCustomer partition by custId from BankTxn, loginTime from LoginEvent

The next statement creates a context SegmentedByCustomer that also considers LoginEvent and LogoutEvent:

create context SegmentedByCustomer partition by 
  custId from BankTxn, loginId from LoginEvent, loginId from LogoutEvent

As you may have noticed, the above example refers to loginId as the event property name for LoginEvent and LogoutEvent events. The assumption is that the loginId event property of the login and logout events has the same type and carries the same exact value as the custId of bank transaction events, thereby allowing all events of the three event types to apply to the same customer-specific context partition.

4.2.1.2. Filters

You may add a filter expression to each of the event types listed. The engine applies the filter expression to the EPL statement that refers to the context and to the same event type.

The next statement creates a context SegmentedByCustomer that does not consider login events that indicate that the login failed.

create context SegmentedByCustomer partition by 
  custId from BankTxn, loginId from LoginEvent(failed=false)

4.2.1.3. Multiple Properties Per Event Type

You may assign events to context partitions based on the values of two or more event properties. The engine thus uses the combination of values of these properties to pick a context partition.

An example context declaration follows:

create context ByCustomerAndAccount partition by custId and account from BankTxn

The next statement refers to the context and computes a total withdrawal amount, per account and customer:

context ByCustomerAndAccount select custId, account, sum(amount) from BankTxn

As you can see, the above statement does not need to specify a group by clause to aggregate per customer and account, since events of each unique combination of customer id and account are assigned to separate context partitions.

4.2.1.4. Built-In Context Properties

The following context properties are available in your EPL statement when it refers to a keyed segmented context:

Table 4.1. Keyed Segmented Context Properties

Name    Description
name    The string-type context name.
id      The integer-type internal context id that the engine assigns to the context partition.
key1    The event property value for the first key.
keyN    The event property value for the Nth key.

Assume the keyed segmented context is declared as follows:

create context ByCustomerAndAccount partition by custId and account from BankTxn

You may, for example, select the context properties as follows:

context ByCustomerAndAccount
  select context.name, context.id, context.key1, context.key2 from BankTxn

4.2.1.5. Examples of Joins

This section discusses the impact of contexts on joins to provide further samples of use and deepen the understanding of context partitions.

Consider a context declared as follows:

create context ByCust partition by custId from BankTxn

The following statement matches, within the same customer id, the current event with the last 30 minutes of events to determine those events that match amounts:

context ByCust 
  select * from BankTxn as t1 unidirectional, BankTxn.win:time(30 min) t2 
  where t1.amount = t2.amount

Note that the where-clause in the join above does not mention customer id. Since each BankTxn applies to a specific context partition the join evaluates within that single context partition.

Consider the next statement that matches a security event with the last 30 minutes of transaction events for each customer:

context ByCust
  select * from SecurityEvent as t1 unidirectional, BankTxn.win:time(30 min) t2 
  where t1.customerName = t2.customerName

When a security event comes in, it applies to all context partitions and not any specific context partition, since the SecurityEvent event type is not part of the context declaration.

4.2.2. Hash Segmented Context

This context assigns events to context partitions based on the result of a hash function and a modulo operation. Each event thus belongs to exactly one context partition or zero context partitions if the event does not match the optional filter predicate expression(s). Each context partition handles exactly one result of hash value modulo granularity.

The syntax for creating a hashed segmented context is as follows:

create context context_name coalesce [by] 
  hash_func_name(hash_func_param) from stream_def
  [, hash_func_name(hash_func_param) from stream_def ]
  [, ...]
  granularity granularity_value
  [preallocate]

The context_name you assign to the context can be any identifier.

Following the context name is one or more pairs of hash function name and parameters together with a stream definition for each entry, separated by comma (,).

The hash_func_name can either be consistent_hash_crc32 or hash_code or a plug-in single-row function. The hash_func_param is a list of parameter expressions.

  • If you specify consistent_hash_crc32 the engine computes a consistent hash code using the CRC-32 algorithm.

  • If you specify hash_code the engine uses the Java object hash code.

  • If you specify the name of a plug-in single-row function your function must return an integer value that is the hash code. You may use the wildcard (*) character among the parameters to pass the underlying event to the single-row function.

The stream_def is a stream definition which consists of an event type name optionally followed by parentheses that contain filter expressions. If providing filter expressions, only events matching the provided filter expressions for that event type are considered by context partitions.

You may list multiple stream definitions. Please refer to usage guidelines below when specifying multiple stream definitions.

The granularity is required and is an integer number that defines the maximum number of context partitions. The engine computes the hash code modulo granularity, i.e. hash(params) mod granularity, to determine the context partition. When you specify the hash_code function the engine uses the object hash code and the computation is params.hashCode() % granularity.
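
For example, with the hash_code function and a granularity of 16, an event whose key value has a hashCode() of 1234 is assigned to context partition 1234 % 16 = 2.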

Since the engine locks on the level of context partition to protect state, the granularity defines the maximum degree of parallelism. For example, a granularity of 1024 means that 1024 context partitions handle events and thus a maximum of 1024 threads can process each assigned statement concurrently.

The optional preallocate keyword instructs the engine to allocate all context partitions at once at the time a statement refers to the context. This is beneficial for performance as the engine does not need to determine whether a context partition exists and dynamically allocate, but may require more memory.

The next statement creates a context SegmentedByCustomerHash that considers the CRC-32 hash code of the custId property of the BankTxn event type to pick the context partition to assign events to, with up to 16 different context partitions that are preallocated:

create context SegmentedByCustomerHash
  coalesce by consistent_hash_crc32(custId) from BankTxn granularity 16 preallocate

The following statement refers to the context created as above to compute a total withdrawal amount per account for each customer:

context SegmentedByCustomerHash
select custId, account, sum(amount) from BankTxn group by custId, account

Note that the statement above groups by custId: Since the events for different customer ids can be assigned to the same context partition, it is necessary that the EPL statement also groups by customer id.

The context declaration shown next assumes that the application provides a computeHash single-row function that accepts a BankTxn as a parameter, wherein the result of this function must be an integer value that determines the context partition for each event:

create context MyHashContext
  coalesce by computeHash(*) from BankTxn granularity 16 preallocate
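
The single-row function itself is ordinary application code that is registered with the engine configuration. The following is a sketch only, assuming a BankTxn POJO with a getCustId() accessor; class and method names other than the configuration API call are illustrative:

// Receives the underlying BankTxn event because the wildcard (*) is passed in the EPL above.
// Returns an integer hash code; the engine applies the modulo-granularity computation itself.
public class MyHashUtil {
  public static int computeHash(BankTxn txn) {
    return txn.getCustId().hashCode();
  }
}

// Register the single-row function (com.espertech.esper.client.Configuration) before creating statements:
Configuration config = new Configuration();
config.addPlugInSingleRowFunction("computeHash", MyHashUtil.class.getName(), "computeHash");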

The EPL statement that refers to a hash segmented context must have at least one filter expression, at any place within the EPL statement that looks for events of any of the event types listed in the context declaration.

For example, the following is not valid:

// Neither LoginEvent nor LogoutEvent are listed in the context declaration
context SegmentedByCustomerHash
select * from pattern [every a=LoginEvent -> b=LogoutEvent where timer:within(10 minutes)]

4.2.2.1. Multiple Stream Definitions

If the context declaration lists multiple streams, each event type must be unrelated: You may not list the same event type twice and you may not list a sub- or super-type of any event type already listed.

If the context declaration lists multiple streams, the hash code function should return the same hash code for the related keys of all streams.

The next statement creates a context HashedByCustomer that also considers LoginEvent and LogoutEvent:

create context HashedByCustomer as coalesce
  consistent_hash_crc32(custId) from BankTxn,
  consistent_hash_crc32(loginId) from LoginEvent, 
  consistent_hash_crc32(loginId) from LogoutEvent
  granularity 32 preallocate

4.2.2.2. Filters

You may add a filter expression to each of the event types listed. The engine applies the filter expression to the EPL statement that refers to the context and to the same event type.

The next statement creates a context HashedByCustomer that does not consider login events that indicate that the login failed.

create context HashedByCustomer 
  coalesce consistent_hash_crc32(loginId) from LoginEvent(failed = false) 
  granularity 1024 preallocate

4.2.2.3. Built-In Context Properties

The following context properties are available in your EPL statement when it refers to a hashed segmented context:

Table 4.2. Hashed Segmented Context Properties

Name    Description
name    The string-type context name.
id      The integer-type internal context id that the engine assigns to the context partition.

Assume the hashed segmented context is declared as follows:

create context ByCustomerHash coalesce consistent_hash_crc32(custId) from BankTxn granularity 1024

You may, for example, select the context properties as follows:

context ByCustomerHash
  select context.name, context.id from BankTxn

4.2.2.4. Performance Considerations

The hash_code function based on the Java object hash code is generally faster than the CRC32 algorithm. The CRC32 algorithm, when used with a non-String parameter or with multiple parameters, requires the engine to serialize all expression results to a byte array to compute the CRC32 hash code.

We recommend keeping the granularity small (1k and under) when using preallocate.

When specifying a granularity greater than 65536 (64k) the engine switches to a Map-based lookup of context partition state which can slow down statement processing.

4.2.3. Category Segmented Context

This context assigns events to context partitions based on the values of one or more event properties, using one or more predicate expressions to define context partition membership. Each event can thus belong to zero, one or many context partitions depending on the outcome of the predicate expressions.

The syntax for creating a category segmented context is as follows:

create context context_name
  group [by] group_expression as category_label
  [, group [by] group_expression as category_label]
  [, ...]
  from stream_def

The context_name you assign to the context can be any identifier.

Following the context name is a comma-separated list of groups, each introduced by the group keyword. The list of groups is followed by the from keyword and a stream definition.

The group_expression is an expression that categorizes events. Each group expression must be followed by the as keyword and a category label which can be any identifier.

Group expressions are predicate expressions that must return a Boolean true or false when applied to an event. For a given event, any number of the group expressions may return true, thus categories can overlap.

The stream_def is a stream definition which consists of an event type name optionally followed by parentheses that contain filter expressions. If providing filter expressions, only events matching the provided filter expressions for that event type are considered by context partitions.

The next statement creates a context CategoryByTemp that considers the value of the temperature property of the SensorEvent event type to pick context partitions to assign events to:

create context CategoryByTemp
  group temp < 65 as cold,
  group temp between 65 and 85 as normal,
  group temp > 85 as large
  from SensorEvent

The following statement simply counts, for each category, the number of events and outputs the category label and count:

context CategoryByTemp select context.label, count(*) from SensorEvent

4.2.3.1. Built-In Context Properties

The following context properties are available in your EPL statement when it refers to a category segmented context:

Table 4.3. Category Segmented Context Properties

Name    Description
name    The string-type context name.
id      The integer-type internal context id that the engine assigns to the context partition.
label   The category label is the string identifier value after the as keyword that is specified for each group.

You may, for example, select the context properties as follows:

context CategoryByTemp
  select context.name, context.id, context.label from SensorEvent

4.2.4. Non-Overlapping Context

You may declare a non-overlapping context that exists once or that repeats in a regular fashion as controlled by start and end conditions. The number of context partitions is always either one or zero: Context partitions do not overlap.

The syntax for creating a non-overlapping context is as follows:

create context context_name
  start start_condition end end_condition

The context_name you assign to the context can be any identifier.

Following the context name is the start keyword, a start_condition, the end keyword and an end_condition.

Both the start and end condition can be an event filter, a pattern, a crontab or a time period. The syntax of start and end conditions is described in Section 4.2.6, “Context Conditions”.

Once the start condition occurs, the engine no longer observes the start condition and begins observing the end condition. Once the end condition occurs, the engine observes the start condition again.

If you specified an event filter as the start condition, then the event also counts towards the statement(s) that refer to that context. If you specified a pattern as the start condition, then the events that may constitute the pattern match can also count towards the statement(s) that refer to the context provided that @inclusive and event tags are both specified (see below).

At the time of context activation when your application creates a statement that utilizes the context, the engine checks whether the start and end condition are crontab expressions. The engine evaluates the start and end crontab expressions and determines whether the current time is a time between start and end. If the current time is between start and end times, the engine allocates the context partition and waits to observe the end time. Otherwise the engine waits to observe the start time and does not allocate a context partition.

The built-in context properties that are available are the same as described in Section 4.2.5.1, “Built-In Context Properties”.

The next statement creates a context NineToFive that declares a daily time period that starts at 9 am and ends at 5 pm:

create context NineToFive start (0, 9, *, *, *) end (0, 17, *, *, *)

The following statement outputs speed violations between 9 am and 5 pm, considering a speed of 100 or greater as a violation:

context NineToFive select * from TrafficEvent(speed >= 100)

The example that follows demonstrates the use of an event filter as the start condition and a pattern as the end condition.

The next statement creates a context PowerOutage that declares a context that starts when the first PowerOutageEvent event arrives and that ends 5 seconds after a subsequent PowerOnEvent arrives:

create context PowerOutage start PowerOutageEvent end pattern [PowerOnEvent -> timer:interval(5)]

The following statement outputs the temperature during a power outage and for 5 seconds after the power comes on:

context PowerOutage select * from TemperatureEvent

To output only the last value when a context partition ends (terminates, expires), please read on to the description of output rate limiting.

4.2.5. Overlapping Context

This context initiates a new context partition when an initiating condition occurs, and terminates one or more context partitions when the terminating condition occurs. The engine maintains as many context partitions as the initiating condition has fired, and discards context partitions that terminate when the terminating condition fires.

The syntax for creating an overlapping context is as follows:

create context context_name
  initiated [by] initiating_condition
  terminated [by] terminating_condition

The context_name you assign to the context can be any identifier.

Following the context name is the initiated keyword followed by the initiating condition, then the terminated keyword followed by the terminating condition.

Both the initiating and terminating condition can be an event filter, a pattern, a crontab or a time period. The syntax of initiating and terminating conditions is described in Section 4.2.6, “Context Conditions”.

If you specified an event filter for the initiating condition, then the event that initiates a new context partition also counts towards the statement(s) that refer to that context. If you specified a pattern to initiate a new context partition, then the events that may constitute the pattern match can also count towards the statement(s) that refer to the context provided that @inclusive and event tags are both specified (see below).

The next statement creates a context CtxTrainEnter that allocates a new context partition when a train enters a station, and that terminates each context partition 5 minutes after the time the context partition was allocated:

create context CtxTrainEnter
  initiated by TrainEnterEvent as te
  terminated after 5 minutes

The context declared above assigns the stream name te. Thereby the initiating event's properties can be accessed, for example, by specifying context.te.trainId.

The following statement detects when a train enters a station as indicated by a TrainEnterEvent, but does not leave the station within 5 minutes as would be indicated by a matching TrainLeaveEvent:

context CtxTrainEnter
select t1 from pattern [
  t1=TrainEnterEvent -> timer:interval(5 min) and not TrainLeaveEvent(trainId = context.te.trainId)
  ]

Since the TrainEnterEvent that initiates a new context partition also counts towards the statement, the first part of the pattern (the t1=TrainEnterEvent) is satisfied by that initiating event.

The next statement creates a context CtxEachMinute that allocates a new context partition every 1 minute, and that terminates each context partition 5 minutes after the time the context partition was allocated:

create context CtxEachMinute
  initiated by pattern [every timer:interval(1 minute)]
  terminated after 5 minutes

The following statement averages the temperature, starting anew every 1 minute and outputs the aggregate value continuously:

context CtxEachMinute select avg(temp) from SensorEvent

To output only the last value when a context partition ends (terminates, expires), please read on to the description of output rate limiting.

4.2.5.1. Built-In Context Properties

The following context properties are available in your EPL statement when it refers to a context:

Table 4.4. Context Properties

Name       Description
name       The string-type context name.
startTime  The start time of the context partition.
endTime    The end time of the context partition. This field is only available in the case that it can be computed from the crontab or time period expression that is provided.

You may, for example, select the context properties as follows:

context NineToFive 
select context.name, context.startTime, context.endTime from TrafficEvent(speed >= 100)

The following statement looks for the next train leave event for the same train id and selects a few of the context properties:

context CtxTrainEnter
select *, context.te.trainId, context.id, context.name
from TrainLeaveEvent(trainId = context.te.trainId)

4.2.6. Context Conditions

Context start/initiating and end/terminating conditions are for use with overlapping and non-overlapping contexts. Any combination of conditions may be specified.

4.2.6.1. Filter Context Condition

Use the syntax described here to define the stream that starts/initiates a context partition or that ends/terminates a context partition.

The syntax is:

event_stream_name [(filter_criteria)] [as stream_name]

The event_stream_name is either the name of an event type or the name of an event stream populated by an insert into statement. The filter_criteria is optional and consists of a list of expressions filtering the events of the event stream, within parentheses after the event stream name.

Two examples are:

// A non-overlapping context that starts when MyStartEvent arrives and ends when MyEndEvent arrives
create context MyContext start MyStartEvent end MyEndEvent
// An overlapping context where each MyEvent with level greater than zero
// initiates a new context partition that terminates after 10 seconds
create context MyContext initiated MyEvent(level > 0) terminated after 10 seconds

You may correlate the start/initiating and end/terminating streams by providing a stream name following the as keyword, and by referring to that stream name in the filter criteria of the end condition.

Two examples that correlate the start/initiating and end/terminating condition are:

// A non-overlapping context that starts when MyEvent arrives
// and ends when a matching MyEvent arrives (same id)
create context MyContext 
start MyEvent as myevent
end MyEvent(id=myevent.id)
// An overlapping context where each MyInitEvent initiates a new context partition 
// that terminates when a matching MyTermEvent arrives 
create context MyContext 
initiated by MyInitEvent as e1 
terminated by MyTermEvent(id=e1.id, level <> e1.level)

4.2.6.2. Pattern Context Condition

You can define a pattern that starts/initiates a context partition or that ends/terminates a context partition.

The syntax is:

pattern [pattern_expression] [@inclusive]

The pattern_expression is a pattern as described in Chapter 6, EPL Reference: Patterns.

Specify @inclusive after the pattern to have those same events that constitute the pattern match also count towards any statements that are associated to the context. You must also provide a tag for each event in a pattern that should be included.

Examples are:

// A non-overlapping context that starts when either StartEventOne or StartEventTwo arrive
// and that ends after 5 seconds.
// Here neither StartEventOne nor StartEventTwo counts towards any statements
// that are referring to the context.
create context MyContext 
  start pattern [StartEventOne or StartEventTwo] 
  end after 5 seconds
// Same as above. 
// Here both StartEventOne and StartEventTwo do count towards any statements
// that are referring to the context.
create context MyContext 
  start pattern [a=StartEventOne or b=StartEventTwo] @inclusive 
  end after 5 seconds
// An overlapping context where each distinct MyInitEvent initiates a new context 
// and each context partition terminates after 20 seconds
// We use @inclusive to say that the same MyInitEvent that fires the pattern
// also applies to statements that are associated to the context.
create context MyContext
  initiated by pattern [every-distinct(a.id, 20 sec) a=MyInitEvent]@inclusive
  terminated after 20 sec
// An overlapping context where each pattern match initiates a new context 
// and all context partitions terminate when MyTermEvent arrives.
// The MyInitEvent and MyOtherEvent that trigger the pattern are themselves not included 
// in any statements that are associated to the context.
create context MyContext
  initiated by pattern [every MyInitEvent -> MyOtherEvent where timer:within(5)]
  terminated by MyTermEvent

You may correlate the start and end streams by providing tags as part of the pattern, and by referring to the tag name(s) in the filter criteria of the end condition.

An example that correlates the start and end condition is:

// A non-overlapping context that starts when either StartEventOne or StartEventTwo arrive
// and that ends when either a matching EndEventOne or EndEventTwo arrive
create context MyContext 
  start pattern [a=StartEventOne or b=StartEventTwo]@inclusive
  end pattern [EndEventOne(id=a.id) or EndEventTwo(id=b.id)]

4.2.6.3. Crontab Context Condition

Crontab expressions are described in Section 6.6.2.2, “timer:at”.

Examples are:

// A non-overlapping context started daily between 9 am and 5 pm
// and not started outside of these hours:
create context NineToFive start (0, 9, *, *, *) end (0, 17, *, *, *)
// An overlapping context where crontab initiates a new context every 1 minute
// and each context partition terminates after 10 seconds:
create context MyContext initiated (*, *, *, *, *) terminated after 10 seconds

4.2.6.4. Time Period Context Condition

You may specify a time period that the engine observes before the condition fires. Time period expressions are described in Section 5.2.1, “Specifying Time Periods”.

The syntax is:

after time_period_expression

Examples are:

// A non-overlapping context started after 10 seconds 
// that ends 1 minute after it starts and that again starts 10 seconds thereafter.
create context NonOverlap10SecFor1Min start after 10 seconds end after 1 minute
// An overlapping context that starts a new context partition every 5 seconds
// and each context partition lasts 1 minute
create context Overlap5SecFor1Min initiated after 5 seconds terminated after 1 minute

4.3. Context Nesting

A nested context is a context that is composed of two or more contexts.

The syntax for creating a nested context is as follows:

create context context_name
  context nested_context_name [as] nested_context_definition ,
  context nested_context_name [as] nested_context_definition [, ...]

The context_name you assign to the context can be any identifier.

Following the context name is a comma-separated list of nested contexts. For each nested context specify the context keyword followed by a nested context name and the nested context declaration. Any of the context declarations as outlined in Section 4.2, “Context Declaration” are allowed for nested contexts. The order of nested context declarations matters as outlined below.

The next statement creates a nested context NineToFiveSegmented that, between 9 am and 5 pm, allocates a new context partition for each customer id:

create context NineToFiveSegmented
  context NineToFive start (0, 9, *, *, *) end (0, 17, *, *, *),
  context SegmentedByCustomer partition by custId from BankTxn

The following statement refers to the nested context to compute a total withdrawal amount per account for each customer but only between 9 am and 5 pm:

context NineToFiveSegmented
select custId, account, sum(amount) from BankTxn group by account

Esper implements nested contexts as a context tree: The context declared first controls the lifecycle of the context(s) declared thereafter. Thereby, in the above example, outside of the 9am-to-5pm time the engine has no memory and consumes no resources in relationship to bank transactions or customer ids.

When combining segmented contexts, the set of context partitions for the nested context effectively is the Cartesian product of the partition sets of the nested segmented contexts.

When combining temporal contexts with other contexts, since temporal contexts may overlap and may terminate, it is important to understand that temporal contexts control the lifecycle of sub-contexts (contexts declared thereafter). The order of declaration of contexts in a nested context can thereby change resource usage and output result.

The next statement creates a context that allocates context partitions only when a train enters a station and then for each hash of the tag id of a passenger as indicated by PassengerScanEvent events, and terminates all context partitions after 5 minutes:

create context CtxNestedTrainEnter
  context InitCtx initiated by TrainEnterEvent as te terminated after 5 minutes,
  context HashCtx coalesce by consistent_hash_crc32(tagId) from PassengerScanEvent
    granularity 16 preallocate

In the example above the engine does not start tracking PassengerScanEvent events or hash codes or allocate context partitions until a TrainEnterEvent arrives.

4.3.1. Built-In Nested Context Properties

Context properties of all nested contexts are available for use. Specify context.nested_context_name.property_name or, if the nested context declaration provided stream names or tags for patterns, context.nested_context_name.stream_name.property_name.

For example, consider the CtxNestedTrainEnter context declared earlier. The following statement selects a few of the context properties:

context CtxNestedTrainEnter
select context.InitCtx.te.trainId, context.HashCtx.id,
  tagId, count(*) from PassengerScanEvent group by tagId

In a second example, consider the NineToFiveSegmented context declared earlier. The following statement selects a few of the context properties:

context NineToFiveSegmented
select context.NineToFive.startTime, context.SegmentedByCustomer.key1 from BankTxn

The following context properties are available in your EPL statement when it refers to a nested context:

Table 4.5. Nested Context Properties

Name    Description
name    The string-type context name.
id      The integer-type internal context id that the engine assigns to the context partition.

This example selects the nested context name and context partition id:

context NineToFiveSegmented select context.name, context.id from BankTxn

4.4. Partitioning Without Context Declaration

You do not need to declare a context to partition data windows, aggregation values or patterns individually. You may mix and match partitioning as needed.

The table below outlines other partitioning syntax supported by EPL:

Table 4.6. Partition in EPL without the use of Context Declaration

Partition Type    Description    Example
Grouped Data Window

Partitions at the level of the data window; applies only to the appended data window(s).

Syntax: std:groupwin(...)

// Length window of 2 events per customer
select * from
  BankTxn.std:groupwin(custId).win:length(2)
Grouped Aggregation

Partitions at the level of aggregation; applies to aggregations only.

Syntax: group by ...

select avg(price), window(*)
  from BankTxn group by custId
Pattern

Partitions pattern subexpressions.

Syntax: every or every-distinct

select * from pattern [
  every a=BankTxn -> BankTxn(custId = a.custId)...]
Match-Recognize

Partitions match-recognize patterns.

Syntax: partition by

select * from match_recognize 
  ... partition by custId
Join and Subquery

Partitions joins and subqueries.

Syntax: where ...

select * from ... where a.custId = b.custId

4.5. Output When Context Partition Ends

You may use output rate limiting to trigger output when a context partition ends, as further described in Section 5.7, “Stabilizing and Controlling Output: the Output Clause”.

Consider the fixed temporal context: A new context partition gets allocated at the designated start time and the current context partition ends at the designated end time. To trigger output when the context partition ends and before it gets removed, read on.

The same is true for the initiated temporal context: That context starts a new context partition when trigger events arrive or when a pattern matches. Each context partition expires (ends, terminates) after the specified time period passed. To trigger output at the time the context partition expires, read on.

You may use the when terminated syntax with output rate limiting to trigger output when a context partition ends. The following example demonstrates the idea by declaring an initiated temporal context.

The next statement creates a context CtxEachMinute that initiates a new context partition every 1 minute, and that expires each context partition after 5 minutes:

create context CtxEachMinute
initiated by pattern [every timer:interval(1 min)]
terminated after 5 minutes

The following statement computes an ongoing average temperature; however, it only outputs the last value of the average temperature after 5 minutes when a context partition ends:

context CtxEachMinute
select context.id, avg(temp) from SensorEvent output snapshot when terminated

The when terminated syntax can be combined with other output rates.

The next example outputs every 1 minute and also when the context partition ends:

context CtxEachMinute
select context.id, avg(temp) from SensorEvent output snapshot every 1 minute and when terminated

In the case that the end/terminating condition of the context partition is an event or pattern, the context properties contain the information of the tagged events in the pattern or the single event that ended/terminated the context partition.

For example, consider the following context wherein the engine initiates a new context partition for each arriving MyStartEvent event and terminates a context partition when a matching MyEndEvent arrives:

create context CtxSample
initiated by MyStartEvent as startevent
terminated by MyEndEvent(id = startevent.id) as endevent

The following statement outputs the id property of the initiating and terminating event and only outputs when a context partition ends:

context CtxSample 
select context.startevent.id, context.endevent.id, count(*) from MyEvent
output snapshot when terminated

You may in addition specify a termination expression that the engine evaluates when a context partition terminates. Only when the termination expression evaluates to true does output occur. The expression may refer to built-in properties as described in Section 5.7.1.1, “Controlling Output Using an Expression”. The syntax is as follows:

...output when terminated and termination_expression

The next example statement outputs when a context partition ends but only if at least two events are available for output:

context CtxEachMinute
select * from SensorEvent output when terminated and count_insert >= 2

The final example EPL outputs when a context partition ends and sets the variable myvar to a new value:

context CtxEachMinute
select * from SensorEvent output when terminated then set myvar=3

4.6. Context and Named Window

Named windows are globally-visible data windows that may be referred to by multiple statements. You may refer to named windows in statements that declare a context without any special considerations.

You may also create a named window and declare a context for the named window. In this case the engine in effect manages separate named windows, one for each context partition. Limitations apply in this case that we discuss herein.

For example, consider the 9 am to 5 pm fixed temporal context as shown earlier:

create context NineToFive start (0, 9, *, *, *) end (0, 17, *, *, *)

You may create a named window that only exists between 9 am and 5 pm:

context NineToFive create window SpeedingEvents1Hour.win:time(30 min) as TrafficEvent

You can insert into the named window:

insert into SpeedingEvents1Hour select * from TrafficEvent(speed > 100)

Any on-merge, on-select, on-update and on-delete statements must however declare the same context.

The following is not a valid statement as it does not declare the same context that was used to declare the named window:

// You must declare the same context for on-trigger statements
on TruncateEvent delete from SpeedingEvents1Hour

The following is valid:

context NineToFive on TruncateEvent delete from SpeedingEvents1Hour

4.7. Operations on Specific Context Partitions

Selecting specific context partitions and interrogating context partitions is useful for:

  1. Iterating a specific context partition or a specific set of context partitions. Iterating a statement is described in Section 13.3.5, “Using Iterators”.

  2. Executing an on-demand (fire-and-forget) query against specific context partition(s). On-demand queries are described in Section 13.4.3, “On-Demand Snapshot Query Execution”.

Esper provides APIs to identify, filter and select context partitions for statement iteration and on-demand queries. The APIs are described in detail at Section 13.18, “Context Partition Selection”.

For statement iteration, your application can provide context selector objects to the iterate and safeIterate methods on EPStatement. If your code does not provide context selectors the iteration considers all context partitions. At the time of iteration, the engine obtains the current set of context partitions and iterates each independently. If your statement has an order-by clause, the order-by clause orders within the context partition and does not order across context partitions.
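
For example, the following sketch (using the selector interfaces of Section 13.18, “Context Partition Selection”; the statement variable and the customer key value are hypothetical) iterates only the context partition of the SegmentedByCustomer context that handles one specific customer:

// Selector and event classes are in com.espertech.esper.client and com.espertech.esper.client.context
ContextPartitionSelectorSegmented selector = new ContextPartitionSelectorSegmented() {
  public List<Object[]> getPartitionKeys() {
    // one key array per partition to select; here the partition keyed by customer id "C1001"
    return Collections.<Object[]>singletonList(new Object[] {"C1001"});
  }
};

Iterator<EventBean> it = statement.iterator(selector);   // statement is a started EPStatement
while (it.hasNext()) {
  EventBean row = it.next();
  // process rows of this customer's context partition only
}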

For on-demand queries, your application can provide context selector objects to the executeQuery method on EPRuntime and to the execute method on EPOnDemandPreparedQuery. If your code does not provide context selectors the on-demand query considers all context partitions. At the time of on-demand query execution, the engine obtains the current set of context partitions and queries each independently. If the on-demand query has an order-by clause, the order-by clause orders within the context partition and does not order across context partitions.
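
Along the same lines, and again only as a sketch (the named window MyWindow is hypothetical and the exact overload accepting selectors is documented in Section 13.18, “Context Partition Selection”), an on-demand query may be restricted to the selected context partition(s):

EPOnDemandQueryResult result = epService.getEPRuntime().executeQuery(
    "select count(*) as cnt from MyWindow",
    new ContextPartitionSelector[] {selector});
for (EventBean row : result.getArray()) {
  // rows returned for the selected context partition(s) only
}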

Chapter 5. EPL Reference: Clauses

5.1. EPL Introduction

The Event Processing Language (EPL) is a SQL-like language with SELECT, FROM, WHERE, GROUP BY, HAVING and ORDER BY clauses. Streams replace tables as the source of data with events replacing rows as the basic unit of data. Since events are composed of data, the SQL concepts of correlation through joins, filtering and aggregation through grouping can be effectively leveraged.

The INSERT INTO clause is recast as a means of forwarding events to other streams for further downstream processing. External data accessible through JDBC may be queried and joined with the stream data. Additional clauses such as the PATTERN and OUTPUT clauses are also available to provide the missing SQL language constructs specific to event processing.

The purpose of the UPDATE clause is to update event properties. Update takes place before an event applies to any selecting statements or pattern statements.

EPL statements are used to derive and aggregate information from one or more streams of events, and to join or merge event streams. This section outlines EPL syntax. It also outlines the built-in views, which are the building blocks for deriving and aggregating information from event streams.

EPL statements contain definitions of one or more views. Similar to tables in a SQL statement, views define the data available for querying and filtering. Some views represent windows over a stream of events. Other views derive statistics from event properties, group events or handle unique event property values. Views can be staggered onto each other to build a chain of views. The Esper engine makes sure that views are reused among EPL statements for efficiency.

The built-in set of views is:

  1. Data window views: win:length, win:length_batch, win:time, win:time_batch, win:time_length_batch, win:time_accum, win:ext_timed, ext:sort_window, ext:time_order, std:unique, std:groupwin, std:lastevent, std:firstevent, std:firstunique, win:firstlength, win:firsttime.

  2. Views that derive statistics: std:size, stat:uni, stat:linest, stat:correl, stat:weighted_avg.

EPL provides the concept of named window. Named windows are data windows that can be inserted-into and deleted-from by one or more statements, and that can be queried by one or more statements. Named windows have a global character, being visible and shared across an engine instance beyond a single statement. Use the CREATE WINDOW clause to create named windows. Use the ON MERGE clause to atomically merge events into named window state, the INSERT INTO clause to insert data into a named window, the ON DELETE clause to remove events from a named window, the ON UPDATE clause to update events held by a named window and the ON SELECT clause to perform a query triggered by a pattern or arriving event on a named window. Finally, the name of the named window can occur in a statement's FROM clause to query a named window or include the named window in a join or subquery.

EPL allows execution of on-demand (fire-and-forget, non-continuous, triggered by API) queries against named windows through the runtime API. The query engine automatically indexes named window data for fast access by ON SELECT/UPDATE/INSERT/DELETE without the need to create an index explicitly. For fast on-demand query execution via runtime API use the CREATE INDEX syntax to create an explicit index.

Use CREATE SCHEMA to declare an event type.

Variables can come in handy to parameterize statements and change parameters on-the-fly and in response to events. Variables can be used in an expression anywhere in a statement as well as in the output clause for dynamic control of output rates.

Esper can be extended by plugging-in custom developed views and aggregation functions.

5.2. EPL Syntax

EPL queries are created and stored in the engine, and publish results to listeners as events are received by the engine or timer events occur that match the criteria specified in the query. Events can also be obtained from running EPL queries via the safeIterator and iterator methods that provide a pull-data API.
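
For example, the following sketch creates a statement, attaches a listener for push-based output and pulls the current result via the safe iterator; the statement text and listener body are illustrative:

EPServiceProvider epService = EPServiceProviderManager.getDefaultProvider();
EPStatement stmt = epService.getEPAdministrator().createEPL(
    "select avg(price) as avgPrice from StockTick.win:time(30 sec) where symbol='IBM'");

// Push: the listener receives output events as the engine publishes them
stmt.addListener(new UpdateListener() {
  public void update(EventBean[] newEvents, EventBean[] oldEvents) {
    System.out.println("avgPrice=" + newEvents[0].get("avgPrice"));
  }
});

// Pull: obtain the current result on demand; always close the safe iterator when done
SafeIterator<EventBean> it = stmt.safeIterator();
try {
  while (it.hasNext()) {
    System.out.println(it.next().get("avgPrice"));
  }
} finally {
  it.close();
}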

The select clause in an EPL query specifies the event properties or events to retrieve. The from clause in an EPL query specifies the event stream definitions and stream names to use. The where clause in an EPL query specifies search conditions that specify which event or event combination to search for. For example, the following statement returns the average price for IBM stock ticks in the last 30 seconds.

select avg(price) from StockTick.win:time(30 sec) where symbol='IBM'

EPL queries follow the below syntax. EPL queries can be simple queries or more complex queries. A simple select contains only a select clause and a single stream definition. Complex EPL queries can be built that feature a more elaborate select list utilizing expressions, may join multiple streams, may contain a where clause with search conditions and so on.

[annotations]
[expression_declarations]
[context context_name]
[insert into insert_into_def]
select select_list
from stream_def [as name] [, stream_def [as name]] [,...]
[where search_conditions]
[group by grouping_expression_list]
[having grouping_search_conditions]
[output output_specification]
[order by order_by_expression_list]
[limit num_rows]

5.2.1. Specifying Time Periods

Time-based windows as well as pattern observers and guards take a time period as a parameter. Time periods follow the syntax below.

time-period : [year-part] [month-part] [week-part] [day-part] [hour-part] 
      [minute-part] [seconds-part] [milliseconds-part]

year-part : (number|variable_name) ("years" | "year")
month-part : (number|variable_name) ("months" | "month")
week-part : (number|variable_name) ("weeks" | "week")
day-part : (number|variable_name) ("days" | "day")
hour-part : (number|variable_name) ("hours" | "hour")
minute-part : (number|variable_name) ("minutes" | "minute" | "min")
seconds-part : (number|variable_name) ("seconds" | "second" | "sec")
milliseconds-part : (number|variable_name) ("milliseconds" | "millisecond" | "msec")

Some examples of time periods are:

10 seconds
10 minutes 30 seconds
20 sec 100 msec
1 day 2 hours 20 minutes 15 seconds 110 milliseconds
0.5 minutes
1 year
1 year 1 month

Variable names and substitution parameters '?' for prepared statements are also allowed as part of a time period expression.

A unit in the month part is equivalent to 30 days.

5.2.2. Using Comments

Comments can appear anywhere in the EPL or pattern statement text where whitespace is allowed. Comments can be written in two ways: slash-slash (// ...) comments and slash-star (/* ... */) comments.

Slash-slash comments extend to the end of the line:

// This comment extends to the end of the line.
// Two forward slashes with no whitespace between them begin such comments.

select * from MyEvent  // this is a slash-slash comment

// All of this text together is a valid statement.

Slash-star comments can span multiple lines:

/* This comment is a "slash-star" comment that spans multiple lines.
 * It begins with the slash-star sequence with no space between the '/' and '*' characters.
 * By convention, subsequent lines can begin with a star and are aligned, but this is 
 * not required.
 */		
select * from MyEvent  /* this also works */

Comments styles can also be mixed:

select field1, // first comment
  /* second comment*/  field2
  from MyEvent

5.2.3. Reserved Keywords

Certain words such as select, delete or set are reserved and may not be used as identifiers. Please consult Appendix B, Reserved Keywords for the list of reserved keywords and permitted keywords.

Names of built-in functions and certain auxiliary keywords are permitted as event property names and in the rename syntax of the select clause. For example, count is acceptable.

Consider the example below, which assumes that 'last' is an event property of MyEvent:

// valid
select last, count(*) as count from MyEvent

This example shows an incorrect use of a reserved keyword:

// invalid
select insert from MyEvent

EPL offers an escape syntax for reserved keywords: Event properties as well as event or stream names may be escaped via the backwards apostrophe ` (ASCII 96) character.

The next example queries an event type by name Order (a reserved keyword) that provides a property by name insert (a reserved keyword):

// valid
select `insert` from `Order`

5.2.4. Escaping Strings

You may surround string values by either double-quotes (") or single-quotes ('). When your string constant in an EPL statement itself contains double quotes or single quotes, you must escape the quotes.

Double and single quotes may be escaped by the backslash (\) character or by unicode notation. Unicode 0027 is a single quote (') and 0022 is a double quote (").

The sample EPL below escapes the single quote in the string constant John's, and filters out order events where the name value matches:

select * from OrderEvent(name='John\'s')
// ...equivalent to...
select * from OrderEvent(name='John\u0027s')

The next EPL escapes the string constant Quote "Hello":

select * from OrderEvent(description like "Quote \"Hello\"")
// is equivalent to
select * from OrderEvent(description like "Quote \u0022Hello\u0022")

When building an escape string via the API, escape the backslash, as shown in below code snippet:

epService.getEPAdministrator().createEPL("select * from OrderEvent(name='John\\'s')");
// ... and for double quotes...
epService.getEPAdministrator().createEPL(
  "select * from OrderEvent(description like \"Quote \\\"Hello\\\"\")");

5.2.5. Data Types

EPL honors all Java built-in primitive and boxed types, including java.math.BigInteger and java.math.BigDecimal.

EPL also follows Java standards in terms of widening, performing widening automatically in cases where widening type conversion is allowed without loss of precision, for both boxed and primitive types and including BigInteger and BigDecimal:

  1. byte to short, int, long, float, double, BigInteger or BigDecimal

  2. short to int, long, float, or double, BigInteger or BigDecimal

  3. char to int, long, float, or double, BigInteger or BigDecimal

  4. int to long, float, or double, BigInteger or BigDecimal

  5. long to float or double, BigInteger or BigDecimal

  6. float to double or BigDecimal

  7. double to BigDecimal

In cases where loss of precision is possible because of narrowing requirements, EPL compilation outputs a compilation error.

EPL supports casting via the cast function.

EPL returns double-type values for division regardless of operand type. EPL can also be configured to follow Java rules for integer arithmetic instead as described in Section 14.4.20, “Engine Settings related to Expression Evaluation”.

Division by zero returns positive or negative infinity. Division by zero can be configured to return null instead.
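
For example, the following sketch (assuming hypothetical integer-typed properties volume and tradeCount on a StockTick event) yields a double-typed result even though both operands are integers, unless integer arithmetic is configured:

select volume / tradeCount as avgTradeSize from StockTick.win:length(100)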

5.2.5.1. Data Type of Constants

An EPL constant is a number or a character string that indicates a fixed value. Constants can be used as expressions in many EPL statements, including variable assignment and case-when statements. They can also be used as parameter values for many built-in objects and clauses. Constants are also called literals.

EPL supports the standard SQL constant notation as well as Java data type literals.

The following are types of EPL constants:

Table 5.1. Types of EPL constants

Type    Description    Examples

string
A single character to an unlimited number of characters. Valid delimiters are the single quote (') or double quote (").
select 'volume' as field1,
   "sleep" as field2,
  "\u0041" as unicodeA

boolean
A boolean value.
select true as field1,
  false as field2

integer
An integer value (4 byte).
select 1 as field1,
  -1 as field2,
  1e2 as field3

long
A long value (8 byte). Use the "L" or "l" (lowercase L) suffix.
select 1L as field1,
  1l as field2

double
A double-precision 64-bit IEEE 754 floating point.
select 1.67 as field1,
  167e-2 as field2,
  1.67d as field3

float
A single-precision 32-bit IEEE 754 floating point. Use the "f" suffix.
select 1.2f as field1,
  1.2F as field2

byte
An 8-bit signed two's complement integer.
select 0x10 as field1

EPL does not have a single-byte character data type for its literals. Single character literals are treated as string.

Internal byte representation and boundary values of constants follow the Java standard.

5.2.5.2. BigInteger and BigDecimal

EPL automatically performs widening of numbers to BigInteger and BigDecimal as required, and employs the respective equals, compareTo and arithmetic methods provided by BigInteger and BigDecimal.

To explicitly create BigInteger and BigDecimal constants in EPL, please use the cast syntax : cast(value, BigInteger).
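
For example, the following sketch (assuming an OrderEvent event type) creates BigInteger and BigDecimal constants explicitly:

select cast(10, BigInteger) as qty, cast(10.0, BigDecimal) as price from OrderEvent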

Note that since BigDecimal.valueOf(1.0) is not the same as BigDecimal.valueOf(1) (in terms of equality through equals), care should be taken towards the consistent use of scale.

When using aggregation functions for BigInteger and BigDecimal values, please note these limitations:

  1. The median, stddev and avedev aggregation functions operate on the double value of the object and return a double value.

  2. All other aggregation functions return BigDecimal or BigInteger values (except count).

5.2.6. Annotation

An annotation is an addition made to information in a statement. Esper provides certain built-in annotations for defining a statement name, adding a statement description or for tagging statements such as for managing statements or directing statement output. Other than the built-in annotations, applications can provide their own annotation classes that the EPL compiler can populate.

An annotation is part of the statement text and precedes the EPL select or pattern statement. Annotations are therefore part of the EPL grammar. The syntax for annotations follows the host language (Java, .NET) annotation syntax:

@annotation_name [(annotation_parameters)]

An annotation consists of the annotation name and optional annotation parameters. The annotation_name is the simple class name or fully-qualified class name of the annotation class. The optional annotation_parameters are a list of key-value pairs following the syntax:

@annotation_name (attribute_name = attribute_value, [name=value, ...])

The attribute_name is an identifier that must match the attributes defined by the annotation class. An attribute_value is a constant of any of the primitive types or string, an array, an enumeration value or another (nested) annotation. Null values are not allowed as annotation attribute values. Enumeration values are supported in EPL statements and not supported in statements created via the createPattern method.

Use the getAnnotations method of EPStatement to obtain annotations provided via statement text.
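
For example, the following sketch (the statement text is arbitrary) reads back the annotations of a created statement:

EPStatement stmt = epService.getEPAdministrator().createEPL(
    "@Name('MyStatementName') @Tag(name='author', value='Jim') select * from SecurityFilter");
for (java.lang.annotation.Annotation anno : stmt.getAnnotations()) {
  System.out.println(anno.annotationType().getSimpleName() + ": " + anno);
}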

5.2.6.1. Application-Provided Annotations

Your application may provide its own annotation classes. The engine detects and populates annotation instances for application annotation classes.

To enable the engine to recognize application annotation classes, your annotation name must include the package name (i.e. be fully-qualified) or your engine configuration must import the annotation class or package via the configuration API.

For example, assume that your application defines an annotation in its application code as follows:

public @interface ProcessMonitor {
  String processName();
  boolean isLongRunning() default false;
  int[] subProcessIds();
}

Shown next is an EPL statement text that utilizes the annotation class defined earlier:

@ProcessMonitor(processName='CreditApproval',
  isLongRunning=true, subProcessIds = {1, 2, 3} )
  
select count(*) from ProcessEvent(processId in (1, 2, 3)).win:time(30)

The above example assumes the ProcessMonitor annotation class is imported via configuration XML or API. Here is an example API call to import annotations provided by a package com.mycompany.myannotations:

epService.getEPAdministrator().getConfiguration().addImport("com.mycompany.myannotations.*");

5.2.6.2. Built-In Annotations

The list of built-in EPL annotations is:

Table 5.2. Built-In EPL Annotations

Name | Purpose and Attributes | Example
Name

Provides a statement name. Attributes are:

value : Statement name.

@Name("MyStatementName")
Description

Provides a statement textual description. Attributes are:

value : Statement description.

@Description("A statement description 
is placed here.")
Tag

For tagging a statement with additional information. Attributes are:

name : Tag name.

value : Tag value.

@Tag(name="MyTagName", 
 value="MyTagValue")
Priority

Applicable when an event (or schedule) matches filter criteria for multiple statements: Defines the order of statement processing (requires an engine-level setting).

Attributes are:

value : priority value.

@Priority(10)
Drop

Applicable when an event (or schedule) matches filter criteria for multiple statements, drops the event after processing the statement (requires an engine-level setting).

No attributes.

@Drop
Hint

For providing one or more hints towards how the engine should execute a statement. Attributes are:

value : A comma-separated list of one or more case-insensitive keywords.

@Hint('ITERATE_ONLY')
Hook

Use this annotation to register one or more statement-specific hooks providing a hook type for each individual hook, such as for SQL parameter, column or row conversion.

Attributes are the hook type and the hook itself (usually an imported class name or a fully-qualified class name):

@Hook(type=HookType.SQLCOL, hook='MyDBTypeConvertor')
Audit

Causes the engine to output detailed processing information for a statement.

optional value : A comma-separated list of one or more case-insensitive keywords.

@Audit

The following example statement text specifies some of the built-in annotations in combination:

@Name("RevenuePerCustomer")
@Description("Outputs revenue per customer considering all events encountered so far.")
@Tag(name="grouping", value="customer")

select customerId, sum(revenue) from CustomerRevenueEvent

5.2.6.3. @Name

Use the @Name EPL annotation to specify a statement name within the EPL statement itself, as an alternative to specifying the statement name via API.

If your application is also providing a statement name through the API, the statement name provided through the API overrides the annotation-provided statement name.

Example:

@Name("SecurityFilter1") select * from SecurityFilter(ip="127.0.0.1")

5.2.6.4. @Description

Use the @Description EPL annotation to add a statement textual description.

Example:

@Description('This statement filters localhost.') select * from SecurityFilter(ip="127.0.0.1")

5.2.6.5. @Tag

Use the @Tag EPL annotation to tag statements with name-value pairs, effectively adding a property to the statement. The attributes name and value are of type string.

Example:

@Tag(name='ip_sensitive', value='Y') 
@Tag(name='author', value='Jim')
select * from SecurityFilter(ip="127.0.0.1")

5.2.6.6. @Priority

This annotation only takes effect if the engine-level setting for prioritized execution is set via configuration, as described in Section 14.4.21, “Engine Settings related to Execution of Statements”.

Use the @Priority EPL annotation to tag statements with a priority value. The default priority value is zero (0) for all statements. When an event (or single timer execution) requires processing the event for multiple statements, processing begins with the highest priority statement and ends with the lowest-priority statement.

Example:

@Priority(10) select * from SecurityFilter(ip="127.0.0.1")
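
A minimal sketch of enabling prioritized execution programmatically follows; the exact setter shown is an assumption, please see Section 14.4.21 for the authoritative configuration options (XML or API):

Configuration config = new Configuration();
// assumed setter for the prioritized-execution engine setting
config.getEngineDefaults().getExecution().setPrioritized(true);
EPServiceProvider epService = EPServiceProviderManager.getDefaultProvider(config);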

5.2.6.7. @Drop

This annotation only takes effect if the engine-level setting for prioritized execution is set via configuration, as described in Section 14.4.21, “Engine Settings related to Execution of Statements”.

Use the @Drop EPL annotation to tag statements that preempt all other same or lower-priority statements. When an event (or single timer execution) requires processing the event for multiple statements, processing begins with the highest priority statement and ends with the first statement marked with @Drop, which becomes the last statement to process that event.

Unless a different priority is specified, the statement with the @Drop EPL annotation executes at priority 1. Thereby @Drop alone is an effective means to remove events from a stream.

Example:

@Drop select * from SecurityFilter(ip="127.0.0.1")

5.2.6.8. @Hint

A hint can be used to provide tips for the engine to affect statement execution. Hints change performance or memory-use of a statement but generally do not change its output.

The string value of a Hint annotation contains a keyword or a comma-separated list of multiple keywords. Hint keywords are case-insensitive. A list of hints is available in Section 19.2.23, “Consider using Hints”.

Example:

@Hint('disable_reclaim_group') 
select ipaddress, count(*) from SecurityFilter.win:time(60 sec) group by ipaddress

5.2.6.9. @Hook

A hook is for attaching a callback to a statement.

The type value of a Hook annotation defines the type of hook and the hook value is an imported or fully-qualified class name providing the callback implementation.

5.2.6.10. @Audit

Causes the engine to output detailed information about the statements processing. Described in more detail at Section 15.3.1, “@Audit Annotation”.

5.2.7. Expression Declaration

An EPL statement can contain expression declarations. Expressions that are common to multiple places in an EPL statement can be moved to a named expression declaration and reused within the statement without duplicating the expression itself.

An expression declaration follows the lambda-style expression syntax. This syntax was chosen as it typically allows for a shorter and more concise expression body that can be easier to read than most procedural code.

The syntax for an expression declaration is:

expression expression_name { expression_body }

An expression declaration consists of the expression name and an expression body. The expression_name is any identifier. The expression_body contains optional parameters and the expression. The parameter types and the return type of the expression are determined by the engine and do not need to be specified.

Please note the following limitation: Parameters to a declared expression can only be a stream name. For limitations as to the type of expression that can be used in the expression body please see below.

In the expression body the => lambda operator reads as "goes to". The left side of the lambda operator specifies the input parameters (if any) and the right side holds the expression. The lambda expression x => x * x is read "x goes to x times x".

In the expression body, if your expression takes no parameters, you may simply specify the expression and do not need the => lambda operator.

If your expression takes one parameter, specify the input parameter name followed by the => lambda operator followed by the expression. The synopsis for use with a single input parameter is:

expression_body:   input_param_name => expression 

If your expression takes two or more parameters, specify the input parameter names in parentheses followed by the => lambda operator followed by the expression. The synopsis for use with multiple input parameters is:

expression_body:   (input_param [,input_param [,...]]) => expression 

The following example declares an expression that returns two times PI (ratio of the circumference of a circle to its diameter) and demonstrates its use in a select-clause:

expression twoPI { Math.PI * 2 }
select twoPI() from SampleEvent

The next example declares an expression that accepts one parameter: a MarketData event. The expression computes a new "mid" price based on the buy and sell price:

expression midPrice { x => (x.buy + x.sell) / 2 } 
select midPrice(md) from MarketDataEvent as md

A further example that demonstrates two parameters is listed next. The example joins two streams and uses the price value from MarketDataEvent and the sentiment value of NewsEvent to compute a weighted sentiment:

expression weightedSentiment { (x, y) => x.price * y.sentiment } 
select weightedSentiment(md, news) 
from MarketDataEvent.std:lastevent() as md, NewsEvent.std:lastevent() news

Any expression can be used in the expression body including aggregations, variables, subqueries or further declared expressions. Sub-queries, when used without in or exists, must be placed within parentheses.

An example subquery within a declared expression is shown next:

expression newsSubq(md) { 
    (select sentiment from NewsEvent.std:unique(symbol) where symbol = md.symbol) 
} 
select newsSubq(mdstream)
from MarketDataEvent mdstream

When using declared expressions please note these limitations:

  1. Parameters to a declared expression can currently only be a stream name.

The following scope rules apply for declared expressions:

  1. The scope of the expression body of a declared expression only includes the parameters explicitly listed.

5.2.8. Script Declaration

Esper allows the use of scripting languages within EPL. Any scripting language that supports JSR 223, as well as the MVEL scripting language, can be used in EPL.

Please see Chapter 17, Script Support for more information.

5.2.9. Referring to a Context

You may refer to a context in the EPL text by specifying the context keyword followed by a context name. Contexts are described in more detail in Chapter 4, Context and Context Partitions.

The effect of referring to a context is that your statement operates according to the context dimensional information as declared for the context.

The synopsis is:

... context context_name ...

You may refer to a context in all statements except for the following types of statements:

  1. create schema for declaring event types.

  2. create variable for declaring a variable.

  3. create index for creating an index on a named window.

  4. update istream for updating insert stream events.

5.3. Choosing Event Properties And Events: the Select Clause

The select clause is required in all EPL statements. The select clause can be used to select all properties via the wildcard *, or to specify a list of event properties and expressions. The select clause defines the event type (event property names and types) of the resulting events published by the statement, or pulled from the statement via the iterator methods.

The select clause also offers optional istream, irstream and rstream keywords to control whether insert stream, remove stream, or insert and remove stream events are posted to UpdateListener instances and observers to a statement. By default, the engine provides only the insert stream to listeners and observers. See Section 14.4.16, “Engine Settings related to Stream Selection” on how to change the default.

The syntax for the select clause is summarized below.

select [istream | irstream | rstream] [distinct] * | expression_list ... 

The istream keyword is the default, and indicates that the engine only delivers insert stream events to listeners and observers. The irstream keyword indicates that the engine delivers both insert and remove stream. Finally, the rstream keyword tells the engine to deliver only the remove stream.

The distinct keyword outputs only unique rows depending on the column list you have specified after it. It must occur after the select and after the optional stream keywords, as described in more detail below.

5.3.1. Choosing all event properties: select *

The syntax for selecting all event properties in a stream is:

select * from stream_def

The following statement selects StockTick events for the last 30 seconds of IBM stock ticks.

select * from StockTick(symbol='IBM').win:time(30 sec)

You may well be asking: Why does the statement specify a time window here? First, the statement is meant to demonstrate the use of the * wildcard. When the engine pushes statement results to your listener, and as the statement does not select remove stream events via the rstream keyword, the listener receives only new events and the time window could be left off. By adding the time window the pull API (iterator API or JDBC driver) returns the last 30 seconds of events.
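
A minimal sketch of the pull API follows; it iterates the events currently retained by the 30-second time window (event type and property names are the same as in the example above and are assumed to exist):

EPStatement stmt = epService.getEPAdministrator().createEPL(
  "select * from StockTick(symbol='IBM').win:time(30 sec)");
for (java.util.Iterator<EventBean> it = stmt.iterator(); it.hasNext();) {
  EventBean tick = it.next();
  System.out.println(tick.get("price"));   // assumes StockTick has a price property
}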

The * wildcard and expressions can also be combined in a select clause. The combination selects all event properties and in addition the computed values as specified by any additional expressions that are part of the select clause. Here is an example that selects all properties of stock tick events plus a computed product of price and volume that the statement names 'pricevolume':

select *, price * volume as pricevolume from StockTick

When using wildcard (*), Esper does not actually copy your event properties out of your event or events. It simply wraps your native type in an EventBean interface. Your application has access to the underlying event object through the getUnderlying method and has access to the property values through the get method.
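
The sketch below shows both access paths from within a listener; it assumes the statement above and that the event type exposes a price property:

stmt.addListener(new UpdateListener() {
  public void update(EventBean[] newEvents, EventBean[] oldEvents) {
    EventBean event = newEvents[0];
    Object underlying = event.getUnderlying();   // the original event object, not a copy
    Object price = event.get("price");           // property access by name
  }
});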

In a join statement, using the select * syntax selects one event property per stream to hold the event for that stream. The property name is the stream name in the from clause.

5.3.2. Choosing specific event properties

To choose the particular event properties to return:

select event_property [, event_property] [, ...] from stream_def

The following statement simply selects the symbol and price properties of stock ticks, and the total volume for stock tick events in a 60-second time window.

select symbol, price, sum(volume) from StockTick(symbol='IBM').win:time(60 sec)

The following statement declares a further view onto the event stream of stock ticks: the univariate statistics view (stat:uni). The statement selects the properties that this view derives from the stream, for the last 100 events of IBM stock ticks in the length window.

select datapoints, total, average, variance, stddev, stddevpa
from StockTick(symbol='IBM').win:length(100).stat:uni(volume)

5.3.3. Expressions

The select clause can contain one or more expressions.

select expression [, expression] [, ...] from stream_def

The following statement selects the volume multiplied by price for a time batch of the last 30 seconds of stock tick events.

select volume * price from StockTick.win:time_batch(30 sec)

5.3.4. Renaming event properties

Event properties and expressions can be renamed using the syntax below.

select [event_property | expression] as identifier [, ...]

The following statement selects volume multiplied by price and specifies the name volPrice for the resulting column.

select volume * price as volPrice from StockTick

Identifiers cannot contain the "." (dot) character, i.e. "vol.price" is not a valid identifier for the rename syntax.

5.3.5. Choosing event properties and events in a join

If your statement is joining multiple streams, you may specify property names that are unique among the joined streams, or use wildcard (*) as explained earlier.

In case the property name in your select or other clauses is not unique considering all joined streams, you will need to use the name of the stream as a prefix to the property.

This example is a join between the two streams StockTick and News, respectively named as 'tick' and 'news'. The example selects from the StockTick event the symbol value using the 'tick' stream name as a prefix:

select tick.symbol from StockTick.win:time(10) as tick, News.win:time(10) as news
where news.symbol = tick.symbol

Use the wildcard (*) selector in a join to generate a property for each stream, with the property value being the event itself. The output events of the statement below have two properties: the 'tick' property holds the StockTick event and the 'news' property holds the News event:

select * from StockTick.win:time(10) as tick, News.win:time(10) as news
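
A sketch of reading the per-stream properties from a listener is shown below; the property names follow the stream names 'tick' and 'news' assigned in the from clause, and the statement variable stmt is assumed:

stmt.addListener(new UpdateListener() {
  public void update(EventBean[] newEvents, EventBean[] oldEvents) {
    EventBean joined = newEvents[0];
    Object tickEvent = joined.get("tick");   // the StockTick event for this row
    Object newsEvent = joined.get("news");   // the News event for this row
  }
});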

The following syntax can also be used to specify what stream's properties to select:

select stream_name.* [as name] from ...

The selection of tick.* selects the StockTick stream events only:

select tick.* from StockTick.win:time(10) as tick, News.win:time(10) as news
where tick.symbol = news.symbol

The next example uses the as keyword to name each stream's joined events. This instructs the engine to create a property for each named event:

select tick.* as stocktick, news.* as news 
from StockTick.win:time(10) as tick, News.win:time(10) as news
where tick.symbol = news.symbol

The output events of the above example have two properties 'stocktick' and 'news' that are the StockTick and News events.

The stream name itself, as further described in Section 5.4.5, “Using the Stream Name”, may be used within expressions or alone.

This example passes events to a user-defined function named compute and also shows insert-into to populate an event stream of combined events:

insert into TickNewStream select tick, news, MyLib.compute(news, tick) as result
from StockTick.win:time(10) as tick, News.win:time(10) as news
where tick.symbol = news.symbol
// second statement that uses the TickNewStream stream
select tick.price, news.text, result from TickNewStream

In summary, the stream_name.* stream wildcard syntax can be used to select a stream as the underlying event or as a property, but cannot appear within an expression. The stream_name syntax (without wildcard) always selects a property (and not the underlying event) and can occur anywhere within an expression.

5.3.6. Choosing event properties and events from a pattern

If your statement employs pattern expressions, then your pattern expression tags events with a tag name. Each tag name becomes available for use as a property in the select clause and all other clauses.

For example, here is a very simple pattern that matches on every StockTick event received within 10 seconds after start of the statement. The sample selects the symbol and price properties of the matching events:

select tick.symbol as symbol, tick.price as price
from pattern[every tick=StockTick where timer:within(10 sec)]

The use of the wildcard selector, as shown in the next statement, creates a property for each tagged event in the output. The next statement outputs events that hold a single 'tick' property whose value is the event itself:

select * from pattern[every tick=StockTick where timer:within(10 sec)]

You may also select the matching event itself using the tick.* syntax. The engine outputs the StockTick event itself to listeners:

select tick.* from pattern[every tick=StockTick where timer:within(10 sec)]

A tag name as specified in a pattern is a valid expression itself. This example uses the insert into clause to make available the events matched by a pattern to further statements:

// make a new stream of ticks and news available
insert into StockTickAndNews 
select tick, news from pattern [every tick=StockTick -> news=News(symbol=tick.symbol)]
// second statement to select from the stream of ticks and news
select tick.symbol, tick.price, news.text from StockTickAndNews

5.3.7. Selecting insert and remove stream events

The optional istream, irstream and rstream keywords in the select clause control the event streams posted to listeners and observers to a statement.

If neither keyword is specified, and in the default engine configuration, the engine posts only insert stream events via the newEvents parameter to the update method of UpdateListener instances listening to the statement. The engine does not post remove stream events, by default.

The insert stream consists of the events entering the respective window(s) or stream(s) or aggregations, while the remove stream consists of the events leaving the respective window(s) or the changed aggregation result. See Chapter 3, Processing Model for more information on insert and remove streams.

The engine posts remove stream events to the oldEvents parameter of the update method only if the irstream keyword occurs in the select clause. This behavior can be changed via engine-wide configuration as described in Section 14.4.16, “Engine Settings related to Stream Selection”.

By specifying the istream keyword you can instruct the engine to only post insert stream events via the newEvents parameter to the update method on listeners. The engine will then not post any remove stream events, and the oldEvents parameter is always a null value.

By specifying the irstream keyword you can instruct the engine to post both insert stream and remove stream events.

By specifying the rstream keyword you can instruct the engine to only post remove stream events via the newEvents parameter to the update method on listeners. The engine will then not post any insert stream events, and the oldEvents parameter is also always a null value.

The following statement selects only the events that are leaving the 30 second time window.

select rstream * from StockTick.win:time(30 sec)

The istream and rstream keywords in the select clause are matched by same-name keywords available in the insert into clause. While the keywords in the select clause control the event stream posted to listeners to the statement, the same keywords in the insert into clause specify the event stream that the engine makes available to other statements.
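
The following sketch contrasts the listener view of the streams; with irstream both parameters are populated, while with the default istream the oldEvents parameter remains null (event type name as in the examples above):

EPStatement stmt = epService.getEPAdministrator().createEPL(
  "select irstream * from StockTick.win:time(30 sec)");
stmt.addListener(new UpdateListener() {
  public void update(EventBean[] newEvents, EventBean[] oldEvents) {
    // newEvents: ticks entering the time window (insert stream)
    // oldEvents: ticks leaving the time window after 30 seconds (remove stream)
  }
});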

5.3.8. Qualifying property names and stream names

Property or column names can optionally be qualified by a stream name and the provider URI. The syntax is:

[[provider_URI.]stream_name.]property_name

The provider_URI is the URI supplied to the EPServiceProviderManager class, or the string default for the default provider.

This example assumes the provider is the default provider:

select MyEvent.myProperty from MyEvent
// ... equivalent to ...
select default.MyEvent.myProperty from MyEvent

Stream names can also be qualified by the provider URI. The syntax is:

[provider_URI.]stream_name

The next example assumes a provider URI by name of Processor:

select Processor.MyEvent.myProperty from Processor.MyEvent
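
A minimal sketch of obtaining the engine instance for the Processor URI and creating the statement follows; the MyEvent type is assumed to be configured for that provider:

EPServiceProvider processor = EPServiceProviderManager.getProvider("Processor");
processor.getEPAdministrator().createEPL(
  "select Processor.MyEvent.myProperty from Processor.MyEvent");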

5.3.9. Select Distinct

The optional distinct keyword removes duplicate output events from output. The keyword must occur after the select keyword and after the optional irstream keyword.

The distinct keyword in your select instructs the engine to consolidate, at time of output, the output event(s) and remove output events with identical property values. Duplicate removal only takes place when two or more events are output together at any one time, therefore distinct is typically used with a batch data window, output rate limiting, on-demand queries, on-select or iterator pull API.

If two or more output event objects have the same property values for all properties of the event, distinct removes all but one of the duplicated events before outputting events to listeners. Indexed, nested and mapped properties are considered in the comparison, if present in the output event.

The next example outputs sensor ids of temperature sensor events, but only every 10 seconds and only unique sensor id values during the 10 seconds:

select distinct sensorId from TemperatureSensorEvent output every 10 seconds

Use distinct with wildcard (*) to remove duplicate output events considering all properties of an event.

This example statement outputs all distinct events either when 100 events arrive or when 10 seconds have passed, whichever occurs first:

select distinct * from TemperatureSensorEvent.win:time_length_batch(10, 100)

When selecting nested, indexed, mapped or dynamic properties in a select clause with distinct, it is relevant to know that the comparison uses hash code and the Java equals semantics.

5.3.10. Transposing an Expression Result to a Stream

For transposing an instance of a Java object returned by an expression to a stream use the transpose function as described in Section 9.4, “Select-Clause transpose Function”.

5.4. Specifying Event Streams: the From Clause

The from clause is required in all EPL statements. It specifies one or more event streams or named windows. Each event stream or named window can optionally be given a name by means of the as keyword.

from stream_def [as name] [unidirectional] [retain-union | retain-intersection] 
    [, stream_def [as stream_name]] [, ...]

The event stream definition stream_def as shown in the syntax above can consist of either a filter-based event stream definition or a pattern-based event stream definition.

For joins and outer joins, specify two or more event streams. Joins between pattern-based and filter-based event streams are also supported. Joins and the unidirectional keyword are described in more detail in Section 5.12, “Joining Event Streams”.

Esper supports joins against relational databases for access to historical or reference data as explained in Section 5.13, “Accessing Relational Data via SQL”. Esper can also join results returned by an arbitrary method invocation, as discussed in Section 5.14, “Accessing Non-Relational Data via Method Invocation”.

The stream_name is an optional identifier assigned to the stream. The stream name can itself occur in any expression and provides access to the event itself from the named stream. Also, a stream name may be combined with a method name to invoke instance methods on events of that stream.

For all streams with the exception of historical sources your query may employ data window views as outlined below. The retain-intersection (the default) and retain-union keywords build a union or intersection of two or more data windows as described in Section 5.4.4, “Multiple Data Window Views”.

5.4.1. Filter-based Event Streams

The stream_def syntax for a filter-based event stream is as below:

event_stream_name [(filter_criteria)] [contained_selection] [.view_spec] [.view_spec] [...]

The event_stream_name is either the name of an event type or name of an event stream populated by an insert into statement or the name of a named window.

The filter_criteria is optional and consists of a list of expressions filtering the events of the event stream, within parentheses after the event stream name.

The contained_selection is optional and is for use with coarse-grained events that have properties that are themselves one or more events, see Section 5.19, “Contained-Event Selection” for the synopsis and examples.

The view_spec are optional view specifications, which are combinable definitions for retaining events and for deriving information from events.

The following EPL statement shows event type, filter criteria and views combined in one statement. It selects all event properties for the last 100 events of IBM stock ticks for volume. In the example, the event type is the fully qualified Java class name org.esper.example.StockTick. The expression filters for events where the property symbol has a value of "IBM". The optional view specifications for deriving data from the StockTick events are a length window and a view for computing statistics on volume. The name for the event stream is "volumeStats".

select * from 
  org.esper.example.StockTick(symbol='IBM').win:length(100).stat:uni(volume) as volumeStats

Esper filters out events in an event stream as defined by filter criteria before it sends events to subsequent views. Thus, compared to search conditions in a where clause, filter criteria remove unneeded events early. In the above example, events with a symbol other than IBM do not enter the length window.

5.4.1.1. Specifying an Event Type

The simplest form of filter is a filter for events of a given type without any conditions on the event property values. This filter matches any event of that type regardless of the event's properties. The example below is such a filter.

select * from com.mypackage.myevents.RfidEvent

Instead of the fully-qualified Java class name any other event name can be mapped via Configuration to a Java class, making the resulting statement more readable:

select * from RfidEvent
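
A minimal sketch of such a mapping via the configuration API follows; the event class is the one from the earlier example:

Configuration config = new Configuration();
config.addEventType("RfidEvent", com.mypackage.myevents.RfidEvent.class);
EPServiceProvider epService = EPServiceProviderManager.getDefaultProvider(config);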

Interfaces and superclasses are also supported as event types. In the below example IRfidReadable is an interface class.

select * from org.myorg.rfid.IRfidReadable

5.4.1.2. Specifying Filter Criteria

The filtering criteria to filter for events with certain event property values are placed within parentheses after the event type name:

select * from RfidEvent(category="Perishable")

All expressions can be used in filters, including static methods that return a boolean value:

select * from com.mycompany.RfidEvent(MyRFIDLib.isInRange(x, y) or (x < 0 and y < 0))

Filter expressions can be separated via a single comma ','. The comma represents a logical AND between filter expressions:

select * from RfidEvent(zone=1, category=10)
...is equivalent to...
select * from RfidEvent(zone=1 and category=10)

The following operators are highly optimized through indexing and are the preferred means of filtering in high-volume event streams:

  • equals =

  • not equals !=

  • comparison operators < , > , >=, <=

  • ranges

    • use the between keyword for a closed range where both endpoints are included

    • use the in keyword and round () or square brackets [] to control how endpoints are included

    • for inverted ranges use the not keyword and the between or in keywords

  • list-of-values checks using the in keyword or the not in keywords followed by a comma-separated list of values

At compile time as well as at run time, the engine scans new filter expressions for sub-expressions that can be indexed. Indexing filter values to match event properties of incoming events enables the engine to match incoming events faster. The above list of operators represents the set of operators that the engine can best convert into indexes. The use of comma or logical and in filter expressions does not impact optimizations by the engine.

5.4.1.3. Filtering Ranges

Ranges come in the following 4 varieties. The use of round ( ) or square [ ] brackets dictates whether an endpoint is included or excluded. The low point and the high point of the range are separated by the colon : character.

  • Open ranges that contain neither endpoint (low:high)

  • Closed ranges that contain both endpoints [low:high]. The equivalent 'between' keyword also defines a closed range.

  • Half-open ranges that contain the low endpoint but not the high endpoint [low:high)

  • Half-closed ranges that contain the high endpoint but not the low endpoint (low:high]

The next statement shows a filter specifying a range for x and y values of RFID events. The range includes both endpoints therefore uses [] hard brackets.

mypackage.RfidEvent(x in [100:200], y in [0:100])

The between keyword is equivalent for closed ranges. The same filter using the between keyword is:

mypackage.RfidEvent(x between 100 and 200, y between 0 and 100)

The not keyword can be used to determine if a value falls outside a given range:

mypackage.RfidEvent(x not in [0:100])

The equivalent statement using the between keyword is:

mypackage.RfidEvent(x not between 0 and 100)

5.4.1.4. Filtering Sets of Values

The in keyword for filter criteria determines if a given value matches any value in a list of values.

In this example we are interested in RFID events where the category matches any of the given values:

mypackage.RfidEvent(category in ('Perishable', 'Container'))

By using the not in keywords we can filter events with a property value that does not match any of the values in a list of values:

mypackage.RfidEvent(category not in ('Household', 'Electrical'))

5.4.1.5. Filter Limitations

The following restrictions apply to filter criteria:

  • Range and comparison operators require the event property to be of a numeric or string type.

  • Aggregation functions are not allowed within filter expressions.

  • The prev previous event function and the prior prior event function cannot be used in filter expressions.

5.4.2. Pattern-based Event Streams

Event pattern expressions can also be used to specify one or more event streams in an EPL statement. For pattern-based event streams, the event stream definition stream_def consists of the keyword pattern and a pattern expression in brackets []. The syntax for an event stream definition using a pattern expression is below. As in filter-based event streams, an optional list of views that derive data from the stream can be supplied.

pattern [pattern_expression] [.view_spec] [.view_spec] [...]

The next statement specifies an event stream that consists of both stock tick events and trade events. The example tags stock tick events with the name "tick" and trade events with the name "trade".

select * from pattern [every tick=StockTickEvent or every trade=TradeEvent]

This statement generates an event every time the engine receives either one of the event types. The generated events resemble a map with "tick" and "trade" keys. For stock tick events, the "tick" key value is the underlying stock tick event, and the "trade" key value is a null value. For trade events, the "trade" key value is the underlying trade event, and the "tick" key value is a null value.

Let's further refine this statement by adding a view that gives us the last 30 seconds of either stock tick or trade events. Let's also select prices and a price total.

select tick.price as tickPrice, trade.price as tradePrice, 
       sum(tick.price) + sum(trade.price) as total
  from pattern [every tick=StockTickEvent or every trade=TradeEvent].win:time(30 sec)

Note that in the statement above tickPrice and tradePrice can each be null values depending on the event processed. Therefore, an aggregation function such as sum(tick.price + trade.price) would always return null values, as either of the two price properties is always a null value for any event matching the pattern. Use the coalesce function to handle null values, for example: sum(coalesce(tick.price, 0) + coalesce(trade.price, 0)).

5.4.3. Specifying Views

Views are used to specify an expiry policy for events (data window views) and also to derive data. Views can be staggered onto each other. See Chapter 12, EPL Reference: Views for the available views; that chapter also outlines the two different types of views: Data Window views and Derived-Value views.

Views can optionally take one or more parameters. These parameters are expressions themselves that may consist of any combination of variables, arithmetic, user-defined function or substitution parameters for prepared statements, for example.

The example statement below outputs a count per expressway for car location events (contains information about the location of a car on a highway) of the last 60 seconds:

select expressway, count(*) from CarLocEvent.win:time(60) 
group by expressway

The next example serves to show staggering of views. It uses the std:groupwin view to create a separate length window per car id:

select carId, expressway, direction, segment, count(*) 
from CarLocEvent.std:groupwin(carId).win:length(4) 
group by carId, expressway, direction, segment

The first view std:groupwin(carId) groups car location events by car id. The second view win:length(4) keeps a length window of the 4 last events, with one separate length window for each car id. The example reports the number of events per car id and per expressway, direction and segment considering the last 4 events for each car id only.

Note that the group by syntax is generally preferable over std:groupwin for grouping information as it is SQL-compliant, easier to read and does not create a separate data window per group. The std:groupwin in above example creates a separate data window (length window in the example) per group, demonstrating staggering views.

When views are staggered onto each other as a chain of views, then the insert and remove stream received by each view is the insert and remove stream made available by the view (or stream) earlier in the chain.

The special keep-all view keeps all events: It does not provide a remove stream, i.e. events are not removed from the keep-all view unless by means of the on-delete syntax or by revision events.

5.4.4. Multiple Data Window Views

Data window views provide an expiry policy that indicates when to remove events from the data window, with the exception of the keep-all data window which has no expiry policy and the std:groupwin grouped-window view for allocating a new data window per group.

EPL allows the freedom to use multiple data window views onto a stream and thus combine expiry policies. Combining data windows into an intersection (the default) or a union can achieve a useful strategy for retaining events and expiring events that are no longer of interest. Named windows and the on-delete syntax provide an additional degree of freedom.

In order to combine two or more data window views no keyword is required. The retain-intersection keyword is the default and the retain-union keyword may instead be provided for a stream.

The concept of union and intersection come from Set mathematics. In the language of Set mathematics, two sets A and B can be "added" together: The intersection of A and B is the set of all things which are members of both A and B, i.e. the members two sets have "in common". The union of A and B is the set of all things which are members of either A or B.

Use the retain-intersection (the default) keyword to retain an intersection of all events as defined by two or more data windows. All events removed from any of the intersected data windows are entered into the remove stream. This is the default behavior if neither retain keyword is specified.

Use the retain-union keyword to retain a union of all events as defined by two or more data windows. Only events removed from all data windows are entered into the remove stream.

The next example statement totals the price of OrderEvent events in a union of the last 30 seconds and unique by product name:

select sum(price) from OrderEvent.win:time(30 sec).std:unique(productName) retain-union

In the above statement, all OrderEvent events that are either less than 30 seconds old or that are the last event for the product name are considered.

Here is an example statement that totals the price of OrderEvent events in an intersection of the last 30 seconds and unique by product name:

select sum(price) from OrderEvent.win:time(30 sec).std:unique(productName) retain-intersection

In the above statement, only those OrderEvent events that are both less than 30 seconds old and are the last event for the product name are considered. The number of events that the engine retains is the number of unique events per product name in the last 30 seconds (and not the number of events in the last 30 seconds).

For an intersection the engine retains the minimal number of events representing that intersection. Thus when combining a time window of 30 seconds and a last-event window, for example, the number of events retained at any time is zero or one event (and not 30 seconds of events).

When combining a batch window into an intersection with another data window the combined data window gains batching semantics: Only when the batch criteria is fulfilled does the engine provide the batch of intersecting insert stream events. Multiple batch data windows may not be combined into an intersection.

The below table provides additional examples for data window intersections:

Table 5.3. Intersection Data Window Examples

Example | Description
win:time(30).std:firstunique(keys) | Retains 30 seconds of events unique per keys value (first event per value).
win:firstlength(3).std:firstunique(keys) | Retains the first 3 events that are also unique per keys value.
win:time_batch(N seconds).std:unique(keys) | Posts a batch every N seconds that contains the last of each unique event per keys value.
win:time_batch(N seconds).std:firstunique(keys) | Posts a batch every N seconds that contains the first of each unique event per keys value.
win:length_batch(N).std:unique(keys) | Posts a batch of unique events (last event per value) when N unique events per keys value are encountered.
win:length_batch(N).std:firstunique(keys) | Posts a batch of unique events (first event per value) when N unique events per keys value are encountered.

For advanced users and for backward compatibility, it is possible to configure Esper to allow multiple data window views without either of the retain keywords, as described in Section 14.4.11.2, “Configuring Multi-Expiry Policy Defaults”.

5.4.5. Using the Stream Name

Your from clause may assign a name to each stream. This assigned stream name can serve any of the following purposes.

First, the stream name can be used to disambiguate property names. The stream_name.property_name syntax uniquely identifies which property to select if property names overlap between streams. Here is an example:

select prod.productId, ord.productId from ProductEvent as prod, OrderEvent as ord

Second, the stream name can be used with a wildcard (*) character to select events in a join, or assign new names to the streams in a join:

// Select ProductEvent only
select prod.* from ProductEvent as prod, OrderEvent
// Assign column names 'product' and 'order' to each event
select prod.* as product, ord.* as order from ProductEvent as prod, OrderEvent as ord

Further, the stream name by itself can occur in any expression: The engine passes the event itself to that expression. For example, the engine passes the ProductEvent and the OrderEvent to the user-defined function 'checkOrder':

select prod.productId, MyFunc.checkOrder(prod, ord) 
from ProductEvent as prod, OrderEvent as ord

Last, you may invoke an instance method on each event of a stream, and pass parameters to the instance method as well. Instance method calls are allowed anywhere in an expression.

The next statement demonstrates this capability by invoking a method 'computeTotal' on OrderEvent events and a method 'getMultiplier' on ProductEvent events:

select ord.computeTotal(prod.getMultiplier()) from ProductEvent as prod, OrderEvent as ord

Instance methods may also be chained: Your EPL may invoke a method on the result returned by a method invocation.

Assume that your product event exposes a method getZone which returns a zone object. Assume that the Zone class declares a method checkZone. This example statement invokes a method chain:

select prod.getZone().checkZone("zone 1") from ProductEvent as prod

5.5. Specifying Search Conditions: the Where Clause

The where clause is an optional clause in EPL statements. Via the where clause event streams can be joined and events can be filtered.

Comparison operators =, < , > , >=, <=, !=, <>, is null, is not null and logical combinations via and and or are supported in the where clause. The where clause can also introduce join conditions as outlined in Section 5.12, “Joining Event Streams”. where clauses can also contain expressions. Some examples are listed below.

...where fraud.severity = 5 and amount > 500
...where (orderItem.orderId is null) or (orderItem.class != 10)		 
...where (orderItem.orderId = null) or (orderItem.class <> 10)		 
...where itemCount / packageCount > 10		 

5.6. Aggregates and grouping: the Group-by Clause and the Having Clause

5.6.1. Using aggregate functions

The aggregate functions are sum, avg, count, max, min, median, stddev, avedev. You can use aggregate functions to calculate and summarize data from event properties. For example, to find out the total price for all stock tick events in the last 30 seconds, type:

select sum(price) from StockTickEvent.win:time(30 sec)

Here is the syntax for aggregate functions:

aggregate_function( [all | distinct] expression)

You can apply aggregate functions to all events in an event stream window or other view, or to one or more groups of events. From each set of events to which an aggregate function is applied, Esper generates a single value.

The expression is usually an event property name. However, it can also be a constant, function, or any combination of event property names, constants, and functions connected by arithmetic operators.

For example, to find out the average price for all stock tick events in the last 30 seconds if the price was doubled:

select avg(price * 2) from StockTickEvent.win:time(30 seconds)

You can use the optional keyword distinct with all aggregate functions to eliminate duplicate values before the aggregate function is applied. The optional keyword all, which performs the operation on all events, is the default.

You can use aggregation functions in a select clause and in a having clause. You cannot use aggregate functions in a where clause, but you can use the where clause to restrict the events to which the aggregate is applied. The next query computes the average and sum of the price of stock tick events for the symbol IBM only, for the last 10 stock tick events regardless of their symbol.

select 'IBM stats' as title, avg(price) as avgPrice, sum(price) as sumPrice
from StockTickEvent.win:length(10)
where symbol='IBM'

In the above example the length window of 10 elements is not affected by the where clause, i.e. all events enter and leave the length window regardless of their symbol. If we only care about the last 10 IBM events, we need to add filter criteria as below.

select 'IBM stats' as title, avg(price) as avgPrice, sum(price) as sumPrice
from StockTickEvent(symbol='IBM').win:length(10)
where symbol='IBM'

You can use aggregate functions with any type of event property or expression, with the following exceptions:

  1. You can use sum, avg, median, stddev, avedev with numeric event properties only

Esper ignores any null values returned by the event property or expression on which the aggregate function is operating, except for the count(*) function, which counts null values as well. All aggregate functions return null if the data set contains no events, or if all events in the data set contain only null values for the aggregated expression.

5.6.2. Organizing statement results into groups: the Group-by clause

The group by clause is optional in all EPL statements. The group by clause divides the output of an EPL statement into groups. You can group by one or more event property names, or by the result of computed expressions. When used with aggregate functions, group by retrieves the calculations in each subgroup. You can use group by without aggregate functions, but generally that can produce confusing results.

For example, the below statement returns the total price per symbol for all stock tick events in the last 30 seconds:

select symbol, sum(price) from StockTickEvent.win:time(30 sec) group by symbol

The syntax of the group by clause is:

group by aggregate_free_expression [, aggregate_free_expression] [, ...]

Esper places the following restrictions on expressions in the group by clause:

  1. Expressions in the group by cannot contain aggregate functions

  2. Event properties that are used within aggregate functions in the select clause cannot also be used in a group by expression

  3. When grouping an unbound stream, i.e. no data window is specified onto the stream providing groups, or when using output rate limiting with the ALL keyword, you should ensure your group-by expression does not return an unlimited number of values. If, for example, your group-by expression is a fine-grained timestamp, group state that accumulates for an unlimited number of groups potentially reduces available memory significantly. Use a @Hint as described below to instruct the engine when to discard group state.

You can list more than one expression in the group by clause to nest groups. Once the sets are established with group by the aggregation functions are applied. This statement posts the median volume for all stock tick events in the last 30 seconds per symbol and tick data feed. Esper posts one event for each group to statement listeners:

select symbol, tickDataFeed, median(volume) 
from StockTickEvent.win:time(30 sec) 
group by symbol, tickDataFeed

In the statement above the event properties in the select list (symbol, tickDataFeed) are also listed in the group by clause. The statement thus follows the SQL standard which prescribes that non-aggregated event properties in the select list must match the group by columns.

Esper also supports statements in which one or more event properties in the select list are not listed in the group by clause. The statement below demonstrates this case. It calculates the standard deviation for the last 30 seconds of stock ticks aggregating by symbol and posting for each event the symbol, tickDataFeed and the standard deviation on price.

select symbol, tickDataFeed, stddev(price) from StockTickEvent.win:time(30 sec) group by symbol

The above example still aggregates the price event property based on the symbol, but produces one event per incoming event, not one event per group.

Additionally, Esper supports statements in which one or more event properties in the group by clause are not listed in the select list. This is an example that calculates the mean deviation per symbol and tickDataFeed and posts one event per group with symbol and mean deviation of price in the generated events. Since tickDataFeed is not in the posted results, this can potentially be confusing.

select symbol, avedev(price) 
from StockTickEvent.win:time(30 sec) 
group by symbol, tickDataFeed

Expressions are also allowed in the group by list:

select symbol * price, count(*) from StockTickEvent.win:time(30 sec) group by symbol * price

If the group by expression resulted in a null value, the null value becomes its own group. All null values are aggregated into the same group. If you are using the count(expression) aggregate function which does not count null values, the count returns zero if only null values are encountered.

You can use a where clause in a statement with group by. Events that do not satisfy the conditions in the where clause are eliminated before any grouping is done. For example, the statement below posts the number of stock ticks in the last 30 seconds with a volume larger than 100, posting one event per group (symbol).

select symbol, count(*) from StockTickEvent.win:time(30 sec) where volume > 100 group by symbol

5.6.2.1. Hints Pertaining to Group-By

The Esper engine reclaims aggregation state aggressively when it determines that a group has no data points, based on the data in the data windows. When your application data creates a large number of groups with a small or zero number of data points then performance may suffer as state is reclaimed and created anew. Esper provides the @Hint('disable_reclaim_group') hint that you can specify as part of an EPL statement text to avoid group reclaim.

When aggregating values over an unbound stream (i.e. no data window is specified onto the stream) and when your group-by expression returns an unlimited number of values, for example when a timestamp expression is used, then please note the next hint.

A sample statement that aggregates stock tick events by timestamp, assuming the event type offers a property by name timestamp that reflects time in high resolution, for example arrival or system time:

// Note the below statement could lead to an out-of-memory problem:
select symbol, sum(price) from StockTickEvent group by timestamp

As the engine has no means of detecting when aggregation state (sums per symbol) can be discarded, you may use the following hints to control aggregation state lifetime.

The @Hint("reclaim_group_aged=age_in_seconds") hint instructs the engine to discard aggregation state that has not been updated for age_in_seconds seconds.

The optional @Hint("reclaim_group_freq=sweep_frequency_in_seconds") can be used in addition to control the frequency at which the engine sweeps aggregation state to determine aggregation state age and remove state that is older than age_in_seconds seconds. If the hint is not specified, the frequency defaults to the same value as age_in_seconds.

The updated sample statement with both hints:

// Instruct engine to remove state older than 10 seconds and sweep every 5 seconds
@Hint('reclaim_group_aged=10,reclaim_group_freq=5')
select symbol, sum(price) from StockTickEvent group by timestamp

Variables may also be used to provide values for age_in_seconds and sweep_frequency_in_seconds.

This example statement uses a variable named varAge to control how long aggregation state remains in memory, and the engine defaults the sweep frequency to the same value as the variable provides:

@Hint('reclaim_group_aged=varAge')
select symbol, sum(price) from StockTickEvent group by timestamp

5.6.3. Selecting groups of events: the Having clause

Use the having clause to pass or reject events defined by the group-by clause. The having clause sets conditions for the group by clause in the same way where sets conditions for the select clause, except where cannot include aggregate functions, while having often does.

This statement is an example of a having clause with an aggregate function. It posts the total price per symbol for the last 30 seconds of stock tick events for only those symbols in which the total price exceeds 1000. The having clause eliminates all symbols where the total price is equal to or less than 1000.

select symbol, sum(price) 
from StockTickEvent.win:time(30 sec) 
group by symbol 
having sum(price) > 1000

To include more than one condition in the having clause combine the conditions with and, or or not. This is shown in the statement below, which selects only groups with a total price greater than 1000 and an average volume less than 500.

select symbol, sum(price), avg(volume)
from StockTickEvent.win:time(30 sec) 
group by symbol 
having sum(price) > 1000 and avg(volume) < 500

A statement with the having clause should also have a group by clause. If you omit group-by, all the events not excluded by the where clause return as a single group. In that case having acts like a where except that having can have aggregate functions.

The having clause can also be used without a group by clause as the below example shows. The example posts events where the price is less than the current running average price of all stock tick events in the last 30 seconds.

select symbol, price, avg(price) 
from StockTickEvent.win:time(30 sec) 
having price < avg(price)

5.6.4. How the stream filter, Where, Group By and Having clauses interact

When you include filters, the where condition, the group by clause and the having condition in an EPL statement, the sequence in which each clause affects events determines the final result:

  1. The event stream's filter condition, if present, dictates which events enter a window (if one is used). The filter discards any events not meeting filter criteria.

  2. The where clause excludes events that do not meet its search condition.

  3. Aggregate functions in the select list calculate summary values for each group.

  4. The having clause excludes events from the final results that do not meet its search condition.

The following query illustrates the use of filter, where, group by and having clauses in one statement with a select clause containing an aggregate function.

select tickDataFeed, stddev(price)
from StockTickEvent(symbol='IBM').win:length(10) 
where volume > 1000
group by tickDataFeed 
having stddev(price) > 0.8

Esper filters events using the filter criteria for the event stream StockTickEvent. In the example above only events with symbol IBM enter the length window over the last 10 events, all other events are simply discarded. The where clause removes any events posted by the length window (events entering the window and events leaving the window) that do not match the condition of volume greater than 1000. Remaining events are applied to the stddev standard deviation aggregate function for each tick data feed as specified in the group by clause. Each tickDataFeed value generates one event. Esper applies the having clause and only lets events pass for tickDataFeed groups with a standard deviation of price greater than 0.8.

5.6.5. Comparing Keyed Segmented Context, the Group By clause and the std:groupwin view

The keyed segmented context create context ... partition by and the group by clause as well as the built-in std:groupwin view are similar in their ability to group events but very different in their semantics. This section explains the key differences in their behavior and use.

The keyed segmented context as declared with create context ... partition by and context .... select ... creates a new context partition per key value(s). The engine maintains separate data window views as well as separate aggregations per context partition; thereby the keyed segmented context applies to both. See Section 4.2.1, “Keyed Segmented Context” for additional examples.

The group by clause works together with aggregation functions in your statement to produce an aggregation result per group. In greater detail, this means that when a new event arrives, the engine applies the expressions in the group by clause to determine a grouping key. If the engine has not encountered that grouping key before (a new group), the engine creates a new set of aggregation results for that grouping key and applies the aggregation to that new set of results. If the grouping key points to an existing set of prior aggregation results (an existing group), the engine applies the aggregation to the prior set of results for that group.

The std:groupwin view is a built-in view that groups events into data windows. The view is described in greater detail in Section 12.2.2, “Grouped Data Window (std:groupwin)”. Its primary use is to create a separate data window per group, or more generally to create separate instances of all its sub-views for each grouping key encountered.

The table below summarizes the point:

Table 5.4. Grouping Options

Keyed Segmented Context: Separate context partition per key value. Affects all of data windows, aggregations, patterns, etc. (except variables, which are global).

Grouped Data Window (std:groupwin): Separate data window per key value. Affects only the data window that is declared next to it.

Group By Clause (group by): Separate aggregation values per key value. Affects only aggregation values.

Please review the performance section for advice related to performance or memory-use.

The next example shows queries that produce equivalent results. The query using the group by clause is generally preferable as it is easier to read. The second form introduces the stat:uni view, which computes univariate statistics for a given property:

select symbol, avg(price) from StockTickEvent group by symbol
// ... is equivalent to ...
select symbol, average from StockTickEvent.std:groupwin(symbol).stat:uni(price)

The next example shows two queries that are NOT equivalent as the length window is ungrouped in the first query, and grouped in the second query:

select symbol, sum(price) from StockTickEvent.win:length(10) group by symbol
// ... NOT equivalent to ...
select symbol, sum(price) from StockTickEvent.std:groupwin(symbol).win:length(10)

The key difference between the two statements is that in the first statement the length window is ungrouped and applies to all events regardless of group, while in the second query each group gets its own instance of a length window. For example, in the second query events arriving for symbol "ABC" get a length window of 10 events, and events arriving for symbol "DEF" get their own length window of 10 events.

5.7. Stabilizing and Controlling Output: the Output Clause

5.7.1. Output Clause Options

The output clause is optional in Esper and is used to control or stabilize the rate at which events are output and to suppress output events. The EPL language provides for several different ways to control output rate.

Here is the syntax for the output clause that specifies a rate in time interval or number of events:

output [after suppression_def] 
  [[all | first | last | snapshot] every output_rate [seconds | events]]
[and when terminated]

An alternate syntax specifies the time period between output as outlined in Section 5.2.1, “Specifying Time Periods”:

output [after suppression_def] 
  [[all | first | last | snapshot] every time_period]
[and when terminated]

A crontab-like schedule can also be specified. The schedule parameters follow the pattern observer parameters and are further described in Section 6.6.2.2, “timer:at”:

output [after suppression_def] 
  [[all | first | last | snapshot] at 
   (minutes, hours, days of month, months, days of week [, seconds])]
[and when terminated]

For use with contexts, in order to trigger output only when a context partition terminates, specify when terminated as further described in Section 4.5, “Output When Context Partition Ends”:

output [after suppression_def] 
  [[all | first | last | snapshot] when terminated 
  [and termination_expression]
  [then set variable_name = assign_expression [, variable_name = assign_expression [,...]]]
  ]

Last, output can be controlled by an expression that may contain variables, user-defined functions and information about the number of collected events. Output that is controlled by an expression is discussed in detail below.

The after keyword and suppression_def can appear alone or together with further output conditions and suppress output events.

For example, the following statement outputs, every 60 seconds, the total price for all orders in the 30-minute time window:

select sum(price) from OrderEvent.win:time(30 min) output snapshot every 60 seconds

The all keyword is the default and specifies that all events in a batch should be output, each incoming row in the batch producing an output row. Note that for statements that group via the group by clause, the all keyword provides special behavior as described below.

The first keyword specifies that only the first event in an output batch is to be output. Using the first keyword instructs the engine to output the first matching event as soon as it arrives, and then ignore matching events for the time interval or number of events specified. After the time interval has elapsed, or the number of matching events has been reached, the next first matching event is output and the engine again ignores matching events for the following interval. For statements that group via the group by clause, the first keyword provides special behavior as described below.

The last keyword specifies that only the last event is output at the end of the given time interval or after the given number of matching events have been accumulated. Again, for statements that group via the group by clause the last keyword provides special behavior as described below.

The snapshot keyword indicates that the engine outputs the current computation results, considering all events as per the views specified and/or the current aggregation results. While the other keywords control how a batch of events between output intervals is considered, the snapshot keyword outputs the current state of a statement independent of the last batch. Its output is equivalent to the iterator method provided by a statement. The snapshot keyword requires a data window declaration if not used within a join and outputs only the last event if used without a data window.

The output_rate is the frequency at which the engine outputs events. It can be specified in terms of time or number of events. The value can be a number to denote a fixed output rate, or the name of a variable whose value is the output rate. By means of a variable the output rate can be controlled externally and changed dynamically at runtime.
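
For example, a sketch that drives the output rate from a variable; the variable name OutputRateVar is an assumption and would be declared by the application:

create variable int OutputRateVar = 10
select symbol, sum(price) from StockTickEvent group by symbol output last every OutputRateVar events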

Please consult the Appendix A, Output Reference and Samples for detailed information on insert and remove stream output for the various output clause keywords.

For use with contexts you may append the keywords and when terminated to trigger output at the rate defined and in addition trigger output when the context partition terminates. Please see Section 4.5, “Output When Context Partition Ends” for details.

The time interval can also be specified in terms of minutes, as the following statement shows:

select * from StockTickEvent.win:length(5) output every 1.5 minutes

A second way that output can be stabilized is by batching events until a certain number of events have been collected. The next statement only outputs when either 5 (or more) new or 5 (or more) old events have been batched.

select * from StockTickEvent.win:time(30 sec) output every 5 events

Additionally, event output can be further modified by the optional last keyword, which causes output of only the last event to arrive into an output batch.

select * from StockTickEvent.win:time(30 sec) output last every 5 events

Using the first keyword you can be notified at the start of the interval. This allows you to watch for situations such as a rate falling below a threshold: you are informed the moment it first happens, and then only once per the specified output interval thereafter.

select * from TickRate.win:time(30 seconds) where rate<100 output first every 60 seconds

A sample statement using the Unix "crontab"-command schedule is shown next. See Section 6.6.2.2, “timer:at” for details on schedule syntax. Here, output occurs every 15 minutes from 8am to 5:45pm (hours 8 to 17 at 0, 15, 30 and 45 minutes past the hour):

select symbol, sum(price) from StockTickEvent group by symbol output at (*/15, 8:17, *, *, *)

5.7.1.1. Controlling Output Using an Expression

Output can also be controlled by an expression that may check variable values, use user-defined functions and query built-in properties that provide additional information. The synopsis is as follows:

output [after suppression_def] 
  [[all | first | last | snapshot] when trigger_expression 
    [then set variable_name = assign_expression [, variable_name = assign_expression [,...]]]
  [and when terminated 
    [and termination_expression]
    [then set variable_name = assign_expression [, variable_name = assign_expression [,...]]]
  ]

The when keyword must be followed by a trigger expression returning a boolean value of true or false, indicating whether to output. Use the optional then keyword to change variable values after the trigger expression evaluates to true. An assignment expression assigns a new value to variable(s).

For use with contexts you may append the keywords and when terminated to also trigger output when the context partition terminates. Please see Section 4.5, “Output When Context Partition Ends” for details. You may optionally specify a termination expression. If that expression is provided the engine evaluates the expression when the context partition terminates: The evaluation result of true means output occurs when the context partition terminates, false means no output occurs when the context partition terminates. You may specify then set followed by a list of assignments to assign variables. Assignments are executed on context partition termination regardless of the termination expression, if present.

Let's consider an example. The next statement assumes that your application has defined a variable by name OutputTriggerVar of boolean type. The statement outputs rows only when the OutputTriggerVar variable has a boolean value of true:

select sum(price) from StockTickEvent output when OutputTriggerVar = true

The engine evaluates the trigger expression when streams and data views post one or more insert or remove stream events after considering the where clause, if present. It also evaluates the trigger expression when any of the variables used in the trigger expression changes value. Thus output occurs as follows:

  1. When there are insert or remove stream events and the when trigger expression evaluates to true, the engine outputs the resulting rows.

  2. When any of the variables in the when trigger expression changes value, the engine evaluates the expression and outputs results. Result output occurs within the minimum time interval of timer resolution (100 milliseconds).

By adding a then part to the EPL, we can reset any variables after the trigger expression has evaluated to true:

select sum(price) from StockTickEvent 
  output when OutputTriggerVar = true  
  then set OutputTriggerVar = false
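
The expression-controlled output can be combined with when terminated as shown in the synopsis above; a minimal sketch, assuming a context named NineToFive has been declared elsewhere:

context NineToFive 
select sum(price) from StockTickEvent 
output when OutputTriggerVar = true then set OutputTriggerVar = false
and when terminated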

Expressions in the when and then clauses may, for example, use variables, user-defined functions or any of the built-in properties that are described in the list below.

The following built-in properties are available for use:

Table 5.5. Built-In Properties for Use with Output When

last_output_timestamp: Timestamp when the last output occurred for the statement; initially set to the time of statement creation.
count_insert: Number of insert stream events.
count_insert_total: Number of insert stream events in total (not reset when output occurs).
count_remove: Number of remove stream events.
count_remove_total: Number of remove stream events in total (not reset when output occurs).

The values provided by count_insert and count_remove are non-continuous: the number returned for these properties may 'jump' up rather than count up by 1. The counts reset to zero upon output.
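
For example, a sketch that uses the built-in count_insert property to trigger output whenever at least 100 insert stream events have accumulated since the last output:

select symbol, sum(price) from StockTickEvent 
group by symbol 
output when count_insert >= 100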

The following restrictions apply to expressions used in the output rate clause:

  • Event property names cannot be used in the output clause.

  • Aggregation functions cannot be used in the output clause.

  • The prev previous event function and the prior prior event function cannot be used in the output clause.

5.7.1.2. Suppressing Output With After

The after keyword and its time period or number of events parameters are optional and can occur after the output keyword, either alone or with the output conditions listed above.

The synopsis of after is as follows:

output after time_period | number events [...]

When using after either alone or together with further output conditions, the engine discards all output events until the time period has passed, as measured from the start of the statement, or until the number of output events has been reached. The discarded events are not output and do not count towards any further output conditions, if any are specified.

For example, the following statement outputs every minute the total price for all orders in the 30-minute time window but only after 30 minutes have passed:

select sum(price) from OrderEvent.win:time(30 min) output after 30 min snapshot every 1 min

An example in which after occurs alone is below, in a statement that outputs the total price for all orders in the last minute, each time an event enters or leaves the data window, but only after 1 minute has passed:

select sum(price) from OrderEvent.win:time(1 min) output after 1 min

To demonstrate after when used with an event count, this statement finds pairs of orders with the same id but suppresses output for the first 5 pairs:

select * from pattern[every o=OrderEvent->p=OrderEvent(id=o.id)] output after 5 events

5.7.2. Aggregation, Group By, Having and Output clause interaction

Remove stream events can also be useful in conjunction with aggregation and the output clause: When the engine posts remove stream events for fully-aggregated queries, it presents the aggregation state before the expiring event leaves the data window. Your application can thus easily obtain a delta between the new aggregation value and the prior aggregation value.
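
For example, a sketch that selects both the insert and remove stream so that a listener can compare the prior and new total per symbol:

select irstream symbol, sum(price) from StockTickEvent.win:time(30 sec) 
group by symbol 
output all every 10 seconds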

The engine evaluates the having-clause at the granularity of the data posted by views. That is, if you utilize a time window and output every 10 events, the having clause applies to each individual event or events entering and leaving the time window (and not once per batch of 10 events).

The output clause interacts in two ways with the group by and having clauses. First, in the output every n events case, the number n refers to the number of events arriving into the group by clause. That is, if the group by clause outputs only 1 event per group, or if the arriving events don't satisfy the having clause, then the actual number of events output by the statement could be fewer than n.

Second, the last, all and first keywords have special meanings when used in a statement with aggregate functions and the group by clause:

  • When no keyword is specified, the engine produces an output row for each row in the batch or when using group-by then an output per group only for those groups present in the batch, following Section 3.7.2, “Output for Aggregation and Group-By”.

  • The all keyword (the default) specifies that the most recent data for all groups seen so far should be output, whether or not these groups' aggregate values have just been updated.

  • The last keyword specifies that only groups whose aggregate values have been updated with the most recent batch of events should be output.

  • The first keyword specifies that only groups whose aggregate values have been updated with the most recent batch of events should be output following the defined frequency, keeping frequency state for each group.

Please consult the Appendix A, Output Reference and Samples for detailed information on insert and remove stream output for aggregation and group-by.

By adding an output rate limiting clause to a statement that contains a group by clause we can control output of groups to obtain one row for each group, generating an event per group at the given output frequency.

The next statement outputs the total price per symbol cumulatively (no data window was used here). As it specifies the all keyword, the statement outputs the current value for all groups seen so far, regardless of whether the group was updated in the last interval. Output occurs after an interval of 5 seconds has passed and at the end of each subsequent interval:

select symbol, sum(price) from StockTickEvent group by symbol output all every 5 seconds

The below statement outputs the total price per symbol considering events in the last 3 minutes. When events leave the 3-minute data window, output also occurs as new aggregation values are computed. The last keyword instructs the engine to output only those groups that had changes. Output occurs after an interval of 10 seconds has passed and at the end of each subsequent interval:

select symbol, sum(price) from StockTickEvent.win:time(3 min)
group by symbol output last every 10 seconds

This statement also outputs total price per symbol considering events in the last 3 minutes. The first keyword instructs the engine to output as soon as there is a new value for a group. After output for a given group the engine suppresses output for the same group for 10 seconds and does not suppress output for other groups. Output occurs again for that group after the interval when the group has new value(s):

select symbol, sum(price) from StockTickEvent.win:time(3 min)
group by symbol output first every 10 seconds

5.7.3. Runtime Considerations

Output rate limiting provides output events to your application in regular intervals. Between intervals, the engine uses a buffer to hold events until the output condition is reached. If your application has high-volume streams, you may need to be mindful of the memory needs for output rates.

The output clause with the snapshot keyword does not require a buffer; all other output keywords consume memory until the output condition is reached.

5.8. Sorting Output: the Order By Clause

The order by clause is optional. It is used for ordering output events by their properties, or by expressions involving those properties.

For example, the following statement outputs batches of 5 or more stock tick events that are sorted first by price ascending and then by volume ascending:

select symbol from StockTickEvent.win:time(60 sec) 
output every 5 events 
order by price, volume

Here is the syntax for the order by clause:

order by expression [asc | desc] [, expression [asc | desc]] [, ...]

If the order by clause is absent then the engine still makes certain guarantees about the ordering of output:

  • If the statement is not a join, does not group via group by clause and does not declare grouped data windows via std:groupwin view, the order in which events are delivered to listeners and through the iterator pull API is the order of event arrival.

  • If the statement is a join or outer join, or groups, then the order in which events are delivered to listeners and through the iterator pull API is not well-defined. Use the order by clause if your application requires events to be delivered in a well-defined order.

Esper places the following restrictions on the expressions in the order by clause:

  1. All aggregate functions that appear in the order by clause must also appear in the select expression.

Otherwise, any kind of expression that can appear in the select clause, as well as any name defined in the select clause, is also valid in the order by clause.

By default all sort operations on string values are performed via the compare method and are thus not locale dependent. To account for differences in language or locale, see Section 14.4.19, “Engine Settings related to Language and Locale” to change this setting.

5.9. Limiting Row Count: the Limit Clause

The limit clause is typically used together with the order by and output clauses to limit your query results to those that fall within a specified range. You can use it to receive the first given number of result rows, or to receive a range of result rows.

There are two syntaxes for the limit clause, each of which can be parameterized by integer constants or by variable names. The first syntax is shown below:

limit row_count [offset offset_count]

The required row_count parameter specifies the number of rows to output. The row_count can be an integer constant or the name of an integer-type variable evaluated at runtime.

The optional offset_count parameter specifies the number of rows that should be skipped (offset) at the beginning of the result set. A variable can also be used for this parameter.

The next sample EPL query outputs the top 10 counts per property 'uri' every 1 minute.

select uri, count(*) from WebEvent 
group by uri 
output snapshot every 1 minute
order by count(*) desc 
limit 10

The next statement demonstrates the use of the offset keyword. It outputs ranks 3 to 10 per property 'uri' every 1 minute:

select uri, count(*) from WebEvent 
group by uri 
output snapshot every 1 minute
order by count(*) desc 
limit 8 offset 2

The second syntax for the limit clause is for SQL standard compatibility and specifies the offset first, followed by the row count:

limit offset_count[, row_count]

The following are equivalent:

limit 8 offset 2
// ...equivalent to
limit 2, 8

A negative value for row_count returns an unlimited number of rows, and a zero value returns no rows. If variables are used, then the current variable value at the time of output dictates the row count and offset. A variable returning a null value for row_count also returns an unlimited number of rows.

A negative value for offset is not allowed. If your variable returns a negative or null value for offset then the value is assumed to be zero (i.e. no offset).
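
For example, a sketch that drives both parameters from variables; the variable names MyRowCountVar and MyOffsetVar are assumptions and would be declared by the application:

create variable int MyRowCountVar = 10
create variable int MyOffsetVar = 2

select uri, count(*) from WebEvent 
group by uri 
output snapshot every 1 minute
order by count(*) desc 
limit MyRowCountVar offset MyOffsetVar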

The iterator pull API also honors the limit clause, if present.

5.10. Merging Streams and Continuous Insertion: the Insert Into Clause

The insert into clause is optional in Esper. The clause can be specified to make the results of a statement available as an event stream for use in further statements, or to insert events into a named window. The clause can also be used to merge multiple event streams to form a single stream of events.

The syntax for the insert into clause is as follows:

insert [istream | rstream] into event_stream_name  [ (property_name [, property_name] ) ]

The istream (default) and rstream keywords are optional. If no keyword or the istream keyword is specified, the engine supplies the insert stream events generated by the statement. The insert stream consists of the events entering the respective window(s) or stream(s). If the rstream keyword is specified, the engine supplies the remove stream events generated by the statement. The remove stream consists of the events leaving the respective window(s).

The event_stream_name is an identifier that names the event stream (and also implicitly names the types of events in the stream) generated by the engine. The identifier can be used in further statements to filter and process events of that event stream. The insert into clause can consist of just an event stream name, or an event stream name and one or more property names.

The engine also allows listeners to be attached to a statement that contains an insert into clause. Listeners receive all events posted to the event stream.

To merge event streams, simply use the same event_stream_name identifier in all EPL statements that merge their result event streams. Make sure that the number and names of event properties are the same and that the event property types match up.
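
For example, a sketch that merges two streams into one; the event types UserLoginEvent and UserLogoutEvent and the userId property are assumptions, and both statements select the same property names and types:

insert into SecurityEvent select userId, 'login' as action from UserLoginEvent
insert into SecurityEvent select userId, 'logout' as action from UserLogoutEvent

// a consumer of the merged stream
select userId, action from SecurityEvent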

Esper places the following restrictions on the insert into clause:

  1. The number of elements in the select clause must match the number of elements in the insert into clause if the clause specifies a list of event property names.

  2. If the event stream name has already been defined by a prior statement or configuration, and the event property names and/or event types do not match, an exception is thrown at statement creation time.

The following sample inserts into an event stream by name CombinedEvent:

insert into CombinedEvent
select A.customerId as custId, A.timestamp - B.timestamp as latency
  from EventA.win:time(30 min) A, EventB.win:time(30 min) B
 where A.txnId = B.txnId

Each event in the CombinedEvent event stream has two event properties named "custId" and "latency". The events generated by the above statement can be used in further statements, such as shown in the next statement:

select custId, sum(latency)
  from CombinedEvent.win:time(30 min)
 group by custId

The example statement below shows the alternative form of the insert into clause that explicitly defines the property names to use.

insert into CombinedEvent (custId, latency)
select A.customerId, A.timestamp - B.timestamp 
...

The rstream keyword can be useful to indicate to the engine to generate only remove stream events. This can be useful if we want to trigger actions when events leave a window rather than when events enter a window. The statement below generates CombinedEvent events when EventA and EventB leave the window after 30 minutes.

insert rstream into CombinedEvent
select A.customerId as custId, A.timestamp - B.timestamp as latency
  from EventA.win:time(30 min) A, EventB.win:time(30 min) B
 where A.txnId = B.txnId

The insert into clause can be used in connection with patterns to provide pattern results to further statements for analysis:

insert into ReUpEvent
select linkUp.ip as ip 
from pattern [every linkDown=LinkDownEvent -> linkUp=LinkUpEvent(ip=linkDown.ip)]

5.10.1. Transposing a Property To a Stream

Sometimes your events may carry properties that are themselves event objects. Therefore EPL offers a special syntax to insert the value of a property itself as an event into a stream:

insert into stream_name select property_name.* from ...

This feature is only supported for JavaBean events and is not supported for Map or XML events. Nested property names are also not supported.

In this example, the class Summary has properties bid and ask that are of type Quote:

public class Summary {
  private Quote bid;
  private Quote ask;
  ...

The statement to populate a stream of Quote events is thus:

insert into MyBidStream select bid.* from Summary

5.10.2. Merging Streams By Event Type

The insert into clause allows merging multiple event streams into a single event stream. The clause names an event stream to insert into by specifying an event_stream_name. The first statement that inserts into the named stream defines the stream's event types. Further statements that insert into the same event stream must match the type of events inserted into the stream as declared by the first statement.

One approach to merging event streams specifies individual column names either in the select clause or in the insert into clause of the statement. This approach has been shown in earlier examples.

Another approach to merging event streams specifies the wildcard (*) in the select clause (or the stream wildcard) to select the underlying event. The events in the event stream must then have the same event type as generated by the from clause.

Assume a statement creates an event stream named MergedStream by selecting OrderEvent events:

insert into MergedStream select * from OrderEvent

A statement can use the stream wildcard selector to select only OrderEvent events in a join:

insert into MergedStream select ord.* from ItemScanEvent, OrderEvent as ord

And a statement may also use an application-supplied user-defined function to convert events to OrderEvent instances:

insert into MergedStream select MyLib.convert(item) from ItemScanEvent as item

Esper specifically recognizes a conversion function: A conversion function must be the only selected column, and it must return either a Java object or java.util.Map.

5.10.3. Merging Disparate Types of Events: Variant Streams

A variant stream is a predefined stream into which events of multiple disparate event types can be inserted.

A variant stream name may appear anywhere in a pattern or from clause. In a pattern, a filter against a variant stream matches any events of any of the event types inserted into the variant stream. In a from clause, including for named windows, views declared onto a variant stream may hold events of any of the event types inserted into the variant stream.

A variant stream is thus useful in problems that require different types of event to be treated the same.

Variant streams can be declared by means of create variant schema or can be predefined via runtime or initialization-time configuration as described in Section 14.4.25, “Variant Stream”. Your application may declare or predefine variant streams to carry events of a limited set of event types, or you may choose the variant stream to carry any and all types of events. This choice affects what event properties are available for consuming statements or patterns of the variant stream.
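
As a sketch, the variant streams referenced in the examples below could alternatively be declared in EPL rather than by configuration; the limited form lists the permitted event types and the any-type form uses the wildcard:

create variant schema OrderStream as ServiceOrder, ProductOrder
create variant schema OutgoingEvents as *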

Assume that an application predefined a variant stream named OrderStream to carry only ServiceOrder and ProductOrder events. An insert into clause inserts events into the variant stream:

insert into OrderStream select * from ServiceOrder
insert into OrderStream select * from ProductOrder

Here is a sample statement that consumes the variant stream and outputs a total price per customer id for the last 30 seconds of ServiceOrder and ProductOrder events:

select customerId, sum(price) from OrderStream.win:time(30 sec) group by customerId

If your application predefines the variant stream to hold a specific set of event types, as the sample above did, then all event properties that are common to all specified types are visible on the variant stream, including nested, indexed and mapped properties. For access to properties that are only available on one of the types, the dynamic property syntax must be used. In the example above, the customerId and price properties were common to both ServiceOrder and ProductOrder events.

For example, here is a consuming statement that selects a service duration property that only ServiceOrder events have; the dynamic property must be cast to double and null values removed in order to aggregate:

select customerId, sum(coalesce(cast(serviceDuration?, double), 0)) 
from OrderStream.win:time(30 sec) group by customerId

If your application predefines a variant stream to hold any type of events (the any type variance), then all event properties of the variant stream are effectively dynamic properties.

For example, an application may define an OutgoingEvents variant stream to hold any type of event. The next statement is a sample consumer of the OutgoingEvents variant stream that looks for the destination property and fires for each event in which the property exists with a value of 'email':

select * from OutgoingEvents(destination = 'email')

5.10.4. Decorated Events

Your select clause may use the '*' wildcard together with further expressions to populate a stream of events. A sample statement is:

insert into OrderStream select *, price*units as linePrice from PurchaseOrder

When using wildcard and selecting additional expression results, the engine produces what is called decorating events for the resulting stream. Decorating events add additional property values to an underlying event.

In the above example the resulting OrderStream consists of underlying PurchaseOrder events decorated by a linePrice property that is a result of the price*units expression.

In order to use insert into to insert into an existing stream of decorated events, your underlying event type must match, and all additional decorating property names and types of the select clause must also match.

5.10.5. Event as a Property

Your select clause may use the stream name to populate a stream of events in which each event has properties that are themselves events. A sample statement is:

insert into CompositeStream select order, service, order.price+service.price as totalPrice 
from PurchaseOrder.std:lastevent() as order, ServiceEvent.std:lastevent() as service

When using the stream name (or tag in patterns) in the select clause, the engine produces composite events: One or more of the properties of the composite event are events themselves.

In the above example the resulting CompositeStream consists of 3 columns: the PurchaseOrder event, the ServiceEvent event and the totalPrice property that is a result of the order.price+service.price expression.

In order to use insert into to insert into an existing stream of events in which properties are themselves events, each event column's event type must match, and all additional property names and types of the select clause must also match.

5.10.6. Instantiating and Populating an Underlying Event Object

Your insert into clause may also directly instantiate and populate application underlying event objects or Map event objects. This is described in greater detail in Section 2.11, “Event Objects Instantiated and Populated by Insert Into”.

If instead you have an expression that returns an event object, please read on to the next section.

5.10.7. Transposing an Expression Result

You can transpose an object returned as an expression result into a stream using the transpose function as described further in Section 9.4, “Select-Clause transpose Function”.
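
A minimal sketch, reusing the MergedStream and the user-defined conversion function from the earlier merging example (MyLib.convert and its return type are assumptions):

insert into MergedStream select transpose(MyLib.convert(item)) from ItemScanEvent as item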

5.11. Subqueries

A subquery is a select within another statement. Esper supports subqueries in the select clause, in the where clause and in stream and pattern filter expressions. Subqueries provide an alternative way to perform operations that would otherwise require complex joins. Subqueries can also make statements more readable than complex joins.

Esper supports both simple subqueries as well as correlated subqueries. In a simple subquery, the inner query is not correlated to the outer query. Here is an example simple subquery within a select clause:

select assetId, (select zone from ZoneClosed.std:lastevent()) as lastClosed from RFIDEvent

If the inner query is dependent on the outer query, we will have a correlated subquery. An example of a correlated subquery is shown below. Notice the where clause in the inner query, where the condition involves a stream from the outer query:

select * from RfidEvent as RFID where 'Dock 1' = 
  (select name from Zones.std:unique(zoneId) where zoneId = RFID.zoneId)

The example above shows a subquery in the where clause. The statement selects RFID events in which the zone name matches a string constant based on zone id. The statement uses the view std:unique to guarantee that only the last event per zone id is retained for processing by the subquery.

The next example is a correlated subquery within a select clause. In this statement the select clause retrieves the zone name by means of a subquery against the Zones set of events correlated by zone id:

select zoneId, (select name from Zones.std:unique(zoneId) 
  where zoneId = RFID.zoneId) as name from RFIDEvent

Note that when a simple or correlated subquery returns multiple rows, the engine returns a null value as the subquery result. To limit the number of events returned by a subquery, consider using one of the views std:lastevent, std:unique or std:groupwin, an aggregation function, or the multi-row and multi-column selects described below.

The select clause of a subquery also allows wildcard selects, which return as an event property the underlying event object of the event type as defined in the from clause. An example:

select (select * from MarketData.std:lastevent()) as md 
  from pattern [every timer:interval(10 sec)]

The output events to the statement above contain the underlying MarketData event in a property named "md". The statement populates the last MarketData event into a property named "md" every 10 seconds following the pattern definition, or populates a null value if no MarketData event has been encountered so far.

When your subquery returns multiple rows, you must use an aggregation function in the select clause of the subselect, as a subquery can only return a single row and single value object. To return multiple values from a subquery, consider writing a custom aggregation function that returns an array or collection of values.

Aggregation functions may be used in the select clause of the subselect as this example outlines:

select * from MarketData
where price > (select max(price) from MarketData(symbol='GOOG').std:lastevent())

As the sub-select expression is evaluated first (by default), the query above actually never fires for the GOOG symbol, only for other symbols that have a price higher than the current maximum for GOOG. As a side note, the insert into clause can also be handy to compute aggregation results for use in multiple subqueries.

When using aggregation functions in a correlated subselect the engine computes the aggregation based on data window or named window contents matching the where-clause.

The following example compares the quantity value provided by the current order event against the total quantity of all order events in the last 1 hour for the same client.

select * from OrderEvent oe
where qty > 
  (select sum(qty) from OrderEvent.win:time(1 hour) pd 
  where pd.client = oe.client)

Filter expressions in a pattern or stream may also employ subqueries. Subqueries can be uncorrelated or can be correlated to properties of the stream or to properties of tagged events in a pattern. Subqueries may reference named windows as well.

The following example filters BarData events that have a close price less than the last moving average (field movAgv) as provided by stream SMA20Stream (an uncorrelated subquery):

select * from BarData(ticker='MSFT', closePrice < 
    (select movAgv from SMA20Stream(ticker='MSFT').std:lastevent()))

A few generic examples follow to demonstrate the point. The examples use short event and property names so they are easy to read. Assume A and B are streams and DNamedWindow is a named window, and properties a_id, b_id, d_id, a_val, b_val, d_val respectively:

// Sample correlated subquery as part of stream filter criteria
select * from A(a_val in 
  (select b_val from B.std:unique(b_val) as b where a.a_id = b.b_id)) as a
// Sample correlated subquery against a named window
select * from A(a_val in 
  (select b_val from DNamedWindow as d where a.a_id = d.d_id)) as a
// Sample correlated subquery in the filter criteria as part of a pattern, querying a named window
select * from pattern [
  a=A -> b=B(b_val = 
    (select d_val from DNamedWindow as d where d.d_id = b.b_id and d.d_id = a.a_id))
]

Subquery state starts to accumulate as soon as a statement starts (and not only when a pattern-subexpression activates).

The following restrictions apply to subqueries:

  1. The subquery stream definition must define a data window or other view to limit subquery results, reducing the number of events held for subquery execution

  2. Subqueries can only consist of a select clause, a from clause and a where clause. The group by and having clauses, as well as joins, outer-joins and output rate limiting are not permitted within subqueries.

  3. If using aggregation functions in a subquery, note these limitations:

    1. None of the properties of the correlated stream(s) can be used within aggregation functions.

    2. The properties of the subselect stream must all be within aggregation functions.

The order of evaluation of subqueries relative to the containing statement is guaranteed: If the containing statement and its subqueries are reacting to the same type of event, the subquery will receive the event first before the containing statement's clauses are evaluated. This behavior can be changed via configuration. The order of evaluation of subqueries is not guaranteed between subqueries.

Performance of your statement containing one or more subqueries principally depends on two parameters. First, if your subquery correlates one or more columns in the subquery stream with the enclosing statement's streams, the engine automatically builds the appropriate indexes for fast row retrieval based on the key values correlated (joined). The second parameter is the number of rows found in the subquery stream and the complexity of the filter criteria (where clause), as each row in the subquery stream must evaluate against the where clause filter.

5.11.1. The 'exists' Keyword

The exists condition is considered "to be met" if the subquery returns at least one row. The not exists condition is considered true if the subquery returns no rows.

The synopsis for the exists keyword is as follows:

exists (subquery)

Let's take a look at a simple example. The following is an EPL statement that uses the exists condition:

select assetId from RFIDEvent as RFID 
  where exists (select * from Asset.std:unique(assetId) where assetId = RFID.assetId)

This select statement will return all RFID events for which there is at least one Asset event, unique by asset id, with the same asset id.
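
Conversely, a sketch using not exists returns only those RFID events for which no such asset event is held:

select assetId from RFIDEvent as RFID 
  where not exists (select * from Asset.std:unique(assetId) where assetId = RFID.assetId)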

5.11.2. The 'in' and 'not in' Keywords

The in subquery condition is true if the value of an expression matches one or more of the values returned by the subquery. Consequently, the not in condition is true if the value of an expression matches none of the values returned by the subquery.

The synopsis for the in keyword is as follows:

expression in (subquery)

The right-hand side subquery must return exactly one column.

The next statement demonstrates the use of the in subquery condition:

select assetId from RFIDEvent
  where zone in (select zone from ZoneUpdate(status = 'closed').win:time(10 min))

The above statement uses the in subquery condition to select RFID events whose zone is among the zones reported closed within the last 10 minutes.

Note that if the left-hand expression yields null, or if there are no equal right-hand values and at least one right-hand row yields null, the result of the in construct will be null, not false (or true for not-in). This is in accordance with SQL's normal rules for Boolean combinations of null values.

5.11.3. The 'any' and 'some' Keywords

The any subquery condition is true if the expression returns true for one or more of the values returned by the subquery.

The synopsis for the any keyword is as follows:

expression operator any (subquery)
expression operator some (subquery)

The left-hand expression is evaluated and compared to each row of the subquery result using the given operator, which must yield a Boolean result. The result of any is "true" if any true result is obtained. The result is "false" if no true result is found (including the special case where the subquery returns no rows).

The operator can be any of the following values: =, !=, <>, <, <=, >, >=.

The some keyword is a synonym for any. The in construct is equivalent to = any.

The right-hand side subquery must return exactly one column.

The next statement demonstrates the use of the any subquery condition:

select * from ProductOrder as ord
  where quantity < any
    (select minimumQuantity from MinimumQuantity.win:keepall())

The above query compares the ProductOrder event's quantity value with all rows from the MinimumQuantity stream of events and returns only those ProductOrder events that have a quantity that is less than any of the minimum quantity values of the MinimumQuantity events.

Note that if there are no successes and at least one right-hand row yields null for the operator's result, the result of the any construct will be null, not false. This is in accordance with SQL's normal rules for Boolean combinations of null values.

5.11.4. The 'all' Keyword

The all subquery condition is true if the expression returns true for all of the values returned by the subquery.

The synopsis for the all keyword is as follows:

expression operator all (subquery)

The left-hand expression is evaluated and compared to each row of the subquery result using the given operator, which must yield a Boolean result. The result of all is "true" if all rows yield true (including the special case where the subquery returns no rows). The result is "false" if any false result is found. The result is null if the comparison does not return false for any row, and it returns null for at least one row.

The operator can be any of the following values: =, !=, <>, <, <=, >, >=.

The not in construct is equivalent to != all.

The right-hand side subquery must return exactly one column.

The next statement demonstrates the use of the all subquery condition:

select * from ProductOrder as ord
  where quantity < all
    (select minimumQuantity from MinimumQuantity.win:keepall())

The above query compares the ProductOrder event's quantity value with all rows from the MinimumQuantity stream of events and returns only those ProductOrder events that have a quantity that is less than all of the minimum quantity values of the MinimumQuantity events.

5.11.5. Multi-Column Selection

Your subquery may select multiple columns in the select clause including multiple aggregated values from a data window or named window.

The following example is a correlated subquery that selects wildcard and in addition selects the bid and offer properties of the last MarketData event for the same symbol as the arriving OrderEvent:

select *,
  (select bid, offer from MarketData.std:unique(symbol) as md 
   where md.symbol = oe.symbol) as bidoffer
from OrderEvent oe

Output events for the above query contain all properties of the original OrderEvent event. In addition each output event contains a bidoffer nested property that itself contains the bid and offer properties. You may retrieve the bid and offer from output events directly via the bidoffer.bid property name syntax for nested properties.

The next example is similar to the above query but instead selects aggregations and selects from a named window by name OrderNamedWindow (creation not shown here). For each arriving OrderEvent it selects the total quantity and count of all order events for the same client, as currently held by the named window:

select *,
  (select sum(qty) as sumPrice, count(*) as countRows 
   from OrderNamedWindow as onw
   where onw.client = oe.client) as pastOrderTotals
from OrderEvent as oe

Output events for the above query contain all properties of the original OrderEvent event. In addition each output event contains a pastOrderTotals nested property that itself contains the sumPrice and countRows properties.

The next EPL statement computes a prorated quantity considering the maximum and minimum quantity for the last 1 minute of order events:

expression subq {
  (select max(quantity) as maxq, min(quantity) as minq from OrderEvent.win:time(1 min))
}
select (quantity - subq().minq) / (subq().maxq - subq().minq) as prorated
from OrderEvent

5.11.6. Multi-Row Selection

While a subquery cannot change the cardinality of the selected stream, a subquery can return multiple values from the selected data window or named window. This section shows examples of the window aggregation function in subselects.

Consider using an inner join, outer join or unidirectional join instead to achieve a 1-to-many cardinality in the number of output events.

The next example is an uncorrelated subquery that selects all current ZoneEvent events considering the last ZoneEvent per zone for each arriving RFIDEvent.

select assetId,
 (select window(z.*) as winzones from ZoneEvent.std:unique(zone) as z) as zones
 from RFIDEvent

Output events for the above query contain two properties: the assetId property and the zones property. The latter property is a nested property that contains the winzones property. You may retrieve the zones from output events directly via the zones.winzones property name syntax for nested properties.

In this example for a correlated subquery against a named window we assume that the OrderNamedWindow has been created and contains order events. The query returns for each MarketData event the list of order ids for orders with the same symbol:

select price,
 (select window(orderId) as winorders 
  from OrderNamedWindow onw 
  where onw.symbol = md.symbol) as orderIds
 from MarketData md

Output events for the above query contain two properties: the price property and the orderIds property. The latter property is a nested property that contains the winorders property of type array.

5.11.7. Hints Related to Subqueries

The following hints are available to tune performance and memory use of subqueries.

Use the @Hint('set_noindex') hint for a statement that utilizes one or more subqueries. It instructs the engine to always perform a full table scan. The engine does not build an implicit index or use an explicitly-created index when this hint is provided. Use of the hint may result in reduced memory use but poor statement performance.
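
For example, the hint can be applied to the first subquery statement shown in this section:

@Hint('set_noindex')
select assetId, (select zone from ZoneClosed.std:lastevent()) as lastClosed from RFIDEvent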

The following hints are available to tune performance and memory use of subqueries that select from named windows.

Named windows are globally-visible data windows. As such an application may create explicit indexes as discussed in Section 5.15.10, “Explicitly Indexing Named Windows”. The engine may also elect to create implicit indexes (no create-index EPL required) for index-based lookup of rows when executing on-select, on-merge, on-update and on-delete statements and for statements that subquery a named window.

By default and without specifying a hint, each statement that subqueries a named window also maintains its own index for looking up events held by the named window. The engine maintains the index by consuming the named window insert and remove stream. When the statement is destroyed it releases that index.

Specify the @Hint('enable_window_subquery_indexshare') hint to enable subquery index sharing for named windows. When using this hint, indexes for subqueries are maintained by the named window itself (and not each statement), are shared between one or more statements and may also utilize explicit indexes. Specify the hint once as part of the create window statement.

This sample EPL statement creates a named window with subquery index sharing enabled:

@Hint('enable_window_subquery_indexshare')
create window AllOrdersNamedWindow.win:keepall() as OrderMapEventType

When subquery index sharing is enabled, performance may increase as named window stream consumption is no longer needed. You may also expect reduced memory use especially if a large number of EPL statements perform similar subqueries against a named window. Subquery index sharing may require additional short-lived object creation and may slightly increase lock held time for named windows.

The following statement performs a correlated subquery against the named window above. When a settlement event arrives it selects the order detail for the same order id as provided by the settlement event:

select 
  (select * from AllOrdersNamedWindow as onw 
    where onw.orderId = se.orderId) as orderDetail
  from SettlementEvent as se

With subquery index sharing enabled the engine maintains an index of order events by order id for the named window, and shares that index between additional statements until the time all utilizing statements are destroyed.

You may disable subquery index sharing for a specific statement by specifying the @Hint('disable_window_subquery_indexshare') hint, as this example shows, causing the statement to maintain its own index:

@Hint('disable_window_subquery_indexshare')
select 
  (select * from AllOrdersNamedWindow as onw 
    where onw.orderId = se.orderId) as orderDetail
  from SettlementEvent as se

5.12. Joining Event Streams

5.12.1. Introducing Joins

Two or more event streams can be part of the from-clause and thus both (all) streams determine the resulting events. This section summarizes the important concepts. The sections that follow present more detail on each topic.

The default join is an inner join which produces output events only when there is at least one match in all streams.

Consider the sample statement shown next:

select * from TickEvent.std:lastevent(), NewsEvent.std:lastevent()

The above statement outputs the last TickEvent and the last NewsEvent in one output event when either a TickEvent or a NewsEvent arrives. If no TickEvent was received before a NewsEvent arrives, no output occurs. Similarly when no NewsEvent was received before a TickEvent arrives, no output occurs.

The where-clause lists the join conditions that Esper uses to relate events in the two or more streams.

The next example statement retains the last TickEvent and last NewsEvent per symbol, and joins the two streams based on their symbol value:

select * from TickEvent.std:unique(symbol) as t, NewsEvent.std:unique(symbol) as n
where t.symbol = n.symbol

As before, when a TickEvent arrives for a symbol that has no matching NewsEvent then there is no output event.

An outer join does not require each event in either stream to have a matching event. The full outer join is useful when output is desired when no match is found. The different outer join types (full, left, right) are explained in more detail below.

This example statement is an outer-join and also returns the last TickEvent and last NewsEvent per symbol:

select * from TickEvent.std:unique(symbol) as t
full outer join NewsEvent.std:unique(symbol) as n on t.symbol = n.symbol

In the sample statement above, when a TickEvent arrives for a symbol that has no matching NewsEvent, or when a NewsEvent arrives for a symbol that has no matching TickEvent, the statement still produces an output event with a null column value for the missing event.

Note that each of the sample queries above defines a data window. The sample queries above use the last-event data window (std:lastevent) or the unique data window (std:unique). A data window serves to indicate the subset of events to join from each stream and may be required depending on the join.

In the above queries, when either a TickEvent arrives or when a NewsEvent arrives then the query evaluates and there is output. The same holds true if additional streams are added to the from-clause: each of the streams in the from-clause triggers the join to evaluate.

The unidirectional keyword instructs the engine to evaluate the join only when an event arrives from the single stream that was marked with the unidirectional keyword. In this case no data window should be specified for the stream marked as unidirectional since the keyword implies that the current event of that stream triggers the join.

Here is the sample statement above with unidirectional keyword, so that output occurs only when a TickEvent arrives and not when a NewsEvent arrives:

select * from TickEvent as t unidirectional, NewsEvent.std:unique(symbol) as n 
where t.symbol = n.symbol

It is oftentimes the case that an aggregation (count, sum, average) only needs to be calculated in the context of an arriving event or timer. Consider using the unidirectional keyword when aggregating over joined streams.

An EPL pattern is a first-class stream as well, providing a stream of data consisting of pattern matches. A time pattern, for example, can be useful to evaluate a join and produce output upon each interval.

This sample statement includes a pattern that fires every 5 seconds and thus triggers the join to evaluate and produce output, computing an aggregated total quantity per symbol every 5 seconds:

select symbol, sum(qty) from pattern[every timer:interval(5 sec)] unidirectional, 
  TickEvent.std:unique(symbol) t, NewsEvent.std:unique(symbol) as n 
where t.symbol = n.symbol group by symbol

Named windows as well as reference and historical data such as stored in your relational database, and data returned by a method invocation, can also be included in joins as discussed in Section 5.13, “Accessing Relational Data via SQL” and Section 5.14, “Accessing Non-Relational Data via Method Invocation”.

Related to joins are subqueries: a subquery is a select within another statement; see Section 5.11, “Subqueries”.

The engine performs extensive query analysis and planning, building internal indexes and strategies as required to allow fast evaluation of many types of queries.

5.12.2. Inner (Default) Joins

Each time an event arrives at one of the event streams, the event streams are joined and output events are produced according to the where clause when matching events are found for all joined streams.

This example joins 2 event streams. The first event stream consists of fraud warning events for which we keep the last 30 minutes. The second stream is withdrawal events for which we consider the last 30 seconds. The streams are joined on account number.

select fraud.accountNumber as accntNum, fraud.warning as warn, withdraw.amount as amount,
       max(fraud.timestamp, withdraw.timestamp) as timestamp, 'withdrawlFraud' as desc
  from com.espertech.esper.example.atm.FraudWarningEvent.win:time(30 min) as fraud,
       com.espertech.esper.example.atm.WithdrawalEvent.win:time(30 sec) as withdraw
 where fraud.accountNumber = withdraw.accountNumber

Joins can also include one or more pattern statements as the next example shows:

select * from FraudWarningEvent.win:time(30 min) as fraud,
    pattern [every w=WithdrawalEvent -> PINChangeEvent(acct=w.acct)].std:lastevent() as withdraw
 where fraud.accountNumber = withdraw.w.accountNumber

The statement above joins the last 30 minutes of fraud warnings with a pattern. The pattern consists of every withdrawal event that is followed by a PIN change event for the same account number. It joins the two event streams on account number. The last-event view instructs the join to only consider the last pattern match.

In a join and outer join, your statement must declare a data window view or other view onto each stream. Streams that are marked as unidirectional and named windows as well as database or methods in a join are an exception and do not require a view to be specified. If you are joining an event to itself via contained-event selection, views also do not need to be specified. The reason that a data window must be declared is that a data window specifies which events are considered for the join (i.e. last event, last 10 events, all events, last 1 second of events etc.).

The next example joins all FraudWarningEvent events that arrived since the statement was started, with the last 20 seconds of PINChangeEvent events:

select * from FraudWarningEvent.win:keepall() as fraud, PINChangeEvent.win:time(20 sec) as pin
 where fraud.accountNumber = pin.accountNumber

The above example employed the special keep-all view that retains all events.

5.12.3. Outer, Left and Right Joins

Esper supports left outer joins, right outer joins, full outer joins and inner joins in any combination between an unlimited number of event streams. Outer and inner joins can also join reference and historical data as explained in Section 5.13, “Accessing Relational Data via SQL”, as well as join data returned by a method invocation as outlined in Section 5.14, “Accessing Non-Relational Data via Method Invocation”.

The keywords left, right, full and inner control the type of the join between two streams. The optional on clause specifies one or more properties that join each stream. The synopsis is as follows:

...from stream_def [as name] 
  ((left|right|full outer) | inner) join stream_def 
  [on property = property [and property = property ...] ]
  [ ((left|right|full outer) | inner) join stream_def [on ...]]...

If the outer join is a left outer join, there will be an output event for each event of the stream on the left-hand side of the clause. For example, in the left outer join shown below we will get output for each event in the stream RfidEvent, even if the event does not match any event in the event stream OrderList.

select * from RfidEvent.win:time(30 sec) as rfid
       left outer join
       OrderList.win:length(10000) as orderList
     on rfid.itemId = orderList.itemId

Similarly, if the join is a right outer join, then there will be an output event for each event of the stream on the right-hand side of the clause. For example, in the right outer join shown below we will get output for each event in the stream OrderList, even if the event does not match any event in the event stream RfidEvent.

select * from RfidEvent.win:time(30 sec) as rfid
       right outer join
       OrderList.win:length(10000) as orderList
       on rfid.itemId = orderList.itemId

For all types of outer joins, if the join condition is not met, the select list is computed with the event properties of the arrived event while all other event properties are considered to be null.

The next type of outer join is a full outer join. In a full outer join, each time an event arrives at one of the event streams, one or more output events are produced. In the example below, when either an RfidEvent or an OrderList event arrives, one or more output events are produced. The next example shows a full outer join that joins on multiple properties:

select * from RfidEvent.win:time(30 sec) as rfid
       full outer join
       OrderList.win:length(10000) as orderList
       on rfid.itemId = orderList.itemId and rfid.assetId = orderList.assetId

The last type of join is an inner join. In an inner join, the engine produces an output event for each event of the stream on the left-hand side that matches at least one event on the right hand side considering the join properties. For example, in the inner join shown below we will get output for each event in the RfidEvent stream that matches one or more events in the OrderList data window:

select * from RfidEvent.win:time(30 sec) as rfid
       inner join
       OrderList.win:length(10000) as orderList
       on rfid.itemId = orderList.itemId and rfid.assetId = orderList.assetId

Patterns as streams in a join follow this rule: If no data window view is declared for the pattern then the pattern stream retains the last match. Thus a pattern must have matched at least once for the last row to become available in a join. Multiple rows from a pattern stream may be retained by declaring a data window view onto a pattern using the pattern [...].view_specification syntax.

This example outer joins multiple streams. Here the RfidEvent stream is outer joined to both ProductName and LocationDescription via left outer join:

select * from RfidEvent.win:time(30 sec) as rfid
      left outer join ProductName.win:keepall() as refprod
        on rfid.productId = refprod.prodId
      left outer join LocationDescription.win:keepall() as refdesc
        on rfid.location = refdesc.locId

If the optional on clause is specified, it may only employ the = equals operator and property names. Any other operators must be placed in the where-clause. The stream names that appear in the on clause may refer to any stream in the from-clause.

Your EPL may also provide no on clause. This is useful when the streams that are joined do not provide any properties to join on, for example when joining with a time-based pattern.

The next example employs a unidirectional left outer join such that the engine, every 1 second, outputs a count of the number of RfidEvent events in the 60-second time window.

select count(*) from
  pattern[every timer:interval(1)] unidirectional 
  left outer join
  RfidEvent.win:time(60 sec)

5.12.4. Unidirectional Joins

In a join or outer join your statement lists multiple event streams, views and/or patterns in the from clause. As events arrive into the engine, each of the streams (views, patterns) provides insert and remove stream events. The engine evaluates each insert and remove stream event provided by each stream, and joins or outer joins each event against data window contents of each stream, and thus generates insert and remove stream join results.

The direction of the join execution depends on which stream or streams are currently providing an insert or remove stream event for executing the join. A join is thus multidirectional, or bidirectional when only two streams are joined. A join can be made unidirectional if your application does not want new results when events arrive on a given stream or streams.

The unidirectional keyword can be used in the from clause to identify a single stream that provides the events to execute the join. If the keyword is present for a stream, all other streams in the from clause become passive streams. When events arrive or leave a data window of a passive stream then the join does not generate join results.

For example, consider a use case that requires us to join stock tick events (TickEvent) and news events (NewsEvent). The unidirectional keyword instructs the engine to generate results only when TickEvent events arrive, and not when NewsEvent events arrive or leave the 10-second time window:

select * from TickEvent as tick unidirectional, NewsEvent.win:time(10 sec) as news 
where tick.symbol = news.symbol

Aggregation functions in a unidirectional join aggregate within the context of each unidirectional event evaluation and are not cumulative. Thereby aggregation functions when used with unidirectional may evaluate faster as they do not need to consider a remove stream (data removed from data windows or named windows).

The count function in the next query returns, for each TickEvent, the number of matching NewsEvent events:

select count(*) from TickEvent as tick unidirectional, NewsEvent.win:time(10 sec) as news 
where tick.symbol = news.symbol

The following restrictions apply to unidirectional joins:

  1. The unidirectional keyword can only be specified for a single stream in the from clause.

  2. Receiving data from a unidirectional join via the pull API (iterator method) is not allowed. This is because the engine holds no state for the single stream that provides the events to execute the join.

  3. The stream that declares the unidirectional keyword cannot declare a data window view or other view for that stream, since remove stream events are not processed for the single stream.

5.12.5. Hints Related to Joins

When joining 3 or more streams (including any relational or non-relational sources as below) it can sometimes help to provide the query planner with instructions on how to best execute the join. The engine compiles a query plan for the EPL statement at statement creation time. You can output the query plan to logging (see configuration).
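
A minimal sketch of enabling query plan logging programmatically is shown next; it assumes the engine-defaults logging option setEnableQueryPlan described in the configuration chapter, and that the classes Configuration, EPServiceProvider and EPServiceProviderManager from the com.espertech.esper.client package are used:

Configuration config = new Configuration();
// instruct the engine to log the query plan for each statement as it is created
config.getEngineDefaults().getLogging().setEnableQueryPlan(true);
EPServiceProvider epService = EPServiceProviderManager.getDefaultProvider(config);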

An outer join that specifies only inner keywords for all streams is equivalent to a default (inner) join. The following two statements are equivalent:

select * from TickEvent.std:lastevent() as tick, 
    NewsEvent.std:lastevent() as news where tick.symbol = news.symbol

Equivalent to:

select * from TickEvent.std:lastevent() as tick 
	inner join NewsEvent.std:lastevent() as news on tick.symbol = news.symbol

For all types of joins, the query planner determines a query graph: The term is used here for all the information regarding what properties or expressions are used to join the streams. The query graph thus includes the where-clause expressions as well as outer-join on-clauses if this statement is an outer join. The query planner also computes a dependency graph which includes information about all historical data streams (relational and non-relational as below) and their input needs.

For default (inner) joins the query planner first attempts to find a path of execution as a nested iteration. For each stream the query planner selects the best order of streams available for the nested iteration considering the query graph and dependency graph. If the full depth of the join is achievable via nested iteration for all streams without full table scan then the query planner uses that nested iteration plan. If not, then the query planner re-plans considering a merge join (Cartesian) approach instead.

Specify the @Hint('PREFER_MERGE_JOIN') to instruct the query planner to prefer a merge join plan instead of a nested iteration plan. Specify the @Hint('FORCE_NESTED_ITER') to instruct the query planner to always use a nested iteration plan.

For example, consider the below statement. Depending on the number of matching rows in OrderBookOne and OrderBookTwo (named windows in this example, and assumed to be defined elsewhere) the performance of the join may be better using the merge join plan.

@Hint('PREFER_MERGE_JOIN') 
select * from TickEvent.std:lastevent() t, 
	OrderBookOne ob1, OrderBookTwo ob2
where ob1.symbol = t.symbol and ob2.symbol = t.symbol 
and ob1.price between t.buy and t.sell and ob2.price between t.buy and t.sell

For outer joins the query planner considers nested iteration and merge join (Cartesian) equally, and the above hints do not apply.

5.13. Accessing Relational Data via SQL

This section outlines how reference data and historical data that are stored in a relational database can be queried via SQL within EPL statements.

Esper can join and outer join all types of event streams against stored data, and can also access stored data via the iterator (poll) API. In order for such data sources to become accessible to Esper, some configuration is required. Section 14.4.8, “Relational Database Access” explains the required configuration for database access in greater detail, and includes information on configuring a query result cache.

Esper does not parse or otherwise inspect your SQL query. Therefore your SQL can make use of any database-specific SQL language extensions or features that your database provides.

If you have enabled query result caching in your Esper database configuration, Esper retains SQL query results in cache following the configured cache eviction policy.

Also if you have enabled query result caching in your Esper database configuration and provide EPL where clause and/or on clause (outer join) expressions, then Esper builds indexes on the SQL query results to enable fast lookup. This is especially useful if your queries return a large number of rows. To build the proper indexes, Esper inspects the expressions in your EPL where clause and, for outer joins, in your on clause, if present, looking for property comparisons with or without logical AND-relationships between properties. When a SQL query returns rows for caching, Esper builds and caches the appropriate index and lookup strategies for fast row matching against indexes.

Joins or outer joins in which only SQL statements or method invocations are listed in the from clause and no other event streams are termed passive joins. A passive join does not produce an insert or remove stream and therefore does not invoke statement listeners with results. A passive join can be iterated on (polled) using a statement's safeIterator and iterator methods.

There are no restrictions to the number of SQL statements or types of streams joined. The following restrictions currently apply:

  • Sub-views on an SQL query are not allowed; that is, one cannot create a time or length window on an SQL query. However, one can use the insert into syntax to make join results available to a further statement.

  • Your database software must support JDBC prepared statements that provide statement meta data at compilation time. Most major databases provide this function. A workaround is available for databases that do not provide this function.

  • JDBC drivers must support the getMetadata feature. A workaround is available as below for JDBC drivers that don't support getting metadata.

The next sections assume basic knowledge of SQL (Structured Query Language).

5.13.1. Joining SQL Query Results

To join an event stream against stored data, specify the sql keyword followed by the name of the database and a parameterized SQL query. The syntax to use in the from clause of an EPL statement is:

sql:database_name [" parameterized_sql_query "]

The engine uses the database_name identifier to obtain configuration information in order to establish a database connection, as well as settings that control connection creation and removal. Please see Section 14.4.8, “Relational Database Access” to configure an engine for database access.
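
As a point of reference, the following sketch registers a database named MyCustomerDB programmatically; the driver class, URL and credentials are placeholders, and the Configuration and ConfigurationDBRef classes are those of the com.espertech.esper.client package as described in Section 14.4.8:

Configuration config = new Configuration();
ConfigurationDBRef dbConfig = new ConfigurationDBRef();
// hypothetical driver class, URL and credentials; replace with your database settings
dbConfig.setDriverManagerConnection("my.jdbc.Driver", "jdbc:mydb://localhost/custdb", "username", "password");
config.addDatabaseReference("MyCustomerDB", dbConfig);
EPServiceProvider epService = EPServiceProviderManager.getDefaultProvider(config);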

Following the database name is the SQL query to execute. The SQL query can contain one or more substitution parameters. The SQL query string is placed in square brackets [ and ]. The SQL query itself can be placed in either single quotes (') or double quotes ("). The SQL query grammar is passed to your database software unchanged, allowing you to write any SQL query syntax that your database understands, including stored procedure calls.

Substitution parameters in the SQL query string take the form ${expression}. The engine resolves expression at statement execution time to the actual expression result by evaluating the events in the joined event stream or current variable values, if any event property references or variables occur in the expression. An expression may not contain EPL substitution parameters.

The engine determines the type of the SQL query output columns by means of the result set metadata that your database software returns for the statement. The actual query results are obtained via the getObject method on java.sql.ResultSet.

The sample EPL statement below joins an event stream consisting of CustomerCallEvent events with the results of an SQL query against the database named MyCustomerDB and table Customer:

select custId, cust_name from CustomerCallEvent,
  sql:MyCustomerDB [' select cust_name from Customer where cust_id = ${custId} ']

The example above assumes that CustomerCallEvent supplies an event property named custId. The SQL query selects the customer name from the Customer table. The where clause in the SQL matches the Customer table column cust_id with the value of custId in each CustomerCallEvent event. The engine executes the SQL query for each new CustomerCallEvent encountered.

If the SQL query returns no rows for a given customer id, the engine generates no output event. Otherwise the engine generates one output event for each row returned by the SQL query. An outer join as described in the next section can be used to control whether the engine should generate output events even when the SQL query returns no rows.
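
To illustrate how such join results are delivered, the sketch below creates the statement via the administrative API and attaches a listener; it assumes CustomerCallEvent is configured as a Map-based event type and that epService is an initialized EPServiceProvider (all classes from com.espertech.esper.client):

EPStatement stmt = epService.getEPAdministrator().createEPL(
    "select custId, cust_name from CustomerCallEvent, " +
    " sql:MyCustomerDB [' select cust_name from Customer where cust_id = ${custId} ']");
stmt.addListener(new UpdateListener() {
  public void update(EventBean[] newEvents, EventBean[] oldEvents) {
    if (newEvents == null) {
      return;
    }
    for (EventBean row : newEvents) {   // one row per matching Customer record
      System.out.println(row.get("custId") + " " + row.get("cust_name"));
    }
  }
});

// sending a CustomerCallEvent triggers the SQL query
Map<String, Object> call = new HashMap<String, Object>();
call.put("custId", 1001);
epService.getEPRuntime().sendEvent(call, "CustomerCallEvent");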

The next example adds a time window of 30 seconds to the event stream CustomerCallEvent. It also renames the selected properties to customerName and customerId to demonstrate how the naming of columns in an SQL query can be used in the select clause in the EPL query. And the example uses explicit stream names via the as keyword.

select customerId, customerName from
  CustomerCallEvent.win:time(30 sec) as cce,
  sql:MyCustomerDB ["select cust_id as customerId, cust_name as customerName from Customer 
                  where cust_id = ${cce.custId}"] as cq

Any window, such as the time window, generates insert stream (istream) events as events enter the window, and remove stream (rstream) events as events leave the window. The engine executes the given SQL query for each CustomerCallEvent in both the insert stream and the remove stream. As a performance optimization, the istream or rstream keywords in the select clause can be used to instruct the engine to only join insert stream or remove stream events, reducing the number of SQL query executions.
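
For example, the following variant (a sketch, shown here as statement-creation code and assuming an initialized epService) adds the istream keyword so that the SQL query executes only for events entering the time window and not for events leaving it:

epService.getEPAdministrator().createEPL(
    "select istream customerId, customerName from " +
    " CustomerCallEvent.win:time(30 sec) as cce, " +
    " sql:MyCustomerDB [\"select cust_id as customerId, cust_name as customerName " +
    "                    from Customer where cust_id = ${cce.custId}\"] as cq");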

Since any expression may be placed within the ${...} syntax, you may use variables or user-defined functions as well.

The next example assumes that a variable by name varLowerLimit is defined and that a user-defined function getLimit exists on the MyLib imported class that takes a LimitEvent as a parameter:

select * from LimitEvent le, 
  sql:MyCustomerDB [' select cust_name from Customer where 
      amount > ${max(varLowerLimit, MyLib.getLimit(le))} ']

The example above takes the higher of the current variable value or the value returned by the user-defined function to return only those customer names where the amount exceeds the computed limit.

5.13.2. SQL Query and the EPL Where Clause

Consider using the EPL where clause to join the SQL query result to your event stream. Similar to EPL joins and outer-joins that join event streams or patterns, the EPL where clause provides join criteria between the SQL query results and the event stream (as a side note, an SQL where clause is a filter of rows executed by your database on your database server before returning SQL query results).

Esper analyzes the expression in the EPL where clause and outer-join on clause, if present, and builds the appropriate indexes from that information at runtime, to ensure fast matching of event stream events to SQL query results, even if your SQL query returns a large number of rows. Your application must configure a cache for your database using the Esper configuration, as such indexes are held together with the regular data in the cache. If your application does not enable caching of SQL query results, the engine does not build indexes on cached data.

The sample EPL statement below joins an event stream consisting of OrderEvent events with the results of an SQL query against the database named MyRefDB and table SymbolReference:

select symbol, symbolDesc from OrderEvent as orders,
  sql:MyRefDB ['select symbolDesc from SymbolReference'] as reference
  where reference.symbol = orders.symbol

Notice how the EPL where clause joins the OrderEvent stream to the SymbolReference table. In this example, the SQL query itself does not have a SQL where clause and therefore returns all rows from table SymbolReference.

If your application enables caching, the SQL query fires only at the arrival of the first OrderEvent event. When the second OrderEvent arrives, the join execution uses the cached query result. If the caching policy that you specified in the Esper database configuration evicts the SQL query result from cache, then the engine fires the SQL query again to obtain a new result and places the result in cache.

If SQL result caching is enabled and your EPL where clause, as shown in the above example, provides the properties to join, then the engine indexes the SQL query results in cache and retains the index together with the query result in cache. Thus your application can benefit from high performance index-based lookups as long as the SQL query results are found in cache.

The SQL result caches operate on the level of all result rows for a given parameter set. For example, if your query returns 10 rows for a certain set of parameter values then the cache treats all 10 rows as a single entry keyed by the parameter values, and the expiry policy applies to all 10 rows and not to each individual row.

It is also possible to join multiple autonomous database systems in a single query, for example:

select symbol, symbolDesc from OrderEvent as orders,
  sql:My_Oracle_DB ['select symbolDesc from SymbolReference'] as reference,
  sql:My_MySQL_DB ['select orderList from orderHistory'] as history
  where reference.symbol = orders.symbol
  and history.symbol = orders.symbol 

5.13.3. Outer Joins With SQL Queries

You can use outer joins to join data obtained from an SQL query and control when an event is produced. Use a left outer join, such as in the next statement, if you need an output event for each event regardless of whether or not the SQL query returns rows. If the SQL query returns no rows, the join result populates null values into the selected properties.

select custId, custName from
  CustomerCallEvent as cce
  left outer join 
  sql:MyCustomerDB ["select cust_id, cust_name as custName 
                     from Customer where cust_id = ${cce.custId}"] as cq
  on cce.custId = cq.cust_id

The statement above always generates at least one output event for each CustomerCallEvent, containing all columns selected by the SQL query, even if the SQL query does not return any rows. Note the on clause that is required for outer joins. The on clause acts as an additional filter to rows returned by the SQL query.

5.13.4. Using Patterns to Request (Poll) Data

Pattern statements and SQL queries can also be applied together in useful ways. One such use is to poll or request data from a database at regular intervals or following the schedule of the crontab-like timer:at.

The next statement is an example that shows a pattern that fires every 5 seconds to query the NewOrder table for new orders:

insert into NewOrders
select orderId, orderAmount from
  pattern [every timer:interval(5 sec)],
  sql:MyCustomerDB ['select orderId, orderAmount from NewOrder']

5.13.5. Polling SQL Queries via Iterator

Usually your SQL query will take part in a join and thus be triggered by an event or pattern occurrence. Alternatively, your application may need to poll an SQL query directly, thereby using Esper's query execution and caching facilities to obtain event data and metadata.

Your EPL statement can specify an SQL statement without a join. Such a stand-alone SQL statement does not post new events, and may only be queried via the iterator poll API. Your EPL and SQL statement may still use variables.

The next statement assumes that a price_var variable has been declared. It selects from the relational database table named NewOrder all rows in which the price column is greater than the current value of the price_var EPL variable:

select * from sql:MyCustomerDB ['select * from NewOrder where ${price_var} > price']

Use the iterator and safeIterator methods on EPStatement to obtain results. The statement does not post events to listeners, it is strictly passive in that sense.
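
A minimal polling sketch follows, assuming the statement above has been created, the MyCustomerDB database and the price_var variable are configured, and epService is an initialized EPServiceProvider:

EPStatement stmt = epService.getEPAdministrator().createEPL(
    "select * from sql:MyCustomerDB ['select * from NewOrder where ${price_var} > price']");
SafeIterator<EventBean> it = stmt.safeIterator();
try {
  while (it.hasNext()) {
    EventBean row = it.next();
    System.out.println(row.get("price"));   // column names as returned by the SQL query
  }
}
finally {
  it.close();   // safe iterators must be closed to release the statement lock
}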

5.13.6. JDBC Implementation Overview

The engine translates SQL queries into JDBC java.sql.PreparedStatement statements by replacing ${name} parameters with '?' placeholders. It obtains name and type of result columns from the compiled PreparedStatement meta data when the EPL statement is created.

The engine supplies parameters to the compiled statement via the setObject method on PreparedStatement. The engine uses the getObject method on the compiled statement PreparedStatement to obtain column values.

5.13.7. Oracle Drivers and No-Metadata Workaround

Certain JDBC database drivers are known to not return metadata for precompiled prepared SQL statements. This is a problem because Esper requires metadata: it obtains SQL result set metadata to validate an EPL statement and to provide column types for output events. Such drivers generally do provide metadata for executed SQL statements, but not for precompiled SQL statements, and therefore require a workaround.

Please consult Chapter 14, Configuration for the configuration options available in relation to metadata retrieval.

To obtain metadata for an SQL statement, Esper can alternatively fire a SQL statement which returns the same column names and types as the actual SQL statement but without returning any rows. This kind of SQL statement is referred to as a sample statement in below workaround description. The engine can then use the sample SQL statement to retrieve metadata for the column names and types returned by the actual SQL statement.

Applications can provide a sample SQL statement to retrieve metadata via the metadatasql keyword:

sql:database_name ["parameterized_sql_query" metadatasql "sql_meta_query"] 

The sql_meta_query must be an SQL statement that returns the same number of columns, the same type of columns and the same column names as the parameterized_sql_query, and does not return any rows.

Alternatively, applications can choose not to provide an explicit sample SQL statement. If the EPL statement does not use the metadatasql syntax, the engine applies lexical analysis to the SQL statement. From the lexical analysis Esper generates a sample SQL statement adding a restrictive clause "where 1=0" to the SQL statement.

Alternatively, you can add the following tag to the SQL statement: ${$ESPER-SAMPLE-WHERE}. If the tag exists in the SQL statement, the engine does not perform lexical analysis and simply replaces the tag with the SQL where clause "where 1=0". Therefore this workaround is applicable to SQL statements that cannot be correctly lexically analyzed. The SQL text after the placeholder is not part of the sample query. For example:

select mycol from sql:myDB [
  'select mycol from mytesttable ${$ESPER-SAMPLE-WHERE} where ....'], ...

If your parameterized_sql_query SQL query contains vendor-specific SQL syntax, generation of the metadata query may fail to produce a valid SQL statement. If you experience an SQL error while fetching metadata, use any of the above workarounds with the Oracle JDBC driver.

5.13.8. SQL Input Parameter and Column Output Conversion

As part of database access configuration you may optionally specify SQL type mappings. These mappings apply to all queries against the same database identified by name.

If your application must perform SQL-query-specific or EPL-statement-specific mapping or conversion between types, the facility to register a conversion callback exists as follows.

Use the @Hook instruction and HookType.SQLCOL as part of your EPL statement text to register a statement SQL parameter or column conversion hook. Implement the interface com.espertech.esper.client.hook.SQLColumnTypeConversion to perform the input parameter or column value conversion.

A sample statement with annotation is shown:

@Hook(type=HookType.SQLCOL, hook='MyDBTypeConvertor')
select * from sql:MyDB ['select * from MyEventTable']

The engine expects MyDBTypeConvertor to resolve to a class (considering engine imports) and instantiates one instance of MyDBTypeConvertor for each statement.

5.13.9. SQL Row POJO Conversion

Your application may also directly convert an SQL result row into a Java object. This is an opportunity for your application to interrogate and transform the SQL row result data freely before packing the data into a Java class. Your application can additionally indicate to skip SQL result rows.

Use the @Hook instruction and HookType.SQLROW as part of your EPL statement text to register a statement SQL output row conversion hook. Implement the interface com.espertech.esper.client.hook.SQLOutputRowConversion to perform the output row conversion.

A sample statement with annotation is shown:

@Hook(type=HookType.SQLROW, hook='MyDBRowConvertor')
select * from sql:MyDB ['select * from MyEventTable']

The engine expects MyDBRowConvertor to resolve to a class (considering engine imports) and instantiates one MyDBRowConvertor instance for each statement.

5.14. Accessing Non-Relational Data via Method Invocation

Your application may need to join data that originates from a web service, a distributed cache, an object-oriented database or simply data held in memory by your application. Esper accommodates this need by allowing a method invocation (or procedure call or function) in the from clause of a statement.

The results of such a method invocation in the from clause play the same role as a relational database table in an inner or outer join in SQL. The role is thus dissimilar to the role of a user-defined function, which may occur in any expression such as in the select clause or the where clause. Both are backed by one or more static methods provided by your class library.

Esper can join and outer join an unlimited number and all types of event streams to the data returned by your method invocation. In addition, Esper can be configured to cache the data returned by your method invocations.

Joins or outer joins in which only SQL statements or method invocations are listed in the from clause and no other event streams are termed passive joins. A passive join does not produce an insert or remove stream and therefore does not invoke statement listeners with results. A passive join can be iterated on (polled) using a statement's safeIterator and iterator methods.

The following restrictions currently apply:

  • Sub-views on a method invocation are not allowed; that is, one cannot create a time or length window on a method invocation. However, one can use the insert into syntax to make join results available to a further statement.

5.14.1. Joining Method Invocation Results

The syntax for a method invocation in the from clause of an EPL statement is:

method:class_name.method_name[(parameter_expressions)]

The method keyword denotes a method invocation. It is followed by a class name and a method name separated by a dot (.) character. If you have parameters to your method invocation, these are placed in round brackets after the method name. Any expression is allowed as a parameter, and individual parameter expressions are separated by a comma. Expressions may also use event properties of the joined stream.

In the sample join statement shown next, the method 'lookupAsset' provided by class 'MyLookupLib' returns one or more rows based on the asset id (a property of the AssetMoveEvent) that is passed to the method:

select * from AssetMoveEvent, method:MyLookupLib.lookupAsset(assetId)

The following statement demonstrates the use of the where clause to join events to the rows returned by a method invocation, which in this example does not take parameters:

select assetId, assetDesc from AssetMoveEvent as asset, 
       method:MyLookupLib.getAssetDescriptions() as desc 
where asset.assetid = desc.assetid

Your method invocation may return zero, one or many rows for each method invocation. If you have caching enabled through configuration, then Esper can avoid the method invocation and instead use cached results. Similar to SQL joins, Esper also indexes cached result rows such that join operations based on the where clause or outer-join on clause can be very efficient, especially if your method invocation returns a large number of rows.

If the time taken by method invocations is critical to your application, you may configure local caches as Section 14.4.6, “Cache Settings for From-Clause Method Invocations” describes.
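
For reference, a configuration sketch for such a cache is shown below; the class name and cache size are illustrative, and the ConfigurationMethodRef settings are those described in Section 14.4.6:

Configuration config = new Configuration();
ConfigurationMethodRef methodConfig = new ConfigurationMethodRef();
methodConfig.setLRUCache(1000);   // keep up to 1000 method results, keyed by parameter values
config.addMethodRef("com.mypackage.MyLookupLib", methodConfig);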

Esper analyzes the expression in the EPL where clause and outer-join on clause, if present, and builds the appropriate indexes from that information at runtime, to ensure fast matching of event stream events to method invocation results, even if your method invocation returns a large number of rows. Your application must configure a cache for your method invocation using the Esper configuration, as such indexes are held together with the regular data in the cache. If your application does not enable caching of method invocation results, the engine does not build indexes on cached data.

5.14.2. Polling Method Invocation Results via Iterator

Usually your method invocation will take part in a join and thus be triggered by an event or pattern occurrence. Alternatively, your application may need to poll a method invocation directly, thereby using Esper's query execution and caching facilities to obtain event data and metadata.

Your EPL statement can specify a method invocation in the from clause without a join. Such a stand-alone method invocation does not post new events, and may only be queried via the iterator poll API. Your EPL statement may still use variables.

The next statement assumes that a category_var variable has been declared. It polls the getAssetDescriptions method passing the current value of the category_var EPL variable:

select * from method:MyLookupLib.getAssetDescriptions(category_var)

Use the iterator and safeIterator methods on EPStatement to obtain results. The statement does not post events to listeners, it is strictly passive in that sense.

5.14.3. Providing the Method

Your application must provide a Java class that exposes a public static method. The method must accept the same number and type of parameters as listed in the parameter expression list.

If your method invocation returns either no row or only one row, then the return type of the method can be a Java class or a java.util.Map. If your method invocation can return more than one row, then the return type of the method must be an array of Java class or an array of Map.

If you are using a Java class or an array of Java class as the return type, then the class must adhere to JavaBean conventions: it must expose properties through getter methods.

If you are using java.util.Map as the return type or an array of Map, then the map should have String-type keys and object values (Map<String, Object>). When using Map as the return type, your application must provide a second method that returns property metadata, as the next section outlines.

Your application method must return one of the following:

  1. A null value or an empty array to indicate an empty result (no rows).

  2. A Java object or Map to indicate a one-row result, or an array that consists of a single Java object or Map.

  3. An array of Java objects or Map instances to return multiple result rows.

As an example, consider the method 'getAssetDescriptions' provided by class 'MyLookupLib' as discussed earlier:

select assetId, assetDesc from AssetMoveEvent as asset,
       method:com.mypackage.MyLookupLib.getAssetDescriptions() as desc 
  where asset.assetid = desc.assetid

The 'getAssetDescriptions' method may return multiple rows and is therefore declared to return an array of the class 'AssetDesc'. The class AssetDesc is a POJO class (not shown here):

public class MyLookupLib {
  ...
  public static AssetDesc[] getAssetDescriptions() {
    ...
    return new AssetDesc[] {...};
  }
}

The example above specifies the fully-qualified Java class name of the class 'MyLookupLib' in the EPL statement. The package name does not need to be part of the EPL if your application imports the package using the auto-import configuration through the API or XML, as outlined in Section 14.4.5, “Class and package imports”.
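
For example, a configuration sketch that registers the import so the EPL can refer to the class by its simple name (the package name is the hypothetical one used above):

Configuration config = new Configuration();
config.addImport("com.mypackage.*");   // or config.addImport("com.mypackage.MyLookupLib")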

5.14.4. Using a Map Return Type

Your application may return java.util.Map or an array of Map from method invocations. If doing so, your application must provide metadata about each row: it must declare the property name and property type of each Map entry of a row. This information allows the engine to perform type checking of expressions used within the statement.

You declare the property names and types of each row by providing a method that returns property metadata. The metadata method must follow these conventions:

  1. The method providing the property metadata must have the same name as the invoked method, appended with the literal Metadata (for example, getAssetHistoryMetadata for getAssetHistory).

  2. The method must have an empty parameter list and must be declared public and static.

  3. The method providing the metadata must return a Map of String property name keys and java.lang.Class property name types (Map<String, Class>).

In the following example, a class 'MyLookupLib' provides a method to return historical data based on asset id and asset code:

select assetId, location, x_coord, y_coord from AssetMoveEvent as asset,
       method:com.mypackage.MyLookupLib.getAssetHistory(assetId, assetCode) as history

A sample implementation of the class 'MyLookupLib' is shown below.

public class MyLookupLib {
  ...
  // For each column in a row, provide the property name and type
  //
  public static Map<String, Class> getAssetHistoryMetadata() {
    Map<String, Class> propertyNames = new HashMap<String, Class>();
    propertyNames.put("location", String.class);
    propertyNames.put("x_coord", Integer.class);
    propertyNames.put("y_coord", Integer.class);
    return propertyNames;
  }
... 
  // Lookup rows based on assetId and assetCode
  // 
  public static Map<String, Object>[] getAssetHistory(String assetId, String assetCode) {
    Map[] rows = new Map[2];	// this sample returns 2 rows
    for (int i = 0; i < 2; i++) {
      rows[i] = new HashMap();
      rows[i].put("location", "somevalue");
      rows[i].put("x_coord", 100);
      // ... set more values for each row
    }
    return rows;
  }
}

In the example above, the 'getAssetHistoryMetadata' method provides the property metadata: the names and types of properties in each row. The engine calls this method once per statement to determine event typing information.

The 'getAssetHistory' method returns an array of Map objects that are two rows. The implementation shown above is a simple example. The parameters to the method are the assetId and assetCode properties of the AssetMoveEvent joined to the method. The engine calls this method for each insert and remove stream event in AssetMoveEvent.

To indicate that no rows are found in a join, your application method may return either a null value or an array of size zero.

5.15. Creating and Using Named Windows

A named window is a global data window that can take part in many statement queries, and that can be inserted-into and deleted-from by multiple statements. A named window holds events of the same type or supertype, unless used with a variant stream.

The create window clause declares a new named window. The named window starts up empty unless populated from an existing named window at time of creation. Events must be inserted into the named window using the insert into clause. Events can also be deleted from a named window via the on delete clause.

Events enter the named window by means of insert into clause of a select statement. Events leave a named window either because the expiry policy of the declared data window removes events from the named window, or through statements that use the on delete clause to explicitly delete from a named window.

To query a named window, simply use the window name in the from clause of your statement, including statements that contain subqueries, joins and outer-joins.

A named window may also decorate an event to preserve original events as described in Section 5.10.4, “Decorated Events” and Section 5.15.2.1, “Named Windows Holding Decorated Events”. In addition, columns of a named window are allowed to hold events themselves, as further explained in Section 5.10.5, “Event as a Property” and Section 5.15.2.2, “Named Windows Holding Events As Property”.

To tune subquery performance when the subquery selects from a named window, consider the hints discussed in Section 5.11.7, “Hints Related to Subqueries”.

5.15.1. Creating Named Windows: the Create Window clause

The create window statement creates a named window by specifying a window name and one or more data window views, as well as the type of event to hold in the named window.

There are two syntaxes for creating a named window: The first syntax allows modelling a named window after an existing event type or an existing named window. The second syntax is similar to the SQL create-table syntax and provides a list of column names and column types.

A new named window starts up empty. It must be explicitly inserted into by one or more statements, as discussed below. A named window can also be populated at time of creation from an existing named window.

If your application stops or destroys the statement that created the named window, any consuming statements no longer receive insert or remove stream events. The named window can also no longer be deleted from after it has been stopped or destroyed.

The create window statement posts to listeners any events that are inserted into the named window as new data. The statement posts all deleted events or events that expire out of the data window to listeners as the remove stream (old data). The named window contents can also be iterated on via the pull API to obtain the current contents of a named window.
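
As a sketch of the pull API, the statement that creates the named window can be iterated to obtain the current window contents; this uses the 'AllOrdersNamedWindow' example shown in the following section and assumes an initialized epService:

EPStatement namedWindow = epService.getEPAdministrator().createEPL(
    "create window AllOrdersNamedWindow.win:keepall() as OrderMapEventType");
// ... other statements insert into the named window ...
Iterator<EventBean> it = namedWindow.iterator();
while (it.hasNext()) {
  EventBean row = it.next();
  // inspect the current named window contents, for example row.get("symbol")
}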

5.15.1.1. Creation by Modelling after an Existing Type

The benefit of modelling a named window after an existing event type is that event properties can be nested, indexed, mapped or of any other type that your event objects provide as properties, including the type of the underlying event itself. Also, using the wildcard (*) operator means your EPL does not need to list each individual property explicitly.

The syntax for creating a named window by modelling the named window after an existing event type, is as follows:

[context context_name] 
create window window_name.view_specifications 
    [as] [select list_of_properties from] event_type_or_windowname
    [insert [where filter_expression]]

The window_name you assign to the named window can be any identifier. The name should not already be in use as an event type or stream name.

The view_specifications are one or more data window views that define the expiry policy for removing events from the data window. Named windows must explicitly declare a data window view. This is required to ensure that the policy for retaining events in the data window is well defined. To keep all events, use the keep-all view: It indicates that the named window should keep all events and only remove events from the named window that are deleted via the on delete clause. The view specification can only list data window views, derived-value views are not allowed since these don't represent an expiry policy. Data window views are listed in Chapter 12, EPL Reference: Views. View parameterization and staggering are described in Section 5.4.3, “Specifying Views”.

The select clause and list_of_properties are optional. If present, they specify the column names and, implicitly by definition of the event type, the column types of events held by the named window. Expressions other than column names are not allowed in the select list of properties. Wildcards (*) and wildcards with additional properties can also be used.

The event_type_or_windowname is required if using the model-after syntax. It provides the name of the event type of events held in the data window, unless column names and types have been explicitly selected via select. The name of an (existing) other named window is also allowed here. Please find more details in Section 5.15.6, “Populating a Named Window from an Existing Named Window”.

Finally, the insert clause and optional filter_expression are used if the new named window is modelled after an existing named window, and the data of the new named window is to be populated from the existing named window upon creation. The optional filter_expression can be used to exclude events.

You may refer to a context by specifying the context keyword followed by a context name. Contexts are described in more detail in Chapter 4, Context and Context Partitions. The effect of referring to a context is that your named window operates according to the context dimensional information as declared for the context. For usage and limitations please see the respective chapter.

The next statement creates a named window 'AllOrdersNamedWindow' for which the expiry policy is simply to keep all events. Assume that the event type 'OrderMapEventType' has been configured. The named window is to hold events of type 'OrderMapEventType':

create window AllOrdersNamedWindow.win:keepall() as OrderMapEventType

The below sample statement demonstrates the select syntax. It defines a named window in which each row has the three properties 'symbol', 'volume' and 'price'. This named window actively removes events from the window that are older than 30 seconds.

create window OrdersTimeWindow.win:time(30 sec) as 
  select symbol, volume, price from OrderEvent

In an alternate form, the as keyword can be used to rename columns, and constants may occur in the select clause as well:

create window OrdersTimeWindow.win:time(30 sec) as 
  select symbol as sym, volume as vol, price, 1 as alertId from OrderEvent

5.15.1.2. Creation By Defining Columns Names and Types

The second syntax for creating a named window is by supplying column names and types:

[context context_name] 
create window window_name.view_specifications [as] (column_name column_type 
  [,column_name column_type [,...]])

The column_name is an identifier providing the event property name. The column_type is also required for each column. Valid column types are listed in Section 5.18.1, “Creating Variables: the Create Variable clause” and are the same as for variable types.

For array-type columns, append [] (left and right square brackets) to the column type.

The next statement creates a named window:

create window SecurityEvent.win:time(30 sec) 
    (ipAddress string, userId String, numAttempts int, properties String[])

Named window columns can hold events by declaring the column type as the event type name. Array-type in combination with event-type is also supported.

The next two statements declare an event type and create a named window with a column of the defined event type:

create schema SecurityData (name String, roles String[])
create window SecurityEvent.win:time(30 sec) 
    (ipAddress string, userId String, secData SecurityData, historySecData SecurityData[])

5.15.1.3. Dropping or Removing Named Windows

There is no syntax to drop or remove a named window.

The destroy method on the EPStatement that created the named window removes the named window. However the implicit event type associated with the named window remains active since further statements may continue to use that type. Therefore a named window of the same name can only be created again if the type information matches the prior declaration for a named window.
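
A short sketch, assuming epService is an initialized EPServiceProvider and OrderEvent is a configured event type:

EPStatement createStmt = epService.getEPAdministrator().createEPL(
    "create window OrdersWindow.win:keepall() as OrderEvent");
// ... later: remove the named window; its implicit event type remains registered
createStmt.destroy();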

5.15.2. Inserting Into Named Windows

The insert into clause inserts events into named windows. Your application must ensure that the column names and types match the declared column names and types of the named window to be inserted into.

For inserting into a named window while simultaneously checking whether the inserted row already exists in the named window, or for an atomic update-insert operation on a named window, consider using on-merge as described in Section 5.15.9, “Triggered Upsert using the On-Merge Clause”. On-merge is similar to the SQL merge clause and provides what is known as an "Upsert" operation: update existing events, or if no existing event is found then insert a new event, all in one atomic operation provided by a single EPL statement.

In this example we first create a named window using some of the columns of an OrderEvent event type:

create window OrdersWindow.win:keepall() as select symbol, volume, price from OrderEvent

The insert into the named window selects individual columns to be inserted:

insert into OrdersWindow(symbol, volume, price) select name, count, price from FXOrderEvent

An alternative form is shown next:

insert into OrdersWindow select name as symbol, vol as volume, price from FXOrderEvent

Given the above statement, the engine inserts every arriving FXOrderEvent into the named window 'OrdersWindow'.

The following EPL statements create a named window for an event type backed by a Java class and insert into the window any 'OrderEvent' where the symbol value is IBM:

create window OrdersWindow.win:time(30) as com.mycompany.OrderEvent
insert into OrdersWindow select * from com.mycompany.OrderEvent(symbol='IBM')

The last example adds one column named 'derivedPrice' to the 'OrderEvent' type by specifying a wildcard, and uses a user-defined function to populate the column:

create window OrdersWindow.win:time(30) as select *, price as derivedPrice from OrderEvent
insert into OrdersWindow select *, MyFunc.func(price, percent) as derivedPrice from OrderEvent

Event representations based on Java base classes or interfaces, and subclasses or implementing classes, are compatible as these statements show:

// create a named window for the base class
create window OrdersWindow.std:unique(name) as select * from ProductBaseEvent
// The ServiceProductEvent class subclasses the ProductBaseEvent class
insert into OrdersWindow select * from ServiceProductEvent
// The MerchandiseProductEvent class subclasses the ProductBaseEvent class
insert into OrdersWindow select * from MerchandiseProductEvent

To avoid duplicate events inserted in a named window and atomically check if a row already exists, use on-merge as outlined in Section 5.15.9, “Triggered Upsert using the On-Merge Clause”. An example:

on ServiceProductEvent as spe merge OrdersWindow as win
where win.id = spe.id when not matched then insert select *

5.15.2.1. Named Windows Holding Decorated Events

Decorated events hold an underlying event and add additional properties to the underlying event, as described further in Section 5.10.4, “Decorated Events”.

Here we create a named window that decorates OrderEvent events by adding an additional property named priceTotal to each OrderEvent. A matching insert into statement is also part of the sample:

create window OrdersWindow.win:time(30) as select *, price as priceTotal from OrderEvent
insert into OrdersWindow select *, price * unit as priceTotal from ServiceOrderEvent

The property type of the additional priceTotal column is the property type of the existing price property of OrderEvent.

5.15.2.2. Named Windows Holding Events As Property

Columns in a named window may also hold an event itself. More information on the insert into clause providing event columns is in Section 5.10.5, “Event as a Property”.

The next sample creates a named window that specifies two columns: A column that holds an OrderEvent, and a column by name priceTotal. A matching insert into statement is also part of the sample:

create window OrdersWindow.win:time(30) as select this, price as priceTotal from OrderEvent
insert into OrdersWindow select order, price * unit as priceTotal  
from ServiceOrderEvent as order

Note that the this property must exist on the event and must return the event class itself (JavaBean events only). The property type of the additional priceTotal column is the property type of the existing price property.

5.15.3. Selecting From Named Windows

A named window can be referred to in the from clause of any statement. Filter criteria can also be specified. Additional views may be applied onto named windows; however, such views cannot include data window views.

A statement selecting all events from a named window 'AllOrdersNamedWindow' is shown next. The named window must first be created via the create window clause before use.

select * from AllOrdersNamedWindow

The statement as above simply receives the unfiltered insert stream of the named window and reports that stream to its listeners. The iterator method returns all events in the named window, if any.

If your application desires to obtain the events removed from the named window, use the rstream keyword as this statement shows:

select rstream * from AllOrdersNamedWindow

The next statement derives an average price per symbol from all events posted by a named window:

select symbol, avg(price) from AllOrdersNamedWindow group by symbol

Your application may create a consuming statement such as above on an empty named window, or your application may create the above statement on an already filled named window. The engine provides correct results in either case: At the time of statement creation the Esper engine internally initializes the consuming statement from the current named window, also taking your declared filters into consideration. Thus, your statement deriving data from a named window does not start empty if the named window already holds one or more events. A consuming statement also sees the remove stream of an already populated named window, if any.

If you require a subset of the data in the named window, you can specify one or more filter expressions onto the named window as shown here:

select symbol, avg(price) from AllOrdersNamedWindow(sector='energy') group by symbol

By adding a filter to the named window, the aggregation and grouping as well as any views that may be declared onto the named window receive a filtered insert and remove stream. The above statement thus outputs, continuously, the average price per symbol for all orders in the named window that belong to a certain sector.

A side note on variables in filter expressions that filter events from named windows: the engine initializes consuming statements at statement creation time and changes aggregation state continuously as events arrive. If the filter criteria contain variables and a variable value changes, then the engine does not re-evaluate or re-build aggregation state. In such a case you may want to place variables in the having clause, which evaluates on already-built aggregation state.
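
For illustration, the next sketch places a variable in the having clause instead of the filter; the variable name avgPriceThreshold is hypothetical and assumed to have been declared via create variable:

epService.getEPAdministrator().createEPL(
    "select symbol, avg(price) from AllOrdersNamedWindow " +
    " group by symbol having avg(price) > avgPriceThreshold");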

The following example further declares a view into the named window. Such a view can be a plug-in view or one of the built-in views, but cannot be a data window view (with the exception of the std:groupwin grouped-window view which is allowed).

select * from AllOrdersNamedWindow(volume>0, price>0).mycompany:mypluginview()

Data window views cannot be used onto named windows since named windows post insert and remove streams for the events entering and leaving the named window, thus the expiry policy and batch behavior are well defined by the data window declared for the named window. For example, the following is not allowed and fails at time of statement creation:

// not a valid statement
select * from AllOrdersNamedWindow.win:time(30 sec)

5.15.4. Triggered Select on Named Windows: the On Select clause

The on select clause performs a one-time, non-continuous query on a named window every time a triggering event arrives or a triggering pattern matches. The query can consider all events in the named window, or only events that match certain criteria, or events that correlate with an arriving event or a pattern of arriving events.

The syntax for the on select clause is as follows:

on event_type[(filter_criteria)] [as stream_name]
[insert into insert_into_def]
select select_list
from window_name [as stream_name]
[where criteria_expression]
[group by grouping_expression_list]
[having grouping_search_conditions]
[order by order_by_expression_list]

The event_type is the name of the type of events that trigger the query against the named window. It is optionally followed by filter_criteria which are filter expressions to apply to arriving events. The optional as keyword can be used to assign a stream name. Patterns or named windows can also be specified in the on clause; see the samples in Section 5.15.8.1, “Using Patterns in the On Delete Clause”.

The insert into clause works as described in Section 5.10, “Merging Streams and Continuous Insertion: the Insert Into Clause”. The select clause is described in Section 5.3, “Choosing Event Properties And Events: the Select Clause”. For all clauses the semantics are equivalent to a join operation: The properties of the triggering event or events are available in the select clause and all other clauses.

The window_name in the from clause is the name of the named window to select events from. The as keyword is also available to assign a stream name to the named window. The as keyword is helpful in conjunction with wildcard in the select clause to select named window events via the syntax select streamname.* .

The optional where clause contains a criteria_expression that correlates the arriving (triggering) event to the events to be considered from the named window. The criteria_expression may also simply filter for events in the named window to be considered by the query.

The group by clause, the having clause and the order by clause are all optional and work as described in earlier chapters.

The similarities and differences between an on select clause and a regular or outer join are as follows:

  1. A join is evaluated when any of the streams participating in the join have new events (insert stream) or events leaving data windows (remove stream). A join is therefore bi-directional or multi-directional. However, the on select statement has one triggering event or pattern that causes the query to be evaluated and is thus uni-directional.

  2. The query within the on select statement is not continuous: It executes only when a triggering event or pattern occurs. Aggregation and groups are computed anew considering the contents of the named window at the time the triggering event arrives.

The iterator of the EPStatement object representing the on select clause returns the last batch of selected events in response to the last triggering event, or null if the last triggering event did not select any rows.

For correlated queries that correlate triggering events with events held by a named window, Esper internally creates efficient indexes to enable high performance querying of events. It analyzes the where clause to build one or more indexes for fast lookup in the named window based on the properties of the triggering event.

The next statement demonstrates the concept. Upon arrival of a QueryEvent event the statement selects all events in the 'OrdersNamedWindow' named window:

on QueryEvent select win.* from OrdersNamedWindow as win

The engine executes the query on arrival of a triggering event, in this case a QueryEvent. It posts the query results to any listeners to the statement, in a single invocation, as the new data array. By prefixing the wildcard (*) selector with the stream name, the select clause returns only events of the named window and does not also return triggering events.

The where clause filters and correlates events in the named window with the triggering event, as shown next:

on QueryEvent(volume>0) as query
select query.symbol, query.volume, win.symbol  from OrdersNamedWindow as win
where win.symbol = query.symbol

Upon arrival of a QueryEvent, if that event has a value for the volume property that is greater than zero, the engine executes the query. The query considers all events currently held by the 'OrdersNamedWindow' that match the symbol property value of the triggering QueryEvent event. The engine then posts query results to the statement's listeners.

Aggregation, grouping and ordering of results are possible as this example shows:

on QueryEvent as queryEvent
select symbol, sum(volume) from OrdersNamedWindow as win
group by symbol
having volume > 0
order by symbol

The above statement outputs the total volume per symbol for those groups where the sum of the volume is greater than zero, ordered by symbol ascending. The engine computes and posts the output based on the current contents of the 'OrdersNamedWindow' named window considering all events in the named window, since the query does not have a where clause.

When using wildcard (*) to select from streams in an on-select clause, each stream, that is the triggering stream and the selected-upon named window, is selected, similar to a join. Therefore your wildcard select returns two columns: the triggering event and the selection result event, for each row.

on QueryEvent as queryEvent
select * from OrdersNamedWindow as win

The query above returns a queryEvent column and a win column for each event. If only a single stream's event is desired in the result, use select win.* instead.

To trigger an on-select when an update to the selected named window occurs or when the triggering event is the same event that is being inserted into the named window, specify the named window name as the event type.

The next query fires the select for every change to the named window OrdersNamedWindow:

on OrdersNamedWindow as trig 
select onw.symbol, sum(onw.volume) 
from OrdersNamedWindow as onw 
where onw.symbol = trig.symbol

5.15.5. Triggered Playback from Named Windows: the On Insert clause

The on insert clause is an on select clause as described in the prior section with the addition of an insert into clause.

Similar to the on select clause, the engine executes the query when a triggering event arrives. It then provides the query results as an event stream to further statements. It populates the event stream that is named in the insert into clause.

The statement below provides the query results to any consumers of the MyOrderStream, upon arrival of a QueryEvent event:

on QueryEvent as query
insert into MyOrderStream
select win.* from OrdersNamedWindow as win

Here is a sample consuming statement of the MyOrderStream. The statement further filters the events provided by the on insert statement by user id and reports a total of volume per symbol:

select symbol, sum(volume) from MyOrderStream(userId='user1') group by symbol

5.15.6. Populating a Named Window from an Existing Named Window

Your EPL statement may specify the name of an existing named window when creating a new named window, and may use the insert keyword to indicate that the new named window is to be populated from the events currently held by the existing named window.

For example, and assuming the named window OrdersNamedWindow already exists, this statement creates a new named window ScratchOrders and populates all orders in OrdersNamedWindow into the new named window:

create window ScratchOrders.win:keepall() as OrdersNamedWindow insert

The where keyword is also available to perform filtering, for example:

create window ScratchBuyOrders.win:time(10) as OrdersNamedWindow insert where side = 'buy'

5.15.7. Updating Named Windows: the On Update clause

An on update clause updates events held by a named window. The clause can be used to update all events, or only events that match certain criteria, or events that correlate with an arriving event or a pattern of arriving events.

For updating a named window and for simultaneously checking whether the updated row exists in the named window, or for an atomic update-insert operation on a named window, consider using on-merge as described in Section 5.15.9, “Triggered Upsert using the On-Merge Clause”. On-merge is similar to the SQL merge clause and provides what is known as an "Upsert" operation: Update existing events or, if no existing event(s) are found, insert a new event, all in one atomic operation provided by a single EPL statement.

The syntax for the on update clause is as follows:

on event_type[(filter_criteria)] [as stream_name]
update window_name [as stream_name]
set property_name = expression [, property_name = expression [,...]]
[where criteria_expression]

The event_type is the name of the type of events that trigger an update of rows in a named window. It is optionally followed by filter_criteria which are filter expressions to apply to arriving events. The optional as keyword can be used to assign a name for use in expressions and the where clause. Patterns and named windows can also be specified in the on clause.

The window_name is the name of the named window to update events. The as keyword is also available to assign a name to the named window.

The comma-separated list of property names and expressions sets the value of one or more properties. Subqueries may be part of expressions; however aggregation functions and the prev or prior function may not be used in expressions.

The optional where clause contains a criteria_expression that correlates the arriving (triggering) event to the events to be updated in the named window. The criteria_expression may also simply filter for events in the named window to be updated.

The iterator of the EPStatement object representing the on update clause can also be helpful: It returns the last batch of updated events in response to the last triggering event, in any order, or null if the last triggering event did not update any rows.

Statements that reference the named window receive the new event in the insert stream and the event prior to the update in the remove stream.
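
For instance, a minimal consuming sketch (using the AllOrdersNamedWindow from the examples below) that receives both the updated events as insert stream and the pre-update events as remove stream:

select irstream * from AllOrdersNamedWindow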

Let's look at a couple of examples. In the simplest form, this statement updates all events in the named window 'AllOrdersNamedWindow' when any 'UpdateOrderEvent' event arrives, setting the price property to zero for all events currently held by the named window:

on UpdateOrderEvent update AllOrdersNamedWindow set price = 0

This example adds a where clause to the example above. Upon arrival of a triggering 'ZeroVolumeEvent', the statement updates prices on any orders that have a volume of zero or less:

on ZeroVolumeEvent update AllOrdersNamedWindow set price = 0 where volume <= 0

The next example shows a more complete use of the syntax, and correlates the triggering event with events held by the named window:

on NewOrderEvent(volume>0) as myNewOrders
update AllOrdersNamedWindow as myNamedWindow 
set price = myNewOrders.price
where myNamedWindow.symbol = myNewOrders.symbol

In the above sample statement, only if a 'NewOrderEvent' event with a volume greater than zero arrives does the statement trigger. Upon triggering, all events in the named window that have the same value for the symbol property as the triggering 'NewOrderEvent' event are then updated (their price property is set to that of the arriving event). The statement also showcases the as keyword to assign a name for use in the where expression.

For correlated queries (as above) that correlate triggering events with events held by a named window, Esper internally creates efficient indexes to enable high performance update of events.

Your application can subscribe a listener to your on update statements to determine updated events. The statement posts any events that are updated to all listeners attached to the statement as new data, and the events prior to the update as old data. Upon iteration, the statement provides the last update event, if any.

The following example shows the use of tags and a pattern. It sets the price value of orders to that of either a 'FlushOrderEvent' or 'OrderUpdateEvent' depending on which arrived:

on pattern [every ord=OrderUpdateEvent(volume>0) or every flush=FlushOrderEvent] 
update AllOrdersNamedWindow as win
set price = case when ord.price is null then flush.price else ord.price end
where ord.id = win.id or flush.id = win.id

When updating indexed properties, use the syntax propertyName[index] = value with the index value being an integer number. When updating mapped properties, use the syntax propertyName(key) = value with the key being a string value.
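
As a sketch only, assuming the named window's event type declares an indexed property named prices and a mapped property named props (hypothetical property names):

// set the first indexed value and a mapped value upon arrival of an UpdateOrderEvent
on UpdateOrderEvent update AllOrdersNamedWindow set prices[0] = 0, props('status') = 'closed'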

The following restrictions apply:

  1. Each property to be updated must be writable.

  2. For underlying event representations that are Java objects, an event object class must implement the java.io.Serializable interface as discussed in Section 5.20.1, “Immutability and Updates” and must provide setter methods for updated properties.

  3. When using an XML underlying event type, event properties in the XML document representation are not available for update.

  4. Nested properties are not supported for update. Revision event types and variant streams may also not be updated.

5.15.8. Deleting From Named Windows: the On Delete clause

An on delete clause removes events from a named window. The clause can be used to remove all events, or only events that match certain criteria, or events that correlate with an arriving event or a pattern of arriving events.

The syntax for the on delete clause is as follows:

on event_type[(filter_criteria)] [as stream_name]
delete from window_name [as stream_name]
[where criteria_expression]

The event_type is the name of the type of events that trigger removal from the named window. It is optionally followed by filter_criteria which are filter expressions to apply to arriving events. The optional as keyword can be used to assign a name for use in the where clause. Patterns and named windows can also be specified in the on clause as described in the next section.

The window_name is the name of the named window to delete events from. The as keyword is also available to assign a name to the named window.

The optional where clause contains a criteria_expression that correlates the arriving (triggering) event to the events to be removed from the named window. The criteria_expression may also simply filter for events in the named window to be removed from the named window.

The iterator of the EPStatement object representing the on delete clause can also be helpful: It returns the last batch of deleted events in response to the last triggering event, in any order, or null if the last triggering event did not remove any rows.

Let's look at a couple of examples. In the simplest form, this statement deletes all events from the named window 'AllOrdersNamedWindow' when any 'FlushOrderEvent' arrives:

on FlushOrderEvent delete from AllOrdersNamedWindow

This example adds a where clause to the example above. Upon arrival of a triggering 'ZeroVolumeEvent', the statement removes from the named window any orders that have a volume of zero or less:

on ZeroVolumeEvent delete from AllOrdersNamedWindow where volume <= 0

The next example shows a more complete use of the syntax, and correlates the triggering event with events held by the named window:

on NewOrderEvent(volume>0) as myNewOrders
delete from AllOrdersNamedWindow as myNamedWindow 
where myNamedWindow.symbol = myNewOrders.symbol

In the above sample statement, only if a 'NewOrderEvent' event with a volume greater than zero arrives does the statement trigger. Upon triggering, all events in the named window that have the same value for the symbol property as the triggering 'NewOrderEvent' event are then removed from the named window. The statement also showcases the as keyword to assign a name for use in the where expression.

For correlated queries (as above) that correlate triggering events with events held by a named window, Esper internally creates efficient indexes to enable high performance removal of events especially from named windows that hold large numbers of events.

Your application can subscribe a listener to your on delete statements to determine removed events. The statement posts any events that are deleted from a named window to all listeners attached to the statement as new data. Upon iteration, the statement provides the last deleted event, if any.

5.15.8.1. Using Patterns in the On Delete Clause

By means of patterns the on delete clause and on select clause (described above) can look for more complex conditions to occur, possibly involving multiple events or the passing of time. The syntax for on delete with a pattern expression is shown next:

on pattern [pattern_expression] [as stream_name]
delete from window_name [as stream_name]
[where criteria_expression]

The pattern_expression is any pattern that matches zero or more arriving events. Tags can be used to name events in the pattern and can occur in the optional where clause to correlate events to be removed from the named window.

In the next example the triggering pattern fires every 10 seconds. The effect is that every 10 seconds the statement removes from 'MyNamedWindow' all rows:

on pattern [every timer:interval(10 sec)] delete from MyNamedWindow

The following example shows the use of tags in a pattern:

on pattern [every ord=OrderEvent(volume>0) or every flush=FlushOrderEvent] 
delete from OrderWindow as win
where ord.id = win.id or flush.id = win.id

The pattern above looks for OrderEvent events with a volume value greater than zero and tags such events as 'ord'. The pattern also looks for FlushOrderEvent events and tags such events as 'flush'. The where clause deletes from the 'OrderWindow' named window any events that match the value of the 'id' property of either of the arriving events.

5.15.9. Triggered Upsert using the On-Merge Clause

The on merge clause is similar to the SQL merge clause. It provides what is known as an "Upsert" operation: Update existing events or if no existing event(s) are found then insert a new event, all in an atomic operation provided by a single EPL statement.

The syntax for the on merge clause is as follows:

on event_type[(filter_criteria)] [as stream_name]
merge [into] window_name [as stream_name]
[where criteria_expression]
  when [not] matched [and search_condition]
    then [
      insert [into streamname]
          [ (property_name [, property_name] [,...]) ] 
          select select_expression [, select_expression[,...]]
          [where filter_expression]
      |
      update set property_name = expression [, property_name = expression [,...]]
          [where filter_expression]
      |
      delete
          [where filter_expression]
    ]
    [then [insert|update|delete]] [,then ...]
  [when ...  then ... [...]] 

The event_type is the name of the type of events that trigger the merge. It is optionally followed by filter_criteria which are filter expressions to apply to arriving events. The optional as keyword can be used to assign a name for use in the where clause. Patterns and named windows can also be specified in the on clause as described in prior sections.

The window_name is the name of the named window to insert, update or delete events. The as keyword is also available to assign a name to the named window.

The optional where clause contains a criteria_expression that correlates the arriving (triggering) event to the events to be considered from the named window. We recommend specifying a criteria expression that is as specific as possible.

Following the where clause is one or more when matched or when not matched clauses in any order. Each may have an additional search condition associated.

After each when [not] matched follow one or more then clauses that each contain the action to take: Either an insert, update or delete keyword.

After when not matched only insert action(s) are available. After when matched any insert, update and delete action(s) are available.

After insert follows, optionally, the into keyword and the stream name or named window to insert into. If no into and stream name is specified, the insert applies to the current named window. This is followed by an optional list of columns to insert, then the required select keyword and one or more select clause expressions. The wildcard (*) is also available in the select clause. An optional where-clause may follow that may return Boolean false to indicate that the action should not be applied.

After update follow the set keyword and one or more assignment pairs, followed by an optional where-clause that may return Boolean false to indicate that the action should not be applied.

After delete follows an optional where-clause that may return Boolean false to indicate that the action should not be applied.

When according to the where-clause criteria_expression the engine finds no events in the named window that match the condition, the engine evaluates each when not matched clause. If the optional search condition returns true or no search condition was provided then the engine performs all of the actions listed after each then.

When according to the where-clause criteria_expression the engine finds one or more events in the named window that match the condition, the engine evaluates each when matched clause. If the optional search condition returns true or no search condition was provided the engine performs all of the actions listed after each then.

The engine executes when matched and when not matched clauses in the order specified. If the optional search condition returns true or no search condition was specified, the engine takes the associated action (or multiple actions for multiple then keywords). When the block of actions has completed, the engine proceeds to the next matching event, if any. After completing all matching events the engine continues to the next triggering event, if any.

In the first example we declare a schema that provides a product id and that holds a total price:

create schema ProductTotalRec as (productId string, totalPrice double)

We create a named window that holds a row for each unique product:

create window ProductWindow.std:unique(productId) as ProductTotalRec

The events for this example are order events that hold an order id, product id, price, quantity and deleted-flag declared by the next schema:

create schema OrderEvent as (orderId string, productId string, price double, 
    quantity int, deletedFlag boolean)

The following EPL statement utilizes on-merge to total up the price for each product based on arriving order events:

on OrderEvent oe
  merge ProductWindow pw
  where pw.productId = oe.productId
  when matched
    then update set totalPrice = totalPrice + oe.price
  when not matched 
    then insert select productId, price as totalPrice

In the above example, when an order event arrives, the engine looks up in the product named window the matching row or rows for the same product id as the arriving event. In this example the engine always finds no row or one row as the product named window is declared with a unique data window based on product id. If the engine finds a row in the named window, it performs the update action adding up the price as defined under when matched. If the engine does not find a row in the named window it performs the insert action as defined under when not matched, inserting a new row.

The insert keyword may be followed by a list of columns as shown in this EPL snippet:

// equivalent to the insert shown in the last 2 lines in above EPL
...when not matched 
    then insert(productId, totalPrice) select productId, price

The second example demonstrates the use of a select-clause with wildcard, a search condition and the delete keyword. It creates a named window that holds order events and employs on-merge to insert order events for which no corresponding order id was found, update quantity to the quantity provided by the last arriving event and delete order events that are marked as deleted:

create window OrderWindow.win:keepall() as OrderEvent
on OrderEvent oe
  merge OrderWindow pw
  where pw.orderId = oe.orderId
  when not matched 
    then insert select *
  when matched and oe.deletedFlag=true
    then delete
  when matched
    then update set pw.quantity = oe.quantity, pw.price = oe.price

In the above example the oe.deletedFlag=true search condition instructs the engine to take the delete action only if the deleted-flag is set.

You may specify multiple actions by providing multiple then keywords each followed by an action. Each of the insert, update and delete actions can itself have a where-clause as well. If a where-clause exists for an action, the engine evaluates the where-clause and applies the action only if the where-clause returns Boolean true.

This example specifies two update actions and uses the where-clause to trigger different update behavior depending on whether the order event price is less than zero:

on OrderEvent oe
  merge OrderWindow pw
  where pw.orderId = oe.orderId
  when matched
    then update set pw.quantity = 0, pw.price = 0 where oe.price < 0
    then update set pw.quantity = oe.quantity, pw.price = oe.price where oe.price >= 0

To insert events into another stream and not the named window, use insert into streamname.

In the next example each matched-clause contains two actions, one action to insert a log event and a second action to insert, delete or update:

on OrderEvent oe
  merge OrderWindow pw
  where pw.orderId = oe.orderId
  when not matched 
    then insert into LogEvent select 'this is an insert' as name
    then insert select *
  when matched and oe.deletedFlag=true
    then insert into LogEvent select 'this is a delete' as name
    then delete
  when matched
    then insert into LogEvent select 'this is a update' as name
    then update set pw.quantity = oe.quantity, pw.price = oe.price

While the engine evaluates and executes all actions listed under the same matched-clause in order, you may not rely on updated field values of an earlier action to trigger the where-clause of a later action. Similarly you should avoid simultaneous update and delete actions for the same match: the engine does not guarantee whether the update or the delete takes final effect.

For correlated queries (as above) that correlate triggering events with events held by a named window, Esper internally creates efficient indexes to enable high performance update and removal of events especially from named windows that hold large numbers of events.

Your application can subscribe a listener to on merge statements to determine inserted, updated and removed events. Statements post any events that are inserted to, updated or deleted from a named window to all listeners attached to the statement as new data and removed data. Upon iteration, the statement provides the last inserted events, if any.

5.15.10. Explicitly Indexing Named Windows

You may explicitly create an index on a named window. When executing on-demand (fire-and-forget) queries as described in Section 13.4.3, “On-Demand Snapshot Query Execution” using prepared or unprepared queries, an explicit index can help speed query performance.

You do not need to explicitly create an index unless using on-demand (fire-and-forget) queries. Therefore you do not need to explicitly create an index for on-select, on-merge, on-update, on-delete or for subqueries or consuming statements against named windows. However, if the engine finds an explicitly-created index that matches the requirements of such a clause or subquery, it uses that index instead.

Please use the following syntax to create an explicit index on a named window:

create index index_name on named_window_name (property [hash|btree] 
    [, property [hash|btree]] [,...])

The index_name is the name assigned to the index. The name uniquely identifies the index and is used in engine logging.

The named_window_name is the name of an existing named window. If the named window has data already, the engine builds an index for the data in the named window.

The list of property names are the properties of events within the named window to include in the index. Following each property name you may specify the optional hash or btree keyword.

If you specify no keyword or the hash keyword for a property, the index will be a hash-based (unsorted) index in respect to that property. If you specify the btree keyword, the index will be a binary-tree-based sorted index in respect to that property. You may combine hash and btree properties for the same index. Specify btree for a property if you expect to perform numerical or string comparison using relational operators (<, >, >=, <=), the between or the in keyword for ranges and inverted ranges.

We list a few example EPL statements next that create a named window and create a single index:

// create a named window
create window UserProfileWindow.win:time(1 hour) select * from UserProfile
// create an index for the user id property only
create index UserProfileIndex on UserProfileWindow(userId)

Next, execute an on-demand fire-and-forget query as shown below; here we use the prepared version to demonstrate:

String query = "select * from UserProfileWindow where userId='Joe'";
EPOnDemandPreparedQuery prepared = epRuntime.prepareQuery(query);
// query performance excellent in the face of large number of rows
EPOnDemandQueryResult result = prepared.execute();
// ...later ...
prepared.execute();	// execute a second time

By default, the engine builds a hash-based index useful for direct comparison via equals (=). Filter expressions that look for ranges or that use the in or between keywords do not benefit from the hash-based index and should use the btree keyword.

The next example creates a composite index over two fields symbol and buyPrice:

// create a named window
create window TickEventWindow.win:time(1 hour) as (symbol string, buyPrice double)
// create an index 
create index idx1 on TickEventWindow(symbol hash, buyPrice btree)

A sample fire-and-forget query is shown below (this time the API calls are not shown):

// query performance excellent in the face of large number of rows
select * from TickEventWindow where symbol='GE' and buyPrice between 10 and 20

5.15.11. Versioning and Revision Event Type Use with Named Windows

As outlined in Section 2.9, “Updating, Merging and Versioning Events”, revision event types process updates or new versions of events held by a named window.

A revision event type is simply one or more existing pre-configured event types whose events are related, as configured by static configuration, by event properties that provide the same key values. The purpose of key values is to indicate that arriving events are related: An event amends, updates or adds properties to an earlier event that shares the same key values. No additional EPL is needed when using revision event types for merging event data.

Revision event types can be useful in these situations:

  1. Some of your events carry only partial information that is related to a prior event and must be merged together.

  2. Events arrive that add additional properties or change existing properties of prior events.

  3. Events may carry properties that have null values or properties that do not exist (for example events backed by Map or XML), and for such properties the earlier value must be used instead.

To better illustrate, consider a revision event type that represents events for creation and updates to user profiles. Let's assume the user profile creation events carry the user id and a full profile. The profile update events indicate only the user id and the individual properties that actually changed. The user id property shall serve as a key value relating profile creation events and update events.

A revision event type must be configured to instruct the engine which event types participate and what their key properties are. Configuration is described in Section 14.4.24, “Revision Event Type” and is not shown here.

Assume that an event type UserProfileRevisions has been configured to hold profile events, i.e. creation and update events related by user id. This statement creates a named window to hold the last 1 hour of current profiles per user id:

create window UserProfileWindow.win:time(1 hour) select * from UserProfileRevisions
insert into UserProfileWindow select * from UserProfileCreation
insert into UserProfileWindow select * from UserProfileUpdate

In revision event types, the term base event is used to describe events that are subject to update. Events that update, amend or add additional properties to base events are termed delta events. In the example, base events are profile creation events and delta events are profile update events.

Base events are expected to arrive before delta events. In the case where a delta event arrives and is not related by key value to a base event or a revision of the base event currently held by the named window, the engine ignores the delta event. Thus, considering the example, profile update events for a user id that does not have an existing profile in the named window are not applied.

When a base or delta event arrives, the insert and remove stream output by the named window are the current and the prior version of the event. Let's come back to the example. As creation events arrive that are followed by update events or more creation events for the same user id, the engine posts the current version of the profile as insert stream (new data) and the prior version of the profile as remove stream (old data).

Base events are also implicitly delta events. That is, if multiple base events with the same key property values arrive, then each base event provides a new version. In the example, if multiple profile creation events arrive for the same user id then new versions of the current profile for that user id are output by the engine for each base event, as it does for delta events.

The expiry policy as specified by view definitions applies to each distinct key value, or multiple distinct key values for composite keys. An expiry policy re-evaluates when new versions arrive. In the example, user profile events expire from the time window when no creation or update event for a given user id has been received for 1 hour.

Several strategies are available for merging or overlaying events as the configuration chapter describes in greater detail.

Any of the Map, XML and JavaBean event representations as well as plug-in event representations may participate in a revision event type. For example, profile creation events could be JavaBean events, while profile update events could be java.util.Map events.

Delta events may also add properties to the revision event type. For example, one could add a new event type with security information to the revision event type and such security-related properties become available on the resulting revision event type.

The following restrictions apply to revision event types:

  • Nested properties are only supported for the JavaBean event representation. Nested properties are not individually versioned; they are instead versioned by the containing property.

  • Dynamic, indexed and mapped properties are only supported for nested properties and not as properties of the revision event type itself.

5.16. Declaring an Event Type: Create Schema

EPL allows declaring an event type via the create schema clause and also by means of the static or runtime configuration API addEventType functions. The terms schema and event type have the same meaning in EPL.

Your application can declare an event type by providing the property names and types or by providing a class name. Your application may also declare a variant stream schema.

When using the create schema syntax to declare an event type, the engine automatically removes the event type when there are no started statements referencing the event type, including the statement that declared the event type. When using the configuration API, the event type stays cached even if there are no statements that refer to the event type and until explicitly removed via the runtime configuration API.

5.16.1. Declare an Event Type by Providing Names and Types

The synopsis of the create schema syntax providing property names and types is:

create schema schema_name [as] (property_name property_type [, property_name property_type [,...]])
  [inherits inherited_event_type[, inherited_event_type] [,...]]
  [starttimestamp timestamp_property_name]
  [endtimestamp timestamp_property_name]
  [copyfrom copy_type_name [, copy_type_name] [,...]]

The property_name is an identifier providing the event property name. The property_type is also required for each property. Valid property types are listed in Section 5.18.1, “Creating Variables: the Create Variable clause” and in addition include:

  1. Any Java class name, fully-qualified or the simple class name if imports are configured.

  2. Add left and right square brackets [] to any type to denote an array-type event property.

  3. Use an event type name as a property type.

The optional inherits keyword is followed by a comma-separated list of event type names that are the supertypes to the declared type.

The optional starttimestamp keyword is followed by a property name. Use this to tell the engine that your event has a timestamp. The engine checks that the property name exists on the declared type and returns a date-time value. Declare a timestamp property if you want your events to implicitly carry a timestamp value for convenient use with interval algebra methods as a start timestamp.

The optional endtimestamp keyword is followed by a property name. Use this together with starttimestamp to tell the engine that your event has a duration. The engine checks that the property name exists on the declared type and returns a date-time value. Declare an endtimestamp property if you want your events to implicitly carry a duration value for convenient use with interval algebra methods.

The optional copyfrom keyword is followed by a comma-separated list of event type names. For each event type listed, the engine looks up that type and adds all event property definitions to the newly-defined type, in addition to those listed explicitly (if any).

A few example event type declarations follow:

// Declare type SecurityEvent
create schema SecurityEvent as (ipAddress string, userId String, numAttempts int)
			
// Declare type AuthorizationEvent with the roles property being an array of String 
// and the hostinfo property being a POJO object
create schema AuthorizationEvent(group String, roles String[], hostinfo com.mycompany.HostNameInfo)

// Declare type CompositeEvent in which the innerEvents property is an array of SecurityEvent
create schema CompositeEvent(group String, innerEvents SecurityEvent[])

// Declare type WebPageVisitEvent that inherits all properties from PageHitEvent
create schema WebPageVisitEvent(userId String) inherits PageHitEvent

// Declare a type with start and end timestamp (i.e. event with duration).
create schema RoboticArmMovement (robotId string, startts long, endts long) 
  starttimestamp startts endtimestamp endts
  
// Create a type that has all properties of SecurityEvent plus a userName property
create schema ExtendedSecurityEvent (userName string) copyfrom SecurityEvent

// Create a type that has all properties of SecurityEvent 
create schema SimilarSecurityEvent () copyfrom SecurityEvent

// Create a type that has all properties of SecurityEvent and WebPageVisitEvent plus a userName property
create schema WebSecurityEvent (userName string) copyfrom SecurityEvent, WebPageVisitEvent

To elaborate on the inherits keyword, consider the following two schema definitions:

create schema Foo as (prop1 string)
create schema Bar() inherits Foo

Following the above schema, Foo is a supertype of Bar and therefore any Bar event also fulfills Foo and matches where Foo matches. An EPL statement such as select * from Foo returns any Foo event as well as any event that is a subtype of Foo, such as all Bar events. When your EPL queries don't use any Foo events there is no cost, thus inherits is generally an effective way to share properties between types.

The optional copyfrom keyword is for defining a schema based on another schema. This keyword causes the engine to copy property definitions: There is no inherits, extends, supertype or subtype relationship between the types listed.

To define an event type Bar that has the same properties as Foo:

create schema Foo as (prop1 string)
create schema Bar() copyfrom Foo

To define an event type Bar that has the same properties as Foo and that adds its own property prop2:

create schema Foo as (prop1 string)
create schema Bar(prop2 string) copyfrom Foo

5.16.2. Declare an Event Type by Providing a Class Name

When using Java classes as the underlying event representation your application may simply provide the class name:

create schema schema_name [as] class_name
  [starttimestamp timestamp_property_name]
  [endtimestamp timestamp_property_name]

The class_name must be a fully-qualified class name (including the package name) if imports are not configured. If your application configures imports, the simple class name without the package name suffices.

The optional starttimestamp and endtimestamp keywords have a meaning as defined earlier.

The next example statements declare an event type based on a class:

// Shows the use of a fully-qualified class name to declare the LoginEvent event type
create schema LoginEvent as com.mycompany.LoginValue

// When the configuration includes imports, the declaration does not need a package name
create schema LogoutEvent as SignoffValue

5.16.3. Declare a Variant Stream

A variant stream is a predefined stream into which events of multiple disparate event types can be inserted. Please see Section 5.10.3, “Merging Disparate Types of Events: Variant Streams” for rules regarding property visibility and additional information.

The synopsis is:

create variant schema schema_name [as] eventtype_name|* [, eventtype_name|*] [,...]

Provide the variant keyword to declare a variant stream.

The '*' wildcard character declares a variant stream that accepts any type of event inserted into the variant stream.

Provide eventtype_name if the variant stream should hold events of the given type only. When using insert into to insert into the variant stream the engine checks to ensure the inserted event type or its supertypes match the required event type.

A few examples are shown below:

// Create a variant stream that accepts only LoginEvent and LogoutEvent event types
create variant schema SecurityVariant as LoginEvent, LogoutEvent

// Create a variant stream that accepts any event type
create variant schema AnyEvent as *
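
As a usage sketch, assuming LoginEvent is a declared event type, a statement may then populate the variant stream via insert into; the engine verifies that the inserted type is allowed by the variant stream declaration:

insert into SecurityVariant select * from LoginEvent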

5.17. Splitting and Duplicating Streams

EPL offers a convenient syntax for splitting, routing or duplicating events into multiple streams, and for receiving unmatched events among a set of filter criteria.

For splitting a single event that acts as a container and exposes child events as a property of itself, consider the contained-event syntax as described in Section 5.19, “Contained-Event Selection”.

You may define a triggering event or pattern in the on-part of the statement followed by multiple insert into, select and where clauses.

The synopsis is:

[context context_name]
on event_type[(filter_criteria)] [as stream_name]
insert into insert_into_def select select_list [where condition]
[insert into insert_into_def select select_list [where condition]]
[insert into...]
[output first | all]

The event_type is the name of the type of events that trigger the split stream. It is optionally followed by filter_criteria which are filter expressions to apply to arriving events. The optional as keyword can be used to assign a stream name. Patterns and named windows can also be specified in the on clause.

Following the on-clause is one or more insert into clauses as described in Section 5.10, “Merging Streams and Continuous Insertion: the Insert Into Clause” and select clauses as described in Section 5.3, “Choosing Event Properties And Events: the Select Clause”.

Each select clause may be followed by a where clause containing a condition. If the condition is true for the event, the engine transforms the event according to the select clause and inserts it into the corresponding stream.

At the end of the statement can be an optional output clause. By default the engine inserts into the first stream for which the where clause condition matches, if one was specified, starting from the top. If you specify the output all keywords, the engine inserts into each stream (not only the first stream) for which the where clause condition matches or that does not have a where clause.

If, for a given event, none of the where clause conditions match, the statement listener receives the unmatched event. The statement listener only receives unmatched events and does not receive any transformed or inserted events. The iterator method to the statement returns no events.

You may specify an optional context name to the effect that the split-stream operates according to the context dimensional information as declared for the context. See Chapter 4, Context and Context Partitions for more information.

In the below sample statement, the engine inserts each OrderEvent into the LargeOrders stream if the order quantity is 100 or larger, or into the SmallOrders stream if the order quantity is smaller than 100:

on OrderEvent 
  insert into LargeOrders select * where orderQty >= 100
  insert into SmallOrders select *

The next example statement adds a new stream for medium-sized orders. The new stream receives orders that have an order quantity between 20 and 100:

on OrderEvent 
  insert into LargeOrders select orderId, customer where orderQty >= 100
  insert into MediumOrders select orderId, customer where orderQty between 20 and 100
  insert into SmallOrders select orderId, customer where orderQty > 0

As you may have noticed in the above statement, orders that have an order quantity of zero don't match any of the conditions. The engine does not insert such order events into any stream and the listener to the statement receives these unmatched events.

By default the engine inserts into the first insert into stream without a where clause or for which the where clause condition matches. To change the default behavior and insert into all matching streams instead (including those without a where clause), the output all keywords may be added to the statement.

The sample statement below shows the use of the output all keywords. The statement populates both the LargeOrders stream with large orders as well as the VIPCustomerOrders stream with orders for certain customers based on customer id:

on OrderEvent 
  insert into LargeOrders select * where orderQty >= 100
  insert into VIPCustomerOrders select * where customerId in (1001, 1002)
  output all

Since the output all keywords are present, the above statement inserts each order event into both streams, only one stream, or none of the streams, depending on the order quantity and customer id of the order event. The statement delivers order events not inserted into any of the streams to the listeners and/or subscriber to the statement.

The following limitations apply to split-stream statements:

  1. Aggregation functions and the prev and prior operators are not available in conditions and the select-clause.

5.18. Variables and Constants

A variable is a scalar, object or event value that is available for use in all statements including patterns. Variables can be used in an expression anywhere in a statement as well as in the output clause for output rate limiting.

Variables must first be declared or configured before use, by defining each variable's type and name. Variables can be created via the create variable syntax or declared by runtime or static configuration. Variables can be assigned new values by using the on set syntax or via the setVariableValue methods on EPRuntime. The EPRuntime also provides methods to read variable values.

A variable can be declared constant. A constant variable always has the initial value and cannot be assigned a new value. A constant variable can be used like any other variable and can be used wherever a constant is required. By declaring a variable constant you enable the Esper engine to optimize and perform query planning knowing that the variable value cannot change.

When declaring a class-type or an event type variable you may read or set individual properties within the same variable.

The engine guarantees consistency and atomicity of variable reads and writes on the level of context partition (this is a soft guarantee, see below). Variables are optimized for fast read access and are also multithread-safe.

Variables can also be removed, at runtime, by destroying all referencing statements including the statement that created the variable, or by means of the runtime configuration API.

5.18.1. Creating Variables: the Create Variable clause

The create variable syntax creates a new variable by defining the variable type and name. As an alternative to this syntax, variables can also be declared via the runtime and engine configuration options.

The synopsis for creating a variable is as follows:

create [constant] variable variable_type variable_name [ = assignment_expression ]

Specify the optional constant keyword when the variable is a constant whose associated value cannot be altered. Your EPL design should prefer constant variables over non-constant variables.

The variable_type can be any of the following:

variable_type
	:  string
	|  char 
	|  character
	|  bool 
	|  boolean
	|  byte
	|  short 
	|  int 
	|  integer 
	|  long 
	|  double
	|  float
	|  object
	|  class_name
	|  event_type_name

All variable types accept null values. The object type is for an untyped variable that can be assigned any value. You can provide a fully-qualified class name to declare a variable of that Java class type. You can also supply the name of an event type to declare a variable that holds an event of that type.

The variable_name is an identifier that names the variable. The variable name should not already be in use by another variable.

The assignment_expression is optional. Without an assignment expression the initial value for the variable is null. If present, it supplies the initial value for the variable.

The EPStatement object of the create variable statement provides access to variable values. The pull API methods iterator and safeIterator return the current variable value. Listeners to the create variable statement subscribe to changes in variable value: the engine posts the new and old value of the variable to all listeners when the variable value is updated by an on set statement.

The example below creates a variable that provides a threshold value. The name of the variable is var_threshold and its type is long. The variable's initial value is null as no other value has been assigned:

create variable long var_threshold

This statement creates an integer-type variable named var_output_rate and initializes it to the value ten (10):

create variable integer var_output_rate = 10

The next statement declares a constant string-type variable:

create constant variable string const_filter_symbol = 'GE'

In addition to creating a variable via the create variable syntax, the runtime and engine configuration API also allows adding variables. The next code snippet illustrates the use of the runtime configuration API to create a string-typed variable:

epService.getEPAdministrator().getConfiguration()
  .addVariable("myVar", String.class, "init value");

The engine removes the variable if the statement that created the variable is destroyed and all statements that reference the variable are also destroyed. The getVariableNameUsedBy and the removeVariable methods, both part of the runtime ConfigurationOperations API, provide usage information and can remove a variable. If the variable was added via configuration, it can only be removed via the configuration API.

5.18.2. Setting Variable Values: the On Set clause

The on set statement assigns a new value to one or more variables when a triggering event arrives or a triggering pattern occurs. Use the setVariableValue methods on EPRuntime to assign variable values programmatically.

The synopsis for setting variable values is:

on event_type[(filter_criteria)] [as stream_name]
  set variable_name = expression [, variable_name = expression [,...]]

The event_type is the name of the type of events that trigger the variable assignments. It is optionally followed by filter_criteria which are filter expressions to apply to arriving events. The optional as keyword can be used to assign a stream name. Patterns and named windows can also be specified in the on clause.

The comma-separated list of variable names and expressions sets the value of one or more variables. Subqueries may be part of expressions; however aggregation functions and the prev or prior function may not be used in expressions.
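
For example, the following sketch (assuming a RefreshEvent trigger type, a variable var_max_price and a named window LastPriceWindow, all hypothetical names) assigns a variable from a subquery result:

on RefreshEvent set var_max_price = (select max(price) from LastPriceWindow)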

All new variable values are applied atomically: the changes to variable values by the on set statement become visible to other statements all at the same time. No changes are visible to other processing threads until the on set statement completed processing, and at that time all changes become visible at once.

The EPStatement object provides access to variable values. The pull API methods iterator and safeIterator return the current variable values for each of the variables set by the statement. Listeners to the statement subscribe to changes in variable values: the engine posts new variable values of all variables to any listeners.

In the following example, a variable by name var_output_rate has been declared previously. When a NewOutputRateEvent event arrives, the variable is updated to a new value supplied by the event property 'rate':

on NewOutputRateEvent set var_output_rate = rate

The next example shows two variables that are updated when a ThresholdUpdateEvent arrives:

on ThresholdUpdateEvent as t 
  set var_threshold_lower = t.lower,
      var_threshold_higher = t.higher

The sample statement shown next counts the number of pattern matches using a variable. The pattern looks for OrderEvent events that are followed by CancelEvent events for the same order id within 10 seconds of the OrderEvent:

on pattern[every a=OrderEvent -> (CancelEvent(orderId=a.orderId) where timer:within(10 sec))]
  set var_counter = var_counter + 1

5.18.3. Using Variables

A variable name can be used in any expression and can also occur in an output rate limiting clause. This section presents examples and discusses performance, consistency and atomicity attributes of variables.

The next statement assumes that a variable named 'var_threshold' was created to hold a total price threshold value. The statement outputs an event when the total price for a symbol is greater than the current threshold value:

select symbol, sum(price) from TickEvent 
group by symbol 
having sum(price) > var_threshold

In this example we use a variable to dynamically change the output rate on-the-fly. The variable 'var_output_rate' holds the current rate at which the statement posts a current count to listeners:

select count(*) from TickEvent output every var_output_rate seconds

Variables are optimized towards high read frequency and lower write frequency. Variable reads do not incur locking overhead (99% of the time) while variable writes do incur locking overhead.

The engine softly guarantees consistency and atomicity of variables when your statement executes in response to an event or timer invocation. Variables acquire a stable value (implemented by versioning) when your statement starts executing in response to an event or timer invocation, and variables do not change value during execution. When one or more variable values are updated via on set statements, the changes to all updated variables become visible to statements as one unit and only when the on set statement completes successfully.

The atomicity and consistency guarantee is a soft guarantee. If any of your application statements, in response to an event or timer invocation, execute for a time interval longer than 15 seconds (the default interval length), then the engine may use current variable values after the 15 seconds have passed, rather than the then-current variable values at the time the statement started executing in response to an event or timer invocation.

The length of the time interval that variable values are held stable for the duration of execution of a given statement is by default 15 seconds, but can be configured via engine default settings.

5.18.4. Object-Type Variables

A variable of type object (or java.lang.Object via the API) can be assigned any value including null. When using an object-type variable in an expression, your statement may need to cast the value to the desired type.

The following sample EPL creates a variable by name varobj of type object:

create variable object varobj
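As a hedged illustration of reading such a variable with a cast, consider the sketch below; the ThresholdEvent and TickEvent types and their newThreshold and price properties are assumptions for this example only:

// assumed event types: ThresholdEvent(newThreshold double), TickEvent(price double)
on ThresholdEvent set varobj = newThreshold
select * from TickEvent(price > cast(varobj, double))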

5.18.5. Class and Event-Type Variables

The create variable syntax and the API accept a fully-qualified class name or alternatively the name of an event type. This is useful when you want a single variable to have multiple property values to read or set.

The next statement assumes that the event type PageHitEvent is declared:

create variable PageHitEvent varPageHitZero

These example statements show two ways of assigning to the variable:

// You may assign the complete event
on PageHitEvent(ip='0.0.0.0') pagehit set varPageHitZero = pagehit
// Or assign individual properties of the event
on PageHitEvent(ip='0.0.0.0') pagehit set varPageHitZero.userId = pagehit.userId

Similarly statements may use properties of class or event-type variables as this example shows:

select * from FirewallEvent(userId=varPageHitZero.userId)

When using class or event-type variables, in order for the engine to assign property values, the underlying event type must allow writing property values. If using JavaBean event classes the class must have setter methods and a default constructor.
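The following is a sketch of what such a writable JavaBean event class could look like; the property set is only an assumption based on the example above:

// sketch of a writable JavaBean event class eligible for property assignment
public class PageHitEvent {
  private String userId;
  private String ip;

  public PageHitEvent() {}                 // default (no-argument) constructor

  public String getUserId() { return userId; }
  public void setUserId(String userId) { this.userId = userId; }   // setter methods required

  public String getIp() { return ip; }
  public void setIp(String ip) { this.ip = ip; }
}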

5.19. Contained-Event Selection

Contained-event selection is for use when an event contains properties that are themselves events, or more generally when your application needs to split an event into multiple events. One example is when application events are coarse-grained structures and you need to perform bulk operations on the rows of the property graph in an event.

Use the contained-event selection syntax in a filter expression such as in a pattern, from clause, subselect, on-select and on-delete. This section provides the synopsis and examples.

To review, in the from clause a contained_selection may appear after the event stream name and filter criteria, and before any view specifications.

The synopsis for contained_selection is as follows:

[select select_expressions from] 
  contained_expression [@type(eventtype_name)] [as alias_name]
  [where filter_expression]

The select clause and select_expressions are optional and may be used to select specific properties of contained events.

The contained_expression is required and returns individual events. The expression can, for example, be an event property name that returns an event fragment, i.e. a property that can itself be represented as an event by the underlying event representation. Simple values such as integer or string are not fragments. The expression can also be any other expression such as a single-row function or a script that returns either an array or a java.util.Collection of events.

Provide the @type(name) annotation after the contained expression to name the event type of events returned by the expression. The annotation is optional and not needed when the contained-expression is an event property that returns a class or other event fragment.

The alias_name can be provided to assign a name to the expression result value rows.

The where clause and filter_expression is optional and may be used to filter out properties.

As an example event, consider a media order. A media order consists of order items as well as product descriptions. A media order event can be represented as an object graph (POJO event representation), or a structure of nested Maps (Map event representation) or a XML document (XML DOM or Axiom event representation) or other custom plug-in event representation.

To illustrate, a sample media order event in XML event representation is shown below. An XML event type can optionally be strongly-typed with an explicit XML XSD schema that we don't show here. Note that Map and POJO representations can be considered equivalent for the purpose of this example.

Let us now assume that we have declared the event type MediaOrder as being represented by the root node <mediaorder> of such an XML snippet:

<mediaorder>
  <orderId>PO200901</orderId>
  <items>
    <item>
      <itemId>100001</itemId>
      <productId>B001</productId>
      <amount>10</amount>
      <price>11.95</price>
    </item>
  </items>
  <books>
    <book>
      <bookId>B001</bookId>
      <author>Heinlein</author>
      <review>
        <reviewId>1</reviewId>
        <comment>best book ever</comment>
      </review>
    </book>
    <book>
      <bookId>B002</bookId>
      <author>Isaac Asimov</author>
    </book>
  </books>
</mediaorder>

The next query utilizes the contained-event selection syntax to return each book:

select * from MediaOrder[books.book]

The result of the above query is one event per book. Output events contain only the book properties and not any of the mediaorder-level properties.

Note that, when using listeners, the engine delivers multiple results in one invocation of each listener. Therefore listeners to the above statement can expect a single invocation passing all book events within one media order event as an array.
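The sketch below shows a listener coded against this delivery contract; the engine instance name epService is an assumption for illustration:

// sketch: one listener invocation per MediaOrder event, one row per book
EPStatement stmt = epService.getEPAdministrator().createEPL(
    "select * from MediaOrder[books.book]");
stmt.addListener(new UpdateListener() {
  public void update(EventBean[] newEvents, EventBean[] oldEvents) {
    for (EventBean book : newEvents) {
      System.out.println("bookId: " + book.get("bookId"));
    }
  }
});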

To better illustrate the position of the contained-event selection syntax in a statement, consider the next two queries:

select * from MediaOrder(orderId='PO200901')[books.book]

The above query returns each book only for media orders with a given order id. This query illustrates a contained-event selection and a view:

select count(*) from MediaOrder[books.book].std:unique(bookId)

The sample above counts each book unique by book id.

Contained-event selection can be staggered. When staggering multiple contained-event selections the staggered contained-event selection is relative to its parent.

This example demonstrates staggering contained-event selections by selecting each review of each book:

select * from MediaOrder[books.book][review]

Listeners to the query above receive a row for each review of each book. Output events contain only the review properties and not the book or media order properties.

The following is not valid:

// not valid
select * from MediaOrder[books.book.review]

The book property is an indexed property (an array or collection) and thereby requires an index in order to determine which book to use. The expression books.book[1].review is valid and means all reviews of the second (index 1) book.

The contained-event selection syntax is part of the filter expression and may therefore occur in patterns and anywhere a filter expression is valid.

A pattern example is below. The example assumes that a Cancel event type has been defined that also has an orderId property:

select * from pattern [c=Cancel -> books=MediaOrder(orderId = c.orderId)[books.book] ]

When used in a pattern, a filter with a contained-event selection returns an array of events, similar to the match-until clause in patterns. The above statement returns, in the books property, an array of book events.

5.19.1. Select Clause in a Contained-Event Selection

The optional select clause provides control over which fields are available in output events. The expressions in the select clause apply only to the properties available underneath the property in the from clause, and the properties of the enclosing event.

When no select is specified, only the properties underneath the selected property are available in output events.

In summary, the select clause may contain:

  1. Any expressions, wherein properties are resolved relative to the property in the from clause.

  2. Use the wildcard (*) to provide all properties that exist under the property in the from clause.

  3. Use the alias_name.* syntax to provide all properties that exist under a property in the from clause.

The next query's select clause selects each review for each book, and the order id as well as the book id of each book:

select * from MediaOrder[select orderId, bookId from books.book][select * from review]
// ... equivalent to ...
select * from MediaOrder[select orderId, bookId from books.book][review]

Listeners to the statement above receive an event for each review of each book. Each output event has all properties of the review row, and in addition the bookId of each book and the orderId of the order. Thus bookId and orderId are found in each result event, duplicated when there are multiple reviews per book and order.

The above query uses wildcard (*) to select all properties from reviews. As has been discussed as part of the select clause, the wildcard (*) and property_alias.* do not copy properties for performance reasons. The wildcard syntax instead specifies the underlying type, and additional properties are added onto that underlying type if required. Only one wildcard (*) and property_alias.* (unless used with a column rename) may therefore occur in the select clause list of expressions.

All the following queries produce an output event for each review of each book. The next sample queries illustrate the options available to control the fields of output events.

The output events produced by the next query have all properties of each review and no other properties available:

select * from MediaOrder[books.book][review]

The following query is not a valid query, since the order id and book id are not part of the contained-event selection:

// Invalid select clause: orderId and bookId not produced.
select orderId, bookId from MediaOrder[books.book][review]

This query is valid. Note that output events carry only the orderId and bookId properties and no other data:

select orderId, bookId from MediaOrder[books.book][select orderId, bookId from review]
//... equivalent to ...
select * from MediaOrder[select orderId, bookId from books.book][review]

This variation produces output events that have all properties of each book and only reviewId and comment for each review:

select * from MediaOrder[select * from books.book][select reviewId, comment from review]
// ... equivalent to ...
select * from MediaOrder[books.book as book][select book.*, reviewId, comment from review]

The output events of the next EPL have all properties of the order and only bookId and reviewId for each review:

select * from MediaOrder[books.book as book]
    [select mediaOrder.*, bookId, reviewId from review] as mediaOrder

This EPL produces output events with 3 columns: a column named mediaOrder that is the order itself, a column named book for each book and a column named review that holds each review:

insert into ReviewStream select * from MediaOrder[books.book as book]
  [select mo.* as mediaOrder, book.* as book, review.* as review from review as review] as mo
// .. and a sample consumer of ReviewStream...
select mediaOrder.orderId, book.bookId, review.reviewId from ReviewStream

Please note these limitations:

  1. Sub-selects, aggregation functions and the prev and prior operators are not available in contained-event selection.

  2. Expressions in the select and where clause of a contained-event selection can only reference properties relative to the current event and property.

5.19.2. Where Clause in a Contained-Event Selection

The optional where clause may be used to filter out properties at the same level that the where-clause occurs.

The properties in the filter expression must be relative to the property in the from clause or the enclosing event.

This query outputs all books with a given author:

select * from MediaOrder[books.book where author = 'Heinlein']

This query outputs each review of each book where a review comment contains the word 'good':

select * from MediaOrder[books.book][review where comment like '%good%']

5.19.3. Contained-Event Selection and Joins

This section discusses contained-event selection in joins.

When joining within the same event, no views need to be specified. Recall that in a join or outer join between different streams there must be views specified that hold the data to be joined. For such self-joins, no views are required and the join executes against the data returned by the same event.

This query inner-joins items to books where book id matches the product id:

select book.bookId, item.itemId 
from MediaOrder[books.book] as book, 
      MediaOrder[items.item] as item 
where productId = bookId

Query results for the above query when sending the media order event as shown earlier are:

book.bookId    item.itemId
B001           100001

The next example query is a left outer join. It returns all books and their items, and for books without item it returns the book and a null value:

select book.bookId, item.itemId 
from MediaOrder[books.book] as book 
  left outer join 
    MediaOrder[items.item] as item 
  on productId = bookId

Query results for the above query when sending the media order event as shown earlier are:

book.bookId    item.itemId
B001           100001
B002           null

A full outer join combines the results of both left and right outer joins. The joined table will contain all records from both tables, and fill in null values for missing matches on either side.

This example query is a full outer join, returning all books as well as all items, and filling in null values for book id or item id if no match is found:

select orderId, book.bookId,item.itemId 
from MediaOrder[books.book] as book 
  full outer join 
     MediaOrder[select orderId, * from items.item] as item 
  on productId = bookId 
order by bookId, item.itemId asc

As in all other continuous queries, aggregation results are cumulative from the time the statement was created.

The following query counts the cumulative number of items in which the product id matches a book id:

select count(*) 
from MediaOrder[books.book] as book, 
      MediaOrder[items.item] as item 
where productId = bookId

The unidirectional keyword in a join indicates to the query engine that aggregation state is not cumulative. The next query counts the number of items in which the product id matches a book id for each event:

select count(*) 
from MediaOrder[books.book] as book unidirectional, 
      MediaOrder[items.item] as item 
where productId = bookId

5.19.4. Sentence and Word Example

The next example splits an event representing a sentence into multiple events in which each event represents a word. It represents all events and the logic to split events into contained events as Java code. The next section has additional examples that use Map-type events and put contained-event logic into a separate expression or script.

The sentence event in this example is represented by a class declared as follows:

public class SentenceEvent {
  private final String sentence;

  public SentenceEvent(String sentence) {
    this.sentence = sentence;
  }

  public WordEvent[] getWords() {
    String[] split = sentence.split(" ");
    WordEvent[] words = new WordEvent[split.length];
    for (int i = 0; i < split.length; i++) {
      words[i] = new WordEvent(split[i]);
    }
    return words;
  }
}

The sentence event as above provides an event property words that returns each word event.

The declaration of word event is also a class:

public class WordEvent {
  private final String word;

  public WordEvent(String word) {
    this.word = word;
  }

  public String getWord() {
    return word;
  }
}

The EPL statement to populate a stream of words from a sentence event is:

insert into WordStream select * from SentenceEvent[words]

Finally, the API call to send a sentence event to the engine is shown here:

epService.getEPRuntime().sendEvent(new SentenceEvent("Hello Word Contained Events"));
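For completeness, a configuration sketch that makes the SentenceEvent class known to the engine under the name used in the EPL above; obtaining the engine via the default provider is only one possible setup:

// sketch: register the POJO class under the event type name SentenceEvent
Configuration config = new Configuration();
config.addEventType("SentenceEvent", SentenceEvent.class);
EPServiceProvider epService = EPServiceProviderManager.getDefaultProvider(config);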

5.19.5. More Examples

The examples herein are not based on the POJO events of the prior example. They are meant to demonstrate different types of contained-event expressions and the use of @type(type_name) to identify the event type of the return values of the contained-event expression.

The example first defines a few sample event types:

create schema SentenceEvent(sentence String)
create schema WordEvent(word String)
create schema CharacterEvent(char String)

The following EPL assumes that your application defined a plug-in single-row function by name splitSentence that returns an array of Map, producing output events that are WordEvent events:

insert into WordStream select * from SentenceEvent[splitSentence(sentence)@type(WordEvent)]
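A sketch of such a single-row function and its registration is shown next; the class name SplitSentenceUtil and the exact registration call are assumptions made for illustration:

// sketch of a plug-in single-row function returning one Map per word,
// keyed by the WordEvent property name
public class SplitSentenceUtil {
  public static java.util.Map[] splitSentence(String sentence) {
    String[] split = sentence.split(" ");
    java.util.Map[] words = new java.util.Map[split.length];
    for (int i = 0; i < split.length; i++) {
      words[i] = java.util.Collections.singletonMap("word", split[i]);
    }
    return words;
  }
}

// assumed registration via configuration
Configuration config = new Configuration();
config.addPlugInSingleRowFunction("splitSentence", SplitSentenceUtil.class.getName(), "splitSentence");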

The example EPL shown next invokes a JavaScript function which returns some events of type WordEvent:

expression Collection js:splitSentenceJS(sentence) [
importPackage(java.util);
var words = new ArrayList();
words.add(Collections.singletonMap('word', 'wordOne'));
words.add(Collections.singletonMap('word', 'wordTwo'));
words;
]
select * from SentenceEvent[splitSentenceJS(sentence)@type(WordEvent)]

In the next example the sentence event first gets split into words and then each word event gets split into character events via an additional splitWord single-row function, producing events of type CharacterEvent:

select * from SentenceEvent
  [splitSentence(sentence)@type(WordEvent)]
  [splitWord(word)@type(CharacterEvent)]

5.20. Updating an Insert Stream: the Update IStream Clause

The update istream statement allows declarative modification of event properties of events entering a stream. Update is a pre-processing step to each new event, modifying an event before the event applies to any statements.

The synopsis of update istream is as follows:

update istream event_type [as stream_name]
  set property_name = set_expression [, property_name = set_expression] [,...]
  [where where_expression]

The event_type is the name of the type of events that the update applies to. The optional as keyword can be used to assign a name to the event type for use with subqueries, for example. Following the set keyword is a comma-separated list of property names and expressions that provide the event properties to change and values to set.

The optional where clause and expression can be used to filter out events to which to apply updates.

Listeners to an update statement receive the updated event in the insert stream (new data) and the event prior to the update in the remove stream (old data). Note that if there are multiple update statements that all apply to the same event then the engine will ensure that the output events delivered to listeners or subscribers are consistent with the then-current updated properties of the event (if necessary making event copies, as described below, in the case that listeners are attached to update statements). Iterating over an update statement returns no events.

As an example, the below statement assumes an AlertEvent event type that has properties named severity and reason:

update istream AlertEvent 
  set severity = 'High'
  where severity = 'Medium' and reason like '%withdrawal limit%'

The statement above changes the value of the severity property to "High" for AlertEvent events that have a medium severity and contain a specific reason text.

Update statements apply the changes to event properties before other statements receive the event(s) for processing, e.g. "select * from AlertEvent" receives the updated AlertEvent. This is true regardless of the order in which your application creates statements.

When multiple update statements apply to the same event, the engine executes updates in the order in which update statements are created. We recommend the @Priority EPL annotation to define a deterministic order of processing updates, especially in the case where update statements get created and destroyed dynamically or multiple update statements update the same fields. The update statement with the highest @Priority value applies last.
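A sketch of two such prioritized update statements follows; the filter conditions are only for illustration, and note that prioritized execution may need to be enabled in the engine configuration as described for the built-in @Priority annotation:

// sketch: the statement with the higher @Priority value applies last
@Priority(0) update istream AlertEvent set severity = 'Medium' where reason like '%limit%'
@Priority(1) update istream AlertEvent set severity = 'High' where reason like '%withdrawal%'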

The update clause can be used on streams populated via insert into, as this example utilizing a pattern demonstrates:

insert into DoubleWithdrawalStream 
select a.id, b.id, a.account as account, 0 as minimum 
from pattern [a=Withdrawal -> b=Withdrawal(id = a.id)]
update istream DoubleWithdrawalStream set minimum = 1000 where account in (10002, 10003)

When using update with named windows, any changes to event properties apply before an event enters the named window.

Consider the next example (shown here with statement names in @Name EPL annotation, multiple EPL statements):

@Name("CreateWindow") create window MyWindow.win:time(30 sec) as AlertEvent

@Name("UpdateStream") update istream MyWindow set severity = 'Low' where reason = '%out of paper%'

@Name("InsertWindow") insert into MyWindow select * from AlertEvent

@Name("SelectWindow") select * from MyWindow

The UpdateStream statement specifies an update clause that applies to all events entering the named window. Note that update does not apply to events already in the named window at the time an application creates the UpdateStream statement, it only applies to new events entering the named window (after an application created the update statement).

Therefore, in the above example listeners to the SelectWindow statement as well as the CreateWindow statement receive the updated event, while listeners to the InsertWindow statement receive the original AlertEvent event (and not the updated event).

Subqueries can also be used in all expressions including the optional where clause.

This example demonstrates a correlated subquery in an assignment expression and also demonstrates the optional as keyword. It assigns the phone property of an AlertEvent event a new value based on the lookup within all unique PhoneEvent events (according to an empid property) correlating the AlertEvent property reporter with the empid property of PhoneEvent:

update istream AlertEvent as ae
  set phone = 
    (select phone from PhoneEvent.std:unique(empid) where empid = ae.reporter)

When updating indexed properties use the syntax propertyName[index] = value with the index value being an integer number. When updating mapped properties use the syntax propertyName(key) = value with the key being a string value.
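For illustration, the sketch below updates an indexed and a mapped property; the tags and props properties assumed here are not part of the earlier AlertEvent example:

// sketch: assumes AlertEvent carries an indexed property 'tags' and a mapped property 'props'
update istream AlertEvent
  set tags[0] = 'reviewed', props('origin') = 'gateway'
  where severity = 'High'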

When using update, please note these limitations:

  1. Expressions may not use aggregation functions.

  2. The prev and prior functions may not be used.

  3. For underlying event representations that are Java objects, an event object class must implement the java.io.Serializable interface as discussed below.

  4. When using an XML underlying event type, event properties in the XML document representation are not available for update.

  5. Nested properties are not supported for update. Revision event types and variant streams may also not be updated.

5.20.1. Immutability and Updates

When updating event objects the engine maintains consistency across statements. The engine ensures that an update to an event does not impact the results of statements that look for or retain the original un-updated event. As a result the engine may need to copy an event object to maintain consistency.

In the case your application utilizes Java objects as the underlying event representation and an update statement updates properties on an object, then in order to maintain consistency across statements it is necessary for the engine to copy the object before changing properties (and thus not change the original object).

For Java application objects, the copy operation is implemented by serialization. Your event object must therefore implement the java.io.Serializable interface to become eligible for update. As an alternative to serialization, you may instead configure a copy method as part of the event type configuration via ConfigurationEventTypeLegacy.
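A configuration sketch follows; the copy-method setter on the legacy type configuration and the three-argument addEventType overload are assumptions about the configuration API, shown only to illustrate the alternative to serialization:

// sketch: configure a copy method instead of relying on serialization
Configuration config = new Configuration();
ConfigurationEventTypeLegacy legacyDef = new ConfigurationEventTypeLegacy();
legacyDef.setCopyMethod("copyAlert");   // name of a public copy method on the event class
config.addEventType("AlertEvent", AlertEvent.class.getName(), legacyDef);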

5.21. Controlling Event Delivery : The For Clause

The engine delivers all result events of a given statement to the statement's listeners and subscriber (if any) in a single invocation of each listener and subscriber's update method passing an array of result events. For example, a statement using a time-batch view may provide many result events after a time period passes, a pattern may provide multiple matching events or in a join the join cardinality could be multiple rows.

For statements that typically post multiple result events to listeners the for keyword controls the number of invocations of the engine to listeners and subscribers and the subset of all result events delivered by each invocation. This can be useful when your application listener or subscriber code expects multiple invocations or expects that invocations only receive events that belong together by some additional criteria.

The for keyword is a reserved keyword. It is followed by either the grouped_delivery keyword for grouped delivery or the discrete_delivery keyword for discrete delivery. The for clause is valid after any EPL select statement.

The synopsis for grouped delivery is as follows:

... for grouped_delivery (group_expression [, group_expression] [,...])

The group_expression expression list provides one or more expressions to apply to result events. The engine invokes listeners and subscribers once for each distinct set of values returned by group_expression expressions passing only the events for that group.

The synopsis for discrete delivery is as follows:

... for discrete_delivery

With discrete delivery the engine invokes listeners and subscribers once for each result event passing a single result event in each invocation.

Consider the following example without for-clause. The time batch data view collects RFIDEvent events for 10 seconds and posts an array of result events:

select * from RFIDEvent.win:time_batch(10 sec)

Let's consider an example event sequence as follows:

Table 5.6. Sample Sequence of Events for For Keyword

Event
RFIDEvent(id:1, zone:'A')
RFIDEvent(id:2, zone:'B')
RFIDEvent(id:3, zone:'A')

Without for-clause and after the 10-second time period passes, the engine delivers an array of 3 events in a single invocation to listeners and the subscriber.

The next example specifies the for-clause and grouped delivery by zone:

select * from RFIDEvent.win:time_batch(10 sec) for grouped_delivery (zone)

With grouped delivery and after the 10-second time period passes, the above statement delivers result events in two invocations to listeners and the subscriber: The first invocation delivers an array of two events that contains zone A events with id 1 and 3. The second invocation delivers an array of 1 event that contains a zone B event with id 2.

The next example specifies the for-clause and discrete delivery:

select * from RFIDEvent.win:time_batch(10 sec) for discrete_delivery

With discrete delivery and after the 10-second time period passes, the above statement delivers result events in three invocations to listeners and the subscriber: The first invocation delivers an array of 1 event that contains the event with id 1, the second invocation delivers an array of 1 event that contains the event with id 2 and the third invocation delivers an array of 1 event that contains the event with id 3.

Remove stream events are also delivered in multiple invocations, one for each group, if your statement selects remove stream events explicitly via irstream or rstream keywords.

The insert into for inserting events into a stream is not affected by the for-clause.

The delivery order respects the natural sort order or the explicit sort order as provided by the order by clause, if present.

The following are known limitations:

  1. The engine validates group_expression expressions against the output event type, therefore all properties specified in group_expression expressions must occur in the select clause.

Chapter 6. EPL Reference: Patterns

6.1. Event Pattern Overview

Event patterns match when an event or multiple events occur that match the pattern's definition. Patterns can also be time-based.

Pattern expressions consist of pattern atoms and pattern operators:

  1. Pattern atoms are the basic building blocks of patterns. Atoms are filter expressions, observers for time-based events and plug-in custom observers that observe external events not under the control of the engine.

  2. Pattern operators control expression lifecycle and combine atoms logically or temporally.

The below table outlines the different pattern atoms available:

Table 6.1. Pattern Atoms

Pattern Atom                                         Example
Filter expressions specify an event to look for.     StockTick(symbol='ABC', price > 100)
Time-based event observers specify time intervals    timer:interval(10 seconds)
or time schedules.                                   timer:at(*, 16, *, *, *)
Custom plug-in observers can add pattern language    myapplication:myobserver("http://someResource")
syntax for observing application-specific events.

There are 4 types of pattern operators:

  1. Operators that control pattern sub-expression repetition: every, every-distinct, [num] and until

  2. Logical operators: and, or, not

  3. Temporal operators that operate on event order: -> (followed-by)

  4. Guards are where-conditions that control the lifecycle of subexpressions. Examples are timer:within, timer:withinmax and while-expression. Custom plug-in guards may also be used.

Pattern expressions can be nested arbitrarily deep by including the nested expression(s) in round parentheses ().

Underlying the pattern matching is a state machine that transitions between states based on arriving events and based on time advancing. A single event or advancing time may cause a reaction in multiple parts of your active pattern state.

6.2. How to use Patterns

6.2.1. Pattern Syntax

This is an example pattern expression that matches on every ServiceMeasurement event in which the value of the latency event property is over 20 seconds, and on every ServiceMeasurement event in which the success property is false. Either one or the other condition must be true for this pattern to match.

every (spike=ServiceMeasurement(latency>20000) or error=ServiceMeasurement(success=false))

In the example above, the pattern expression starts with an every operator to indicate that the pattern should fire for every matching event and not just the first matching event. Within the every operator in round brackets is a nested pattern expression using the or operator. The left hand of the or operator is a filter expression that filters for events with a high latency value. The right hand of the operator contains a filter expression that filters for events with error status. Filter expressions are explained in Section 6.4, “Filter Expressions In Patterns”.

The example above assigned the tags spike and error to the events in the pattern. The tags are important since the engine only places tagged events into the output event(s) that a pattern generates, and that the engine supplies to listeners of the pattern statement. The tags can further be selected in the select-clause of an EPL statement as discussed in Section 5.4.2, “Pattern-based Event Streams”.

Patterns can also contain comments within the pattern as outlined in Section 5.2.2, “Using Comments”.

Pattern statements are created via the EPAdministrator interface, which allows creating pattern statements in two ways: Pattern statements that want to make use of the EPL select clause or any other EPL construct use the createEPL method to create a statement that specifies one or more pattern expressions. EPL statements that use patterns are described in more detail in Section 5.4.2, “Pattern-based Event Streams”. Use the syntax as shown in the example below.

EPAdministrator admin = EPServiceProviderManager.getDefaultProvider().getEPAdministrator();

String eventName = ServiceMeasurement.class.getName();

EPStatement myTrigger = admin.createEPL("select * from pattern [" +
  "every (spike=" + eventName + "(latency>20000) or error=" + eventName + "(success=false))]");

Pattern statements that do not need to make use of the EPL select clause or any other EPL constructs can use the createPattern method, as in the example below.

EPStatement myTrigger = admin.createPattern(
  "every (spike=" + eventName + "(latency>20000) or error=" + eventName + "(success=false))");

6.2.2. Patterns in EPL

A pattern may appear anywhere in the from clause of an EPL statement including joins and subqueries. Patterns may therefore be used in combination with the where clause, group by clause, having clause as well as output rate limiting and insert into.

In addition, a data window view can be declared onto a pattern. A data window declared onto a pattern only serves to retain pattern matches. A data window declared onto a pattern does not limit, cancel, remove or delete intermediate pattern matches of the pattern when pattern matches leave the data window.

This example statement demonstrates the idea by selecting a total price per customer over pairs of events (ServiceOrder followed by a ProductOrder event for the same customer id within 1 minute), occurring in the last 2 hours, in which the sum of price is greater than 100, and using a where clause to filter on name:

select a.custId, sum(a.price + b.price)
from pattern [every a=ServiceOrder -> 
    b=ProductOrder(custId = a.custId) where timer:within(1 min)].win:time(2 hour) 
where a.name in ('Repair', b.name)
group by a.custId
having sum(a.price + b.price) > 100

6.2.3. Subscribing to Pattern Events

When a pattern fires it publishes one or more events to any listeners to the pattern statement. The listener interface is the com.espertech.esper.client.UpdateListener interface.

The example below shows an anonymous implementation of the com.espertech.esper.client.UpdateListener interface. We add the anonymous listener implementation to the myPattern statement created earlier. The listener code simply extracts the underlying event class.

myPattern.addListener(new UpdateListener() {
  public void update(EventBean[] newEvents, EventBean[] oldEvents) {
    ServiceMeasurement spike = (ServiceMeasurement) newEvents[0].get("spike");
    ServiceMeasurement error = (ServiceMeasurement) newEvents[0].get("error");
    ... // either spike or error can be null, depending on which occurred
    ... // add more logic here
  }
});

Listeners receive an array of EventBean instances in the newEvents parameter. There is one EventBean instance passed to the listener for each combination of events that matches the pattern expression. At least one EventBean instance is always passed to the listener.

The properties of each EventBean instance contain the underlying events that caused the pattern to fire, if events have been named in the filter expression via the name=eventType syntax. The property name is thus the name supplied in the pattern expression, while the property type is the type of the underlying class, in this example ServiceMeasurement.

6.2.4. Pulling Data from Patterns

Data can also be obtained from pattern statements via the safeIterator() and iterator() methods on EPStatement (the pull API). If the pattern had fired at least once, then the iterator returns the last event for which it fired. The hasNext() method can be used to determine if the pattern had fired.

if (myPattern.iterator().hasNext()) {
    ServiceMeasurement event = (ServiceMeasurement) myPattern.iterator().next().get("spike");
    ... // some more code here to process the event
}
else {
    ... // no matching events at this time
}

Further, if a data window is defined onto a pattern, the iterator returns the pattern matches according to the data window expiry policy.

This pattern specifies a length window of 10 elements that retains the last 10 matches of A and B events, for use via iterator or for use in a join or subquery:

select * from pattern [every (A or B)].win:length(10)

6.2.5. Pattern Error Reporting

While the pattern compiler analyzes your pattern and verifies its integrity, it may not detect certain pattern errors that may occur at runtime. Sections of this pattern documentation point out common cases where the pattern engine will log a runtime error. We recommend turning on the log warning level at project development time to inspect and report on warnings logged. If a statement name is assigned to a statement then the statement name is logged as well.

6.3. Operator Precedence

The operators at the top of this table take precedence over operators lower on the table.

Table 6.2. Pattern Operator Precedence

Precedence  Operator       Description                                   Example
1           unary          every, not, every-distinct                    every MyEvent
                                                                         timer:interval(5 min) and not MyEvent
2           guard postfix  where timer:within and while (expression)     MyEvent where timer:within(1 sec)
                           (incl. withinmax and plug-in pattern guard)   a=MyEvent while (a.price between 1 and 10)
3           repeat         [num], until                                  [5] MyEvent
                                                                         [1..3] MyEvent until MyOtherEvent
4           and            and                                           every (MyEvent and MyOtherEvent)
5           or             or                                            every (MyEvent or MyOtherEvent)
6           followed-by    ->                                            every (MyEvent -> MyOtherEvent)

If you are not sure about the precedence, please consider placing parentheses () around your subexpressions. Parentheses can also help make expressions easier to read and understand.

The following table outlines sample equivalent expressions, with and without the use of parenthesis for subexpressions.

Table 6.3. Equivalent Pattern Expressions

Expression           Equivalent             Reason
every A or B         (every A) or B         The every operator has higher precedence than the or operator.
every A -> B or C    (every A) -> (B or C)  The or operator has higher precedence than the followed-by operator.
A -> B or B -> A     A -> (B or B) -> A     The or operator has higher precedence than the followed-by operator; specify as (A -> B) or (B -> A) instead.
A and B or C         (A and B) or C         The and operator has higher precedence than the or operator.
A -> B until C -> D  A -> (B until C) -> D  The until operator has higher precedence than the followed-by operator.
[5] A or B           ([5] A) or B           The [num] repeat operator has higher precedence than the or operator.

6.4. Filter Expressions In Patterns

The simplest form of filter is a filter for events of a given type without any conditions on the event property values. This filter matches any event of that type regardless of the event's properties. The example below is such a filter. Note that this event pattern would stop firing as soon as the first RfidEvent is encountered.

com.mypackage.myevents.RfidEvent

To make the event pattern fire for every RfidEvent and not just the first event, use the every keyword.

every com.mypackage.myevents.RfidEvent

The example above specifies the fully-qualified Java class name as the event type. Via configuration, the event pattern above can be simplified by using the name that has been defined for the event type.

every RfidEvent

Interfaces and superclasses are also supported as event types. In the below example IRfidReadable is an interface class, and the statement matches any event that implements this interface:

every org.myorg.rfid.IRfidReadable

The filtering criteria to filter for events with certain event property values are placed within parentheses after the event type name:

RfidEvent(category="Perishable")

All expressions can be used in filters, including static method invocations that return a boolean value:

RfidEvent(com.mycompany.MyRFIDLib.isInRange(x, y) or (x<0 and y < 0))

Filter expressions can be separated via a single comma ','. The comma represents a logical AND between expressions:

RfidEvent(zone=1, category=10)
...is equivalent to...
RfidEvent(zone=1 and category=10)

The following set of operators are highly optimized through indexing and are the preferred means of filtering high-volume event streams:

  • equals =

  • not equals !=

  • comparison operators < , > , >=, <=

  • ranges

    • use the between keyword for a closed range where both endpoints are included

    • use the in keyword and round () or square brackets [] to control how endpoints are included

    • for inverted ranges use the not keyword and the between or in keywords

  • list-of-values checks using the in keyword or the not in keywords followed by a comma-separated list of values

At compile time as well as at run time, the engine scans new filter expressions for subexpressions that can be indexed. Indexing filter values to match event properties of incoming events enables the engine to match incoming events faster. The above list of operators represents the set of operators that the engine can best convert into indexes. The use of comma or logical and in filter expressions does not impact optimizations by the engine.

For more information on filters please see Section 5.4.1, “Filter-based Event Streams”. Contained-event selection on filters in patterns is further described in Section 5.19, “Contained-Event Selection”.

Filter criteria can also refer to events matching prior named events in the same expression. The pattern below is an example in which the pattern matches once for every RfidEvent that is preceded by an RfidEvent with the same asset id.

every e1=RfidEvent -> e2=RfidEvent(assetId=e1.assetId)

The syntax shown above allows filter criteria to reference prior results by specifying the event name tag of the prior event, and the event property name. The tag names in the above example were e1 and e2. This syntax can be used in all filter operators or expressions including ranges and the in set-of-values check:

every e1=RfidEvent -> 
  e2=RfidEvent(MyLib.isInRadius(e1.x, e1.y, x, y) and zone in (1, e1.zone))

An arriving event changes the truth value of all expressions that look for the event. Consider the pattern as follows:

every (RfidEvent(zone > 1) and RfidEvent(zone < 10))

The pattern above is satisfied as soon as a single event with a zone value in the interval [2, 9] is received, since that one event matches both filter expressions.

6.4.1. Controlling Event Consumption

An arriving event applies to all filter expressions for which the event matches. In other words, an arriving event is not consumed by any particular filter expression but applies to all active filter expressions of all pattern sub-expressions.

You may provide the @consume annotation as part of a filter expression to control consumption of an arriving event. If an arriving event matches the filter expression marked with @consume it is no longer available to other filter expressions of the same pattern that also match the arriving event.

The @consume can include a level number in parentheses. A higher level number consumes the event first. The default level number is 1. Multiple filter expressions with the same level number for @consume all match the event.

Consider the next sample pattern:

a=RfidEvent(zone='Z1') and b=RfidEvent(assetId='0001')

This pattern fires when a single RfidEvent event arrives that has zone 'Z1' and assetId '0001'. The pattern also matches when two RfidEvent events arrive, in any order, wherein one has zone 'Z1' and the other has assetId '0001'.

Mark a filter expression with @consume to indicate that if an arriving event matches multiple filter expressions that the engine prefers the marked filter expression and does not match any other filter expression.

This updated pattern statement uses @consume to indicate that a match against zone is preferred:

a=RfidEvent(zone='Z1')@consume and b=RfidEvent(assetId='0001')

This pattern no longer fires when a single RfidEvent arrives that has zone 'Z1' and assetId '0001', because when the first filter expression matches the pattern engine consumes the event. The pattern only matches when two RfidEvent events arrive in any order. One event must have zone 'Z1' and the other event must have a zone other than 'Z1' and an assetId '0001'.

The next sample pattern provides a level number for each @consume:

a=RfidEvent(zone='Z1')@consume(2) 
  or b=RfidEvent(assetId='0001')@consume(1) 
  or c=RfidEvent(category='perishable')

The pattern fires when an RfidEvent arrives with zone 'Z1'. In this case the output event populates property 'a' but not properties 'b' and 'c'. The pattern also fires when an RfidEvent arrives with a zone other than 'Z1' and an asset id of '0001'. In this case the output event populates property 'b' but not properties 'a' and 'c'. The pattern also fires when an RfidEvent arrives with a zone other than 'Z1' and an asset id other than '0001' and a category of 'perishable'. In this case the output event populates property 'c' but not properties 'a' and 'b'.

6.5. Pattern Operators

6.5.1. Every

The every operator indicates that the pattern sub-expression should restart when the subexpression qualified by the every keyword evaluates to true or false. Without the every operator the pattern sub-expression stops when the pattern sub-expression evaluates to true or false.

As a side note, please be aware that a single invocation to the UpdateListener interface may deliver multiple events in one invocation, since the interface accepts an array of values.

Thus the every operator works like a factory for the pattern sub-expression contained within. When the pattern sub-expression within it fires and thus quits checking for events, the every causes the start of a new pattern sub-expression listening for more occurrences of the same event or set of events.

Every time a pattern sub-expression within an every operator turns true the engine starts a new active subexpression looking for more event(s) or timing conditions that match the pattern sub-expression. If the every operator is not specified for a subexpression, the subexpression stops after the first match was found.

This pattern fires when encountering an A event and then stops looking.

A

This pattern keeps firing when encountering A events, and doesn't stop looking.

every A

When using every operator with the -> followed-by operator, each time the every operator restarts it also starts a new subexpression instance looking for events in the followed-by subexpression.

Let's consider an example event sequence as follows.

A1   B1   C1   B2   A2   D1   A3   B3   E1   A4   F1   B4

Table 6.4. 'Every' operator examples

ExampleDescription
every ( A -> B )

Detect an A event followed by a B event. At the time when B occurs the pattern matches, then the pattern matcher restarts and looks for the next A event.

  1. Matches on B1 for combination {A1, B1}

  2. Matches on B3 for combination {A2, B3}

  3. Matches on B4 for combination {A4, B4}

every A -> B

The pattern fires for every A event followed by a B event.

  1. Matches on B1 for combination {A1, B1}

  2. Matches on B3 for combination {A2, B3} and {A3, B3}

  3. Matches on B4 for combination {A4, B4}

A -> every B

The pattern fires for an A event followed by every B event.

  1. Matches on B1 for combination {A1, B1}.

  2. Matches on B2 for combination {A1, B2}.

  3. Matches on B3 for combination {A1, B3}

  4. Matches on B4 for combination {A1, B4}

every A -> every B

The pattern fires for every A event followed by every B event.

  1. Matches on B1 for combination {A1, B1}.

  2. Matches on B2 for combination {A1, B2}.

  3. Matches on B3 for combination {A1, B3} and {A2, B3} and {A3, B3}

  4. Matches on B4 for combination {A1, B4} and {A2, B4} and {A3, B4} and {A4, B4}

The examples show that it is possible that a pattern fires for multiple combinations of events that match a pattern expression. Each combination is posted as an EventBean instance to the update method in the UpdateListener implementation.

Let's consider the every operator in conjunction with a subexpression that matches 3 events that follow each other:

every (A -> B -> C)

The pattern first looks for A events. When an A event arrives, it looks for a B event. After the B event arrives, the pattern looks for a C event. Finally, when the C event arrives the pattern fires. The engine then starts looking for an A event again.

Assume that between the B event and the C event a second A2 event arrives. The pattern would ignore the A2 event entirely since it's then looking for a C event. As observed in the prior example, the every operator restarts the subexpression A -> B -> C only when the subexpression fires.

In the next statement the every operator applies only to the A event, not the whole subexpression:

every A -> B -> C

This pattern now matches for each A event that is followed by a B event and then a C event, regardless of when the A event arrives. Note that for each A event that arrives the pattern engine starts a new subexpression looking for a B event and then a C event, outputting each combination of matching events.

6.5.1.1. Limiting Subexpression Lifetime

As the introduction of the every operator states, the operator starts new subexpression instances and can cause multiple matches to occur for a single arriving event.

Each new subexpression also consumes a small amount of system resources; your application should therefore carefully consider when subexpressions must end when designing patterns. Use the timer:within construct and the and not constructs to end active subexpressions. The data window onto a pattern stream does not serve to limit pattern sub-expression lifetime.

Lets look at a concrete example. Consider the following sequence of events arriving:

A1   A2   B1  

This pattern matches on arrival of B1 and outputs two events (an array of length 2 if using a listener). The two events are the combinations {A1, B1} and {A2, B1}:

every a=A -> b=B

The and not operators are used to end an active subexpression.

The next pattern matches on arrival of B1 and outputs only the last A event which is the combination {A2, B1}:

every a=A -> (b=B and not A)

The and not operators cause the subexpression looking for {A1, B?} to end when A2 arrives.

Similarly, in the pattern below the engine starts a new subexpression looking for a B event every 1 second. After 5 seconds there are 5 subexpressions active looking for a B event and 5 matches occur at once if a B event arrives after 5 seconds.

every timer:interval(1 sec) -> b=B

Again the and not operators can end subexpressions that are not intended to match any longer:

every timer:interval(1 sec) -> (b=B and not timer:interval(1 sec))
// equivalent to
every timer:interval(1 sec) -> (b=B where timer:within(1 sec))

6.5.1.2. Every Operator Example

In this example we consider a generic pattern in which the pattern must match for each A event followed by a B event and followed by a C event, in which both the B event and the C event must arrive within 1 hour of the A event. The first approach to the pattern is as follows:

every A  -> (B -> C) where timer:within(1 hour)

Consider the following sequence of events arriving:

A1   A2   B1   C1   B2   C2

First, the pattern as above never stops looking for A events since the every operator instructs the pattern to keep looking for A events.

When A1 arrives, the pattern starts a new subexpression that keeps A1 in memory and looks for any B event. At the same time, it also keeps looking for more A events.

When A2 arrives, the pattern starts a new subexpression that keeps A2 in memory and looks for any B event. At the same time, it also keeps looking for more A events.

After the arrival of A2, there are 3 subexpressions active:

  1. The first active subexpression with A1 in memory, looking for any B event.

  2. The second active subexpression with A2 in memory, looking for any B event.

  3. A third active subexpression, looking for the next A event.

In the pattern above, we have specified a 1-hour lifetime for subexpressions looking for B and C events. Thus, if no B and no C event arrive within 1 hour after A1, the first subexpression goes away. If no B and no C event arrive within 1 hour after A2, the second subexpression goes away. The third subexpression however stays around looking for more A events.

The pattern as shown above thus matches on arrival of C1 for combination {A1, B1, C1} and for combination {A2, B1, C1}, provided that B1 and C1 arrive within an hour of A1 and A2.

You may now ask how to match on {A1, B1, C1} and {A2, B2, C2} instead, since you may need to correlate on a given property.

The pattern as discussed above matches every A event followed by the first B event followed by the next C event, and doesn't specifically qualify the B or C events to look for based on the A event. To look for specific B and C events in relation to a given A event, the correlation must use one or more of the properties of the A event, such as the "id" property:

every a=A -> (B(id=a.id) -> C(id=a.id)) where timer:within(1 hour)

The pattern as shown above thus matches on arrival of C1 for combination {A1, B1, C1} and on arrival of C2 for combination {A2, B2, C2}.

6.5.1.3. Sensor Example

This example looks at temperature sensor events named Sample. The pattern detects when 3 sensor events indicate a temperature of more than 50 degrees uninterrupted within 90 seconds of the first event, considering events for the same sensor only.

every sample=Sample(temp > 50) ->
( (Sample(sensor=sample.sensor, temp > 50) and not Sample(sensor=sample.sensor, temp <= 50))   
  ->
  (Sample(sensor=sample.sensor, temp > 50) and not Sample(sensor=sample.sensor, temp <= 50))   
 ) where timer:within(90 seconds)

The pattern starts a new subexpression in the round braces after the first followed-by operator for each time a sensor indicated more than 50 degrees. Each subexpression then lives for a maximum of 90 seconds. Each subexpression ends if a temperature of 50 degrees or less is encountered for the same sensor. Only if 3 temperature events in a row indicate more than 50 degrees, and within 90 seconds of the first event, and for the same sensor, does this pattern fire.

6.5.2. Every-Distinct

Similar to the every operator in most aspects, the every-distinct operator indicates that the pattern sub-expression should restart when the subexpression qualified by the every-distinct keyword evaluates to true or false. In addition, the every-distinct eliminates duplicate results received from an active subexpression according to its distinct-value expressions.

The synopsis for the every-distinct pattern operator is:

every-distinct(distinct_value_expr [, distinct_value_expr] [, ...] [, expiry_time_period])

Within parentheses are one or more distinct_value_expr expressions that return the values by which to remove duplicates.

You may optionally specify an expiry_time_period time period. If present, the pattern engine expires and removes distinct key values that are older than the time period, removing their associated memory and allowing such distinct values to match again. When your distinct value expressions return an unlimited number of values, for example when your distinct value is a timestamp or auto-increment column, you should always specify an expiry time period.

When specifying properties in the distinct-value expression list, you must ensure that the event types providing properties are tagged. Only properties of event types within filter expressions that are sub-expressions to the every-distinct may be specified.

For example, this pattern keeps firing for every A event with a distinct value for its aprop property:

every-distinct(a.aprop) a=A

Note that the pattern above assigns the a tag to the A event and uses a.aprop to identify the aprop property of the event tagged a.

A pattern that returns the first Sample event for each sensor, assuming sensor is a field that returns a unique id identifying the sensor that originated the Sample event, is:

every-distinct(s.sensor) s=Sample

The next pattern looks for pairs of A and B events and returns only the first pair for each combination of aprop of an A event and bprop of a B event:

every-distinct(a.aprop, b.bprop) (a=A and b=B)

The following pattern looks for A events followed by B events for which the value of the aprop of an A event is the same value of the bprop of a B event but only for each distinct value of aprop of an A event:

every-distinct(a.aprop) a=A -> b=B(bprop = a.aprop)

When specifying properties as part of distinct-value expressions, properties must be available from tagged event types in sub-expressions to the every-distinct.

The following patterns are not valid:

// Invalid: event type in filter not tagged
every-distinct(aprop) A
			
// Invalid: property not from a sub-expression of every-distinct
a=A -> every-distinct(a.aprop) b=B

When an active subexpression to every-distinct becomes permanently false, the distinct-values seen from the active subexpression are removed and the sub-expression within is restarted.

For example, the below pattern detects each A event distinct by the value of aprop.

every-distinct(a.aprop) (a=A and not B)

In the pattern above, when a B event arrives, the subexpression becomes permanently false and is restarted anew, detecting each A event distinct by the value of aprop without considering prior values.

When your distinct key is a timestamp or other non-unique property, specify an expiry time period.

The following example returns every distinct A event according to the timestamp property on the A event, retaining each timestamp value for 10 seconds:

every-distinct(a.timestamp, 10 seconds) a=A

In the example above, if for a given A event and its timestamp value the same timestamp value occurs again for another A event before 10 seconds passed, the A event is not a match. If 10 seconds passed the pattern indicates a second match.

You may not use every-distinct with a timer-within guard to expire keys: The expiry time notation as above is the recommended means to expire keys.

// This is not the same as above; it does not expire timestamp values and is not recommended
every-distinct(a.timestamp) a=A where timer:within(10 sec)

6.5.3. Repeat

The repeat operator fires when a pattern sub-expression evaluates to true a given number of times. The synopsis is as follows:

[match_count] repeating_subexpr

The repeat operator is very similar to the every operator in that it restarts the repeating_subexpr pattern sub-expression up to a given number of times.

match_count is a positive number that specifies how often the repeating_subexpr pattern sub-expression must evaluate to true before the repeat expression itself evaluates to true, after which the engine may indicate a match.

For example, this pattern fires when the last of five A events arrives:

[5] A

Parentheses must be used for nested pattern sub-expressions. This pattern fires when the last of a total of any five A or B events arrives:

[5] (A or B)

Without parenthesis the pattern semantics change, according to the operator precedence described earlier. This pattern fires when the last of a total of five A events arrives or a single B event arrives, whichever happens first:

[5] A or B

Tags can be used to name events in filter expressions of pattern sub-expressions. The next pattern looks for an A event followed by a B event, and a second A event followed by a second B event. The output event provides indexed and array properties of the same name:

[2] (a=A -> b=B)

Using tags with repeat is further described in Section 6.5.4.6, “Tags and the Repeat Operator”.

Consider the following pattern that demonstrates the behavior when a pattern sub-expression becomes permanently false:

[2] (a=A and not C)

In the case where a C event arrives before 2 A events arrive, the pattern above becomes permanently false.

Let's add an every operator to restart the pattern and thus keep matching for all pairs of A events that arrive without a C event in between each pair:

every [2] (a=A and not C)

Since pattern matches return multiple A events, your select clause should use tag a as an array, for example:

select a[0].id, a[1].id from pattern [every [2] (a=A and not C)]

6.5.4. Repeat-Until

The repeat until operator provides additional control over repeated matching.

The repeat until operator takes an optional range, a pattern sub-expression to repeat, the until keyword and a second pattern sub-expression that ends the repetition. The synopsis is as follows:

[range] repeated_pattern_expr until end_pattern_expr

Without a range, the engine matches the repeated_pattern_expr pattern sub-expression until the end_pattern_expr evaluates to true, at which time the expression turns true.

An optional range can be used to indicate the minimum number of times that the repeated_pattern_expr pattern sub-expression must become true.

The optional range can also specify a maximum number of times that repeated_pattern_expr pattern sub-expression evaluates to true and retains tagged events. When this number is reached, the engine stops the repeated_pattern_expr pattern sub-expression.

The until keyword is always required when specifying a range and is not required when specifying a fixed number of repeats as discussed in the previous section.

6.5.4.1. Unbound Repeat

In the unbound repeat, without a range, the engine matches the repeated_pattern_expr pattern sub-expression until the end_pattern_expr evaluates to true, at which time the expression turns true. The synopsis is:

repeated_pattern_expr until end_pattern_expr

This is a pattern that keeps looking for A events until a B event arrives:

A until B

Nested pattern sub-expressions must be placed in parentheses since the until operator has precedence over most operators. This example collects all A or B events for 10 seconds and places events received in indexed properties 'a' and 'b':

(a=A or b=B) until timer:interval(10 sec)
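The tags collected by the repetition can then be consumed in a select clause, by tag name as an array or by index; a minimal sketch, assuming A and B events each provide an id property:

select a, b, a[0].id, b[0].id from pattern [(a=A or b=B) until timer:interval(10 sec)]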

6.5.4.2. Bound Repeat Overview

The synopsis for the optional range qualifier is:

[ [low_endpoint] : [high_endpoint] ]

low_endpoint is an optional number that appears on the left of a colon (:), after which follows an optional high_endpoint number.

A range thus consists of a low_endpoint and a high_endpoint in square brackets, separated by a colon (:) character. Both endpoint values are optional, but at least one must be supplied. The low_endpoint can be omitted to denote a range that starts at zero. The high_endpoint can be omitted to denote an open-ended range.

Some examples of valid ranges are:

[3 : 10]
[:3]    // range starts at zero
[2:]    // open-ended range

The low_endpoint, if specified, defines the minimum number of times that the repeated_pattern_expr pattern sub-expression must become true in order for the expression to become true.

The high_endpoint, if specified, is the maximum number of times that the repeated_pattern_expr pattern sub-expression becomes true. If the number is reached, the engine stops the repeated_pattern_expr pattern sub-expression.

In all cases, only at the time that the end_pattern_expr pattern sub-expression evaluates to true does the expression become true. If end_pattern_expr pattern sub-expression evaluates to false, then the expression evaluates to false.

6.5.4.3. Bound Repeat - Open Ended Range

An open-ended range specifies only a low endpoint and not a high endpoint.

Consider the following pattern which requires at least three A events to match:

[3:] A until B

In the pattern above, if a B event arrives before 3 A events occurred, the expression ends and evaluates to false.

6.5.4.4. Bound Repeat - High Endpoint Range

A high-endpoint range specifies only a high endpoint and not a low endpoint.

In this sample pattern the engine will be looking for a maximum of 3 A events. The expression turns true as soon as a single B event arrives regardless of the number of A events received:

[:3] A until B

The next pattern matches when a C or D event arrives, regardless of the number of A or B events that occurred:

[:3] (a=A or b=B) until (c=C or d=D)

In the pattern above, if more than 3 A or B events arrive, the pattern stops looking for additional A or B events. The 'a' and 'b' tags retain only the first 3 (combined) matches among A and B events. The output event contains these tagged events as indexed properties.

6.5.4.5. Bound Repeat - Bounded Range

A bounded range specifies a low endpoint and a high endpoint.

The next pattern matches upon the arrival of a single B event, provided at least one A event arrived before it:

[1:3] a=A until B

If a B event arrives before the first A event, then the pattern does not match. Only the first 3 A events are returned by the pattern.
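Since the 'a' tag may hold up to three A events, a select clause consuming this pattern accesses the tag as an indexed property; a minimal sketch, assuming A events provide an id property (indexes without a matching event return null):

select a[0].id, a[1].id, a[2].id from pattern [ [1:3] a=A until B ]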

6.5.4.6. Tags and the Repeat Operator

The tags assigned to events in filter subexpressions within a repeat operator are available for use in filter expressions and also in any EPL clause.

This sample pattern matches 2 A events followed by a B event. Note the filter on B events: only a B event that has a value for the "beta" property that equals any of the "id" property values of the two A events is considered:

[2] A -> B(beta in (a[0].id, a[1].id))

The next EPL statement returns pairs of A events:

select a, a[0], a[0].id, a[1], a[1].id
from pattern [ every [2] a=A ] 

The select clause of the statement above showcases different ways of accessing tagged events:

  • The tag itself can be used to select an array of underlying events. For example, the 'a' expression above returns an array of underlying events of event type A.

  • The tag as an indexed property returns the underlying event at that index. For instance, the 'a[0]' expression returns the first underlying A event, or null if no such A event was matched by the repeat operator.

  • The tag as a nested, indexed property returns a property of the underlying event at that index. For example, the 'a[1].id' expression returns the 'id' property value of the second A event, or null if no such second A event was matched by the repeat operator.

You may not use indexed tags defined in the sub-expression to the repeat operator in the same subexpression. For example, in the following pattern the subexpression to the repeat operator is (a=A() -> b=B(id=a[0].id)) and the tag a cannot be used in its indexed form in the filter for event B:

// invalid
every [2] (a=A() -> b=B(id=a[0].id))

You can use tags without an index:

// valid
every [2] (a=A() -> b=B(id=a.id))

6.5.5. And

Similar to the Java && operator, the and operator requires both nested pattern expressions to turn true before the whole expression turns true (a join pattern).

This pattern matches when both an A event and a B event arrive, at the time the last of the two events arrive:

A and B

This pattern matches on any sequence of an A event followed by a B event and then a C event followed by a D event, or a C event followed by a D event and an A event followed by a B event:

(A -> B) and (C -> D)

Note that in an and pattern expression it is not possible to correlate events based on event property values. For example, this is an invalid pattern:

// This is NOT valid
a=A and B(id = a.id)

The above expression is invalid as it relies on the order of arrival of events, however in an and expression the order of events is not specified and events fulfill an and condition in any order. The above expression can be changed to use the followed-by operator:

// This is valid
a=A -> B(id = a.id)
// another example using 'and'...
a=A -> (B(id = a.id) and C(id = a.id))

Consider a pattern that looks for the same event:

A and A

The pattern above fires when a single A event arrives. The first arriving A event triggers a state transition in both the left and the right hand side expression.

In order to match after two A events arrive in any order, there are two options to express this pattern. The followed-by operator is one option and the repeat operator is the second option, as the next two patterns show:

A -> A
// ... or ...
[2] A

6.5.6. Or

Similar to the Java "||" operator, the or operator requires either one of the expressions to turn true before the whole expression turns true.

Look for either an A event or a B event. As always, A and B can themselves be nested expressions.

A or B

Detect all stock ticks that are either above or below a threshold.

every (StockTick(symbol='IBM', price < 100) or StockTick(symbol='IBM', price > 105))

6.5.7. Not

The not operator negates the truth value of an expression. Pattern expressions prefixed with not are automatically defaulted to true upon start, and turn permanently false when the expression within turns true.

The not operator is generally used in conjunction with the and operator or subexpressions as the below examples show.

This pattern matches when an A event is encountered followed by a B event, but only if no C event was encountered before both the A event and the B event have arrived, counting from the time the pattern is started:

(A -> B) and not C

Assume we'd like to detect when an A event is followed by a D event, without any B or C events between the A and D events:

A -> (D and not (B or C))

It may help your understanding to discuss a pattern that uses the or operator and the not operator together:

a=A -> (b=B or not C)

In the pattern above, when an A event arrives then the engine starts the subexpression B or not C. As soon as the subexpression starts, the not operator turns to true. The or expression turns true and thus your listener receives an invocation providing the A event in the property 'a'. The subexpression does not end and continues listening for B and C events. Upon arrival of a B event your listener receives a second invocation. If instead a C event arrives, the not turns permanently false however that does not affect the or operator (but would end an and operator).

To test for absence of an event, use timer:interval together with and not operators. The sample statement reports each 10-second interval during which no A event occurred:

every (timer:interval(10 sec) and not A)

In many cases the not operator, when used alone, does not make sense. The following example is invalid and will log a warning when the engine is started:

// not a sensible pattern
(not a=A) -> B(id=a.id)

6.5.8. Followed-by

The followed-by operator -> specifies that first the left-hand expression must turn true and only then is the right-hand expression evaluated for matching events.

Look for an A event and if encountered, look for a B event. As always, A and B can themselves be nested event pattern expressions.

A -> B

This is a pattern that fires when 2 status events indicating an error occur one after the other.

StatusEvent(status='ERROR') -> StatusEvent(status='ERROR')

A pattern that takes all A events that are not followed by a B event within 5 minutes:

every A -> (timer:interval(5 min) and not B)

A pattern that takes all A events that are not preceded by B within 5 minutes:

every (timer:interval(5 min) and not B -> A)

6.5.8.1. Limiting Sub-Expression Count

The followed-by -> operator can optionally be provided with an expression that limits the number of sub-expression instances of the right-hand side pattern sub-expression.

The synopsis for the followed-by operator with limiting expression is:

lhs_expression -[limit_expression]> rhs_expression

Each time the lhs_expression pattern sub-expression turns true the pattern engine starts a new rhs_expression pattern sub-expression. The limit_expression returns an integer value that defines a maximum number of pattern sub-expression instances that can simultaneously be present for the same rhs_expression.

When the limit is reached the pattern engine issues a com.espertech.esper.client.hook.ConditionPatternSubexpressionMax notification object to any condition handlers registered with the engine as described in Section 13.11, “Condition Handling” and does not start a new pattern sub-expression instance for the right-hand side pattern sub-expression.

For example, consider the following pattern which returns for every A event the first B event that matches the id field value of the A event:

every a=A -> b=B(id = a.id)

In the above pattern, every time an A event arrives (lhs) the pattern engine starts a new pattern sub-expression (rhs) consisting of a filter for the first B event that has the same value for the id field as the A event.

In some cases your application may want to limit the number of right-hand side sub-expressions because of memory concerns or to reduce output. You may add a limit expression returning an integer value as part of the operator.

This example employs the followed-by operator with a limit expression to indicate that maximally 2 filters for B events (the right-hand side pattern sub-expression) may be active at the same time:

every a=A -[2]> b=B(id = a.id)

Note that the limit expression in the example above is not a limit per value of id field, but a limit counting all right-hand side pattern sub-expression instances that are managed by that followed-by sub-expression instance.

If your followed-by operator lists multiple sub-expressions with limits, each limit applies to the immediate right-hand side. For example, the pattern below limits the number of filters for B events to 2 and the number of filters for C events to 3:

every a=A -[2]> b=B(id = a.id) -[3]> c=C(id = a.id)

6.5.8.2. Limiting Engine-wide Sub-Expression Count

Esper allows setting a maximum number of pattern sub-expressions in the configuration, applicable to all followed-by operators of all statements.

If your application has patterns in multiple EPL statements and all such patterns should count towards a total number of pattern sub-expression counts, you may consider setting a maximum number of pattern sub-expression instances, engine-wide, via the configuration described in Section 14.4.14.1, “Followed-By Operator Maximum Subexpression Count”.

When the limit is reached the pattern engine issues a notification object to any condition handlers registered with the engine as described in Section 13.11, “Condition Handling”. Depending on your configuration the engine can prevent the start of a new pattern sub-expression instance for the right-hand side pattern sub-expression, until pattern sub-expression instances end or statements are stopped or destroyed.

The notification object issued to condition handlers is an instance of com.espertech.esper.client.hook.ConditionPatternEngineSubexpressionMax. The notification object contains information which statement triggered the limit and the pattern counts per statement for all statements.

For information on static and runtime configuration, please consult Section 14.4.14.1, “Followed-By Operator Maximum Subexpression Count”. The limit can be changed and disabled or enabled at runtime via the runtime configuration API.

6.5.9. Pattern Guards

Guards are where-conditions that control the lifecycle of subexpressions. Custom guard functions can also be used. The section Chapter 16, Integration and Extension outlines guard plug-in development in greater detail.

The pattern guard where-condition has no relationship to the EPL where clause that filters sets of events.

Take as an example the following pattern expression:

MyEvent where timer:within(10 sec)

In this pattern the timer:within guard controls the subexpression that is looking for MyEvent events. The guard terminates the subexpression looking for MyEvent events after 10 seconds after start of the pattern. Thus the pattern alerts only once when the first MyEvent event arrives within 10 seconds after start of the pattern.

The every keyword requires additional discussion since it also controls subexpression lifecycle. Let's add the every keyword to the example pattern:

every MyEvent where timer:within(10 sec)

The difference to the pattern without every is that each MyEvent event that arrives now starts a new subexpression, including a new guard, looking for a further MyEvent event. The result is that, when a MyEvent arrives within 10 seconds after pattern start, the pattern execution will look for the next MyEvent event to arrive within 10 seconds after the previous one.

By placing parentheses around the every keyword and its subexpression, we can have the every under the control of the guard:

(every MyEvent) where timer:within(10 sec)

In the pattern above, the guard terminates the subexpression looking for all MyEvent events after 10 seconds after start of the pattern. This pattern alerts for all MyEvent events arriving within 10 seconds after pattern start, and then stops.

Guards do not change the truth value of the subexpression of which the guard controls the lifecycle, and therefore do not cause a restart of the subexpression when used with the every operator. For example, the next pattern stops returning matches after 10 seconds unless a match occurred within 10 seconds after pattern start:

every ( (A and B) where timer:within(10 sec) )

6.5.9.1. The timer:within Pattern Guard

The timer:within guard acts like a stopwatch. If the associated pattern expression does not turn true within the specified time period it is stopped and permanently false.

The synopsis for timer:within is as follows:

timer:within(time_period_expression)

The time_period_expression is a time period (see Section 5.2.1, “Specifying Time Periods”) or an expression providing a number of seconds as a parameter. The interval expression may contain references to properties of prior events in the same pattern as well as variables and substitution parameters.

This pattern fires if an A event arrives within 5 seconds after statement creation.

A where timer:within (5 seconds)

This pattern fires for all A events that arrive within 5 seconds. After 5 seconds, this pattern stops matching even if more A events arrive.

(every A) where timer:within (5 seconds)

This pattern matches for any one A or B event in the next 5 seconds.

( A or B ) where timer:within (5 sec)

This pattern matches for any 2 errors that happen 10 seconds within each other.

every (StatusEvent(status='ERROR') -> StatusEvent(status='ERROR') where timer:within (10 sec))

The following guards are equivalent:

timer:within(2 minutes 5 seconds)
timer:within(125 sec)
timer:within(125)

6.5.9.2. The timer:withinmax Pattern Guard

The timer:withinmax guard is similar to the timer:within guard and acts as a stopwatch that additionally has a counter that counts the number of matches. It ends the subexpression when either the stopwatch ends or the match counter maximum value is reached.

The synopsis for timer:withinmax is as follows:

timer:withinmax(time_period_expression, max_count_expression)

The time_period_expression is a time period (see Section 5.2.1, “Specifying Time Periods”) or an expression providing a number of seconds.

The max_count_expression provides the maximum number of matches before the guard ends the subexpression.

Each parameter expression may also contain references to properties of prior events in the same pattern as well as variables and substitution parameters.

This pattern fires for every A event that arrives within 5 seconds after statement creation but only up to the first two A events:

(every A) where timer:withinmax (5 seconds, 2)

If the result of the max_count_expression is 1, the guard ends the subexpression after the first match and indicates the first match.

This pattern fires for the first A event that arrives within 5 seconds after statement creation:

(every A) where timer:withinmax (5 seconds, 1)

If the result of the max_count_expression is zero, the guard ends the subexpression upon the first match and does not indicate any matches.

This example receives every A event followed by every B event (as each B event arrives) until the 5-second subexpression timer ends or X number of B events have arrived (assume X was declared as a variable):

every A -> (every B) where timer:withinmax (5 seconds, X)

6.5.9.3. The while Pattern Guard

The while guard is followed by an expression that the engine evaluates for every match reported by the guard pattern sub-expression. When the expression returns false the pattern sub-expression ends.

The synopsis for while is as follows:

while (guard_expression)

The guard_expression is any expression that returns a boolean true or false. The expression may contain references to properties of prior events in the same pattern as well as variables and substitution parameters.

Each time the subexpression indicates a match, the engine evaluates guard_expression: if it returns true, the engine passes the match; if it returns false, the engine ends the subexpression.

This pattern fires for every A event until an A event arrives that has a value of zero or less for its size property (assuming A events have an integer size property).

(every a=A) while (a.size > 0)

Note the parentheses around the every subexpression: they ensure that, following precedence rules, the guard applies to the every operator as well.

6.5.9.4. Guard Time Interval Expressions

The timer:within and timer:withinmax guards may be parameterized by an expression that contains one or more references to properties of prior events in the same pattern.

As a simple example, this pattern matches every A event followed by a B event that arrives within delta seconds after the A event:

every a=A -> b=B where timer:within (a.delta seconds)

Herein A event is assumed to have a delta property that provides the number of seconds to wait for B events. Each arriving A event may have a different value for delta and the guard is therefore parameterized dynamically based on the prior A event received.

When multiple events accumulate, for example when using the match-until or repeat pattern elements, an index must be provided:

[2] a=A -> b=B where timer:within (a[0].delta + a[1].delta)

The above pattern matches after 2 A events arrive followed by a B event within a time interval after the A event that is defined by the sum of the delta properties of both A events.

6.5.9.5. Combining Guard Expressions

You can combine guard expressions by using parentheses around each subexpression.

The pattern below matches for each A event while A events with a size greater than zero arrive, and only within the first 20 seconds:

((every a=A) while (a.size > 0)) where timer:within(20)

6.6. Pattern Atoms

6.6.1. Filter Atoms

Filter atoms have been described in section Section 6.4, “Filter Expressions In Patterns”.

6.6.2. Time-based Observer Atoms

Observers observe time-based events for which the thread-of-control originates by the engine timer or external timer event. Custom observers can also be developed that observe timer events or other engine-external application events such as a file-exists check. The section Chapter 16, Integration and Extension outlines observer plug-in development in greater detail.

6.6.2.1. timer:interval

The timer:interval observer waits for the defined time before the truth value of the observer turns true. The observer takes a time period (see Section 5.2.1, “Specifying Time Periods”) as a parameter, or an expression that returns the number of seconds.

The observer may be parameterized by an expression that contains one or more references to properties of prior events in the same pattern, or may also reference variables, substitution parameters or any other expression returning a numeric value.

After an A event arrives, wait 10 seconds and then indicate that the pattern matches.

A -> timer:interval(10 seconds) 

The pattern below fires every 20 seconds.

every timer:interval(20 sec)

The next example pattern fires for every A event that is not followed by a B event within 60 seconds after the A event arrived. The B event must have the same "id" property value as the A event.

every a=A -> (timer:interval(60 sec) and not B(id=a.id)) 

Consider the next example, which assumes that the A event has a property waittime:

every a=A -> (timer:interval(a.waittime + 2) and not B(id=a.id))

In the above pattern the logic waits for 2 seconds plus the number of seconds provided by the value of the waittime property of the A event.

6.6.2.2. timer:at

The timer:at observer is similar in function to the Unix “crontab” command. At a specified time the expression turns true. The at operator can also be made to pattern match at regular intervals by using an every operator in front of the timer:at operator.

The syntax is: timer:at (minutes, hours, days of month, months, days of week [, seconds]).

The value for seconds is optional. Each element allows wildcard * values. Ranges can be specified by means of a lower bound, then a colon ':', then the upper bound. The division operator */x can be used to specify that every xth value is valid. Combinations of these operators can be used by placing them into square brackets ([]).

The timer:at observer may also be parameterized by an expression that contains one or more references to properties of prior events in the same pattern, or may also reference variables, substitution parameters or any other expression returning a numeric value. The frequency division operator */x and parameter lists within brackets ([]) are an exception: they may only contain variables, substitution parameters or numeric values.
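For example, assuming an integer variable vMinute has been declared (the variable name is illustrative), the minute field may be driven by that variable:

every timer:at(vMinute, 13, *, *, *)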

This expression pattern matches every 5 minutes past the hour.

every timer:at(5, *, *, *, *)

The below timer:at pattern matches every 15 minutes from 8am to 5:45pm (hours 8 to 17 at 0, 15, 30 and 45 minutes past the hour) on even numbered days of the month as well as on the first day of the month.

timer:at (*/15, 8:17, [*/2, 1], *, *)

The below table outlines the fields, valid values and keywords available for each field:

Table 6.5. timer:at fields, valid values and keywords

Field Name       Mandatory?   Allowed Values               Additional Keywords
Minutes          yes          0 - 59
Hours            yes          0 - 23
Days Of Month    yes          1 - 31                       last, weekday, lastweekday
Months           yes          1 - 12
Days Of Week     yes          0 (Sunday) - 6 (Saturday)    last
Seconds          no           0 - 59

The keyword last used in the days-of-month field means the last day of the month (current month). To specify the last day of another month, a value for the month field has to be provided. For example: timer:at(*, *, last,2,*) is the last day of February.

The last keyword in the day-of-week field by itself simply means Saturday. If used in the day-of-week field after another value, it means "the last xxx day of the month" - for example "5 last" means "the last Friday of the month". So the last Friday of the current month will be: timer:at(*, *, *, *, 5 last). And the last Friday of June: timer:at(*, *, *, 6, 5 last).

The keyword weekday is used to specify the weekday (Monday-Friday) nearest the given day. A variant can include the month, as in timer:at(*, *, 30 weekday, 9, *), which is Friday September 28th (the nearest weekday does not jump over a month boundary).

The keyword lastweekday is a combination of the last and the weekday keywords, denoting the last weekday of the month. A typical example is timer:at(*, *, lastweekday, 9, *), which denotes Friday September 28th (example year is 2007).

6.6.2.2.1. timer:at and the every Operator

When using timer:at with the every operator the crontab-like timer computes the next time at which the timer should fire based on the specification and the current time. When using every, the current time is the time the timer fired or the statement start time if the timer has not fired once.

For example, this pattern fires every 1 minute starting at 1:00pm and ending at 1:59pm, every day:

every timer:at(*, 13, *, *, *)

Assume the above statement gets started at 1:05pm and 20 seconds. In such case the above pattern fires every 1 minute starting at 1:06pm and ending at 1:59pm for that day and 1:00pm to 1:59pm every following day.

To get the pattern to fire only once at 1pm every day, explicitly specify the minute to start. The pattern below fires every day at 1:00pm:

every timer:at(0, 13, *, *, *)

By specifying a second resolution the timer can be made to fire every second, for instance:

every timer:at(*, *, *, *, *, *)

Chapter 7. EPL Reference: Match Recognize

7.1. Overview

Using match recognize, patterns are defined in the familiar syntax of regular expressions.

The match recognize syntax presents an alternative way to specify pattern detection as compared to the EPL pattern language described in the previous chapter. A comparison of match recognize and EPL patterns is below.

The match recognize syntax is a proposal for incorporation into the SQL standard. It is thus subject to change as the standard evolves and finalizes (it has not finalized yet). Please consult row-pattern-recogniton-11-public for further information.

You may be familiar with regular expressions in the context of finding text of interest in a string, such as particular characters, words, or patterns of characters. Instead of matching characters, match recognize matches sequences of events of interest.

Esper can apply match-recognize patterns in real-time upon arrival of new events in a stream of events (also termed incrementally, streaming or continuous). Esper can also match patterns on-demand via the iterator pull-API, if specifying a named window or data window on a stream.

7.2. Comparison of Match Recognize and EPL Patterns

This section compares pattern detection via match recognize and via the EPL pattern language.

Table 7.1. Comparison Match Recognize to EPL Patterns

Category: Purpose
  EPL Patterns:    Pattern detection in sequences of events.
  Match Recognize: Same.

Category: Standards
  EPL Patterns:    Not standardized, similar to Rapide pattern language.
  Match Recognize: Proposal for incorporation into the SQL standard.

Category: Real-time Processing
  EPL Patterns:    Yes.
  Match Recognize: Yes.

Category: On-Demand query via Iterator
  EPL Patterns:    No.
  Match Recognize: Yes.

Category: Language
  EPL Patterns:    Nestable expressions consisting of boolean AND, OR, NOT and time or arrival-based constructs such as -> (followed-by), timer:within and timer:interval.
  Match Recognize: Regular expression consisting of variables, each representing conditions on events.

Category: Event Types
  EPL Patterns:    An EPL pattern may react to multiple different types of events.
  Match Recognize: The input is a single type of event (unless used with variant streams).

Category: Data Window Interaction
  EPL Patterns:    Disconnected, i.e. an event leaving a data window does not change pattern state.
  Match Recognize: Connected, i.e. an event leaving a data window removes the event from match selection.

Category: Semantic Evaluation
  EPL Patterns:    Truth-value based: An EPL pattern such as (A and B) can fire when a single event arrives that satisfies both the A and B conditions.
  Match Recognize: Sequence-based: A regular expression (A B) requires at least two events to match.

Category: Time Relationship Between Events
  EPL Patterns:    The timer:within, timer:interval and NOT operators can expressively search for absence of events or other more complex timing relationships.
  Match Recognize: Some support for detecting absence of events using the interval clause.

Category: Extensibility
  EPL Patterns:    Custom pattern objects, user-defined functions.
  Match Recognize: User-defined functions, custom aggregation functions.

Category: Memory Use
  EPL Patterns:    Likely between 500 bytes and 2k per open sequence, depending on the pattern.
  Match Recognize: Likely between 100 bytes and 1k per open sequence, depending on the pattern.

7.3. Syntax

The synopsis is as follows:

match_recognize (
  [ partition by partition_expression [, partition_expression] [,...]  ]
  measures  measure_expression as col_name [, measure_expression as col_name ] [,...]
  [ all matches ]
  [ after match skip (past last row | to next row | to current row) ]
  pattern ( variable_regular_expr [, variable_regular_expr] [,...] )
  [ interval time_period ]
  define  variable as variable_condition [, variable as variable_condition]  [,...] 
)

The match_recognize keyword starts the match recognize definition and occurs right after the from clause in an EPL select statement. It is followed by parenthesis that surround the match recognize definition.

Partition by is optional and may be used to specify that events are to be partitioned by one or more event properties or expressions. If there is no partition by, then all events constitute a single partition. The regular expression applies to events in the same partition and not across partitions.

The measures clause defines columns that contain expressions over the pattern variables. The expressions can reference partition columns, singleton variables, aggregates as well as indexed properties on the group variables. Each measure_expression expression must be followed by the as keyword and a col_name column name.

The all matches keywords are optional and instruct the engine to find all possible matches. By default matches are ranked and the engine returns a single match following an algorithm to eliminate duplicate matches, as described below. When specifying all matches, matches may overlap and may start at the same row.

The after match skip keywords are optional and serve to determine the resumption point of pattern matching after a match has been found. By default the behavior is after match skip past last row. This means that after eliminating duplicate matches, the logic skips to resume pattern matching at the next event after the last event of the current match.
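To choose a different resumption point, specify the skip clause explicitly. The following sketch, which uses the TemperatureSensorEvent sample event introduced in the next section, resumes matching at the event after the first event of the current match:

select * from TemperatureSensorEvent
match_recognize (
  partition by device
  measures A.id as a_id, B.id as b_id
  after match skip to next row
  pattern (A B)
  define B as B.temp > A.temp
)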

The pattern component is used to specify a regular expression. The regular expression is built from variable names, and may use operators such as the *, +, ?, *?, +? and ?? quantifiers and the | alternation (concatenation is indicated by the absence of any operator sign between two successive items in a pattern).

With the optional interval keyword and time period you can control how long the engine should wait for further events to arrive that may be part of a matching event sequence, before indicating a match (or matches) (not applicable to on-demand pattern matching).

Define is a mandatory component, used to specify the boolean conditions that define the variable names declared in the pattern. A variable name does not require a definition; if there is no definition, the default is a predicate that is always true. Such a variable name can be used to match any row.

7.3.1. Syntax Example

For illustration, the examples in this chapter use the TemperatureSensorEvent event. Each event has 3 properties: the id property is a unique event id, the device is a sensor device number and the temp property is a temperature reading. An event described as "id=E1, device=1, temp=100" is a TemperatureSensorEvent event with id "E1" for device 1 with a reading of 100.

This example statement looks for two TemperatureSensorEvent events from the same device, directly following each other, that indicate a jump in temperature of 10 or more between the two events:

select * from TemperatureSensorEvent
match_recognize (
  partition by device
  measures A.id as a_id, B.id as b_id, A.temp as a_temp, B.temp as b_temp
  pattern (A B)
  define 
    B as Math.abs(B.temp - A.temp) >= 10
)

The partition by ensures that the regular expression applies to sequences of events from the same device.

The measures clause provides a list of properties or expressions to be selected from matching events. Each property name must be prefixed by the variable name.

In the pattern component the statement declares two variables: A and B. As a matter of convention, variable names are uppercase characters.

The define clause specifies no condition for variable A. This means that A defaults to true and any event matches A. Therefore, the pattern can start at any event.

The pattern A B indicates to look for a pattern in which an event that fulfills the condition for variable A is immediately followed by an event that fulfills the condition for variable B. A pattern thus defines the sequence (or sequences) of conditions that must be met for the pattern to fire.

The table below shows an example sequence of events and the output of the pattern:

Table 7.2. Example

Arrival Time   Tuple                        Output Event (if any)
1000           id=E1, device=1, temp=50
2000           id=E2, device=1, temp=55
3000           id=E3, device=1, temp=60
4000           id=E4, device=1, temp=70     a_id=E3, b_id=E4, a_temp=60, b_temp=70
5000           id=E5, device=1, temp=85
6000           id=E6, device=1, temp=85
7000           id=E7, device=2, temp=100

At time 4000 when event with id E4 (or event E4 or just E4 for short) arrives the pattern matches and produces an output event. Matching then skips past the last event of the current match (E4) and begins at event E5 (the default skip clause is past last row). Therefore events E4 and E5 do not constitute a match.

At time 3000, events E1 and E3 do not constitute a match as E3 does not immediately follow E1, since there is E2 in between.

At time 7000, event E7 does not constitute a match as it is from device 2 and thereby not in the same partition as prior events.

7.4. Pattern and Pattern Operators

The pattern specifies the pattern to be recognized in the ordered sequence of events in a partition using regular expression syntax. Each variable name in a pattern corresponds to a boolean condition, which is specified later using the define component of the syntax. Thus the pattern can be regarded as implicitly declaring one or more variable names; the definition of those variable names appears later in the syntax. If a variable is not defined the variable defaults to true.

It is permitted for a variable name to occur more than once in a pattern, for example pattern (A B A).
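Since a variable that occurs more than once matches multiple events, it becomes a group variable and is selected by index in the measures clause (see Section 7.4.5, “Variables Can be Singleton or Group”). A minimal sketch using the TemperatureSensorEvent sample, with illustrative temperature thresholds:

select * from TemperatureSensorEvent
match_recognize (
  partition by device
  measures A[0].id as a0_id, B.id as b_id, A[1].id as a1_id
  pattern (A B A)
  define 
    A as A.temp < 100,
    B as B.temp >= 100
)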

7.4.1. Operator Precedence

The operators at the top of this table take precedence over operators lower on the table.

Table 7.3. Match Recognize Regular Expression Operator Precedence

Precedence   Operator                        Description     Example
1            Grouping                        ( )             (A B)
2            Single-character duplication    * + ?           A* B+ C?
3            Concatenation                   (no operator)   A B
4            Alternation                     |               A | B

If you are not sure about the precedence, please consider placing parentheses () around your groups. Parentheses can also help make expressions easier to read and understand.

7.4.2. Concatenation

The concatenation is indicated by the absence of any operator sign between two successive items in a pattern.

In the pattern below the two items A and B have no operator between them. The pattern matches for any event immediately followed by an event from the same device that indicates a temperature jump of 10 or more:

select * from TemperatureSensorEvent
match_recognize (
  partition by device
  measures A.id as a_id, B.id as b_id, A.temp as a_temp, B.temp as b_temp
  pattern (A B)
  define 
    B as Math.abs(B.temp - A.temp) >= 10
)

Please see the Section 7.3.1, “Syntax Example” for a sample event sequence.

7.4.3. Alternation

The alternation operator is a vertical bar ( | ).

The alternation operator has the lowest precedence of all operators. It tells the engine to match either everything to the left of the vertical bar, or everything to the right of the vertical bar. If you want to limit the reach of the alternation, you will need to use round brackets for grouping.

This example pattern looks for a sequence of an event with a temperature of 50 or higher followed immediately by either an event with a temperature of 45 or lower or an event that indicates the temperature jumped by 10 or more (all for the same device):

select * from TemperatureSensorEvent
match_recognize (
  partition by device
  measures A.id as a_id, B.id as b_id, C.id as c_id
  pattern (A (B | C))
  define 
    A as A.temp >= 50,
    B as B.temp <= 45,
    C as Math.abs(C.temp - A.temp) >= 10)

The table below shows an example sequence of events and the output of the pattern:

Table 7.4. Example

Arrival Time   Tuple                        Output Event (if any)
1000           id=E1, device=1, temp=50
2000           id=E2, device=1, temp=45     a_id=E1, b_id=E2, c_id=null
3000           id=E3, device=1, temp=46
4000           id=E4, device=1, temp=48
5000           id=E5, device=1, temp=50
6000           id=E6, device=1, temp=60     a_id=E5, b_id=null, c_id=E6

7.4.4. Quantifiers Overview

Quantifiers are postfix operators with the following choices:

Table 7.5. Quantifiers

Quantifier   Meaning
*            Zero or more matches (greedy).
+            One or more matches (greedy).
?            Zero or one match (greedy).
*?           Zero or more matches (reluctant).
+?           One or more matches (reluctant).
??           Zero or one match (reluctant).

7.4.5. Variables Can be Singleton or Group

A singleton variable is a variable in a pattern that does not have a quantifier or has a zero-or-one quantifier (? or ??) and occurs only once in the pattern (except with alternation). In the measures clause a singleton variable can be selected as:

variableName.propertyName

Variables with a zero-or-more or one-or-more quantifier, or variables that occur in multiple places in a pattern (except when using alternation), may match multiple events and are group variables. In the measures clause a group variable must be selected either by providing an index or via any of the aggregation functions, such as first, last, count and sum:

variableName[index].propertyName
last(variableName.propertyName)

Please find examples of singleton and group variables and example measures clauses below.

7.4.5.1. Additional Aggregation Functions

For group variables all existing aggregation functions can be used and in addition the following aggregation functions may be used:

Table 7.6. Syntax and results of aggregate functions

Aggregate Function                    Result
first([all|distinct] expression)      Returns the first value.
last([all|distinct] expression)       Returns the last value.

7.4.6. Eliminating Duplicate Matches

The execution of match recognize is continuous and real-time by default. This means that every arriving event, or batch of events if using batching, evaluates against the pattern and matches are immediately indicated. Elimination of duplicate matches occurs between all matches of the arriving events (or batch of events) at a given time.

As an alternative, and if your application does not require continuous pattern evaluation, you may use the iterator API to perform on-demand matching of the pattern. For the purpose of indicating to the engine to not generate continuous results, specify the @Hint('iterate_only') hint.
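A minimal sketch of an on-demand-only statement, assuming the TemperatureSensorEvent sample and a time window to bind the stream; the application then obtains matches via the statement's iterator rather than through update listeners:

@Hint('iterate_only')
select * from TemperatureSensorEvent.win:time(60 sec)
match_recognize (
  partition by device
  measures A.id as a_id, B.id as b_id
  pattern (A B)
  define B as B.temp > A.temp
)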

When using one-or-more, zero-or-more or zero-or-one quantifiers (?, +, *, ??, +?, *?), the output of the real-time continuous query can differ from the output of the on-demand iterator execution: The continuous query will output a match (or multiple matches) as soon as matches are detected at a given time upon arrival of events (not knowing what further events may arrive). The on-demand execution, since it knows all possible events in advance, can determine the longest match(es). Thus elimination of duplicate matches can lead to different results between real-time and on-demand use.

If the all matches keywords are specified, then all matches are returned as the result and no elimination of duplicate matches as below occurs.

Otherwise matches to a pattern in a partition are ordered by preferment. Preferment is given to matches based on the following priorities:

  1. A match that begins at an earlier row is preferred over a match that begins at a later row.

  2. Of two matches matching a greedy quantifier, the longer match is preferred.

  3. Of two matches matching a reluctant quantifier, the shorter match is preferred.

After ranking matches by preferment, matches are chosen as follows:

  1. The first match by preferment is taken.

  2. The pool of matches is reduced as follows based on the SKIP TO clause: If SKIP PAST LAST ROW is specified, all matches that overlap the first match are discarded from the pool. If SKIP TO NEXT ROW is specified, then all matches that overlap the first row of the first match are discarded. If SKIP TO CURRENT ROW is specified, then no matches are discarded.

  3. The first match by preferment of the ones remaining is taken.

  4. Step 2 is repeated to remove more matches from the pool.

  5. Steps 3 and 4 are repeated until there are no remaining matches in the pool.

7.4.7. Greedy Or Reluctant

Reluctant quantifiers are indicated by an additional question mark (*?, +?, ??). Reluctant quantifiers try to match as few rows as possible, whereas non-reluctant quantifiers are greedy and try to match as many rows as possible.

Greedy and reluctant come into play only for match selection among multiple possible matches. When specifying all matches there is no difference between greedy and reluctant quantifiers.

Consider the below example. The conditions may overlap: an event with a temperature reading of 105 and over matches both A and B conditions:

select * from TemperatureSensorEvent
match_recognize (
  partition by device
  measures A.id as a_id, B.id as b_id
  pattern (A?? B?)
  define 
    A as A.temp >= 100,
    B as B.temp >= 105)

A sample sequence of events and pattern matches:

Table 7.7. Example

Arrival Time   Tuple                         Output Event (if any)
1000           id=E1, device=1, temp=99
2000           id=E2, device=2, temp=106     a_id=null, b_id=E2
3000           id=E3, device=1, temp=100     a_id=E3, b_id=null

As the ? qualifier on condition B is greedy, event E2 matches the pattern and is indicated as a B event by the measures clause (and not as an A event, therefore a_id is null).

7.4.8. Quantifier - One Or More (+ and +?)

The one-or-more quantifier (+) must be matched one or more times by events. The operator is greedy and the reluctant version is +?.

In the below example with pattern (A+ B+) the pattern consists of two variable names, A and B, each of which is quantified with +, indicating that they must be matched one or more times.

The pattern looks for one or more events in which the temperature is over 100 followed by one or more events indicating a higher temperature:

select * from TemperatureSensorEvent
match_recognize (
  partition by device
  measures first(A.id) as first_a, last(A.id) as last_a, B[0].id as b0_id, B[1].id as b1_id
  pattern (A+ B+)
  define 
	A as A.temp >= 100,
	B as B.temp > A.temp)

An example sequence of events that matches the pattern above is:

Table 7.8. Example

Arrival Time   Tuple                         Output Event (if any)
1000           id=E1, device=1, temp=99
2000           id=E2, device=1, temp=100
3000           id=E3, device=1, temp=100
4000           id=E4, device=1, temp=101     first_a = E2, last_a = E3, b0_id = E4, b1_id = null
5000           id=E5, device=1, temp=102

Note that for continuous queries there is no match that includes event E5, since after the pattern matches at E4 the pattern skips to start fresh at E5 (per the default skip clause). When performing on-demand matching via the iterator, event E5 gets included in the match and the output is first_a = E2, last_a = E3, b0_id = E4, b1_id = E5.

7.4.9. Quantifier - Zero Or More (* and *?)

The zero-or-more quantifier (*) must be matched zero or more times by events. The operator is greedy and the reluctant version is *?.

The pattern looks for a sequence of events in which the temperature starts out below 50 and then stays between 50 and 60 and finally comes over 60:

select * from TemperatureSensorEvent
match_recognize (
  partition by device
  measures A.id as a_id, count(B.id) as count_b, C.id as c_id
  pattern (A B* C)
  define 
	A as A.temp < 50,
	B as B.temp between 50 and 60,
	C as C.temp > 60)

An example sequence of events that matches the pattern above is:

Table 7.9. Example

Arrival Time   Tuple                         Output Event (if any)
1000           id=E1, device=1, temp=55
2000           id=E2, device=1, temp=52
3000           id=E3, device=1, temp=49
4000           id=E4, device=1, temp=51
5000           id=E5, device=1, temp=55
6000           id=E6, device=1, temp=61      a_id=E3, count_b=2, c_id=E6

7.4.10. Quantifier - Zero Or One (? and ??)

The zero-or-one quantifier (?) must be matched zero or one time by events. The operator is greedy and the reluctant version is ??.

The pattern looks for a sequence of events in which the temperature is below 50 and then dips to over 50 and then to under 50 before indicating a value over 55:

select * from TemperatureSensorEvent
match_recognize (
  partition by device
  measures A.id as a_id, B.id as b_id, C.id as c_id, D.id as d_id
  pattern (A B? C? D)
  define 
	A as A.temp < 50,
	B as B.temp > 50,
	C as C.temp < 50,
	D as D.temp > 55)

An example sequence of events that matches the pattern above is:

Table 7.10. Example

Arrival Time   Tuple                         Output Event (if any)
1000           id=E1, device=1, temp=44
2000           id=E2, device=1, temp=49
3000           id=E3, device=1, temp=51
4000           id=E4, device=1, temp=49
5000           id=E5, device=1, temp=56      a_id=E2, b_id=E3, c_id=E4, d_id=E5
6000           id=E6, device=1, temp=61

7.5. Define Clause

Within define are listed the boolean conditions that defines a variable name that is declared in the pattern.

A variable name does not require a definition and if there is no definition, the default is a predicate that is always true. Such a variable name can be used to match any row.

The definitions of variable names may reference the same or other variable names as prior examples have shown.

If a variable in your condition expression is a singleton variable, then only individual columns may be referenced. If the variable is not matched by an event, a null value is returned.

If a variable in your condition expression is a group variable, then only indexed columns may be referenced. If the variable is not matched by an event, a null value is returned.

Aggregation functions are not allowed within expressions of the define clause.

7.5.1. The Prev Operator

The prev function may be used in a define expression to access columns of the previous row of a variable name. If there is no previous row, the null value is returned.

The prev function can accept an optional non-negative integer argument indicating the offset to the previous rows. That argument must be a constant. In this case, the engine returns the property from the N-th row preceding the current row, and if the row doesn’t exist, it returns null.

This function can access variables currently defined, for example:

Y as Y.price < prev(Y.price, 2)

It is not legal to use prev with another variable than the one being defined:

// not allowed
Y as Y.price < prev(X.price, 2)

The prev function returns properties of events in the same partition. Also, it returns properties of events according to event order-of-arrival. When using data windows or deleting events from a named window, the remove stream does not remove events from the prev function.

The pattern looks for an event with a temperature over 100 for which the event that precedes it by 2 events (in the same partition) also had a temperature over 100:

select * from TemperatureSensorEvent
match_recognize (
  partition by device
  measures A.id as a_id
  pattern (A)
  define 
	A as A.temp > 100 and prev(A.temp, 2) > 100)

An example sequence of events that matches the pattern above is:

Table 7.11. Example

Arrival Time   Tuple                          Output Event (if any)
1000           id=E1, device=1, temp=98
2000           id=E2, device=1, temp=101
3000           id=E3, device=1, temp=101
4000           id=E4, device=1, temp=99
5000           id=E5, device=1, temp=101      a_id=E5

7.6. Measure Clause

The measures clause defines exported columns that contain expressions over the pattern variables. The expressions can reference partition columns, singleton variables and any aggregation functions including last and first on the group variables.

Expressions in the measures clause must use the as keyword to assign a column name.

If a variable is a singleton variable then only individual columns may be referenced, not aggregates. If the variable is not matched by an event, a null value is returned.

If a variable is a group variable and used in an aggregate, then the aggregate is performed over all rows that have matched the variable. If a group variable is not in an aggregate function, its variable name must be post-fixed with an index. See Section 7.4.5, “Variables Can be Singleton or Group” for more information.
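For example, assuming a pattern (A B+) in which A is a singleton variable and B a group variable, a measures clause may combine individual-column access, indexed access and aggregates (column names are illustrative):

measures A.id as a_id, B[0].id as b0_id, count(B.id) as count_b, last(B.temp) as last_temp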

7.7. Datawindow-Bound

When using match recognize with a named window or a stream bound by a data window, removing an event from the named window or data window also removes any match-in-progress that includes the removed event(s).

The next example looks for four sensor events from the same device immediately following each other and indicating a rising temperature, but only events that arrived in the last 10 seconds are considered:

select * from TemperatureSensorEvent.win:time(10 sec)
match_recognize (
  partition by device
  measures A.id as a_id, B.id as b_id, C.id as c_id, D.id as d_id
  pattern (A B C D)
  define 
    B as B.temp > A.temp,
    C as C.temp > B.temp,
    D as D.temp > C.temp)

An example sequence of events that matches the pattern above is:

Table 7.12. Example

Arrival Time   Tuple                         Output Event (if any)
1000           id=E1, device=1, temp=80
2000           id=E2, device=1, temp=81
3000           id=E3, device=1, temp=82
4000           id=E4, device=1, temp=81
7000           id=E5, device=1, temp=82
9000           id=E6, device=1, temp=83
13000          id=E7, device=1, temp=84      a_id=E4, b_id=E5, c_id=E6, d_id=E7
15000          id=E8, device=1, temp=84
20000          id=E9, device=1, temp=85
21000          id=E10, device=1, temp=86
26000          id=E11, device=1, temp=87

Note that E8, E9, E10 and E11 do not constitute a match since E8 leaves the data window at 25000.

7.8. Interval

With the optional interval keyword and time period you can control how long the engine should wait for further events to arrive that may be part of a matching event sequence, before indicating a match (or matches). This is not applicable to on-demand pattern matching.

The interval timer starts at the arrival of the first event matching a sequence for a partition. When the time interval passes and an event sequence matches, duplicate matches are eliminated and output occurs.

The next example looks for sensor events indicating a temperature of over 100, waiting for any number of additional events with a temperature of over 100 for 5 seconds before indicating a match:

select * from TemperatureSensorEvent
match_recognize (
  partition by device
  measures A.id as a_id, count(B.id) as count_b, first(B.id) as first_b, last(B.id) as last_b
  pattern (A B*)
  interval 5 seconds
  define 
    A as A.temp > 100,
    B as B.temp > 100)

An example sequence of events that matches the pattern above is:

Table 7.13. Example

Arrival Time   Tuple                         Output Event (if any)
1000           id=E1, device=1, temp=98
2000           id=E2, device=1, temp=101
3000           id=E3, device=1, temp=102
4000           id=E4, device=1, temp=104
5000           id=E5, device=1, temp=104
7000                                         a_id=E2, count_b=3, first_b=E3, last_b=E5

Notice that the engine waits 5 seconds (5000 milliseconds) after the arrival time of the first event E2 of the match at 2000, to indicate the match at 7000.

7.9. Use with Different Event Types

You may match different types of events using match-recognize by following any of these strategies:

  1. Declare a variant stream.

  2. Declare a supertype for your event types in the create schema syntax.

  3. Have your event classes implement a common interface or extend a common base class.

A short example that demonstrates variant streams and match-recognize is listed below:

// Declare one sample type
create schema S0 as (col string)
// Declare second sample type
create schema S1 as (col string)
// Declare variant stream holding either type
create variant schema MyVariantStream as S0, S1
// Populate variant stream
insert into MyVariantStream select * from S0
// Populate variant stream
insert into MyVariantStream select * from S1
// Simple pattern to match S0 S1 pairs
select * from MyVariantStream.win:time(1 min)
match_recognize (
  measures A.id? as a, B.id? as b
  pattern (A B)
  define
    A as typeof(A) = 'S0',
    B as typeof(B) = 'S1'
)

7.10. Limitations

Please note the following limitations:

  1. Subqueries are not allowed in expressions within match_recognize.

  2. Joins and outer joins are not allowed in the same statement as match_recognize.

  3. match_recognize may not be used within on-select or on-insert statements.

  4. When using match_recognize on unbound streams (no data window provided) the iterator pull API returns no rows.

  5. A Statement Object Model API for match_recognize is not yet available.

Chapter 8. EPL Reference: Operators

Esper arithmetic and logical operator precedence follows Java standard arithmetic and logical operator precedence.

8.1. Arithmetic Operators

The below table outlines the arithmetic operators available.

Table 8.1. Syntax and results of arithmetic operators

Operator        Description
+, -

As unary operators they denote a positive or negative expression. As binary operators they add or subtract.

*, /

Multiplication and division are binary operators.

%

Modulo binary operator.

8.2. Logical And Comparison Operators

The below table outlines the logical and comparison operators available.

Table 8.2. Syntax and results of logical and comparison operators

Operator        Description
NOT

Returns true if the following condition is false, returns false if it is true.

OR

Returns true if either component condition is true, returns false if both are false.

AND

Returns true if both component conditions are true, returns false if either is false.

=, !=, <, >, <=, >=, is, is not

Comparison.

8.2.1. Null-Value Comparison Operators

The null value is a special value, see http://en.wikipedia.org/wiki/Null_(SQL) (source: Wikipedia) for more information.

Comparisons involving null follow SQL three-valued logic, therefore the following expressions all return null:

2 != null
null = null
2 != null or 1 = 2
2 != null and 2 = 2

Use the is and is not operators for comparing values that can be null.

The following expressions all return true:

2 is not null
null is not 2
null is null
2 is 2

The engine allows is and is not with any expression, not only in connection with the null constant.
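For example, the following statement returns only ProductOrder events whose category value is present and is not the value 'none' ('none' being a hypothetical placeholder value):

select * from ProductOrder where category is not null and category is not 'none'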

8.3. Concatenation Operators

The below table outlines the concatenation operators available.

Table 8.3. Syntax and results of concatenation operators

Operator        Description
||

Concatenates character strings.

8.4. Binary Operators

The below table outlines the binary operators available.

Table 8.4. Syntax and results of binary operators

Operator        Description
&

Bitwise AND if both operands are numbers; conditional AND if both operands are boolean.

|

Bitwise OR if both operands are numbers; conditional OR if both operands are boolean.

^

Bitwise exclusive OR (XOR).
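As a sketch, assuming a hypothetical StatusEvent type with an integer-typed flags property and boolean-typed active and verified properties, the following statement uses the bitwise form on the numeric operands and the conditional form on the boolean operands:

select flags & 15 as lowBits, active & verified as bothTrue from StatusEvent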

8.5. Array Definition Operator

The { and } curly braces are array definition operators following the Java array initialization syntax. Arrays can be useful to pass to user-defined functions or to select array data in a select clause.

Array definitions consist of zero or more expressions within curly braces. Any type of expression is allowed within array definitions including constants, arithmetic expressions or event properties. This is the syntax of an array definition:

{ [expression [,expression...]] }

Consider the next statement that returns an event property named actions. The engine populates the actions property as an array of java.lang.String values with a length of 2 elements. The first element of the array contains the observation property value and the second element the command property value of RFIDEvent events.

select {observation, command} as actions from RFIDEvent

The engine determines the array type based on the types returned by the expressions in the array definition. For example, if all expressions in the array definition return integer values then the type of the array is java.lang.Integer[]. If the types returned by all expressions are compatible number types, such as integer and double values, the engine coerces the array element values and returns a suitable type, java.lang.Double[] in this example. The type of the array returned is Object[] if the types of expressions cannot be coerced or return object values. Null values can also be used in an array definition.
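The following sketch illustrates the coercion rules using constant expressions:

// all expressions return integer values, the array type is java.lang.Integer[]
select {1, 2, 3} as intArray from RFIDEvent
// integer and double values are coerced, the array type is java.lang.Double[]
select {1, 2.5, 3} as doubleArray from RFIDEvent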

Arrays can come in handy for use as parameters to user-defined functions:

select * from RFIDEvent where Filter.myFilter(zone, {1,2,3})

8.6. Dot Operator

You can use the dot operator to invoke a method on the result of an expression. The dot operator uses the dot (.) or period character.

The dot-operator is relevant with enumeration methods: Enumeration methods perform tasks such as transformation, filtering, aggregation, sequence-matching, sorting and others on subquery results, named windows, event properties or inputs that are or can be projected to a collection of events, scalar values or objects. See Chapter 10, EPL Reference: Enumeration Methods.

Further the dot-operator is relevant to date-time methods. Date-time methods work on date-time values to add or subtract time periods, set or round calendar fields or query fields, among other tasks. See Chapter 11, EPL Reference: Date-Time Methods.

This section only describes the dot-operator in relation to property instance methods, the special get and size indexed-property methods and duck typing.

The synopsis for the dot operator is as follows:

expression.method([parameter [,...]])[.method(...)][...]

The expression to evaluate by the dot operator is in parentheses. After the dot character follow the method name and method parameters in parentheses.

You may use the dot operator when your expression returns an object that you want to invoke a method on. The dot operator allows duck typing and convenient array and collection access methods.

This example statement invokes the getZones method of the RFID event class by referring to the stream name assigned in the from-clause:

select rfid.getZones() from RFIDEvent as rfid

The size() method can be used to return the array length or collection size. Use the get method to return the value at a given index for an array or collection.

The next statement selects array size and returns the last array element:

select arrayproperty.size() as arraySize, 
  arrayproperty.get(arrayproperty.size() - 1) as lastInArray 
  from ProductEvent

8.6.1. Duck Typing

Duck typing is when the engine checks at runtime for the existence of a method regardless of object class inheritance hierarchies. This can be useful, for example, when a dynamic property returns an object which may or may not provide a method to return the desired value.

Duck typing is disabled in the default configuration to consistently enforce strong typing. Please enable duck typing via engine expression settings as described in Section 14.4.20, “Engine Settings related to Expression Evaluation”.

The statement below selects a dynamic property by name productDesc and invokes the getCounter() method if that method exists on the property value, or returns the null value if the method does not exist for the dynamic property value or if the dynamic property value itself is null:

select (productDesc?).getCounter() as arraySize from ProductEvent

8.7. The 'in' Keyword

The in keyword determines if a given value matches any value in a list. The syntax of the keyword is:

test_expression [not] in (expression [,expression...] )

The test_expression is any valid expression. The keyword is followed by a list of expressions to test for a match. The optional not keyword specifies that the result of the predicate be negated.

The result of an in expression is of type Boolean. If the value of test_expression is equal to any expression from the comma-separated list, the result value is true. Otherwise, the result value is false.

The next example shows how the in keyword can be applied to select certain command types of RFID events:

select * from RFIDEvent where command in ('OBSERVATION', 'SIGNAL')

The statement is equivalent to:

select * from RFIDEvent where command = 'OBSERVATION' or command = 'SIGNAL'

Expression may also return an array, a java.util.Collection or a java.util.Map. Thus event properties that are lists, sets or maps may provide values to compare against test_expression.

All expressions must be of the same type or a compatible type to test_expression. The in keyword may coerce number values to compatible types. If expression returns an array, then the component type of the array must be compatible, unless the component type of the array is Object.

If expression returns an array of component type Object, the operation compares each element of the array, applying equals semantics.

If expression returns a Collection, the operation determines if the collection contains the value returned by test_expression, applying contains semantics.

If expression returns a Map, the operation determines if the map contains the key value returned by test_expression, applying containsKey semantics.
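For example, assuming a hypothetical OrderEvent type that carries a java.util.Collection-typed tags property and a java.util.Map-typed attributes property, the following statements apply contains and containsKey semantics respectively:

select * from OrderEvent where 'priority' in (tags)
select * from OrderEvent where 'customerId' in (attributes)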

Constants, arrays, Collection and Map expressions or event properties can be used in combination.

For example, and assuming a property named 'mySpecialCmdList' exists that contains a list of command strings:

select * from RFIDEvent where command in ( 'OBSERVATION', 'SIGNAL', mySpecialCmdList)

When using prepared statements and substitution parameters with the in keyword, make sure to retain the parenthesis. Substitution values may also be arrays, Collection and Map values:

test_expression [not] in (? [,?...] )

Note that if there are no successes and at least one right-hand row yields null for the operator's result, the result of the in construct will be null, not false. This is in accordance with SQL's normal rules for Boolean combinations of null values.

8.7.1. 'in' for Range Selection

The in keyword can be used to specify ranges, including open, half-closed, half-open and inverted ranges.

Ranges come in the following 4 varieties. The round () or square [] brackets indicate whether an endpoint is included or excluded. The low point and the high point of the range are separated by the colon (:) character.

  • Open ranges that contain neither endpoint (low:high)

  • Closed ranges that contain both endpoints [low:high]. The equivalent 'between' keyword also defines a closed range.

  • Half-open ranges that contain the low endpoint but not the high endpoint [low:high)

  • Half-closed ranges that contain the high endpoint but not the low endpoint (low:high]

The following two statements are equivalent: Both statements select orders where the price is in the range of zero and 10000 (endpoints inclusive):

select * from OrderEvent where price in [0:10000]
select * from OrderEvent where price between 0 and 10000

The next statement selects order events where the price is greater than 100 and less than or equal to 2000:

select * from OrderEvent where price in (100:2000]
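A half-open range includes the low endpoint but not the high endpoint. The next statement selects order events with a price of at least 100 and below 2000:

select * from OrderEvent where price in [100:2000)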

Use the not in keywords to specify an inverted range.

The following statement selects an inverted range by selecting all order events where the price is less than or equal to zero or the price is greater than 10000:

select * from OrderEvent where price not in (0:10000]

In case the value of the first endpoint is greater than the value of the second endpoint, the in operator reverses the range.

The following two statements are also equivalent:

select * from OrderEvent where price in [10000:0]
select * from OrderEvent where price >= 0 and price <= 10000

8.8. The 'between' Keyword

The between keyword specifies a range to test. The syntax of the keyword is:

test_expression [not] between begin_expression and end_expression

The test_expression is any valid expression and is the expression to test for in the range defined by begin_expression and end_expression. The not keyword specifies that the result of the predicate be negated.

The result of a between expression is of type Boolean. If the value of test_expression is greater than or equal to the value of begin_expression and less than or equal to the value of end_expression, the result is true.

The next example shows how the between keyword can be used to select events with a price between 55 and 60 (endpoints inclusive).

select * from StockTickEvent where price between 55 and 60

The equivalent expression without between is:

select * from StockTickEvent where price >= 55 and price <= 60

And also equivalent to:

select * from StockTickEvent where price between 60 and 55

While the between keyword always includes the endpoints of the range, the in operator allows finer control of endpoint inclusion.

In case the value of begin_expression is greater than the value of end_expression, the between operator reverses the range.

The following two statements are also equivalent:

select * from StockTickEvent where price between 60 and 55
select * from StockTickEvent where price >= 55 and price <= 60

8.9. The 'like' Keyword

The like keyword provides standard SQL pattern matching. SQL pattern matching allows you to use '_' to match any single character and '%' to match an arbitrary number of characters (including zero characters). In Esper, SQL patterns are case-sensitive by default. The syntax of like is:

test_expression [not] like pattern_expression [escape string_literal]

The test_expression is any valid expression yielding a String-type or a numeric result. The optional not keyword specifies that the result of the predicate be negated. The like keyword is followed by any valid standard SQL pattern_expression yielding a String-typed result. The optional escape keyword signals the escape character to escape '_' and '%' values in the pattern.

The result of a like expression is of type Boolean. If the value of test_expression matches the pattern_expression, the result value is true. Otherwise, the result value is false.

An example for the like keyword is below.

select * from PersonLocationEvent where name like '%Jack%'

The escape character can be defined as follows. In this example the where-clause matches events where the suffix property is a single '_' character.

select * from PersonLocationEvent where suffix like '!_' escape '!'
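Since test_expression may also yield a numeric result, a numeric property can be matched against a pattern as well; a sketch assuming a hypothetical numeric orderId property on OrderEvent:

select * from OrderEvent where orderId like '1%'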

8.10. The 'regexp' Keyword

The regexp keyword is a form of pattern matching based on regular expressions implemented through the Java java.util.regex package. The syntax of regexp is:

test_expression [not] regexp pattern_expression

The test_expression is any valid expression yielding a String-type or a numeric result. The optional not keyword specifies that the result of the predicate be negated. The regexp keyword is followed by any valid regular expression pattern_expression yielding a String-typed result.

The result of a regexp expression is of type Boolean. If the value of test_expression matches the regular expression pattern_expression, the result value is true. Otherwise, the result value is false.

An example for the regexp keyword is below.

select * from PersonLocationEvent where name regexp '.*Jack.*'

8.11. The 'any' and 'some' Keywords

The any operator is true if the expression returns true for one or more of the values returned by a list of expressions including array, Collection and Map values.

The synopsis for the any keyword is as follows:

expression operator any (expression [,expression...] )

The left-hand expression is evaluated and compared to each expression result using the given operator, which must yield a Boolean result. The result of any is "true" if any true result is obtained. The result is "false" if no true result is found (including the special case where the expressions are collections that return no rows).

The operator can be any of the following values: =, !=, <>, <, <=, >, >=.

The some keyword is a synonym for any. The in construct is equivalent to = any.

Expression may also return an array, a java.util.Collection or a java.util.Map. Thus event properties that are lists, sets or maps may provide values to compare against.

All expressions must be of the same type or a compatible type. The any keyword coerces number values to compatible types. If expression returns an array, then the component type of the array must be compatible, unless the component type of the array is Object.

If expression returns an array, the operation compares each element of the array.

If expression returns a Collection, the operation determines if the collection contains the value returned by the left-hand expression, applying contains semantics. When using relationship operators <, <=, >, >= the operator applies to each element in the collection, and non-numeric elements are ignored.

If expression returns a Map, the operation determines if the map contains the key value returned by the left-hand expression, applying containsKey semantics. When using relationship operators <, <=, >, >= the operator applies to each key in the map, and non-numeric map keys are ignored.
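For instance, assuming ProductOrder also carries a numeric price property and a java.util.Collection-typed competitorPrices property (hypothetical names), the next sketch returns orders priced below at least one competitor price:

select * from ProductOrder where price < any (competitorPrices)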

Constants, arrays, Collection and Map expressions or event properties can be used in combination.

The next statement demonstrates the use of the any operator:

select * from ProductOrder where category != any (categoryArray)

The above query selects ProductOrder events that have a category field and a category array, and returns only those events in which the category value is not in the array.

Note that if there are no successes and at least one right-hand row yields null for the operator's result, the result of the any construct will be null, not false. This is in accordance with SQL's normal rules for Boolean combinations of null values.

8.12. The 'all' Keyword

The all operator is true if the expression returns true for all of the values returned by a list of expressions including array, Collection and Map values.

The synopsis for the all keyword is as follows:

expression operator all (expression [,expression...] )

The left-hand expression is evaluated and compared to each expression result using the given operator, which must yield a Boolean result. The result of all is "true" if all rows yield true (including the special case where the expressions are collections that return no rows). The result is "false" if any false result is found. The result is null if the comparison does not return false for any row and it returns null for at least one row.

The operator can be any of the following values: =, !=, <>, <, <=, >, >=.

The not in construct is equivalent to != all.

Expression may also return an array, a java.util.Collection or a java.util.Map. Thus event properties that are lists, sets or maps may provide values to compare against.

All expressions must be of the same type or a compatible type. The all keyword coerces number values to compatible types. If expression returns an array, then the component type of the array must be compatible, unless the component type of the array is Object.

If expression returns an array, the operation compares each element of the array.

If expression returns a Collection, the operation determines if the collection contains the value returned by the left-hand expression, applying contains semantics. When using relationship operators <, <=, >, >= the operator applies to each element in the collection, and non-numeric elements are ignored.

If expression returns a Map, the operation determines if the map contains the key value returned by the left-hand expression, applying containsKey semantics. When using relationship operators <, <=, >, >= the operator applies to each key in the map, and non-numeric map keys are ignored.

Constants, arrays, Collection and Map expressions or event properties can be used in combination.

The next statement demonstrates the use of the all operator:

select * from ProductOrder where category = all (categoryArray)

The above query selects ProductOrder events that have a category field and a category array, and returns only those events in which the category value matches all values in the array.

8.13. The 'new' Keyword

The new operator populates a new data structure by evaluating column names and assignment expressions. This is useful when an expression should return multiple results, for performing a transformation or inside enumeration method lambda expressions.

The synopsis for the new keyword is as follows:

new { column_name [= assignment_expression] [, column_name ...] }

The result of the new-operator is a map data structure that contains column_name keys and values. If an assignment expression is provided for a column, the operator evaluates the expression and assigns the result to the column name. If no assignment expression is provided, the column name is assumed to be an event property name and the value is the event property value.

The next statement demonstrates the use of the new operator:

select new {category, price = 2*price} as priceInfo from ProductOrder

The above query returns a single property priceInfo for each arriving ProductOrder event. The property value is itself a map that contains two entries: For the key name category the value of the category property and for the key name price the value of the price property multiplied by two.

The next EPL is an example of the new operator within an expression definition and a case-statement (one EPL statement not multiple):

expression calcPrice {
  productOrder => case
    when category = 'fish' then new { sterialize = 'XRAY', priceFactor = 1.01 }
    when category = 'meat' then new { sterialize = 'UVL', priceFactor = 1 }
  end
}  

select calcPrice(po) as priceDetail from ProductOrder po

In the above example the expression calcPrice returns both a sterialize string value and a priceFactor double value. The expression is evaluated as part of the select-clause and the map-type result is placed in the priceDetail property of output events.

When used within the case operator, the operator validates that the data structure is compatible between each case-when result in terms of column names and types. The default value for else in case is null.

Chapter 9. EPL Reference: Functions

9.1. Single-row Function Reference

Single-row functions return a single value for every single result row generated by your statement. These functions can appear anywhere where expressions are allowed.

Esper allows static Java library methods as single-row functions, and also features built-in single-row functions. In addition, Esper allows instance method invocations on named streams.

You may also register your own single-row function name with the engine so that your EPL statements become less cluttered. This is described in detail in Section 16.3, “Single-Row Function”. Single-row functions that return an object can be chained.

Esper auto-imports the following Java library packages:

  • java.lang.*

  • java.math.*

  • java.text.*

  • java.util.*

Thus Java static library methods can be used in all expressions as shown in the below example:

select symbol, Math.round(volume/1000)
from StockTickEvent.win:time(30 sec)

In general, arbitrary Java class names have to be fully qualified (e.g. java.lang.Math) but Esper provides a mechanism for user-controlled imports of classes and packages as outlined in Section 14.4.5, “Class and package imports”.
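For example, the auto-imported java.lang.Math class can equally be referenced by its fully-qualified name; the same fully-qualified form applies to any class that is not imported:

select symbol, java.lang.Math.floor(price) as flooredPrice
from StockTickEvent.win:time(30 sec)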

The below table outlines the built-in single-row functions available.

Table 9.1. Syntax and results of single-row functions

Single-row Function        Result
case value 
  when compare_value then result
  [when compare_value then result ...] 
  [else result] 
  end 

Returns result where the first value equals compare_value.

case 
  when condition then result
  [when condition then result ...] 
  [else result] 
  end

Returns the result for the first condition that is true.

cast(expression, type_name)

Casts the result of an expression to the given type.

coalesce(expression, expression [, expression ...])

Returns the first non-null value in the list, or null if there are no non-null values.

current_timestamp[()]

Returns the current engine time as a long millisecond value. Reserved keyword with optional parenthesis.

exists(dynamic_property_name)

Returns true if the dynamic property exists for the event, or false if the property does not exist.

instanceof(expression, type_name [, type_name ...])

Returns true if the expression returns an object whose type is one of the types listed.

max(expression, expression [, expression ...])

Returns the highest numeric value among the 2 or more comma-separated expressions.

min(expression, expression [, expression ...])

Returns the lowest numeric value among the 2 or more comma-separated expressions.

prev(expression, event_property)

Returns a property value or all properties of a previous event, relative to the event order within a data window, or according to an optional index parameter (N) the positional Nth-from-last value.

prevtail(expression, event_property)

Returns a property value or all properties of the first event in a data window relative to the event order within a data window, or according to an optional index parameter (N) the positional Nth-from-first value.

prevwindow(event_property)

Returns a single property value of all events or all properties of all events in a data window in the order that reflects the sort order of the data window.

prevcount(event_property)

Returns the count of events (number of data points) in a data window.

prior(integer, event_property)

Returns a property value of a prior event, relative to the natural order of arrival of events.

typeof(expression)

If expression is a stream name, returns the event type name of the evaluated event, often used with variant streams. If expression is a property name or expression, returns the name of the expression result type.

9.1.1. The Case Control Flow Function

The case control flow function has two versions. The first version takes a value and a list of compare values to compare against, and returns the result where the first value equals the compare value. The second version takes a list of conditions and returns the result for the first condition that is true.

The return type of a case expression is the compatible aggregated type of all return values.

The case expression is sometimes used with the new operator to return multiple results, see Section 8.13, “The 'new' Keyword”.

The example below shows the first version of a case statement. It has a String return type and returns the value 'one'.

select case myexpression when 1 then 'one' when 2 then 'two' else 'more' end from ...

The second version of the case function takes a list of conditions. The next example has a Boolean return type and returns the boolean value true.

select case when 1>0 then true else false end from ...

9.1.2. The Cast Function

The cast function casts the return type of an expression to a designated type. The function accepts two parameters: The first parameter is the property name or expression that returns the value to be cast. The second parameter is the type to cast to.

Valid parameters for the second (type) parameter are:

  • Any of the Java built-in types: int, long, byte, short, char, double, float, string, BigInteger, BigDecimal, where string is a short notation for java.lang.String and BigInteger as well as BigDecimal are the classes in java.math. The type name is not case-sensitive. For example:

    cast(price, double)

  • The fully-qualified class name of the class to cast to, for example:

    cast(product, org.myproducer.Product)

The cast function is often used to provide a type for dynamic (unchecked) properties. Dynamic properties are properties whose type is not known at compile time. These properties are always of type java.lang.Object.

The cast function as shown in the next statement casts the dynamic "price" property of an "item" in the OrderEvent to a double value.

select cast(item.price?, double) from OrderEvent

The cast function returns a null value if the expression result cannot be cast to the desired type, or if the expression result itself is null.

The cast function adheres to the following type conversion rules:

  • For all numeric types, the cast function utilizes java.lang.Number to convert numeric types, if required.

  • For casts to string or java.lang.String, the function calls toString on the expression result (see the example following this list).

  • For casts to other objects including application objects, the cast function considers a Java class's superclasses as well as all interfaces implemented directly or indirectly by the class and its superclasses.
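For example, the following sketch applies the string conversion rule above to the dynamic price of an item:

select cast(item.price?, string) as priceText from OrderEvent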

9.1.3. The Coalesce Function

The result of the coalesce function is the first expression in a list of expressions that returns a non-null value. The return type is the compatible aggregated type of all return values.

This example returns a String-typed result of value 'foo':

select coalesce(null, 'foo') from ...

9.1.4. The Current_Timestamp Function

The current_timestamp function is a reserved keyword and requires no parameters. The result of the current_timestamp function is the long-type millisecond value of the current engine system time.

The function returns the current engine timestamp at the time of expression evaluation. When using external-timer events, the function provides the last value of the externally-supplied time at the time of expression evaluation.

This example selects the current engine time:

select current_timestamp from MyEvent
// equivalent to
select current_timestamp() from MyEvent

9.1.5. The Exists Function

The exists function returns a boolean value indicating whether the dynamic property, provided as a parameter to the function, exists on the event. The exists function accepts a single dynamic property name as its only parameter.

The exists function is for use with dynamic (unchecked) properties. Dynamic properties are properties whose type is not known at compile time. Dynamic properties return a null value if the dynamic property does not exist on an event, or if the dynamic property exists but the value of the dynamic property is null.

The exists function as shown next returns true if the "item" property contains an object that has a "serviceName" property. It returns false if the "item" property is null, or if the "item" property does not contain an object that has a property named "serviceName" :

select exists(item.serviceName?) from OrderEvent

9.1.6. The Instance-Of Function

The instanceof function returns a boolean value indicating whether the type of value returned by the expression is one of the given types. The first parameter to the instanceof function is an expression to evaluate. The second and subsequent parameters are Java type names.

The function determines the return type of the expression at runtime by evaluating the expression, and compares the type of object returned by the expression to the defined types. If the type of object returned by the expression matches any of the given types, the function returns true. If the expression returned null or a type that does not match any of the given types, the function returns false.

The instanceof function is often used in conjunction with dynamic (unchecked) properties. Dynamic properties are properties whose type is not known at compile time.

This example uses the instanceof function to select different properties based on the type:

select case when instanceof(item, com.mycompany.Service) then serviceName?
  when instanceof(item, com.mycompany.Product) then productName? end 
  from OrderEvent

The instanceof function returns false if the expression tested by instanceof returned null.

Valid parameters for the type parameter list are:

  • Any of the Java built-in types: int, long, byte, short, char, double, float, string, where string is a short notation for java.lang.String. The type name is not case-sensitive. For example, the next function tests if the dynamic "price" property is either of type float or type double:

    instanceof(price?, double, float)

  • The fully-qualified class name of the class to compare against, for example:

    instanceof(product, org.myproducer.Product)

The function considers an event class's superclasses as well as all the directly or indirectly-implemented interfaces by superclasses.

9.1.7. The Min and Max Functions

The min and max functions take two or more parameters that themselves can be expressions. The min function returns the lowest numeric value among the 2 or more comma-separated expressions, while the max function returns the highest numeric value. The return type is the compatible aggregated type of all return values.

The next example shows the max function that has a Double return type and returns the value 1.1.

select max(1, 1.1, 2 * 0.5) from ...

The min function returns the lowest value. The statement below uses the function to determine the smaller of two timestamp values.

select symbol, min(ticks.timestamp, news.timestamp) as minT
	from StockTickEvent.win:time(30 sec) as ticks, NewsEvent.win:time(30 sec) as news
	where ticks.symbol = news.symbol

9.1.8. The Previous Function

The prev function returns the property value or all event properties of a previous event. For data windows that introduce a sort order other than the order of arrival, such as the sorted data window and the time order data window, the function returns the event at the specified position.

The prev function is not an aggregation function and therefore does not return results per group when used with group by. Please consider the last, lastever or nth aggregation functions instead as described in Section 9.2.2, “Data Window Aggregation Functions”. You must use an aggregation function instead of prev when querying a named window.

The first parameter to the prev function is an index parameter and denotes the i-th previous event, in the order established by the data window. If no index is provided, the default index is 1 and the function returns the previous event. The second parameter is a property name or stream name. If specifying a property name, the function returns the property value for the previous event. If specifying a stream name, the function returns the underlying event object of the previous event.

This example selects the value of the price property of the 2nd-previous event from the current Trade event:

select prev(2, price) from Trade.win:length(10)

By using the stream alias in the previous function, the next example selects the trade event itself that is immediately previous to the current Trade event:

select prev(1, trade) from Trade.win:length(10) as trade

Since the prev function takes the order established by the data window into account, the function works well with sorted windows.

In the following example the statement selects the symbol of the 3 Trade events that had the largest, second-largest and third-largest volume.

select prev(0, symbol), prev(1, symbol), prev(2, symbol)
  from Trade.ext:sort(3, volume desc)

The i-th previous event parameter can also be an expression returning an Integer-type value. The next statement joins the Trade data window with a RankSelectionEvent event that provides a rank property used to look up a certain position in the sorted Trade data window:

select prev(rank, symbol) from Trade.ext:sort(10, volume desc), RankSelectionEvent unidirectional

Use the prev function in combination with a grouped data window to access a previous event per grouping criteria.

The example below returns the price of the previous Trade event for the same symbol, or null if for that symbol there is no previous Trade event:

select prev(1, price) from Trade.std:groupwin(symbol).win:length(2)

The prev function returns a null value if the data window does not currently hold the i-th previous event. The example below illustrates this using a time batch window. Here the prev function returns a null value for any events in which the previous event is not in the same batch of events. Note that the prior function as discussed below can be used if a null value is not the desired result.

select prev(1, symbol) from Trade.win:time_batch(1 min)

An alternative form of the prev function allows the index to not appear or appear after the property name if the index value is a constant and not an expression:

select prev(1, symbol) from Trade
// ... equivalent to ...
select prev(symbol) from Trade
// ... and ...
select prev(symbol, 1) from Trade

The combination of the prev function and std:groupwin view returns the property value for a previous event in the given data window group.

The following example returns for each event the current smallest price per symbol:

select symbol, prev(0, price) as topPricePerSymbol 
from Trade.std:groupwin(symbol).ext:sort(1, price asc)

9.1.8.1. Restrictions

The following restrictions apply to the prev functions and their results:

  • The function always returns a null value for remove stream (old data) events.

  • The function requires a data window view, or a std:groupwin and data window view, without any additional sub-views. See Chapter 12, EPL Reference: Views for built-in data window views.

9.1.8.2. Comparison to the prior Function

The prev function is similar to the prior function. The key differences between the two functions are as follows:

  • The prev function returns previous events in the order provided by the data window, while the prior function returns prior events in the order of arrival as posted by a stream's declared views.

  • The prev function requires a data window view while the prior function does not have any view requirements.

  • The prev function returns the previous event grouped by a criteria by combining the std:groupwin view and a data window. The prior function returns prior events posted by the last view regardless of data window grouping.

  • The prev function returns a null value for remove stream events, i.e. for events leaving a data window. The prior function does not have this restriction.

9.1.9. The Previous-Tail Function

The prevtail function returns the property value or all event properties of the positional-first event in a data window. For data windows that introduce a sort order other than the order of arrival, such as the sorted data window and the time order data window, the function returns the first event at the specified position.

The prevtail function is not an aggregation function and therefore does not return results per group when used with group by. Please consider the first, firstever or window aggregation functions instead as described in Section 9.2.2, “Data Window Aggregation Functions”. You must use an aggregation function instead of prevtail when querying a named window.

The first parameter is an index parameter and denotes the i-th from-first event in the order established by the data window. If no index is provided the default is zero and the function returns the first event in the data window. The second parameter is a property name or stream name. If specifying a property name, the function returns the property value for the positional-first event. If specifying a stream name, the function returns the underlying event object of the positional-first event.

This example selects the value of the price property of the first (oldest) event held in the length window:

select prevtail(price) from Trade.win:length(10)

By using the stream alias in the prevtail function, the next example selects the trade event itself that is the second event held in the length window:

select prevtail(1, trade) from Trade.win:length(10) as trade

Since the prevtail function takes the order established by the data window into account, the function works well with sorted windows.

In the following example the statement selects the symbol of the 3 Trade events that had the smallest, second-smallest and third-smallest volume.

select prevtail(0, symbol), prevtail(1, symbol), prevtail(2, symbol)
  from Trade.ext:sort(3, volume asc)

The i-th previous event parameter can also be an expression returning an Integer-type value. The next statement joins the Trade data window with a RankSelectionEvent event that provides a rank property used to look up a certain position in the sorted Trade data window:

select prevtail(rank, symbol) from Trade.ext:sort(10, volume asc), RankSelectionEvent unidirectional

The prevtail function returns a null value if the data window does not currently hold the positional-first or the Nth-from-first event. For batch data windows the value returned is relative to the current batch.

The following example returns the first and second symbol value in the batch:

select prevtail(0, symbol), prevtail(1, symbol) from Trade.win:time_batch(1 min)

An alternative form of the prevtail function allows the index to not appear or appear after the property name if the index value is a constant and not an expression:

select prevtail(1, symbol) from Trade
// ... equivalent to ...
select prevtail(symbol) from Trade
// ... and ...
select prevtail(symbol, 1) from Trade

The combination of the prevtail function and std:groupwin view returns the property value for a positional first event in the given data window group.

Let's look at an example. This statement outputs the oldest price per symbol retaining the last 10 prices per symbol:

select symbol, prevtail(0, price) as oldestPrice
from Trade.std:groupwin(symbol).win:length(10)

9.1.9.1. Restrictions

The following restrictions apply to the prev functions and their results:

  • The function always returns a null value for remove stream (old data) events.

  • The function requires a data window view, or a std:groupwin and data window view, without any additional sub-views. See Chapter 12, EPL Reference: Views for built-in data window views.

9.1.10. The Previous-Window Function

The prevwindow function returns property values or all event properties for all events in a data window. For data windows that introduce a sort order other than the order of arrival, such as the sorted data window and the time order data window, the function returns the event data sorted in that order, otherwise it returns the events sorted by order of arrival with the newest arriving event first.

The prevwindow function is not an aggregation function and therefore does not return results per group when used with group by. Please consider the window aggregation function instead as described in Section 9.2.2, “Data Window Aggregation Functions”. You must use an aggregation function instead of prevwindow when querying a named window.

The single parameter is a property name or stream name. If specifying a property name, the function returns the value of the event property for all events held by the data window. If specifying a stream name, the function returns the event underlying object for all events held by the data window.

This example selects the value of the price property of all events held in the length window:

select prevwindow(price) from Trade.win:length(10)

By using the stream alias in the prevwindow function, the next example selects all trade events held in the length window:

select prevwindow(trade) from Trade.win:length(10) as trade

When used with a data window that introduces a certain sort order, the prevwindow function returns events sorted according to that sort order.

The next statement outputs for every arriving event the current 10 underlying trade event objects that have the largest volume:

select prevwindow(trade) from Trade.ext:sort(10, volume desc) as trade

The prevwindow function returns a null value if the data window does not currently hold any events.

The combination of the prevwindow function and std:groupwin view returns the property value(s) for all events in the given data window group.

This example statement outputs all prices per symbol retaining the last 10 prices per symbol:

select symbol, prevwindow(price) from Trade.std:groupwin(symbol).win:length(10)

9.1.10.1. Restrictions

The following restrictions apply to the prev functions and their results:

  • The function always returns a null value for remove stream (old data) events.

  • The function requires a data window view, or a std:groupwin and data window view, without any additional sub-views. See Chapter 12, EPL Reference: Views for built-in data window views.

9.1.11. The Previous-Count Function

The prevcount function returns the number of events held in a data window.

The prevcount function is not an aggregation function and therefore does not return results per group when used with group by. Please consider the count(*) aggregation function instead as described in Section 9.2, “Aggregate Functions”. You must use an aggregation function instead of prevcount when querying a named window.

The single parameter is a property name or stream name of the data window to return the count for.

This example selects the number of data points for the price property held in the length window:

select prevcount(price) from Trade.win:length(10)

By using the stream alias in the prevcount function the next example selects the count of trade events held in the length window:

select prevcount(trade) from Trade.win:length(10) as trade

The combination of the prevcount function and std:groupwin view returns the count of events in the given data window group.

This example statement outputs the number of events retaining the last 10 events per symbol:

select symbol, prevcount(price) from Trade.std:groupwin(symbol).win:length(10)

9.1.11.1. Restrictions

The following restrictions apply to the prev functions and their results:

  • The function always returns a null value for remove stream (old data) events.

  • The function requires a data window view, or a std:groupwin and data window view, without any additional sub-views. See Chapter 12, EPL Reference: Views for built-in data window views.

9.1.12. The Prior Function

The prior function returns the property value of a prior event. The first parameter is an integer value that denotes the i-th prior event in the natural order of arrival. The second parameter is a property name or stream name. If specifying a property name, the function returns the property value for the prior event. If specifying a stream name, the function returns the prior event underlying object.

This example selects the value of the price property of the 2nd-prior event to the current Trade event.

select prior(2, price) from Trade

By using the stream alias in the prior function, the next example selects the trade event itself that is immediately prior to the current Trade event:

select prior(1, trade) from Trade as trade

The prior function can be used on any event stream or view and does not have any specific view requirements. The function operates on the order of arrival of events by the event stream or view that provides the events.

The next statement uses a time batch window to compute an average volume for 1 minute of Trade events, posting results every minute. The select-clause employs the prior function to select the current average and the average before the current average:

select average, prior(1, average) 
    from TradeAverages.win:time_batch(1 min).stat:uni(volume)

9.1.13. The Type-Of Function

The typeof function, when parameterized by a stream name, returns the event type name of the evaluated event which can be useful with variant streams. When parameterized by an expression or property name, the function returns the type name of the expression result or null if the expression result is null.

In summary, the function determines the return type of the expression at runtime by evaluating the expression and returns the type name of the expression result.

The typeof function is often used in conjunction with variant streams. A variant stream is a predefined stream into which events of multiple disparate event types can be inserted. The typeof function, when passed a stream name alias, returns the name of the event type of each event in the stream.

The following example elaborates on the use of variant streams with typeof. The first statement declares a variant stream SequencePatternStream:

create variant schema SequencePatternStream as *

The next statement inserts all order events and is followed by a statement to insert all product events:

insert into SequencePatternStream select * from OrderEvent;
insert into SequencePatternStream select * from ProductEvent;

This example statement returns the event type name for each event in the variant stream:

select typeof(sps) from SequencePatternStream as sps

The next example statement detects a pattern by utilizing the typeof function to find pairs of order event immediately followed by product event:

select * from SequencePatternStream match_recognize(
  measures A as a, B as b
  pattern (A B)
  define A as typeof(A) = "OrderEvent",
         B as typeof(B) = "ProductEvent"
  )

When passing a property name to the typeof function, the function evaluates whether the property type is an event type (a fragment event type). If the property type is an event type, the function returns the type name of the event in the property value or null if not provided. If the property type is not an event type, the function returns the simple class name of the property value.

When passing an expression to the typeof function, the function evaluates the expression and returns the simple class name of the expression result value or null if the expression result value is null.

This example statement returns the simple class name of the value of the dynamic property prop of events in stream MyStream, or a null value if the property is not found for an event or the property value itself is null:

select typeof(prop?) from MyStream

When using subclasses or interface implementations as event classes or when using Map-event type inheritance, the function returns the event type name provided when the class or Map-type event was registered, or if the event type was not registered, the function returns the fully-qualified class name.

9.2. Aggregate Functions

Aggregation functions return a single value from a collection of input values. The group by keywords are often used in conjunction with aggregation functions to group the result-set by one or more columns.

The EPL language extends the standard SQL aggregation functions by allowing filters and by further useful aggregation functions that can track a data window or compute event rates, for example. Your application may also add its own aggregation function as Section 16.5, “Aggregation Function” describes.

Aggregation values are always computed incrementally: Insert and remove streams result in aggregation value changes. The exceptions are on-demand queries and joins when using the unidirectional keyword. Aggregation functions are optimized to retain the minimal information necessary to compute the aggregated result.

Aggregation functions can also be used with unbound streams when no data window is specified.
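For example, the following statement aggregates over an unbound stream and keeps a running total of volume per symbol:

select symbol, sum(volume) as totalVolume from StockTickEvent group by symbol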

9.2.1. SQL-Standard Functions

The SQL-standard aggregation functions are shown in the below table.

Table 9.2. Syntax and results of SQL-standard aggregation functions

Aggregate Function        Result
avedev([all|distinct] expression [, filter_expr])

Mean deviation of the (distinct) values in the expression, returning a value of double type.

The optional filter expression limits the values considered for computing the mean deviation.

avg([all|distinct] expression [, filter_expr])

Average of the (distinct) values in the expression, returning a value of double type.

The optional filter expression limits the values considered for computing the average.

count([all|distinct] expression [, filter_expr])

Number of the (distinct) non-null values in the expression, returning a value of long type.

The optional filter expression limits the values considered for the count.

count(* [, filter_expr])

Number of events, returning a value of long type.

The optional filter expression limits the values considered for the count.

max([all|distinct] expression)

fmax([all|distinct] expression, filter_expr)

Highest (distinct) value in the expression, returning a value of the same type as the expression itself returns.

Use fmax to provide a filter expression that limits the values considered for computing the maximum.

median([all|distinct] expression [, filter_expr])

Median (distinct) value in the expression, returning a value of double type. Double Not-a-Number (NaN) values are ignored in the median computation.

The optional filter expression limits the values considered for computing the median.

min([all|distinct] expression)

fmin([all|distinct] expression, filter_expr)

Lowest (distinct) value in the expression, returning a value of the same type as the expression itself returns.

Use fmin to provide a filter expression that limits the values considered for computing the minimum.

stddev([all|distinct] expression [, filter_expr])

Standard deviation of the (distinct) values in the expression, returning a value of double type.

The optional filter expression limits the values considered for computing the standard deviation.

sum([all|distinct] expression [, filter_expr])

Totals the (distinct) values in the expression, returning a value of long, double, float or integer type depending on the expression.

The optional filter expression limits the values considered for computing the total.

If your application provides double-type values to an aggregation function, avoid using Not-a-Number (NaN) and infinity. Also when using double-type values, round-off errors (or rounding errors) may occur due to double-type precision. Consider rounding your result value to the desired precision.

Each of the aggregation functions above takes an optional filter expression as a parameter. The filter expression must return a boolean-type value and applies to the events considered for the aggregation. If a filter expression is provided, then only if the filter expression returns a value of true does the engine update the aggregation for that event or combination of events.

Consider the following example, which computes the quantity fraction of buy orders among all orders:

select sum(quantity, side='buy') / sum(quantity) as buy_fraction from Orders

Use the fmin and fmax aggregation functions instead of the min and max aggregation functions when providing a filter expression (the min and max functions are also single-row functions).

The next example computes the minimum quantity for buy orders and a separate minimum quantity for sell orders:

select fmin(quantity, side='buy'), fmin(quantity, side = 'sell') from Orders

9.2.2. Data Window Aggregation Functions

The first, last and window aggregation functions return event properties of events present in a stream's data window. They are useful when information about current data window contents is required.

When comparing the last aggregation function to the prev function, the differences are as follows. The prev function is not an aggregation function and thereby not sensitive to the presence of group by. The prev function accesses data window contents directly and respects the sort order of the data window. The last aggregation function returns results based on arrival order and tracks data window contents in a separate shared data structure.

When comparing the first aggregation function to the prevtail function, the differences are as follows. The prevtail function is not an aggregation function and thereby not sensitive to the presence of group by. The prevtail function accesses data window contents directly and respects the sort order of the data window. The first aggregation function returns results based on arrival order and tracks data window contents in a separate shared data structure.

When comparing the window aggregation function to the prevwindow function, the differences are as follows. The prevwindow function is not an aggregation function and thereby not sensitive to the presence of group by. The prevwindow function accesses data window contents directly and respects the sort order of the data window. The window aggregation function returns results based on arrival order and tracks data window contents in a separate shared data structure.

When comparing the count aggregation function to the prevcount function, the differences are as follows. The prevcount function is not an aggregation function and thereby not sensitive to the presence of group by.

When comparing the last aggregation function to the nth aggregation function, the differences are as follows. The nth aggregation function does not consider out-of-order deletes (for example with on-delete and sorted windows) and does not revert to the prior expression value when the last event or nth-event was deleted from a data window. The last aggregation function tracks the data window and reflects out-of-order deletes.

From an implementation perspective, the first, last and window aggregation functions share a common data structure for each stream.

9.2.2.1. First Aggregation Function

The synopsis for the first aggregation function is:

first(*|stream.*|value_expression [, index_expression])

The first aggregation function returns properties of the very first event in the data window. When used with group by, it returns properties of the first event in the data window for each group. When specifying an index expression, the function returns properties of the Nth-subsequent event to the first event, all according to order of arrival.

The first parameter to the function is required and defines the event properties or expression result to return. The second parameter is an optional index_expression that must return an integer value used as an index to evaluate the Nth-subsequent event to the first event.

You may specify the wildcard (*) character in which case the function returns the underlying event of the single selected stream. When selecting a single stream you may specify no parameter instead of wildcard. For joins and subqueries you must use the stream wildcard syntax below.

You may specify the stream name and wildcard (*) character in the stream.* syntax. This returns the underlying event for the specified stream.

You may specify a value_expression to evaluate for the first event. The value expression may not select properties from multiple streams.

The index_expression is optional. If no index expression is provided, the function returns the first event. If present, the function evaluates the index expression to determine the value for N, and evaluates the Nth-subsequent event to the first event. A value of zero returns the first event and a value of 1 returns the event subsequent to the first event. You may not specify event properties in the index expression.

The function returns null if there are no events in the data window or when the index is larger than the number of events held in the data window. If used with group by, it returns null if there are no events in the data window for that group or when the index is larger than the number of events held in the data window for that group.

To explain, consider the statement below which selects the underlying event of the first sensor event held by the length window of 2 events.

select first(*) from SensorEvent.win:length(2) 

Assume event E1, event E2 and event E3 are of type SensorEvent. When event E1 arrives the statement outputs the underlying event E1. When event E2 arrives the statement again outputs the underlying event E1. When event E3 arrives the statement outputs the underlying event E2, since event E1 has left the data window.

The stream wildcard syntax is useful for joins and subqueries. This example demonstrates a subquery that returns the first SensorEvent when a DoorEvent arrives:

select (select first(se.*) from SensorEvent.win:length(2) as se) from DoorEvent 

The following example shows the use of an index expression. The output value for f1 is the temperature property value of the first event in the data window, the value for f2 is the temperature property value of the second event in the data window:

select first(temperature, 0) as f1, first(temperature, 1) as f2
from SensorEvent.win:time(10 sec)

You may use dot-syntax to invoke a method on the first event. You may also append a property name using dot-syntax.
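For instance, assuming SensorEvent is a POJO exposing a temperature property, the following illustrative statement appends a property name and invokes a method on the first event:

select first(*).temperature as firstTemperature, first(*).toString() as firstAsText
from SensorEvent.win:length(2)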

9.2.2.2. Last Aggregation Function

The synopsis for the last aggregation function is:

last(*|stream.*|value_expression [, index_expression])

The last aggregation function returns properties of the very last event in the data window. When used with group by, it returns properties of the last event in the data window for each group. When specifying an index expression, the function returns properties of the Nth-prior event to the last event, all according to order of arrival.

Similar to the first aggregation function described above, you may specify the wildcard (*) character, no parameter or stream name and wildcard (*) character or a value_expression to evaluate for the last event.

The index_expression is optional. If no index expression is provided, the function returns the last event. If present, the function evaluates the index expression to determine the value for N, and evaluates the Nth-prior event to the last event. A value of zero returns the last event and a value of 1 returns the event prior to the last event. You may not specify event properties in the index expression.

The function returns null if there are no events in the data window or when the index is larger than the number of events held in the data window. If used with group by, it returns null if there are no events in the data window for that group or when the index is larger than the number of events held in the data window for that group.

The next statement selects the underlying event of the first and last sensor event held by the time window of 10 seconds:

select first(*), last(*) from SensorEvent.win:time(10 sec) 

The statement shown next selects the last temperature (f1) and the prior-to-last temperature (f2) of sensor events in the last 10 seconds:

select last(temperature, 0) as f1, last(temperature, 1) as f2
from SensorEvent.win:time(10 sec)

9.2.2.3. Window Aggregation Function

The synopsis for the window aggregation function is:

window(*|stream.*|value_expression)

The window aggregation function returns properties of all events in the data window. When used with group by, it returns properties of all events in the data window for each group.

Similar to the first aggregation function described above, you may specify the wildcard (*) character or stream name and wildcard (*) character or a value_expression to evaluate for all events.

The function returns null if there are no events in the data window. If used with group by, it returns null if there are no events in the data window for that group.

The next statement selects the underlying event of all events held by the time window of 10 seconds:

select window(*) from SensorEvent.win:time(10 sec) 

The window aggregation function requires that your stream is bound by a data window. You may not use the window aggregation function on unbound streams with the exception of on-demand queries.

9.2.3. Additional Aggregation Functions

Esper provides the following additional aggregation functions beyond those in the SQL standard:

Table 9.3. Syntax and results of EPL aggregation functions

Aggregate Function | Result
firstever(expression [, filter_expr])

The firstever aggregation function returns the very first value ever. If used with group by it returns the first value ever for that group.

If used with a data window, the result of the function does not change as data points leave a data window. Use the first or prevtail function to return values relative to a data window.

The optional filter expression limits the values considered for retaining the first-ever value.

The next example statement outputs the first price ever for sell orders:

select firstever(price, side='sell') from Order

lastever(expression [, filter_expr])

Returns the last value or last value per group, if used with group by.

This sample statement outputs the total price, the first price and the last price per symbol for the last 30 seconds of events and every 5 seconds:

select symbol, sum(price), lastever(price), firstever(price)
from StockTickEvent.win:time(30 sec) 
group by symbol
output every 5 sec

If used with a data window, the result of the function does not change as data points leave a data window (for example when all data points leave the data window). Use the last or prev function to return values relative to a data window.

The optional filter expression limits the values considered for retaining the last-ever value.

The next example statement outputs the last price (ever) for sell orders:

select lastever(price, side='sell') from Order

leaving()

Returns true when any remove stream data has passed, for use in the having clause to output only when a data window has filled.

The leaving aggregation function is useful when you want to trigger output after a data window has a remove stream data point. Use the output after syntax as an alternative to output after a time interval.

This sample statement uses leaving() to output after the first data point leaves the data window, ignoring the first datapoint:

select symbol, sum(price) 
from StockTickEvent.win:time(30 sec) 
having leaving()

nth(expression, N_index)

Returns the Nth most recent value: if N=0 the function returns the most recent value, and if N=1 it returns the value before the most recent value. If N is larger than the number of events held in the data window for this group, returns null.

A maximum of N historical values is stored, so the function can safely be used to compare recent values in large views without incurring excessive overhead.

In comparison to the prev function, this aggregation function works within the current group when using group by; see Section 3.7.2, “Output for Aggregation and Group-By”.

This statement outputs every 2 seconds the groups that have new data and their last price and the previous-to-last price:

select symbol, nth(price, 1), last(price) 
from StockTickEvent 
group by symbol
output last every 2 sec

rate(number_of_seconds)

Returns an event arrival rate per second over the provided number of seconds, computed based on engine time.

Returns null until events fill the number of seconds. Useful with output snapshot to output a current rate. This function footprint is for use without a data window onto the stream(s).

A sample statement to output, every 2 seconds, the arrival rate per second considering the last 10 seconds of events is shown here:

select rate(10) from StockTickEvent
output snapshot every 2 sec

The aggregation function retains an engine timestamp value for each arriving event.

rate(timestamp_property[, accumulator])

Returns an event arrival rate over the data window including the last remove stream event. The timestamp_property is the name of a long-type property of the event that provides a timestamp value.

The first parameter is a property name or expression providing millisecond timestamp values.

The optional second parameter is a property or expression for computing an accumulation rate: If a value is provided as a second parameter then the accumulation rate for that quantity is returned (e.g. turnover in dollars per second).

This footprint is designed for use with a data window and requires a data window declared onto the stream. Returns null until events start leaving the window.

This sample statement outputs the event rate for each group (per symbol) with a fixed sample size of four events (and considering the last event that left). The timestamp event property must be part of the event for this to work.

select symbol, rate(timestamp) as rate 
from StockTickEvent.std:groupwin(symbol).win:length(4) 
group by symbol

Built-in aggregation functions can be disabled via configuration (see Section 14.4.20.4, “Extended Built-in Aggregation Functions”). A custom aggregation function of the same name as a built-in function may be registered to override the built-in function.

9.3. User-Defined Functions

A user-defined function (UDF) is a single-row function that can be invoked anywhere as an expression itself or within an expression. The function must simply be a public static method that the classloader can resolve at statement creation time. The engine resolves the function reference at statement creation time and verifies parameter types.

You may register your own function name for the user-defined function. Please see the instructions in Section 16.3, “Single-Row Function” for registering a function name for a user-defined single-row function.

A single-row function that has been registered with a function name can simply be referenced as function_name(parameters) thus EPL statements can be less cluttered as no class name is required. The engine also optimizes evaluation of such registered single-row functions when used in filter predicate expressions as described in Section 16.3.4, “Single-Row Functions in Filter Predicate Expressions”.

An example EPL statement that utilizes the discount function is shown next (assuming that function has been registered).

select discount(quantity, price) from OrderEvent
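The same registered function may also be used in a filter predicate expression, where the engine can apply the optimization mentioned above. An illustrative statement (the threshold is arbitrary):

select * from OrderEvent(discount(quantity, price) > 10)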

When selecting from a single stream, use the wildcard (*) character to pass the underlying event:

select discount(*) from OrderEvent

User-defined functions can also be invoked on instances of an event: Please see Section 5.4.5, “Using the Stream Name” to invoke event instance methods on a named stream.

Note that user-defined functions (not single-row functions) are candidates for caching their return result if the parameters passed are constants and the functions are not chained. Please see below for details and configuration.

The example below assumes a class MyClass that exposes a public static method myFunction accepting 2 parameters and returning a numeric type such as double.

select 3 * com.mycompany.MyClass.myFunction(price, volume) as myValue 
from StockTick.win:time(30 sec)

User-defined functions also take array parameters as this example shows. The section on Section 8.5, “Array Definition Operator” outlines in more detail the types of arrays produced.

select * from RFIDEvent where com.mycompany.rfid.MyChecker.isInZone(zone, {10, 20, 30})

Java class names have to be fully qualified (e.g. java.lang.Math) but Esper provides a mechanism for user-controlled imports of classes and packages as outlined in Section 14.4.5, “Class and package imports”.

User-defined functions can return any value including null, Java objects or arrays. Therefore user-defined functions can serve to transform, convert or map events, or to extract information and assemble further events.

The following statement is a simple pattern that looks for events of type E1 that are followed by events of type E2. It assigns the tags "e1" and "e2" that the function can use to assemble a final event for output:

select MyLib.mapEvents(e1, e2) from pattern [every e1=E1 -> e2=E2]

User-defined functions may also be chained: If a user-defined function returns an object then the object can itself be the target of the next function call and so on.

Assume that there is a calculator function in the MyLib class that returns a class which provides the search method taking two parameters. The EPL that takes the result of the calculator function and that calls the search method on the result and returns its return value is shown below:

select MyLib.calculator().search(zonevariable, zone) from RFIDEvent

A user-defined function should be implemented to be thread-safe.
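For example, a minimal sketch of the discount function used earlier (hypothetical class name and business rule) keeps no mutable static state and is therefore safe for concurrent invocation by the engine:

package com.mycompany;

public class DiscountUtil {
    // Stateless static method: no shared mutable state, therefore thread-safe.
    public static double discount(int quantity, double price) {
        // Hypothetical rule: 10% discount for quantities of 100 or more.
        return quantity >= 100 ? price * 0.9 : price;
    }
}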

9.3.1. Event Type Conversion via User-Defined Function

A function that converts from one event type to another event type is shown in the next example. The first statement declares a stream that consists of MyEvent events. The second statement employs a conversion function to convert MyOtherEvent events to events of type MyEvent:

insert into MyStream select * from MyEvent
insert into MyStream select MyLib.convert(other) from MyOtherEvent as other

In the example above, assuming the event classes MyEvent and MyOtherEvent are Java classes, the static method should have the following footprint:

public static MyEvent convert(MyOtherEvent otherEvent)

9.3.2. User-Defined Function Result Cache

For user-defined functions that take no parameters or only constants as parameters the engine automatically caches the return result of the function, and invokes the function only once. This is beneficial to performance if your function indeed returns the same result for the same input parameters.
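For example, in the following illustrative statement all parameters are constants; assuming getRegionCode is a hypothetical public static method of com.mycompany.MyLib, the engine may invoke it only once and reuse the returned value for every arriving event:

select com.mycompany.MyLib.getRegionCode('EMEA') as regionCode, price from OrderEvent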

You may disable caching of return values of user-defined functions via configuration as described in Section 14.4.20.3, “User-Defined Function or Static Method Cache”.

9.3.3. Parameter Matching

EPL follows Java standards in terms of widening, performing widening automatically in cases where widening type conversion is allowed without loss of precision, for both boxed and primitive types.
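As an illustration, assume a hypothetical static method declaring a double parameter; an int- or float-typed EPL argument is widened to double automatically, since those conversions lose no precision:

public class MyLib {
    // An int- or float-typed EPL argument is widened to double automatically.
    public static double half(double value) {
        return value / 2;
    }
}

An EPL statement may then pass an int-typed property directly, for example select com.mycompany.MyLib.half(quantity) from OrderEvent.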

When user-defined functions are overloaded, the function with the best match is selected based on how well the arguments to a function can match up with the parameters, giving preference to the function that requires the least number of widening conversions.

Boxing and unboxing of arrays is not supported in UDF as it is not supported in Java. For example, an array of Integer and an array of int are not compatible types.

When using {} array syntax in EPL, the resulting type is always a boxed type: "{1, 2}" is an array of Integer (and not int since it may contain null values), "{1.0, 2d}" is an array of Double and "{'A', "B"}" is an array of String, while "{1, "B", 2.0}" is an array of Object (Object[]).

9.4. Select-Clause transpose Function

The transpose function is only valid in the select-clause and indicates that the result of the parameter expression should become the underlying event object of the output event.

The transpose function takes a single expression as a parameter. The result object of the parameter expression is subject to transposing as described below.

The function can be useful with insert into to allow an object returned by an expression to become the event itself in the output stream.

Any expression returning a Java object can be used with the transpose function. Typical examples for expressions are a static method invocation, the result of an enumeration method, a plug-in single row function or a subquery.

The examples herein assume that a single-row function by name makeEvent returns an OrderEvent instance (a POJO object, not shown).

The following EPL takes the result object of the invocation of the makeEvent method (assumed to be an OrderEvent instance) and returns the OrderEvent instance as the underlying event of the output event:

select transpose(makeEvent(oi)) from OrderIndication oi

Your select-clause can select additional properties or expressions. In this case the output event underlying object is a pair of the expression result object and the additional properties.

The next EPL also selects the origin property of the order indication event. The output event is a pair of the OrderEvent instance and a map containing the property name and value of origin:

select origin, transpose(makeEvent(oi)) from OrderIndication oi

If the transpose function is not a top-level function, i.e. if it occurs within another expression or within any other clause than the select-clause, the function simply returns the expression result of the parameter expression.

9.4.1. Transpose with Insert-Into

You may insert transposed output events into another stream.

If the stream name in the insert-into clause is already associated to an event type, the engine checks whether the event type associated to the stream name provided in the insert-into clause matches the event type associated to the object returned by the expression. If the stream name in the insert-into clause is not already associated to an existing event type the engine associates a new event type using the stream name provided in the insert-into clause.

For example, the next statement associates the stream name OrderEvent with the class com.mycompany.OrderEvent. Alternatively this association can be achieved via the static or runtime configuration API:

create schema OrderEvent as com.mycompany.OrderEvent

An EPL statement can insert into the OrderEvent stream the OrderEvent instance returned by the makeEvent method, as follows:

insert into OrderEvent select transpose(makeEvent(oi)) from OrderIndication oi

It is not valid to select additional properties or expressions in this case, as they would not be part of the output event. The following is not valid:

// not valid
insert into OrderEvent select origin, transpose(makeEvent(oi)) from OrderIndication oi

Chapter 10. EPL Reference: Enumeration Methods

10.1. Overview

EPL provides enumeration methods that work with lambda expressions to perform common tasks on subquery results, named windows, event properties or inputs that are or can be projected to a collection of events, scalar values or objects.

A lambda expression is an anonymous expression. Lambda expressions are useful for encapsulating user-defined expressions that are applied to each element in a collection. This section discusses built-in enumeration methods and their lambda expression parameters.

Lambda expressions use the lambda operator =>, which is read as "goes to". The left side of the lambda operator specifies the lambda expression input parameter(s) (if any) and the right side holds the expression. The lambda expression x => x * x is read "x goes to x times x.". Lambda expressions are also used for expression declaration as discussed in Section 5.2.7, “Expression Declaration”.

When writing lambdas, you do not have to specify a type for the input parameter(s) or output result(s) because the engine can infer all types based on the input and the expression body. So if you are querying an RFIDEvent, for example, then the input variable is inferred to be an RFIDEvent event, which means you have access to its properties and methods.

The term element with respect to enumeration methods means a single event, scalar value or object in a collection that is the input to an enumeration method. The term collection means a sequence or group of elements.

The below table summarizes the built-in enumeration methods available:

Table 10.1. Enumeration Methods

Method | Result
aggregate(seed, accumulator lambda)

Aggregate elements by using seed as an initial accumulator value and applying an accumulator expression.

Section 10.6.1, “Aggregate”.

allof(predicate lambda)

Return true when all elements satisfy a condition.

Section 10.6.2, “AllOf”.

anyof(predicate lambda)

Return true when any element satisfies a condition.

Section 10.6.3, “AnyOf”.

average()

Computes the average of values obtained from numeric elements.

Section 10.6.4, “Average”.

average(projection lambda)

Computes the average of values obtained from elements by invoking a projection expression on each element.

Section 10.6.4, “Average”.

countof()

Returns the number of elements.

Section 10.6.5, “CountOf”.

countof(predicate lambda)

Returns the number of elements that satisfy a condition.

Section 10.6.5, “CountOf”.

except(source)

Produces the set difference of the two collections.

Section 10.6.6, “Except”.

firstof()

Returns the first element.

Section 10.6.7, “FirstOf”.

firstof(predicate lambda)

Returns the first element that satisfies a condition.

Section 10.6.7, “FirstOf”.

groupby(key-selector lambda)

Groups the elements according to a specified key-selector expression.

Section 10.6.8, “GroupBy”.

groupby(key-selector lambda, value-selector lambda)

Groups the elements according to a key-selector expression mapping each element to a value according to a value-selector.

Section 10.6.8, “GroupBy”.

intersect(source)

Produces the set intersection of the two collections.

Section 10.6.9, “Intersect”.

lastof()

Returns the last element.

Section 10.6.10, “LastOf”.

lastof(predicate lambda)

Returns the last element that satisfies a condition.

Section 10.6.10, “LastOf”.

leastFrequent()

Returns the least frequent value among a collection of values.

Section 10.6.11, “LeastFrequent”.

leastFrequent(transform lambda)

Returns the least frequent value returned by the transform expression when applied to each element.

Section 10.6.11, “LeastFrequent”.

max()

Returns the maximum value among a collection of elements.

Section 10.6.12, “Max”.

max(value-selector lambda)

Returns the maximum value returned by the value-selector expression when applied to each element.

Section 10.6.12, “Max”.

maxby(value-selector lambda)

Returns the element that provides the maximum value returned by the value-selector expression when applied to each element.

Section 10.6.13, “MaxBy”.

min()

Returns the minimum value among a collection of elements.

Section 10.6.12, “Max”.

min(value-selector lambda)

Returns the minimum value returned by the value-selector expression when applied to each element.

Section 10.6.14, “Min”.

minby(value-selector lambda)

Returns the element that provides the minimum value returned by the value-selector expression when applied to each element..

Section 10.6.15, “MinBy”.

mostFrequent()

Returns the most frequent value among a collection of values.

Section 10.6.16, “MostFrequent”.

mostFrequent(transform lambda)

Returns the most frequent value returned by the transform expression when applied to each element.

Section 10.6.16, “MostFrequent”.

orderBy()

Sorts the elements in ascending order.

Section 10.6.17, “OrderBy and OrderByDesc”.

orderBy(key-selector lambda)

Sorts the elements in ascending order according to a key.

Section 10.6.17, “OrderBy and OrderByDesc”.

orderByDesc()

Sorts the elements in descending order.

Section 10.6.17, “OrderBy and OrderByDesc”.

orderByDesc(key-selector lambda)

Sorts the elements in descending order according to a key.

Section 10.6.17, “OrderBy and OrderByDesc”.

reverse

Reverses the order of elements.

Section 10.6.18, “Reverse”.

selectFrom(transform lambda)

Transforms each element resulting in a collection of transformed elements.

Section 10.6.19, “SelectFrom”.

sequenceEqual(second)

Determines whether two collections are equal by comparing each element (equals semantics apply).

Section 10.6.20, “SequenceEqual”.

sumOf()

Computes the sum from a collection of numeric elements.

Section 10.6.21, “SumOf”.

sumOf(projection lambda)

Computes the sum by invoking a projection expression on each element.

Section 10.6.21, “SumOf”.

take(numElements)

Returns a specified number of contiguous elements from the start.

Section 10.6.22, “Take”.

takeLast(numElements)

Returns a specified number of contiguous elements from the end.

Section 10.6.23, “TakeLast”.

takeWhile(predicate lambda)

Returns elements from the start as long as a specified condition is true.

Section 10.6.24, “TakeWhile”.

takeWhile( (predicate, index) lambda)

Returns elements from the start as long as a specified condition is true, allowing each element's index to be used in the logic of the predicate expression.

Section 10.6.24, “TakeWhile”.

takeWhileLast(predicate)

Returns elements from the end as long as a specified condition is true.

Section 10.6.25, “TakeWhileLast”.

takeWhileLast( (predicate,index) lambda)

Returns elements from the end as long as a specified condition is true, allowing each element's index to be used in the logic of the predicate expression.

Section 10.6.25, “TakeWhileLast”.

toMap(key-selector lambda, value-selector lambda)

Returns a Map according to specified key selector and value-selector expressions.

Section 10.6.26, “ToMap”.

union(source)

Forms a union of the input elements with source elements.

Section 10.6.27, “Union”.

where(predicate lambda)

Filters elements based on a predicate.

Section 10.6.28, “Where”.

where( (predicate,index) lambda)

Filters elements based on a predicate, allowing each element's index to be used in the logic of the predicate expression.

Section 10.6.28, “Where”.

10.2. Example Events

The examples in this section come from the domain of location report processing (RFID, asset tracking and the like):

  1. The Item event is a report of the location of a certain item. An item can be either a piece of luggage or a passenger.

  2. The LocationReport event contains a list of Item items for which it reports location.

  3. The Zone event describes areas that items may move through.

The examples use two sample single-row functions that are not provided by the EPL language: one for computing the distance between two points (distance) and one for determining whether a location falls within a rectangle (inrect). These UDF functions are not enumeration methods; they appear in the EPL statements simply to make the examples meaningful. A possible sketch of these functions appears after the class definitions below.

The Item event contains an assetId id, a (x,y) location, a luggage flag to indicate whether the item represents a piece of luggage (true) or a passenger (false), and the assetIdPassenger that holds the asset id of the associated passenger when the item is a piece of luggage.

The Item event is defined as follows (access methods not shown for brevity):

public class Item {
  String assetId;             // passenger or luggage asset id
  Location location;          // (x,y) location
  boolean luggage;            // true if this item is a luggage piece
  String assetIdPassenger;    // if the item is luggage, contains passenger associated
...

The LocationReport event contains a list of Item items for which it reports location.

The LocationReport event is defined as follows:

public class LocationReport {
  List<Item> items;
...

The Zone event contains a zone name and (x1, y1, x2, y2) rectangle.

The Zone event is defined as follows:

public class Zone {
  String name;
  Rectangle rectangle;
...

The Location object is a nested object to Item and provides the current (x,y) location:

public class Location {
  int x;
  int y;
...

The Rectangle object is a nested object to Zone and provides a zone rectangle(x1,y1,x2,y2):

public class Rectangle {
  int x1;
  int y1;
  int x2;
  int y2;
...
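Returning to the distance and inrect helper functions mentioned at the start of this section, a possible sketch is shown below; the class name LRUtil matches the example in Section 10.5, and the getter methods on Rectangle and Location are assumptions since the example classes omit their access methods:

public class LRUtil {
    // Straight-line (Euclidean) distance between two points.
    public static double distance(double x1, double y1, double x2, double y2) {
        double dx = x2 - x1;
        double dy = y2 - y1;
        return Math.sqrt(dx * dx + dy * dy);
    }

    // True if the location lies within the rectangle (getter names are assumed).
    public static boolean inrect(Rectangle r, Location loc) {
        return loc.getX() >= r.getX1() && loc.getX() <= r.getX2()
            && loc.getY() >= r.getY1() && loc.getY() <= r.getY2();
    }
}

The unqualified calls to distance and inrect in the examples assume these methods are registered as plug-in single-row functions.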

10.3. How to Use

10.3.1. Syntax

The syntax for enumeration methods is the same syntax as for any chained invocation:

input_coll.enum_method_name( [method_parameter [, method_parameter [,...]]])
	  .[ [enum_method_name(...) [...]] | property_name] 

Following the input_coll input collection (options are outlined below) is the . (dot) operator and the enum_method_name enumeration method name. A comma-separated list of method parameter expressions follows in parentheses. Additional enumeration methods can be chained thereafter. An event property name can follow, for those enumeration methods that return an event-typed (non-scalar) element.

If the method parameter is a lambda expression with a single lambda-parameter, specify the lambda-parameter name followed by the => lambda operator and followed by the expression. The synopsis for use with a single lambda-parameter is:

method_parameter: lambda_param => lambda_expression

If the method parameter is a lambda expression with two or more lambda-parameters, specify the lambda parameter names in parentheses followed by the => lambda operator followed by the expression. The synopsis for use with multiple lambda-parameters is:

method_parameter: (lambda_param [,lambda_param [,...]]) => lambda_expression

Generally for lambda expressions, the engine applies the lambda expression to each element in the input collection. The expression yields a result that, depending on the particular enumeration method, is used for aggregation, as a filter or for output, for example.

10.3.2. Introductory Examples

Let's look at an EPL statement that employs the where enumeration method and a lambda expression. This example returns items that have a (x, y) location of (0, 0):

select items.where(i => i.location.x = 0 and i.location.y = 0) as zeroloc
from LocationReport

As enumeration methods can be chained, this selection is equivalent:

select items.where(i => i.location.x = 0).where(i => i.location.y = 0) as zeroloc
from LocationReport

According to the above statement the engine outputs in field zeroloc a collection of Item objects matching the condition.

The where enumeration method has a second version that takes two lambda-parameters. The second lambda-parameter represents the current index of the element within the collection.

This sample query returns a collection that consists of the first 3 items; it does not use the item lambda parameter:

select items.where( (item, indexElement) => indexElement < 3) as firstThreeItems
from LocationReport

10.3.3. Input, Output and Limitations

It is not necessary to use classes for event representation. The examples above apply in the same way to Map or XML underlying events.

For most enumeration methods the input can be any collection of events, scalar values or objects. For some enumeration methods limitations apply that are documented below. For example, the sumOf enumeration method requires a collection of numeric scalar values if used without parameters. If the input to sumOf is a collection of events the enumeration method requires a lambda expression as parameter that yields the numeric value to use to compute the sum.
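For instance, using the example events from Section 10.2, the items property is a collection of Item events and therefore needs a projection lambda, while a collection of numeric scalar values can be summed without parameters (illustrative):

{1, 2, 3}.sumOf()    // Returns 6

select items.sumOf(i => i.location.x) as sumX from LocationReport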

Many examples in this section operate on the collection returned by the event property items in the LocationReport event class. There are many other inputs yielding collections, as listed below. Most examples herein use an event property as an input simply because the example can thus be brief and does not need to refer to a subquery, named window or other concept.

For enumeration methods that return a collection, for example where and orderBy, the engine outputs an implementation of the Collection interface that contains the selected value(s). The collection returned must be considered read-only. As Java does not allow resettable iterators, the Collection interface allows more flexibility to query size and navigate among collection elements. We recommend against down-casting a collection returned by the engine to a more specific subclass of the Collection interface.
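As an illustration (the statement, column name and listener class are hypothetical), application code can read such a collection from the output event through the usual listener API and iterate it without modifying it:

// Hypothetical listener for a statement selecting a column named 'zeroloc'
// that holds a collection of Item events, as in the example in Section 10.3.2.
public class ZeroLocListener implements com.espertech.esper.client.UpdateListener {
  public void update(com.espertech.esper.client.EventBean[] newEvents,
                     com.espertech.esper.client.EventBean[] oldEvents) {
    if (newEvents == null) {
      return;
    }
    // The returned collection must be treated as read-only.
    java.util.Collection<?> zeroloc =
        (java.util.Collection<?>) newEvents[0].get("zeroloc");
    for (Object item : zeroloc) {
      System.out.println(item);
    }
  }
}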

For enumeration methods that return an element, for example firstOf, lastOf, minBy and maxBy the engine outputs the scalar value or the underlying event if operating on events. You may add an event property name after the enumeration method to return a property value.

Enumeration methods generally retain the order of elements provided by the collection.

The following restrictions apply to enumeration methods:

  1. Enumeration methods returning a collection return a read-only implementation of the Collection interface. You may not use any of the write-methods such as add or remove on a result collection.

10.4. Inputs

The input of data for built-in enumeration methods is a collection of scalar values, events or other objects. Input can originate from any of the following:

  1. A subquery.

  2. A named window.

  3. A property of an event that is itself a collection of events or classes, i.e. an indexed property.

  4. Any of the data window aggregation functions (window, first, last).

  5. The special prevwindow, prev and prevtail single-row functions.

  6. A plug-in single-row function or user-defined function.

  7. A declared expression.

  8. Another enumeration method that returns a collection.

  9. An array returned by the {} array operator.

10.4.1. Subquery Results

Subqueries can return the rows of another stream's data window or rows from a named window. By providing a where-clause the rows returned by a subquery can be correlated to data provided by stream(s) in the from-clause. See Section 5.11, “Subqueries”.

A subquery that selects the (*) wildcard provides a collection of events as input. A subquery that selects a single value expression provides a collection of scalar values as input. Subqueries that select multiple value expressions are not allowed as input to enumeration methods.

The following example uses a subquery to retrieve all zones for each location report item where the location falls within the rectangle of the zone. Please see a description of example events and functions above.

select assetId,
  (select * from Zone.std:unique(name)).where(z => inrect(z.rectangle, location)) as zones
from Item

You may place the subquery in an expression declaration to reuse the subquery in multiple places of the same EPL statement.

This sample EPL declares the same query as above in an expression declaration:

expression myquery {itm =>
  (select * from Zone.std:unique(name)).where(z => inrect(z.rectangle, itm.location))
}
select assetId, myquery(item) as subq, 
    myquery(item).where(z => z.zone = 'Z01') as assetItem
from Item as item

The above query also demonstrates how an enumeration method, in the example the where-method, can be run across the results returned by a subquery in an expression declaration.

Place a single column in the subquery select-clause to provide a collection of scalar values as input.

The next example selects all names of zones and orders the names, returning an ordered collection of string names every 30 seconds:

select (select name from Zone.std:unique(name)).orderBy() as orderedZones
from pattern[every timer:interval(30)]

Note that the engine can cache intermediate results and is thereby not forced to re-evaluate the subquery for each occurrence in the select-clause.

10.4.2. Named Window

Named windows are globally-visible data windows. See Section 5.15, “Creating and Using Named Windows”.

You may specify the named window name as input for an enumeration method and can optionally provide a correlation where-clause. The syntax is equivalent to a sub-query against a named window but much shorter.

Synopsis:

named-window-name[(correlation-expression)].enum-method-name(...)

When selecting all events in a named window you do not need the correlation-expression. To select a subset of data in the named window, specify a correlation-expression. From the perspective of best runtime performance, a correlation expression is preferred to reduce the number of rows returned.

The following example first declares a named window to hold the last zone event per zone name:

create window ZoneWindow.std:unique(name) as Zone

Then we create a statement to insert zone events that arrive to the named window:

insert into ZoneWindow select * from Zone

Finally this statement queries the named window to retrieve all zones for each location report item where the location falls within the rectangle of the zone:

select ZoneWindow.where(z => inrect(z.rectangle, location)) as zones from Item

If you have a filter or correlation expression, append the expression to the named window name and place in parenthesis.

This slightly modified query is the same as the example above except that it adds a filter expression such that only zones with names Z1, Z2 or Z3 are considered:

select ZoneWindow(name in ('Z1', 'Z2', 'Z3')).where(z => inrect(z.rectangle, location)) as zones 
from Item

You may prefix property names provided by the named window with the named window name to disambiguate property names.

This sample query prefixes the name property and returns the count of matching zones:

select ZoneWindow(ZoneWindow.name in ('Z1', 'Z2', 'Z3')).countof() as zoneCount
from Item

The engine internally interprets the shortcut syntax and creates a subquery from it. Thus all indexing and query planning for subqueries against named windows apply here as well.

10.4.3. Event Property

Indexed event properties are event properties that are a collection, array or iterable of scalar values or objects.

The LocationReport event from the example contains a list of Item events. Any indexed property (list, array, collection, iterable) is eligible for use as input to an enumeration method. If the indexed property contains non-scalar objects the objects are treated as events and can be used as input to enumeration methods as a collection of events.

The next sample query returns items that are less than 20 units away from the center, taking the items event property provided by each LocationReport event as input:

select items.where(p => distance(0, 0, p.location.x, p.location.y) < 20) as centeritems
from LocationReport

The following example consists of two statements: The first statement declares a new event type and the second statement invokes the sequenceEqual method to compare sequences contained in two properties of the same event:

create schema MyEvent (seqone String[], seqtwo String[])
select seqone.sequenceEqual(seqtwo) from MyEvent

10.4.4. Data Window Aggregation

Data window aggregation functions return events present in a stream's current data window. They are aggregation functions and as such sensitive to the presence of group by. See Section 9.2.2, “Data Window Aggregation Functions”.

When using data window aggregation functions as input to a built-in enumeration method you can specify the * wildcard to provide a collection of events as input. Or you can specify a property name in the data window aggregation function to provide a collection of scalar values as input. Use any of the window, first or last functions.

In this example query the window(*) aggregation function returns the last 10 seconds of item location reports for the same asset id as the incoming event. Among that last 10 seconds of events for the same asset id, the enumeration method returns those item location reports where the distance to center is less than 20, for each arriving Item event.

Sample query:

select window(*).where(p => distance(0, 0, p.location.x, p.location.y) < 20) as centeritems
from Item(type='P').win:time(10) group by assetId

The next sample query instead selects the asset id property of all events and returns an ordered collection:

select window(assetId).orderBy() as orderedAssetIds
from Item.win:time(10) group by assetId

10.4.5. prev, prevwindow and prevtail Single-Row Functions as Input

The prev, prevwindow and prevtail single-row functions allow access into a stream's data window; however, they are not aggregation functions and as such are not sensitive to the presence of group by. See Section 9.1.10, “The Previous-Window Function”.

When using any of the prev single-row functions as input to a built-in enumeration method you can specify the stream name as a parameter to the function or an event property. The input to the enumeration method is a collection of events if you specify the stream name, or a collection of scalar values if you specify an event property.

In this example query the prevwindow(stream) single-row function returns the last 10 seconds of item location reports, among which the enumeration method filters those item location reports where the distance to center is less than 20, for each Item event that arrived in the last 10 seconds, considering passenger-type Item events only (see filter type = 'P').

Sample query:

select prevwindow(items)
    .where(p => distance(0, 0, p.location.x, p.location.y) < 20) as centeritems
from Item(type='P').win:time(10) as items

This sample query demonstrates the use of the prevwindow function to return a collection of scalar values (collection of asset id) as input to orderby:

select prevwindow(assetId).orderBy() as orderedAssetIds
from Item.win:time(10) as items

10.4.6. Single-Row Function or UDF

Your single-row or user-defined function can return either an array or any collection that implements either the Collection or Iterable interface. For arrays, the array component type and for collections, the collection or iterable generic type should be the class providing event properties.

As an example, assume a ZoneFactory class exists and a static method getZones() returns a list of zones to filter items, for example:

public class ZoneFactory {
  public static Iterable<Zone> getZones() {
    List<Zone> zones = new ArrayList<Zone>();
    zones.add(new Zone("Z1", new Rectangle(0, 0, 20, 20)));
    return zones;
  }
}

Import the class through runtime or static configuration, or add the method above as a plug-in single-row function.

The following query returns for each Item event all zones that the item belongs to:

select ZoneFactory.getZones().where(z => inrect(z.rectangle, item.location)) as zones
from Item as item

If the class and method were registered as a plug-in single-row function, you can leave the class name off, for example:

select getZones().where(z => inrect(z.rectangle, item.location)) as zones
from Item as item

Your single-row or user-defined function can also return an array, collection or iterable of scalar values.

For example, the static method getZoneNames() returns a list of zone names:

public static String[] getZoneNames() { 
  return new String[] { "Z1", "Z2"};
}

The following query returns zone names every 30 seconds and excludes zone Z1:

select getZoneNames().where(z => z != "Z1")
from pattern[every timer:interval(30)]

10.4.7. Declared Expression

A declared expression may return input data for an enumeration method.

The query below declares an expression that returns all passenger location reports among the items in the location report event, in a column named passengerCollection. The query then uses the result returned by the declared expression a second time, filtering the list for the passenger location report with a given asset id, in a column named passengerP01.

Sample query:

expression passengers {
  lr => lr.items.where(l => l.type='P')
}
select passengers(lr) as passengerCollection,
  passengers(lr).where(x => assetId = 'P01') as passengerP01
from LocationReport lr

The engine applies caching techniques to avoid re-evaluating the declared expression multiple times.

10.5. Example

Following the RFID asset tracking example as introduced earlier, this section introduces two use cases solved by enumeration methods.

The first use case requires us to find any luggage that is more than 20 units away from the passenger that the luggage belongs to. The declared expression lostLuggage solves this question.

The second question to answer is: for each such piece of lost luggage, which single other passenger is nearest to that luggage? The declared expression nearestOwner, which uses lostLuggage, answers this question.

Below is the complete EPL statement (one statement not multiple):

// expression to return a collection of lost luggage
expression lostLuggage {
  lr => lr.items.where(l => l.type='L' and
    lr.items.some(p => p.type='P' and p.assetId=l.assetIdPassenger 
      and LRUtil.distance(l.location.x, l.location.y, p.location.x, p.location.y) > 20))
}

// expression to return all passengers
expression passengers {
  lr => lr.items.where(l => l.type='P')
}

// expression to find the nearest owner
expression nearestOwner {
  lr => lostLuggage(lr).toMap(key => key.assetId,
    value => passengers(lr).minBy(
        p => LRUtil.distance(value.location.x, value.location.y, p.location.x, p.location.y))
    )
}

select lostLuggage(lr) as val1, nearestOwner(lr) as val2 from LocationReport lr

10.6. Reference

10.6.1. Aggregate

The aggregate enumeration method takes an expression providing the initialization value (seed) and an accumulator lambda expression. The return value is the final accumulator value.

Via the aggregate method you may perform a calculation over elements. The method initializes the aggregated value by evaluating the expression provided in the first parameter. The method then calls the lambda expression of the second parameter once for each element in the input. The lambda expression receives the last aggregated value and the element from the input. The result of the expression replaces the previous aggregated value; the method returns the final aggregated value after processing all elements.

An expression example with scalar values:

{1, 2, 3}.aggregate(0, (result, value) => result + value)  // Returns 6

The example below aggregates price of each OrderEvent in the last 10 seconds computing a total price:

// Initialization value is zero.
// Aggregate by adding up the price.
select window(*).aggregate(0, (result, order) => result + order.price) as totalPrice
from OrderEvent.win:time(10)

In the query above, the initialization value is zero, result is used for the last aggregated value and order denotes the element; the expression adds the value of the price property to the running result.

This example aggregation builds a comma-separated list of all asset ids of all items:

select items.aggregate('', 
  (result, item) => result || (case when result='' then '' else ',' end) || item.assetId) as assets			
from LocationReport

In the above query, the empty string '' represents the initialization value. The name result is used for the last aggregated value and the name item is used to denote the element.

The type of value returned by the initialization expression must match the type of value returned by the accumulator lambda expression.

If the input is null the method returns null. If the input is empty the method returns the initialization value.

10.6.2. AllOf

The allof enumeration method determines whether all elements satisfy the predicate condition.

The method takes a single parameter: The predicate lambda expression that must yield a Boolean result. The enumeration method applies the lambda expression to each element and if the expression returns true for all elements, the method returns true.

An expression example with scalar values:

{1, 2, 3}.allOf(v => v > 0)    // Returns true as all values are > 0
{1, 2, 3}.allOf(v => v > 1)    // Returns false

The EPL statement below returns true when all items are within 1000 unit distance of center:

select items.allof(i => distance(i.location.x, i.location.y, 0, 0) < 1000) as centered			
from LocationReport

If the input is null the method returns null. If the input is empty the method returns true.

10.6.3. AnyOf

The anyof enumeration method determines whether any element satisfies the predicate condition.

The only parameter is the predicate lambda expression that must yield a Boolean result. The enumeration method applies the lambda expression to each element and if the expression returns true for any element, the method returns true.

An expression example with scalar values:

{1, 2, 3}.anyOf(v => v > 0)    // Returns true
{1, 2, 3}.anyOf(v => v > 1)    // Returns true
{1, 2, 3}.anyOf(v => v > 3)    // Returns false

The EPL statement below returns true when any of the items are within a 10 unit distance of the center:

select items.anyof(i => distance(i.location.x, i.location.y, 0, 0) < 10) as centered			
from LocationReport

If the input is null the method returns null. If the input is empty the method returns false.

10.6.4. Average

The average enumeration method computes the average of scalar values. If passing a projection lambda expression the method computes the average obtained by invoking the projection lambda expression on each element.

The method takes a projection lambda expression yielding a numeric value as a parameter. It applies the lambda expression to each element and computes the average of the result, returning a Double value. A BigDecimal is returned for expressions returning BigInteger or BigDecimal.

An expression example with scalar values:

{1, 2, 3}.average()    // Returns 2

The EPL statement as shown next computes the average distance from center among all items in the location report event:

select items.average(i => distance(i.location.x, i.location.y, 0, 0)) as avgdistance
from LocationReport

If the input is null the method returns null. If the input is empty the method returns double zero or BigDecimal zero.

10.6.5. CountOf

The countof enumeration method returns the number of elements, or the number of elements that satisfy a condition.

The enumeration method has two versions: The first version takes no parameters and computes the number of elements. The second version takes a predicate lambda expression that must yield Boolean true or false, and computes the number of elements that satisfy the condition.

An expression example with scalar values:

{1, 2, 3}.countOf()    // Returns 3
{1, 2, 3}.countOf(v => v < 2)    // Returns 1

The next sample statement counts the number of items:

select items.countOf() as cnt from LocationReport

This example statement counts the number of items that have a distance to center that is less than 20 units:

select items.countOf(i => distance(i.location.x, i.location.y, 0, 0) < 20) as cntcenter
from LocationReport

If the input is null the method returns null. If the input is empty the method returns integer zero.

10.6.6. Except

The except enumeration method forms a set difference of the input elements with the elements that the parameter expression yields.

The enumeration method takes a single parameter that must itself return a collection of events, objects or scalar values. The method returns the elements of the first collection that do not appear in the second collection.

An expression example with scalar values:

{1, 2, 3}.except({1})   // Returns {2, 3}

The following statement compares the items of the last location report against all items in the previous 10 location reports, and reports for each combination only those items in the current item report that are not also in the location report compared to:

select za.items.except(zb.items) as itemsCompared 
from LocationReport as za unidirectional, LocationReport.win:length(10) as zb

If the input is null the method returns null. For scalar values and objects equals-semantics apply.

10.6.7. FirstOf

The firstOf enumeration method returns the first element or the first element that satisfies a condition.

The method has two versions: The first version takes no parameters and returns the first element. The second version takes a predicate lambda expression yielding true or false. It applies the lambda expression to each element and returns the first element for which the expression returns true. The return type is the element itself and not a collection. You may append a property name to return the property value for the first element.

An expression example with scalar values:

{1, 2, 3}.firstOf()   // Returns 1
{1, 2, 3}.firstOf(v => v / 2 > 1)   // Returns 3

In the following EPL sample the query returns the first item that has a distance to center that is less than 20 units:

select items.firstof(i => distance(i.location.x, i.location.y, 0, 0) < 20) as firstcenter
from LocationReport

The next sample EPL returns the first item's asset id:

select items.firstof().assetId as firstAssetId from LocationReport

If the input is null, empty or if none of the elements match the condition the method returns null.

10.6.8. GroupBy

The groupby enumeration method groups the elements according to a specified key-selector lambda expression. There are two versions of the groupby method.

The first version of the method takes a key-selector lambda expression and returns a Map with one entry for each distinct key that was encountered, each entry holding a collection of the matching objects. The result is a Map&lt;Object, Collection&lt;Object&gt;&gt; wherein object is the event underlying object.

The second version of the method takes a key-selector lambda expression and a value-selector lambda expression and returns a Map with one entry for each distinct key that was encountered, each entry holding a collection of values. The result is a Map&lt;Object, Collection&lt;Object&gt;&gt; wherein object is the result of applying the value-selector expression.

The next query filters out all luggage items using a where method and then groups by the luggage's passenger asset id. It returns a map of passenger asset id and the collection of luggage items for each passenger:

select items.where(i => i.type='L').groupby(i => i.assetIdPassenger) as luggagePerPerson
from LocationReport

The query shown below generates a map of item asset id and distance to center:

select items.groupby(
    k => assetId, v => distance(v.location.x, v.location.y, 0, 0)) as distancePerItem
from LocationReport

If the input is null the method returns null. Null values as key and value are allowed.

10.6.9. Intersect

The intersect enumeration method forms a set intersection of the input elements with the elements that the parameter expression yields.

The enumeration method takes a single parameter that must itself return a collection of events, objects or scalar values. The method returns the elements of the first collection that also appear in the second collection.

An expression example with scalar values:

{1, 2, 3}.intersect({2, 3})   // Returns {2, 3}

The following statement compares the items of the last location report against all items in the previous 10 location reports, and reports for each combination all items in the current item report that also occur in the other location report:

select za.items.intersect(zb.items) as itemsCompared 
from LocationReport as za unidirectional, LocationReport.win:length(10) as zb

If the input is null the method returns null. For scalar values and objects equals-semantics apply.

10.6.10. LastOf

The lastOf enumeration method returns the last element or the last element that satisfies a condition.

The method has two versions: The first version takes no parameters and returns the last element. The second version takes a predicate lambda expression yielding true or false. It applies the lambda expression to each element and returns the last element for which the expression returns true. The return type is the element itself and not a collection. You may append a property name to return the property value for the last element.

An expression example with scalar values:

{1, 2, 3}.lastOf()   // Returns 3
{1, 2, 3}.lastOf(v => v < 3)   // Returns 2

In the following EPL sample the query returns the last item that has a distance to center that is less than 20 units:

select items.lastof(i => distance(i.location.x, i.location.y, 0, 0) < 20) as lastcenter 
from LocationReport

The next sample EPL returns the last item's asset id:

select items.lastof().assetId as lastAssetId from LocationReport

If the input is null or empty, or if none of the elements match the condition, the method returns null.

10.6.11. LeastFrequent

The leastFrequent enumeration method returns the least frequent value among a collection of values, or the least frequent value after applying a transform expression to each element.

The method has two versions: The first version takes no parameters and returns the least frequent value. The second version takes a transform lambda expression yielding the value to count occurrences for. The method applies the lambda expression to each element and returns the expression result value with the least number of occurrences. The return type is the type of value in the collection or the type of value returned by the transform lambda expression if one was provided.

An expression example with scalar values:

{1, 2, 3, 2, 1}.leastFrequent()   // Returns 3

The example EPL below returns the least frequent item type, counting the distinct item types among all items for the current LocationReport event:

select items.leastFrequent(i => type) as leastFreqType from LocationReport

If the input is null or empty the method returns null. The transform expression may also yield null. A null value can be returned as the least frequent value if the least frequent value is null. If multiple values are tied for the least number of occurrences the method returns the first such value considering the ordering of the collection.

10.6.12. Max

The max enumeration method returns the maximum value among a collection of values.

If no value-selector lambda expression is provided, the method finds the maximum.

If a value-selector lambda expression is provided, the enumeration method invokes a value-selector lambda expression on each element and returns the maximum value. The type of value returned follows the return type of the lambda expression that was provided as parameter.

An expression example with scalar values:

{1, 2, 3, 2, 1}.max()   // Returns 3

The next query returns the maximum distance of any item from center:

select items.max(i => distance(i.location.x, i.location.y, 0, 0)) as maxcenter 
from LocationReport

The value-selector lambda expression must return a comparable type: Any primitive or boxed type or Comparable type is permitted.

If the input is null or empty, or if none of the elements return a non-null value when transformed, the method returns null.

10.6.13. MaxBy

The maxBy enumeration method returns the element that provides the maximum value returned by the value-selector lambda expression when applied to each element.

The enumeration method returns the element itself. You may append an event property name to return a property value of the element.

The next query returns the first item with the maximum distance to center:

select items.maxBy(i => distance(i.location.x, i.location.y, 0, 0)) as maxItemCenter 
from LocationReport

The next sample returns the type of the item with the largest asset id (string comparison) among all items:

select items.maxBy(i => assetId).type as maxAssetId from LocationReport

The transform expression must return a comparable type: Any primitive or boxed type or Comparable type is permitted.

If the input is null or empty, or if none of the elements return a non-null value when transformed, the method returns null.

10.6.14. Min

The min enumeration method returns the minimum value among a collection of values.

If no value-selector lambda expression is provided, the method finds the minimum.

If a value-selector lambda expression is provided, the enumeration method invokes a value-selector lambda expression on each element and returns the minimum value. The type of value returned follows the return type of the lambda expression that was provided as parameter.

An expression example with scalar values:

{1, 2, 3, 2, 1}.min()   // Returns 1

The next query returns the minimum distance of any item to center:

select items.min(i => distance(i.location.x, i.location.y, 0, 0)) as mincenter 
from LocationReport

The transform expression must return a comparable type: Any primitive or boxed type or Comparable type is permitted.

If the input is null or empty, or if none of the elements return a non-null value when transformed, the method returns null.

10.6.15. MinBy

The minBy enumeration method returns the element that provides the minimum value returned by the value-selector lambda expression when applied to each element.

The enumeration method returns the element itself. You may append an event property name to return a property value of the element.

The next query returns the first item with the minimum distance to center:

select items.minBy(i => distance(i.location.x, i.location.y, 0, 0)) as minItemCenter 
from LocationReport

The next sample returns the type of the item with the smallest asset id (string comparison) among all items:

select items.minBy(i => assetId).type as minAssetId from LocationReport

The transform expression must return a comparable type: Any primitive or boxed or Comparable type is permitted.

If the input is null or empty, or if none of the elements return a non-null value when transformed, the method returns null.

10.6.16. MostFrequent

The mostFrequent enumeration method returns the most frequent value among a collection of values, or the most frequent value after applying a transform expression to each element.

The method has two versions: The first version takes no parameters and returns the most frequent value. The second version takes a transform lambda expression yielding the value to count occurrences for. The method applies the lambda expression to each element and returns the expression result value with the most number of occurrences. The return type is the type of value in the collection or the type of value returned by the transform lambda expression if one was provided.

An expression example with scalar values:

{1, 2, 3, 2, 1, 2}.mostFrequent()   // Returns 2

The example EPL below returns the most frequent item type, counting the distinct item types among all items for the current LocationReport event:

select items.mostFrequent(i => type) as mostFreqType from LocationReport

If the input is null or empty the method returns null. The transform expression may also yield null. A null value can be returned as the most frequent value if the most frequent value is null. If multiple values are tied for the greatest number of occurrences the method returns the first such value considering the ordering of the collection.

10.6.17. OrderBy and OrderByDesc

The orderBy enumeration method sorts elements in ascending order according to a key. The orderByDesc enumeration method sorts elements in descending order according to a key.

The enumeration method takes a single key-selector lambda expression as parameter and orders elements according to the key yielded by the expression. For a collection of scalar values the key-selector expression can be omitted, in which case the elements themselves serve as sort keys (as the scalar example below shows). For same-value keys, the method maintains the existing order.

An expression example with scalar values:

{2, 3, 2, 1}.orderBy()   // Returns {1, 2, 2, 3}
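
The orderByDesc method accepts the same parameters and sorts in descending order; an illustrative scalar example:

{2, 3, 2, 1}.orderByDesc()   // Returns {3, 2, 2, 1}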

This example orders all items from a location report according to their distance from center:

select items.orderBy(i => distance(i.location.x, i.location.y, 0, 0)) as itemsNearFirst,
  items.orderByDesc(i => distance(i.location.x, i.location.y, 0, 0)) as itemsFarFirst
from LocationReport

The key-selector lambda expression must return a comparable type: Any primitive or boxed or Comparable type is permitted.

If the input is null the method returns null. If the input is empty the method returns an empty collection.

10.6.18. Reverse

The reverse enumeration method reverses the order of the elements and returns the resulting collection.

An expression example with scalar values:

{2, 3, 2, 1}.reverse()   // Returns {1, 2, 3, 2}

The following EPL reverses the items:

select items.reverse() as reversedItems from LocationReport

If the input is null the method returns null. If the input is empty the method returns an empty collection.

10.6.19. SelectFrom

The selectFrom enumeration method transforms each element resulting in a collection of transformed elements.

The enumeration method applies a transformation lambda expression to each element and returns the result of each transformation as a collection. Use the new operator to yield multiple values for each element, see Section 8.13, “The 'new' Keyword”.

The next EPL query returns a collection of asset ids:

select items.selectFrom(i => assetId) as itemAssetIds from LocationReport

This sample EPL query evaluates each item and returns the asset id as well as the distance from center for each item:

select items.selectFrom(i => 
  new {
    assetId, 
    distanceCenter = distance(i.location.x, i.location.y, 0, 0)
  } ) as itemInfo from LocationReport

If the input is null the method returns null. If the input is empty the method returns an empty collection.

10.6.20. SequenceEqual

The sequenceEqual enumeration method determines whether two collections are equal by comparing each element.

The method enumerates the two source collections in parallel and compares corresponding elements using the equals method. The method takes a single parameter expression that must return a collection containing elements of the same type as the input. The method returns true if the two source sequences are of equal length and their corresponding elements are equal.

An expression example with scalar values:

{1, 2, 3}.sequenceEqual({1})   // Returns false
{1, 2, 3}.sequenceEqual({1, 2, 3})   // Returns true

The following example compares the asset id of all items to the asset ids returned by a method ItemUtil.redListed() that is assumed to return a list of asset ids of string type:

select items.selectFrom(i => assetId).sequenceEqual(ItemUtil.redListed()) from LocationReport

If the input is null the method returns null.

10.6.21. SumOf

The sumOf enumeration method computes the sum. If a projection lambda expression is provided, the method invokes the projection lambda expression on each element and computes the sum on each returned value.

The projection lambda expression should yield a numeric value, BigDecimal or BigInteger value. Depending on the type returned by the projection lambda expression the method returns either Integer, Long, Double, BigDecimal or BigInteger.

An expression example with scalar values:

{1, 2, 3}.sumOf()   // Returns 6

The following example computes the sum of the distance of each item to center:

select items.sumOf(i => distance(i.location.x, i.location.y, 0, 0)) as totalAllDistances
from LocationReport

If the input is null or empty the method returns null.

10.6.22. Take

The take enumeration method returns a specified number of contiguous elements from the start.

The enumeration method takes a single size (non-lambda) expression that returns an Integer value.

An expression example with scalar values:

{1, 2, 3}.take(2)   // Returns {1, 2}

The following example returns the first 5 items:

select items.take(5) as first5Items from LocationReport

If the input is null the method returns null. If the input is empty the method returns an empty collection.

10.6.23. TakeLast

The takeLast enumeration method returns a specified number of contiguous elements from the end.

The enumeration method takes a single size (non-lambda) expression that returns an Integer value.

An expression example with scalar values:

{1, 2, 3}.takeLast(2)   // Returns {2, 3}

The following example returns the last 5 items:

select items.takeLast(5) as last5Items from LocationReport

If the input is null the method returns null. If the input is empty the method returns an empty collection.

10.6.24. TakeWhile

The takeWhile enumeration method returns elements from the start as long as a specified condition is true.

The enumeration method has two versions. The first version takes a predicate lambda expression and the second version takes a predicate lambda expression and index for use within the predicate expression. Both versions return elements from the start as long as the specified condition is true.

An expression example with scalar values:

{1, 2, 3}.takeWhile(v => v < 3)   // Returns {1, 2}
{1, 2, 3}.takeWhile((v,ind) => ind < 2)   // Returns {1, 2}
{1, 2, -1, 4, 5, 6}.takeWhile((v,ind) => ind < 5 and v > 0)  // Returns {1, 2} (Take while index<5 and value>0)

This example selects all items from a location report in the order provided until the first item that has a distance to center greater than 20 units:

select items.takeWhile(i => distance(i.location.x, i.location.y, 0, 0) < 20)
from LocationReport

In the second version, the index parameter represents the index of the input element, starting at zero for the first element.

The next example is similar to the query above but also limits the result to the first 10 items:

select items.takeWhile((i, ind) => distance(i.location.x, i.location.y, 0, 0) < 20 and ind < 10)
from LocationReport

If the input is null the method returns null. If the input is empty the method returns an empty collection.

10.6.25. TakeWhileLast

The takeWhileLast enumeration method returns elements from the end as long as a specified condition is true.

The enumeration method has two versions. The first version takes a predicate lambda expression and the second version takes a predicate lambda expression and index for use within the predicate expression. Both versions return elements from the end as long as the specified condition is true.

An expression example with scalar values:

{1, 2, 3}.takeWhileLast(v => v < 3)   // Returns {} (empty collection)
{1, 2, 3}.takeWhileLast(v => v > 1)   // Returns {2, 3}
{1, 2, 3}.takeWhileLast((v,ind) => ind < 2)   // Returns {2, 3}
{1, 2, -1, 4, 5, 6}.takeWhileLast((v,ind) => ind < 5 and v > 0)  // Returns {4, 5, 6} (Take while index<5 and value>0)

This example selects all items from a location report, starting from the last element and proceeding backwards, until the first item that has a distance to center greater than 20 units:

select items.takeWhileLast(i => distance(i.location.x, i.location.y, 0, 0) < 20)
from LocationReport

The second version provides the index of the input element starting at zero for the last element (reverse index).

The next example is similar to the query above but also limits the result to the last 10 items:

select items.takeWhileLast((i, ind) => distance(i.location.x, i.location.y, 0, 0) < 20 and ind < 10)
from LocationReport

If the input is null the method returns null. If the input is empty the method returns an empty collection.

10.6.26. ToMap

The toMap enumeration method returns a Map according to specified key-selector lambda expression and value-selector lambda expression.

The enumeration method takes a key-selector expression and a value-selector expression. For each element the method applies the key-selector expression to determine the map key and the value-selector expression to determine the map value. If the key already exists in the map the value is overwritten.
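
An illustrative expression example with scalar values (a sketch, assuming toMap is applied to a collection of scalar values where the lambda parameter represents the value):

{1, 2, 3}.toMap(k => k, v => v * 10)   // Returns a map of {1 -> 10, 2 -> 20, 3 -> 30}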

The next example EPL outputs a map of item asset id and distance to center for each item:

select items.toMap(k => k.assetId, v => distance(v.location.x, v.location.y, 0, 0)) as assetDistance
from LocationReport

If the input is null the method returns null. If the input is empty the method returns an empty map.

10.6.27. Union

The union enumeration method forms a union of the input elements with the elements that the parameter expression yields.

The enumeration method takes a single parameter that must itself return a collection of events, objects or scalar values. It appends the parameter's collection to the input elements and returns the combined collection of elements.

An expression example with scalar values:

{1, 2, 3}.union({4, 5})   // Returns {1, 2, 3, 4, 5}

This example selects a union of all items that have an asset id of L001 or that are of type passenger:

select items.where(i => i.assetId = 'L001')
    .union(items.where(i => i.type = 'P')) as itemsUnion
from LocationReport

If the input is null the method returns null.

10.6.28. Where

The where enumeration method filters elements based on a predicate.

The enumeration method has two versions. The first version takes a predicate lambda expression and the second version takes a predicate lambda expression and index for use within the predicate expression. Both versions return all elements for which the predicate expression is true.

An expression example with scalar values:

{1, 2, 3}.where(v => v != 2)   // Returns {1, 3}

This example selects all items from a location report that are passenger-type:

select items.where(p => p.type = 'P') from LocationReport

In the second version, the index parameter represents the index of the input element, starting at zero for the first element.

The example below selects all items from a location report that are passenger-type but ignores the first 3 elements:

select items.where((p, ind) => p.type = 'P' and ind > 2) from LocationReport

If the input is null the method returns null. If the input is empty the method returns an empty collection.

Chapter 11. EPL Reference: Date-Time Methods

11.1. Overview

EPL date-time methods work on date-time values to perform common tasks such as comparing times and time periods, adding or subtracting time periods, setting or rounding calendar fields and querying fields.

Date-time methods operate on:

  1. Any expression or event property that returns either a long-type millisecond value, a java.util.Calendar or java.util.Date including subclasses. Consider the built-in single-row function current_timestamp for use with date-time methods.

  2. Any event for which the event type declares a start timestamp property name and optionally also an end timestamp property name. Date-time methods operate on events by means of the stream-alias.method-name syntax.

The table below summarizes the built-in date-time methods:

Table 11.1. Date-Time Methods

after(event or timestamp)

Returns true if an event happens after another event, or a timestamp is after another timestamp.

Section 11.4.1, “After”.

before(event or timestamp)

Returns true if an event happens before another event, or a timestamp is before another timestamp.

Section 11.4.2, “Before”.

between(timestamp, timestamp, boolean, boolean)

Returns true if a timestamp is between two timestamps.

Section 11.3.1, “Between”.

coincides(event or timestamp)

Returns true if an event and another event happen at the same time, or two timestamps are the same value.

Section 11.4.3, “Coincides”.

during(event or timestamp)

Returns true if an event happens during the occurrence of another event, or when a timestamp falls within the occurrence of an event.

Section 11.4.4, “During”.

finishes(event or timestamp)

Returns true if an event starts after another event starts and the event ends at the same time as the other event.

Section 11.4.5, “Finishes”.

finishedBy(event or timestamp)

Returns true if an event starts before another event starts and ends at the same time as the other event.

Section 11.4.6, “Finished By”.

format()

Formats the date-time returning a string.

Section 11.3.2, “Format”.

get(field)

Returns the value of the given date-time value field.

Section 11.3.3, “Get (By Field)”.

getMillisOfSecond()

getSecondOfMinute()

getMinuteOfHour()

getHourOfDay()

getDayOfWeek()

getDayOfMonth()

getDayOfYear()

getWeekyear()

getMonthOfYear()

getYear()

getEra()

Returns the value of the given date-time value field.

Section 11.3.4, “Get (By Name) ”.

includes(event or timestamp)

Returns true if the parameter event happens during the occurrence of the event, or when a timestamp falls within the occurrence of an event.

Section 11.4.7, “Includes”.

meets(event or timestamp)

Returns true if the event's end time is the same as another event's start time.

Section 11.4.8, “Meets”.

metBy(event or timestamp)

Returns true if the event's start time is the same as another event's end time.

Section 11.4.9, “Met By”.

minus(duration-millis)

Returns a date-time with the specified duration in long-type milliseconds taken away.

Section 11.3.5, “Minus”.

minus(time-period)

Returns a date-time with the specified duration in time-period syntax taken away.

Section 11.3.5, “Minus”.

overlaps(event or timestamp)

Returns true if the event starts before another event starts and finishes after the other event starts, but before the other event finishes (events have an overlapping period of time).

Section 11.4.10, “Overlaps”.

overlappedBy(event or timestamp)

Returns true if the parameter event starts before the input event starts and the parameter event finishes after the input event starts, but before the input event finishes (events have an overlapping period of time).

Section 11.4.11, “Overlapped By”.

plus(duration-millis)

Returns a date-time with the specified duration in long-type milliseconds added.

Section 11.3.6, “Plus”.

plus(time-period)

Returns a date-time with the specified duration in time-period syntax added.

Section 11.3.6, “Plus”.

roundCeiling(field)

Returns a date-time rounded to the highest whole unit of the date-time field.

Section 11.3.7, “RoundCeiling”.

roundFloor(field)

Returns a date-time rounded to the lowest whole unit of the date-time field.

Section 11.3.8, “RoundFloor”.

roundHalf(field)

Returns a date-time rounded to the nearest whole unit of the date-time field.

Section 11.3.9, “RoundHalf”.

set(field, value)

Returns a date-time with the specified field set to the value returned by a value expression.

Section 11.3.10, “Set (By Field)”.

starts(event or timestamp)

Returns true if an event and another event start at the same time and the event's end happens before the other event's end.

Section 11.4.12, “Starts”.

startedBy(event or timestamp)

Returns true if an event and another event start at the same time and the other event's end happens before the input event's end.

Section 11.4.13, “Started By”.

withDate(year,month,day)

Returns a date-time with the specified date, retaining the time fields.

Section 11.3.11, “WithDate”.

withMax(field)

Returns a date-time with the field set to the maximum value for the field.

Section 11.3.12, “WithMax”.

withMin(field)

Returns a date-time with the field set to the minimum value for the field.

Section 11.3.13, “WithMin”.

withTime(hour,minute,sec,msec)

Returns a date-time with the specified time, retaining the date fields.

Section 11.3.14, “WithTime”.

toCalendar()

Returns the Calendar object for this date-time value.

Section 11.3.15, “ToCalendar”.

toDate()

Returns the Date object for this date-time value.

Section 11.3.16, “ToDate”.

toMillisec()

Returns the long-type milliseconds value for this date-time value.

Section 11.3.17, “ToMillisec”.

11.2. How to Use

11.2.1. Syntax

The syntax for date-time methods is the same syntax as for any chained invocation:

input_val.datetime_method_name( [method_parameter [, method_parameter [,...]]])
	  .[ datetime_method_name(...) [...]] 

Following the input_val input value is the . (dot) operator and the datetime_method_name date-time method name, followed in parentheses by a comma-separated list of method parameter expressions. Additional date-time methods can be chained thereafter.

The input value can be any expression or event property that returns a value of type long or java.util.Calendar or java.util.Date. If the input value is null, the expression result is also null.

The input value can also be an event. In this case the event type of the event must have the start timestamp property name defined and optionally also the end timestamp property name.

The following example EPL statement employs the withTime date-time method. This example returns the current engine time with the time-part set to 1 am:

select current_timestamp.withTime(1, 0, 0, 0) as time1am from MyEvent

As date-time methods can be chained, this EPL is equivalent:

select current_timestamp.set('hour', 1).set('min', 0).set('sec', 0).set('msec', 0) as time1am
from MyEvent

The statement above outputs in field time1am a long-type millisecond value reflecting 1am on the same date as engine time. Since the input value is provided by the built-in current_timestamp function, which returns the current engine time as a long-type millisecond value, the output is also a long-type millisecond value.
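
As date-time methods can be chained, you may, as a sketch, append the toDate method (see Section 11.3.16, “ToDate”) to obtain a Date object instead of a long-type millisecond value (the column name time1amDate is illustrative):

select current_timestamp.withTime(1, 0, 0, 0).toDate() as time1amDate from MyEvent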

You may apply a date-time method to an event property.

Assume that the RFIDEvent event type has a Date-type property by name timeTaken. The following query rounds each time-taken value down to the nearest minute and outputs a Date-type value in column timeTakenRounded:

select timeTaken.roundFloor('min') as timeTakenRounded from RFIDEvent

You may apply a date-time method to events. This example assumes that the RFIDEvent and WifiEvent event types both have a timestamp property defined. The EPL compares the timestamps of the RFIDEvent and the WifiEvent:

select rfid.after(wifi) as isAfter 
from RFIDEvent.std:lastevent() rfid, WifiEvent.std:lastevent() wifi

For comparing date-time values and considering event duration (event start and end timestamps) we recommend any of the interval algebra methods. You may also compare millisecond values using the between or in keywords for ranges and inverted ranges, or using the relational operators (<, >, <=, >=).

From a performance perspective, the date-time method evaluation ensures that for each unique chain of date-time methods only a single calendar object is copied or created when necessary.

11.3. Calendar and Formatting Reference

11.3.1. Between

The between date-time method compares the input date-time value to the two date-time values passed in and returns true if the input value falls between the two parameter values.

The synopsis is:

input_val.between(range_start, range_end [, include_start, include_end])

The method takes either 2 or 4 parameters. The first two parameters range_start and range_end are expressions or properties that yield either a long-typed, Date-typed or Calendar-typed range start and end value.

The next two parameters include_start and include_end are optional. If not specified, the range start value and range end value are included in the range, i.e. the default is a closed range that includes both endpoints. If specified, the expressions must return a boolean value indicating whether to include the range start value and range end value in the range.

The example below outputs true when the time-taken property value of the RFID event falls between the time-start property value and the time-end property value (closed range includes endpoints):

select timeTaken.between(timeStart, timeEnd) from RFIDEvent

The example below performs the same test as above but does not include endpoints (open range includes neither endpoint):

select timeTaken.between(timeStart, timeEnd, false, false) from RFIDEvent

If the range end value is less than the range start value, the algorithm reverses the range start and end values.

If the input date-time value or any of the parameter values evaluate to null the method returns a null result value.

11.3.2. Format

The format date-time method formats the date-time returning a string.

The method takes no parameters. It returns the date-time value formatted using the default locale format obtained from new SimpleDateFormat().

The example below outputs the time-taken property value of the RFID event:

select timeTaken.format() as timeTakenStr from RFIDEvent

11.3.3. Get (By Field)

The get date-time method returns the value of the given date-time value field.

The method takes a single string-constant field name as parameter. Please see Section 5.2.1, “Specifying Time Periods” for a list of recognized keywords (not case-sensitive).

The method returns the numeric value of the field within the date-time value. The value returned adheres to Calendar-class semantics: For example, the value for month starts at zero and has a maximum of 11.

The example below outputs the month value of the time-taken property value of the RFID event:

select timeTaken.get('month') as timeTakenMonth from RFIDEvent

11.3.4. Get (By Name)

The following list of getter-methods are available: getMillisOfSecond(), getSecondOfMinute(), getMinuteOfHour(), getHourOfDay(), getDayOfWeek(), getDayOfMonth(), getDayOfYear(), getWeekYear(), getMonthOfYear(), getYear() and getEra().

All get-methods take no parameter and return the numeric value of the field within the date-time value. The value returned adheres to Calendar-class semantics: For example, the value for month starts at zero and has a maximum of 11.

The example below outputs the month value of the time-taken property value of the RFID event:

select timeTaken.getMonthOfYear() as timeTakenMonth from RFIDEvent

11.3.5. Minus

The minus date-time method returns a date-time with the specified duration taken away.

The method has two versions: The first version takes the duration as a long-type millisecond value. The second version takes the duration as a time-period expression, see Section 5.2.1, “Specifying Time Periods”.

The example below demonstrates the time-period parameter to subtract two minutes from the time-taken property value of the RFID event:

select timeTaken.minus(2 minutes) as timeTakenMinus2Min from RFIDEvent

The next example is equivalent but passes a millisecond-value instead:

select timeTaken.minus(2*60*1000) as timeTakenMinus2Min from RFIDEvent

11.3.6. Plus

The plus date-time method returns a date-time with the specified duration added.

The method has two versions: The first version takes the duration as a long-type millisecond value. The second version takes the duration as a time-period expression, see Section 5.2.1, “Specifying Time Periods”.

The next example adds two minutes to the time-taken property value of the RFID event:

select timeTaken.plus(2 minutes) as timeTakenPlus2Min from RFIDEvent

The next example is equivalent but passes a millisecond-value instead:

select timeTaken.plus(2*60*1000) as timeTakenPlus2Min from RFIDEvent

11.3.7. RoundCeiling

The roundCeiling date-time method rounds to the highest whole unit of the date-time field.

The method takes a single string-constant field name as parameter. Please see Section 5.2.1, “Specifying Time Periods” for a list of recognized keywords (not case-sensitive).

The next example rounds-to-ceiling the minutes of the time-taken property value of the RFID event:

select timeTaken.roundCeiling('min') as timeTakenRounded from RFIDEvent

If the input time is 2002-05-30 09:01:23.050, for example, the output is 2002-05-30 09:02:00.000 (example timestamps are in format yyyy-MM-dd HH:mm:ss.SSS).

11.3.8. RoundFloor

The roundFloor date-time method rounds to the lowest whole unit of the date-time field.

The method takes a single string-constant field name as parameter. Please see Section 5.2.1, “Specifying Time Periods” for a list of recognized keywords (not case-sensitive).

The next example rounds-to-floor the minutes of the time-taken property value of the RFID event:

select timeTaken.roundFloor('min') as timeTakenRounded from RFIDEvent

If the input time is 2002-05-30 09:01:23.050, for example, the output is 2002-05-30 09:01:00.000 (example timestamps are in format yyyy-MM-dd HH:mm:ss.SSS).

11.3.9. RoundHalf

The roundHalf date-time method rounds to the nearest whole unit of the date-time field.

The method takes a single string-constant field name as parameter. Please see Section 5.2.1, “Specifying Time Periods” for a list of recognized keywords (not case-sensitive).

The next example rounds the minutes of the time-taken property value of the RFID event:

select timeTaken.roundHalf('min') as timeTakenRounded from RFIDEvent

The following table provides a few examples of the rounding (example timestamps are in format yyyy-MM-dd HH:mm:ss.SSS):

Table 11.2. RoundHalf Examples

Input                      Output
2002-05-30 09:01:23.050    2002-05-30 09:01:00.000
2002-05-30 09:01:29.999    2002-05-30 09:01:00.000
2002-05-30 09:01:30.000    2002-05-30 09:02:00.000

11.3.10. Set (By Field)

The set date-time method returns a date-time with the specified field set to the value returned by an expression.

The method takes a string-constant field name and an expression returning an integer-value as parameters. Please see Section 5.2.1, “Specifying Time Periods” for a list of recognized keywords (not case-sensitive).

The method returns the new date-time value with the field set to the provided value. Note that value adheres to Calendar-class semantics: For example, the value for month starts at zero and has a maximum of 11.

The example below outputs the time-taken with the value for month set to April:

select timeTaken.set('month', 3) as timeTakenMonth from RFIDEvent

11.3.11. WithDate

The withDate date-time method returns a date-time with the specified date, retaining the time fields.

The method takes three expressions as parameters: An expression for year, month and day.

The method returns the new date-time value with the date fields set to the provided values. For expressions returning null the method ignores the field for which null is returned. Note the Calendar-class semantics: For example, the value for month starts at zero and has a maximum of 11.

The example below outputs the time-taken with the date set to May 30, 2002:

select timeTaken.withDate(2002, 4, 30) as timeTakenDated from RFIDEvent
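
As noted above, the method ignores a field for which the corresponding expression returns null. The following sketch passes a null constant for the month, leaving the month of the time-taken value unchanged (the alias timeTakenPartialDate is illustrative):

select timeTaken.withDate(2002, null, 30) as timeTakenPartialDate from RFIDEvent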

11.3.12. WithMax

The withMax date-time method returns a date-time with the field set to the maximum value for the field.

The method takes a string-constant field name as parameter. Please see Section 5.2.1, “Specifying Time Periods” for a list of recognized keywords (not case-sensitive).

The method returns the new date-time value with the specific date field set to the maximum value.

The example below outputs the time-taken property value with the second-part as 59 seconds:

select timeTaken.withMax('sec') as timeTakenMaxSec from RFIDEvent

11.3.13. WithMin

The withMin date-time method returns a date-time with the field set to the minimum value for the field.

The method takes a string-constant field name as parameter. Please see Section 5.2.1, “Specifying Time Periods” for a list of recognized keywords (not case-sensitive).

The method returns the new date-time value with the specific date field set to the minimum value.

The example below outputs the time-taken property value with the second-part as 0 seconds:

select timeTaken.withMin('sec') as timeTakenMinSec from RFIDEvent

11.3.14. WithTime

The withTime date-time method returns a date-time with the specified time, retaining the date fields.

The method takes four expressions as parameters: An expression for hour, minute, second and millisecond.

The method returns the new date-time value with the time fields set to the provided values. For expressions returning null the method ignores the field for which null is returned.

The example below outputs the time-taken with the time set to 9am:

select timeTaken.withTime(9, 0, 0, 0) as timeTakenTimed from RFIDEvent

11.3.15. ToCalendar

The toCalendar date-time method returns the Calendar object for this date-time value.

The method takes no parameters.

The example below outputs the time-taken as a Calendar object:

select timeTaken.toCalendar() as timeTakenCal from RFIDEvent

11.3.16. ToDate

The toDate date-time method returns the Date object for this date-time value.

The method takes no parameters.

The example below outputs the time-taken as a Date object:

select timeTaken.toDate() as timeTakenDate from RFIDEvent

11.3.17. ToMillisec

The toMillisec date-time method returns the long-typed millisecond value for this date-time value.

The method takes no parameters.

The example below outputs the time-taken as a long-typed millisecond value:

select timeTaken.toMillisec() as timeTakenLong from RFIDEvent

11.4. Interval Algebra Reference

Interval algebra methods compare start and end timestamps of events or timestamps in general.

When the expression input is only a timestamp value, such as a long-type value or a Date or Calendar object, the start and end timestamp represented by that value are the same timestamp value.

When expression input is an event stream alias, the engine determines the event type for the stream. If the event type declares a start timestamp property name, the engine uses that start timestamp property to determine the start timestamp for the event. If the event type also declares an end timestamp property name, the engine uses that end timestamp property to determine the end timestamp for the event (i.e. an event with duration). If an end timestamp property name is not declared, the start and end timestamp for each event are the same value and the event is considered to have zero duration (i.e. a point-in-time event).

Interval algebra methods all return a Boolean-type value. When the input value's start timestamp is null, or its end timestamp (if declared for the event type) is null, or the first parameter's start timestamp or end timestamp (if declared for the event type) is null, the result value is null.

11.4.1. Examples

The examples in this section simply use A and B as event type names. The alias a represents A-type events and the alias b represents B-type events.

The create-schema for types A and B is shown next. The two types are declared the same. The example declares the property providing start timestamp values as startts and the property providing end timestamp values as endts:

create schema A as (startts long, endts long) starttimestamp 'startts' endtimestamp 'endts'
create schema B as (startts long, endts long) starttimestamp 'startts' endtimestamp 'endts'

The sample EPL below joins the last A and the last B event. It detects A-B event combinations for which, comparing timestamps, the last A event occurs before the last B event. The example employs the before method:

select * from A.std:lastevent() as a, B.std:lastevent() as b where a.before(b)

For simplicity, the examples in this section refer to A and the alias a as the input event. The examples refer to B and the alias b as the parameter event.

11.4.2. Interval Algebra Parameters

The first parameter of each interval algebra method is the event or timestamp to compare to.

All remaining parameters to interval algebra methods are intervals and can be any of the following:

  1. A constant, an event property or more generally any expression returning a numeric value that is the number of seconds. For example, in the expression a.before(b, 2) the parameter 2 is interpreted to mean 2 seconds. The expression a.before(b, myIntervalProperty) is interpreted to mean myIntervalProperty seconds.

  2. A time period expression as described in Section 5.2.1, “Specifying Time Periods”. For example: a.before(b, 1 hour 2 minutes). A complete statement example using this form follows this list.
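
The following statement sketch combines the join from Section 11.4.1, “Examples” with an interval parameter supplied in time-period syntax (the exact distance semantics are described in the section for each method):

select * from A.std:lastevent() as a, B.std:lastevent() as b where a.before(b, 1 hour 2 minutes)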

When an interval parameter is provided and is null, the method result value is null.