Adventures with EventBridge

What is Amazon EventBridge?


AWS describes EventBridge as:

… a serverless event bus that makes it easy to connect applications together using data from your own applications, integrated Software-as-a-Service (SaaS) applications, and AWS services. 

EventBridge delivers a stream of real-time data from event sources, such as Zendesk, Datadog, or Pagerduty, and routes that data to targets like AWS Lambda.

In this post, we’ll dive into the feature set of the product and look at some appropriate and downright dangerous use cases.

What we’re seeing is a natural extension of what began as CloudWatch Events, which many used primarily for their own internal events, into a more generalised offering geared towards supporting tools outside the AWS ecosystem. As more and more services move towards event-driven architectures, this becomes an attractive option.



EventBridge exposes two main methods of ingesting events into one or more event buses. The first is designed to support third-party tools such as Datadog and Zendesk via ingestion directly into an event bus, through a no-code style setup with the integration provider. This is similar in configuration to webhooks, but far less tedious: you don’t need to deal with HTTP requests on the other side, you just end up with the JSON payload in an event bus. For supported vendors this is a five-minute integration process, and critically AWS have avoided the need to mess around with anything like cross-account roles to get this set up.

Although there’s a sizeable number of integrations out of the box already, there’s also the opportunity (and likely the necessity) to spin up API Gateway + Lambda to send data into an event bus – though I wouldn’t be surprised if, in the near future, API Gateway can push to EventBridge directly, or a Lambda blueprint makes this a single click. There are more SaaS tools out there than observable stars in the universe, so user adoption is going to be contingent on how easy it is to integrate these unsupported tools.
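
As a sketch of what that plumbing might look like, a Lambda handler behind an API Gateway proxy integration can forward the incoming payload to a bus with `PutEvents`. The bus name, `Source`, and `DetailType` values below are illustrative assumptions, not anything EventBridge mandates:

```python
import json


def make_entry(payload: dict, bus_name: str = "my-saas-bus") -> dict:
    """Build a single PutEvents entry for EventBridge.

    `Detail` must be a JSON string; `Source` and `DetailType` are
    free-form strings that rules can later pattern-match against.
    The bus name and source here are illustrative placeholders.
    """
    return {
        "EventBusName": bus_name,
        "Source": "custom.webhook",   # hypothetical source name
        "DetailType": "saas.event",   # hypothetical detail-type
        "Detail": json.dumps(payload),
    }


def handler(event, context):
    """API Gateway (proxy integration) -> EventBridge forwarder."""
    import boto3  # available in the Lambda runtime

    entry = make_entry(json.loads(event["body"]))
    boto3.client("events").put_events(Entries=[entry])
    return {"statusCode": 202}
```

Keeping the entry construction separate from the `boto3` call makes the interesting part (the event envelope) easy to unit test without AWS credentials.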

There’s very much a sense that this service has evolved out of a significant number of AWS customers stringing together various services (Lambda, Kinesis, etc.) and converging on similar designs to support event-driven infrastructure. For someone looking to dip their toes into what that might look like, without having to navigate the intricacies of AWS services and the interconnects between them, this is a no-brainer.



Rules are a reasonably simple concept: they determine how data in the event bus is routed to one or more targets. This evaluation doesn’t cost you anything, and the default quota is 100 rules per bus, so there’s quite a bit of flexibility here.

Currently there are two categories of rules available:

Scheduled – lets you specify a fixed rate or a cron-like expression, but only works on the AWS default bus (not partner or custom event buses).

Event pattern – lets you supply a JSON object, written in a pattern-matching DSL, that incoming events are matched against.

For the AWS default bus and partner buses there’s a handy dropdown user interface for building these patterns, but for custom event buses there’s only a text box, so you’ll likely be writing patterns by hand. Some of the criteria you can use to match are:

  • String match (equality, prefix, not)
  • In or not in array of values
  • Numeric matching with standard comparison operators
  • IP address matching using CIDR notation
  • Presence or absence of a JSON field
  • Support for using multiple expressions (AND-like)
  • Null / empty string matching
  • Wildcard matching (by omitting a field entirely)
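
To make those semantics concrete, here’s a minimal, hand-rolled approximation of how a few of the operators behave. This is an illustration only, not AWS’s implementation – the real service also supports anything-but, CIDR, and multi-clause numeric ranges, which are omitted here:

```python
def field_matches(value, criteria) -> bool:
    """Approximate EventBridge matching for a single field.

    `criteria` is the list from the pattern: bare values mean
    equality (an array acts as an OR over its members), and dicts
    carry operators such as {"prefix": ...} or {"numeric": [">", 0]}.
    """
    import operator
    ops = {">": operator.gt, ">=": operator.ge,
           "<": operator.lt, "<=": operator.le, "=": operator.eq}
    for c in criteria:
        if not isinstance(c, dict):
            if value == c:                      # equality / in-array / null
                return True
        elif "prefix" in c and isinstance(value, str):
            if value.startswith(c["prefix"]):
                return True
        elif "numeric" in c:                    # simplified two-element form
            op, bound = c["numeric"]
            if isinstance(value, (int, float)) and ops[op](value, bound):
                return True
        elif c.get("exists") is True:           # field is present, so it matches
            return True
    return False


def pattern_matches(pattern: dict, event: dict) -> bool:
    """Every field named in the pattern must match (AND across fields)."""
    for key, criteria in pattern.items():
        if isinstance(criteria, dict):          # nested object: recurse
            if not isinstance(event.get(key), dict) \
                    or not pattern_matches(criteria, event[key]):
                return False
        elif key not in event:
            # {"exists": false} is the one way to match a missing field
            if criteria != [{"exists": False}]:
                return False
        elif not field_matches(event[key], criteria):
            return False
    return True
```

Note the asymmetry the list above hints at: criteria within one field are OR-ed, while separate fields are AND-ed together.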

Fortunately, AWS provide a TestEventPattern API for exercising your expressions, which makes these patterns significantly easier to test and debug.



To AWS’s credit, they launched with, and continue to add, a large number of AWS services to which data can be seamlessly proxied. This is a critical design decision: nobody particularly wants to spend additional time connecting one thing to another if it can be automated and scaled for them.

These services include the things you would expect, like Lambda (if the data requires transformation or processing) and Firehose (for sinking data to S3, for example, for durable storage), as well as the more typical queue or “queue-like” services: Kinesis, SQS, and SNS.

It’s clear that this has been designed as a proxy service rather than one that maintains its own durable state, so if your intention is to have consumers read from a more persistent source of data, you’ll be looking at something like Kinesis.

That’s both a blessing and a curse in some ways: the plethora of options makes EventBridge highly adaptable to the needs of the consumer (do I want the push semantics of SNS or the pull semantics of Kinesis?) but may make it harder to select one option over another.

Of course, it’s possible to send data directly to Lambda for serverless processing, but having a queuing model in front of it seems like a sensible option, particularly if throughput is variable and you have throttling or concurrent-execution limits on your Lambda function.



The pricing largely matches that of CloudWatch Events: a competitive $1 USD per million events.

This is pretty compelling pricing, at least at smaller event volumes, when contrasted with comparable alternatives such as Kinesis.

At moderately larger volumes – say 100 million events per day across a month – you are looking at close to $3,000 a month, so it’s clear that the pricing is targeted initially at lower-volume consumption. The in-built SaaS integrations (such as PagerDuty and Zendesk), which are unlikely to be emitting large numbers of events, reflect that.
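
That back-of-envelope figure is easy to verify: at a flat per-million rate, volume dominates the bill. A quick sketch (the rate is the published $1/million at the time of writing; check current pricing before relying on it):

```python
PRICE_PER_MILLION_USD = 1.00  # published custom/partner event rate


def monthly_cost(events_per_day: int, days: int = 30) -> float:
    """Approximate monthly EventBridge ingestion cost in USD."""
    return events_per_day * days / 1_000_000 * PRICE_PER_MILLION_USD


# 100 million events/day over a 30-day month works out to $3,000
```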


What is it not designed for?

– Particularly high-throughput applications, such as larger-volume behavioural analytics, at least if you are cost sensitive

– Fanning many consumers out from a single rule (the current quota is 5 targets per rule, with a maximum of 100 rules per bus, so one rule per consumer seems a more appropriate pattern).

– Exactly-once semantics. Like some other AWS services, EventBridge provides at-least-once semantics with respect to your targets. You’ll want to ensure that operations are idempotent, or perform some deduplication after your event has been sent to the target.

– A silver bullet for collecting data from all SaaS services. Although there’s already a number of pre-built connectors to common SaaS tools, the list is by no means comprehensive. If you expect to collect data from currently unsupported tools, expect to do a little plumbing around API Gateway and Lambda, or possibly an Application Load Balancer (ALB).

– Durable storage. EventBridge will keep trying to send to your target for a period of 24 hours, but if it can’t get a successful response back there’s no dead-letter queue. Your event is lost, like tears in the rain (so persist your events somewhere else!)
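
On the at-least-once point above, one common guard is consumer-side deduplication keyed on the event’s unique `id` field. A minimal in-memory sketch – in production you’d use a persistent store such as DynamoDB with a conditional write rather than process memory:

```python
_seen_ids = set()  # in production: DynamoDB / Redis keyed on event id, with a TTL


def process_once(event: dict, handler) -> bool:
    """Invoke `handler` only the first time an event id is seen.

    Returns True if the handler ran, False for a duplicate delivery.
    EventBridge events carry a unique `id` field we can key on.
    """
    event_id = event["id"]
    if event_id in _seen_ids:
        return False
    _seen_ids.add(event_id)
    handler(event)
    return True
```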


What is it designed for?

– Simple proxying of events, with filters, to a wide array of other AWS services

– Automated, easier schema discovery via the Schema Registry (currently in preview)

– Hands-off, “configuration-free” scaling of an event bus (there’s no need to switch on autoscaling or manage shards!)

– Receiving events from supported third-party SaaS vendors without significant configuration


Integration with Schema Registry

In the next post we’ll dive into more detail on the Schema Registry, currently in preview, which is designed to work in conjunction with EventBridge but doesn’t require it. It’s an interesting piece of tech: part schema registry, part schema discovery.


Published by Mike Robins

CTO at Poplin Data
