
Better, faster, stronger – Latest journey planner updates

We've moved to AWS Batch to run computing jobs for Passenger's journey planner functionality, giving us a more resilient, scalable and less resource-intensive system – even during increased demand.

14th Nov 2019

Passenger’s journey planner functionality is one of the most frequently used features of the Passenger ecosystem.

Using the journey planner, customers select the desired destination by entering an address if known, or by simply dropping a pin on a map. Passenger then calculates the fastest and most convenient way to arrive at that destination using an operator’s services. The customer then follows the step-by-step instructions provided by the app or website to get where they need to be.

We’ve had plenty of feedback from customers expressing their satisfaction with the journey planner, but that doesn’t mean it’s time to rest on our laurels. In fact, we’ve recently made some changes to improve this process on the server-side using Amazon Web Services (AWS) Batch, thus making Passenger and its journey planner more resilient as we onboard new operators with large operating areas.

Making the journey planner scalable

Passenger uses OpenStreetMap (OSM) data as one of the inputs to its search and journey planning features.

To supply accurate, up-to-date journey planning information to customers every day, our servers read the entire United Kingdom OSM dataset (over 1GB) each morning. They slice this data by operator boundary and store the results, using them to update search and build new datasets. Each server needs substantial resources (many gigabytes of RAM and multiple CPUs) to handle the countless journey requests made every day, not to mention the daily OSM import and operators importing new datasets.
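Passenger hasn't published its import code, but the "slice by operator boundary" step can be pictured as a simple bounding-box filter. The element structure and place names below are illustrative assumptions, not the real pipeline:

```python
# Illustrative sketch only: slicing parsed OSM elements by an operator's
# bounding box. Elements are assumed to be dicts with "lat"/"lon" keys
# (a hypothetical structure, not Passenger's actual data model).

def slice_by_operator_boundary(elements, bbox):
    """Keep only elements inside an operator's bounding box.

    bbox is (min_lat, min_lon, max_lat, max_lon).
    """
    min_lat, min_lon, max_lat, max_lon = bbox
    return [
        e for e in elements
        if min_lat <= e["lat"] <= max_lat and min_lon <= e["lon"] <= max_lon
    ]

# Example: a bounding box roughly covering Bournemouth
uk_elements = [
    {"id": 1, "lat": 50.72, "lon": -1.88},   # inside the box
    {"id": 2, "lat": 55.95, "lon": -3.19},   # Edinburgh: outside
]
bournemouth_bbox = (50.70, -1.95, 50.80, -1.75)
local_elements = slice_by_operator_boundary(uk_elements, bournemouth_bbox)
print(local_elements)  # only the element inside the box remains
```

In practice operator boundaries are polygons rather than rectangles, but the principle – discard everything outside the operating area before building datasets – is the same.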

During the build of Bus Stop Checker, our NaPTAN improvement project, we also downloaded and processed the same OSM data. We decided to use that project as an opportunity to try out a new approach to our daily journey planning tasks: AWS Batch. The process produced such solid results that we chose to implement it for journey planning too.

Using AWS Batch for journey planning

AWS Batch is a service that lets users run hundreds of thousands of batch computing jobs (the automated running of multiple programs) on AWS's cloud infrastructure.

The new process for Passenger journey planning uses Amazon CloudWatch, AWS's application and infrastructure monitoring service, to create a cron task (a time-based job schedule) that triggers every morning.
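A scheduled rule of this kind can be declared in CloudFormation. The sketch below is a hypothetical configuration – the resource names, schedule time and ARNs are placeholders, not Passenger's actual setup:

```yaml
# Hypothetical CloudFormation sketch of a CloudWatch Events rule that
# starts a Batch job every morning. All names here are placeholders.
DailyOsmImportRule:
  Type: AWS::Events::Rule
  Properties:
    # cron(minute hour day-of-month month day-of-week year), in UTC
    ScheduleExpression: cron(0 4 * * ? *)   # e.g. every morning at 04:00
    State: ENABLED
    Targets:
      - Id: osm-import-batch-job
        Arn: !Ref OsmImportJobQueueArn       # ARN of the Batch job queue
        RoleArn: !GetAtt EventsInvokeBatchRole.Arn
        BatchParameters:
          JobDefinition: !Ref OsmImportJobDefinition
          JobName: daily-osm-import
```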

This task starts an AWS Batch job, which spins up an EC2 (Amazon Elastic Compute Cloud) instance with less RAM and CPU than the process previously required. The EC2 instance pulls in a purpose-built, Alpine-based Docker container, which performs the daily OSM import.
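An Alpine-based import container might look something like the following. This is a hypothetical sketch – Passenger's actual image, tooling and scripts are not public, and `import.sh` stands in for whatever the real import entrypoint is:

```dockerfile
# Hypothetical sketch of an Alpine-based OSM import container.
FROM alpine:3.10

# osmium-tool (assumed here as the OSM toolchain) can extract an
# operator's region from a UK-wide OSM file
RUN apk add --no-cache osmium-tool

# import.sh is a placeholder for the daily import script
COPY import.sh /usr/local/bin/import.sh
RUN chmod +x /usr/local/bin/import.sh

ENTRYPOINT ["/usr/local/bin/import.sh"]
```

Alpine keeps the image small, which matters when a fresh EC2 instance has to pull the container at the start of every job.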

Whenever an operator imports a new dataset, our servers trigger another AWS Batch build (using another purpose-built Alpine container) which compiles the routing graph based on that dataset using the latest OSM data created that morning.
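Triggering that build amounts to calling AWS Batch's SubmitJob API (for example via boto3's `batch.submit_job(**params)`). The sketch below only assembles the request parameters; the queue and job definition names are placeholders, not Passenger's real ones:

```python
# Sketch of the parameters a service might pass to AWS Batch's SubmitJob
# API when an operator imports a new dataset. Queue and job definition
# names are placeholders.

def routing_graph_job_params(operator, dataset_id):
    """Build SubmitJob parameters for compiling an operator's routing graph."""
    return {
        "jobName": f"routing-graph-{operator}-{dataset_id}",
        "jobQueue": "journey-planner-queue",          # placeholder queue
        "jobDefinition": "routing-graph-builder:1",   # placeholder definition
        "containerOverrides": {
            # Tell the container which dataset to compile against
            "environment": [
                {"name": "OPERATOR", "value": operator},
                {"name": "DATASET_ID", "value": dataset_id},
            ]
        },
    }

params = routing_graph_job_params("example-bus", "2019-11-14")
print(params["jobName"])  # routing-graph-example-bus-2019-11-14
```

Because each build runs as its own Batch job, a large dataset import from one operator can't starve the servers handling live journey requests.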

What does this mean? Quite simply, it means much-reduced complexity for our journey planning servers. Thanks to the speed of the process, we can continue to provide the same journey planning service via a more resilient, scalable and less resource-intensive (and therefore cheaper) system.

Our work in this area is largely preemptive. As we onboard more operators, and in particular Enterprise customers accessing this individual service for their own apps, we’re expecting more users making more journey planner requests.

Thankfully, with the scalability of AWS Batch, we’re more than primed to take on the extra demand.

Want to learn more about the work we’re doing to improve Passenger? Get in touch with the team or sign up to our newsletter.
