Welcome to Front Engineering Stories! At Front, we value transparency — in our company and our product. In this series, our engineering team will share insights into our engineering philosophy, our unique challenges, and visions for the future of our product.
In this first story, Front Engineering Manager Sean Xie explains the journey of how an email arrives in your Front inbox.
Okay, I'll be honest: it takes a little longer than a blink — but we've worked hard to make sure receiving emails in Front feels instantaneous.
Our p99 latency ranges from 500 milliseconds to 5 seconds. Here’s a peek into the journey of an email, from the moment it’s sent to the moment I receive it in my personal work inbox in Front.
It's a Tuesday morning in San Francisco, and I’m enjoying my morning coffee. Someone has sent an email to my work address. I don’t know this yet, but Front’s webhooks infrastructure has already received a webhook notification from Google, telling Front that there’s a new email in my Gmail inbox.
Our webhooks infrastructure was designed to be highly available. We do very little work when we receive these notifications — our infrastructure is in AWS, so we simply dump the payload in S3, then queue an event on SQS for a downstream service to process the notification. In the event of an S3 or SQS failure in our primary region, we have logic to fall back to a secondary region.
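The ingest path above can be sketched roughly as follows. This is an illustrative TypeScript sketch, not Front's actual code: the `BlobStore` and `Queue` interfaces stand in for the AWS S3 and SQS clients, and the cross-region fallback is reduced to a simple retry loop.

```typescript
// Illustrative sketch: persist the raw payload, then enqueue a pointer to it.
// BlobStore / Queue stand in for S3 and SQS; fallback is a retry in a second region.
interface BlobStore { put(key: string, body: string): Promise<void>; }
interface Queue { send(message: string): Promise<void>; }
interface Region { name: string; store: BlobStore; queue: Queue; }

async function ingestWebhook(
  payload: string,
  primary: Region,
  secondary: Region,
): Promise<string> {
  const key = `webhooks/${Date.now()}-${Math.random().toString(36).slice(2)}`;
  for (const region of [primary, secondary]) {
    try {
      await region.store.put(key, payload);             // dump the raw payload (S3)
      await region.queue.send(JSON.stringify({ key })); // enqueue a pointer to it (SQS)
      return region.name;                               // report which region took the write
    } catch {
      // primary region failed: fall through to the secondary
    }
  }
  throw new Error("all regions failed");
}
```

Doing so little work at ingest time keeps the webhook endpoint fast and makes a regional failover cheap: all the heavy lifting happens downstream, off the queue.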
In under 100ms, our downstream service picks up the event from SQS. We call this service “gmail-incomings,” as it processes all Gmail-related inbound events. We have a different service processing each type of inbox (SMS, Twitter, and Front chat, for instance) so we can monitor and tune each one separately based on its workload. This also provides some isolation when issues affect a specific type of inbox.
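As a tiny illustration of that per-channel split, each channel type can map to its own queue and worker pool. Only “gmail-incomings” is a real name from this post; the other queue names below are made up for the example.

```typescript
// Illustrative sketch: one queue per channel type, so each service can be
// monitored, scaled, and isolated independently of the others.
type ChannelType = "gmail" | "sms" | "twitter" | "front_chat";

// Hypothetical queue names (only "gmail-incomings" is mentioned in the post).
const queueFor: Record<ChannelType, string> = {
  gmail: "gmail-incomings",
  sms: "sms-incomings",
  twitter: "twitter-incomings",
  front_chat: "chat-incomings",
};

function routeEvent(channel: ChannelType): string {
  return queueFor[channel];
}
```

A backlog or a bug in, say, the Twitter pipeline then cannot stall Gmail processing, since the two never share a queue.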
You may wonder how I am piecing together this story for you. We log every action we take in our ELK cluster, which we rely on heavily for debugging.
It’s worth mentioning our approach to distributed system tracing. As you can guess from my description of the first stages of inbound email processing, we have an event-driven architecture with multiple services each fulfilling a core responsibility. Each worker in a service, after processing its task, can defer work to one or more other services via our primary bus, SQS.
It’s crucial that we have an easy way to trace how tasks get propagated through our system so that we can isolate bugs or identify bottlenecks. We shipped a feature 2 years ago to add a task ID to every piece of work performed by a worker, and propagate that to any downstream tasks, so we could rebuild a graph of tasks from a source event. It took all of 1 day of engineering effort and has served us extremely well to date.
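A minimal sketch of that task-ID propagation, under the assumption (mine, not Front's) that tasks are plain records carrying their own ID plus the ID of the task that spawned them:

```typescript
// Illustrative sketch: every unit of work carries its own ID and its parent's
// ID, so a flat log of tasks can be stitched back into a tree rooted at the
// original source event.
interface Task { id: string; parentId: string | null; name: string; }

let counter = 0;
const newId = (): string => `task-${++counter}`;

// The task created from the original source event has no parent.
function rootTask(name: string): Task {
  return { id: newId(), parentId: null, name };
}

// When a worker defers work to a downstream service, the child task
// records the parent's ID before going onto the bus.
function defer(parent: Task, name: string): Task {
  return { id: newId(), parentId: parent.id, name };
}

// Rebuild one level of the task graph from a flat log.
function childrenOf(log: Task[], parent: Task): Task[] {
  return log.filter(t => t.parentId === parent.id);
}
```

With every log line tagged this way, finding everything a single webhook triggered is one query over the task graph rather than a manual hunt through timestamps.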
A worker from another service picks up where the last one left off and retrieves the email contents from the Gmail API. The main job of this service is to parse the RFC822 email (the standard internet message format) into Front’s standard format.
It also threads the email into the correct email conversation in Front and creates all the related resources in Front’s primary data stores. This includes information about which inboxes your email belongs to, defining access rights to the email, and associating recipients of the email to the corresponding contacts in Front. Each email that is processed may involve hundreds of writes to our data stores.
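Front hasn't published its threading logic, but a common approach is to thread on the standard Message-ID and In-Reply-To headers from the parsed email. A simplified sketch of that idea:

```typescript
// Illustrative sketch: group messages into conversations using the standard
// Message-ID / In-Reply-To headers (RFC 5322). Real threading is more
// involved (References chains, subject matching); this shows the core idea.
interface ParsedEmail { messageId: string; inReplyTo?: string; subject: string; }

function threadEmails(emails: ParsedEmail[]): Map<string, ParsedEmail[]> {
  const conversationOf = new Map<string, string>(); // messageId -> conversation root
  const threads = new Map<string, ParsedEmail[]>();
  for (const email of emails) {
    // A reply joins its parent's conversation; anything else starts a new one.
    const parent = email.inReplyTo ? conversationOf.get(email.inReplyTo) : undefined;
    const root = parent ?? email.messageId;
    conversationOf.set(email.messageId, root);
    const thread = threads.get(root) ?? [];
    thread.push(email);
    threads.set(root, thread);
  }
  return threads;
}
```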
At this point, Front’s databases have everything they need to serve my email. Not so fast, though! What else stands between me and my email?
Front has a sophisticated rules engine that helps you be more efficient at managing email. You can automate any action you can think of, like tagging an email, moving it to another inbox, assigning it to another person, or triggering an event in another app, like Jira or Github. Many of our customers have thousands of rules, and our rule engine needs to apply rules in a sequential manner so that they can chain in a predictable way.
One approach would have been to apply these rules after serving the email to a user’s inbox, but that could lead to a poor user experience, for example, when rules move conversations between inboxes. It would also add complexity to managing the conversation’s state, i.e. “open”, “archived”, or “assigned”.
Instead, our rules engine takes a first look at every incoming message to ensure rules execute efficiently and predictably. I cannot confirm or deny that I have a rule to tag an email with URGENT if it comes from my head of engineering, Shane. It turns out this email did not come from Shane, so the rules engine leaves it alone.
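A sequential, chainable rules engine like the one described can be sketched as a simple fold over the rule list; the shapes and the example rule here are hypothetical, not Front's actual model.

```typescript
// Illustrative sketch: rules run strictly in order, and each rule sees the
// conversation state left by the previous one, so chains behave predictably.
interface Conversation { inbox: string; tags: string[]; from: string; }

interface Rule {
  matches(c: Conversation): boolean;
  apply(c: Conversation): Conversation;
}

function runRules(conversation: Conversation, rules: Rule[]): Conversation {
  return rules.reduce(
    (state, rule) => (rule.matches(state) ? rule.apply(state) : state),
    conversation,
  );
}
```

Because each rule receives the state produced by the previous one, reordering rules changes the outcome; that is exactly why execution must be sequential rather than parallel.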
Now, Front is ready to tell me that I’ve got mail! Our real-time infrastructure sends a payload to subscribed web and mobile clients via open WebSocket connections to inform clients that a new email has arrived. Clients take care of parsing the payload and rendering the corresponding email correctly, with additional API requests to our servers if necessary.
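The fan-out step might look roughly like this; `Client.send` stands in for a write to an open WebSocket connection, and the payload shape is invented for illustration.

```typescript
// Illustrative sketch: push a small payload to every connected client
// subscribed to the affected inbox. Clients fetch full details via the
// API if the payload alone isn't enough to render the email.
interface Client {
  inboxIds: Set<string>;           // inboxes this client is subscribed to
  send(payload: string): void;     // stands in for a WebSocket write
}

function notifyNewEmail(clients: Client[], inboxId: string, emailId: string): number {
  const payload = JSON.stringify({ type: "new_email", inboxId, emailId });
  let delivered = 0;
  for (const client of clients) {
    if (client.inboxIds.has(inboxId)) {
      client.send(payload);
      delivered++;
    }
  }
  return delivered;
}
```

Keeping the pushed payload small and letting clients hydrate the rest on demand keeps the real-time path cheap even when an inbox has many subscribers.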
At this step, we also simultaneously index the new email in Elasticsearch, which we use to power search within Front. Our search infrastructure is an entire discussion of its own, but to give you a sense of numbers, we index hundreds of documents a second, and a full re-indexing of all documents in Front takes months!
A few minutes later (hey, I was enjoying my coffee!), I notice the email, which came from Leo, one of our backend engineers. She sent an update about our Typescript conversion efforts. We’ve converted close to 8% of our server-side codebase!
I remember that a few days ago, Charlie, another backend engineer, mentioned that he would be converting our billing codebase to Typescript. I use the superpowers of Front comments to connect the dots.
I love working with this team. We collaborate on anything and everything — and we get to use the very product we’re building to do it.
After leaving my comment, I archive the email. It disappears from my view, and I’m at inbox zero again.
Super cool that you took the time to read all of this. I hope it gives you a flavor of some of the very many engineering challenges we solve every day here at Front. Our small backend team of 5 engineers builds and maintains the systems I just described, and that’s barely scratching the surface of their scope of responsibilities.
Uh...yes, we’d love your help! If you are looking to join a low-ego, high-wattage team to work on some seriously challenging engineering problems, check out our jobs page. We’re hiring in many roles across both our San Francisco and Paris offices.