
Mailgun Authentication Service – Post Mortem July 2018

Josh Odom

This was originally posted on July 18, 2018.

The stuff of urban legends? An uncanny coincidence? Perhaps. What we do know is that this past Friday the 13th was not a great day for us.

Some of our customers were impacted by downtime, and we took immediate action to determine the root cause. We would like to be transparent and take a moment to share the details of our findings:

This is what happened

As part of ongoing work by our engineering teams, several of our internal and external services were updated to delegate authentication to a centralized authentication service. One of those updated services was deployed just after 10:00 UTC.

At 11:00 UTC on Friday, July 13, Mailgun engineering began receiving alerts of problems with several services. Our initial investigation suggested that the problem was related to this software change released earlier in the day, and we initiated immediate efforts to roll back that release.

Continued investigation revealed that, despite the rollback, our authentication services were still not responding in a timely manner. Authentication (and related) services were restarted, and systems began to resume normal operations. By 12:44 UTC, all services were fully functional again.

Why did this happen? What did you do about it?

Before this release, we had deployed an unrelated set of changes to the authentication service. These changes introduced additional latency into the authentication flow and reduced the rate at which requests could be serviced. Combined with the additional load generated by our updated services, this meant the queue of authentication requests grew faster than it could be drained. Failed requests were also being retried, which further compounded the load problem.
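
To make that dynamic concrete, here is a minimal, hypothetical sketch (not our production code; every rate below is an illustrative assumption) of how a latency regression combined with naive client retries can grow a request backlog faster than it drains:

```python
# Hypothetical model of the failure mode: once effective throughput drops
# below offered load, unserved requests queue up, and client retries add
# duplicate work on top of the queue, so the backlog compounds each second.

def simulate_backlog(seconds=60,
                     arrival_rate=100,     # new auth requests per second (assumed)
                     degraded_rate=80,     # requests served per second after the
                                           # latency regression (assumed)
                     retry_fraction=0.5):  # share of unserved requests retried (assumed)
    backlog = 0
    for _ in range(seconds):
        offered = arrival_rate + backlog      # new arrivals plus queued work
        served = min(offered, degraded_rate)  # capacity limits what we can drain
        unserved = offered - served
        # Unserved requests stay queued, and a fraction come back as retries,
        # which further compounds the load problem.
        backlog = unserved + int(unserved * retry_fraction)
    return backlog

print(simulate_backlog())  # keeps growing while offered load exceeds capacity
```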

To reduce the impact and restore service, we took several immediate measures:

  • reducing authentication load by reverting the most recently updated service

  • removing a circular dependency in the authentication flow to reduce latency

  • restarting authentication services to clear request backlog

Lessons Learned

Mailgun engineering has performed a comprehensive root cause analysis of this incident, and we have identified several actions we’ll be taking to reduce the likelihood of future incidents.

In addition to code and configuration changes made to remove unnecessary response latency, we are also in the process of formalizing service level objectives (SLOs). This will increase our visibility into service latency and introduce more comprehensive data collection, monitoring, and alerting to help us enforce those objectives.
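
As a rough illustration of the kind of check an SLO makes possible (the 250 ms / 99th-percentile objective and the sample data below are made-up examples, not Mailgun's published SLOs), an alert can fire whenever measured latency exceeds the objective:

```python
# Hypothetical latency-SLO check: compute the 99th-percentile latency over a
# window of samples and alert if it exceeds the (assumed) objective.
import statistics

SLO_P99_MS = 250.0  # illustrative objective: 99% of auth requests under 250 ms

def p99_latency_ms(samples_ms):
    """99th-percentile latency (ms) for a window of observed samples."""
    return statistics.quantiles(samples_ms, n=100)[98]

def slo_breached(samples_ms):
    return p99_latency_ms(samples_ms) > SLO_P99_MS

# Example window of observed latencies (ms); the final outlier trips the alert.
window = [40, 55, 60, 72, 90, 110, 130, 180, 240, 900]
if slo_breached(window):
    print(f"ALERT: auth p99 latency {p99_latency_ms(window):.0f} ms "
          f"exceeds the {SLO_P99_MS:.0f} ms objective")
```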

We are also developing tooling to identify potential problem areas earlier in the development and release cycle in order to keep incidents like this from impacting our customers.

We really appreciate our customers' understanding while we worked to resolve this issue quickly. We'd be happy to answer any questions or address concerns for impacted accounts – just open a support ticket, and our team will get back to you.

Last updated on August 28, 2020
