Being in the API business has its challenges, and maintaining the robustness of the system during peak hours is one of them. That’s why we do lots of stress testing here at Mailgun.
We have tried many different approaches over time, from simple ApacheBench runs to more complicated custom testing suites. But this post is about a “quick and dirty” yet very flexible approach to stress testing with Python.
When it comes to writing HTTP clients in Python, we are fans of the Requests library. It’s what we recommend to our API users. Requests is great, but it has one weakness: it’s a blocking, one-call-per-thread affair, which makes it hard or impossible to generate tens of thousands of requests quickly.
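For contrast, here is a rough sketch (our illustration, not code from the original post) of the blocking approach with Requests plus a thread pool. Every in-flight request ties up a worker thread, so the request rate is capped by the pool size:

# A rough sketch, not from the original post: blocking requests on a thread pool.
# Each thread sits idle while waiting on network I/O.
import requests
from concurrent.futures import ThreadPoolExecutor

def hit(url):
    # one blocking call per thread
    return requests.get(url, timeout=10).status_code

with ThreadPoolExecutor(max_workers=50) as executor:
    codes = list(executor.map(hit, ["https://www.github.com"] * 200))

print(codes[:5])

Scaling this beyond a few hundred concurrent requests means spawning ever more threads, which is exactly the limitation described above.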
To solve this problem we looked at Treq (GitHub repository). Treq is an HTTP client library inspired by Requests, but it runs on Twisted, and it possesses the typical Twisted powers: it is asynchronous and highly concurrent when it comes to network I/O.
Treq is not specific to stress testing at all: it’s a great tool for writing highly concurrent HTTP clients in general, like web crawlers. Treq is elegant, simple to use and powerful. Here’s an example:
>>> from treq import get
>>> from twisted.internet import reactor
>>> def done(response):
...     print(response.code)
...     reactor.stop()
>>> get("http://www.github.com").addCallback(done)
>>> reactor.run()
200
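The same request can also be driven without starting and stopping the reactor by hand. Here is a minimal sketch (our addition, not part of the original example) using Twisted’s task.react helper:

# A minimal sketch, not from the original post: letting task.react manage the reactor.
import treq
from twisted.internet.task import react
from twisted.internet.defer import inlineCallbacks

@inlineCallbacks
def main(reactor):
    # react() starts the reactor, runs main(), and stops it
    # once the returned Deferred fires
    response = yield treq.get("https://www.github.com")
    print(response.code)

react(main)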
Below is a simple script that uses Treq to bombard a single URL with the maximum possible number of requests.
#!/usr/bin/env python
from twisted.internet import epollreactor
epollreactor.install()

from twisted.internet import reactor, task
from twisted.web.client import HTTPConnectionPool
import treq
import random
from datetime import datetime

req_generated = 0
req_made = 0
req_done = 0

cooperator = task.Cooperator()

pool = HTTPConnectionPool(reactor)

def counter():
    '''This function gets called once a second and prints the progress at one
    second intervals.
    '''
    global req_generated, req_made, req_done
    print("Requests: {} generated; {} made; {} done".format(
        req_generated, req_made, req_done))
    # reset the counters and reschedule ourselves
    req_generated = req_made = req_done = 0
    reactor.callLater(1, counter)

def body_received(body):
    global req_done
    req_done += 1

def request_done(response):
    global req_made
    deferred = treq.json_content(response)
    req_made += 1
    deferred.addCallback(body_received)
    deferred.addErrback(lambda x: None)  # ignore errors
    return deferred

def request():
    deferred = treq.post('http://api.host/v2/loadtest/messages',
                         auth=('api', 'api-key'),
                         data={'from': 'Loadtest <test@example.com>',
                               'to': 'to@example.org',
                               'subject': "test"},
                         pool=pool)
    deferred.addCallback(request_done)
    return deferred

def requests_generator():
    global req_generated
    while True:
        deferred = request()
        req_generated += 1
        # do not yield deferred here so cooperator won't pause until
        # response is received
        yield None

if __name__ == '__main__':
    # make cooperator work on spawning requests
    cooperator.cooperate(requests_generator())
    # run the counter that will be reporting sending speed once a second
    reactor.callLater(1, counter)
    # run the reactor
    reactor.run()
The output:
2013-04-25 09:30 Requests: 327 generated; 153 sent; 153 received
2013-04-25 09:30 Requests: 306 generated; 156 sent; 156 received
2013-04-25 09:30 Requests: 318 generated; 184 sent; 154 received
The “generated” requests are the ones that have been prepared but that the Twisted reactor has not sent yet. This script ignores all errors for simplicity; adding stats for timeouts is left as an exercise for the reader.
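If you do want those stats, one possible extension (hypothetical, not part of the original script) is to route failures into their own counter instead of discarding them:

# A possible extension, not in the original script: count failed requests
# instead of swallowing them, and report the tally from counter().
req_failed = 0

def request_failed(failure):
    global req_failed
    req_failed += 1
    # failure.check(...) can be used here to separate timeouts
    # from other error types if more detail is needed

# then, in request_done(), replace the error-swallowing errback
#     deferred.addErrback(lambda x: None)
# with
#     deferred.addErrback(request_failed)
# and include (and reset) req_failed in the line printed by counter().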
The script can be used as a starting point and extended with your own application-specific logic. One suggested improvement would be to use collections.Counter instead of the ugly globals (a sketch follows below). The script runs on a single thread; to squeeze the maximum number of requests out of a machine, something like multiprocessing can be used.
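Here is a minimal sketch of that cleanup (our suggestion, not code from the original post), replacing the three globals in the script above with a single collections.Counter:

# A minimal sketch, assuming the script above: keep all tallies in one Counter.
from collections import Counter

stats = Counter()

def counter():
    # same once-a-second reporting as before, but reading from the Counter
    print("Requests: {} generated; {} made; {} done".format(
        stats['generated'], stats['made'], stats['done']))
    stats.clear()
    reactor.callLater(1, counter)

# elsewhere in the script, replace the global increments with
# stats['generated'] += 1, stats['made'] += 1 and stats['done'] += 1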
Happy stress testing!
Cheers, Mailgunners