Outbound SMTP service will no longer be supported by Mailsac.
What Does this Mean for Me?
Mail can no longer be sent from Mailsac addresses or custom domains using the outbound Mailsac SMTP service.
If you are sending from the REST API, compose email form, or Unified Inbox, we encourage you to seek out other SMTP sending services. Mailsac has always supported direct SMTP from anywhere, without a mail relay, as well.
Reasons for the Change
Email delivery is not an easy problem at scale.
Our customers, especially quality assurance teams, are using Mailsac as a receive-only service.
The likelihood that Mailsac’s outbound messages are delivered to the intended inbox has been trending lower over time, despite increased effort.
We made the decision to focus our efforts on improving our core product – disposable inbound email for testing.
The Mailsac Self-Hosted Temporary Email User Interface is available in a GitHub repository. This project provides a self-hosted user interface for viewing disposable email. It uses mailsac.com as the backend email service.
Mailsac.com Limitations
Mailsac already offers disposable email without a need to sign up for an account. What need does this application meet that Mailsac doesn’t already provide?
Mailsac has limitations on what can be viewed without signing up for an account. Only the latest email in a public mailbox can be viewed without signing in. Mail in a private domain cannot be viewed without signing in with an account that has permissions to the private domain.
Use Cases
There are two use cases that customers have brought to our attention that Mailsac’s service doesn’t satisfy. Both stem from a requirement to allow users read-only access to an inbox without requiring them to create a Mailsac account.
Classroom Use Case
An instructor may want students, who are too young to have their own email addresses, to sign up for an account with a web service used in the class. The Mailsac Self-Hosted Temporary Email User Interface provides a simplified interface for students to view email sent to a private Mailsac-hosted domain without the need to sign up for a Mailsac account or email address.
Acceptance Tester Use Case
As part of the software development lifecycle there is a need to have software tested by users. Temporary email has long been beneficial to testing, and the Mailsac Self-Hosted Temporary Email User Interface makes this easier. Users can test applications using email addresses in a Mailsac-hosted private domain without the need to sign up for a Mailsac account. Furthermore, because the application is self-hosted, companies can use a reverse proxy to enforce IP allow lists or put the application behind basic authentication.
Running the Mailsac Self-Hosted Email User Interface
Local
With Node.js installed, this application can be run with the following commands.
npm install && npm run build
MAILSAC_KEY=YOUR_MAILSAC_API_KEY npm run start
You will need to generate a Mailsac API key. To generate or manage API Keys use the API Keys page.
The application is now running and can be accessed via a web browser at http://localhost:3000.
Any public or private Mailsac hosted address the API key has access to can be viewed by entering the email address in the text box and selecting “view mail”.
Domain Option
You can prepopulate the domain by using the NEXT_PUBLIC_MAILSAC_CUSTOM_DOMAIN environment variable.
NEXT_PUBLIC_MAILSAC_CUSTOM_DOMAIN=example.mailsac.com npm run build
MAILSAC_KEY=YOUR_MAILSAC_API_KEY npm run start
Vercel Hosted
Vercel is a platform as a service provider. Their service makes running your own Next.js application easy.
Grant Vercel permissions to read all your repos or choose to grant permission on the forked repo
Import forked repository into Vercel
Configure MAILSAC_KEY environment variable
Deploy application
After a successful deployment you can click on the image of the application to be taken to the live application.
NOTE: There is currently no authentication on this application. Anyone with the URL will be able to view emails and domains associated with the Mailsac API key that was used. Operations will be tracked in the Mailsac account the API key is associated with.
You are free to deploy this app however you like. Please keep the attribution to Mailsac.
Mailsac is changing DNS providers to Cloudflare to provide a more resilient SaaS offering.
Customer Changes
No customer changes are required. If you implemented IP-based ACLs at the VLAN or border firewall, it is possible these rules may need to be updated. Cloudflare publishes a list of their IP addresses.
Updates
Saturday, April 2nd, 14:33 UTC: DNS has been switched over to Cloudflare. All validation tests have been completed. We will continue to monitor for issues.
Mailsac provides a REST API to fetch and read email. The REST API also allows you to reserve an email address that can forward messages to another Mailsac email address, Slack, a WebSocket, or a webhook.
This article describes how to integrate with Mailsac using Java and the JUnit testing framework. The JavaMail API will be used to send email via SMTP.
What is JUnit?
JUnit is a unit testing framework for the Java programming language. The latest version of the framework, JUnit 5, requires Java 8 or above. It supports testing using a command-line interface, build automation tools, and IDEs.
JUnit can be used to test individual components of code to ensure that each unit is performing as intended.
Setting Up the Environment
Depending on the environment, there are multiple ways to run tests. Testing from the command line and testing with build tools are both included in this example.
Testing Using Command-Line
Running tests from the command line requires the ConsoleLauncher application (junit-platform-console-standalone-1.7.2.jar). The JUnit ConsoleLauncher is published in the Maven Central repository under the junit-platform-console-standalone directory.
The first section of output shows the name of the unit test (tests truth) and the test names (true equals true and false equals false). The checkmark next to the test name indicates it was successful.
The second section of output shows a summary of the test results.
Testing Using Build Tools
Testing from build automation tools, like Maven, is another option. In many ways, using build tools is the best option. For instance, they provide a standard directory layout that encourages industry standard naming conventions.
Maven abstracts many underlying mechanisms allowing developers to run a single command for validating, compiling, testing, packaging, verifying, installing, and deploying code.
This section will describe how to set up Maven for building, managing, and testing a project.
Edit the AppTest.java file: $EDITOR src/test/java/com/mailsac/api/AppTest.java
package com.mailsac.api;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;
class TestClass {
@Test
void trueEqualsTrue() {
// The assertTrue method asserts that the supplied condition is true.
// static void assertTrue(condition)
assertTrue(true);
}
@Test
void falseEqualsFalse() {
// The assertEquals method asserts that expected and actual are equal.
// static void assertEquals(expected, actual)
assertEquals(false, false);
}
}
In the directory mailsac-integration-test-java, run mvn clean package. This command deletes the target folder, packages the project into a new target folder, and runs the unit tests.
Tests can be manually run using the command mvn test in the mailsac-integration-test-java directory. The output should appear similar to:
[INFO] -------------------------------------------------------
[INFO] T E S T S
[INFO] -------------------------------------------------------
[INFO] Running com.mailsac.api.TestClass
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.029 s - in com.mailsac.api.TestClass
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0
Mailsac Java Integration Test
This section describes how to leverage Mailsac and JUnit to test mail delivery. Emails will be sent to Mailsac using SMTP and email delivery will be validated with JUnit.
There are three additional libraries that will be used:
The Unirest library will be used to send REST requests to the Mailsac API.
The Jackson library will be used to parse JSON responses from the Mailsac API.
The JavaMail API will be used to send email over SMTP.
package com.mailsac.api;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
// Unirest imports shown for the kong.unirest artifact (unirest-java 3.x);
// adjust the package names if your pom.xml pulls in a different Unirest version.
import kong.unirest.HttpResponse;
import kong.unirest.Unirest;
import kong.unirest.UnirestException;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.util.Date;
import java.util.Properties;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.junit.jupiter.api.Assertions.fail;
public class AppTest {
// MAILSAC_API_KEY environment variable. Generated by mailsac. See
// https://mailsac.com/api-keys
static String mailsacAPIKey = "";
// MAILSAC_TO_ADDRESS environment variable. Who you're sending an email to.
static String mailsacToAddress = "";
// SMTP_FROM_ADDRESS environment variable. Necessary if you are sending
// through out.mailsac.com (unlikely - you most likely will replace
// sendMail() below).
static String fromAddress = "";
// SMTP_USERNAME environment variable. Required for authenticated SMTP sending
static String smtpUserName = "";
// SMTP_PASSWORD environment variable. Required for authenticated SMTP sending
static String smtpPassword = "";
// SMTP_HOST environment variable. Hostname of your SMTP server
static String smtpHost = "";
// SMTP_PORT environment variable. Port used for SMTP sending
static int smtpPort = 587;
@BeforeAll
static void setup() throws Exception {
mailsacAPIKey = System.getenv().get("MAILSAC_API_KEY");
mailsacToAddress = System.getenv().get("MAILSAC_TO_ADDRESS");
fromAddress = System.getenv().get("SMTP_FROM_ADDRESS");
smtpUserName = System.getenv().get("SMTP_USERNAME");
smtpPassword = System.getenv().get("SMTP_PASSWORD");
smtpHost = System.getenv().get("SMTP_HOST");
if (System.getenv().get("SMTP_PORT") != null) {
// override the default port of 587 when SMTP_PORT is set
smtpPort = Integer.parseInt(System.getenv().get("SMTP_PORT"));
}
if (mailsacAPIKey == null || mailsacToAddress == null || fromAddress == null) {
throw new Exception("Missing environment variable setup!");
}
if (smtpUserName == null || smtpPassword == null || smtpHost == null) {
throw new Exception("Missing SMTP environment variables");
}
System.out.println(mailsacAPIKey);
System.out.println(mailsacToAddress);
System.out.println(fromAddress);
}
}
Add a purgeInbox() method which makes a DELETE request to /api/addresses/{email}/messages/{messageId}.
This section of code should be added to the existing AppTest class.
public class AppTest {
//...
@BeforeEach
@AfterEach
// purgeInbox cleans up all messages in the inbox before and after running each
// test,
// so there is a clean state.
void purgeInbox() throws UnirestException, JsonProcessingException {
HttpResponse<String> response = Unirest
.get(String.format("https://mailsac.com/api/addresses/%s/messages", mailsacToAddress))
.header("Mailsac-Key", mailsacAPIKey)
.asString();
// Parse JSON
ObjectMapper objectMapper = new ObjectMapper();
Object[] messagesArray = objectMapper.readValue(response.getBody(), Object[].class);
for (int i = 0; i < messagesArray.length; i++) {
JsonNode m = objectMapper.convertValue(messagesArray[i], JsonNode.class);
String id = m.get("_id").asText();
System.out.printf("Purging inbox message %s\n", id);
Unirest.delete(String.format("https://mailsac.com/api/addresses/%s/messages/%s", mailsacToAddress, id))
.header("Mailsac-Key", mailsacAPIKey)
.asString();
}
}
//...
}
Implement a sendMail() method which sends an email. This section will likely be different depending on your use case. For example, you may be sending emails via your web application or via an email campaign.
public class AppTest {
//...
static void sendMail(String subject, String textMessage, String htmlMessage)
throws UnsupportedEncodingException, MessagingException {
Session session = Session.getDefaultInstance(new Properties());
javax.mail.Transport transport = session.getTransport("smtp");
MimeMessage msg = new MimeMessage(session);
// set message headers
msg.addHeader("Content-type", "text/HTML; charset=UTF-8");
msg.addHeader("format", "flowed");
msg.addHeader("Content-Transfer-Encoding", "8bit");
msg.setFrom(fromAddress);
msg.setReplyTo(InternetAddress.parse(fromAddress));
msg.setSubject(subject, "UTF-8");
msg.setText(textMessage, "UTF-8");
msg.setContent(htmlMessage, "text/html");
msg.setSentDate(new Date());
msg.setRecipients(Message.RecipientType.TO, mailsacToAddress);
msg.saveChanges();
System.out.println("Email message is ready to send");
transport.connect(smtpHost, smtpPort, smtpUserName, smtpPassword);
transport.sendMessage(msg, msg.getAllRecipients());
System.out.println("Email sent successfully");
}
// ...
}
Add a test. Use a for loop to check if the message was received by scanning the recipient inbox periodically. If the recipient inbox is not empty and a message was found, the test verifies the message content.
This test uses the Mailsac API endpoint /api/addresses/{email}/messages, which lists all messages in an inbox.
public class AppTest {
//...
@Test
void checkEmailWithLink() throws MessagingException, UnirestException, IOException, InterruptedException {
sendMail("Hello!", "Check out https://example.com", "Check out <a href='https://example.com'>My website</a>");
// Check inbox for the message up to 10x, waiting 5 seconds between checks.
found: {
for (int i = 0; i < 10; i++) {
// Send request to fetch a JSON array of email message objects from mailsac
HttpResponse<String> response = Unirest
.get(String.format("https://mailsac.com/api/addresses/%s/messages", mailsacToAddress))
.header("Mailsac-Key", mailsacAPIKey)
.asString();
// Parse JSON
ObjectMapper objectMapper = new ObjectMapper();
Object[] messagesArray = objectMapper.readValue(response.getBody(), Object[].class);
System.out.printf("Fetched %d messages from Mailsac for address %s\n", messagesArray.length,
mailsacToAddress);
eachMessage: {
for (int m = 0; m < messagesArray.length; m++) {
// Convert object into JSON to fetch a field
JsonNode thisMessage = objectMapper.convertValue(messagesArray[m], JsonNode.class);
// After a message is found, the JSON object is checked to see if the link was
// sent correctly
assertTrue(thisMessage.get("links").toString().contains("https://example.com"),
"Missing / Incorrect link in email");
System.out.printf("Message id %s contained the correct link\n",
thisMessage.get("_id").asText());
return; // end the tests
}
}
System.out.println("Message not found yet, waiting 5 secs");
Thread.sleep(5000);
}
// Fail the test if we haven't reached assertTrue above
fail("Never received expected message!");
}
}
// ..
}
At this point, the code is complete. Package the project: mvn clean package. This will also run a test.
Subsequent changes to the source file do not require you to run mvn clean package again. Instead, run mvn test.
The output should appear similar to this:
[INFO] -------------------------------------------------------
[INFO] T E S T S
[INFO] -------------------------------------------------------
[INFO] Running com.mailsac.api.AppTest
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.148 s - in com.mailsac.api.AppTest
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
GitHub Repository
If you encounter any difficulties, git clone https://github.com/mailsac/mailsac-integration-test-java. Make edits as necessary, and run mvn package.
If your tests fail because of error codes when making requests to the Mailsac API, please refer to the API Specification for further reading.
Next Steps
The Mailsac API Specification has generated code examples in Java + Unirest for making requests. It also has code examples in other languages.
This example can be adjusted to get all private email addresses for an account and purge their inboxes if necessary.
Please visit our forums if you have any questions!
Many email services, including Gmail, iCloud, and Fastmail, support stripping the + plus symbol and everything after it from the local-part of the address (everything before the @ symbol). For example, mail sent to user+newsletter@example.com is delivered to the user@example.com inbox.
Plus-addressing has long been a useful feature to segment user accounts across services. At Mailsac we offer a variety of disposable email and forwarding utilities designed for software QA engineers and developers. Things like forwarding all messages in a domain to a single address, or automatically routing email to webhooks or Slack, are really easy and may not even require DNS setup.
Tech news has recently been full of CVEs related to a popular JVM logging library named Log4J.
Mailsac services do not rely on JVM languages, including Java. This extends through the entire stack: custom apps, self-hosted open source software, internal and external infrastructure, proxies, and scripts.
The Mailsac Team is small yet mighty, with decades of experience taking security seriously. We follow best practices for infrastructure-as-code, patching, testing, network isolation, backups, restoration, and the principle of least access. Large enterprises including banks and government agencies trust Mailsac for disposable email testing. We provide exceptionally fast and predictable REST and WebSocket APIs with an excellent uptime record.
Mailsac has support for multiple users under the same account, so you can keep disposable email testing private within your company.
It’s free to test email immediately – no payment details required. You can send email to any address @mailsac.com and confirm delivery in seconds without even logging in. Start now at mailsac.com.
The Mailsac engineering team recently open sourced our internal throttling service, dracula. Check it out on GitHub. In the repo we prebuild server and CLI binaries for macOS and Linux, and provide a client library for Go.
Dracula has performed extremely well in AWS on ARM64 in production for us. It handles thousands of requests per second without noticeable CPU spikes, while maintaining low memory.
In this blog post we’re going to give an overview of why it was necessary, explain how it works, and describe dracula’s limitations.
Why we made it
For the past few years Mailsac tracked throttling in a PostgreSQL unlogged table. By using an unlogged table we avoided lots of disk writes, at the cost of the safety provided by the write-ahead log. Throttling records are only kept for a few minutes. We figured that if Postgres was rebooting, losing throttling records from the past few minutes would be the least of our worries.
In the months leading up to replacing this unlogged table with dracula we began hitting performance bottlenecks. Mailsac has experienced fast growth over the past few years. Heavy sustained inbound mail resulted in significant CPU time while Postgres vacuumed the throttling tables. The throttling table started eating too many CPU credits in AWS RDS – credits that we needed for more important work like processing emails.
We needed a better throttling solution, one that could independently protect inbound mail processing and REST API services. Postgres was also the primary data store for parsed emails. The Postgres-based solution was a multi-tiered approach to throttling – especially against bad actors – and helped keep our website and REST API snappy, even when receiving a lot of mail from questionable sources. The throttling layer also caches customer data so we can separate paying users from unknown users. Separating this layer from the primary data store would help them scale independently.
Can Redis do it?
So it was time to add a dedicated throttle cache. We reached for Redis, the beloved data structure server.
We were surprised to find our use case – counting quickly-expiring entries – is not something Redis does very well.
Redis can count items in a hash or list. Redis can return keys matching a pattern. Redis can expire keys. But it can’t expire list or hash item entries. And Redis can’t count the number of keys matching a pattern – it can only return those keys which you count yourself.
What we needed Redis to do was count items matching a pattern while also automatically expiring old entries. Since Redis couldn’t do this combination of things, we looked elsewhere.
Other utility services seemed too heavy and full-of-features for our needs. We could have stood up a separate Postgres instance, used MongoDB, Elasticache, or Prometheus. The team has experience running all these services. But the team is also aware that the more features and knobs a service has, the more context is needed to debug it – the more expertise to understand its nuances, the more risk you’ll actually use additional features, and the more risk you’ll be slow responding to issues under heavy load.
All we wanted to do was put some values in a store, have them expired automatically, and frequently count them. We’d need application level logic to do at least some of this, so we made a service for it – dracula. Please check it out and give it a try!
How it works under the hood
Dracula is a server where you can put entries, count the entries, and have the entries automatically expire.
The dracula packet layout is as follows. See protocol.go for the implementation.
Section | Size | Example
Command character (Put, Count, Error) | 1 byte | byte('P'), 'C', 'E'
space | 1 byte | byte(' ')
xxhash of pre-shared key + id + namespace + data | 8 bytes | 0x1c330fb2d66be179
space | 1 byte | byte(' ')
Client Message ID | 4 bytes, unsigned 32-bit integer (Little Endian) | 6 or []byte{6, 0, 0, 0}
space | 1 byte | byte(' ')
Namespace | 64 bytes | "Default" or "anything" up to 64 bytes
space | 1 byte | byte(' ')
Entry data | remaining 1419 bytes | 192.169.0.1, or any string up to end of packet
1500 byte dracula packet byte order
Here’s roughly how the dracula client-server model works:
The client constructs a 1500 byte packet containing a client-message ID, the namespace, and the value they want to store in the namespace (to be counted later).
A hash of the pre-shared secret + message ID + namespace + entry data is set inside the front part of the message.
A handler is registered under the client message ID.
The bytes are sent over UDP to the dracula server.
Client is listening on a response port.
If no response is received before the message times out, a timeout error is returned and the handler is destroyed. If the response comes after the timeout, it’s ignored.
Server receives the packet, decodes it, and checks the hash, which incorporates the pre-shared secret.
Server performs the action. There are only two commands – either Put a namespace + entry key, or Count a namespace + entry key.
Server responds to the client using the same command (Put or Count). The entry data is replaced with a 32 bit unsigned integer in the case of a Count command. The hash is computed similarly to before.
Client receives the packet, decodes it and confirms the response hash.
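To make the framing concrete, here is a rough Node.js sketch of assembling a Put packet following the layout above. This is illustrative only: the authoritative implementation is protocol.go, which defines the exact hash input, byte order, and padding, and the xxhash64() parameter here is a placeholder for whichever xxhash binding you prefer.
// Illustrative sketch only: framing a dracula "Put" packet per the layout above.
// protocol.go is authoritative; xxhash64() is a placeholder helper assumed to
// return an 8-byte Buffer.
const PACKET_SIZE = 1500;
function buildPutPacket(preSharedKey, messageId, namespace, entryData, xxhash64) {
  const idBytes = Buffer.alloc(4);
  idBytes.writeUInt32LE(messageId); // client message ID, unsigned 32-bit little endian
  // hash covers pre-shared key + message ID + namespace + entry data
  const hashBytes = xxhash64(
    Buffer.concat([
      Buffer.from(preSharedKey),
      idBytes,
      Buffer.from(namespace),
      Buffer.from(entryData),
    ])
  );
  const nsBytes = Buffer.alloc(64); // namespace is a fixed 64-byte field
  nsBytes.write(namespace);
  const space = Buffer.from(" ");
  const packet = Buffer.alloc(PACKET_SIZE); // unused trailing bytes stay zeroed
  Buffer.concat([
    Buffer.from("P"), // command character: Put
    space,
    hashBytes, // 8-byte xxhash
    space,
    idBytes, // 4-byte client message ID
    space,
    nsBytes, // 64-byte namespace
    space,
    Buffer.from(entryData), // entry data, up to the remaining 1419 bytes
  ]).copy(packet); // Buffer.copy truncates anything past 1500 bytes
  return packet;
}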
Data structures
Dracula uses a few data structures for storing data.
Namespaces are stored in a hashmap provided by github.com/emirpasic/gods, and we use a simple mutex to sync multithreaded access. Entries in each namespace are stored in a wrapped AVL tree from the same repo, to which we added garbage collection and thread safety. Each node of the AVL tree has an array of sorted dates.
Here’s another view:
dracula server
Namespaces (hashmap)
Entries (avltree)
sorted dates (go slice / dynamic array of int64)
Server configuration
When using dracula, the client has a different receiving port than the server. By default the dracula server uses port 3509. The server will write responses back to the same UDP port it received messages from on the client.
Messages are stored in a “namespace”, which is pretty much just a container for stored values. The namespace is like a top-level key in Redis. The CLI has a default namespace if you don’t provide one. The Go client requires choosing a namespace.
Namespaces and entries in namespaces are exact – dracula does not offer any matching on namespaces.
At Mailsac, we use namespaces to separate messages on a per-customer basis, and to separate free traffic. Namespaces are intentionally generic. You could just use one namespace if you like, but performance under load improves if entries are bucketed into namespaces.
Production Performance
Dracula is fast and uses minimal resources by today’s standards.
We develop it on Intel, but in production we run dracula on ARM64 architecture under Amazon Linux for significant savings.
In its first months of use, dracula did not spike above 1% CPU usage and 19 MB of RAM, even when handling single-digit-thousands of requests simultaneously.
Tradeoffs
By focusing on a small subset of needs, we designed a service with sharp edges. Some of these may be unexpected, so we want to enumerate what we know.
It only counts
It’s called dracula as an allusion to Count Dracula. There’s no way to list namespaces or keys, or to return stored values. Entries in a namespace can be counted, and the number of namespaces can be counted. That’s it! If we provided features like listing keys or namespaces, we would have needed to change the name to List Dracula.
No persistence
Dracula is designed for short-lived ephemeral data. If dracula restarts, nothing is currently saved. This may be considered in the future, though. Storing metrics or session data in dracula is an interesting idea. On the other hand, we see no need to reinvent Redis or Prometheus.
Small messages
An entire dracula protocol message is 1500 bytes. If that sounds familiar, it’s because 1500 bytes is the typical network maximum transmission unit, so a dracula message fits in a single UDP packet. Namespaces are capped at 64 bytes and values can be up to 1419 bytes. After that they’re cut off.
Same expiry
All namespaces and entries in the entire server have the same expiry time (in seconds). It shouldn’t be too difficult to run multiple draculas on other ports if you have different expiry needs.
HA
The approach to high-availability assumes short-lived expiry of entries. A pool of dracula servers can replicate to one another, and dracula clients track health of pool members, automatically handling failover. Any client can read from any server, but in the case of network partitioning, consistency won’t be perfect.
Retries
Messages that fail or time out are not retried by the dracula client right now. There’s nothing stopping the application level from handling this. It may be added as an option later.
Garbage
While we have not yet experienced issues with dracula’s garbage collection, it’s worth noting that it exists. A subset of entries are crawled and expired on a schedule. On “count” commands, old entries are expired. The entire store is typically not locked, but you would likely see a little slowdown when counting entries in very large namespaces, or when there are a lot of old entries to clean up, while GC is running. In our testing it’s on the order of single-digit milliseconds, but this can be expected to grow linearly with size.
Unknown scale
We’re working with low tens of thousands of entries per namespace, maximum. Above that, we’re unsure how it will perform.
Language support
Upon release, dracula has a reference client implementation in Golang. Node.js support is on our radar, but not finished. Please open an issue in the dracula repo to request support for a specific language. We’d be thrilled to receive links to community written drivers as well.
What’s next?
Hopefully you enjoyed learning a little bit about dracula and are ready to give it a try. Head over to GitHub at https://github.com/mailsac/dracula, where we’ve added examples of using the server, client library, and CLI.
Finally, Mailsac would love your feedback. Open a Github issue or head to forum.mailsac.com. If you’d like to see additional library languages supported, let us know.
On the US holiday Thanksgiving, November 25th at approximately 17:20, an email address [email protected] began sending tens of thousands of simultaneous emails to Mailsac. By 17:28, various alerts were sent to the devops team. Primary inbound mail services were exhausted of memory and either locked up or were ready to fall over. Soon the failover services were overrun and inbound mail stopped working entirely.
Recovery Actions
The devops team sprang into action and took evasive maneuvers. Grafana dashboards, which show key indicators of service health, were slow to load or unresponsive. Logging infrastructure was still working and showed that the sender was using a Reply-To address of [email protected], yet the envelope and FROM header addresses were generated from unique subdomains per inbound email address, which exploited a previously unknown workaround of Mailsac’s multi-tier throttling infrastructure. All of these messages came from sandbox Salesforce subdomains – at least 6 subdomains deep.
Once the root cause was discovered, the sender’s mail was blocked, and additional resources were allocated to inbound mail services to provide more memory headroom while blocklists propagated across the network of inbound mail services. By 17:40, inbound mail was coming back online, and by 17:44 most alerts had resolved.
Lessons Learned
We monitor and throttle inbound mail in several custom systems. The goal of these systems is to keep pressure off our primary datastore and API services, provide insight into system load, and identify bad actors. The monitoring systems looked mostly at the domain and/or subdomain. Unfortunately we did not anticipate a sender with unique subdomains per message. This caused tens of thousands of superfluous Prometheus metrics, which led to three things being overwhelmed:
the metrics exporter inside the inbound mail server,
the Prometheus metrics server, which ran out of memory, and
the Grafana dashboards, which became unresponsive due to too many apparently unique senders.
All of the described issues have been fixed.
Non-Impacted Services
During the outage all other services remained up. The REST API, web sockets, outbound SMTP, SMTP capture, and more were unaffected.
We want to apologize to all of our paying customers. Mailsac is often integrated with automated tests in CI/CD systems. If our downtime also caused alerts for you, we’re very sorry about this! The root cause has been fixed and we’re continuing to monitor the situation.
Integration tests identify errors between systems. These tests can be slow to run because of the interactions between multiple systems.
Mailsac can facilitate integration testing between web apps and transactional email services.
This article explains how to use WebSockets to make your email integration tests faster and simpler than with REST API polling.
Explaining the differences: REST APIs vs WebSockets
A REST API call uses an HTTP request for creating, reading, updating, and deleting objects. The HTTP connection between the client and server is short-lived. An example of this is the List Messages In Inbox endpoint, which returns JSON-formatted information about email messages in a Mailsac inbox. Each time a client checks for new messages, a new HTTP connection is used.
A WebSocket is a persistent connection between a client and server providing full-duplex communication. By reusing an established connection there is no need to poll the REST API for changes; instead, data can be pushed to the client in real time. WebSockets often listen on ports 80/443 but do not use HTTP, except for the initial connection handshake.
REST API Polling Examples
The examples below use the “old way” – hit the Mailsac REST API and “poll” the inbox for new messages.
Polling is reliable and familiar for API programmers.
However, this approach can result in significant delays between when the email was received by Mailsac and when the test checks for a new message. Every few seconds, you ask the server if there are new messages.
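As a reference point, a minimal polling loop might look something like the sketch below. It assumes Node 18+ (for the built-in fetch) and reuses the mailsacAPIKey and mailsacToAddress values from the test configuration shown later in this article.
// Sketch of REST API polling: list messages for the inbox every 5 seconds,
// up to 10 times, until a message shows up. Assumes Node 18+ global fetch and
// the mailsacAPIKey / mailsacToAddress variables from the configuration below.
async function waitForMessage() {
  for (let attempt = 0; attempt < 10; attempt++) {
    const res = await fetch(
      `https://mailsac.com/api/addresses/${mailsacToAddress}/messages`,
      { headers: { "Mailsac-Key": mailsacAPIKey } }
    );
    const messages = await res.json(); // JSON array of email message objects
    if (messages.length > 0) {
      return messages[0]; // a message arrived - the test can inspect it now
    }
    // nothing yet - wait 5 seconds and poll again
    await new Promise((resolve) => setTimeout(resolve, 5000));
  }
  throw new Error("Never received expected message!");
}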
Using a Web Socket Instead of Polling
What if the server could notify you that there are new messages? That’s where WebSockets come in.
Steps to Validate Email Contents Using a WebSocket
Here’s how you can get notified of a new email message:
Establish a WebSocket connection with Mailsac
Send an email using SMTP to a private Mailsac address
Receive content of the email over the established WebSocket
Validate the content of the email
Send a second email using SMTP to a private Mailsac address
Receive content of second email over the established WebSocket
Validate the content of the second email
Delete both email messages
Close WebSocket connection
Test Configuration
This test requires several variables be defined. They can be set by editing the script or by setting environment variables.
const mailsacAPIKey = process.env.MAILSAC_API_KEY || ''; // Generated by mailsac. See https://mailsac.com/api-keys
const mailsacToAddress = process.env.MAILSAC_TO_ADDRESS || ''; // Mailsac email address where the email will be sent
const smtpUserName = process.env.SMTP_USER || ''; // Username for smtp server authentication
const smtpPassword = process.env.SMTP_PASSWORD || ''; // Password for smtp server authentication
const smtpHost = process.env.SMTP_HOST || ''; // hostname of the smtp server
const smtpPort = process.env.SMTP_PORT || 587; // port the smtp is listening on
Setup: Configure Mailsac Address for WebSocket Forwarding
The Mailsac address used in this example needs to have WebSocket forwarding enabled on it. Any messages sent to the email address will be forwarded by the Mailsac WebSocket server.
To enable WebSocket forwarding the Mailsac address must be private. Private addresses have additional features such as forwarding to Slack, forwarding to a Webhook, and forwarding to a WebSocket. Select the “Settings” button next to the email address you want to configure from Manage Owned Email Addresses. Select the check box labeled “Enabled forwarding all incoming email via web socket” and select “Save Settings”.
The WebSocket connection to Mailsac is established on lines 12-14. The ws package will only reject a Promise if it fails to connect to the WebSocket server due to a network error. Wrapping the connection in a Promise allows for additional validations.
The Mailsac WebSocket server will send the message ({"status":200,"msg":"Listening","addresses":["[email protected]"]}) after the initial connection. In lines 16-26 the initial message is parsed and checked for the value of the status property. The Promise is rejected if the initial status message is not received or does not have a status code of 200.
const mailsacAPIKey = process.env.MAILSAC_API_KEY || ''; // Generated by mailsac. See https://mailsac.com/api-keys
const mailsacToAddress = process.env.MAILSAC_TO_ADDRESS || ''; // Mailsac email address where the email will be sent
describe("send email to mailsac", function () {
// Open websocket waiting for email. This websocket will be reused for tests in this file.
before(() => {
return new Promise((resolve, reject) => {
ws = new WebSocket(
`wss://sock.mailsac.com/incoming-messages?key=${mailsacAPIKey}&addresses=${mailsacToAddress}`
);
let wsMessage; // message response object
ws.on("message", (msg) => {
try {
wsMessage = JSON.parse(msg);
} catch {
assert(wsMessage, "Failed to parse JSON from websocket message");
}
if (wsMessage.status != 200) {
reject(new Error("connection error: " + wsMessage.error));
return;
}
resolve(wsMessage);
});
ws.on("error", (err) => {
reject(err);
});
});
});
});
The connection to the SMTP server is configured in lines 4-10. Most SMTP servers will require authentication.
The email’s to, from, subject, and content are set in lines 14-18. The email will be sent to the address defined in the configuration at the beginning of the script or the environment variable MAILSAC_TO_ADDRESS. The email will include a link to the website https://example.com.
it("sends email with link to example.com website", async () => {
// create a transporter object using the default SMTP transport
const transport = nodemailer.createTransport({
host: smtpHost,
port: smtpPort,
auth: {
user: smtpUserName,
pass: smtpPassword,
},
});
// send mail using the defined transport object
const result = await transport.sendMail({
from: smtpUserName, // sender address
to: mailsacToAddress, // recipient address
subject: "Hello!",
text: "Check out https://example.com",
html: "Check out <a href https://example.com>My website</a>",
});
});
3. Receive Message via WebSocket
Once the email arrives, Mailsac will send a JSON formatted version of the email on the WebSocket established earlier in this example. ws.on("message", (msg) => { ... }) is a function that will run when a new message is sent by the WebSocket server. The msg is parsed as JSON. Then the Promise will resolve if the message has a to property. The existence of the to property is checked to make sure the message sent by the WebSocket server was an email and not a status message. The await keyword will cause the test to wait until a message is sent over the WebSocket or the test times out.
The assert package is used to validate the contents of the email. The subject and text properties are assigned to new variables. assert.equal(subject, "Hello!"); will cause an exception if subject is not equal to Hello!. The test framework Mocha will interpret this as a failure and the test will fail. Likewise, if the variable email_text is not Check out https://example.com the test will fail.
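Putting that description together, and mirroring the second test shown below, the receive-and-assert portion of this first test might look like the following sketch. It belongs inside the same it() block, after transport.sendMail():
// Sketch of the receive-and-assert portion of the first test, based on the
// description above. It sits in the same it() block, after transport.sendMail().
const wsMessage = await new Promise((resolve) => {
  ws.on("message", (msg) => {
    const wsResponse = JSON.parse(msg);
    // only resolve for actual emails - the initial status message has no "to"
    if (wsResponse.to) {
      resolve(wsResponse);
    }
  });
});
assert(wsMessage, "Never received messages!");
const subject = wsMessage.subject;
const email_text = wsMessage.text;
assert.equal(subject, "Hello!");
assert.equal(email_text, "Check out https://example.com");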
The reason to send a second email is to demonstrate that the Mailsac WebSocket connection can be reused. This second test will reuse the WebSocket connection established (variable name ws) in the before() block in step 1. The only difference between the first and second test is the content of the email.
// Sends a second email reusing the websocket.
it("sends email with link to unsubscribe.example.com website", async () => {
const transport = nodemailer.createTransport({
host: smtpHost,
port: smtpPort,
auth: {
user: smtpUserName,
pass: smtpPassword,
},
});
const result = await transport.sendMail({
from: smtpUserName, // sender address
to: mailsacToAddress, // recipient address
subject: "Unsubscribe",
text: "Click the link to unsubscribe https://unsubscribe.example.com",
html: "Check out <a href https://example.com>My website</a>",
});
console.log("Sent email with messageId: ", result.messageId);
const wsMessage = await new Promise((resolve) => {
ws.on("message", (msg) => {
const wsResponse = JSON.parse(msg);
if (wsResponse.to) {
resolve(wsResponse);
}
});
});
assert(wsMessage, "Never received messages!");
const subject = wsMessage.subject;
const email_text = wsMessage.text;
assert.equal(subject, "Unsubscribe");
assert.equal(
email_text,
"Click the link to unsubscribe https://unsubscribe.example.com"
);
});
8. Delete Emails to Prevent Leaky Tests
An after() block will run after the tests have completed. The REST API endpoint Delete All Messages In An Inbox is called to delete the test emails. Deleting all the test emails prevents them from being fetched by another test, which could impact its results. It is best practice to clean up after tests have run.
The NPM package supertest is used to make the REST call to delete the messages. Virtually any HTTP client library could be used; feel free to use the one you are most comfortable with.
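As a sketch, the cleanup block could look something like the following, assuming supertest is required as request and reusing the configuration variables from earlier. The route shown follows the List Messages path used above; check the API Specification for the exact endpoint.
// Sketch of the cleanup step: delete all messages in the test inbox after the
// tests finish so they cannot leak into other tests. Assumes
// const request = require("supertest"); and the configuration variables above.
after(() =>
  request("https://mailsac.com")
    .delete(`/api/addresses/${mailsacToAddress}/messages`)
    .set("Mailsac-Key", mailsacAPIKey)
);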
An after() block is used to close the WebSocket connection after all the tests have completed. It is best practice to close all connections on the termination of the test.
// close websocket after all tests finish
after(() => ws.close());
Next Steps
The use of WebSockets helps speed up tests and makes more efficient use of API calls.
See the WebSocket Test Page to see a WebSocket in action in your browser. This page includes a basic code example of a WebSocket client and is a great starting point before diving into integration testing using WebSockets.
If you have questions about this example or the Mailsac WebSocket service, please post on https://forum.mailsac.com.