Logout fixes, new charts, running on the software treadmill

Today’s release of Mailsac’s frontend services includes many fixes.

We tend to release weekly with no downtime and no fanfare. Occasionally we’ll post updates here.

fix: infrequent crash on logout

Most notable is a fix to a UI crash when logging out. In certain situations, you may have seen an error message, even though the logout was successful.

Prettier and more usable charts

We heard your feedback: the usage charts had been aging, and the styling sometimes made them hard to read.

Some additional PostgreSQL optimizations are coming soon, and we’ll continue reducing chart load times. Thanks for your patience!

Dependency upgrades

For better or worse, modern software stacks have huge numbers of dependencies. Staying ahead of security issues is a daily effort.

At Mailsac we use security scans, Dependabot, and npm audit to stay on top of upgrades. Hundreds of automated tests run to give us confidence that minor and patch semver updates don’t introduce breaking changes.

We also subscribe to security mailing lists for the software we rely on, and more.

Plus-addressing is supported by all Mailsac inboxes

When you send to any inbox @mailsac.com and a plus symbol (+) is included, we remove that symbol and everything after it.

jeff+12345asdf@mailsac.com

will be delivered to

jeff@mailsac.com

Many email services, including Gmail, iCloud, and Fastmail, support stripping the plus symbol and everything after it in the local part of the address (everything before the @ symbol).
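To make the behavior concrete, here is a minimal Go sketch of the stripping rule (an illustration only, not Mailsac’s actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// stripPlus drops a "+suffix" from the local part of an email address,
// mirroring the delivery behavior described above.
func stripPlus(addr string) string {
	at := strings.LastIndex(addr, "@")
	if at < 0 {
		return addr // not an email address; leave it alone
	}
	local := addr[:at]
	if plus := strings.Index(local, "+"); plus >= 0 {
		local = local[:plus] // remove the + and everything after it
	}
	return local + addr[at:]
}

func main() {
	fmt.Println(stripPlus("jeff+12345asdf@mailsac.com")) // jeff@mailsac.com
}
```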

Plus-addressing has long been a useful feature for segmenting user accounts across services. At Mailsac we offer a variety of disposable email and forwarding utilities designed for software QA engineers and developers. Things like forwarding all messages in a domain to a single address, or automatically routing email to webhooks or Slack, are really easy – and may not even require DNS setup.

Mailsac is not affected by Log4J CVEs

Tech news has recently been full of CVEs related to a popular JVM logging library named Log4J.

Mailsac services do not rely on JVM languages, including Java. This extends through the entire stack: custom apps, self-hosted open source software, internal and external services, infrastructure, proxies, and scripts.

There is one exception – an instance of the CI server Jenkins, which is isolated behind a VPN and, according to troubleshooting steps from the Jenkins developers, was never vulnerable.

Mailsac and Security

The Mailsac Team is small yet mighty, with decades of experience taking security seriously. We follow best practices for infrastructure-as-code, patching, testing, network isolation, backups, restoration, and the principle of least access. Large enterprises including banks and government agencies trust Mailsac for disposable email testing. We provide exceptionally fast and predictable REST and Web Socket APIs with an excellent uptime record.

Mailsac has support for multiple users under the same account, so you can keep disposable email testing private within your company.

It’s free to test email immediately – no payment details required. You can send email to any address @mailsac.com and confirm delivery in seconds without even logging in. Start now at mailsac.com.

A new open source counting and throttling server: say hello to dracula

dracula is a high performance, low latency, low resource counting server with auto-expiration.

The Mailsac engineering team recently released our internal throttling service, dracula, under an open source license. Check it out on GitHub. In the repo we prebuild server and CLI binaries for macOS and Linux, and provide a client library for Go.

Dracula has performed extremely well for us in production, running on ARM64 in AWS. It handles thousands of requests per second without noticeable CPU spikes, while maintaining low memory usage.

In this blog post we’re going to give an overview of why it was necessary, explain how it works, and describe dracula’s limitations.

Why we made it

For the past few years Mailsac tracked throttling in a PostgreSQL unlogged table. Using an unlogged table meant we avoided lots of disk writes, giving up the safety provided by the write-ahead log. Throttling records are only kept for a few minutes, so we figured that if Postgres rebooted, losing the past few minutes of throttling records would be the least of our worries.

In the months leading up to replacing this unlogged table with dracula, we began hitting performance bottlenecks. Mailsac has experienced fast growth over the past few years. Heavy sustained inbound mail resulted in lots of CPU time while Postgres vacuumed the throttling tables. The throttling table started eating too many CPU credits in AWS RDS – credits we needed for more important stuff, like processing emails.

We needed a better throttling solution – one that could independently protect inbound mail processing and REST API services. Postgres was also the primary data store for parsed emails. The Postgres-based throttling was part of a multi-tiered approach – especially against bad actors – and helped keep our website and REST API snappy, even when receiving a lot of mail from questionable sources. The throttling layer also caches customer data so we can distinguish paying users from unknown users. Separating this layer from the primary data store would help them scale independently.

Can Redis do it?

So it was time to add a dedicated throttle cache. We reached for Redis, the beloved data structure server.

We were surprised to find our use case – counting quickly-expiring entries – is not something Redis does very well.

Redis can count items in a hash or list. Redis can return keys matching a pattern. Redis can expire keys. But it can’t expire list or hash item entries. And Redis can’t count the number of keys matching a pattern – it can only return those keys which you count yourself.

What we needed Redis to do was count items matching a pattern while also automatically expiring old entries. Since Redis couldn’t do this combination of things, we looked elsewhere.
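To illustrate the gap, here is roughly what the closest Redis workaround looks like, sketched with the go-redis client (github.com/redis/go-redis/v9); the key names are hypothetical. Per-entry expiry forces one key per event, and counting then means fetching every matching key:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// One key per event is the only way to get per-entry expiry.
	rdb.Set(ctx, "throttle:203.0.113.7:1638316800", 1, time.Minute)

	// Counting requires fetching all matching keys and counting them
	// client-side. KEYS scans the whole keyspace and blocks the server.
	keys, err := rdb.Keys(ctx, "throttle:203.0.113.7:*").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println("events in the last minute:", len(keys))
}
```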

Other utility services seemed too heavy and feature-laden for our needs. We could have stood up a separate Postgres instance, or used MongoDB, ElastiCache, or Prometheus. The team has experience running all of these services. But the team also knows that the more features and knobs a service has, the more context is needed to debug it: the more expertise to understand its nuances, the more risk you’ll actually use the additional features, and the more risk you’ll be slow responding to issues under heavy load.

All we wanted to do was put some values in a store, have them expired automatically, and frequently count them. We’d need application level logic to do at least some of this, so we made a service for it – dracula. Please check it out and give it a try!

How it works under the hood

Dracula is a server where you can put entries, count the entries, and have the entries automatically expire.

The dracula packet layout is as follows. See protocol.go for the implementation.

| Section | Size | Example |
| --- | --- | --- |
| Command character (Put, Count, Error) | 1 byte | byte('P'), 'C', 'E' |
| space | 1 byte | byte(' ') |
| xxhash of pre-shared key + id + namespace + data | 8 bytes | 0x1c330fb2d66be179 |
| space | 1 byte | byte(' ') |
| Client Message ID | 4 bytes, unsigned 32 bit integer (Little Endian) | 6 or []byte{6, 0, 0, 0} |
| space | 1 byte | byte(' ') |
| Namespace | 64 bytes | "Default" or "anything" up to 64 bytes |
| space | 1 byte | byte(' ') |
| Entry data | remaining 1419 bytes | 192.169.0.1, or any string up to end of packet |

1500 byte dracula packet byte order

Here’s roughly how the dracula client-server model works:

  1. The client constructs a 1500 byte packet containing a client-message ID, the namespace, and the value they want to store in the namespace (to be counted later).
  2. A hash of the pre-shared secret + message ID + namespace + entry data is set inside the front part of the message.
  3. A handler is registered under the client message ID.
  4. The bytes are sent over UDP to the dracula server.
  5. Client is listening on a response port.
  6. If no response is received before the message times out, a timeout error is returned and the handler is destroyed. If the response comes after the timeout, it’s ignored.
  7. Server receives packet, decodes it and checks the hash which contains a pre-shared secret.
  8. Server performs the action. There are only two commands – either Put a namespace + entry key, or Count a namespace + entry key.
  9. Server responds to the client using the same command (Put or Count). The entry data is replaced with a 32 bit unsigned integer in the case of a Count command. The hash is computed similarly to before.
  10. Client receives the packet, decodes it, and confirms the response hash.
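To make the packet layout concrete, here is a sketch of encoding a Put packet in Go, derived only from the table above. The hash-input ordering and the byte order of the hash field are assumptions, and the xxhash library shown (github.com/cespare/xxhash/v2) may differ from what dracula actually uses – treat protocol.go as the source of truth.

```go
package main

import (
	"encoding/binary"
	"net"

	"github.com/cespare/xxhash/v2"
)

const packetSize = 1500

// buildPacket lays out a dracula message per the table above:
// cmd(1) sp(1) hash(8) sp(1) msgID(4, LE) sp(1) namespace(64) sp(1) data(1419).
func buildPacket(cmd byte, secret string, msgID uint32, namespace, data string) []byte {
	b := make([]byte, packetSize)
	b[0] = cmd // 'P' (Put) or 'C' (Count)
	b[1] = ' '

	id := make([]byte, 4)
	binary.LittleEndian.PutUint32(id, msgID)

	// xxhash over pre-shared key + message id + namespace + entry data
	h := xxhash.New()
	h.WriteString(secret)
	h.Write(id)
	h.WriteString(namespace)
	h.WriteString(data)
	binary.LittleEndian.PutUint64(b[2:10], h.Sum64()) // byte order assumed

	b[10] = ' '
	copy(b[11:15], id)
	b[15] = ' '
	copy(b[16:80], namespace) // namespace capped at 64 bytes
	b[80] = ' '
	copy(b[81:], data) // entry data fills the remaining 1419 bytes
	return b
}

func main() {
	conn, err := net.Dial("udp", "127.0.0.1:3509") // default server port
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	pkt := buildPacket('P', "pre-shared-secret", 1, "Default", "192.169.0.1")
	if _, err := conn.Write(pkt); err != nil {
		panic(err)
	}
}
```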

Data structures

Dracula uses a few data structures for storing data.

Namespaces are stored in a hashmap provided by github.com/emirpasic/gods, and we use a simple mutex to synchronize multithreaded access. Entries in each namespace are stored in a wrapped AVL tree from the same repo, to which we added garbage collection and thread safety. Each node of the AVL tree holds an array of sorted dates.

Here’s another view:

  • dracula server
    • Namespaces (hashmap)
      • Entries (avltree)
        • sorted dates (go slice / dynamic array of int64)
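A toy Go model of that hierarchy (a plain map plus mutex standing in for the gods hashmap and AVL tree) shows why keeping the timestamps sorted makes counting-with-expiry cheap:

```go
package main

import (
	"sort"
	"sync"
	"time"
)

// store models dracula's hierarchy: namespace -> entry -> sorted unix times.
// The real server wraps an AVL tree from github.com/emirpasic/gods.
type store struct {
	mu         sync.Mutex
	expiry     time.Duration
	namespaces map[string]map[string][]int64
}

func newStore(expiry time.Duration) *store {
	return &store{expiry: expiry, namespaces: map[string]map[string][]int64{}}
}

// Put records one hit for an entry; appends keep the slice sorted
// because time only moves forward.
func (s *store) Put(ns, entry string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	entries, ok := s.namespaces[ns]
	if !ok {
		entries = map[string][]int64{}
		s.namespaces[ns] = entries
	}
	entries[entry] = append(entries[entry], time.Now().Unix())
}

// Count expires old timestamps (as dracula does on Count commands) and
// returns the number of live hits.
func (s *store) Count(ns, entry string) int {
	s.mu.Lock()
	defer s.mu.Unlock()
	times := s.namespaces[ns][entry]
	cutoff := time.Now().Add(-s.expiry).Unix()
	// binary search is possible because the slice is sorted
	i := sort.Search(len(times), func(i int) bool { return times[i] >= cutoff })
	live := times[i:]
	if entries, ok := s.namespaces[ns]; ok {
		entries[entry] = live
	}
	return len(live)
}
```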

Server configuration

When using dracula, the client has a different receiving port than the server. By default the dracula server uses port 3509. The server will write responses back to the same UDP port it received messages from on the client.

Messages are stored in a “namespace” which is pretty much just a container for stored values. The namespace is like a top-level key in Redis. The CLI has a default namespace if you don’t provide one. The Go client requires choosing a namespace.

Namespaces and entries in namespaces are exact – dracula does not offer any matching on namespaces.

At Mailsac, we use namespaces to separate messages on a per-customer basis, and to separate free traffic. Namespaces are intentionally generic. You could just use one namespace if you like, but performance under load improves if entries are bucketed into namespaces.

Production Performance

Dracula is fast and uses minimal resources by today’s standards.

We develop dracula on Intel, but in production we run it on ARM64 architecture under Amazon Linux for significant savings.

In its first months of use, dracula did not spike above 1% CPU usage or 19 MB of RAM, even when handling single-digit thousands of requests simultaneously.

Tradeoffs

By focusing on a small subset of needs, we designed a service with sharp edges. Some of these may be unexpected, so we want to enumerate what we know.

It only counts

It’s called dracula as an allusion to Count Dracula. There’s no way to list namespaces or keys, nor to return stored values. Entries in a namespace can be counted, and the number of namespaces can be counted. That’s it! If we provided features like listing keys or namespaces, we would have needed to change the name to List Dracula.

No persistence

Dracula is designed for short-lived ephemeral data. If dracula restarts, nothing is currently saved. Persistence may be considered in the future, though – storing metrics or session data in dracula is an interesting idea. On the other hand, we see no need to reinvent Redis or Prometheus.

Small messages

An entire dracula protocol message is 1500 bytes. If that sounds familiar, it’s because 1500 bytes is the typical network maximum-transmission-unit, so a message fits in a single UDP datagram. Namespaces are capped at 64 bytes and values at 1419 bytes; anything longer is cut off.

Same expiry

All namespaces and entries in the entire server have the same expire time (in seconds). It shouldn’t be too difficult to run multiple draculas on other ports if you have different expiry needs.

HA

The approach to high-availability assumes short-lived expiry of entries. A pool of dracula servers can replicate to one another, and dracula clients track health of pool members, automatically handling failover. Any client can read from any server, but in the case of network partitioning, consistency won’t be perfect.

Retries

Messages that fail or time out are not currently retried by the dracula client. There’s nothing stopping the application level from handling this, and it may be added as an option later.

Garbage

While we have not yet experienced issues with dracula’s garbage collection, it’s worth noting that it exists. A subset of entries is crawled and expired on a schedule, and on Count commands old entries are expired. The entire store is typically not locked, but you would likely see a little slowdown while GC is running, when counting entries in very large namespaces or when there are a lot of old entries to clean up. In our testing it’s on the order of single-digit milliseconds, but this can be expected to grow linearly with size.

Unknown scale

We’re working with low tens of thousands of entries per namespace, maximum. Above that, we’re unsure how it will perform.

Language support

Upon release, dracula has a reference client implementation in Golang. Node.js support is on our radar but not finished. Please open an issue in the dracula repo to request support for a specific language. We’d be thrilled to receive links to community-written drivers as well.

What’s next?

Hopefully you enjoyed learning a little bit about dracula and are ready to give it a try. Head over to GitHub (https://github.com/mailsac/dracula) where we’ve added examples of using the server, client library, and CLI.

Check out the roadmap of features, currently tracked in the Readme.

Finally, Mailsac would love your feedback. Open a Github issue or head to forum.mailsac.com. If you’d like to see additional library languages supported, let us know.

Happy counting!

Easy purging of inboxes

We’ve listened to your feedback. This week we released new functionality to delete all the messages in an inbox.

Purging an inbox can be accomplished in two ways:

  • programmatically, via the REST API DELETE /api/addresses/:email/messages route
  • by clicking the new “Purge Inbox” button

Here’s an example of clicking the Purge Inbox button, instantly recycling over 80 messages:

Why were all the messages not deleted?

Starred messages (savedBy in the REST API) will not be purged.

If you want to completely clear an inbox, we recommend un-starring those messages first, then purging. Or use the existing single-message deletion feature to delete a starred message – there is a button for deleting messages on the inbox page, and the REST API route to delete a single message is DELETE /api/addresses/:email/messages/:messageId.
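As a sketch, purging over the REST API could look like this in Go (the address and API key are placeholders; the Mailsac-Key header is the same one used for the other API examples on this blog):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// DELETE /api/addresses/:email/messages purges unstarred messages.
	req, err := http.NewRequest(http.MethodDelete,
		"https://mailsac.com/api/addresses/jeff@mailsac.com/messages", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Mailsac-Key", "your-api-key")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// Starred (savedBy) messages remain after the purge.
	fmt.Println("status:", resp.Status)
}
```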

Using Mailsac for Shared QA Email Accounts

When building software-as-a-service, several pre-production environments are often in play.

Developers, product managers, and QA engineers work together to test software in the various environments.

But there’s confusion around which user accounts can be used in which environments, and which accounts have the right permissions for testing. And your test environments don’t map 1:1 with 3rd party services. It’s hard to know whether you tested the right thing.

Mailsac lets you create disposable email accounts within a private custom domain – temp email addresses to share with the team. The result is less effort keeping testing-environment accounts separate, and no user collisions with third party providers.

Common Environment Setup Example

A QA team may have a test environment called “UAT” and developers have a different test environment called “Staging.”

The infrastructure might map to URLs with different subdomains like:

  • uat.example.com – QA team
  • staging.example.com – Developers
  • app.example.com – Production (customers)

where each subdomain has a completely separate database with a users table.

However, our sample app uses a 3rd-party identity provider (such as Amazon Cognito, ForgeRock, Auth0, etc.). The identity provider only has two environments:

  • test-identity.example.com – All non-production usage (UAT, Staging)
  • identity.example.com – Production (customers)

Furthermore, our app uses Stripe, which also has only two environments:

  • Stripe Test Mode – All non-production usage (UAT, Staging)
  • Production Mode – Production (customers)

One can imagine a users database table with the following properties:

  • users.id int, primary
  • users.email text, unique
  • users.identity_provider_id text, unique, corresponds to the Identity Provider
  • users.stripe_customer_id text, unique, corresponds to the Stripe Customer ID
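In application code, that schema might map to a struct like this (a hypothetical sketch; field names follow the columns above):

```go
// User mirrors the example users table.
type User struct {
	ID                 int    // users.id, primary key
	Email              string // users.email, unique
	IdentityProviderID string // users.identity_provider_id, unique
	StripeCustomerID   string // users.stripe_customer_id, unique
}
```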

Such a setup is common. Problems begin brewing when using the same email address in multiple environments.

Password issues with shared email addresses

A QA person wants to test their app. They sign up with qa@example.com in UAT. qa@example.com was created by a friendly sysadmin at their company. It is a real email inbox. The company pays a few bucks per month for the inbox, and it isn’t easily accessible by anybody else. Where’s that password again? Oh, you asked Dave from IT to reset the email password? Oh, you mean only the UAT app password was changed? The new password should be in a spreadsheet… oops, somebody reset it and didn’t update the spreadsheet? It doesn’t look like I have access to the qa@example.com inbox. Wait a second… the Dev team is also using it?

Identity Provider Clash, Stripe duplicate

The UAT person uses qa@example.com and creates the user account with the Identity Provider, linking the identity_provider_id to their user in UAT. They also link the Stripe customer.

| id | email | identity_provider_id | stripe_id |
| --- | --- | --- | --- |
| 22 | qa@example.com | idp_q7e4 | cus_t6n |

UAT users table

But then a developer in Staging attempts the same action, gets blocked by the identity provider, and duplicates the customer in Stripe with the same email address, making the tracking of financial transactions overly complicated. UAT and Staging also end up with different user ids.

| id | email | identity_provider_id | stripe_id |
| --- | --- | --- | --- |
| 19 | qa@example.com | NULL (failed) | cus_yb1 |

Staging users table

It is possible the same password is used for qa@example.com with the identity provider, so both UAT and Staging are able to log in. But the identity_provider_id will need to be manually set to match both environments, and it will never match the users.id column.

Let’s add one more common layer: role based permissions.

Developer 1 sets up qa@example.com with an elevated admin role in Staging, while the QA team expects the same shared account to be a regular user in UAT – so test results now depend on which environment touched the account last.

These are just a few of the problems with using a limited number of shared credentials for testing software.

Using Mailsac for Test User Accounts

A software team and QA team can share a Mailsac Business account to add nearly unlimited email addresses, and apply special features to up to 50 private addresses across 5 custom domains (more via addons). Because Mailsac allows any custom subdomain of *.msdc.co, it may not even be necessary to involve an IT department to configure DNS.

QA team sets up example-uat.msdc.co.

The QA team will create 10 private addresses with specific purposes, such as a user they will configure in uat.example.com with elevated admin permissions.

Next, the Dev team can do something similar, but with a different custom domain and different private email addresses.

Setting up a bunch of private addresses is simple and included with any paid plan. It can help prevent test credential collisions.

Random Inboxes and API Keys

It is not even necessary to set up private addresses, as done above, to receive email.

With a custom domain, any Developer or QA person can send email to any address in the domain without needing to create it first. Then they can check the mail with a personal API key.

The Business Plan allows creating multiple custom API keys:

API Key management in Mailsac

To make a random address, generate a random string:

openssl rand -hex 4 yields de692e19 (for example)

and prefix it to your custom domain:

de692e19@example-uat.msdc.co

Assume Greg’s API key is: wv6OCCXE4svjxuv7sOsCBA (note: never share these!)

He can easily check the inbox using the following URL scheme:
https://mailsac.com/inbox/de692e19@example-uat.msdc.co?_mailsacKey=wv6OCCXE4svjxuv7sOsCBA

Or get messages as JSON:

curl --header 'Mailsac-Key: wv6OCCXE4svjxuv7sOsCBA' https://mailsac.com/api/addresses/de692e19@example-uat.msdc.co/messages

which returns an array of messages including any links to be clicked.

[{
  "_id": "m77238f-0",
  "inbox": "[email protected]",
  "subject": "Confirm your account or something",
  //.........
  "links": ["https://app.example.com/confirm-account/iOZifOYkLX5qFfEo"]
}]
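The same check can be scripted in Go; this sketch decodes only the fields shown above and prints any confirmation links:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// message maps the subset of fields shown in the JSON above.
type message struct {
	ID      string   `json:"_id"`
	Inbox   string   `json:"inbox"`
	Subject string   `json:"subject"`
	Links   []string `json:"links"`
}

func main() {
	req, err := http.NewRequest(http.MethodGet,
		"https://mailsac.com/api/addresses/de692e19@example-uat.msdc.co/messages", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Mailsac-Key", "wv6OCCXE4svjxuv7sOsCBA")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var msgs []message
	if err := json.NewDecoder(resp.Body).Decode(&msgs); err != nil {
		panic(err)
	}
	for _, m := range msgs {
		fmt.Println(m.Subject, m.Links) // e.g. the confirm-account link
	}
}
```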

Concluding remarks

We hope this guide provides an overview of how software teams are using Mailsac to simplify testing.

Thousands of enterprises and software project teams use Mailsac to test their environments and manage “known good” test accounts for their SaaS.

Start for free instantly

(Resolved) Service degradation due to apparent attack

Beginning at 2:36 AM US Pacific time, Mailsac internal monitoring indicated slowness due to an abnormally large amount of spam coming from China. By approximately 6:30 AM we had identified all root causes and believe the issue is resolved.

Our service employs several methods of blocking, shaping, and throttling egregious traffic from unpaid users. This particular attack worked around these automatic mitigations, in part because the attackers opened thousands of sockets and left them open for a long time, exploiting a loophole in our SMTP inbound receiver code.

Here is a graph of our inbound message rate showing the attack compared to baseline.