Digest #201

sponsor

🦸🏻‍♀️ Not all DevOps heroes wear capes, but they do use Honeybadger for monitoring 🦸🏻‍♂️

Let’s face it, your app is going to throw an error at some point (maybe even more than once…gasp!). Honeybadger simplifies your production stack by combining exception monitoring, uptime monitoring, and check-in monitoring into a single, easy-to-use platform. It also integrates with the apps you use: Slack, PagerDuty, GitHub, and tons more. Honeybadger makes it easy for you to be a DevOps hero.

elixir

You may not need GenServers and Supervision Trees

The idea that GenServers and supervision trees are treated as so essential in the Elixir/Erlang world that it ends up deterring people from using these languages has been brewing in my mind for quite some time. In fact, I have written about it before on the Elixir Forum. This is a summary and extended version of that post (including some choice replies) to give it more visibility.
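For a flavour of the argument, here is a minimal sketch (mine, not the post’s code): a GenServer that merely wraps a pure computation funnels every caller through a single process, while a plain module does the same work with nothing to start, name, or supervise.

```elixir
# A GenServer wrapping a stateless computation: all callers are
# serialized through one process, for no benefit.
defmodule Pricing.Server do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)
  def init(:ok), do: {:ok, nil}

  def total(items), do: GenServer.call(__MODULE__, {:total, items})

  def handle_call({:total, items}, _from, state) do
    {:reply, Enum.sum(Enum.map(items, & &1.price)), state}
  end
end

# The plain-function version is simpler and runs concurrently
# in every caller's own process.
defmodule Pricing do
  def total(items), do: Enum.sum(Enum.map(items, & &1.price))
end
```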

Dealing with long-running HTTP Requests and Timeouts in Phoenix

Phoenix is fast and highly concurrent, capable of processing HTTP requests in less than a millisecond. Serving requests as fast as possible is the priority, but the reality is that processing a request can sometimes take too long. This forces Phoenix to trigger a timeout and close the connection.
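One common way to sidestep this (a sketch under assumed names such as MyApp.TaskSupervisor and MyApp.Reports, not the article’s code) is to move the slow work out of the request process and reply before the timeout fires:

```elixir
# Hypothetical controller: instead of blocking the request until a slow
# job finishes, start the work under a Task.Supervisor and respond
# immediately with 202 Accepted.
defmodule MyAppWeb.ReportController do
  use MyAppWeb, :controller

  def create(conn, params) do
    # The task runs outside the request process, so a slow report
    # no longer holds the connection open until Phoenix times out.
    Task.Supervisor.start_child(MyApp.TaskSupervisor, fn ->
      MyApp.Reports.generate(params)
    end)

    # The client can poll for the result or be notified later.
    send_resp(conn, 202, "report is being generated")
  end
end
```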

Chemanalysis: Dialyzing Elixir

No one wants to ship bugs in a production system, especially embarrassing ones! Dialyzer is a post-compilation type-checker that has found more bugs in my code than I can count, saving me a lot of time and frustration. This talk briefly discusses what Dialyzer is and how to use it in Elixir projects, then goes in-depth on three bugs it helped me find in the Elixir compiler and standard library.
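To make that concrete, here is the kind of bug Dialyzer flags (an illustrative example, not one from the talk): a @spec that promises {:ok, String.t()} while the body returns a bare binary.

```elixir
defmodule Greeter do
  @spec greet(String.t()) :: {:ok, String.t()}
  # Bug: returns a bare binary, not the {:ok, binary} the spec promises.
  # Dialyzer reports the invalid contract, since the success typing
  # (binary() -> binary()) never overlaps the declared return type.
  def greet(name), do: "hello " <> name
end
```

With the dialyxir package added as a dev dependency ({:dialyxir, "~> 1.0", only: [:dev], runtime: false}), running mix dialyzer surfaces this before it ships.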

Introducing Telemetry

“Let it crash” has been a long-running mantra in the BEAM world. While it is sometimes misinterpreted, there is merit to it: our software will do unexpected things, and more often than not the only viable choice is to crash and start over. But simply restarting parts of our application is not sufficient; we should understand what caused the error and handle it properly in future releases. We also need to know that the error occurred at all, and how it affected our customers! To enable both of these things, we need a way to introspect and analyze our app’s behaviour at runtime: we need monitoring.
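The library’s core API is small; here is a minimal sketch (the event and module names are illustrative): attach a handler function to an event name, then emit the event with measurements and metadata.

```elixir
# A handler module; :telemetry calls it with the event name,
# measurements, metadata, and the config passed at attach time.
defmodule MyApp.Metrics do
  def handle_event([:my_app, :repo, :query], measurements, metadata, _config) do
    IO.puts("query took #{measurements.duration} µs (source: #{inspect(metadata.source)})")
  end
end

:telemetry.attach(
  "log-db-queries",             # unique handler id
  [:my_app, :repo, :query],     # event name to listen for
  &MyApp.Metrics.handle_event/4,
  nil                           # handler config
)

# Somewhere in instrumented code, emit the event:
:telemetry.execute([:my_app, :repo, :query], %{duration: 1_200}, %{source: "users"})
```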

Organising Absinthe GraphQL and Ecto errors

I’ve been using Absinthe in my own project for some time now and wanted to share my thoughts on organising and working with errors in general, and with Ecto.Changeset errors in particular. In my opinion this topic is poorly covered: there is some guidance in the official documentation and numerous other places, which I have tried to unify here. It is also quite hard to figure out the exact common shape of possible API errors at the beginning; that understanding comes over time, once you have something in place.
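A common starting point (a sketch of one approach, not the article’s exact code) is to flatten Ecto.Changeset errors into plain messages, so every resolver returns errors in one predictable shape:

```elixir
defmodule MyAppWeb.ChangesetErrors do
  # Turn a changeset's error map into a flat list of
  # "field message" strings suitable for an Absinthe {:error, _} tuple.
  def format(%Ecto.Changeset{} = changeset) do
    changeset
    |> Ecto.Changeset.traverse_errors(fn {msg, opts} ->
      # Interpolate values such as %{count} into the message template.
      Enum.reduce(opts, msg, fn {key, value}, acc ->
        String.replace(acc, "%{#{key}}", to_string(value))
      end)
    end)
    |> Enum.map(fn {field, messages} ->
      "#{field} #{Enum.join(messages, ", ")}"
    end)
  end
end

# In a resolver (Accounts.create_user is a hypothetical context function):
#
# case Accounts.create_user(args) do
#   {:ok, user} -> {:ok, user}
#   {:error, changeset} -> {:error, MyAppWeb.ChangesetErrors.format(changeset)}
# end
```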