Performance Optimization for Avo


Being a library (a Rails engine), Avo is a special case when it comes to performance optimization, because the optimization surface is confined by its interface with the host app.

This limits the scope of a performance audit, which made it all the more interesting. General database optimizations, for example, are naturally out of scope: the bulk of database work has to be done by the host app's developers. There is, however, some leverage in Rails internals, as we shall see.

Memory and time benchmarks of Ruby code revealed some potential, but turned out to be only the icing on the cake, as shown below. Still, some best practices, like more aggressive memoization, were distilled from them.

The biggest potential, as expected, was found in frontend improvements, because the frontend is decoupled from whatever parameters (such as caching) the host application provides.

Performing the Audit

Database: N+1 Detection

Cases of N+1 offenses are typically found on index pages. The /users, /posts, and /projects routes of the included sample app were inspected to spot any such instances. In general, Avo has a solid setup for dealing with N+1 issues, but it warrants a closer look nonetheless.

First, displaying Active Storage attachments inadvertently leads to N+1 queries on the associated Attachment and Blob models. The Active Storage API provides a remedy in the form of with_all_... scopes, but applying them generically in an admin backend can of course be tricky.

Furthermore, a few N+1 occurrences were found in show routes, where associations should be eager loaded.
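As a sketch of what such eager loading can look like, Avo resources let you declare the associations to preload for index queries (the association and attachment names below are illustrative, not taken from the audited app):

```ruby
# app/avo/resources/post_resource.rb -- association names are illustrative
class PostResource < Avo::BaseResource
  # Eager load the associations rendered on the index page to avoid N+1.
  self.includes = [:user, :comments]
end

# For Active Storage, Rails generates a with_attached_<name> scope per
# attachment, e.g. Post.with_attached_cover_image, which preloads the
# Attachment and Blob records with a couple of extra queries instead of 2N.
```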

Memory Profiling

In the context of any Rails app, memory management can make a huge difference in how many app server workers you will be able to deploy. As such, it is critical, especially for a library like Avo, not to add too much to the host app's memory footprint.

Using memory_profiler, some of the more complex resources from the sample app (Post, Project, User) were scrutinized.
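The memory_profiler gem reports allocations broken down per gem and per file. As a dependency-free stand-in for illustration, a raw allocation count for a block of code can be taken from Ruby's GC statistics (this is a simplified sketch, not what the gem does internally):

```ruby
# Count the objects allocated while running a block, using GC statistics.
def allocations_for
  GC.disable # keep the garbage collector from skewing the counter
  before = GC.stat(:total_allocated_objects)
  yield
  GC.stat(:total_allocated_objects) - before
ensure
  GC.enable
end

# Building 1,000 small strings allocates at least 1,000 objects.
count = allocations_for { 1_000.times.map { "x" * 8 } }
```

Wrapping a resource's rendering path in such a block gives a quick first signal before reaching for the full profiler report.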

Frontend: Lighthouse

To get a broad overview of Avo's frontend performance, several Lighthouse scans were conducted. To arrive at more realistic latency figures, the network requests were piped through an Ngrok tunnel. Below is an example result of a Lighthouse audit:

A lighthouse audit with a performance score of 81

Results

Database

The N+1 offenses found on the User and Post index routes could be eliminated and the response times reduced.

The oha load testing tool was used to measure the response times.

As an example, below is a before-and-after comparison for the Post index action:

A response time comparison using two terminal panes

Similarly, show routes benefited from eager loading their declared associations. Here's a comparison of before and after the optimization for the Post show route, showing slight improvements in the higher percentiles:

A response time comparison using two terminal panes

Memory

Some low-hanging fruit for optimization was discovered during the memory profiling process.

For example, the BaseResource#file_hash method read the whole resource file from disk every time a hash had to be calculated for caching. Memoizing the file contents (which are not bound to change during a single app deployment) not only led to 1.64 times less memory being allocated, but also to 1.44 times faster execution.
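The pattern can be sketched like this (the class and the MD5 digest are simplified stand-ins, not Avo's actual implementation):

```ruby
require "digest"
require "tempfile"

# A resource file on disk does not change during a single deployment,
# so its digest can be memoized instead of re-read on every call.
class ResourceFile
  def initialize(path)
    @path = path
  end

  # Before: hits the filesystem on every cache-key calculation.
  def file_hash_uncached
    Digest::MD5.hexdigest(File.read(@path))
  end

  # After: read and hash once, then reuse the stored value.
  def file_hash
    @file_hash ||= Digest::MD5.hexdigest(File.read(@path))
  end
end

file = Tempfile.new("post_resource")
file.write("class PostResource; end")
file.flush

resource = ResourceFile.new(file.path)
first  = resource.file_hash # reads and hashes the file once
second = resource.file_hash # served from @file_hash, no disk access
```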

Similarly, some variables like class_name that are used for metaprogramming were not memoized. Doing so reduced the memory consumption of this accessor by a factor of 12.48 and made it 7.91 times faster on average.
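The same ||= idiom applies here; a simplified sketch (the real accessor in Avo derives more from the class than this):

```ruby
# Memoize a derived name used for metaprogramming: the string
# computation runs once per class instead of on every access.
class BaseResource
  def self.class_name
    @class_name ||= name.split("::").last
  end
end

module Avo
  class PostResource < BaseResource; end
end

Avo::PostResource.class_name # computed once: "PostResource"
```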

When collecting these metrics and improvements, it is easy to get too enthusiastic too soon, though. Keep in mind that these are microbenchmarks, and the real meat of memory allocation lies elsewhere. Still, techniques like memoization are tried and tested and should be employed wherever they make sense.

Frontend

Mainly, issues regarding images stood out (the text-compression warnings and the like stem from the audit not being run in a production environment).

Lazy loading off-screen images while preserving their aspect ratio (to avoid layout shift), plus preloading the hero image, resulted in a large boost to the performance score:

A lighthouse audit with a performance score of 98
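In a Rails view, the two fixes can be sketched with standard helpers (file names and dimensions are illustrative):

```erb
<%# Preload the hero image so the browser fetches it early. %>
<%= preload_link_tag image_path("hero.jpg"), as: :image %>

<%# Explicit dimensions reserve the aspect ratio and avoid layout shift. %>
<%= image_tag "hero.jpg", width: 1200, height: 600 %>

<%# loading: "lazy" defers off-screen images via native lazy loading. %>
<%= image_tag "gallery/photo.jpg", loading: "lazy", width: 400, height: 300 %>
```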

Julian is exemplary in going deep in the code to find the inefficiencies we overlooked. The fact that Avo is a library where the users have so much control makes this task even more challenging but we are grateful to have him review every aspect of it.

(Adrian Martin, Avo Founder)

Monitoring the Optimizations

Some of the mentioned optimizations were tried and evaluated locally, using a simple Prometheus backend with Grafana for visualization.

The green lines denote the mean, the yellow ones the 95th percentile (p95) of server response times. Evidently, the mean response time of the index route could almost be halved. The show route exhibits some improvement as well, mainly a more consistent distribution of response times.

A grafana chart juxtaposing response times in before and after the optimization

At the right margin of the chart below, both database and memory optimizations were combined and measured again. Memory optimizations were slightly detectable, but mostly insignificant. Most noticeably, though, the timings of the /posts index action are far more consistently lower than in the non-memory-optimized case to the left.

A grafana chart juxtaposing response times in before and after the optimization

All in all, this audit was able to deliver actionable To-Dos to efficiently improve Avo's performance in the most significant spots.
