The more granular your HTTP REST endpoints are, the easier it becomes to implement aggressive caching, which results in fewer HTTP requests, simpler server-side code, less server load, and generally more scalable and responsive web apps. In the video below I illustrate how to make sure your HTTP endpoints take advantage of the “Cache-Control” HTTP header, and thus communicate “max-age” to your frontend, which allows the client to cache the results of your HTTP GET requests for a configurable number of seconds.
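To make the mechanism concrete, here is a minimal sketch of the two sides of “Cache-Control: max-age”. The helper names (`buildCacheControl`, `parseMaxAge`) are my own for illustration, not part of Magic or any framework:

```typescript
// What a server would attach to a cacheable GET response.
function buildCacheControl(maxAgeSeconds: number): string {
  return `public, max-age=${maxAgeSeconds}`;
}

// What a client (or an HTTP interceptor) would read back, to know
// how long a cached copy of the response stays fresh.
function parseMaxAge(header: string): number | null {
  const match = /max-age=(\d+)/.exec(header);
  return match ? parseInt(match[1], 10) : null;
}

// A "roles" endpoint can afford a very long max-age ...
const rolesHeader = buildCacheControl(86400);   // cache for a day
// ... while a "users" endpoint gets a short one.
const usersHeader = buildCacheControl(10);      // cache for 10 seconds

console.log(rolesHeader);               // "public, max-age=86400"
console.log(parseMaxAge(usersHeader));  // 10
```

Notice how the TTL is chosen per endpoint, which is exactly what granular endpoints make possible in the first place.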
The architecture of your server-side backend has a lot of consequences. Many developers want to return “rich graph objects” to their clients, and for libraries such as GraphQL, this capability is arguably the main feature. I am here to tell you that even though this can reduce the number of HTTP requests in the short term, it can also make it impossible to cache your HTTP requests, in addition to making the code that runs on your server unnecessarily complex, consuming large amounts of CPU time – Resulting in an app that becomes less responsive as it acquires more users.
An alternative is to rely upon what the developers behind GraphQL refer to as the “waterfall”, retrieving data in a granular fashion instead of returning “rich” graph objects. This allows you to implement caching on a “per table” basis. For instance, your “users” table is probably a table with frequent inserts and updates, while your “roles” table probably doesn’t see updates or inserts more than once a month, or maybe even *never* after its initial creation.
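The “per table” idea can be sketched as a tiny TTL cache where each table gets its own lifetime. Everything here (the `TtlCache` class, the `loadRoles` function) is a made-up illustration, not Magic’s actual implementation:

```typescript
type Entry = { value: unknown; expires: number };

// A minimal in-memory cache with a time-to-live per key.
class TtlCache {
  private entries = new Map<string, Entry>();

  // ttlSeconds can differ per table: seconds for "users", a day for "roles".
  get<T>(key: string, ttlSeconds: number, load: () => T, now = Date.now()): T {
    const hit = this.entries.get(key);
    if (hit && hit.expires > now) {
      return hit.value as T;           // fresh copy, no round-trip needed
    }
    const value = load();              // cache miss: hit the backend once
    this.entries.set(key, { value, expires: now + ttlSeconds * 1000 });
    return value;
  }
}

// "roles" almost never changes, so cache it aggressively.
const cache = new TtlCache();
let roundTrips = 0;
const loadRoles = () => { roundTrips++; return ["admin", "user", "guest"]; };

cache.get("roles", 86400, loadRoles);  // first call loads from the backend
cache.get("roles", 86400, loadRoles);  // second call is served from the cache
console.log(roundTrips);               // 1
```

The point is not the cache itself, but that the TTL decision only works when each endpoint maps cleanly to one table.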
If you return a graph object of your “users” containing the roles each user belongs to, this results in (at least) 2 SQL statements being evaluated towards your SQL database: one to select the user, and one to select the roles your user(s) belong to. If you return more than one user at a time, this might even result in 11 or more SQL statements being evaluated, just to return a simple list of 10 users with their associated roles. In addition, doing any amount of caching on the query string level becomes practically impossible without risking returning old and invalid data – Hence the result becomes that even though you wanted fewer HTTP requests and more scalability, you ended up with more HTTP requests and less scalability.
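To see where that statement count comes from, here is a fake database that simply counts SELECTs while building the graph object the naive way. The table contents and function names are invented for the sketch:

```typescript
let selects = 0;

// Ten users, as in the example above.
const users = Array.from({ length: 10 }, (_, i) => ({ id: i + 1 }));

function selectUsers() {
  selects++;                       // SELECT * FROM users
  return users;
}

function selectRolesFor(userId: number): string[] {
  selects++;                       // SELECT ... FROM roles WHERE user_id = ?
  return ["user"];
}

// Returning a "rich graph object" the naive way: one query for the users,
// plus one query per user for its roles.
const graph = selectUsers().map(u => ({ ...u, roles: selectRolesFor(u.id) }));

console.log(selects);              // 11 – the classic N+1 query problem
console.log(graph.length);         // 10
```

This is the well-known “N+1 query” pattern: the query count grows linearly with the number of users returned.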
If you instead retrieve all roles during startup of your application, you can reduce your “users” endpoint to only return data from your users table, and then decorate your users with their roles on the client side – Significantly reducing the server load, and allowing you to use a much more aggressive caching strategy. Of course, there are situations where you really, really need to return graph objects – But in my experience, this tends to be overused and abused by inexperienced developers, resulting in slower applications with less scalability, burning an unnecessary amount of CPU time as a consequence of wanting to return graph objects.
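Client-side decoration is then little more than a lookup in an in-memory map. The types and data below are made up to illustrate the shape of the approach:

```typescript
type Role = { id: number; name: string };
type User = { id: number; name: string; roleIds: number[] };

// Fetched once when the app boots, then kept in memory on the client.
const roles = new Map<number, Role>([
  [1, { id: 1, name: "admin" }],
  [2, { id: 2, name: "user" }],
]);

// The "users" endpoint now only returns rows from the users table.
const users: User[] = [
  { id: 42, name: "alice", roleIds: [1, 2] },
];

// Decoration happens on the client, costing the server nothing.
const decorated = users.map(u => ({
  ...u,
  roles: u.roleIds.map(id => roles.get(id)?.name ?? "unknown"),
}));

console.log(decorated[0].roles);   // ["admin", "user"]
```

Since the roles lookup was fetched once at startup (and can itself be cached with a long max-age), the hot “users” endpoint stays flat, cheap, and cacheable.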
It seems so simple, creating an AutoMapper association, a DTO mapping from your Entity Framework type to your View Model, hidden within some Data Repository. However, behind that “simple line of code”, a bunch of potential scalability problems often exists, resulting in an end product that becomes less scalable, even though your intention was to make it more scalable …
I’ll probably end up creating some sort of Angular HTTP service interceptor, doing this automagically for you – But at least for now, you can watch the above video to see how easily an extremely aggressive caching strategy can be implemented using Magic. And if you start looking at your database, I’ll be surprised if you don’t conclude that at least some 40-60 percent of your tables change so rarely that an extremely aggressive caching strategy is easily within reach for you – However, only as long as you don’t become tempted to return too complex graph objects from your server.
Premature optimisation is the root of all evil – Donald Knuth, the “father” of programming
And returning rich graph objects, to optimise your data strategy, is almost always premature optimisation …