
As a follow-up to the previous article I want to show you what a micro-optimization can look like. My colleague Miroslav Smiljanic found that there is a significant difference in the time it takes to execute statements (1) and (2).
Node node = …
Session session = node.getSession();
String parentPath = node.getParent().getPath();

Node p1 = node.getParent();              // (1)
Node p2 = session.getNode(parentPath);   // (2)

assertEquals(p1, p2);
He did the whole write-up in the context of a suggested improvement in Sling, and proved it with impressive numbers.
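If you want to get a rough feeling for the difference yourself, a minimal timing sketch could look like the following. This is my own simplified sketch, not Miroslav's benchmark; the class name and the ITERATIONS constant are assumptions, and for serious measurements you would use a proper harness such as JMH instead of hand-rolled loops.

import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;

public class ParentLookupTiming {

    private static final int ITERATIONS = 100_000; // arbitrary, just to amortize noise

    public static void compare(Node node) throws RepositoryException {
        Session session = node.getSession();
        String parentPath = node.getParent().getPath();

        // (1) resolve the parent via Node.getParent()
        long start = System.nanoTime();
        for (int i = 0; i < ITERATIONS; i++) {
            Node p = node.getParent(); // result intentionally discarded in this sketch
        }
        long viaGetParent = System.nanoTime() - start;

        // (2) resolve the parent via Session.getNode(absolutePath)
        start = System.nanoTime();
        for (int i = 0; i < ITERATIONS; i++) {
            Node p = session.getNode(parentPath);
        }
        long viaGetNode = System.nanoTime() - start;

        System.out.printf("getParent(): %d ms, session.getNode(): %d ms%n",
                viaGetParent / 1_000_000, viaGetNode / 1_000_000);
    }
}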
Is this change important? By itself it is not, because traversing the resource/node tree upwards is not that common compared to traversing it downwards. So replacing a single call might only yield an improvement of a fraction of a millisecond, even if case (2) is up to 200 times faster than (1)!
But if we can replace the slow getParent() call with the performant variant everywhere it is used, especially in the low-level areas of AEM and Sling, all areas might benefit from it. And then we don't execute it only once per page rendering, but maybe a hundred times. And then we might already end up with tens of milliseconds of improvement, for any request!
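To make the replacement pattern concrete, a hypothetical helper could look like this. The class and method names are made up for illustration, and the path handling is the straightforward string-based variant; it is a sketch of the idea, not code taken from Sling.

import javax.jcr.ItemNotFoundException;
import javax.jcr.Node;
import javax.jcr.RepositoryException;

public final class NodeUtils {

    private NodeUtils() {
    }

    /**
     * Returns the parent of the given node by resolving the parent path through
     * the session instead of calling node.getParent().
     */
    public static Node getParentViaSession(Node node) throws RepositoryException {
        String path = node.getPath();
        if ("/".equals(path)) {
            throw new ItemNotFoundException("The root node has no parent");
        }
        int lastSlash = path.lastIndexOf('/');
        String parentPath = lastSlash == 0 ? "/" : path.substring(0, lastSlash);
        return node.getSession().getNode(parentPath);
    }
}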
And in special use cases the effect might be even higher (for example, if your code constantly traverses the tree upwards).
Another example of such a micro-optimization, which is normally quite insignificant but can yield huge benefits in special cases, can be found in SLING-10269, where I found that built-in caching of the isResourceType() results reduces the rendering times of some special requests by 50%, because that check is performed thousands of times.
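The idea behind such a cache can be sketched in a few lines: memoize the answer per (resource type, candidate type) pair, so that repeated checks become cheap map lookups. This is a simplified illustration, not the actual SLING-10269 implementation; the class name and the key format are assumptions.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.sling.api.resource.Resource;

public class ResourceTypeCheckCache {

    // key: "<resourceType>|<candidateType>", value: cached result of the expensive check
    private final Map<String, Boolean> cache = new ConcurrentHashMap<>();

    public boolean isResourceType(Resource resource, String candidateType) {
        String key = resource.getResourceType() + "|" + candidateType;
        // only the first check per key walks the resource type hierarchy
        return cache.computeIfAbsent(key, k -> resource.isResourceType(candidateType));
    }
}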
Typically micro-optimizations have these properties:
- In the general case the improvement is barely visible (< 1% performance improvement)
- In edge cases they can be a life saver, because they reduce execution time by a much larger percentage.
These improvements accumulate over time, and that's where it gets interesting. When you have implemented 10 of them in low-level routines, the chances are high that your use case benefits from them as well. Maybe by 10 times 0.5% performance improvement, but maybe also by 20%, because you hit the sweet spot of one of them.
So it is definitely worth paying attention to these improvements.
My recommendation for you: Read the entry on the Oak "Do's and Don'ts" page and try to apply this learning in your codebase. And if you find more such cases in the Sling codebase, the community appreciates a ticket.
(Photo by KAL VISUALS on Unsplash)