Cluster aware coding in AEM

With AEM as a Cloud Service quite a number of things have changed; among others you now get real clustering support in the authoring environment. Which is nice, because it gives you zero-downtime authoring during deployments.

But this cluster also comes with a few gotchas, and one of them is that your application code needs to be cluster-aware. But what does that mean? What consequences does it have and what code do you have to change if you have never paid attention to this aspect?

The most important rule is “do every change only once”. It doesn’t make sense that 2 cluster nodes import the same set of data. A special version of this rule is “avoid concurrent writes to the same node”, which can happen when a scheduled job is kicked off at the same time on all nodes and tries to change something in the repository. In that case you don’t only create overhead, but very likely also a lot of exceptions.

And there is a similar aspect which you should pay attention to: connections to external systems. If you have a cluster running the same code and configs, it’s not always wanted that each cluster node reaches out to that external system. Maybe you need to update it with the latest content only once, because the update triggers some expensive processing on the remote side, and you don’t want that to be triggered two or three times, probably pretty much at the same time.

I have shown you 2 cases where a clustered application can behave differently from a single-node environment; now let me show you how you can make your application cluster-aware.

Scheduled jobs

Scheduled jobs are a classic tool to execute certain tasks at a certain time. Of course you could use the Sling Scheduler directly, but to make the execution more robust, you should wrap the task into a Scheduled Sling Job.

See the Sling Jobs website for the documentation and some examples (the Javadocs are missing the ScheduleBuilder class, but here’s the code). And of course you should check out Kaushal Mall’s post with even more examples.
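To sketch the idea (the topic, cron expression and class name are invented for this example, and the schedule could just as well be registered elsewhere), a scheduled Sling Job could look like this:

import org.apache.sling.event.jobs.Job;
import org.apache.sling.event.jobs.JobManager;
import org.apache.sling.event.jobs.consumer.JobConsumer;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(service = JobConsumer.class,
        property = { JobConsumer.PROPERTY_TOPICS + "=my/app/nightly/import" })
public class NightlyImportJobConsumer implements JobConsumer {

    private static final String TOPIC = "my/app/nightly/import";

    @Reference
    private JobManager jobManager;

    @Activate
    protected void activate() {
        // scheduled jobs are persisted, so avoid registering the same schedule twice
        if (jobManager.getScheduledJobs(TOPIC, 1).isEmpty()) {
            jobManager.createJob(TOPIC)
                    .schedule()
                    .cron("0 0 2 * * ?") // every night at 2am
                    .add();
        }
    }

    @Override
    public JobResult process(Job job) {
        // the actual work goes here; the job is executed on one cluster node only
        return JobResult.OK;
    }
}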

Jobs give you the guarantee that a job is executed at least once, and only on a single cluster node.

Use the Sling Scheduler only for very frequent jobs (e.g. once every 5 minutes), where it doesn’t matter if one execution is skipped, e.g. because the instance was just restarting. To limit the execution of such a job to a single node, you can annotate the job runner with this annotation:

@Property(name = "scheduler.runOn", value = "SINGLE")

(see the docs)
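With current OSGi DS annotations the same can be expressed as component properties; a minimal sketch (class name and cron expression are made up):

import org.osgi.service.component.annotations.Component;

@Component(service = Runnable.class,
        property = {
            "scheduler.expression=0 0/5 * * * ?", // every 5 minutes
            "scheduler.runOn=SINGLE"              // only on one node of the cluster
        })
public class FrequentCleanupTask implements Runnable {

    @Override
    public void run() {
        // frequent work where a skipped execution does not matter
    }
}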

What about caches?

In-memory caches are often used to speed up operations. Most often they contain the results of previous operations which are then reused; cache elements are either actively purged or expire using a time-to-live.

Normally such caches are not affected by clustering. They might contain different items with potentially different values on the individual cluster nodes, but that should never be a problem. If it is a problem, you have to look for a different approach, e.g. persisting the data to the repository (if it is not already coming from there) or externalizing the cache (e.g. to a Redis or memcached instance).
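For illustration, such a node-local cache with a time-to-live could be built with the Guava library (a minimal sketch; class and method names are invented, and every cluster node holds its own copy of the data):

import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class TitleCache {

    // entries expire 5 minutes after they have been written
    private final Cache<String, String> cache = CacheBuilder.newBuilder()
            .maximumSize(1000)
            .expireAfterWrite(5, TimeUnit.MINUTES)
            .build();

    public String getTitle(String path) {
        String title = cache.getIfPresent(path);
        if (title == null) {
            title = computeTitle(path); // the expensive operation we want to avoid
            cache.put(path, title);
        }
        return title;
    }

    private String computeTitle(String path) {
        return path; // placeholder for the real computation
    }
}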

Also, having a simpler application instead of the highest cache-hit ratio possible is often a good trade-off.

Ok, these were the topics I wanted to discuss here. But expect a blog post about one of my favorite topics: “Long running sessions and clustering”.

Slow deployments on AEM 6.4/6.5

A recent post on the AEM forums challenged me to look into an issue I observed myself but did not investigate further.

The observation is that during deployments the maintenance tasks are stopped and started a lot, and this triggers a lot of other activities, including a lot of healthcheck executions. This slows down the deployment and also pollutes the logfiles.

The problem is that the AEM Maintenance TaskScheduler is supposed to react to changes of some paths in the repository (where its configuration is stored), but unfortunately it also reacts to any change of ResourceProviders (and every Servlet is registered as a single ResourceProvider). And because each such change causes a complete reload/restart of the maintenance tasks (and of some healthchecks as well), it causes quite some delay.

But this behaviour is controlled via OSGI properties, which are missing by default, so we can add them on our own 🙂

Just create an OSGi configuration for com.adobe.granite.maintenance.impl.TaskScheduler and add a single multi-value property named “resource.change.types” with the values “ADDED”, “CHANGED” and “REMOVED”.
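In a content package this boils down to a single configuration file; a sketch in the Felix .config format (the /apps path is just an example):

/apps/myapp/config/com.adobe.granite.maintenance.impl.TaskScheduler.config:

resource.change.types=["ADDED","CHANGED","REMOVED"]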

(Please also report this behavior via Daycare and refer to GRANITE-29609, so we hopefully get a fix for it instead of having to apply this workaround.)

Writing unittests for AEM (part 4): mocking OSGi services

In the previous parts of this small series (part 1, part 2, part 3) I covered some basic approaches to using the Sling and AEM mocking libraries to ease writing unittests. The examples were quite basic and focused, but in reality many test cases turn out to be much more complex.

And especially when your code has dependencies on other OSGi services, tests can get tricky. So today I want to walk you through a unittest I wrote some time ago: the unittest for the EnsureOakIndex functionality (EnsureOakIndexJobHandlerTest).

The interesting part is that the EnsureOakIndex service references 4 other services in total; if they are not present, the EnsureOakIndex service will never start properly. Thus you have to fulfill all service requirements of an OSGi service in the unittest as well (at least if you want to use SlingContext like I do here).

The easiest way to solve this is to rely on predefined services which are part of SlingMocks or AemMocks. The second-best way is to create simple mocks and register them as services, so the dependencies are fulfilled. That’s definitely a convenient way if your tests do not invoke any of the service methods at all.

Thus the setup() method of my unittests is often pretty large, because there I prepare and register all the other services which I need to make my software-under-test work.
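As a sketch of such a setup() (the service and dependency names here are invented placeholders, not the actual ACS AEM Commons types):

import org.apache.sling.commons.scheduler.Scheduler;
import org.junit.Before;
import org.junit.Rule;
import org.mockito.Mockito;
import io.wcm.testing.mock.aem.junit.AemContext;

public class MyServiceTest {

    @Rule
    public final AemContext context = new AemContext();

    private MyService service;

    @Before
    public void setup() {
        // register mocks for all mandatory @Reference dependencies first
        context.registerService(Scheduler.class, Mockito.mock(Scheduler.class));
        context.registerService(MyDependency.class, Mockito.mock(MyDependency.class));

        // then instantiate and activate the service under test;
        // AemContext injects the previously registered services
        service = context.registerInjectActivateService(new MyService());
    }
}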

And because this setup works quite well and reliably, I always use AemContext for my unittests (or SlingContext; but as I have not yet observed any difference in test execution time, I often prefer AemContext because it comes with some more services). Only when I need neither resources, nor nodes, nor OSGi, I stick with plain JUnit. For everything else AemContext removes the necessity for a lot of manual mocking.

Optimizing Sling Models (updated)

A few days ago I found an interesting blog post at https://sourcedcode.com/blog/aem/aem-sling-model-field-injection-vs-constructor-injection-memory-consumption, which claims that constructor injection with Sling Models is much more memory-efficient than the “standard” field-based injection. According to the post, the constructor-injection approach “saves 1800% in bytes” (152 bytes vs 8 bytes in the example).

Well, that result is not correct, because the example implementations of the Sling Models used there are not identical: with field-based injection the references are available during the complete lifetime of the Sling Model, not just during the @PostConstruct method call, and thus these references consume memory.

With the constructor-based example, on the other hand, the references are only available during the constructor call; they are not available in any other method. If you want to achieve the same behavior as in the field-injection example, you have to store the references in fields, and then the memory consumption of that Sling Model increases accordingly.

But Justin Edelson correctly pointed out that you do gain from constructor-based injection if you need the references only in the constructor to compute some results (which are then stored in fields), and in no other method. That’s indeed a small optimization.
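A minimal sketch of this pattern (names are invented): the injected value is used only inside the constructor, and only the computed result is stored:

import javax.inject.Inject;
import javax.inject.Named;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.models.annotations.Model;

@Model(adaptables = Resource.class)
public class TitleModel {

    // only the computed result is stored, not the injected reference
    private final String displayTitle;

    @Inject
    public TitleModel(@Named("jcr:title") String title) {
        this.displayTitle = title != null ? title.trim().toUpperCase() : "";
    }

    public String getDisplayTitle() {
        return displayTitle;
    }
}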

But let’s be honest: if we are talking about an additional memory overhead of 100 bytes per complex Sling Model, that’s a negligible number, because it’s not typical that hundreds of these models are created per second. And even in that case, when they are created to render a page, the models are garbage collected as soon as the request is completed. It doesn’t matter whether 100 bytes more or less are allocated and collected; the overhead is normally not even measurable.

But well, you might hit the edge case, where this really makes a difference.

Update June 8th: I was informed that the referenced blog article has been updated. It now contains a more reasonable example which makes the Sling Models comparable; basically it now reflects the optimization Justin already mentioned. And the difference in object size is now only 40 bytes vs 24 bytes.

Best practices for AEM unittests

Some time ago I already wrote some posts (1, 2, 3) about unit testing with AEM, especially in combination with SlingMocks / AEM Mocks.

In the last months I also spent quite some time improving the unittests of ACS AEM Commons, mostly in the context of updating the Mockito framework from 1.9x to a more recent version (which is a prerequisite to make the complete build work with Java 11). During that undertaking I reviewed a lot of unit tests which required adjustments, and I came across some patterns which I also find in many AEM projects. I don’t think that these patterns are necessarily wrong, but they make tests hard to understand and hard to change, and often they make the production code overly complex.

I will list a few of these patterns which I consider problematic. I won’t go so far as to call them anti-patterns, but I will definitely look closely at every instance I come across.

Unittests don’t matter, only test coverage matters.
Sometimes I get the impression that the quality of the tests doesn’t matter, only the resulting test coverage (as indicated by coverage tools like JaCoCo) does; that paying attention to the code quality of the tests and investing time into refactoring them is wasted time. I beg to differ.
Although unit tests are not deployed into a production environment, the usual quality measures should be applied to them as well, because that makes them easier to extend and to understand. And the worst thing which can happen to production code is that a bugfix is not developed in a TDD way (build a failing testcase first to prove your error is happening) because it is too much work to extend the existing tests.

Mocking Sling Resources and/or JCR nodes
With the presence of AEM Mocks there should not be any need to manually mock Sling Resources and JCR nodes. It’s a lot of work to do that, especially compared to simply loading a JSON structure into an in-memory repository. The same applies to ResourceResolvers and JCR sessions. So don’t mock Sling Resources and JCR nodes; that’s a case for AemMocks!
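For comparison, loading a full resource tree with AemContext is a one-liner (the JSON file name and the target path are examples):

import static org.junit.Assert.assertNotNull;
import org.apache.sling.api.resource.Resource;
import org.junit.Rule;
import org.junit.Test;
import io.wcm.testing.mock.aem.junit.AemContext;

public class ContentLoadingTest {

    @Rule
    public final AemContext context = new AemContext();

    @Test
    public void readContent() {
        // import a JSON structure from the test classpath into the in-memory repository
        context.load().json("/test-content.json", "/content/mysite");

        Resource resource = context.resourceResolver().getResource("/content/mysite");
        assertNotNull(resource);
    }
}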

Using setters in services to set references
When you want to test services, the AEM Mocks framework handles injections as well: you just need to use the default constructor of your service to instantiate it, and then pass it to the context.registerInjectActivateService() method. If required, create the referenced services as mocks beforehand and register them as well. AemMocks comes with ways to test OSGi services and components in a very natural way (including activation and the injection of references), so please use it.
There is no need to use setter methods for the service references in the production code just for this usecase!

If you are looking for an example of how these suggestions can be implemented, have a look at the example project I wrote last year.

Of course this list is far from complete; if you have suggestions or more (anti-)patterns for unittests in the AEM area, please leave me a comment.

How to properly delete a page

A relevant aspect of any piece of content is its lifecycle: the process of creation, modification, use and finally deletion of that content. And although deleting a page in AEM sounds quite easy, there are quite a few aspects which need to be dealt with. For example:

  • Create a version of the page, so it can be restored.
  • Update the MSM structures (if required)
  • De-activate the page from publishing.
  • Create an entry in the audit.log

All this happens when you use one of the PageManager.delete() methods to remove the page. If you don’t use them, the most obvious problem you’ll face afterwards is that you have published pages which you cannot delete anymore (because the page is missing on authoring), and you have to use a workaround for it.
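A minimal sketch of such a deletion via the PageManager (the path handling is an example):

import org.apache.sling.api.resource.ResourceResolver;
import com.day.cq.wcm.api.Page;
import com.day.cq.wcm.api.PageManager;
import com.day.cq.wcm.api.WCMException;

public void deletePage(ResourceResolver resolver, String path) throws WCMException {
    PageManager pageManager = resolver.adaptTo(PageManager.class);
    Page page = pageManager.getPage(path);
    if (page != null) {
        // takes care of versioning, MSM, de-activation and the audit.log entry
        pageManager.delete(page, false); // shallow=false: delete the page including its subtree
    }
}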

So, please remember: the PageManager might add overhead in many areas, but there is a reason for it to exist, and taking care of all these mentioned activities is one of them. So whenever you deal with pages (creating/moving/renaming/deleting), first check the PageManager API before you start using the JCR or Sling API.

Safe handling of ResourceResolvers

Just digging through my posts of the last years, I found that my last post on ResourceResolvers and JCR sessions is more than a year old. But unfortunately that does not mean that this aspect is widely understood by now; I still see a lot of improper use of ResourceResolvers and sessions when I review project code as part of my job.

But instead of explaining again and again that you should never forget to close them, I want to introduce a different pattern, which can help you avoid the “old pattern” of explicit opening and closing completely. It’s a pattern which encapsulates the opening and closing of a ResourceResolver, and your code is executed within it as a Consumer or a Function. The ResourceResolver cannot leak, and you cannot do anything wrong. The only prerequisite is Java 8, but that should not be a problem in 2020.

// does not return anything
// (resourceResolverFactory is the OSGi ResourceResolverFactory service, e.g. injected via @Reference)
public void withResourceResolver(Map<String,Object> authenticationInfo, Consumer<ResourceResolver> consumer) {
   try (ResourceResolver resolver = resourceResolverFactory.getResourceResolver(authenticationInfo)) {
     consumer.accept(resolver);
   } catch (Exception e) {
     LOGGER.error("Exception happened while opening ResourceResolver", e);
   }
}

The same is possible with a Function, to return a value:

// return a value from the lambda
public <T> T withResourceResolver(Map<String,Object> authenticationInfo, Function<ResourceResolver,T> function, T defaultValue) {
   try (ResourceResolver resolver = resourceResolverFactory.getResourceResolver(authenticationInfo)) {
     return function.apply(resolver);
   } catch (Exception e) {
     LOGGER.error("Exception happened while opening ResourceResolver", e);
   }
   return defaultValue;
}

// convenience function
public <T> T withResourceResolver(Map<String,Object> authenticationInfo, Function<ResourceResolver,T> function) {
   return withResourceResolver(authenticationInfo, function, null);
}

If you are not familiar with the functional style of Java 8, here are some small examples of how to use these methods:

Map<String,Object> authenticationInfo = …
withResourceResolver(authenticationInfo, resolver -> {
   Resource res = resolver.getResource("/");
   // do something more useful, but return nothing 
});

// return a value from the lambda 
Map<String,Object> authenticationInfo = …
String result = withResourceResolver(authenticationInfo, resolver -> {
   Resource res = resolver.getResource("/");
   return res.getPath();
});

As you can easily see, you don’t need to deal with the lifecycle of ResourceResolvers anymore. And if your authenticationInfo map is always the same, you can even hardcode it within the withResourceResolver() methods, so the only remaining parameter is the consumer or the function.

Prevent workflow launchers from starting a workflow

Workflow launchers are the standard way to trigger workflows based on changes in the content repository. The most prominent workflow triggered that way is the “Asset Update Workflow”, which does all the heavy lifting of asset processing. And it’s important to note that this workflow is executed on every change to an asset itself, its renditions or its metadata.

But often this is not required. If you add more or custom metadata to an asset, or even do that in batch mode, you don’t want this workflow to run at all; these metadata changes are not relevant to the asset itself, but only to the way it should be handled in the specific context of your application.

The typical way to prevent the workflow from starting is to disable the workflow launcher (setting its “enabled” flag to “false”). But this is a global setting which affects all possible invocations, including regular asset ingestion, where the workflow has to run. So you need a way to selectively prevent the workflow from starting.

Fortunately there are a few ways to achieve that, if you have the code under control which performs the changes and after which you don’t want the workflow to start again. This is key, because the workflow launcher has a feature for exactly this case (sidenote: I just found out that it has been documented; so it often makes sense to check if the documentation has been updated).

You can configure an exclusion property on the workflow launcher in the format “event-user-data:randomString”; the launcher then ignores all changes made by a JCR session which has the user-data “randomString” set.

How can you set this user-data? That’s quite easy:

Session session = ...;
session.getWorkspace().getObservationManager().setUserData("randomString");
// do your work with the session
session.save();

And by default the “Asset Update Workflow” launcher is configured with “event-user-data:changedByWorkflowProcess”. So if your batch asset-operation sets the user-data to exactly this string “changedByWorkflowProcess”, the “Asset Update Workflow” is not triggered by your changes anymore, without disabling the workflow launcher.

That’s it. And if you ever wanted to channel data from a saving session to the process which handles the observation events for it (the workflow launchers are just a very convenient layer on top of the JCR Observation API): just use event.getUserData().
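As a sketch (the listener registration is omitted, and the constant is just the example from above), this is how an observation listener can read the user-data of the session which triggered the events:

import javax.jcr.RepositoryException;
import javax.jcr.observation.Event;
import javax.jcr.observation.EventIterator;
import javax.jcr.observation.EventListener;

public class UserDataAwareListener implements EventListener {

    @Override
    public void onEvent(EventIterator events) {
        while (events.hasNext()) {
            Event event = events.nextEvent();
            try {
                // the string set via ObservationManager.setUserData() on the saving session
                if ("changedByWorkflowProcess".equals(event.getUserData())) {
                    continue; // skip changes made by the workflow process itself
                }
                // handle all other events here
            } catch (RepositoryException e) {
                // log and continue
            }
        }
    }
}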

How to use Runmodes correctly (update)

Runmodes are an essential concept within AEM; they form the main and only way to assign roles to AEM instances. The primary usecase is to distinguish between the author and the publish role; another common one is to distinguish between PROD, staging and development environments. Technically it’s just a set of strings which are assigned to an instance and which are used by the Sling framework on a few occasions, the most prominent being the Sling JCR Installer (which handles the /apps/myapp/config, /apps/myapp/config.author, etc. directories).

But I see other usecases as well: usecases where the runmodes are fetched and compared against hardcoded strings. A typical example:

boolean isAuthor() {
    return slingSettingsService.getRunModes().contains("author");
}

From a technical point of view this is fully correct and works as expected. The problem arises when some code is based on the result of this method:

if (isAuthor()) {
    // do something
}

Because now the execution of this code is hardcoded to the author environment, which can get problematic if this code must not be executed on the DEV authoring instances (e.g. because it sends email notifications). It is not a problem to change this to:

if (isAuthor() && !isDevelopmentEnvironment()) {
    // do something
}

But now it is hardcoded again 😦

The better way is to rely solely on the OSGi framework: just make your OSGi component require a configuration, and provide that configuration only for the runmodes where the component should be active.

@Component(service = MyService.class, configurationPolicy = ConfigurationPolicy.REQUIRED)
public class MyServiceImpl implements MyService {
    // ...
}
This case requires NO CODING at all; instead you just use the functionality provided by Sling. And the component does not even activate if the configuration is not present!
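The runmode-specific behavior is then driven purely by where the configuration lives; a sketch of the repository layout (paths and PID are invented):

/apps/myapp/config.author/com.myapp.impl.MyServiceImpl.cfg.json   -> config present: the component activates on author
/apps/myapp/config.publish/                                       -> no config: the component stays inactive on publish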

Long story short: whenever you see a reference to SlingSettingsService.getRunModes(), it’s very likely used wrongly. And we can generalize this to “if you add a reference to the SlingSettingsService, you are probably doing something wrong”.

There are only a very few cases where the information provided by this service is actually useful for non-framework purposes. But I bet you are not writing a framework 🙂

Update (Oct 17, 2019): In a Twitter discussion Ahmed Musallam and Justin Edelson pointed out that there are usecases where this is actually useful and the right API to use. Possibly yes; I cannot argue with that, but these are the few cases I mentioned above, and I have never encountered them personally. As a general rule of thumb it’s still applicable, because every rule has its exceptions.

You think that I have written about this topic already? Yes, I did, actually twice (here and here). But it seems that only repetition helps to get this message through; I still find this pattern in too many codebases.

A “no custom code challenge” for AEM?

My colleague Jan Exner initiated a “no custom code challenge” in the Analytics area earlier this year; and as a followup the people of 33sticks posted a good summary of why it would be much better if you could avoid any custom code in the analytics world.

I am wondering if this holds true for AEM systems as well. On the one hand, customization is required; for example you need to style the components according to the requirements and styleguides. On the other hand, excessive customization (overlays and adaptions/changes to out-of-the-box functionality) leads to maintenance and upgrade issues. But maybe we should not use the term “customization” in the AEM world anymore and choose a more appropriate one, maybe “application development on AEM”, because that’s what we quite often do in reality.

And the application development part is the one which makes software expensive. It requires design, architecture, implementors, tests, automated tests, deployments. It requires management and comes with risk. The more application development we have, the higher the risk and the costs.

If you were able to avoid any application development in an AEM project, and just live with the core components and brand them accordingly, that would be great. We would only focus on styling and branding of the components; no need for Java developers and code deployments. Just pure frontend work, and a clever use of the out-of-the-box tools AEM offers you.

I am truly convinced that you can build a standard marketing site (multi-site, multi-language, integrated translation etc.) with this approach. It requires discussion with the business and, more importantly, you as a developer or architect need to urge yourself not to write any code.

Of course, the result will probably be a very basic site, but it can serve 2 purposes:

  • We identify what should really be part of AEM (which is something we can and should add asap)
  • We challenge ourselves to think in much simpler structures and fewer customizations. I always wonder how easily the statement “then let’s overlay it” comes out of the mouth of an AEM consultant in a discussion, and I am no exception to this.

Yes, we can join Jan’s initiative. With AEM it’s definitely harder to achieve this than with other solutions of the Adobe Experience Cloud, but it’s doable. And honestly, we should accept such challenges more often. Even if we eventually fail.

But the learning is immense.