Writing integration tests for AEM (part 2)

This is a part of my ongoing series about writing integration tests with AEM.

In the last blog post I gave you a quick overview of the integration test framework you have at hand and what opportunities it offers you.

Now let’s get our hands dirty and create our first integration test. We will write a simple test which connects to the local author instance and verifies that the wknd homepage is loaded completely and that all referenced files (images, JavaScript, CSS, …) are present.


Prerequisite is that you have the wknd package fully installed (cloning the wknd GitHub repo, building it and installing the package of the “all” module should do the trick). There is no specific requirement on AEM itself, so AEM 6.4 or newer should suffice.

Basic structure

If you have started with the Maven archetype for AEM, you should have an it.tests Maven module which contains all integration tests. Although they are tests, they are stored in src/main/java. That means that the whole test suite is created as a build artifact and can therefore easily be executed outside of the Maven build process as well.

Another special thing to remember: All test class names must end with “IT” (like “IntegrationTest”), otherwise they are ignored.

A custom client

(I have all that code ready on github, so you can just clone it and start playing.)

As a first step we will create a custom test client which is able to parse a rendered page. As a basis I started with HtmlUnit, but that turned out to be a bit inflexible regarding multiple calls, so I switched over to jsoup.
That means our first piece of code is a JsoupClient. It extends the standard CQClient, and because of that we are able to use the “doRequest()” method to fetch the page content.
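
A minimal sketch of such a client could look like this (the constructor signatures follow the CQClient/SlingClient classes of the aem-testing-clients library as far as I know them, and the getDocument() method name is just my choice for this example; check the github repo for the real code):

    import java.net.URI;

    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.sling.testing.clients.ClientException;
    import org.apache.sling.testing.clients.SlingClientConfig;
    import org.apache.sling.testing.clients.SlingHttpResponse;
    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;

    import com.adobe.cq.testing.client.CQClient;

    public class JsoupClient extends CQClient {

        public JsoupClient(CloseableHttpClient http, SlingClientConfig config) throws ClientException {
            super(http, config);
        }

        public JsoupClient(URI serverUrl, String user, String password) throws ClientException {
            super(serverUrl, user, password);
        }

        /** Fetches a page (expecting HTTP 200) and returns it as a jsoup Document. */
        public Document getDocument(String path) throws ClientException {
            SlingHttpResponse response = doGet(path, 200);
            return Jsoup.parse(response.getContent());
        }
    }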

That’s the basis, because from now on we just deal with jsoup-specific structures (Document, Node). Then we add the actual test class (AuthorHomepageValidationIT), which starts with some boilerplate code:

The basis for everything is the CQAuthorClassRule, and on top of that we create a jsoupClient object, which itself uses an “AdminClient” (that means the admin user is used for the tests). With this jsoupClient instance we can now easily start and create simple tests.
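
Boiled down to the essentials, that boilerplate could look like this sketch (rule and field names are my assumptions, the github repo has the authoritative version):

    import org.junit.BeforeClass;
    import org.junit.ClassRule;
    import org.junit.Rule;

    import com.adobe.cq.testing.junit.rules.CQAuthorClassRule;
    import com.adobe.cq.testing.junit.rules.CQRule;

    public class AuthorHomepageValidationIT {

        @ClassRule
        public static final CQAuthorClassRule cqBaseClassRule = new CQAuthorClassRule();

        @Rule
        public CQRule cqBaseRule = new CQRule(cqBaseClassRule.authorRule);

        static JsoupClient jsoupClient;

        @BeforeClass
        public static void beforeClass() throws Exception {
            // adapt the author instance rule to our custom client (using the admin user)
            jsoupClient = cqBaseClassRule.authorRule.getAdminClient(JsoupClient.class);
        }

        // ... the actual @Test methods follow here ...
    }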

(Please check the files in the github repo to get the complete picture, I omitted here quite a bit for brevity.)

We are using the standard tooling for unit tests here to create an integration test, that means the @Test annotation plus the usual set of asserts. But we are doing integration tests, which means that we only validate the operations executed by a real AEM instance. If you start to use a mocking framework here, you are doing it wrong!
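
A simple test method in that class could then look like this sketch (it uses org.junit.Assert.assertFalse, org.junit.Test and the getDocument() method from the JsoupClient above; the path and the assertions are just examples to illustrate the approach):

        @Test
        public void testHomepageIsRenderedCompletely() throws Exception {
            // fetch the wknd homepage and run a few jsoup-based checks on it
            Document homepage = jsoupClient.getDocument("/content/wknd/us/en.html");
            assertFalse("expected at least one stylesheet to be referenced",
                    homepage.select("link[rel=stylesheet]").isEmpty());
            assertFalse("expected at least one image on the homepage",
                    homepage.select("img").isEmpty());
        }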

OK, how do I run this integration test?

Now that we have written our integration test, we need to execute it. To do that, go to your command line and run this command in the it.tests module:

mvn clean install -Peaas-local -Dit.author.url=http://localhost:4502

(You need to specify the author URL as a parameter because my personal default of port 6602 for my local authoring instance might not work for yours. Check the pom.xml for all details, it is not that complicated.)

The output will look like this:

[INFO] --- maven-failsafe-plugin:2.21.0:integration-test (default-integration-test) @ de.joerghoh.aem.it.tests ---
[INFO]
[INFO] -------------------------------------------------------
[INFO] T E S T S
[INFO] -------------------------------------------------------
[INFO] Running integrationtests.it.tests.AuthorHomepageValidationIT
[main] INFO com.adobe.cq.testing.junit.rules.ConfigurableInstance - Using Basic Auth as default. Index lane detection: false
[main] INFO org.apache.sling.testing.junit.rules.instance.util.ConfigurationPool - Reading initial configurations from the system properties
[main] INFO org.apache.sling.testing.junit.rules.instance.util.ConfigurationPool - Found 1 instance configuration(s) from the system properties
[main] INFO org.apache.sling.testing.junit.rules.instance.ExistingInstanceStatement - InstanceConfiguration (URL: http://localhost:6602, runmode: author) found for test integrationtests.it.tests.AuthorHomepageValidationIT
[main] WARN com.adobe.cq.testing.client.CQClient - Cannot resolve path //fonts.googleapis.com/css?family=Source+Sans+Pro:400,600|Asar&display=swap: Illegal character in query at index 57: //fonts.googleapis.com/css?family=Source+Sans+Pro:400,600|Asar&display=swap
[main] INFO integrationtests.it.tests.AuthorHomepageValidationIT - skipping linked resource from another domain: https://wknd.site/content/wknd/language-masters/en.html
[main] INFO integrationtests.it.tests.AuthorHomepageValidationIT - validated 148 linked resources
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.787 s - in integrationtests.it.tests.AuthorHomepageValidationIT
[INFO] Running integrationtests.it.tests.GetPageIT
[main] INFO com.adobe.cq.testing.junit.rules.ConfigurableInstance - Using LoginToken Auth as default. Index lane detection: false
[main] INFO com.adobe.cq.testing.junit.rules.ConfigurableInstance - Using LoginToken Auth as default. Index lane detection: false
[main] INFO org.apache.sling.testing.junit.rules.instance.ExistingInstanceStatement - InstanceConfiguration (URL: http://localhost:6602, runmode: author) found for test integrationtests.it.tests.GetPageIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.002 s - in integrationtests.it.tests.GetPageIT
[INFO] Running integrationtests.it.tests.CreatePageIT
[main] INFO com.adobe.cq.testing.junit.rules.ConfigurableInstance - Using LoginToken Auth as default. Index lane detection: false
[main] INFO org.apache.sling.testing.junit.rules.instance.ExistingInstanceStatement - InstanceConfiguration (URL: http://localhost:6602, runmode: author) found for test integrationtests.it.tests.CreatePageIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.3 s - in integrationtests.it.tests.CreatePageIT
[INFO]
[INFO] Results:
[INFO]
[WARNING] Tests run: 3, Failures: 0, Errors: 0, Skipped: 1

The relevant lines show that my test reached out to my local AEM instance at port 6602 and validated 148 resources in total. If you want to get more details on what exactly was validated, add an info log message at the respective place in the test.

Congratulations, you have just run your first integration test!

I leave it to you to provoke a failure of that integration test; all you have to do is reference an image or a clientlib on the wknd homepage (specified here) which does not return an HTTP status code 200. And of course this test is quite generic, as it does not mandate that a specific client library is present or that even the page footer is working. But as you have the power of jsoup at hand, it should not be too hard to write more assertions to check these additional requirements.

In the next blog post I will elaborate a bit more around running integration tests and configuring them properly, before we start to explore the possibilities offered to us by the AEM testing clients.

(Update 2020-12-18: Changed the profile name to match CloudManager behavior)

Writing integration tests for AEM (part 1)

This is a part of my ongoing series about writing integration tests with AEM.

Building tests is an integral part of software development, and it does not only include unit tests but also integration and frontend tests. With AEM as a Cloud Service integration tests are getting more and more important, as it allows you to run automated tests on “real” Cloud Service instances as part of the Cloud Manager pipeline. See the Cloud Manager documentation.

If you check the details, you will find that the overall structure for integration tests is part of all projects which are created based on the AEM Project Archetype, since at least version 11 (April 2017). So technically everyone has been able to implement integration tests based on that structure for a while, but I haven’t seen them receive proper attention. I ignored these integration tests for most of the time as well…


Recently I worked with my colleague Valentin Olteanu on creating a small integration test suite, and I was honestly surprised how easy it can be. And integration tests are now an official part of the Cloud Manager pipeline, the first place where your code can be tested on a real Cloud Service instance.

So I want to give you a short overview of the capabilities of the integration test framework for AEM. In the next blog post I will show a real-life use case where such integration tests can really help.

Ok, what are these integration tests and what can we do with these tests?

Integration tests run outside of AEM, as part of the deployment/test pipeline. They test the interaction of your custom application (which you have validated with your unit tests) with everything else, most prominently AEM itself. You can test the complete page rendering, custom integrations, background processes and everything else where you need the full AEM stack and where mocks are not sufficient.

The test framework itself provides you with proper abstractions to perform a lot of operations in a very convenient way. For example:

  • There is an AssetClient which allows you to upload assets into AEM
  • Functionality to create/delete/modify pages (as part of the CQClient)
  • Functionality to replicate content
  • and much more (see the whole list of clients)

And everything is wrapped in Java, so you don’t have to deal with the underlying HTTP requests. This makes it an effective way to remote-control AEM from within Java code. But of course there’s also a raw, preconfigured HTTP client (with hostnames, authentication etc. already set) which you can use to perform custom actions. And the testing framework around it is still the JUnit framework we are all used to.
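
To give you an impression, a test using these clients could look like this sketch (the parent path and the template path are wknd-specific assumptions, and you should double-check the exact method signatures in the Javadocs of the testing clients):

    import org.apache.sling.testing.clients.SlingHttpResponse;
    import org.junit.ClassRule;
    import org.junit.Test;

    import com.adobe.cq.testing.client.CQClient;
    import com.adobe.cq.testing.junit.rules.CQAuthorPublishClassRule;

    public class SmokeTestIT {

        @ClassRule
        public static final CQAuthorPublishClassRule cqBaseClassRule = new CQAuthorPublishClassRule();

        @Test
        public void createAndRequestPage() throws Exception {
            CQClient author = cqBaseClassRule.authorRule.getAdminClient(CQClient.class);

            // high-level API: create a page below an existing parent
            author.createPage("my-test-page", "My Test Page", "/content/wknd/us/en",
                    "/conf/wknd/settings/wcm/templates/content-page", 200);

            // or use the preconfigured HTTP client directly for custom requests
            SlingHttpResponse response = author.doGet("/content/wknd/us/en/my-test-page.html", 200);
        }
    }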

But be aware: This integration test suite cannot directly access the JCR and Sling API, because it is running externally. If you want to create nodes or read their status, you have to rely on other means.

These are also no Selenium tests! If you want to do proper UI testing, please check the documentation on UI testing (still in beta, expect the general availability soon). I plan to create a blog post about it.

A very simple integration test (basically just a validation that a page which has been created by a Page rule exists) can look like this (the full code):

    @Test
    public void testCreatePageAsAuthor() throws InterruptedException {
        // This shows that it exists for the author user
        userRule.getClient().pageExistsWithRetry(pageRule.getPath(), TIMEOUT);
    }

The integration test class itself comes with a bit of boilerplate code, mostly JUnit rules to set up the connection and prepare the environment (for example to create the page whose existence we test).
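
For illustration, such a rule setup could look like this sketch (the rule names follow the AEM Project Archetype as far as I remember them, so treat them as assumptions and check the linked code for the authoritative version):

    import org.junit.ClassRule;
    import org.junit.Rule;

    import com.adobe.cq.testing.junit.rules.CQAuthorPublishClassRule;
    import com.adobe.cq.testing.junit.rules.CQRule;
    import com.adobe.cq.testing.junit.rules.Page;

    public class CreatePageIT {

        // starts/describes the author and publish instances for the whole test class
        @ClassRule
        public static final CQAuthorPublishClassRule cqBaseClassRule = new CQAuthorPublishClassRule();

        // chains a couple of per-test rules provided by the testing clients
        @Rule
        public CQRule cqBaseRule = new CQRule(cqBaseClassRule.authorRule, cqBaseClassRule.publishRule);

        // creates a test page before each test and cleans it up afterwards
        @Rule
        public Page pageRule = new Page(cqBaseClassRule.authorRule);

        // plus a rule which provides the non-admin test user ("userRule" in the snippet above)

        // ... the @Test methods follow here ...
    }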

And the best thing: You don’t need to take care of URLs and authentication, because these parameters are specified outside of your code and are normally provided via Maven properties. This keeps the code very portable, and gives you the chance to execute it both locally and as part of the Cloud Manager pipeline.

In the next blog post I want to demonstrate how easy it can be to validate that a page on the AEM author renders correctly.

Long running sessions and SegmentNotFoundExceptions

If you search this blog, you will find one recurring theme over the years: The lifecycle of JCR sessions and Sling ResourceResolvers. That you should not keep them open for a long time. And that you definitely have to close them. But I never gave you an example of what can happen if you don’t follow this recommendation. Until now.

These days I learned what an actual problem arising from it can look like. And the problem is called “SegmentNotFoundException”.

In the past a SegmentNotFoundException was a clear indication of a corrupt JCR repository. The recommendation was always either to fix it or to restore from backup. Both operations are tedious, require downtimes and possibly also mean a loss of data. That’s probably also the reason why this specific problem is often taken as a sign of such a repository corruption. So let’s look at it systematically.

The root cause

With AEM 6.4 the feature of “tail compaction” was introduced, which is a variant of the online compaction feature. It is less efficient but takes less time than the full compaction. By default in AEM the tail compaction runs daily and the full online compaction once a week.

But from what I understood, this tail compaction has a problem with long-running sessions, and it can happen that tar files which are still referenced get compacted and removed. That means that it’s not really an on-disk corruption which needs to be fixed, but rather that some “old sessions” (read about MVCC in the previous post) reference data which is not there (anymore).


Validate the symptoms

The problem I describe in this post happens under some special circumstances, which you should check before you start the hunt for long-running sessions:

  • You get SegmentNotFoundExceptions (always with the same segment ID).
  • A repository check doesn’t find any inconsistency.
  • If you restart the instance, the error is gone, but appears again after some time (mostly at least a day).
  • You are running AEM 6.4 or AEM 6.5 (SP doesn’t seem to matter).

In the case I observed, only a single workflow step was affected, but not all the time and only after some time, which made me believe that it was related to the compaction. But it was very hard to track down the error, because the workflow step itself was complex, but safe.

The solution

Fix any long-running session in your application (unless you are registering an ObservationListener on it, which takes care of the refreshes by design). Really all of them. Use the JMX web console plugin and check the list of registered session MBeans every day on a production instance. Count them. Look at the timestamps when the sessions were opened.
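
If you prefer to automate that check instead of clicking through the JMX console, a small sketch like this can do the counting (the ObjectName pattern is an assumption based on the session MBeans Oak registers, so verify it against your instance first):

    import java.lang.management.ManagementFactory;
    import java.util.Set;

    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class SessionCounter {

        // counts the JCR session MBeans registered in the local JVM
        public static int countOpenSessions() throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            Set<ObjectName> sessions = server.queryNames(
                    new ObjectName("org.apache.jackrabbit.oak:type=SessionStatistics,*"), null);
            // each of these MBeans also exposes when (and where) the session was opened
            return sessions.size();
        }
    }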

In the case I observed, the long-running session was in a different area of the application, but was working on the same data (user profiles) as the failing workflow step. But the 2 areas in the code were totally unrelated to each other, so that was the only way to track it down.

Final words

Some other notes, which I consider as important in this context:

  • When you encounter a SegmentNotFoundException, please always open a support ticket, just in case. If it’s a different issue than described here, it’s better if you have that ticket open already.
  • If you see exactly this issue, and changing your application code makes the problem go away, please also raise a support ticket. That bug should get fixed (even if long-running sessions have been discouraged for years).
  • As mentioned, when you encounter this issue, it’s not a persisted corruption. Restarting will cause the issue not to appear for some time, but that should only buy you time to identify and fix the long running sessions.
  • And AEM as a Cloud Service is not affected by this problem, because neither Online Compaction nor Tail Compaction are used. Instead the Golden Master is offline compacted before cloning.

Long running sessions and clustering

In the last blog post I briefly talked about the basics of what to consider when you are writing cluster-aware code. The essence is to be aware of your write activities and to make sure that scheduled activities run only on a single cluster node and not on many or all of them.

Today’s focus is on the behavior of JCR sessions with respect to clustering. From a conceptual point of view there is hardly a difference to a single-node cluster (or standalone instance), but the presence of more cluster nodes adds a new angle of potential problems to it.

When I talk about JCR, I am thinking of the Apache Oak implementation, which is built on top of the MVCC pattern. (The previous Jackrabbit implementation uses a different approach, so this whole blog post does not apply there.) The basic principle of MVCC is that each session is clearly separated from any other session which is open in parallel. Also, any change performed in a session is not visible to other sessions unless

  • the other session is invoking session.refresh() or
  • the other session is opened after the mentioned session is closed.

This behavior applies to all sessions of a JCR repository, no matter if they are opened on the same cluster node or not. The following diagram visualizes this:

Diagram showing how 2 sessions perform changes to the repository without seeing the changes of the other as long as they don’t use session.refresh()

We have 2 sessions A1 and B1 which are initiated at the same time t0, and which perform changes independently of each other on the repository, so session B1 cannot see the changes performed with A1_1 (and vice versa). At time t1 session A1 is refreshed, and now it can see the changes B1_1 and B1_2. And afterwards B1 is refreshed as well, and can now see the changes A1_1 and A1_2 as well.

But if a session is not refreshed (or closed and a new session is used), it will never see the changes which happened on the repository after the session has been opened.
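
To make this a bit more tangible, here is a small sketch using the plain JCR API (credentials, paths and property names are made up):

    import javax.jcr.Repository;
    import javax.jcr.Session;
    import javax.jcr.SimpleCredentials;

    public class MvccExample {

        public void demonstrateVisibility(Repository repository) throws Exception {
            SimpleCredentials creds = new SimpleCredentials("admin", "admin".toCharArray());
            Session sessionA = repository.login(creds);
            Session sessionB = repository.login(creds);
            try {
                sessionA.getNode("/content/myproject").setProperty("lastRun", "A1_1");
                sessionA.save();

                // sessionB still works on its own (older) snapshot of the repository
                // and does not see the property written by sessionA ...

                sessionB.refresh(true); // ... until it is refreshed (true keeps pending local changes)
                // now the change "A1_1" is visible to sessionB as well
            } finally {
                sessionA.logout();
                sessionB.logout();
            }
        }
    }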

As said before, these sessions do not need to run on 2 separate cluster nodes; you get the same behavior on a single cluster node as well. But I mentioned that multiple cluster nodes are a special problem here. Why is that the case?

The problem is OSGI services running in the background, which perform a certain job and write data to the JCR repository. In a single-node cluster this is not a problem, because all of these activities go through that single service instance; and even if that service uses a long-running JCR session for it, that will never be a problem, because this service is responsible for all changes and can read and write all the relevant data. In a cluster with 2 or more nodes, each cluster node might have that service running, and the invocations of the service might be distributed randomly. As in the diagram above, on cluster node A the data A1_1 is written, and on cluster node B the data point B1_1 is written. But they don’t see each other’s changes if they don’t refresh their sessions! And in most applications, which are written for single-node AEM instances, session.refresh() is barely used, because in such situations there’s simply no need for it, as this problem never occurred.

So when you are migrating your application to AEM as a Cloud Service, review your application and make sure that you find all long-running ResourceResolvers and JCR sessions. The best option is to remove these long-running sessions and replace them with short-lived ones, which are closed when the job is done. The second-best option is to introduce a session.refresh(), so the session sees any updates which happened to the repository in the meantime. (And by the way: if you register an ObservationListener on that session, you don’t need a manual refresh, as this refresh is done for the ObservationListener anyway; what would it be for if not for reporting changes to the repository which happen after opening the session?)
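
The preferred pattern could look like this sketch (the service user mapping and the path are assumptions):

    import java.util.Collections;
    import java.util.Map;

    import org.apache.sling.api.resource.LoginException;
    import org.apache.sling.api.resource.Resource;
    import org.apache.sling.api.resource.ResourceResolver;
    import org.apache.sling.api.resource.ResourceResolverFactory;
    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;

    @Component(service = ProfileUpdater.class)
    public class ProfileUpdater {

        @Reference
        private ResourceResolverFactory resolverFactory;

        public void updateProfiles() throws LoginException {
            Map<String, Object> authInfo = Collections.singletonMap(
                    ResourceResolverFactory.SUBSERVICE, (Object) "my-subservice");
            // open a fresh ResourceResolver per unit of work ...
            try (ResourceResolver resolver = resolverFactory.getServiceResourceResolver(authInfo)) {
                Resource profiles = resolver.getResource("/content/myproject/profiles");
                // ... perform the actual work on a fresh view of the repository ...
            } // ... and close it when done, so no long-running session is left behind
        }
    }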

That’s all right now regarding cluster-aware coding. But I am sure that there is more to come 🙂

Cluster aware coding in AEM

With AEM as a Cloud Service quite a number of small things have changed; among others you also get real clustering support in the authoring environment. Which is nice, because it gives you downtime-free authoring during deployments.

But this cluster also comes with a few gotchas, and one of them is that your application code needs to be cluster-aware. But what does that mean? What consequences does it have and what code do you have to change if you have never paid attention to this aspect?

The most important aspect is to do “every change only once“. It doesn’t make sense that 2 cluster nodes are importing the same set of data. A special version of this aspect is “avoid concurrent writes to the same node“, which can happen when a scheduled job is kicked off at the same time on all nodes, and this job is trying to change something in the repository. In this case you don’t only have overhead, but very likely a lot of exceptions.

And there is a similar aspect which you should pay attention to: connections to external systems. If you have a cluster running the same code and configs, it’s not always wanted that each cluster node reaches out to that external system. Maybe you need to update it with the latest content only once, because the update triggers some expensive processing on their side, and you don’t want that triggered two or three times, probably pretty much at the same time.

I have shown you 2 cases where a clustered application can behave differently from a single-node environment; now let me show you how you can make your application cluster-aware.

Scheduled jobs

Scheduled jobs are a classic tool to execute certain jobs at a certain time. Of course we could use the Sling Scheduler directly, but to make the execution more robust, you should wrap it into a Scheduled Sling Job.

See the Sling Jobs website for the documentation and some examples (although the Javadocs are missing the ScheduleBuilder class, but here’s the code). And of course you should check out Kaushal Mall’s post with even more examples.
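
Just to illustrate the idea, registering such a scheduled job could look like this sketch (topic and schedule are assumptions, and a real implementation should check whether the schedule already exists before adding it again):

    import org.apache.sling.event.jobs.JobManager;
    import org.apache.sling.event.jobs.ScheduledJobInfo;
    import org.osgi.service.component.annotations.Activate;
    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;

    @Component(service = DailyImportScheduler.class)
    public class DailyImportScheduler {

        @Reference
        private JobManager jobManager;

        @Activate
        protected void activate() {
            // schedule a job for a (hypothetical) topic; the job itself is processed
            // by a JobConsumer registered for that topic, on a single cluster node only
            ScheduledJobInfo info = jobManager.createJob("my/project/dailyimport")
                    .schedule()
                    .daily(1, 0)   // every day at 01:00
                    .add();
            if (info == null) {
                // scheduling failed; use the add(List<String>) variant to get the errors
            }
        }
    }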

Jobs give you the guarantee that a job is going to be executed at least once, and only on a single node.

Use the Sling Scheduler only for very frequent jobs (e.g. once every 5 minutes), where it doesn’t matter if one execution is skipped, e.g. because the instance was just restarting. To limit the execution of such a job to a single node, you can annotate the job runner with this annotation:

@Property (name="scheduler.runOn", value="SINGLE")

(see the docs)
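
If you are using the newer OSGI DS annotations instead of the old Felix SCR annotations, the equivalent could look like this sketch (the cron expression is just an example):

    import org.osgi.service.component.annotations.Component;

    @Component(
        service = Runnable.class,
        property = {
            "scheduler.expression=0 0/5 * * * ?",   // every 5 minutes (just an example)
            "scheduler.runOn=SINGLE"                // execute on a single cluster node only
        }
    )
    public class MyFrequentTask implements Runnable {

        @Override
        public void run() {
            // the actual work
        }
    }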

What about caches?

In-memory caches are often used to speed up operations. Most often they contain the results of previous operations which are then reused; cache elements are either actively purged or expire using a time-to-live.

Normally such caches are not affected by clustering. They might contain different items with potentially different values on the cluster nodes, but that must never be a problem. If it is a problem, you have to look for a different approach, e.g. persisting the data to the repository (if they are not already coming from there) or externalizing the cache (e.g. to a Redis or memcached instance).

Also, having a simpler application instead of the highest cache-hit ratio possible is often a good trade-off.

Ok, these were the topics I wanted to discuss here. But expect a blog post about one of my favorite topics: “Long running sessions and clustering“.

Slow deployments on AEM 6.4/6.5

A recent post on the AEM forums challenged me to look into an issue I observed myself but did not investigate further.

The observation is that during deployments maintenance tasks are stopped and started a lot, and this triggers a lot of other activities, including a lot of healthcheck executions. This slows down the deployment times and also pollutes the logfiles during deployments.

The problem is that the AEM maintenance TaskScheduler is supposed to react to changes on some paths in the repository (where the configuration is stored), but unfortunately it also reacts to any change of ResourceProviders (and every servlet is registered as a ResourceProvider of its own). Because this causes a complete reload/restart of the maintenance tasks (and some healthchecks as well), it causes quite some delay.

But this behaviour is controlled via OSGI properties, which are missing by default, so we can add them on our own 🙂

Just create an OSGI configuration for com.adobe.granite.maintenance.impl.TaskScheduler and add a single multi-value property named “resource.change.types” with the values “ADDED”, “CHANGED” and “REMOVED”.
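
Deployed with your code base (for example as a file named com.adobe.granite.maintenance.impl.TaskScheduler.config in one of your config folders; the exact location depends on your project structure) this boils down to a one-liner:

    resource.change.types=["ADDED","CHANGED","REMOVED"]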

(please also report this behavior via Daycare and refer to GRANITE-29609, so we hopefully get a fix for it, instead of applying this workaround).

Writing unittests for AEM (part 4): Mocking OSGI services

In the previous parts of this small series (part 1, part 2, part 3) I covered some basic approaches to how you can use the Sling and AEM mocking libraries to ease writing unittests. The examples were quite basic and focused, but in reality many test cases turn out to be much more complex.

Especially when your code has dependencies on other OSGI services, tests can get tricky. So today I want to walk you through a unittest I wrote some time ago; it’s a unittest for the EnsureOakIndex functionality (EnsureOakIndexJobHandlerTest).

The interesting part is that the required EnsureOakIndex service references 4 other services in total; if they are not present, my EnsureOakIndex service will never start properly. Thus you have to fulfill all service requirements of an OSGI service in the unittest as well (at least if you want to use SlingContext like I do here).

The easiest way to solve this is to rely on predefined services which are part of the SlingMocks or AemMocks. The second-best way is to create simple mocks and register them as services, so the dependencies are fulfilled. That’s definitely a convenient way if your tests do not invoke any of the service methods at all.

Thus the setup() method of my unittests is often pretty large, because there I prepare and register all other services which I need to make my software under test work.
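
Such a setup() method could look like this sketch (the service names are hypothetical; the point is that every mandatory reference is registered before registerInjectActivate() is called):

    import org.junit.Before;
    import org.junit.Rule;
    import org.mockito.Mockito;

    import io.wcm.testing.mock.aem.junit.AemContext;

    public class MyServiceTest {

        @Rule
        public final AemContext context = new AemContext();

        private MyService systemUnderTest;

        @Before
        public void setup() {
            // satisfy the mandatory references of MyService first ...
            context.registerService(MailService.class, Mockito.mock(MailService.class));
            context.registerService(ConfigurationProvider.class, Mockito.mock(ConfigurationProvider.class));
            // ... then let the mocking framework inject them and call activate()
            systemUnderTest = context.registerInjectActivate(new MyService());
        }
    }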

And because this setup works quite well and reliably, I always use AemContext for my unittests (or SlingContext; but as I have not yet observed any difference in test execution time, I often prefer AemContext because it comes with some more services). Only if I need neither resources, nodes nor OSGI do I stick with plain JUnit. For everything else AemContext removes the need for a lot of mocking.

Optimizing Sling Models (updated)

A few days ago I found an interesting blog post at https://sourcedcode.com/blog/aem/aem-sling-model-field-injection-vs-constructor-injection-memory-consumption, which makes the claim that constructor injection with Sling Models is much more memory-efficient than the “standard” field-based injection. The claim is that the constructor-injection approach “saves 1800% in bytes” (152 bytes vs 8 bytes in the example).

Well, that result is not correct, because the example implementations of the Sling Models used there are not identical. In the case of field-based injection the references are available during the complete lifetime of that Sling Model, not just during the @PostConstruct method call, and thus these references consume memory.

With the constructor-based injection example, on the other hand, the references are only available during the constructor call; they are not available in any other method. If you want to achieve the same behavior as in the field-injection example, you have to store the references in fields, and then the memory consumption of that Sling Model increases accordingly.
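
To illustrate the difference, here are two small sketches (think of them as two separate files; the property name is an assumption):

    import javax.inject.Inject;

    import org.apache.sling.api.resource.Resource;
    import org.apache.sling.models.annotations.Model;
    import org.apache.sling.models.annotations.injectorspecific.ValueMapValue;

    // variant 1: field injection -- the injected value is kept for the whole
    // lifetime of the model instance
    @Model(adaptables = Resource.class)
    public class FieldInjectedModel {

        @ValueMapValue(name = "jcr:title")
        private String title;

        public String getUpperCaseTitle() {
            return title == null ? null : title.toUpperCase();
        }
    }

    // variant 2: constructor injection -- the injected value is only used to
    // compute the result and is not stored in the model afterwards
    @Model(adaptables = Resource.class)
    public class ConstructorInjectedModel {

        private final String upperCaseTitle;

        @Inject
        public ConstructorInjectedModel(@ValueMapValue(name = "jcr:title") String title) {
            this.upperCaseTitle = title == null ? null : title.toUpperCase();
        }

        public String getUpperCaseTitle() {
            return upperCaseTitle;
        }
    }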

But Justin Edelson correctly pointed out that you do gain from constructor-based injection if you need the references only in the constructor to compute some results (which are then stored in fields) and in no other method. That’s indeed a small optimization.

But let’s be honest: If we are talking about an additional memory overhead of 100 bytes per complex Sling Model, that’s a negligible number, because it’s not typical that hundreds of these models are created per second. And even in that case, when they are created to render a page, the models are garbage-collected immediately after the request is completed. It doesn’t matter if 100 bytes more or less are allocated and collected. Thus the overhead is normally not even measurable.

But well, you might hit the edge case, where this really makes a difference.

Update June 8th: I got informed that the referenced blog article has been updated. It now contains a more reasonable example which makes the Sling Models comparable. Basically it now reflects the optimization Justin already mentioned. And the difference in object size is now only 40 bytes vs 24 bytes.

Best practices for AEM unittests

Some time ago I already wrote some posts (1, 2, 3) about unit testing with AEM, especially in combination with SlingMocks / AEM Mocks.

In the last months I also spent quite some time improving the unittests of ACS AEM Commons, mostly in the context of updating the Mockito framework from 1.9.x to a more recent version (which is a prerequisite to make the complete build work with Java 11). During that undertaking I reviewed a lot of unit tests which required adjustments, and I came across some patterns which I also find (often?) in AEM projects. I don’t think that these patterns are necessarily wrong, but they make tests hard to understand and hard to change, and often these tests make the production code overly complex.

I will list a few of these patterns, which I consider problematic. I won’t go that far and call them anti-patterns, but I will definitely look closely at every instance I come across.

Unittests don’t matter, only test coverage matters.
Sometimes I get the impression that the quality of the tests doesn’t matter, only the resulting test coverage (as indicated by test coverage tools like JaCoCo). That paying attention to the code quality of the tests and investing time into refactoring tests is wasted time. I beg to differ.
Although unit tests are not deployed into a production environment, the usual quality measures should be applied to unit tests as well, because it makes them easier to extend and to understand. And the worst thing which can happen to production code is that a bugfix is not developed in a TDD way (build a failing testcase first to prove your error is happening) because it is too much work to extend the existing tests.

Mocking Sling Resources and/or JCR nodes
With the presence of AEM Mocks there should not be any need to manually mock Sling Resources and JCR nodes. It’s a lot of work to do that, especially if you compare it to loading a JSON structure into an in-memory repository. Same with ResourceResolvers and JCR sessions. So don’t mock Sling resources and JCR nodes! That’s a case for AemMocks!
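
As a reminder of how little code that is, here is a sketch (test-content.json is a hypothetical fixture on the test classpath):

    import static org.junit.Assert.assertNotNull;

    import io.wcm.testing.mock.aem.junit.AemContext;
    import org.apache.sling.api.resource.Resource;
    import org.junit.Before;
    import org.junit.Rule;
    import org.junit.Test;

    public class MyComponentTest {

        @Rule
        public final AemContext context = new AemContext();

        @Before
        public void setup() {
            // load the JSON fixture into the in-memory repository
            context.load().json("/test-content.json", "/content/myproject");
        }

        @Test
        public void resourceIsAvailable() {
            Resource resource = context.resourceResolver().getResource("/content/myproject");
            assertNotNull(resource);
        }
    }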

Using setters in services to set references
When you want to test services, the AEM Mocks framework handles injections as well; you just need to use the default constructor of your service to instantiate it and then pass it to the context.registerInjectActivate() method. If required, create the referenced services as mocks beforehand and register them as well. AemMocks comes with ways to test OSGI services and components in a very natural way (including activation and injection of references), so please use it.
There is no need to use setter methods for the service references in the production code just for this use case!
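
A sketch of what I mean (ReportService and MailService are hypothetical):

    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;

    @Component(service = ReportService.class)
    public class ReportService {

        @Reference
        private MailService mailService;   // injected by the OSGI runtime, no setter required

        public void sendReport(String report) {
            mailService.send(report);
        }
    }

    // in the test, AemMocks injects the (mocked) MailService into that private field:
    //   context.registerService(MailService.class, Mockito.mock(MailService.class));
    //   ReportService service = context.registerInjectActivate(new ReportService());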

If you are looking for an example of how these suggestions can be implemented, you can have a look at the example project I wrote last year.

Of course this list is far from being complete; if you have suggestions or more (anti-) patterns for unittests in the AEM area, please leave me a comment.

How to properly delete a page

A relevant aspect of any piece of content is its lifecycle: the process of creating, modifying, using and finally deleting that content. And although deleting a page in AEM sounds quite easy, there are quite a few aspects which need to be dealt with. For example:

  • Creating a version of the page, so it can be restored.
  • Updating the MSM structures (if required).
  • Deactivating the page on the publish instances.
  • Creating an entry in the audit log.

All this happens when you use one of the PageManager.delete() methods to remove the page. If you are not using them, the most obvious problem you’ll face afterwards is that you have published pages which you cannot delete anymore (because the page is missing on authoring), and you have to use a workaround for it.
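
For reference, a minimal sketch of such a deletion could look like this (the path is an assumption):

    import org.apache.sling.api.resource.ResourceResolver;

    import com.day.cq.wcm.api.Page;
    import com.day.cq.wcm.api.PageManager;
    import com.day.cq.wcm.api.WCMException;

    public class PageDeletion {

        // deletes a page via the PageManager API, which takes care of versioning,
        // MSM, deactivation and the audit log entry
        public void deletePage(ResourceResolver resolver) throws WCMException {
            PageManager pageManager = resolver.adaptTo(PageManager.class);
            Page page = pageManager.getPage("/content/myproject/en/obsolete-page");
            if (page != null) {
                pageManager.delete(page, false); // false = delete the page including its subpages
            }
        }
    }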

So, please remember: The PageManager might have overhead in many areas, but there is a reason for it to exist. Taking care of all the activities mentioned above is one of them. So whenever you deal with pages (creating/moving/renaming/deleting), first check the PageManager API before you start using the JCR or Sling API.