Performance tests modelling (part 2)

This is the second blog post in the series about performance test modelling. You can find the overview of this series and links to all its articles in the post “Performance tests modelling (part 1)“.

In this blog post I want to cover the aspect of “concurrent users”: what it means in the context of a performance test and why it’s important to clearly understand its impact.

Concurrent users is an often-used measure to indicate the load put on a system: it expresses how many users are using that system at the same time. For that reason many performance tests state as their quantitative requirement: “The system should be able to handle 200 concurrent users”. While that seems to be a good definition at first sight, it leaves many questions open:

  • What does “concurrent” mean?
  • And what does “user” mean?
  • Are “200 concurrent users” enough?
  • Do we always have “200 concurrent users”?

Definition of concurrent

Let’s start with the first question: What does “concurrent” really mean on a technical level? How can we measure that our test indeed simulates “200 concurrent users” and not just 20 or 1000?

  • Are there any server-side sessions which we can count and which directly give us this number? And do we set up our test in a way to hit that number?
  • Or do we have to rely on a vaguer definition like “users are considered concurrent when they do page loads less than 5 minutes apart”? And do we design our test accordingly?

Actually it does not matter at all which definition you choose. It’s just important that you explicitly state which definition you use, and which metric you use to verify that you hit that number. This is an important decision when it comes to implementing your test.

And as a side-note: Many commercial tools have their own definition of concurrent, and here the exact definition does not matter as well, as long as you are able to articulate it.
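To make the second definition concrete, here is a minimal sketch of how such a metric could be computed from request data. The 5-minute window and the data shape are assumptions for illustration, not taken from any specific tool:

```java
import java.util.HashMap;
import java.util.Map;

public class ConcurrentUsers {
    // A user counts as "concurrent" at time t if their most recent page load
    // happened within this window (here: 5 minutes) before t.
    static final long WINDOW_MILLIS = 5 * 60 * 1000;

    // lastSeen: userId -> timestamp (millis) of the user's most recent page load
    static int concurrentAt(long t, Map<String, Long> lastSeen) {
        int count = 0;
        for (long ts : lastSeen.values()) {
            if (ts <= t && t - ts < WINDOW_MILLIS) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        Map<String, Long> lastSeen = new HashMap<>();
        lastSeen.put("alice", 0L);       // page load at t = 0
        lastSeen.put("bob", 60_000L);    // page load at t = 1 min
        lastSeen.put("carol", 600_000L); // page load at t = 10 min
        // At t = 2 min, alice and bob are concurrent; carol has not arrived yet.
        System.out.println(concurrentAt(120_000L, lastSeen)); // prints 2
    }
}
```

Whatever window you pick, the point is that the definition becomes executable: you can run it over your access logs and verify that your test really produces the number of concurrent users it claims.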

What is a user?

The next question is about “the user” which is modeled in the test. To simplify the test and its execution, one or more “typical” user personas are created, which visit the site and perform some actions. That is definitely helpful, but it’s just that: a simplification, because otherwise our model would explode due to the sheer complexity and variety of user behavior. Also, sometimes we don’t even know what a typical “user” does on our site, because the system is brand-new.

So this is a case where we have a huge variance in the behavior of the users, which we should outline in our model as a risk: the model is only valid if the majority of the users behave more or less as we assumed.

But is this all? Do all users really perform at least 10% of the actions we assume they do?

Let’s brainstorm a bit and try to find answers for these questions:

  • Does the Google bot behave like that? And all the other search engine bots?
  • What about malware scanners which try to hit a huge list of WordPress/Drupal/… URLs on your site?
  • Other systems performing (random?) requests towards your site?

You could argue that this traffic has less or no business value, and for that reason we don’t test for it. It could also be assumed that this is just a small fraction of the overall user traffic and can be ignored. But that is just an assumption, nothing more. You just assume that it is irrelevant. But often these requests are not irrelevant, not at all.

I encountered cases where it was not the “normal users” who brought down a system, but rather this non-normal type of “user”. One example: the custom 404 handler was very slow, so the basic undocumented assumption “we don’t need to care about 404s, as they are very fast” was violated and brought down the site. All performance tests passed, but the production system failed nevertheless.

So you need to think about “user” in a very broad sense. And even if you don’t implement the constant background noise of the internet in your performance test, you should list it as a factor. If you know that a lot of this background noise will trigger an HTTP status code 404, you are more likely to check that this 404 handler is fast.

Are “200 concurrent users” enough?

One piece of information every performance test has is the number of concurrent users which the system must be able to handle. But even if we assume that “concurrent” and “users” are both well defined, is this enough?

First, what data is this number based on? Is it based on data derived from another system, which the new system should replace? That’s probably the best data you can get. Or, when you build a new system, is it based on good marketing data (which would be okay-ish), on assumptions about the expected usage, or just on numbers we would like to see (because we assume that a huge number of concurrent users means a large audience and a high business value)?

This is probably the topic which will be discussed the most. But the number, and the way that number is determined, should be challenged and vetted, because it’s one of the cornerstones of the whole performance test model. It does not make sense to build a high-performance and scalable system only to find out afterwards that the business numbers were grossly overrated, and that a smaller and cheaper solution would have delivered the same results.

What about time?

A more important aspect, which is often overlooked, is timing: how many users are working on the site at any given moment? Do you need to expect the maximum number 8 hours every day, or just during the peak days of the year? Do you have more or less constant usage, or usage only during business hours in Europe?

This heavily depends on the type of your application and the distribution of your audience. If you build an intranet site for a company located only in Europe, the usage during the night is pretty much “zero”; it will start to increase at 0600 in the morning (probably the Germans going to work early :-)), hit the maximum usage between 09 and 16 o’clock and go back to zero at 22 o’clock at the latest. The contrast to it is a site visited world-wide by customers, where we can expect a higher and almost flat line, of course with variations depending on the number of people being awake.
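Such a daily profile can be written down explicitly in the test model. The following sketch encodes the intranet curve described above; the hour boundaries follow the text, while the absolute user numbers are made-up placeholders:

```java
public class IntranetLoadProfile {
    // Expected concurrent users for a European intranet, by hour of day (0-23).
    // Boundaries (ramp-up from 06:00, peak 09:00-16:00, zero after 22:00)
    // follow the scenario in the text; the absolute numbers are illustrative.
    static int expectedUsers(int hour, int peakUsers) {
        if (hour < 6 || hour >= 22) return 0;             // night: nobody online
        if (hour >= 9 && hour < 16) return peakUsers;     // core business hours
        if (hour < 9)  return peakUsers * (hour - 5) / 4; // ramp-up 06:00-09:00
        return peakUsers * (22 - hour) / 6;               // ramp-down 16:00-22:00
    }

    public static void main(String[] args) {
        for (int h = 0; h < 24; h++) {
            System.out.printf("%02d:00 -> %d users%n", h, expectedUsers(h, 200));
        }
    }
}
```

Writing the profile down like this forces the discussion about when the peak actually happens, instead of silently testing a flat 24/7 load.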

This influences your tests as well, because in both cases you don’t need to simulate spikes, meaning a 500% increase of users within 5 minutes. On the other hand, if you plan for large marketing campaigns addressing millions of users, this might be exactly the situation you need to plan and test for. Not to mention if you book a slot during the Super Bowl break.

Why is this important? Because you only need to test scenarios which you expect to see in production, and you can ignore scenarios which don’t have any value for you. For example, it’s a waste of time and investment to test for a sudden spike in the above-mentioned intranet case for the European company, while for marketing campaigns it’s essential to test a scenario where such a spike comes on top of the normal traffic.

Summary

“N concurrent users” by itself is not much information; while it can serve as input, your performance test model should contain a more detailed understanding of that definition and what it means for the performance test. Otherwise you will focus on just a given number of users of this idealized type and ignore every other scenario and case.

In the next blog post I will cover how the system and the test data itself will influence the result of the performance test.

Sling Model Exporter & exposing ResourceResolver information

Welcome to 2024. I will start the new year with a small piece of advice regarding Sling Models, which I hope you can implement very easily on your side.

The Sling Model Exporter is based on the Jackson framework, and it can serialize an object graph, with the root being the requested Sling Model. For that it recursively serializes all public & protected members and the return values of all simple getters. Properly modeled, this works quite well, but small errors can have large consequences. While missing data is often quite obvious (if the JSON powers an SPA, you will find it not working properly), too much data being serialized is spotted less frequently (normally not at all).

I am currently exploring options to improve performance, and I am a big fan of the ResourceResolver.getPropertyMap() API to implement a per-ResourceResolver cache. While testing such a potential improvement I found customer code in which the ResourceResolver is serialized into JSON via the Sling Model Exporter. In that case the code looked like this:

@Model(adaptables = Resource.class)
public class MyModel {

  @Self
  Resource resource;

  ResourceResolver resolver;

  @PostConstruct
  public void init() {
    resolver = resource.getResourceResolver();
  }
}

(see this good overview at Baeldung of the default serialization rules of Jackson.)

And that’s bad in 2 different aspects:

  • Security: The serialized ResourceResolver object contains, next to the data returned by the public getters (e.g. the search paths, the userId and potentially other interesting data), also the complete propertyMap. And this serialized cache is probably nothing you want to expose to the consumers of this JSON.
  • Exceptions: If the getPropertyMap() cache contains instances of classes which are not publicly exported (that means these class definitions are hidden within some implementation packages), you will encounter ClassNotFound exceptions during serialization, which will break the export. And instead of JSON you get an internal server error or a partially serialized object graph.

In short: it is not a good idea to serialize a ResourceResolver. And honestly, I have not found a reason why this should be possible at all. So right now I am a bit hesitant to use the propertyMap as a cache, especially in contexts where the Sling Model Exporter might be used. And that blocks me from working on some interesting performance improvements 😦

To unblock this situation, we have introduced a 2-step mechanism, which should help to overcome it:

  1. In the latest AEM as a Cloud Service release 14697 (both in the cloud as well as in the SDK) a new WARN message has been added when your model definition causes a ResourceResolver to be serialized. Search the logs for this message: “org.apache.sling.models.jacksonexporter.impl.JacksonExporter A ResourceResolver is serialized with all its private fields containing implementation details you should not disclose. Please review your Sling Model implementation(s) and remove all public accessors to a ResourceResolver.”
     The message also contains a reference to the request path where this is happening, so it should be easy to identify the Sling Model class which triggers this serialization and change that piece of code so the ResourceResolver is not serialized anymore. Note that the above message is just a warning; the behavior remains unchanged.
  2. As a second measure, functionality has been implemented which allows blocking the serialization of a ResourceResolver via the Sling Model Exporter completely. Enabling this is a breaking change for all AEM as a Cloud Service customers (even if I am 99.999% sure that it won’t break any functionality), and for that reason we cannot enable this change on the spot. But at some point this step is necessary to guarantee that the 2 problems listed above can never happen.

Right now the first step is enabled, and you will see this log message. If you see it, I encourage you to adapt your code (the Core Components should be safe) so ResourceResolvers are no longer serialized.

In parallel we need to implement step 2; right now the planning is not done yet, but I hope to activate step 2 some time later in 2024 (not before the middle of the year). Before this is done, there will be formal announcements in the AEM release notes. And I hope that with this blog post and the release notes all customers will have adapted their implementations, so that flipping this switch will not change anything.

Update (January 19, 2024): There is now a piece of official AEM documentation covering this situation as well.

3 rules how to use an HttpClient in AEM

Many AEM applications consume data from other systems, and in the last decade the protocol of choice turned out to be HTTP(S). There are a number of very mature HTTP clients out there which can be used together with AEM. The most frequently used variant is the Apache HttpClient, which is shipped with AEM.

But although the HttpClient is quite easy to use, I came across a number of problems, many of which resulted in service outages. In this post I want to list the 3 biggest mistakes you can make when you use the Apache HttpClient. While I observed the results in AEM as a Cloud Service, the underlying effects are the same on-prem and in AMS, even if the resulting symptoms can be a bit different.

Reuse the HttpClient instance

I often see that an HttpClient instance is created for a single HTTP request, and in many cases it’s not even closed properly afterwards. This can lead to these consequences:

  • If you don’t close the HttpClient instance properly, the underlying network connection(s) will not be closed properly but eventually time out. Until then the network connections stay open. If you are using a proxy with a connection limit (many proxies have one), this proxy can reject new requests.
  • If you re-create an HttpClient for every request, the underlying network connection will get re-established every time, with the latency of the 3-way handshake.

The reuse of the HttpClient object and its state is also recommended by its documentation.

The best way to make that happen is to wrap the HttpClient into an OSGi service: create it on activation and close it when the service is deactivated.

Set aggressive connection and read timeouts

Especially when an outbound HTTP request is executed within the context of an AEM request, performance really matters. Every millisecond spent in that external call makes the AEM request slower. This increases the risk of exhausting the Jetty thread pool, which then leads to non-availability of that instance, because it cannot accept any new requests. I have often seen AEM CS outages because a backend was responding slowly or not at all. All requests should finish quickly, and in case of errors they must also return fast.

That means timeouts should not exceed 2 seconds (personally I would prefer even 1 second). And if your backend cannot respond that fast, you should reconsider its fitness for interactive traffic, and try not to connect to it in a synchronous request.
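The post talks about the Apache HttpClient, but for a self-contained illustration here is how such aggressive timeouts look with the JDK’s built-in java.net.http.HttpClient; the 1-second connect and 2-second request timeouts mirror the recommendation above, and the backend URL is a placeholder:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.time.Duration;

public class TimeoutExample {
    // One shared client instance (rule 1): the connect timeout applies to the
    // TCP/TLS handshake of every request made through this client.
    static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(1))
            .build();

    public static void main(String[] args) {
        // Per-request timeout (rule 2): covers the whole exchange; the request
        // fails with an HttpTimeoutException if no response arrives in time.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://backend.example.com/api")) // placeholder backend
                .timeout(Duration.ofSeconds(2))
                .GET()
                .build();

        System.out.println(CLIENT.connectTimeout().orElseThrow());
        System.out.println(request.timeout().orElseThrow());
    }
}
```

The Apache HttpClient offers the equivalent settings via its RequestConfig (connect, socket and connection-request timeouts); the important point is that none of them are left at “infinite”.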

Implement a degraded mode

When your backend application responds slowly, returns errors or is not available at all, your AEM application should react accordingly. I have seen a number of cases where any problem on the backend had an immediate effect on the AEM application, often resulting in downtimes, because either the application was not able to handle the results of the HttpClient (so the response rendering failed with an exception), or because the Jetty thread pool was completely consumed by those requests.

Instead, your AEM application should be able to fall back into a degraded mode, which allows you to display at least a message that something is not working. In the best case the rest of the site continues to work as usual.
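A degraded mode can start as small as catching the failure at the call site and substituting a fallback value, so the rendering never sees the exception. A minimal sketch (the helper name and fallback text are illustrative, not an AEM API):

```java
import java.util.function.Supplier;

public class DegradedMode {
    // Runs the backend call; on any failure (timeout, connection refused,
    // an error status mapped to an exception) the fallback is returned
    // instead, so the page still renders with a "temporarily unavailable" note.
    static <T> T callWithFallback(Supplier<T> backendCall, T fallback) {
        try {
            return backendCall.get();
        } catch (RuntimeException e) {
            // Log and degrade instead of letting the response rendering fail.
            System.err.println("Backend unavailable, serving fallback: " + e.getMessage());
            return fallback;
        }
    }

    public static void main(String[] args) {
        String teaser = callWithFallback(
                () -> { throw new RuntimeException("connect timed out"); },
                "Recommendations are temporarily unavailable.");
        System.out.println(teaser); // prints the fallback text
    }
}
```

In a real application you would add caching of the last good response or a circuit breaker on top, but even this simple pattern decouples the availability of your site from the availability of the backend.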

If you implement these 3 rules when you do your backend connections, and especially if you test the degraded mode, your AEM application will be much more resilient when it comes to network or backend hiccups, resulting in less service outages. And isn’t that something we all want?

Recap: AdaptTo 2023

It was adaptTo() time again, for the first time back in an in-person format since 2019. And it’s definitely much different from the virtual formats we experienced during the pandemic: more personal, and allowing me to get away from the daily work routine. I remember that in 2020 and 2021 I constantly had work-related topics (mostly Slack) on the other screen while I was attending the virtual conference. That’s definitely different when you are at the venue 🙂

And it was great to see all the people again. Many of them have been part of the community for years, but there were also many new faces. Nice to see that the community can still attract new people, although I think that the golden time of backend-heavy web development is over. And that was reflected on stage as well, with Edge Delivery Services being quite a topic.

As in past years, the conference itself isn’t that large (this year maybe 200 attendees), which gives you plenty of chances to get in touch and chat about projects, new features, bugs and everything else you can imagine. The location is nice, and Berlin gives you plenty of opportunities to go out for dinner. So while 3 days of conference can definitely be exhausting, I would have liked to share many more dinners with attendees.

I got the chance to come on stage again with one of my favorite topics: Performance improvement in AEM, a classic backend topic. According to the talk feedback, people liked it 🙂
Also, the folks of adaptTo() recorded all the talks, and you can find both the recording and the slide deck on the talk’s page.

The next call for papers is already announced to start in February ’24, and I will definitely submit a talk again. Maybe you as well?

AEM CS & dedicated egress IP

Many customers of AEM as a Cloud Service are used to performing a first level of access control by allowing just a certain set of IP addresses to access a system. For that reason they want their AEM instances to use a static IP address or network range when accessing their backend systems. AEM CS supports this with the feature called “dedicated egress IP address“.

But when testing that feature, there is often the feedback that it is not working, and that the incoming requests on the backend systems come from a different network range. This is expected, because this feature does not change the default routing of outgoing traffic for the AEM instances.

The documentation also says

Http or https traffic will go through a preconfigured proxy, provided they use standard Java system properties for proxy configurations.

The thing is: if traffic is supposed to use this dedicated egress IP, you have to explicitly make it use this proxy. This is important, because by default not all HTTP clients do this.

For example, in the Apache HttpClient library 4.x, the HttpClients.createDefault() method does not read the system properties related to proxying, but HttpClients.createSystem() does. The same holds for java.net.http.HttpClient, for which you need to configure the Builder to use a proxy. Also okhttp requires you to configure the proxy explicitly.
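For the java.net.http.HttpClient case, here is a sketch of the two ways to make the client proxy-aware: picking up the standard proxy system properties via the default ProxySelector, or configuring a proxy explicitly (the host and port are placeholders):

```java
import java.net.InetSocketAddress;
import java.net.ProxySelector;
import java.net.http.HttpClient;

public class ProxyAwareClient {
    // Option 1: honor the standard proxy system properties
    // (http.proxyHost, http.proxyPort, ...) via the default ProxySelector.
    static final HttpClient SYSTEM_PROXY_CLIENT = HttpClient.newBuilder()
            .proxy(ProxySelector.getDefault())
            .build();

    // Option 2: point at a proxy explicitly; host and port are placeholders.
    static final HttpClient EXPLICIT_PROXY_CLIENT = HttpClient.newBuilder()
            .proxy(ProxySelector.of(new InetSocketAddress("proxy.example.com", 3128)))
            .build();

    public static void main(String[] args) {
        System.out.println(SYSTEM_PROXY_CLIENT.proxy().isPresent());   // true
        System.out.println(EXPLICIT_PROXY_CLIENT.proxy().isPresent()); // true
    }
}
```

Without one of these two builder calls, the client ignores the preconfigured proxy and the traffic leaves through the default route, i.e. not via the dedicated egress IP.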

So if requests from your AEM instance are coming from the wrong IP address, check that your code is actually using the configured proxy.

AEM article review December 2022

I have been doing this blog for quite some time now (the first article in this blog dates back to December 2008! That was the time of CQ 5.0! OMG), and of course I am not the only one writing on AEM. Actually the number of articles produced every month is quite large, but I am often a bit disappointed, because many just reproduce some very basic aspects of AEM which can be found in many places. The amount of new content describing aspects which have barely been covered by other blog posts or the official product documentation is small.

For myself, I try to focus on such topics, offer unique views on the product and provide recommendations on how things can be done (better), all based on my personal experience. I think that this type of content is appreciated by the community, and I get good feedback on it. To encourage the broader community to come up with more content covering new aspects, I will do a little experiment and promote a few selected articles of others. I think that these articles show new aspects or offer a unique view on certain areas of AEM.

Depending on the feedback I will decide if I continue with this experiment. If you think that your content also offers new views, uncovers hidden features or suggests best practices, please let me know (see my contact data here). I will judge these proposals by the above-mentioned criteria. But of course it will still be my personal decision.

Let’s start with Theo Pendle, who has written an article on how to write your own custom injector for Sling Models. The example he uses is a really good one, and he walks you through all the steps and explains very well why it is all necessary. I like the general approach of Theo’s writing and consider the case of safely injecting cookie values a valid one for such an injector. But in general I think that there are not many other cases out there where it makes sense to write custom injectors.

Also on a technical level is John Mitchell’s article “Using Sling Feature Flags to Manage Continuous Releases“, published on the Adobe Tech Blog. He introduces Sling Features and how you can use them to implement feature flags. That’s something I have not yet seen used in the wild, and the documentation on it is quite sparse. But he gives a good starting point, although a more practical example would be great 🙂

The third article I like the most. Kevin Nenning writes on “CRXDE Lite, the plague of AEM“. He outlines why CRXDE Lite has gained such a bad reputation within Adobe that disabling it has been part of the go-live checklist for quite some time. But on the other hand he loves the tool, because it’s a great way for quick hacks on your local development instance and a good general read-only tool. This is an article every AEM developer should read.
And in case you haven’t seen it yet: AEM as a Cloud Service offers the repository browser in the developer console for a read-only view on your repo!

And finally there is Yuri Simione (an Adobe AEM champion), who published 2 articles discussing the question “Is AEM a valid Content Services Platform?” (article 1, article 2). He discusses an implementation based on Jackrabbit/Oak and Sling (but not AEM) to replace an aging Documentum system, and he offers an interesting perspective on the future of Jackrabbit. Definitely a read if you are interested in a broader use of AEM and its foundational pieces.

That’s it for December. I hope you enjoy these articles as much as I did, and that you can learn from them and get some new inspiration and insights.

OSGi DS & Metatype & SCR properties

When I wrote the last blog post on migrating to OSGi annotations, I already mentioned that the property annotations you used with the SCR annotations cannot be migrated 1:1; instead you have to decide if you add them to the OCD or as properties to the @Component annotation.

I was reminded of that when I worked on fixing the process label properties for the workflow steps contained in ACS AEM Commons (PR 1645). And the more I think about it, the more I get the impression that this might be causing some confusion in the adoption of the OSGi annotations.

Excursion to the OSGi specification

First of all, in OSGi there are 2 specifications which are important in this case: Declarative Services (DS, chapter 112 in the OSGi R6 enterprise specification, sometimes also referred to as the Service Component Model) and Metatype Services (chapter 105 in the OSGi R6 enterprise specification). See the OSGi website for downloads (unfortunately there is no HTML version available for R6, only PDFs).

Declarative Services deals with the services and components, their relations and the required things around it. Quoting from the spec (112.1):

The service component model uses a declarative model for publishing, finding and binding to OSGi services. This model simplifies the task of authoring OSGi services by performing the work of registering the service and handling service dependencies. This minimizes the amount of code a programmer has to write; it also allows service components to be loaded only when they are needed.

Metatype Services care about the configuration of services. Quoting from chapter 105.1:

The Metatype specification defines interfaces that allow bundle developers to describe attribute types in a computer readable form using so-called metadata. The purpose of this specification is to allow services to specify the type information of data that they can use as arguments. The data is based on attributes, which are key/value pairs like properties.

Ok, how does this relate to the @Property annotation in the Felix SCR annotations? I would say that this annotation cannot be clearly attributed to either DS or Metatype; it served both.

You could add @Properties as an annotation to the class, or you could add an @Property to a field. You could add an attribute metatype=true to the annotation, and then the property appeared in the OSGi console (then it was a “real” metatype property in the sense of the Metatype specification).

But either way, all the properties were provided through the ComponentContext.getProperties() method; that means in reality it never really made a difference how you defined the property, whether at the class level or on a field, and whether you added metatype=true or not. That was nice and most of the time also very convenient.

This changed with the OSGi annotations, because now the properties are described in a class annotated with the @ObjectClassDefinition annotation. Type-safe and named. But here it’s clearly a Metatype thing (it’s configuration), and it cannot be used for the Declarative Services part (the services and components thing) in parallel. Now you have to make the decision: is it a configuration item (something I use in the code) or is it a property which influences the component itself?

As an example, with SCR annotations you could write

@Component @Service
@Properties({
  @Property(name = "sling.servlet.resourceTypes", value = "project/components/header"),
  @Property(name = "sling.servlet.selectors", value = "foo")
})
public class HeaderServlet extends SlingSafeMethodsServlet { ...

Now, as these properties were visible via Metatype as well, you could also overwrite them using some OSGi configuration and register the servlet on a different selector just by configuration. Or you could read these properties from the ComponentContext. That was not a problem (and hopefully no one ever really used it …).

With the OSGi annotations this is no longer possible. You have configuration and component properties. You can change the configuration, but not the component properties anymore.

What does that mean for you?

Most importantly, don’t change all properties blindly into properties of the @ObjectClassDefinition object. For example, the label of a workflow step is not configuration, but rather a component property. That means there you should use something like this:

@Component(property = {
  "process.label=My process label"
})
public class MyWorkflowProcess implements WorkflowProcess { ...

Disclaimer: I am not an OSGi expert; this is just my impression from dealing with this stuff a lot. Carsten, David, feel free to correct me 🙂


Why you need to start EnsureOakIndex manually

Struggling with new index definitions is one of the most … annoying activities in AEM development. Because dropping index definitions with content packages directly into /oak:index is always a bit problematic (you need to set the property “reindex=true” in your package, and what happens if you deploy the same package twice? Reindexing starts again, although nothing has changed), I greatly improved the EnsureOakIndex functionality in ACS AEM Commons a while back. I needed a reliable and automated way to add indexes to a huge number of AEM instances. It works quite well and is able to handle our index modification requirements.

Today I got the “complaint” from one of my colleagues that they always need to restart the bundle to start this feature. Well, there’s a good reason for it. And I have already had to explain it a few times, so I thought it’s a good idea to post it here.

A production deployment is typically time-critical. That means it should complete within a certain amount of time (typically within 1 hour or so). If you add an index to the system and the reindexing for this new index is enforced immediately at deployment time, it can have a massive impact on the deployment itself and the time needed for it. That’s mostly because of the impact of reindexing on the overall system performance; depending on the index and the repository size, this can take from seconds to days. That means that an immediate start of reindexing has a good chance to break your schedule. Therefore the indexes are not applied automatically. But you can always trigger the process of applying the changed index definitions via JMX (search for “Ensure Oak Index”).

But I understand that this makes the handling of new indexes much more cumbersome for some people. If you are interested in a (configurable) feature to apply new index definitions immediately, drop me a comment here or (better) raise a feature request on the ACS AEM Commons GitHub page. Let’s see if we can make it happen 🙂


AEM technical conferences in 2018

In the last years I attended a few conferences in the AEM space and was never disappointed. At all of them I attended brilliant talks and presentations, met a lot of people and in general enjoyed well-organized conferences.

In 2018 I will again try to visit at least one conference, and for convenience I have collected here the AEM conferences I know of. In case I missed one, please leave a comment.
I ignore the local meetups here, because typically you won’t take a flight to attend such a meetup 🙂

In no particular order I found these conferences covering technical AEM topics in 2018:

  • Adobe Summit US (although it covers muuuuuuch more than just AEM) in Las Vegas, NV
  • Immerse 2018, a virtual conference
  • Evolve in San Diego, CA
  • AdaptTo in Potsdam, Germany; in 2018 at a larger venue than in the years before

At the moment I can already announce that I will participate in the Adobe Summit (again giving a lab session). I am not sure yet if I will present at Immerse.

If you miss any conference with AEM related content, just speak up, I’ll add it to the list.

What’s new in Sling with AEM 6.3

AEM 6.3 is out. Besides the various product enhancements and new features, it also includes updated versions of many open source libraries. So it’s your chance to have a closer look at the version numbers and find out what changed. And again, quite a lot did.

For your convenience I have collected the version information of the Sling bundles which are part of the initially released version (GA). Please note that with Service Packs and Cumulative Hotfixes new bundle versions might be introduced.
If a version number is marked red, it has changed compared to the previous AEM version. Bundles which are not present in a given version are marked with a dash (-).

For the versions 5.6 to 6.1 please see this older posting of mine.

Symbolic name of the bundle AEM 6.1 (GA) AEM 6.2 (GA) AEM 6.3 (GA)
org.apache.sling.adapter 2.1.4 2.1.6 (Changelog) 2.1.8 (Changelog)
org.apache.sling.api 2.9.0 2.11.0 (Changelog) 2.16.2 (Changelog)
org.apache.sling.atom.taglib 0.9.0.R988585 0.9.0.R988585 0.9.0.R988585
org.apache.sling.auth.core 1.3.6 1.3.14 (Changelog) 1.3.24 (Changelog)
org.apache.sling.bgservlets 0.0.1.R1582230 1.0.6 (Changelog)
org.apache.sling.bundleresource.impl 2.2.0 2.2.0 2.2.0
org.apache.sling.caconfig.api 1.1.0
org.apache.sling.caconfig.impl 1.2.0
org.apache.sling.caconfig.spi 1.2.0
org.apache.sling.commons.classloader 1.3.2 1.3.2 1.3.8 (Changelog)
org.apache.sling.commons.compiler 2.2.0 2.2.0 2.3.0 (Changelog)
org.apache.sling.commons.contentdetection 1.0.2 1.0.2
org.apache.sling.commons.fsclassloader 1.0.0 1.0.2 (Changelog) 1.0.4 (Changelog)
org.apache.sling.commons.html 1.0.0 1.0.0 1.0.0
org.apache.sling.commons.json 2.0.10 2.0.16 (Changelog) 2.0.20 (Changelog)
org.apache.sling.commons.log 4.0.2 4.0.6 (Changelog) 5.0.0 (Changelog)
org.apache.sling.commons.log.webconsole 1.0.0
org.apache.sling.commons.logservice 1.0.4 1.0.6 (Changelog) 1.0.6
org.apache.sling.commons.metrics 1.0.0 (Changelog) 1.2.0 (Changelog)
org.apache.sling.commons.mime 2.1.8 2.1.8 2.1.10 (Changelog)
org.apache.sling.commons.osgi 2.2.2 2.4.0 (Changelog) 2.4.0
org.apache.sling.commons.scheduler 2.4.6 2.4.14 (Changelog) 2.5.2 (Changelog)
org.apache.sling.commons.threads 3.2.0 3.2.6 (Changelog) 3.2.6
org.apache.sling.datasource 1.0.0 1.0.0 1.0.2 (Changelog)
org.apache.sling.discovery.api 1.0.2 1.0.2 1.0.4 (Changelog)
org.apache.sling.discovery.base 1.1.2 (Changelog) 1.1.6 (Changelog)
org.apache.sling.discovery.commons 1.0.10 (Changelog) 1.0.18 (Changelog)
org.apache.sling.discovery.impl 1.1.0
org.apache.sling.discovery.oak 1.2.6 (Changelog) 1.2.16 (Changelog)
org.apache.sling.discovery.support 1.0.0 1.0.0 1.0.0
org.apache.sling.distribution.api 0.1.0 0.3.0 0.3.0
org.apache.sling.distribution.core 0.1.1.r1678168 0.1.15.r1733486 0.2.6
org.apache.sling.engine 2.4.2 2.4.6 (Changelog) 2.6.6 (Changelog)
org.apache.sling.event 3.5.5.R1667281 4.0.0 (Changelog) 4.2.0 (Changelog)
org.apache.sling.event.dea 1.0.0 1.0.4 (Changelog) 1.1.0 (Changelog)
org.apache.sling.extensions.threaddump 0.2.2
org.apache.sling.extensions.webconsolesecurityprovider 1.1.4 1.1.6 (Changelog) 1.2.0 (Changelog)
org.apache.sling.featureflags 1.0.0 1.0.2 (Changelog) 1.2.0 (Changelog)
org.apache.sling.fragment.ws 1.0.2 1.0.2 1.0.2
org.apache.sling.fragment.xml 1.0.2 1.0.2 1.0.2
org.apache.sling.hapi 1.0.0 1.0.0
org.apache.sling.hc.core 1.2.0 1.2.2 (Changelog) 1.2.4 (Changelog)
org.apache.sling.hc.webconsole 1.1.2 1.1.2 1.1.2
org.apache.sling.i18n 2.4.0 2.4.4 (Changelog) 2.5.6 (Changelog)
org.apache.sling.installer.console 1.0.0 1.0.0 1.0.2 (Changelog)
org.apache.sling.installer.core 3.6.4 3.6.8 (Changelog) 3.8.6 (Changelog)
org.apache.sling.installer.factory.configuration 1.1.2 1.1.2 1.1.2
org.apache.sling.installer.factory.subsystems 1.0.0
org.apache.sling.installer.provider.file 1.1.0 1.1.0 1.1.0
org.apache.sling.installer.provider.jcr 3.1.16 3.1.18 (Changelog) 3.1.22 (Changelog)
org.apache.sling.javax.activation 0.1.0 0.1.0 0.1.0
org.apache.sling.jcr.api 2.2.0 2.3.0 (Changelog) 2.4.0 (Changelog)
org.apache.sling.jcr.base 2.2.2 2.3.2 (Changelog) 3.0.0 (Changelog)
org.apache.sling.jcr.compiler 2.1.0
org.apache.sling.jcr.contentloader 2.1.10 2.1.10 2.1.10
org.apache.sling.jcr.davex 1.2.2 1.3.4 (Changelog) 1.3.8 (Changelog)
org.apache.sling.jcr.jcr-wrapper 2.0.0 2.0.0 2.0.0
org.apache.sling.jcr.registration 1.0.2 1.0.2 1.0.2
org.apache.sling.jcr.repoinit 1.1.2
org.apache.sling.jcr.resource 2.5.0 2.7.4.B001 (Changelog) 2.9.2 (Changelog)
org.apache.sling.jcr.resourcesecurity 1.0.2 1.0.2 1.0.2
org.apache.sling.jcr.webdav 2.2.2 2.3.4 (Changelog) 2.3.8 (Changelog)
org.apache.sling.jmx.provider 1.0.2 1.0.2 1.0.2
org.apache.sling.launchpad.installer 1.2.0 1.2.2 (Changelog) 1.2.2
org.apache.sling.models.api 1.1.0 1.2.2 (Changelog) 1.3.2 (Changelog)
org.apache.sling.models.impl 1.1.0 1.2.2 (Changelog) 1.3.9.r1784960 (Changelog)
org.apache.sling.models.jacksonexporter 1.0.6
org.apache.sling.provisioning.model 1.8.0
org.apache.sling.repoinit.parser 1.1.0
org.apache.sling.resource.inventory 1.0.4 1.0.4 1.0.6 (Changelog)
org.apache.sling.resourceaccesssecurity 1.0.0 1.0.0 1.0.0
org.apache.sling.resourcecollection 1.0.0 1.0.0 1.0.0
org.apache.sling.resourcemerger 1.2.9.R1675563-B002 1.3.0 (Changelog) 1.3.0
org.apache.sling.resourceresolver 1.2.4 1.4.8 (Changelog) 1.5.22 (Changelog)
org.apache.sling.rewriter 1.0.4 1.1.2 (Changelog) 1.2.1.R1777332 (Changelog)
org.apache.sling.scripting.api 2.1.6 2.1.8 (Changelog) 2.1.12 (Changelog)
org.apache.sling.scripting.core 2.0.28 2.0.36 (Changelog) 2.0.44 (Changelog)
org.apache.sling.scripting.java 2.0.12 2.0.14 (Changelog) 2.1.2 (Changelog)
org.apache.sling.scripting.javascript 2.0.16 2.0.28 2.0.30
org.apache.sling.scripting.jsp 2.1.6 2.1.8 2.2.6
org.apache.sling.scripting.jsp.taglib 2.2.4 2.2.4 2.2.6
org.apache.sling.scripting.jst 2.0.6 2.0.6 2.0.6
org.apache.sling.scripting.sightly 1.0.2 1.0.18 1.0.32
org.apache.sling.scripting.sightly.compiler 1.0.8
org.apache.sling.scripting.sightly.compiler.java 1.0.8
org.apache.sling.scripting.sightly.js.provider 1.0.4 1.0.10 1.0.18
org.apache.sling.scripting.sightly.models.provider 1.0.0 1.0.6
org.apache.sling.security 1.0.10 1.0.18 1.1.2
org.apache.sling.serviceusermapper 1.2.0 1.2.2 1.2.4
org.apache.sling.servlets.compat 1.0.0.Revision1200172 1.0.0.Revision1200172 1.0.0.Revision1200172
org.apache.sling.servlets.get 2.1.10 2.1.14 2.1.22
org.apache.sling.servlets.post 2.3.6 2.3.8 2.3.15.r1780096
org.apache.sling.servlets.resolver 2.3.6 2.4.2 2.4.10
org.apache.sling.settings 1.3.6 1.3.8 1.3.8
org.apache.sling.startupfilter 0.0.1.Rev1526908 0.0.1.Rev1526908 0.0.1.Rev1764482
org.apache.sling.startupfilter.disabler 0.0.1.Rev1387008 0.0.1.Rev1387008 0.0.1.Rev1758544
org.apache.sling.tenant 1.0.2 1.0.2 1.1.0
org.apache.sling.tracer 0.0.2 1.0.2