Creating MBeans for your CQ5 application

-- Jörg Hoh

JMX is the de-facto standard for monitoring Java processes and applications. A lot of monitoring systems have the ability to consume this data.

By default CQ 5.5 has a number of MBeans, which offer runtime information about internal state. The most interesting ones are the MBeans for the repository state and the replication MBeans. But it isn't hard to create your own MBeans, so you can expose information about the internal state of your application to the monitoring system, or monitor resources which are critical to your application and use case.

In CQ5 we are working in an OSGi environment, so we will use one of my favorite patterns, the OSGi whiteboard pattern. We will use the JMX Whiteboard bundle of the Apache Aries project to register services with JMX. That implementation is also very short and understandable, and shows the power of the whiteboard pattern. (I already wrote a short blog entry on this last year.)

In this example I want to demonstrate it on an already existing counter, the total number of requests handled by Sling. It requires CQ 5.5, where the JMX Whiteboard bundle is already deployed by default; but if you install the JMX Whiteboard bundle yourself, you can also use older versions of CQ5.
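Just to sketch the idea before you read the full post: with the whiteboard approach you do not talk to the MBeanServer yourself, you simply register your MBean as an OSGi service (as far as I understand the Aries bundle, as a DynamicMBean service carrying a jmx.objectname property) and the whiteboard bundle takes care of registering it with JMX. The interface, class names and object name in the following minimal sketch are made up for illustration; the original post wires this up differently, against Sling's own request counter.

import java.util.Dictionary;
import java.util.Hashtable;
import java.util.concurrent.atomic.AtomicLong;

import javax.management.DynamicMBean;
import javax.management.StandardMBean;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// Hypothetical MBean interface exposing a single counter attribute.
interface RequestStatsMBean {
    long getRequestCount();
}

// Simple implementation; in a real application the counter would be
// fed by your own code (e.g. a filter counting requests).
class RequestStats implements RequestStatsMBean {
    private final AtomicLong count = new AtomicLong();

    public void increment() {
        count.incrementAndGet();
    }

    public long getRequestCount() {
        return count.get();
    }
}

public class Activator implements BundleActivator {

    private ServiceRegistration registration;

    public void start(BundleContext context) throws Exception {
        // Wrap the POJO in a StandardMBean so it can be registered
        // as a DynamicMBean service for the JMX whiteboard to find.
        StandardMBean mbean = new StandardMBean(new RequestStats(), RequestStatsMBean.class);

        Dictionary<String, Object> props = new Hashtable<String, Object>();
        // Assumed: the whiteboard uses this property as the JMX ObjectName.
        props.put("jmx.objectname", "com.example.myapp:type=RequestStats");

        registration = context.registerService(DynamicMBean.class.getName(), mbean, props);
    }

    public void stop(BundleContext context) throws Exception {
        if (registration != null) {
            registration.unregister();
        }
    }
}

In a real CQ5 bundle you would more likely use SCR to declare the service, but the plain BundleActivator keeps the sketch self-contained.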



Read the complete post at the Things on a Content Management System blog.

Author server says Loading; for a long, long, long time

- Sensei Martin

When the Author server restarts, it can look to users like it is hanging, because as soon as they click on a node in the left-hand nav tree they see "Loading" in the central pane. It just sits there saying "Loading" for a long, long, long time.

This can be caused by having lots of old Workflow instances hanging around (check /etc/workflow/instances), in which case you should make sure you install the Workflow purge tool from Adobe.
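If you want to get a feeling for how many instances have piled up before installing the purge tool, a quick way is to count the nodes below /etc/workflow/instances. The following is only a rough sketch using the plain JCR API; the class name is made up, and how you obtain the Session (and how deeply the instance folders are nested) depends on your CQ version and setup.

import javax.jcr.Node;
import javax.jcr.NodeIterator;
import javax.jcr.Session;

// Minimal sketch: counts the nodes one and two levels below
// /etc/workflow/instances to get a rough idea of how many
// workflow instances are hanging around.
public class WorkflowInstanceCounter {

    public long countInstances(Session session) throws Exception {
        Node root = session.getNode("/etc/workflow/instances");
        long count = 0;
        for (NodeIterator folders = root.getNodes(); folders.hasNext();) {
            Node folder = folders.nextNode();
            // Depending on the CQ version, instances may be grouped
            // into per-server or per-date folders below this level.
            for (NodeIterator instances = folder.getNodes(); instances.hasNext();) {
                instances.nextNode();
                count++;
            }
        }
        return count;
    }
}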

The purge tool by default will clear out any Workflow instances which are more than 14 days old (well, the version I have says > 1 day, but the code actually uses > 14 days). It can be configured to only purge COMPLETED workflows, or ALL of them. However, beware that the default configuration (in my version 1.6.1 anyway) will only purge the following workflow models :-

  • "/etc/workflow/models/dam/dam_asset_syncer_and"
  • "/etc/workflow/models/dam/update_asset"
  • "/etc/workflow/models/dam/delete_asset"
  • "/etc/workflow/models/dam/delete_dam_asset"



Read the complete post at the My Day CQ blog.

Two new best practice articles now live…

We’ve just published two new articles in the CQ best practice series:

A list of the five articles published so far is at this URL. In the days to come, we’ll post more best practices, tips, and tricks that you can apply to your work.

Stay tuned!


Read the original blog post at The Doc Fox.

CQ Cloud Manager July 2012 version released!

The July release of Adobe CQ Cloud Manager is now out! This version rolls out the following new features/enhancements:

  • Full support for Rackspace Cloud Hosting (backup, scale, and delete CQ clouds)
  • Scale Publish tiers with auto-replication
  • Support for 7 Amazon Web Services (AWS) regions and 4 instance types
  • You can now remove CQ clouds with backups
  • Some UI enhancements

Refer to the documentation for more information.


Read the original post at The Doc Fox.

Creating custom CQ email services

- Scott Macdonald

You can create a custom CQ email service that lets CQ users send email messages from a CQ web page. To create a CQ email service, you develop an OSGi bundle that uses the Java Mail API. You can also develop a JSP that uses jQuery to call the OSGi service and pass the data that is sent as an email message.

 Caption – A CQ email client

To follow along with this development article, you need to download the Java Mail API at the following URL:

The Java Mail API is used within the OSGi bundle that sends email messages when the client initiates a request. The CQ email service comprises a client (shown in the previous illustration) developed using jQuery, and an OSGi bundle.
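To give a rough idea of the sending side, here is a minimal sketch of a class that sends a plain-text message with the Java Mail API. The class name, SMTP host and addresses are placeholders invented for this example; Scott's actual OSGi service will differ in its details.

import java.util.Properties;

import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

// Hypothetical service class that sends a plain-text email via SMTP
// using the Java Mail API. Host and addresses are placeholders.
public class SimpleEmailService {

    public void sendMail(String to, String subject, String body) throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.example.com"); // placeholder SMTP server
        props.put("mail.smtp.port", "25");

        Session session = Session.getInstance(props);

        Message message = new MimeMessage(session);
        message.setFrom(new InternetAddress("noreply@example.com"));
        message.setRecipient(Message.RecipientType.TO, new InternetAddress(to));
        message.setSubject(subject);
        message.setText(body);

        // Hands the message to the SMTP transport configured above.
        Transport.send(message);
    }
}

In the full article this kind of logic sits inside an OSGi bundle and is exposed as a service, so that the jQuery-based JSP client can post the form data to it.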



Read the full post at Scott's Digital Community.

CQ 5.5: Changes to the startup

- Jörg Hoh

See here for the major changes brought with this release.

Amongst the hundreds of new features I would like to point out a single one, which is probably the most interesting feature for admins and operations people.

With CQ 5.5 the quickstart no longer starts a servlet engine with two web applications deployed in it (crx.war for CRX and launchpad.war for the OSGi container including Sling and CQ5 itself). Now that CRX has been adapted to work inside an OSGi container, it is possible to drop the artificial differentiation between CRX and the other parts of the system and handle them alike inside Apache Felix. The same applies to the CQSE servlet engine; it is now a service embedded into OSGi (so the server.log file is gone). So the quickstart starts the Felix OSGi container, which then takes care of starting the repository services, the servlet engine and all other services introduced by Sling and CQ5. This streamlines the startup process a lot. And — finally — there is only one place where you need to change the admin password.



Read the complete post on the Things on a content management system blog.

Midnight at the lost and found

- Sensei Martin

Have you worked out that the TAR optimiser does not run at midnight? Normally, the OOTB default for running the TAR optimiser is 2am - 5am. This can be changed in the repository.xml/workspace.xml files. But if you specify a start time of 00:00, it won't run. I'm sure I've posted this elsewhere, but just to reiterate: you can make the TAR optimiser run faster and do more work by reducing the optimizeSleep parameter. We've managed to get away with 0.25 without any noticeable performance impact on the live servers (CQ 5.3).


This tip was originally posted at the My Day CQ Blog.

Handling DELETEs which flush the dispatcher cache

- Sensei Martin

I have been working on the following problemette, which was posted to the DAY-CQ mailing list :-

Hi CQ Community,

Does anyone know how to stop the dispatcher invalidating on a DELETE command down a path?

The reason I ask is that we have a lot of user-generated content which is being reverse replicated. When the UGC is moved, for security, from /content/usergenerated to /content/mysite/blah/blah, the /content/usergenerated/... node is deleted on the publish server. Each of these delete commands triggers the flush agent.

I have tried defining a rep:policy to deny jcr:all on a user in /content/usergenerated/. This works for node additions, but deletions are not recognised, so I cannot stop it here.

I have tried to alter the configuration in the /invalidate section of the dispatcher.any file, to no avail. Does this section define what objects get invalidated rather than what objects trigger an invalidation?

I also noticed the following in the release notes of the dispatcher, which makes me think that invalidation on delete might be hard-wired ...

Issues resolved in 4.0.5:

25169 - Support flush on every write

Any help would be greatly appreciated!



Read the complete post at My Day CQ Blog.

Creating a Column Control in Adobe CQ

- Dan Klco

Column Controls in Adobe CQ allow authors to easily create and configure column-based layouts.  This guide shows advanced users and developers how to create and configure a column control.

Step 1: Add a Parsys to a Component JSP

Include a Paragraph System in the page's component JSP. You can use any node name for the path; the resource type is 'foundation/components/parsys':

<cq:include path="par" resourceType="foundation/components/parsys"/>

Step 2: Configure the Paragraph System

To add a Column Control to the Paragraph System, you will first need to update the design.  Select design mode on the sidekick.

Caption – Selecting Design Mode



Read the complete post on the Six Dimensions blog.

Reverse Replication woes – solved

- Sensei Martin

Hot off the press/keyboard (i.e. not fully tested). With the help of an Adobe support engineer in Basel and an on-site Adobe consultant we discovered what the root cause of the reverse replication problem was.

Namely, when a user voted in a poll, the new vote AND ALL previous votes were being reverse replicated. This caused a MASSIVE workload on the Author, because each node in /var/replication/outbox did not contain just the one corresponding vote; it actually contained ALL of the votes, including the new one. This explains why the Author would take 20 minutes to process just 10 nodes in the outbox.



Read the complete post at this URL.