Handling DELETEs which flush the dispatcher cache

- Sensei Martin

I have been working on the following problemette, which was posted to the DAY-CQ mailing list on Google Groups:

Hi CQ Community,

Does anyone know how to stop the dispatcher from invalidating on a DELETE command under a path?

The reason I ask is that we have a lot of user-generated content which is being reverse replicated. When the UGC is moved, for security, from /content/usergenerated to /content/mysite/blah/blah, the /content/usergenerated/... node is deleted on the publish server. Each of these delete commands triggers the flush agent.

I have tried defining a rep:policy to deny jcr:all for a user on /content/usergenerated/. This works for node additions, but deletions are not recognised, so I cannot stop it there.

I have tried altering the /invalidate section of the dispatcher.any file, to no avail. Is this section defining which objects get invalidated, rather than which objects trigger an invalidation?

I also noticed the following in the dispatcher release notes, which makes me think that invalidation on delete might be hard-wired ...

Issues resolved in 4.0.5:

25169 - Support flush on every write

Any help would be greatly appreciated!

...
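For what it's worth, the dispatcher documentation describes the /invalidate section as defining which cached files are considered stale when a flush request arrives, not which repository events trigger the flush agent. A typical section, with purely illustrative glob rules, looks something like this:

```
/invalidate
  {
  # deny everything by default ...
  /0000
    {
    /glob "*"
    /type "deny"
    }
  # ... then mark cached HTML pages stale on activation
  /0001
    {
    /glob "*.html"
    /type "allow"
    }
  }
```

So tightening these globs changes what gets dropped from the cache, but (consistent with the question above) not whether a DELETE fires the flush in the first place.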

---------

Read the complete post at My Day CQ Blog.

Disk space growing on the CQ author server?

- Sensei Martin

If disk space on the CQ5 author server is growing at 1-2 GB per day, check the filesystem to see where the growth is.
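A quick way to do that is to list the largest entries under the repository home so an oversized subdirectory (such as the "journal" directory mentioned below) stands out. This is just a generic diagnostic sketch; the install path in the comment is an assumed typical CQ5 location, not a given.

```shell
# List the largest entries under REPO_HOME, biggest first.
# REPO_HOME defaults to the current directory; adjust to your install,
# e.g. /opt/cq5/crx-quickstart/repository (assumed typical path).
REPO_HOME="${REPO_HOME:-.}"
du -sk "$REPO_HOME"/* 2>/dev/null | sort -rn | head -10
```

Whichever directory tops the list is where to focus next.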

If you find that the "journal" directory has grown to gigabytes, check out this article: http://dev.day.com/content/kb/home/Crx/Troubleshooting/JournalTooMuchDiskSpace.html.

Symptoms

With the default FileJournal configuration in place, many journal log files will be created over time, depending on the activity on the repository. This may eventually cause disk-space and performance problems in applications that use CRX.
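For orientation, the journal is configured via a Journal element in repository.xml. The sketch below follows Apache Jackrabbit's FileJournal, which CRX's journal handling is based on; the class name and the maximumSize value are assumptions here, so check the KB article above for the settings that apply to your CRX version.

```
<!-- Sketch only: Journal element in repository.xml (Jackrabbit-style).
     Class name and values are assumptions; see the linked KB article. -->
<Journal class="org.apache.jackrabbit.core.journal.FileJournal">
  <param name="directory" value="${rep.home}/journal"/>
  <!-- cap the size of each journal log file, in bytes -->
  <param name="maximumSize" value="104857600"/>
</Journal>
```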

...

-----

Read the full post at this URL.

Reverse Replication woes – solved

- Sensei Martin

Hot off the press/keyboard (i.e. not fully tested). With the help of an Adobe support engineer in Basel and an on-site Adobe consultant, we discovered the root cause of the reverse replication problem.

Namely: when a user voted in a poll, the new vote AND ALL previous votes were being reverse replicated. This caused a MASSIVE workload on the Author, because each node in /var/replication/outbox did not contain just the one corresponding vote; it contained ALL of the votes, including the new one. This explains why the Author would take 20 minutes to process just 10 nodes in the outbox.

...

-----------

Read the complete post at this URL.

Another useful Adobe hotfix

Adobe CQ hotfix for Replication Stabilization = 34595. Another must-have hotfix!

Reverse Replication woes

- Sensei Martin

So, in my previous post I said how wonderful FP37434 is (the replication stabilisation FP). Unfortunately, it did not solve our problem and we now have a large volume of content to reverse replicate (~50k nodes in /var/replication/outbox across all our publish servers).

We are currently facing two problems. First, when the reverse replication agent polls, the publish server with FP37434 exhibits a huge native memory leak (approx. 8 GB of native memory is claimed), causing a great deal of paging on the system.

Second, when we batch this down to only 10 items in the outbox, the Author takes 30 minutes to process those 10 nodes.

Adding extra logging for com.day.cq.replication.content.durbo at DEBUG level shows that the Author is doing valid work for the whole 30 minutes while processing just 10 nodes from the outbox.
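That extra logging can be enabled through Sling's standard logging factory configuration, which routes a named logger to its own file at DEBUG level. This is a sketch: the config file name and log file path below are assumptions, though the property names are Sling's standard ones.

```
# Sketch: OSGi config file, e.g.
# org.apache.sling.commons.log.LogManager.factory.config-replication.config
# (file name and log path are assumptions)
org.apache.sling.commons.log.level="debug"
org.apache.sling.commons.log.file="logs/replication-debug.log"
org.apache.sling.commons.log.names=["com.day.cq.replication.content.durbo"]
```

Keeping this on its own file stops the DEBUG noise from swamping error.log while the outbox is being drained.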

...

---------

Read the complete post at this URL.

Replication Stabilization hotfix

- Sensei Martin

If the flush agent on a publish server stops working, then you probably need cq-5.3.0-featurepack-37434. This feature pack is a nice cumulative one, so no painful installation of dependencies (phew)! And it fixes a LOT of bugs, mainly around stabilizing the replication services. We tried installing feature pack 37434 via the CRX Package Manager but it broke the instance, in that it would just serve up 404 pages. However, following the procedure below, we were able to install the feature pack.

...

---------

Read the full blog post at this URL.