Posts in Category "Application"

Testbuilder Statuses Explained

When setting up an application-level health monitor on your LTM (as described in a previous blog post), you would point it to the testbuilder diagnostic page at:

/servlet/testbuilder

As the previous article explains, ‘the testbuilder page will send back the “status-ok” string. If there is any problem with the Connect server application, then testbuilder will not report the “status-ok” string’. Expanding on this a little, the following are the actual statuses and the scenarios in which you may see each one:

STATUS_OK = 0;
STATUS_CRITICAL = 2;
STATUS_MAINT = 3;
STATUS_TEST = 4;


STATUS_OK = 0;
This means the server is fit to work (status-ok). The server status in the PPS_ENUM_DATA_HOST table is neither ‘X’, ‘M’ nor ‘T’, and the server is initialized.
This is the status load balancers should look for in their health check (a minimal example is sketched after the status descriptions below).

STATUS_CRITICAL = 2;
The server is not fit to work (status-critical). The server is not yet initialized (during startup) or has a server status of ‘X’ in the PPS_ENUM_DATA_HOST table.
This status is also returned if no connection to the database can be made.

STATUS_MAINT = 3;
The server is in maintenance mode (status-maintenance) and has a server status of ‘M’ in the PPS_ENUM_DATA_HOST table.
An active server can be put into maintenance mode and vice versa. No new meetings will be started on this server, but currently active meetings will run until they end.

STATUS_TEST = 4;
The server is in “server isolation” mode (status-testing) and has a server status of ‘T’ in the PPS_ENUM_DATA_HOST table.
This is used to put a server in a separate zone from the other servers in a cluster. It is a hosted feature that is not actively used in production.
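To make the load balancer guidance above concrete, here is a minimal sketch of the kind of check a health monitor performs against the testbuilder servlet. This is not F5-specific configuration, and the host name is a placeholder for your own Connect server:

# Minimal health-check sketch: poll /servlet/testbuilder and treat the
# server as up only when the response contains the "status-ok" string.
# The base URL is a placeholder; point it at your own Connect host.
import urllib.request

def connect_is_healthy(base_url="http://connect.example.com"):
    try:
        with urllib.request.urlopen(base_url + "/servlet/testbuilder", timeout=10) as response:
            body = response.read().decode("utf-8", errors="replace")
    except Exception:
        # Connection refused, timeout or HTTP error all count as down.
        return False
    return "status-ok" in body

if __name__ == "__main__":
    print("UP" if connect_is_healthy() else "DOWN")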

Connect Reports Never Returning Data in Firefox

The Adobe Connect Reports module is Flex based, and for longer queries (reports on courses or curricula with large enrollments, for example) a report can take many minutes to return data to the browser. In the past we worked on issues with the reporting module in which the busy cursor (spinning wheel) would spin indefinitely and never return data because the query took too long. We made adjustments to the DB views and code to improve report performance in the latest versions of Adobe Connect, and until recently this had solved the problem for users running those versions.

Recently, however, we have seen that with newer versions of the Firefox web browser the Flex-based reports once again spin indefinitely and do not return data in some instances where the queries are large. Investigating this, we determined that after a period of 5 minutes a socket write error appears in the debug log, like the one below:

[05-29 10:15:30,623] http-80-15 (INFO) Exception caught in Rows.parse(), e= org.xml.sax.SAXException

ClientAbortException:  java.net.SocketException: Software caused connection abort: socket write error

After changing various Firefox timeout settings to no avail, we noticed the newer setting ‘network.http.response.timeout’, which was introduced in Firefox 29 (the current version is 30). The default value for this timeout is 300 seconds (5 minutes); in previous versions there was no default value.

After changing it to a longer value, reporting worked again in our testing. With the current implementation of the reporting module, there is no way for Flex to detect that the HTTP response has timed out. Until we can address this in the Flex code and provide a warning, we simply have to be mindful of this setting in Firefox.

To change this setting, type about:config in the Firefox address bar and press Enter.

You will see a page with all of the configurable settings. Search for ‘network.http.response.timeout’ to isolate just the one setting you need to change (there are a lot of settings to scroll through otherwise). The default value is 300 seconds (5 minutes). If your reports are not coming back with data (and you are running the latest version of Adobe Connect, 9.2 and above) and you are using Firefox as your browser, you can adjust this setting to see if it helps. If you anticipate users running large queries (such as curriculum reports with enrollments in the thousands of users), you will need to adjust this setting.


Type ‘about:config’ in the address bar. Then search for ‘network.http.response.timeout’


Modify the value by clicking on the 300 value itself and then entering the new value when prompted.
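If you prefer to manage this preference through a file rather than the about:config UI, the same setting can be placed in a user.js file in the Firefox profile directory. The value below is only an example; choose a timeout that comfortably exceeds your longest-running report:

// user.js in the Firefox profile directory
// The default introduced in Firefox 29 is 300 seconds; 1800 gives reports 30 minutes.
user_pref("network.http.response.timeout", 1800);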


Adobe Connect 9.2.2 Patch Now Available

The Adobe Connect 9.2.2 On Premise (Licensed) patch is now available for download at:

http://helpx.adobe.com/adobe-connect/kb/connect-90-patches.html

This download includes deployment instructions. It is intended for installation on Adobe Connect servers already running 9.2, as this is a patch (not a full install).


Ensuring that Email Generated by Adobe Connect Servers is Received

There have been more than a few incidents reported on the topic of email messages from Adobe Connect servers not getting delivered. These include messages generated by the Adobe Connect Events module as well as system email messages such as those generated by changing a password.

The first thing to avoid is using special characters in the Event host’s name. This is scheduled to be fixed in Connect 9.3, but currently, in 9.2, placing a comma in the Event host’s name, such as ‘Frank D., El Presidente’, will cause an error identified in our server logs by a messaging exception.

The second thing to avoid is inviting over 20,000 participants to an Event. This generates an operation size error and causes problems with email messages being sent out. We also plan to address this ceiling tentatively in Connect 9.3.

With specific reference to Adobe Connect Hosted accounts, we just made the following change to ameliorate email problems: as of April 28th, 2014, administrative email notifications sent from Adobe Connect servers will come from admin@adobeconnect.com instead of admin@acrobat.com. This will help by disassociating Connect-generated email from the Acrobat domain, which could potentially be blocked by virtue of its identification with document storage. We also made some internal changes to the way the Connect servers handle email, and we worked with our Web infrastructure partners to ensure that Connect-generated email is not being treated as SPAM on the Web.

There is a bit of a conundrum here. If Adobe Connect Events email invitations are sent out in massive mailings to those who perceive the email as SPAM, then the Adobe Connect servers could be tagged as producing SPAM by those end-user recipients. An overzealous Events manager may cause Gmail and other providers to treat Adobe Connect email as SPAM. When an Event that is capped at 500 participants sends out 5000 email invitations, it is expected that many recipients will at best ignore the inbound email traffic and many more may consider the traffic to be a nuisance. We are investigating possible approaches to ameliorate this problem and plan in 9.3  to add an opt-out option for Events invitations that will offer a convenient alternative to any SPAM reply option for recipients to invoke.

We love large Events, and Adobe Connect handles them very well; this is a case where our success can potentially lead to some problems. Currently the Adobe Hosted Service is green for SPF record checks. We pass all major email providers and are not blacklisted according to common checker tools on the internet. This should resolve the lion’s share of current email issues, and the upcoming changes in 9.3 will serve to harden this capability for future Events.

Connect 9.1.x on-premise server – “Send Invitations” checked by default

When creating a new meeting you are asked if you want to send out meeting invitations by email.

In Connect 9.1.x the option to send invitations is selected by default.

If you do not wish to send out invitations for your meetings, you have to select “Do not send invitations” every time you create a new meeting.


You can change this behavior to make  “Do not send invitations” the default when creating a new meeting.

To do so, edit the notify.xsl file, which is located in \Connect\9.1.1\appserv\apps\meeting\ (but remember to take a backup copy of the file first).

1. Open notify.xsl in an XML-friendly editor such as Notepad++.

2. Find this section:

<table cellpadding="0" cellspacing="0">
<xsl:call-template name="input">
<xsl:with-param name="title"   select="'send-invitations'"/>
<xsl:with-param name="name"    select="'date-scheduled'"/>
<xsl:with-param name="type"    select="'radio'"/>
<xsl:with-param name="value"   select="/results/common/date"/>
<xsl:with-param name="checked" select="true()"/>
</xsl:call-template>

<xsl:call-template name="input">
<xsl:with-param name="title"   select="'no-invitations'"/>
<xsl:with-param name="name"    select="'date-scheduled'"/>
<xsl:with-param name="type"    select="'radio'"/>
<xsl:with-param name="value"   select="'ignore'"/>
<xsl:with-param name="checked" select="false()"/>
</xsl:call-template>
</table>

3. Swap the two checked values: change true() to false() in the send-invitations template and false() to true() in the no-invitations template.

It should now look like this:

<table cellpadding="0" cellspacing="0">
<xsl:call-template name="input">
<xsl:with-param name="title"   select="'send-invitations'"/>
<xsl:with-param name="name"    select="'date-scheduled'"/>
<xsl:with-param name="type"    select="'radio'"/>
<xsl:with-param name="value"   select="/results/common/date"/>
<xsl:with-param name="checked" select="false()"/>
</xsl:call-template>

<xsl:call-template name="input">
<xsl:with-param name="title"   select="'no-invitations'"/>
<xsl:with-param name="name"    select="'date-scheduled'"/>
<xsl:with-param name="type"    select="'radio'"/>
<xsl:with-param name="value"   select="'ignore'"/>
<xsl:with-param name="checked" select="true()"/>
</xsl:call-template>
</table>


4. Save the file and restart the services.

5. Check your changes by creating a new meeting. If you encounter any issues, restore the original file.


DB_PING_TIMEOUT Value Change

Recently we have discovered that a newer setting for on-premise (licensed) Adobe Connect servers may lead to a memory leak on the system in certain rare circumstances. Here is some history, along with recommendations in case you believe you may be running into a memory leak problem in your licensed Adobe Connect environment and you are running a version newer than 9.0.3.

The DB_PING_TIMEOUT value was introduced back in Connect 7 (2008 timeframe). It enables invalidated DB connections to be recognized quickly. In the absence of a reasonable value for this timeout, we have had instances in the past where critical CPS threads (e.g. the scheduler sweeper thread) waited on a stale DB connection for too long, causing fast-fails. This value had always been set to ‘0’, which means there is no timeout. Since the default host health check timeout value is 40 seconds, it is recommended that the DB_PING_TIMEOUT default be set to 30 seconds, so that it stays under the limit that causes potential server fast-fails. This was a fairly minor change in config.ini, where the DB_PING_TIMEOUT value was changed from 0 to 30. This was done in the Connect 9.0.3 release, so every version from 9.0.3 onward will have the default set to 30. [Important note: this value is in seconds, not milliseconds.]

Recent longevity tests in version 9.2 suggested that this might be triggering a memory leak in the driver. The going theory for why that behavior wasn’t seen in previous longevity tests (between 9.0.3 and 9.2) is that we only upgraded to JRE 7 in 9.2. So the setting we were running with previously suddenly seemed to be a problem once we also upgraded to 1.7.

That value of 30 was introduced for a reason, so we don’t suggest turning it off without knowing that it is causing a problem. On the Adobe hosted clusters, we have made the decision to turn it off, since there were signs of memory issues even previously and we didn’t want to compound them.

That said, there are known issues with our driver and JRE 1.7, but only under some circumstances. If Adobe Connect system administrators observe continuous increases in heap memory usage, this parameter value should be set back to 0.

This can be done either by changing the value in config.ini from 30 to 0 (DB_PING_TIMEOUT=0) or by adding the value to custom.ini (it won’t be there by default, but if you add it, it will take precedence over what is in config.ini).
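For example, a minimal custom.ini override would be the single line below (restart the Connect service afterwards so the change takes effect):

# custom.ini: overrides the DB_PING_TIMEOUT=30 default shipped in config.ini since 9.0.3
DB_PING_TIMEOUT=0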


Adding Duration Field to Captivate Content in Adobe Connect

One thing content publishers commonly notice is that Captivate content does not, by default, get a ‘Duration’ field in the Adobe Connect UI (as seen in the first screenshot below). If you want this field (and it is applicable to your Captivate content), follow the steps below:

Notice that there is no ‘Duration’ field information for the Captivate project by default.

To get the duration field to populate for Captivate content (as of Captivate 7), you can manually add a ‘duration’ attribute to the breeze-manifest.xml file before you upload or publish to Adobe Connect.

First, instead of publishing to Adobe Connect directly, you need to publish locally to the desktop using the SWF/HTML5 publish option (below):

In the Publish menu, select SWF/HTML5 option and publish locally.


Once you have done this, navigate to the project files you published locally and find the ‘breeze-manifest.xml’ file that is included:


Navigate to where you saved the project locally and open the breeze-manifest.xml file.


When you open the XML file in an editor, you will see the following format (it will look different depending on your quiz content, etc.):


By default, there will be no ‘duration’ field in the document XML node.


You would then add duration="xx" to the ‘document’ XML node as follows:


Add ‘duration=’ to the XML as shown above and manually enter your duration (in seconds).


You will need to manually calculate the duration of the entire Captivate presentation (again, if applicable) and enter that value here, in seconds. When you do this, the slide count should also become visible in the published output.
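For illustration only (keep every attribute and child element that Captivate generated in your own manifest and add just the duration attribute), the edited node for a five-minute presentation might look like this:

<!-- Illustrative fragment: the attributes Captivate generated on this node
     are omitted here; only the added duration attribute (in seconds) is shown. -->
<document duration="300">
  <!-- existing child elements remain unchanged -->
</document>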

Then save the file and zip up your entire project again (don’t zip a folder containing the project files; zip all the source files without a containing folder). Upload the zip manually as content in the Content directory.


You will now see the duration (slides and time) in the final output.

Estimating the Size of Archive Meeting Recordings

I was recently asked if I had any test data showing how big a recording becomes based on the use case during the Connect Meeting being recorded. While plenty of anecdotal information exists, I thought it prudent to begin a list of use cases and show what the size was after five minutes of each. This article will be a work in progress as I add different use cases, in order to offer concrete examples to use as a basis for estimating recording size based on what is being recorded, whether multiple Video pod camera feeds, screen sharing, VoIP, etc. Among its purposes, this exercise will help meeting hosts avoid exceeding the 2GB limit on recording size on Adobe hosted clusters.

Most relevant among the variables considered is that recording size is driven by the streams present in the meeting being recorded. Typically, a Video pod with VoIP (640x480) shared for an hour will result in an FLV of around 200 MB. Sharing a screen in a meeting (1680x1050) will result in an FLV of around 150 MB over the same period. PPT/PPTX files uploaded to a meeting room and displayed while recording will not play a significant part in recording size, because the recordings link to external content rather than contain that content intrinsically. For example, a meeting with two Video pod streams could have a recording size of around 400 MB, and a meeting with a single Video pod stream with VoIP plus screen sharing could end up around 350 MB. Actual results will differ, as the screen resolution of the publisher, the type of sharing and the amount of movement are all variables that affect recording size: if there is little movement on screen or in the Video pod stream, the recording will be smaller than it would be with a lot of movement.
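As a rough planning aid, the per-hour approximations above can be combined into a simple back-of-the-envelope calculation. The figures and helper function below are only a sketch based on those approximations, not a guarantee:

# Rough recording-size estimate using the approximate per-stream figures
# quoted above (MB per hour of recording). Actual sizes vary with movement,
# resolution and the type of sharing.
VIDEO_POD_MB_PER_HOUR = 200     # 640x480 Video pod with VoIP
SCREEN_SHARE_MB_PER_HOUR = 150  # 1680x1050 screen share

def estimate_recording_mb(hours, video_pods=0, screen_share=False):
    per_hour = video_pods * VIDEO_POD_MB_PER_HOUR
    if screen_share:
        per_hour += SCREEN_SHARE_MB_PER_HOUR
    return hours * per_hour

# One hour with a single Video pod plus screen sharing: roughly 350 MB,
# comfortably under the 2 GB limit on Adobe hosted clusters.
print(estimate_recording_mb(1, video_pods=1, screen_share=True))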

Here are some concrete examples to use for planning; each recording is approximately five minutes in length:

A meeting with a single video feed for the Presenter to display and scroll through an uploaded PowerPoint file while using integrated telephony:
Title: Recording Size Test_0
Type: Recording
Duration: 00:05:31
Disk usage: 8335.3 KB


A recording of a meeting with six video feeds and an uploaded PowerPoint file
Title: Planning Troubleshooting and Support Meeting Room _15
Type: Recording
Duration: 00:05:48
Disk usage: 13873.8 KB


A recording of a meeting with four video feeds and screen sharing an application with normal activity
Title: Planning Troubleshooting and Support Meeting Room _16
Type: Recording
Duration: 00:05:56
Disk usage: 21660.8 KB


More examples to follow.

How to completely delete content from a Meeting room

To completely delete content from any meeting room, there are at least two places in Connect (and possibly a third) from which you must delete it.

The first place is the most obvious and it is in the Meeting room itself under the Pods menu and Manage Pods option:


The second place is in the Uploaded Content directory under the Meeting Information. To get there from a Meeting room, a host and owner can click Manage Meeting Information under the Meeting tab:


Then go to the Uploaded Content tab:


The third possible place is the Content Library. If you uploaded or published to the Content Library and pointed the share pod to it, you will need to go to the Content Library to delete it. If you uploaded directly to the Meeting room then you may skip this step:


Adobe Connect Database Disaster Recovery Options

Having a good recovery strategy allows for recovery of data in case of unforeseen events such as user error, hardware failure, drone strikes and fecal tsunamis. There are three recovery models:

  • Simple Recovery
  • Bulked Logged Recovery
  • Full Recovery

Simple Recovery is the most rudimentary. When the DB recovery mode is set to simple, the transaction log does not get backed up; it is auto-truncated, and you can only ever recover to a full DB backup. This builds in the potential for data loss, as point-in-time recovery is not possible. Generally, the Simple Recovery option is recommended for development or test environments where data recovery is not critical. It is also a good strategy for a novice DBA, as you don’t have to worry about a detailed backup and restore plan and jobs. Mission-critical databases should never be in simple mode, but for non-mission-critical deployments it is a low-overhead alternative.

The Bulk Logged mode is not very commonly used. When the DB recovery mode is set to Bulk Logged, bulk operations (Select Into, Create Index, etc.) are only minimally logged. This results in reduced log space consumption. The shortfall is that if the last transaction log has bulk operations in it, then point-in-time recovery is not possible; if it does not, then point-in-time recovery is possible. While it may be prudent to switch full recovery databases temporarily into Bulk Logged mode for the purpose of re-indexing a very large database, be sure to always switch them back, as critical databases probably shouldn’t be in Bulk Logged recovery mode.

Full Recovery mode is the default recovery model and is the most granular. When the database recovery mode is set to full, everything gets logged to the transaction log, resulting in greater log space consumption. Point-in-time recovery is possible in full recovery mode. This is the recovery model most users should choose for production data. Used with regularly scheduled full backups, differential backups and transaction log backups, it allows for quicker point-in-time recovery.
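As a hedged sketch of what this looks like in practice (the database name breeze and the backup paths below are placeholders; adjust them to your environment and schedule the statements through SQL Server Agent or your preferred tooling):

-- Put the Connect database into the full recovery model.
ALTER DATABASE breeze SET RECOVERY FULL;

-- Nightly full backup.
BACKUP DATABASE breeze TO DISK = N'D:\Backups\breeze_full.bak' WITH INIT;

-- Periodic differential backup between full backups.
BACKUP DATABASE breeze TO DISK = N'D:\Backups\breeze_diff.bak' WITH DIFFERENTIAL;

-- Frequent transaction log backups are what make point-in-time recovery possible.
BACKUP LOG breeze TO DISK = N'D:\Backups\breeze_log.trn';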

Choosing a backup and recovery plan depends on the following criteria:

  • How important is the data? The more important the data, the more likely you will choose full recovery and schedule regular full backups, differential backups and log backups.
  • How often does the data change? How busy is the Connect server?
    If the data changes frequently only during normal business hours, scheduling log backups closer together during those times and further apart during non-business hours might work out.
  • How much space do you have available for backups? This could determine how many backups you will store and how often you will back up.
  • How quickly do you need to recover data? If recovery speed is not important, but point-in-time recovery is, you might choose not to do any differential backups and just do full nightly backups and regular transaction log backups.

Based on the answers to the previous questions, you should be able to determine a backup plan that fits your needs. Remember to test the recovery of your backups regularly.  Backing up is useless if backups are corrupt or not working correctly.

Another important consideration is the timing of backups. Keep in mind that performing backups is resource intensive. To help determine an appropriate schedule for your backups, consider the ongoing activities on the Connect servers.

If you want to focus on recovering data in case of fire or natural disaster, then you should consider storing backups offsite. Many savvy DBAs keep a predetermined number of current backups on site and also ship backups offsite (tape or network). They might choose to keep five current backups onsite and as many as 30 offsite.

SQL Server 2008 has backup compression, allowing you to save disk space, but it comes at a cost in speed. Choose the compression level that suits your backup speed requirements. Third-party products offer backup compression as well.

Consider also the various high availability options:

  • SQL clustering relies on Windows clustering. It clusters the entire server, not just the database. The fail-over is slower than mirroring and doesn’t protect against disk failure.
  • Mirroring (http://msdn.microsoft.com/en-us/library/ms189852.aspx) is a faster fail-over solution. The Connect SQL driver has the ability to choose a fail-over server. This can be done at the DB level.
  • Log shipping ships completed transactions to the log-shipped database; this can be done at the database level and requires manual intervention to fail over, as the log-shipped DB is considered a warm DB.

Note: Replication is not a recommended option.

Adobe’s Hosted infrastructure uses a hybrid high-availability strategy. We use database mirroring as the primary fail-over solution. It provides faster fail-over and does not have a single point of failure, unlike clustering, which relies on a single disk. We also use log shipping as a secondary fail-over solution. In the extreme case that all mirrored databases go down, the log-shipped database can be used with some user intervention: break the log shipping, take the database out of standby mode and point the Connect server to it.