Posts in Category "Application"

Adobe Connect 9.2.2 Patch Now Available

The Adobe Connect 9.2.2 On Premise (Licensed) patch is now available for download at:

http://helpx.adobe.com/adobe-connect/kb/connect-90-patches.html

This download includes deployment instructions. It is intended for installation on Adobe Connect servers already running 9.2, as this is a patch (not a full install).

 

 

Ensuring that Email Generated by Adobe Connect Servers is Received

There have been more than a few incidents reported on the topic of email messages from Adobe Connect servers not getting delivered. These include messages generated by the Adobe Connect Events module as well as system email messages such as those generated by changing a password.

The first thing to avoid is using special characters in the Event host's name. This is scheduled to be fixed in Connect 9.3, but currently, in 9.2, placing a comma in the Event host's name, such as 'Frank D., El Presidente', will cause an error identified in our server logs by a messaging exception.

The second thing to avoid is inviting over 20,000 participants to an Event. This generates an operation size error and causes problems with email messages being sent out. We also plan to address this ceiling tentatively in Connect 9.3.

With specific reference to Adobe Connect Hosted accounts, we just made the following change to ameliorate email problems: As of April 28th, 2014, administrative email notifications sent from Adobe Connect servers come from admin@adobeconnect.com instead of admin@acrobat.com. This will help by disassociating Connect-generated email from the Acrobat domain, which could potentially be blocked by virtue of its identification with document storage. We also made some internal changes to the way the Connect servers handle email, and we worked with our Web infrastructure partners to ensure that Connect-generated email is not being treated as SPAM on the Web.

There is a bit of a conundrum here. If Adobe Connect Events email invitations are sent out in massive mailings to recipients who perceive the email as SPAM, then the Adobe Connect servers could be tagged as producing SPAM by those end users. An overzealous Events manager may cause Gmail and other providers to treat Adobe Connect email as SPAM. When an Event that is capped at 500 participants sends out 5000 email invitations, it is expected that many recipients will at best ignore the inbound email traffic and many more may consider the traffic to be a nuisance. We are investigating possible approaches to ameliorate this problem and plan, in 9.3, to add an opt-out option for Events invitations that will give recipients a convenient alternative to marking the message as SPAM.

We love large Events and Adobe Connect handles them very well; this is a case in which our success can potentially lead to some problems. Currently the Adobe Hosted Service is green for SPF record checks. We pass all major email providers and are not blacklisted according to common checker tools on the internet. This should resolve the lion's share of current email issues, and the upcoming changes in 9.3 will serve to harden this capability for future Events.

Connect 9.1.x on-premise server – “Send Invitations” checked by default

When creating a new meeting you are asked if you want to send out meeting invitations by email.

In Connect 9.1.x the option to send invitations is selected by default.

If you do not wish to send out invitations for your meetings you have to select “Do not send invitations” every time you create a new meeting.

sendInvitations

You can change this behavior to make "Do not send invitations" the default when creating a new meeting.

To do so, edit the notify.xsl file, which is located in \Connect\9.1.1\appserv\apps\meeting\ (but please remember to make a backup copy of the file first).

1. Open notify.xsl in an XML-friendly editor such as Notepad++.

2. Find this section:

<table cellpadding="0" cellspacing="0">
<xsl:call-template name="input">
<xsl:with-param name="title"   select="'send-invitations'"/>
<xsl:with-param name="name"    select="'date-scheduled'"/>
<xsl:with-param name="type"    select="'radio'"/>
<xsl:with-param name="value"   select="/results/common/date"/>
<xsl:with-param name="checked" select="true()"/>
</xsl:call-template>

<xsl:call-template name="input">
<xsl:with-param name="title"   select="'no-invitations'"/>
<xsl:with-param name="name"    select="'date-scheduled'"/>
<xsl:with-param name="type"    select="'radio'"/>
<xsl:with-param name="value"   select="'ignore'"/>
<xsl:with-param name="checked" select="false()"/>
</xsl:call-template>
</table>

3. Swap the two checked values: change true() to false() in the first template and false() to true() in the second.

It should now look like this:

<table cellpadding="0" cellspacing="0">
<xsl:call-template name="input">
<xsl:with-param name="title"   select="'send-invitations'"/>
<xsl:with-param name="name"    select="'date-scheduled'"/>
<xsl:with-param name="type"    select="'radio'"/>
<xsl:with-param name="value"   select="/results/common/date"/>
<xsl:with-param name="checked" select="false()"/>
</xsl:call-template>

<xsl:call-template name="input">
<xsl:with-param name="title"   select="'no-invitations'"/>
<xsl:with-param name="name"    select="'date-scheduled'"/>
<xsl:with-param name="type"    select="'radio'"/>
<xsl:with-param name="value"   select="'ignore'"/>
<xsl:with-param name="checked" select="true()"/>
</xsl:call-template>
</table>

 

4. Save the file and restart the services.

5. Check your changes by creating a new meeting. If you encounter any issues, restore the original file.

 

 

DB_PING_TIMEOUT Value Change

Recently we discovered that a newer setting for on-premise (licensed) Adobe Connect servers may lead to a memory leak on the system in certain rare circumstances. Here is some history, along with recommendations, in case you believe you may be running into a memory leak problem in your licensed Adobe Connect environment and you are running a version newer than 9.0.3.

The DB_PING_TIMEOUT value was introduced back in Connect 7 (2008 timeframe). It enables invalidated DB connections to be recognized quickly. In the absence of a reasonable value for this timeout, we have had instances in the past where critical CPS threads (e.g. the scheduler sweeper thread) waited on a stale DB connection for too long, causing fast-fails. This value had always been set to '0', which means there is no timeout. Since the default host health-check timeout is 40 seconds, it is recommended that the DB_PING_TIMEOUT value be set to 30 seconds, so that it stays under the limit that causes potential server fast-fails. This was a fairly minor change in the config.ini, where the DB_PING_TIMEOUT value was changed from 0 to 30. This was done in the Connect 9.0.3 release, so every version above 9.0.3 will have the default set to 30. [Important note – this value is in seconds, not milliseconds]

Recent longevity tests in version 9.2 suggested that this might be triggering a memory leak in the driver. The going theory for why that behavior wasn’t seen in previous longevity tests (between 9.0.3 and 9.2) is that we only upgraded to JRE 7 in 9.2. So the setting we were running with previously suddenly seemed to be a problem once we also upgraded to 1.7.

That value of 30 was introduced for a reason, so we don't suggest turning it off without knowing that it causes a problem. On the Adobe hosted clusters, we have made the decision to turn it back off, since there were signs of memory issues even previously and we didn't want to compound them.

That said, there are known issues with our driver and JRE 1.7, but only under some circumstances. If Adobe Connect system administrators observe continuous increases in heap memory usage, this parameter value should be set back to 0.

This can be done either by changing the value in the config.ini from 30 to 0 (DB_PING_TIMEOUT=0) or by adding the value to the custom.ini (it won't be there by default, but if you add it, it will take precedence over what is in the config.ini).
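For example, to override the default without editing config.ini directly, add the following line to the custom.ini and restart the Connect services (the value is in seconds; 0 disables the ping timeout):

DB_PING_TIMEOUT=0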

 

Adding Duration Field to Captivate Content in Adobe Connect

One common thing content publishers may notice is that Captivate content does not, by default, get a 'duration' field in the Adobe Connect UI (as seen below in the first screenshot). If you want this field (and it is applicable to your Captivate content), follow the steps below:

Notice no ‘Duration’ field information in the Captivate Project by default.

In order to get the duration field to populate for Captivate content (as of now with Captivate 7), you can manually add the ‘Duration’ field to the breeze-manifest.xml file before you upload / publish to Adobe Connect.

First, instead of publishing to Adobe Connect directly, you need to publish locally to the desktop using the SWF/HTML5 publish option (below):

In the Publish menu, select SWF/HTML5 option and publish locally.

 

Once you do this, you then need to navigate to the source project files and find the ‘breeze-manifest.xml‘ file that is included:

 

Navigate to where you saved the project locally and open the breeze-manifest.xml file.

 

When you open the xml file in an editor, you will see the following format (it will look different depending on your quiz content, etc):

 

By default, there will be no ‘duration’ field in the document XML node.

 

You would then add duration="xx" to the 'document' XML node as follows:
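A minimal sketch of the change is below; 300 is only an example value (in seconds). Keep whatever other attributes Captivate generated on the node and simply add the duration attribute:

<document duration="300">  <!-- existing attributes unchanged; duration added, in seconds -->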

 

Add ‘duration=’ to the xml as shown above and manually put your duration (in seconds).

 

You will need to manually calculate the duration of the entire Captivate presentation (again if applicable) and then put that value in here (in seconds).  When you do this, the slide count should also be visible in the published output.

Then, save the file and zip up your entire project again to a zip file (don’t zip up a folder full of the project files, but rather all the source files without a containing folder).  Upload the zip as Content manually in the Content directory.

 

You will see duration now (slides and time) in the final output.

Estimating the Size of Archive Meeting Recordings

I was recently asked if I had any test data showing how big a recording becomes based on the use case during the Connect Meeting being recorded. While plenty of anecdotal information exists,  I thought it prudent to begin a list of use cases and show what the size was after five minutes of each use case. This article will be a work in progress as I add different use cases in order to offer various concrete examples to use as a basis to estimate recording size based on what is being recorded, whether multiple Video pod camera feeds or screen-sharing or VoIP, etc. Among its purposes, this exercise will help meeting hosts to avoid exceeding the 2GB limit on Adobe hosted clusters for recording size.

Most relevant among the variables considered is the notion that recording size is affected by the streams present in the meeting being recorded. Typically a Video pod with VoIP (640x480) shared for an hour will result in an FLV of around 200 MB. Sharing a screen in a meeting (1680x1050) will result in an FLV size of around 150 MB. PPT/PPTX files uploaded to a meeting room and displayed while recording will not play a significant part in recording size because the recordings link to external content rather than contain that content intrinsically. For example, a meeting with two Video pod streams could have a recording size of around 400 MB, and a meeting with a single Video pod stream with VoIP plus screen-sharing could end up around 350 MB. The actual results may differ, as the screen resolution of the publisher, the type of sharing and the amount of movement are all variables that can affect recording size: If there is little movement on screen or in the Video pod stream, the recording size will be less than it would be with a lot of movement.
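As a rough rule of thumb based on the figures above, you can estimate:

estimated recording size ≈ hours recorded x (number of Video pod/VoIP streams x 200 MB + number of screen-sharing feeds x 150 MB)

Treat the result as a planning figure rather than a precise prediction, since resolution and the amount of movement can push the actual size in either direction.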

Here are some concrete examples to use for planning; each recording is approximately five minutes in length:

A meeting with a single video feed for the Presenter to display and scroll through an uploaded PowerPoint file while using integrated telephony:
Title: Recording Size Test_0
Type: Recording
Duration: 00:05:31
Disk usage: 8335.3 KB

rec-size1.fw

 

A recording of a meeting with six video feeds and an uploaded PowerPoint file:
Title: Planning Troubleshooting and Support Meeting Room _15
Type: Recording
Duration: 00:05:48
Disk usage: 13873.8 KB

rec-size2.fw

 

A recording of a meeting with four video feeds and screen sharing an application with normal activity:
Title: Planning Troubleshooting and Support Meeting Room _16
Type: Recording
Duration: 00:05:56
Disk usage: 21660.8 KB

rec-size3.fw

More examples to follow.

How to completely delete content from a Meeting room

To completely delete content from any meeting room there are at least two places and possibly a third place in Connect from which you must delete it.

The first place is the most obvious and it is in the Meeting room itself under the Pods menu and Manage Pods option:

del-con.fw

del-con1.fw

The second place is in the Uploaded Content directory under the Meeting Information. To get there from a Meeting room, a host and owner can click Manage Meeting Information under the Meeting tab:

del-con2.fw

Then go to the Uploaded Content tab:

del-con3.fw

The third possible place is the Content Library. If you uploaded or published to the Content Library and pointed the share pod to it, you will need to go to the Content Library to delete it. If you uploaded directly to the Meeting room then you may skip this step:

del-con4.fw

Adobe Connect Database Disaster Recovery Options

Having a good recovery strategy allows for recovery of data in case of unforeseen events such as user error, hardware failure, drone strikes and fecal tsunamis. There are three recovery models:

  • Simple Recovery
  • Bulk Logged Recovery
  • Full Recovery

Simple Recovery is the most rudimentary. When the DB recovery mode is set to simple, the transaction log does not get backed up. It is auto-truncated and you can only ever recover to a full DB backup; this builds in the potential for data loss, as a point-in-time recovery is not possible. Generally, the Simple Recovery option is recommended for development or test environments where data recovery is not critical. It is also a good strategy for a novice DBA, as you don't have to worry about a detailed backup and restore plan/jobs. Mission-critical databases should never be in simple mode, but for non-mission-critical deployments it is a low-overhead alternative.

The Bulk Logged mode is not very commonly used. When the DB recovery mode is set to Bulk Logged, bulk operations are only minimally logged (Select Into, Create Index, etc.). This results in reduced log space consumption. The shortfall is that if the last transaction log has bulk operations in it, then point-in-time recovery is not possible; if it does not have bulk operations in it, then point-in-time recovery is possible. While it may be prudent to switch full recovery databases temporarily into Bulk Logged mode for the purpose of re-indexing a very large database, be sure to always switch them back, as critical databases probably shouldn't be in Bulk Logged recovery mode.

Full Recovery mode is the default recovery model and is the most granular. When the database recovery mode is set to full, everything gets logged to the transaction log, resulting in greater log space consumption. Point-in-time recovery is possible in full recovery mode. This is the recovery model most users should choose for production data. Using this recovery model with regularly scheduled full backups, differential backups and transaction log backups allows for quicker point-in-time recovery.
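As a minimal sketch (yourConnectDB and the backup destinations below are placeholders; substitute your own database name and paths), checking and setting the recovery model and taking the corresponding backups can be scripted as follows:

-- 'yourConnectDB' and the backup destinations are placeholders
-- Check the current recovery model of the Connect database
SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'yourConnectDB';

-- Switch the database to full recovery
ALTER DATABASE [yourConnectDB] SET RECOVERY FULL;

-- A typical pairing under full recovery: periodic full backups plus frequent transaction log backups
BACKUP DATABASE [yourConnectDB] TO DISK = N'D:\Backups\yourConnectDB_full.bak';
BACKUP LOG [yourConnectDB] TO DISK = N'D:\Backups\yourConnectDB_log.trn';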

Choosing a backup and recovery plan is relevant to the following criteria:

  • How important is the data? The more important the data, the more likely you will choose full recovery and schedule regular full backups, differential backups and log backups.
  • How often does the data change? How busy is the Connect server?
    If the data only changes frequently during normal business hours, scheduling log backups closer together during these times and further apart during non-business hours might work out.
  • How much space do you have available for backups? This could determine how many backups you will store and how often you will back up.
  • How quickly do you need to recover data? If recovery speed is not important, but point-in-time recovery is, you might choose not to do any differential backups and just do full nightly backups and regular transaction log backups.

Based on the answers to the previous questions, you should be able to determine a backup plan that fits your needs. Remember to test the recovery of your backups regularly.  Backing up is useless if backups are corrupt or not working correctly.

Another important consideration is the timing of backups. Keep in mind that performing backups is resource intensive. To help determine an appropriate schedule for your backups, consider the ongoing activities on the Connect servers.

If you want to focus on recovering data in case of fire or natural disaster, then you should consider storing the backups offsite. Many savvy DBAs keep a predetermined number of current backups on site and also ship the backups offsite (tape or network). They might choose to keep five current backups onsite and as many as 30 offsite.

SQL 2008 has backup compression, allowing you to save on disk space, but it comes at a cost in speed. Choose the compression level that suits your backup speed requirements. Third-party products offer backup compression as well.

Consider also the various high availability options:

  • SQL clustering relies on Windows clustering. It clusters the entire server, not just the database. The fail-over is slower than mirroring and doesn't provide a fail-over against disk failure.
  • Mirroring (http://msdn.microsoft.com/en-us/library/ms189852.aspx) is a faster fail-over solution. The Connect SQL driver has the ability to choose a fail-over server. This can be done at the DB level.
  • Log Shipping ships completed transactions to the log-shipped database; this can be done at the database level and requires manual intervention to fail over, as the log-shipped DB is considered a warm DB.

Note: Replication is not a recommended option.

Adobe's Hosted infrastructure uses a hybrid high-availability strategy. We use database mirroring as the primary fail-over solution. It provides faster fail-over and does not have a single point of failure, as does clustering, which relies on a single disk. We also use log shipping as a secondary fail-over solution. In the extreme case that all mirrored databases go down, the log-shipped database can be used with some user intervention: break the log shipping, take the database out of standby mode and point the Connect server to it.

Adobe Connect Database Performance and Monitoring

Following SQL database performance best practices and monitoring the health of your Connect database will help to ensure a responsive Connect server and an excellent end-user experience.

It is best to always place the operating system, data and log directories on separate disk drives; this will result in improved performance. If you must put Connect on the same server as the DB (never a best practice but sometimes a practical necessity), you should ensure that the Connect installation and content directories are on a different disk drive than the database data. The Temp DB should also be on a separate disk drive. Putting the SQL data on striped disks provides a tuning benefit as well.

Be sure to aggressively re-index and update statistics. De-fragment the operating system data and log files on a regular schedule. Ensure that there is minimal latency between the Connect server and the SQL Server. Be wary of network maintenance and backups that can produce latency between the Connect server and the SQL server, and be sure to avoid heavy Connect use during any such maintenance.

Make sure that the SQL server has plenty of RAM; the more RAM the better. Everything works much faster in memory. The more of the database that you can keep in memory, the better off you will be. Only virtualize the DB server if absolutely required. While Connect runs fine on supported VMware servers, the SQL database server is best run on a dedicated platform.

With reference to the use of separate disks, here is a prioritized list of what should have its own disk:

  1. Operating System
  2. Adobe Connect (Separate Server if possible)
  3. SQL database
  4. Data
  5. Log
  6. TempDB
  7. Transaction logs

For best performance, set the initial size of the transaction log file based on estimated use. This avoids unnecessary fragmentation. The transaction log should be on a different drive than your data file, temp database and operating system. Manually shrink the transaction log files based on monitoring. If you try to do this as a nightly or weekly job, you will end up with unnecessary fragmentation. De-fragment the transaction log file as necessary and consider putting transaction logs on striped disks. Ensure regular backups, as transaction log backups empty the space inside the log file and prevent it from continuing to grow.
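As an illustrative sketch only (the database name and logical file name below are placeholders; query sys.master_files for your actual names), pre-sizing and occasionally shrinking the log can be scripted like this:

-- 'yourConnectDB' and 'yourConnectDB_log' are placeholder names
-- Find the logical name and current size of the database's files
SELECT name, type_desc, size FROM sys.master_files WHERE database_id = DB_ID('yourConnectDB');

-- Set the log file size up front based on estimated use (example: 4 GB)
ALTER DATABASE [yourConnectDB] MODIFY FILE (NAME = N'yourConnectDB_log', SIZE = 4096MB);

-- Shrink manually, only when monitoring shows it is needed (example target: 1 GB)
USE [yourConnectDB];
DBCC SHRINKFILE (N'yourConnectDB_log', 1024);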

Manage the memory by setting the minimum server memory for SQL server.  Remember to leave enough for the operating system and any other applications running on the server:

db3.fw
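If you prefer to script this setting rather than use the Management Studio dialog shown above, the same value can be set with sp_configure (the 4096 MB below is only an example; size it to leave room for the operating system and anything else on the box):

-- 'min server memory (MB)' is an advanced option, so expose advanced options first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Reserve a memory floor for SQL Server (example value)
EXEC sp_configure 'min server memory (MB)', 4096;
RECONFIGURE;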

SQL Server uses the tempdb database as a working area for temporary tables, sorting, sub-queries etc.; the tempdb should be stored on its own drive, away from other DBs, whenever possible. The default location is on the SQL install disk. Increase the size of the tempdb database based on expected usage and available space. SQL Server automatically adjusts the size over time, but each change causes a performance hit and causes fragmentation. By increasing the size, you avoid constant growth. SQL 2008 uses the tempdb more than prior versions of SQL. Never try to back up the tempdb.

Monitor the disk space of the data files and log files. Disk space is inexpensive compared with the benefit of having it available in abundance. You should aim to keep at least 30% free disk space in case you need to expand the data/log files, or if they are set to autogrow. Sudden increases in size should be cause for investigation.

Monitor the fragmentation levels. If the database and log were set to autogrow at small intervals, there is a high likelihood that they are fragmented. Regularly shrinking the DB data files or log files can also lead to fragmentation.

Monitor for slow queries; you can see slow queries in the Connect debug log.  Just search for Slow Query. Query times are returned in msec. Also look for lock timeouts in the debug log.  Generally this is a sign of database problems. A lock timeout is a query that attempts to get a lock on a database resource.  It times out because something else is already holding a lock. A lock is usually held until the transaction has completed, so if there is a long running query it could cause lock timeouts. You can also run traces against the database to gather information on long running queries.  In SQL 2008 you can query dynamic management views to get this information.

Monitor indexes liberally, keeping in mind that regular re-indexing should decrease the need to monitor them. Sometimes re-indexing may start taking too long to complete and you will want to be more selective about what to target. Knowing which tables or indexes are most fragmented allows you to re-index only those. You can query dynamic management views in SQL Server to get this information (see SQL Server Books Online). Many third-party products offer monitoring of SQL Server, and you might consider these products if you want a more automated GUI interface for monitoring indexes. Some of the products offer monitoring for other areas of SQL Server as well.
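For example, here is a minimal sketch of a dynamic management view query that lists the most fragmented indexes in the current database (run it in the context of the Connect database; the 30% threshold is just a common starting point):

-- List indexes in the current database with more than 30% fragmentation
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC;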

Windows Performance Monitor (perfmon) is useful; you can use perfmon to monitor SQL counters. Here are three common counters which, if they reveal something unusual, will warrant further scrutiny:

  • Pages/sec  –  How much your SQL server is paging in and out of memory
  • Disk Queues –  If the write or read disk queues are too high, you will need more RAM
  • CPU Queue length –  If the CPU queue is consistently over 2 per CPU for an extended period of time, you might have a CPU bottleneck.

Be aware of load and activity when monitoring with perfmon, as database backups and other maintenance activities can cause spikes in these numbers. It is best to connect to the server from a different PC if you intend to monitor it with perfmon.

A good maintenance plan will include scheduled re-indexing during off hours. Fragmented indexes can cause Connect to become sluggish and might even cause fast-fails in a Connect cluster. If you start to see a lot of slow queries in the debug log, you should ensure that the Connect DB is being re-indexed regularly: index maintenance is one of the easiest ways to keep your DB healthy, and SQL Server provides wizards that help make index maintenance easy.

Open SQL Server Management Studio and open the management folder.

  • Right-click the Maintenance Plans folder
  • Choose the Maintenance Plan Wizard

Give the Maintenance plan a name:

db4.fw

Choose the desired maintenance tasks: Rebuild Index & Update Statistics

db5.fw

Choose the Database you want to re-index:

db6.fw

Reorganize with the default amount of free space; the default amount is what it was initially created with.

db7.fw

Choose the same database to update statistics after you re-index.

db8.fw

Schedule a job to run the maintenance plan; provide a name and choose a schedule that suits your infrastructure:

db6.fw

Following these database performance and monitoring best practices will ensure a responsive Connect server and an excellent end-user experience.

FAQs on Adobe Connect SQL Database Installation, Startup, Connection and Pooling

The following is a summary of Adobe Connect 9 database installation tips.

1. What do I need to start?

Always check the updated system requirements page prior to installing: http://www.adobe.com/products/adobeconnect/tech-specs.html
As of the writing of this article it reads: Microsoft SQL Server 2008 SP3, 2008 R2

While it is best to have sa permissions, you are required, at minimum, to use a username and password with dbcreator privileges. We highly recommend using an sa account. After the install you may use a dbo account for normal use, but during any upgrade or updater application, you must switch back to sa.
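For illustration only (the login name and password below are placeholders), creating a dedicated SQL Server login with dbcreator rights on SQL Server 2008 would look something like this:

-- 'connect_install' and the password are placeholder values
CREATE LOGIN connect_install WITH PASSWORD = 'UseAStrongPasswordHere!';

-- Grant the minimum server role required to create the Connect database
EXEC sp_addsrvrolemember 'connect_install', 'dbcreator';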

2. When does the installer create the database for Connect?

All current Connect versions (after 7.5SP1) create the database during installation. Typically the DB creation process takes about 50 seconds. First the schema gets created and then the seed data are inserted. After the DB is created, Connect is still not fully functional until you download and apply the license.txt file. The license file inserts additional seed data into the Connect database, including templates and folders.

3. How do I troubleshoot database login failures during installation?

db1.fw

This error can mean several things:

  • The username could be incorrect
  • The password could be incorrect
  • SQL Server Authentication might not be enabled

Entries in the debug.log will provide some answers:

db2.fw

  • java.sql.SQLException… Login failed for user 'sa' usually means there is a typo in the username or password
  • java.sql.SQLException… Login failed for user 'sa'. The user is not associated with a trusted SQL Server connection usually indicates that SQL Server Authentication is disabled
  • java.sql.SQLException… Cannot open database "dbname" requested by the login usually indicates that the login exists but does not have permission to open the DB
  • java.sql.SQLException… CREATE TABLE permission denied in database 'dbname' usually indicates that the login has permission to log in to the DB, but does not have permission to create schema objects

Note: During install and upgrade, and during minor updates of point releases, the DB user must have permissions to create, alter, or drop schema objects.

Note also that log errors are discussed on page 83 of the Adobe Connect Installation Guide: http://help.adobe.com/en_US/connect/9.0/installconfigure/connect_9_install.pdf

If you encounter any of these errors, stop all of the Connect services, correct the user privileges in SQL and start the services again.

4. What happens during a successful startup?

During start-up, Connect tries to log in to the SQL database; if it can't connect, the service stays running but enters a dormant state. You will be able to gain access to local port 8510 to configure the Connect server through its wizard, but not the application front end. If the connection is successful, Connect makes multiple connections to the SQL database (a connection pool). The initial connection pool and max connection pool sizes are configurable. Connect checks the DB version and determines whether it needs to apply updates, and then the Connect Host updates a row in the DB (PPS_ENUM_DATA_HOSTS) and sets itself active.

5. How does Connect monitor the health of the SQL database? What is the HealthCheck function for?

Connect relies heavily on the SQL database; it is safe to call the SQL database the heart of any Adobe Connect installation. Connect constantly checks to see if there is a valid connection to the SQL database. Loss of connection can lead to data corruption. To avoid this, Connect runs a health check on the SQL database; it pings the SQL Server and checks to see if it has been more than 40 seconds since the Connect server updated the PPS_ENUM_DATA_HOSTS table. If it has been more than 40 seconds, the Connect Pro Host is marked inactive and the services for that Connect server will restart and then reattempt to connect to the SQL database.

If you are running the Connect SQL database in a SQL cluster rather than in a mirrored environment, you will want to make sure that Connect makes multiple database connection attempts during SQL fail-over. If Connect loses its SQL database, the entire Connect cluster will go down and it will wait for an administrator to manually reconnect to the database through launching the Connect configuration console on port 8510. Add the following to the custom.ini file to support any delays in clustered SQL fail-over:

DB_URL_CONNECTION_RETRY_COUNT = 15
DB_URL_CONNECTION_RETRY_DELAY= 30

The actual JDBC string is in the config.ini file so you do not need to put it into the custom.ini; double check the config.ini if you are running into any problems with the JDBC reconnection string:

DB_URL=jdbc:macromedia:sqlserver://{DB_HOST}:{DB_PORT};databaseName={DB_NAME};user={DB_USER};password={DB_PASSWORD};ConnectionRetryCount={DB_URL_CONNECTION_RETRY_COUNT};ConnectionRetryDelay={DB_URL_CONNECTION_RETRY_DELAY}

6. What is the purpose of the connection pool, and why is it done this way?

Adobe Connect makes use of a connection pool. Every time the application needs to communicate with the SQL database, it checks for the next available idle connection and uses it. If there isn't one available, it will create a new connection unless it has reached the connection pool maximum. Once the application has finished its transaction, it releases the connection back into the pool. These settings are found in \appserv\conf\Catalina\localhost\root.xml

  • minPoolSize="20"
  • maxPoolSize="25"
  • initialPoolSize="20"

This prevents the overhead of creating new connections each time a call to the SQL database is required. The connections are made at start-up. Since Connect relies heavily on the DB, having available connections is essential.

7. How do I change my Adobe Connect license and Serial Key if needed?

This is something rarely done. An example might be if you have a trial license and then purchase a production license, and instead of converting your trial license into a production license, you receive a new license and serial key. If this happens, you will need to update the serial key in several places.

  • The custom.ini file in the Connect root installation directory
  • pps_enum_data_hosts
  • pps_config

db.fw

After that, download and apply the license.