Posts in Category "Clustering"

Testbuilder Statuses Explained

When setting up an application-level health monitor on your LTM (as described in a previous blog post), you would point it to the testbuilder diagnostic page at:

/servlet/testbuilder

As the previous article explains, ‘the testbuilder page will send back the “status-ok” string. If there is any problem with the Connect server application, then testbuilder will not report the “status-ok” string’. Expanding on this a little bit, the following are the actual statuses and the scenarios in which you may see them:

STATUS_OK = 0;
STATUS_CRITICAL = 2;
STATUS_MAINT = 3;
STATUS_TEST = 4;

 

STATUS_OK = 0;
This means the server is fit to work (status-ok). The server status in the PPS_ENUM_DATA_HOST table is neither ‘X’, ‘M’ nor ‘T’, and the server is initialized.
This is what load balancers should look for when performing a health check.

STATUS_CRITICAL = 2;
The server is not fit to work (status-critical). The server is not yet initialized (during start-up), or has a server status of ‘X’ in the PPS_ENUM_DATA_HOST table.
This status is also returned if no connection to the database can be made.

STATUS_MAINT = 3;
The server is in maintenance mode (status-maintenance). It has a server status of ‘M’ in the PPS_ENUM_DATA_HOST table.
An active server can be put into maintenance mode and vice versa. No new meetings will be run on this server, but currently active meetings will run until ended.

STATUS_TEST = 4;
The server is in “server isolation” mode (status-testing). It has a server status of ‘T’ in the PPS_ENUM_DATA_HOST table.
This is used to put a server in a separate zone from the other servers in the cluster. It is a hosted feature that is not actively used in production.
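
For a quick manual check outside of the load balancer, a short script can fetch the testbuilder page and look for the status string. This is only a sketch; the hostname and port are placeholders and it assumes the diagnostic page is reachable over plain HTTP from wherever the script runs:

# check_testbuilder.py - minimal sketch; the hostname and port are placeholders
import urllib.request

URL = "http://connect.example.com:80/servlet/testbuilder"  # point at a single pool member

try:
    with urllib.request.urlopen(URL, timeout=5) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    if "status-ok" in body:
        print("STATUS_OK - server is fit to take traffic")
    else:
        print("Non-OK status returned:", body.strip()[:200])
except Exception as exc:
    # No response at all is effectively STATUS_CRITICAL from the monitor's point of view
    print("testbuilder unreachable:", exc)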

Adobe Connect Database Disaster Recovery Options

Having a good recovery strategy allows for recovery of data in case of unforeseen events such as user error, hardware failure, drone strikes and fecal tsunamis. There are three recovery models:

  • Simple Recovery
  • Bulk-Logged Recovery
  • Full Recovery

Simple Recovery is the most rudimentary model. When the DB recovery mode is set to Simple, the transaction log does not get backed up; it is auto-truncated, and you can only recover to the most recent full (or differential) backup. This builds in the potential for data loss, as point-in-time recovery is not possible. Generally, the Simple Recovery option is recommended for development or test environments where data recovery is not critical. It is also a good strategy for a novice DBA, as you don’t have to worry about a detailed backup and restore plan and jobs. Mission-critical databases should never be in Simple mode, but for non-mission-critical deployments it is a low-overhead alternative.

Bulk-Logged mode is not very commonly used. When the DB recovery mode is set to Bulk-Logged, bulk operations (Select Into, Create Index, etc.) are only minimally logged, which reduces log space consumption. The shortfall is that if the last transaction log backup contains bulk operations, point-in-time recovery within that log is not possible; if it does not contain bulk operations, point-in-time recovery is possible. It may be prudent to switch full-recovery databases temporarily into Bulk-Logged mode for the purpose of re-indexing a very large database, but be sure to always switch them back, as critical databases should not be left in Bulk-Logged recovery mode.

Full Recovery mode is the default recovery model and is the most granular. When the database recovery mode is set to Full, everything gets logged to the transaction log, resulting in greater log space consumption. Point-in-time recovery is possible in Full Recovery mode, and this is the recovery model most users should choose for production data. Using this recovery model with regularly scheduled full backups, differential backups and transaction log backups allows for quick point-in-time recovery.
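
To see which recovery model each database is currently using, you can query sys.databases; the same view is where you would confirm a change after switching models. The sketch below uses Python with the pyodbc module; the server name, credentials and the [connectdb] database name are placeholders, and you can of course run the equivalent T-SQL directly in Management Studio instead:

# recovery_model.py - sketch; server, credentials and database name are placeholders
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlhost;DATABASE=master;"
    "UID=sa;PWD=secret", autocommit=True
)
cur = conn.cursor()

# List the current recovery model of every database on the instance
for name, model in cur.execute("SELECT name, recovery_model_desc FROM sys.databases"):
    print(name, model)

# Example only: switch a (hypothetical) Connect database to full recovery
cur.execute("ALTER DATABASE [connectdb] SET RECOVERY FULL")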

Choosing a backup and recovery plan depends on the following criteria:

  • How important is the Data? The more important the data, the more likely you will choose full recovery and schedule regular full backups, differential backups and log backups.
  • How often does the data change? How busy is the Connect server?
    If the data only changes frequently during normal business hours, scheduling log backups closer together during these times and further apart during non-business hours might work out.
  • How much space do you have available for backups? This could determine how many backups you will store and how often you will back up.
  • How quickly do you need to recover data? If recovery speed is not important, but point in time is, you might choose not to do any differential backups and just do Full nightly backups and regular transaction log backups.

Based on the answers to the previous questions, you should be able to determine a backup plan that fits your needs. Remember to test the recovery of your backups regularly.  Backing up is useless if backups are corrupt or not working correctly.
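
One way to sanity-check a backup file without actually restoring it is RESTORE VERIFYONLY; it confirms that the backup is readable and complete, though not that the data inside it is logically consistent, so periodic test restores are still the real proof. A sketch, with the backup path and connection details as placeholders:

# verify_backup.py - sketch; the backup path and connection details are placeholders
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlhost;DATABASE=master;"
    "UID=sa;PWD=secret", autocommit=True
)
cur = conn.cursor()
try:
    cur.execute(r"RESTORE VERIFYONLY FROM DISK = N'D:\Backups\connectdb_full.bak'")
    print("Backup file is readable and complete.")
except pyodbc.Error as exc:
    print("Backup verification failed:", exc)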

Another important consideration is the timing of backups. Keep in mind that performing backups is resource intensive. To help determine an appropriate schedule for your backups, consider the ongoing activities on the Connect servers.

If you want to focus on recovering data in case of fire or natural disaster, then you should consider storing the backups offsite. Many savvy DBAs keep a predetermined number of current backups on site and also ship backups offsite (tape or network). They might choose to keep five current backups onsite and as many as 30 offsite.

SQL 2008 has backup compression, allowing you to save on disk space, but it comes at a cost in speed. Choose the compression level that suits the backup speed you need. Third-party products offer backup compression as well.

Consider also the various high availability options:

  • SQL clustering relies on Windows clustering. It clusters the entire server, not just the database. The fail-over is slower than mirroring and does not protect against disk failure.
  • Mirroring (http://msdn.microsoft.com/en-us/library/ms189852.aspx) is a faster fail-over solution. The Connect SQL driver has the ability to choose a fail-over server. This can be done at the DB level.
  • Log shipping ships completed transactions to the log-shipped database; this can be done at the database level and requires manual intervention to fail over, as the log-shipped DB is considered a warm DB.

Note: Replication is not a recommended option.

Adobe’s hosted infrastructure uses a hybrid high-availability strategy. We use database mirroring as the primary fail-over solution. It provides faster fail-over and does not have a single point of failure, as does clustering, which relies on a single shared disk. We also use log shipping as a secondary fail-over solution. In the extreme case that all mirrored databases go down, the log-shipped database can be used with some user intervention: break the log shipping, take the database out of standby mode and point the Connect server to it.

Adobe Connect Database Performance and Monitoring

Following SQL database performance best practices and monitoring the health of your Connect database will help to ensure a responsive Connect server and an excellent end-user experience.

It is best to always place the operating system, data and log directories on separate disk drives; this will result in improved performance. If you must put Connect on the same server as the DB (never a best practice, but sometimes a practical necessity), you should ensure that the Connect installation and content directories are on a different disk drive than the database data. The temp DB should also be on a separate disk drive. Putting the SQL data on striped disks provides a tuning benefit as well.

Be sure to aggressively re-index and update statistics. De-fragment the operating system data and log files on a regular schedule. Ensure that there is minimal latency between the Connect server and the SQL Server. Be wary of  network maintenance and backups that can produce latency between the Connect server and the SQL server and be sure to avoid heavy Connect use during any such maintenance.

Make sure that the SQL server has plenty of RAM; the more RAM the better. Everything works much faster in memory, and the more of the database that you can keep in memory, the better off you will be. Only virtualize the DB server if absolutely required. While Connect runs fine on supported VMWare servers, the SQL database server is best run on a dedicated platform.

With reference to the use of separate disks, here is a prioritized list of what should have its own disk:

  1. Operating System
  2. Adobe Connect (Separate Server if possible)
  3. SQL database
  4. Data
  5. Log
  6. TempDB
  7. Transaction logs

For best performance, set the initial size of the transaction log file based on estimated use; this avoids unnecessary fragmentation. The transaction log should be on a different drive than your data file, temp database and operating system. Manually shrink the transaction log files based on monitoring; if you try to do this as a nightly or weekly job, you will end up with unnecessary fragmentation. De-fragment the transaction log file as necessary and consider putting transaction logs on striped disks. Ensure regular backups, as transaction log backups empty the space inside the log file and prevent it from continuing to grow.

Manage the memory by setting the minimum server memory for SQL server.  Remember to leave enough for the operating system and any other applications running on the server:

[Screenshot: db3.fw — SQL Server memory settings]
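
The same minimum and maximum memory values can be set with sp_configure instead of the dialog shown above. The numbers below are examples only, and the connection details are placeholders; choose values that leave headroom for the operating system and anything else running on the box:

# sql_memory.py - sketch; memory values and connection details are placeholders
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlhost;DATABASE=master;"
    "UID=sa;PWD=secret", autocommit=True
)
cur = conn.cursor()
cur.execute("EXEC sp_configure 'show advanced options', 1")
cur.execute("RECONFIGURE")
cur.execute("EXEC sp_configure 'min server memory (MB)', 4096")   # example floor
cur.execute("EXEC sp_configure 'max server memory (MB)', 12288")  # leave room for the OS
cur.execute("RECONFIGURE")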

SQL Server uses the tempdb database as a working area for temporary tables, sorting, sub-queries, etc.; the tempdb should be stored on its own drive, away from other DBs, whenever possible. The default location is on the SQL install disk. Increase the size of the tempdb database based on expected usage and available space. SQL Server automatically adjusts the size over time, but each change causes a performance hit and causes fragmentation; by increasing the size up front, you avoid constant growth. SQL 2008 uses the tempdb more than prior versions of SQL. Never try to back up the tempdb.
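
Pre-sizing tempdb is a one-line change. The logical file name below (tempdev) is the SQL Server default, but verify yours with sp_helpfile first; the size and connection details are only examples:

# tempdb_size.py - sketch; the size and connection details are placeholders
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlhost;DATABASE=tempdb;"
    "UID=sa;PWD=secret", autocommit=True
)
conn.cursor().execute(
    "ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 4096MB)"
)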

Monitor the disk space of the data files and log files. Disk space is inexpensive compared with the benefit of having it available in abundance. You should aim to keep at least 30% free disk space in case you need to expand the data/log files, or if they are set to autogrow. Sudden increases in size should be cause for investigation.

Monitor the fragmentation levels. If the database and log were set to autogrow at small intervals, there is a high likelihood that they are fragmented. If you regularly shrink the DB data files or log files, that could also lead to fragmentation.

Monitor for slow queries; you can see slow queries in the Connect debug log. Just search for Slow Query. Query times are returned in msec. Also look for lock timeouts in the debug log; generally these are a sign of database problems. A lock timeout occurs when a query attempting to get a lock on a database resource times out because something else is already holding the lock. A lock is usually held until the transaction has completed, so a long-running query can cause lock timeouts. You can also run traces against the database to gather information on long-running queries; in SQL 2008 you can query dynamic management views to get this information.
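
A small script can pull the slow queries and lock timeouts out of the debug log for review; the log path below is a placeholder and the exact wording of the entries may vary between Connect versions:

# scan_debug_log.py - sketch; the log path and match strings are assumptions
LOG_PATH = r"C:\Connect\logs\debug.log"  # adjust to your installation

with open(LOG_PATH, errors="replace") as log:
    for line in log:
        if "Slow Query" in line or "lock timeout" in line.lower():
            print(line.rstrip())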

Monitor indexes liberally keeping in mind that re-indexing regularly should decrease the need to monitor indexes. Sometimes re-indexing may start taking too long to complete and you will want to be more selective about what to target. Knowing which tables or indices are most fragmented allows you to only re-index them. You can query dynamic management views in SQL Server to get this information (see SQL Server books online). Many 3rd party products offer monitoring of SQL server and you might consider these products if you want a more automated GUI interface to monitoring indexes. Some of the products offer monitoring for other areas of SQL Server as well.
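
The dynamic management view referred to above is sys.dm_db_index_physical_stats. The sketch below lists the most fragmented indexes in the current database; the connection details are placeholders and the 30% threshold is just a common rule of thumb:

# index_fragmentation.py - sketch; connection details are placeholders
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlhost;DATABASE=connectdb;"
    "UID=sa;PWD=secret"
)
cur = conn.cursor()
cur.execute("""
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent AS frag_pct
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30
    ORDER BY ips.avg_fragmentation_in_percent DESC
""")
for table_name, index_name, frag_pct in cur.fetchall():
    print(f"{table_name}.{index_name}: {frag_pct:.1f}% fragmented")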

Windows Performance Monitor (perfmon) is useful; you can use perfmon to monitor SQL counters. Here are three common counters which, if they reveal anything unusual, will warrant further scrutiny:

  • Pages/sec  -  How much your SQL server is paging in and out of memory
  • Disk Queues -  If the write or read disk queues are too high, you will need more RAM
  • CPU Queue length -  If the CPU queue is consistently over 2 per CPU for an extended period of time, you might have a CPU bottleneck.

Be aware of  load and activity when monitoring with perfmon as database backups and other maintenance activities can cause spikes in these numbers. It is best to connect to the server from a different PC if you intend to monitor it with perfmon.

A good maintenance plan will include scheduled re-indexing during off hours. Fragmented indices can cause Connect to become sluggish and might even cause fast-fails in a Connect cluster. If you start to see a lot of slow queries in the debug log, you should ensure that the Connect DB is being re-indexed regularly: index maintenance is one of the easiest ways to keep your DB healthy, and SQL Server provides wizards that help make index maintenance easy.

Open SQL Server Management Studio and expand the Management folder.

  • Right-click the Maintenance Plans folder
  • Choose Maintenance Plan Wizard

Give the Maintenance plan a name:

[Screenshot: db4.fw — naming the maintenance plan]

Choose the desired maintenance tasks: Rebuild Index & Update Statistics

[Screenshot: db5.fw — selecting the Rebuild Index and Update Statistics tasks]

Choose the Database you want to re-index:

[Screenshot: db6.fw — selecting the database to re-index]

Reorganize with the default amount of free space; the default amount is what it was initially created with.

[Screenshot: db7.fw — reorganizing pages with the default amount of free space]

Choose the same database to update statistics after you re-index.

[Screenshot: db8.fw — selecting the same database for the Update Statistics task]

Schedule a job to run the maintenance plan; provide a name and choose a schedule that suits your infrastructure:

[Screenshot: db9.fw — scheduling the maintenance plan job]

Following these database performance and monitoring best practices will ensure a responsive Connect server and an excellent end-user experience.

FAQs on Adobe Connect SQL Database Installation, Startup, Connection and Pooling

The following is a summary of Adobe Connect 9 database installation tips:

1. What do I need to start?

Always check the updated system requirements page prior to installing: http://www.adobe.com/products/adobeconnect/tech-specs.html
As of the writing of this article, it reads: Microsoft SQL Server 2008 SP3 or 2008 R2.

While it is best to have sa permissions, you are at minimum required to use a username and password with dbcreator privileges. We highly recommend using an sa account. After the install you may use a dbo account for normal use, but during any upgrade or updater application, you must switch back to sa.

2. When does the installer create the database for Connect?

All current Connect versions (after 7.5SP1) create the database during installation. Typically the DB creation process takes about 50 seconds. First the schema gets created and then the seed data is inserted. After the DB is created, Connect is still not fully functional until you download and apply the license.txt file. The license file inserts additional seed data into the Connect database, including templates and folders.

3. How should I troubleshoot database login failures during installation?

[Screenshot: db1.fw — database login failure error during installation]

This error can mean several things:

  • The username could be incorrect
  • The password could be incorrect
  • SQL Server Authentication might not be enabled

Entries in the debug.log will provide some answers:

[Screenshot: db2.fw — debug.log entries for database login failures]

  • java.sql.SQLException… Login failed for user ‘sa’ usually means a mistyped username or password
  • java.sql.SQLException… Login failed for user ‘sa’. The user is not associated with a trusted SQL Server connection usually indicates SQL Authentication is disabled
  • java.sql.SQLException…Cannot open database “dbname” requested by the login usually indicates that the login exists, but does not have permission to open the DB
  • java.sql.SQLException…CREATE TABLE permission denied in database ‘dbname’ usually indicates that the login has permission to log in to the DB, but does not have permission to create schema objects

Note: During install, upgrade and minor point-release updates, the DB user must have permissions to create, alter or drop schema objects.

Note also that log errors are discussed on page 83 of the Adobe Connect Installation Guide: http://help.adobe.com/en_US/connect/9.0/installconfigure/connect_9_install.pdf

If you encounter any of these errors, stop all of the Connect services, correct the user privileges in SQL and start the services again.
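
A quick way to separate a bad username or password from disabled SQL Server Authentication or a missing permission is to try the same credentials outside of Connect. A sketch; the server, database and credentials are placeholders:

# test_sql_login.py - sketch; server, database and credentials are placeholders
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlhost,1433;DATABASE=connectdb;UID=sa;PWD=secret"
)

try:
    conn = pyodbc.connect(CONN_STR, timeout=5)
    print("Login succeeded and the database opened.")
    conn.close()
except pyodbc.Error as exc:
    # "Login failed for user"  -> bad credentials or SQL Authentication disabled
    # "Cannot open database"   -> the login exists but lacks access to the DB
    print("Connection failed:", exc)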

4. What happens during a successful startup?

During start-up, Connect tries to log in to the SQL database; if it cannot connect, the service stays running but enters a dormant state. You will be able to gain access to local port 8510 to configure the Connect server through its wizard, but not the application front end. If the connection is successful, then Connect makes multiple connections to the SQL database (the connection pool). The initial connection pool and max connection pool sizes are configurable. Connect checks the DB version, determines whether it needs to apply updates, and then the Connect host updates a row in the DB (PPS_ENUM_DATA_HOSTS) and sets itself active.

5. How does Connect monitor the health of the SQL database? What is the HealthCheck function for?

Connect relies heavily on the SQL database; it is safe to call the SQL database the heart of any Adobe Connect installation. Connect constantly checks to see if there is a valid connection to the SQL database, because loss of connection can lead to data corruption. To avoid this, Connect runs a health-check on the SQL database; it pings the SQL Server and checks to see if it has been more than 40 seconds since the Connect server has updated the PPS_ENUM_DATA_HOSTS table. If it has been longer than 40 seconds, the Connect Pro host is marked inactive and the services for that Connect server will restart and then reattempt to connect to the SQL database.

If you are running the Connect SQL database in a SQL cluster rather than in a mirrored environment, you will want to make sure that Connect makes multiple database connection attempts during SQL fail-over. If Connect loses its SQL database, the entire Connect cluster will go down and wait for an administrator to manually reconnect to the database by launching the Connect configuration console on port 8510. Add the following to the custom.ini file to support any delays in clustered SQL fail-over:

DB_URL_CONNECTION_RETRY_COUNT = 15
DB_URL_CONNECTION_RETRY_DELAY= 30

The actual JDBC string is in the config.ini file so you do not need to put it into the custom.ini; double check the config.ini if you are running into any problems with the JDBC reconnection string:

DB_URL=jdbc:macromedia:sqlserver://{DB_HOST}:{DB_PORT};databaseName={DB_NAME};user={DB_USER};password={DB_PASSWORD};ConnectionRetryCount={DB_URL_CONNECTION_RETRY_COUNT};ConnectionRetryDelay={DB_URL_CONNECTION_RETRY_DELAY}

6. What is the purpose of the connection pool and why do we do it the way we do?

Adobe Connect makes use of a connection pool. Every time the application needs to communicate with the SQL database, it checks for the next available idle connection and uses it. If there isn’t one available, it will create a new connection, unless it has reached the connection pool max. Once the application has finished its transaction, it releases the connection back into the pool. These settings are found in \appserv\conf\Catalina\localhost\root.xml

  • minPoolSize="20"
  • maxPoolSize="25"
  • initialPoolSize="20"

This prevents the overhead of creating new connections each time a call to the SQL database is required. The connections are made at start-up. Since Connect relies heavily on the DB, having available connections is essential.
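
Conceptually, the pool behaves something like the toy sketch below. Connect’s real pool lives inside its Java application server and is configured through root.xml as shown above; this is only an illustration of the idea, with a stand-in make_connection function:

# pool_sketch.py - conceptual illustration only, not Connect's actual implementation
import queue

def make_connection():
    return object()  # stand-in for opening a real database connection

class ConnectionPool:
    def __init__(self, initial=20, maximum=25):
        self.maximum = maximum
        self.created = 0
        self.idle = queue.Queue()
        for _ in range(initial):           # connections are made up front, at start-up
            self.idle.put(make_connection())
            self.created += 1

    def acquire(self):
        try:
            return self.idle.get_nowait()  # reuse an idle connection if one exists
        except queue.Empty:
            if self.created < self.maximum:
                self.created += 1          # grow, but never past the pool maximum
                return make_connection()
            return self.idle.get()         # otherwise wait for a release

    def release(self, conn):
        self.idle.put(conn)                # hand the connection back to the pool

pool = ConnectionPool()
c = pool.acquire()
# ... run a query ...
pool.release(c)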

Content Availability & Replication among Clustered Connect Servers

This article addresses how to make sure every URL in a Connect cluster pops up like your grandmother’s trustworthy old pop-up toaster. If some published content will not play or display properly from your Connect cluster, do the following:

  • Make sure that the load-balancing device in front of the Connect cluster is not using sticky sessions or session awareness.
  • If you are using a Big-IP LTM, make sure that Nagle’s Algorithm is turned off.
  • Make sure that clustered servers communicate with each other on ports 8506 and 8507. A simple netstat -an from the command prompt on each server will make certain each server is listening on 8507.
  • Don’t try to cheat on the number of documented VIPs required on the load-balancing device or SSL accelerator. Only one VIP is not the right answer.
  • When the servers cannot communicate, trouble ensues. Clustered servers need to be able to communicate with each other on port 8507. When they cannot communicate, the links to content will be broken and end users may experience hanging videos or unavailable content.
  • Search in your debug log for “cluster-”
    • [01-31 18:20:23,729] cluster-8507-696969 (INFO) MirrorHandler: waiting for commands
    • If you do not see cluster-8507 entries, then there is a replication communication problem on 8507
  • To help facilitate troubleshooting, during setup and during upgrades, if possible, allow the servers to have external access – at least allow the monitored and screened ability to toggle external access on and off by special request if needed. This will allow you access to Adobe’s license server and also to troubleshooting tools.
  • Install telnet on each server. Telnet is a great test tool. Placing telnet on each server and actually testing connectivity to and from each server on ports 8506 and 8507 is prudent (a scripted alternative appears after this list).
  • It is also prudent to have a useful browser on each server. I often install Firefox on each server along with the latest available Flash Player. You may want to install Flash Player in IE as well, even though enhanced security will render it a challenge to use for any useful troubleshooting purposes. Useful Flash-enabled browsers let you test any problematic content directly on the servers, from the servers, thereby eliminating network and load-balancing variables when isolating issues.
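
If telnet is not installed (it often is not, by default, on newer Windows servers), the same connectivity test can be scripted; the host names below are placeholders:

# cluster_port_check.py - sketch; the host names are placeholders
import socket

SERVERS = ["connect01.example.com", "connect02.example.com"]
PORTS = [8506, 8507]

for host in SERVERS:
    for port in PORTS:
        try:
            with socket.create_connection((host, port), timeout=3):
                print(f"{host}:{port} reachable")
        except OSError as exc:
            print(f"{host}:{port} NOT reachable ({exc})")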

Configuring Adobe Connect to take Advantage of Database Mirroring

Full redundancy requires that the Connect database be either mirrored or clustered; Adobe uses mirroring as the preferred solution.

The following example settings in the custom.ini file are needed to configure Connect to take advantage of SQL Mirroring:

DB_NAME=ConnectDBName

DB_HOST=ConnectDBPrimaryHostName

DB_BACKUP_HOST=ConnectDBSecondaryHostName

DB_URL=jdbc:macromedia:sqlserver://{DB_HOST}:{DB_PORT};databaseName={DB_NAME};user={DB_USER};password={DB_PASSWORD};AlternateServers=({DB_BACKUP_HOST}:{DB_PORT};DatabaseName={DB_NAME});ConnectionRetryCount=12;ConnectionRetryDelay=10;FailoverMode=extended;FailoverPreconnect=false;FailoverGranularity=atomic

Note: Change the first three variables as appropriate, but do not make any changes to the DB_URL. It is all one line and it pulls its values from the other three entries in custom.ini.

The following setting is always prudent whether using mirroring or clustering, but it is particularly important if you are clustering SQL. If you are running the Connect SQL database in a SQL cluster rather than in a mirrored environment, you will want to make sure that Connect makes multiple database connection attempts during SQL fail-over. If Connect loses its SQL database, the entire Connect cluster will go down and wait for an administrator to manually reconnect to the database by launching the Connect configuration console on port 8510. Add the following to the custom.ini file to support any delays in clustered SQL fail-over:

DB_URL_CONNECTION_RETRY_COUNT = 15
DB_URL_CONNECTION_RETRY_DELAY= 30

The actual JDBC string that invokes these variables is in the config.ini file:

DB_URL=jdbc:macromedia:sqlserver://{DB_HOST}:{DB_PORT};databaseName={DB_NAME};user={DB_USER};password={DB_PASSWORD};ConnectionRetryCount={DB_URL_CONNECTION_RETRY_COUNT};ConnectionRetryDelay={DB_URL_CONNECTION_RETRY_DELAY}

Save the custom.ini and cycle the services.

 

 

Improving the Performance of Connect Meeting Clustered Fail-over

To improve the performance of Connect Meeting fail-over in a cluster, make the following changes to the custom.ini file and cycle the FMS and Connect services or reboot the Connect servers.

The installation documentation for clustering originally prescribed setting the MEETING_TIMEOUT variable to 0 as shown:

#For Connect 8&9 Cluster fail-over add the following
MEETING_TIMEOUT=0
MEETING_CONNECT_BACK_TIME=1

We have discovered that setting the MEETING_TIMEOUT variable to 1 ensures that a meeting will only have one active session, while setting the MEETING_TIMEOUT variable to 0 opens the possibility (although rare) of a meeting having more than one active session. The result is a bit odd: everyone could be happily in an ongoing Connect meeting while one attendee is alone in what appears to be the same meeting. This is usually triggered by some form of interruption in the isolated user’s meeting session, such as a brief outage in that user’s network connection.

The new recommended custom.ini settings are as follows:

#For Connect 9 Cluster fail-over add the following
MEETING_TIMEOUT=1
MEETING_CONNECT_BACK_TIME=1

Save the custom.ini after making the change and cycle the Flash Media Server and Adobe Connect Server services.

In Connect version 9.2, MEETING_TIMEOUT may be set back to 0, as we have made the appropriate code changes to ensure the functionality works as originally intended even if a user experiences network interruptions and must reconnect to the meeting.

Adobe Connect Server Licensing for Disaster Recovery

This question is commonly asked: does my license for on-premise Adobe Connect allow me to install Adobe Connect servers for disaster recovery purposes?

First let’s define the terms: Disaster Recovery Environment refers to your technical environment designed solely to allow you to respond to an interruption in service due to an event beyond your control that creates an inability on your part to provide critical business functions for a material period of time. That is to say, it refers to a secondary site that would not be utilized in production unless the primary site went offline due to a natural or human-inflicted disaster that is beyond your control. Use of Adobe Connect servers in Disaster Recovery Environments is within the scope of your license and no additional fees are due to Adobe Systems Incorporated. For example, for the architecture depicted here, you would need four Adobe Connect server licenses. 

 

[Diagram: Connect_DR_cluster — a four-server disaster recovery architecture]

 

However, adding one or more Adobe Connect servers to a local cluster is outside the scope of your license, and you will need to purchase additional licenses from Adobe Systems Incorporated to accomplish this.  Additional licenses are needed when adding any Adobe Connect servers that increase scalability in the form of:

  • Availability — What percentage of time is Connect available to geographically distributed users?
  • Reliability — How often does Connect experience problems that affect availability?
  • Performance — How fast does Connect consistently and qualitatively respond to user requests?
  • Concurrency – How many users can a Connect deployment handle concurrently?

Information around cluster expansion is here: Adobe® Connect™ server pools/clusters and hardware-based load-balancing devices with SSL acceleration

If you were to geographically distribute an active Connect cluster by placing Adobe Connect servers into two separate data centers, that would also require additional licensing. Connect servers in a cluster cannot have more than 2-3ms of latency between them. Generally you would not geographically distribute Adobe Connect servers into different data centers; however, there is a chapter in the aforementioned clustering article on the topic. With that said, the architecture depicted below is an example of a distributed active Adobe Connect cluster that is spread between two local data centers with nominal latency between those data centers (less than 3ms). All four servers are in production and all are actively hosting meetings and serving on-demand content. The Connect architecture example depicted in the diagram below requires a four-server Connect cluster license:

 

[Diagram: Cross-DC-CLUSTER — an active four-server cluster spread across two local data centers]

 

Vantage Point is not just about Bandwidth

Vantage Point from Refined Data works with Connect and provides remote control of cameras, microphones, telephones, volumes, tech support, motion detection, mouse detection, continuous attendance tracking and reporting, and much more, so it is not just about bandwidth reduction.

On the bandwidth front, Vantage Point publishes streams to Refined Data servers at 100Kbps for each participant in the room; this is less than Connect in most cases, but the Host only consumes as many streams as they can view at one time. The host can see as few as 5 or 6 students at one time or as many as 50 or more depending on their screen resolution, window size, Vantage Point settings, connectivity and bandwidth availability.

This means that even with 100 Participants in a Connect room and one Host the bandwidth consumption looks like this:

  • Participant Load: 100kbps Up (to publish their own camera to Vantage Point), 100kbps down (to view the Host in Connect). This is a small signature on the network.
  • Host Load: 100kbps Up (to publish their own camera in Connect), 2.5mbps down (assuming they view 25 participant cameras in Vantage Point at one time). The Host is the only one who needs a really good connection.
  • Total Load on Adobe Connect: 1 publish stream + 99 subscribers

The host can always reduce their own load simply by viewing fewer simultaneous Video pods in Vantage Point. The bandwidth load for students or participants is not affected at all by class size. Bandwidth load for Hosts rises linearly with class size but can be limited by the host at any time based on the maximum number of cameras they view at one time.

In Connect, the bandwidth load rises with the number of cameras being shared:

  • 4 Cameras: 16 Connections on the Server, each user publishes 100kbps and consumes 300kbps
  • 10 Cameras: 100 Connections on the Server, each user publishes 100kbps and consumes 900kbps
  • 20 Cameras: 400 Connections on the Server, each user publishes 100kbps and consumes 1.9mbps

Even if Connect could technically support 50 or 100 simultaneous web cams in a single meeting (2,500 streams risk significant latency), consider the requirement that participants would need 5-10mbps of bandwidth to support the load, before accounting for VoIP, screen-sharing and basic overhead. Anything above 10 simultaneous web cameras may be difficult for a host to manage and, apart from any other considerations, there may not be enough real estate for content if you are showing 10 or more web cams in the meeting room.
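
The pattern behind those numbers: with n cameras, every publisher also subscribes to the other n-1 streams, so the server carries roughly n x n connections and each user consumes about (n-1) x 100kbps on top of the 100kbps they publish. A small sketch that reproduces the figures above:

# camera_bandwidth.py - reproduces the back-of-the-envelope numbers above
def camera_load(cameras, stream_kbps=100):
    connections = cameras * cameras             # n publish streams + n*(n-1) subscribe streams
    publish_kbps = stream_kbps                  # everyone publishes one stream
    consume_kbps = (cameras - 1) * stream_kbps  # and watches everyone else
    return connections, publish_kbps, consume_kbps

for n in (4, 10, 20, 50):
    conns, up, down = camera_load(n)
    print(f"{n} cameras: {conns} server connections, {up}kbps up, {down}kbps down per user")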

Vantage Point only publishes at 100kbps most of the time; DSL and Standard quality in Connect is already more than twice this load and can easily rise higher if the room is set to use the Highest video quality at 16:9. With Vantage Point, the Connect server is spared the video load, participants are not affected by class size, and Hosts can see all of their students, all of the time, while enjoying unparalleled control of the classroom environment.

Check it out at Refined Data: http://vantagepoint.refineddata.com/

Connect on VMWare – some deployment tips

Issue: VMWare is ubiquitous in the enterprise and while it opens up huge potential for management of the Connect infrastructure, it must be planned and executed with an eye toward robustness.

This advice is gleaned from conversations with senior persons on our operations team as well as from support cases generated by various customers with on-premise VMWare deployments of Connect.

One of the most important and often overlooked variables with virtualization is making certain that VMware is compatible with all the underlying components of the server and network architecture. The infrastructure supporting VMware must be verified by VMware under their Hardware Certification Program or Partner Verified and Supported Products (PVSP) program; be sure to use certified hardware.

Here is the link to the compatibility reference:  http://www.vmware.com/resources/compatibility

With Connect you must consider both Tomcat and FMS; the former can run on most anything, while the latter is a bit more demanding, as RTMP can be acutely affected by latency and packet transmission issues. If you notice unexpected latency or a surprise crash of FMS with Connect 9.1, a good first test is to check the network components and sniff for packet transmission issues; having the vNIC of the guest VMs configured to use VMXNET3 is a good place to start.

With reference to recommendations and best practices, it really depends on the VMware infrastructure adopted. The following references serve as a guide for an enhanced environment:

Enterprise Java Applications on VMware – Best Practices Guide: http://www.vmware.com/resources/techresources/1087

Best Practices for Performance Tuning of Latency-Sensitive Workloads in vSphere VMs: https://www.vmware.com/resources/techresources/10220

Performance Best Practices for VMware vSphere 5.1: https://www.vmware.com/resources/techresources/10329

The key with Network Storage is speed. If you lose connectivity to the shared storage then only what is cached on the origins will be available.

Shared storage requirements

  • Disk specs: 10,000–15,000 RPM — Fibre Channel preferred
  • Network link: TCP/IP — 1GB I/O throughput or better
  • Controller: Dual controllers with Active/Active multipath capability
  • Protocol: CIFS or equivalent

Avoid virtualizing the Connect database if possible.

I have seen that in some customer VMWare environments that are overtaxed, latency among the servers on 8507 (and 8506) can cause problems. Intra-cluster latency (server-to-server communication) should never exceed 2-3ms; when it does, we see intermittent crashes. I had one customer with a particularly weak infrastructure for whom I could predict crashes: he was doing back-ups and running other tasks at a certain time weekly that would tax and hamper network connectivity for about an hour. These tasks were so all-consuming on the network that they turned every cluster resource into an individual asset on its own island. The log traces bore this out and we knew with precision what was going on. He knew he needed to upgrade his infrastructure, and in the meantime we worked out a reaction plan to deal with the issue; it included:

  1. Place a higher than normal percentage of cache on each server to limit invoking shared storage
  2. Set the JDBC driver reconnection string for Database connectivity
  3. Plan Connect usage around these maintenance activities and, when possible, do Connect maintenance activities at the same time as well – not very difficult as these were after hours, but being a global operation, still not a given.