Upgrading VMware Identity Manager (vIDM)

Upgrading VMware Identity Manager (vIDM) or Workspace ONE appliances can surface some difficult issues, so to save you having to engage GSS, below are the details for upgrading from 2.9.1 to 3.3 using the offline upgrade method. Make sure to follow all of the pre-requisite and post-upgrade sections.


You can upgrade VMware Identity Manager online or offline. In versions 2.9.x and below, the only officially supported routes were the online method or a local web host.

By default the VMware Identity Manager appliance uses the VMware website for the upgrade procedure, which requires the appliance to have Internet connectivity. If applicable, you must also configure proxy server settings for the appliance.

Thanks to an unadvertised Knowledge Base article, KB2147931 (https://kb.vmware.com/s/article/2147931), it is now possible to upgrade via the offline upgrade script, which was previously only offered in 3.1 and above.

The following procedure can be used to upgrade vIDM appliances from 2.9.1 through to 3.3. Where necessary, please download the relevant updates and associated offline upgrade scripts listed in the appendix of this document.

Due to the additional VMware solutions deployed, the following sequence is recommended.
(note: there is no direct upgrade path from 2.9.1 to 3.3)

  1. Upgrade vIDM from 2.9.1 to
  2. Upgrade vRealize Operations Manager to 6.7
  3. Upgrade vRealize Log Insight to 4.6
  4. Upgrade vIDM from to 3.1
  5. Upgrade vIDM from 3.1 to

    • Please refer to the 3.2 upgrade pre-requisites before upgrading from 3.1 to 3.2
  6. Upgrade vIDM from to 3.3
    • Please refer to the 3.3 upgrade pre-requisites before upgrading from 3.2 to 3.3

Available upgrade options:

  1. Online upgrade (requires connectivity to vmware.com).
  2. Offline upgrade with local Web Server hosting update files.
  3. Offline upgrade with script and update binaries.
  4. Offline upgrade with manual steps and update binaries if method 3 fails.

Due to security restrictions imposed, we will be using option 3, or option 4 if that fails.

Important: Expect some downtime during the upgrade process as all services are stopped during the upgrade.


Upgrade Pre-Requisites

  1. Verify that VMware Identity Manager is properly configured.
  2. Verify that at least 4 GB of disk space is available on the primary root partition of the virtual appliances.
  3. If you are using an external database, take a snapshot or backup of the database. If you cannot perform an application-consistent backup, shut down the component before taking the snapshot.
  4. Download required upgrade files and update script.
    1. Download the following Upgrade Files from my.vmware.com :
      1. identity-manager-
      2. identity-manager- 
      3. identity-manager- 
      4. identity-manager- 
    2. Download Offline upgrade script: https://kb.vmware.com/s/article/2147931
  5. Copy both sets of upgrade files (script + update repo) to vIDM nodes.
    1. Copy the upgrade script to: /usr/local/horizon/update/
    2. Copy the upgrade zip to: /var/tmp/

NOTE: please refer to separate pre-requisites for 3.2 and 3.3 upgrades

vIDM 3.2 Upgrade Pre-Requisite Procedure

  1. Download the Elasticsearch migration plugin from:


2. Copy the Elasticsearch migration plugin to /var/tmp

3. Install the migration plugin on one of the VMware Identity Manager nodes:

export JAVA_HOME=/usr/java/jre-vmware
/opt/vmware/elasticsearch/bin/plugin -i migration -u file:///var/tmp/v1.19/elasticsearch-migration-1.19.zip
service elasticsearch restart 

4. View the Elasticsearch log file:

tail -f /opt/vmware/elasticsearch/logs/horizon.log 

Look for the message that verifies that the migration sites plugin was loaded, such as the following:                 

[2018-04-16 13:28:03,593][INFO ][plugins  ] [Crusader] loaded [discovery-idm], sites [migration]

NOTE: ElasticSearch uses a randomly assigned name from a list of 3000 Marvel characters.

5. Wait until Elasticsearch is up.

Messages similar to the following appear:    

[2018-04-16 13:28:13,282][INFO ][node     ] [Crusader] started
[2018-04-16 13:29:16,468][INFO ][gateway  ] [Crusader] recovered [91] indices into cluster_state 
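The two log checks in steps 4 and 5 can be scripted rather than eyeballed from the tail output. This is a minimal sketch that only assumes the 1.x log format shown above; `check_es_ready` is a helper name invented here, not a VMware or Elasticsearch tool.

```shell
# Sketch: read horizon.log lines on stdin and report when both the
# migration sites plugin load message and the node startup message
# are present. check_es_ready is a hypothetical helper.
check_es_ready() {
  text=$(cat)
  printf '%s\n' "$text" | grep -q 'sites \[migration\]' &&
  printf '%s\n' "$text" | grep -q '\] started' &&
  echo ready
}

# Usage on a node (log path from step 4):
#   check_es_ready < /opt/vmware/elasticsearch/logs/horizon.log
```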

6. Temporarily allow port 9200 to be accessible from the outside to allow access to the plugin via a browser.

  • Edit the /usr/local/horizon/conf/iptables/elasticsearch file and add “9200” to the ELASTICSEARCH_tcp_all entry.
  • Apply the new iptables rule by re-running the iptables configuration script.

7. Run the migration report.

  • In a browser, go to http://<ES_NODE_FQDN>:9200/_plugin/migration, where <ES_NODE_FQDN> is the fully-qualified domain name of the VMware Identity Manager node on which you installed the migration plugin.
  • Click the Run checks now button.

8. View the migration report and look for red indices.

Indices that are red because they are closed are expected items on the report and can be ignored. Any indices that are red for any reason other than that they are closed, for example, due to mapping conflicts, need to be deleted. Use the following command to delete the indices:

curl -XDELETE http://localhost:9200/<INDEX_NAME> 
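To avoid scanning the report by hand, the red-but-open indices can also be listed from the `_cat` API. A hedged sketch, assuming the default `_cat/indices` column order (health, status, index, ...); `red_indices` is a name made up for this example:

```shell
# Sketch: read `_cat/indices` output on stdin and print the names of
# indices that are red while still open (closed red indices can be
# ignored per the note above). red_indices is a hypothetical helper.
red_indices() {
  awk '$1 == "red" && $2 == "open" { print $3 }'
}

# Usage on a node:
#   curl -s 'http://localhost:9200/_cat/indices' | red_indices
# Each printed index would then be removed with the curl -XDELETE above.
```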

9. Block port 9200 again.

  • Edit the /usr/local/horizon/conf/iptables/elasticsearch file and set ELASTICSEARCH_tcp_all back to “”.
  • Re-run the iptables configuration script to apply the change.

10. Remove the migration plugin:

/opt/vmware/elasticsearch/bin/plugin remove migration

11. Restart the Elasticsearch service:

service elasticsearch restart 

12. Continue with upgrade procedure as per usual.

vIDM 3.3 Upgrade Pre-Requisite Procedure

There are two potential upgrade faults when upgrading to 3.3. Please apply these workarounds before starting the upgrade, otherwise the upgrade may fail and the only fix is to revert to the snapshot.

“Certificate auth configuration update required” Error Workaround

  1. Log in to the VMware Identity Manager console.
  2. Navigate to Identity & Access Management > Setup.
  3. In the Connectors page, click the link in the Worker column.
  4. Click the Auth Adapters tab, then click CertificateAuthAdapter.
  5. In the Uploaded CA Certificates section, click the red X to remove the certificate.
  6. Click Save.
  7. In the Root and intermediate CA certificates section, click Select File to add the certificate back.
  8. Click Save.


Network Settings (ifcfg-eth0) Workaround

  1. Either log in to the virtual appliance as the root user or log in as the sshuser and run the su command to switch to super user.
  2. Back up the /etc/sysconfig/networking/devices/ifcfg-eth0 file to another directory.
  3. Upgrade the virtual appliance but do not restart it.
  4. Restore the ifcfg-eth0 file to the /etc/sysconfig/networking/devices directory.
  5. Restart the virtual appliance.

Upgrade Preparation

  1. Take in-guest backups of the database and snapshots of the VMware Identity Manager nodes. 
    • See THIS post for how to find the master node
    • Recommendation: Shut down the appliances before taking snapshots if you cannot guarantee application consistency.
  2. Remove all nodes except one from the NSX load balancer. 
    1. Requires NSX management plane access
    2. From the Home menu of the vSphere Web Client, select Networking & Security
    3. In the Navigator, click NSX Edges
    4. From the NSX Manager drop-down menu, select <NSX Manager IP> and double-click the <NSX Edge providing LB functionality>  NSX Edge to open its network settings. 
    5. On the Manage tab, click the Load Balancer tab and click Pools
    6. Select the VIDM pool that contains the VMware Identity Manager appliances and click Edit
    7. In the Edit Pool dialog box, select the secondary node, click Edit, select Disable from the State drop-down menu, and click OK
    8. In the Edit Pool dialog box, select NONE from the Monitors drop-down menu and click OK.

Upgrade Method 3: Perform Offline Upgrade using script and update files.

1. Upgrade the node that is still connected to the load balancer. You can use the updateoffline.hzn script to perform an offline upgrade of the VMware Identity Manager virtual appliance.

2. Run the updateoffline.hzn script as the root user. 

/usr/local/horizon/update/updateoffline.hzn [-r] -f upgradeFilePath

For example: 

/usr/local/horizon/update/updateoffline.hzn -f file:///var/tmp/identity-manager-

If the upgrade fails, proceed to Upgrade Method 4 (manual steps).

3. If you did not use the -r option with the script, restart the virtual appliance after the upgrade is complete.


4. After the node is upgraded, leave it connected to the load balancer. This ensures that the VMware Identity Manager service is available while you upgrade the other nodes.

5. Upgrade the other nodes one at a time.

6. If upgrading to 3.2, complete the post 3.2 upgrade tasks.

Upgrade Method 4: Perform Offline Upgrade using manual steps and update files.

  1. Download the updaterepo.zip, copy it under /var/tmp, then extract it and serve it over a local HTTP port:

mkdir /var/tmp/update
cd /var/tmp/update
unzip ../identity<fullfilename>.zip
# Allow inbound connections on port 8008
iptables -A INPUT -p tcp --dport 8008 -m state --state NEW,ESTABLISHED -j ACCEPT
# Serve the extracted repo on port 8008 in the background
python -m SimpleHTTPServer 8008 2>/dev/null &
# Point the updater at the local repo
/usr/local/horizon/update/updatelocal.hzn seturl http://localhost:8008/

2. Confirm from a browser that you can browse the update files at http://hostname:8008

3. Run the following commands to install the update:

/usr/local/horizon/update/updatemgr.hzn updateinstaller
/usr/local/horizon/update/updatemgr.hzn update

4. Restart the virtual appliance after upgrade is complete. 


5. After the node is upgraded, leave it connected to the load balancer. This ensures that the VMware Identity Manager service is available while you upgrade the other nodes. 

6. Upgrade the other nodes one at a time. 

7. If upgrading to 3.2, complete post 3.2 upgrade tasks to fix ElasticSearch

8. Perform Upgrade Validation

9. If the upgrade fails with a message referencing inode usage, see this post HERE

Upgrade Validation

  1. After all the nodes are upgraded, add them back to the load balancer.
  2. Validate health of vIDM
  3. Validate vIDM functionality
  4. Validate that the Elasticsearch cluster health is green with no unallocated shards:

curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'

5. Remove old update files and consolidate snapshots.
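For validation step 4, the cluster status can be extracted from the health JSON without jq, which may not be present on the appliance. A minimal sketch; `es_status` is a helper name invented here, not a VMware tool.

```shell
# Sketch: pull the "status" field out of _cluster/health JSON on stdin.
# es_status is a hypothetical helper for this runbook.
es_status() {
  sed -n 's/.*"status"[[:space:]]*:[[:space:]]*"\([a-z]*\)".*/\1/p'
}

# Usage on a node:
#   curl -s 'http://localhost:9200/_cluster/health' | es_status
# Anything other than "green" warrants the unassigned-shards checks
# covered in the Elasticsearch troubleshooting steps later in this document.
```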

Post 2.9.2 Upgrade Validation

  1. Verify that RabbitMQ is not running in cluster mode on the upgraded nodes.
    • In 2.9.1 and later releases, RabbitMQ clustering has been disabled.
    • Follow these steps on each upgraded node:
    1. Log in to the upgraded node.
    2. Run the following command:

rabbitmqctl cluster_status

The command should return status similar to the following: 

sva-1:~ # rabbitmqctl cluster_status
Cluster status of node 'rabbitmq@sva-1' ...

3. If the status includes references to any node other than the one on which you ran the rabbitmqctl cluster_status command, run the following commands: 

  • Stop RabbitMQ. 
rabbitmqctl stop_app
  • Reset RabbitMQ 
rabbitmqctl force_reset
  • Start RabbitMQ. 
rabbitmqctl start_app 
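The cluster_status check above can be automated by extracting node names from the command output. A hedged sketch: `foreign_nodes` is a name introduced here, and it assumes node names follow the rabbitmq@<host> pattern shown in the sample output.

```shell
# Sketch: read `rabbitmqctl cluster_status` output on stdin and print any
# node names other than the local one ($1). Non-empty output means the
# node still references peers and needs the stop/reset/start sequence.
# foreign_nodes is a hypothetical helper, not part of rabbitmqctl.
foreign_nodes() {
  grep -o 'rabbitmq@[A-Za-z0-9._-]*' | sort -u | grep -v "^$1\$" || true
}

# Usage on a node (local node name as the argument):
#   rabbitmqctl cluster_status | foreign_nodes "rabbitmq@$(hostname -s)"
```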

Post 3.2 Upgrade – Elastic Search fix.

  1. Starting with the master node, edit the /etc/init.d/elasticsearch file.
  2. Update the JAVA_OPTS export so it references the idm-cacerts trust store:

export JAVA_OPTS="-Djavax.net.ssl.trustStore=${IDM_CA_KEYSTORE}"
  • Then change the permissions on idm-cacerts keystore
chmod 644 /usr/local/horizon/conf/idm-cacerts
chmod 775 /usr/local/horizon/conf /usr/local/horizon
  • Restart the services.
service elasticsearch restart
service horizon-workspace restart
  • Wait at least 15 minutes for the services to fully restart before continuing to the next node.


If you get a “there was a problem with the analytics service” error in the Health status after the upgrade, it usually means there is an issue with Elasticsearch, typically due to unassigned shards.

1.  Run the following command to determine if you have unassigned shards.

curl http://localhost:9200/_cluster/health?pretty

2. Run the following command to view the unassigned shards.

curl -XGET 'http://localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason' | grep UNASSIGNED

3. Delete the indices containing the unassigned shards by running the following command:

curl -XDELETE 'http://localhost:9200/v3_YYYY-MM-DD/'

Once all the unassigned shards have been deleted, run the command:

curl http://localhost:9200/_cluster/health?pretty 

Which should report 0 unassigned shards and status: green.
Refreshing the dashboard should then show the Analytics connection as successful.
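The shard listing in step 2 can be reduced to just the affected index names. A minimal sketch, assuming the column order requested by the `h=` parameter above (index, shard, prirep, state, unassigned.reason); `unassigned_indices` is a name made up here:

```shell
# Sketch: read `_cat/shards?h=index,shard,prirep,state,unassigned.reason`
# output on stdin and print each index with an UNASSIGNED shard once.
# unassigned_indices is a hypothetical helper, not an Elasticsearch API.
unassigned_indices() {
  awk '$4 == "UNASSIGNED" { print $1 }' | sort -u
}

# Usage on a node:
#   curl -s 'http://localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason' | unassigned_indices
```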


Reference Documents:

vIDM upgrade guide https://docs.vmware.com/en/VMware-Identity-Manager/2.9.2/wsp-291-upgrade.pdf 
vIDM 2.x Offline upgrade script https://kb.vmware.com/s/article/2147931   
vIDM 3.1 upgrade guide https://docs.vmware.com/en/VMware-Identity-Manager/3.1/identitymanager-upgrade.doc/GUID-9A60AF97-787F-4234-BFC4-08C43BA440D7.html
vIDM 3.1 upgrade script https://docs.vmware.com/en/VMware-Identity-Manager/3.1/identitymanager-upgrade.doc/GUID-8744E48D-59D0-4AD0-B273-90F7B8E86C94.html
vIDM 3.2 upgrade guide https://docs.vmware.com/en/VMware-Identity-Manager/3.2/identitymanager-upgrade.doc/GUID-9A60AF97-787F-4234-BFC4-08C43BA440D7.html
vIDM 3.2 upgrade script https://docs.vmware.com/en/VMware-Identity-Manager/3.2/identitymanager-upgrade.doc/GUID-8744E48D-59D0-4AD0-B273-90F7B8E86C94.html
vIDM 3.3 upgrade guide https://docs.vmware.com/en/VMware-Identity-Manager/3.3/identitymanager-upgrade.doc/GUID-9A60AF97-787F-4234-BFC4-08C43BA440D7.html
vIDM 3.3 upgrade script https://docs.vmware.com/en/VMware-Identity-Manager/3.3/identitymanager-upgrade.doc/GUID-8744E48D-59D0-4AD0-B273-90F7B8E86C94.html