This topic explains the steps to configure the migration channel.
To migrate data from Classic On-Premises to the Virtual Appliance, configure the NFS channel that establishes a secure connection and mounts the NFS server on both the Classic On-Premises environment and the Virtual Appliance. This setup enables data transfer between the two systems. Follow these steps to configure the NFS server for data migration:
Ensure that the VA services are stopped. When they are stopped, the status of each VA service appears as Not Installed.
Sample Output:
+----------------------+---------------+
| Service Endpoint     | Status        |
+======================+===============+
| Controller           | Not Installed |
+----------------------+---------------+
| Events               | Not Installed |
+----------------------+---------------+
| EUM Collector        | Not Installed |
+----------------------+---------------+
| EUM Aggregator       | Not Installed |
+----------------------+---------------+
| EUM Screenshot       | Not Installed |
+----------------------+---------------+
| Synthetic Shepherd   | Not Installed |
+----------------------+---------------+
| Synthetic Scheduler  | Not Installed |
+----------------------+---------------+
| Synthetic Feeder     | Not Installed |
+----------------------+---------------+
| AD/RCA Services      | Not Installed |
+----------------------+---------------+
| SecureApp            | Not Installed |
+----------------------+---------------+
| OTIS                 | Not Installed |
+----------------------+---------------+
| ATD                  | Not Installed |
+----------------------+---------------+
| UIL                  | Not Installed |
+----------------------+---------------+
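The status check above can be automated by scanning the status table for any service that is not in the Not Installed state. A minimal sketch, in which the two-row table snippet and the /tmp/va_status.txt path are illustrative; in practice, capture the real status output to the file first:

```shell
#!/bin/sh
# Sketch: scan a saved VA service status table and flag any service whose
# status column is not "Not Installed". The sample rows below are taken
# from the status table on this page.
cat > /tmp/va_status.txt <<'EOF'
| Controller           | Not Installed |
| Events               | Not Installed |
EOF

# Count service rows whose status is anything other than "Not Installed".
running=$(grep '^|' /tmp/va_status.txt | grep -vc 'Not Installed')
if [ "$running" -eq 0 ]; then
  echo "all services stopped"
else
  echo "$running service(s) still active"
fi
```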
- Log in to the Virtual Appliance node and edit the globals.yaml.gotmpl file:
cd /var/appd/config
vi globals.yaml.gotmpl
- To disable self-monitoring for the Virtual Appliance, set the enableClusterAgent parameter to false:
enableClusterAgent: false
- To enable data migration for the Elasticsearch datastores, set the migration.elasticsearch.fs.enabled parameter to true:
# Migration config for datastores
migration:
  elasticsearch:
    fs:
      enabled: true
- Ensure that the license.lic file exists in /var/appd/config.
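The three edits above (self-monitoring off, Elasticsearch migration on, license file present) can be sanity-checked before running the migration tool. A minimal preflight sketch; it builds a sample config in a temporary directory so it is self-contained, whereas in production you would point CONFIG_DIR at /var/appd/config:

```shell
#!/bin/sh
# Preflight sketch for the configuration steps above. The temp-dir setup
# stands in for the real /var/appd/config directory.
CONFIG_DIR=$(mktemp -d)
cat > "$CONFIG_DIR/globals.yaml.gotmpl" <<'EOF'
enableClusterAgent: false
# Migration config for datastores
migration:
  elasticsearch:
    fs:
      enabled: true
EOF
touch "$CONFIG_DIR/license.lic"

ok=yes
grep -q 'enableClusterAgent: false' "$CONFIG_DIR/globals.yaml.gotmpl" \
  || { echo "self-monitoring still enabled"; ok=no; }
grep -q 'enabled: true' "$CONFIG_DIR/globals.yaml.gotmpl" \
  || { echo "fs migration not enabled"; ok=no; }
[ -f "$CONFIG_DIR/license.lic" ] \
  || { echo "license.lic missing"; ok=no; }
echo "preflight: $ok"
```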
- Run the following command to establish a secure connection and mount the NFS server on the Classic On-Premises environment and the Virtual Appliance:
python3 ./migration_tool.py setup migration-channel
- Log in to the Virtual Appliance node and start the Virtual Appliance services:
appdcli start appd <profile>
If the Virtual Appliance is already running, restart it using the following commands:
appdcli stop appd
appdcli start appd <profile>
- (Optional) Verify the NFS channel before starting the migration tool.
python3 ./migration_tool.py validate nfs-channel
- Ensure that there is no shard count mismatch for Elasticsearch indices between Classic On-Premises and the Virtual Appliance. If the same index name exists in both environments with different shard counts, data migration for those indices fails. Delete any such index on the Virtual Appliance.
- Run the following command to list the indices and their shard counts from the classic Elasticsearch instance:
- Classic
ssh -o StrictHostKeyChecking=no -i ~/.ssh/id_common_classic_all appdynamics@10.115.85.187 "curl -s 'http://localhost:9200/_cat/indices?h=index,pri&v' -u elastic:elastic_user_password | sort"
- Virtual Appliance
- Log in to the Virtual Appliance and run the following command:
kubectl get svc -n es
Sample output:
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
appd-es-http            ClusterIP   10.152.183.210   <none>        9200/TCP   6d19h
appd-es-internal-http   ClusterIP   10.152.183.56    <none>        9200/TCP   6d19h
appd-es-node            ClusterIP   None             <none>        9200/TCP   6d19h
appd-es-transport       ClusterIP   None             <none>        9300/TCP   6d19h
- Run the following command to list the indices and their primary shard counts from the VA Elasticsearch instance:
ssh -o StrictHostKeyChecking=no -i ~/.ssh/id_va_ip1 appduser@10.115.86.89 "curl -s 'http://10.152.183.115:9200/_cat/indices?h=index,pri&v' -u elastic1:appDelastic1@123 | sort"
Note: Command options:
- _cat/indices: the Elasticsearch cat API for listing indices.
- h=index,pri: shows only the index name (index) and the primary shard count (pri).
- v: shows column headers.
- sort: sorts alphabetically for easy comparison.
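Once both listings are captured, the shard counts can be compared mechanically. A minimal sketch, assuming each listing has been saved as sorted "index pri" lines from the curl commands above; the file paths, index names, and counts here are illustrative:

```shell
#!/bin/sh
# Sketch: compare primary shard counts between the Classic and VA listings.
# Each file holds sorted "index pri" lines; the contents are sample data.
cat > /tmp/classic.txt <<'EOF'
index_a 5
index_b 1
EOF
cat > /tmp/va.txt <<'EOF'
index_a 3
index_c 1
EOF

# Join on index name and keep rows where the shard counts differ. Indices
# present on only one side are skipped by join's default behaviour.
mismatches=$(join /tmp/classic.txt /tmp/va.txt | awk '$2 != $3 {print $1 ": classic=" $2 " va=" $3}')
echo "$mismatches"
```

Any index printed by this check is a candidate for deletion on the VA, as described in the next step.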
- Delete the problematic index from the Virtual Appliance (VA).
curl -u username:password -X DELETE "http://10.0.0.1:9200/<index_name>"
Note: Obtain the username and password for this command from the config.yaml file: go to the services.events section and copy the username and password.
username: va_es_user
password: va_es_password
The migration tool recreates the deleted indices in the VA with the correct shard configuration from the Classic On-Premises environment during data migration.
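When several indices are mismatched, the delete step above can be scripted. A dry-run sketch that only prints the curl commands rather than executing them; ES_URL, the credentials, and the MISMATCHED index names are placeholders to be replaced with the real values from config.yaml and the comparison step:

```shell
#!/bin/sh
# Dry-run sketch: print a DELETE request for each mismatched index.
# All values below are placeholders, not real endpoints or credentials.
ES_URL="http://10.0.0.1:9200"
MISMATCHED="index_a index_b"
for idx in $MISMATCHED; do
  echo curl -u "va_es_user:va_es_password" -X DELETE "$ES_URL/$idx"
done > /tmp/delete_cmds.txt
cat /tmp/delete_cmds.txt
```

Review the printed commands, then remove the echo to execute them.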
- Shut down the Controller in the Virtual Appliance.
kubectl scale deployment controller-deployment -n cisco-controller --replicas=0
- Run the following command to edit the Controller deployment configuration:
kubectl edit deployment controller-deployment -n cisco-controller
- Update the livenessProbe, readinessProbe, and startupProbe values:
livenessProbe:
  failureThreshold: 10
  ...
  periodSeconds: 30
  successThreshold: 1
  timeoutSeconds: 120
readinessProbe:
  failureThreshold: 3
  ...
  initialDelaySeconds: 420
  periodSeconds: 30
  successThreshold: 1
  timeoutSeconds: 120
startupProbe:
  failureThreshold: 300
  ...
  initialDelaySeconds: 420
  periodSeconds: 30
  successThreshold: 1
  timeoutSeconds: 120