Configure the Migration Channel

This topic describes how to configure the migration channel.

To migrate data from Classic On-Premises to the Virtual Appliance, configure the NFS channel that secures the connection and mounts the NFS server on both the Classic On-Premises environment and the Virtual Appliance. This setup enables data transfer between the two systems. Follow these steps to configure the NFS server for data migration:

Ensure that the VA services are stopped. You can check their status using the following command:

CODE
appdcli ping

The status of each VA service appears as Not Installed.

Sample Output:
CODE
+----------------------+---------------+
| Service Endpoint     | Status        |
+======================+===============+
| Controller           | Not Installed |
+----------------------+---------------+
| Events               | Not Installed |
+----------------------+---------------+
| EUM Collector        | Not Installed |
+----------------------+---------------+
| EUM Aggregator       | Not Installed |
+----------------------+---------------+
| EUM Screenshot       | Not Installed |
+----------------------+---------------+
| Synthetic Shepherd   | Not Installed |
+----------------------+---------------+
| Synthetic Scheduler  | Not Installed |
+----------------------+---------------+
| Synthetic Feeder     | Not Installed |
+----------------------+---------------+
| AD/RCA Services      | Not Installed |
+----------------------+---------------+
| SecureApp            | Not Installed |
+----------------------+---------------+
| OTIS                 | Not Installed |
+----------------------+---------------+
| ATD                  | Not Installed |
+----------------------+---------------+
| UIL                  | Not Installed |
+----------------------+---------------+
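Each service must report Not Installed before you proceed. As a quick check, the table output can also be verified programmatically. The following Python sketch is illustrative only (not part of the product tooling) and assumes the bordered table format shown above:

```python
# Sketch: confirm every service row in `appdcli ping` output reports "Not Installed".
# Assumes the bordered table format shown above; adjust the parsing if your build differs.

def all_not_installed(table: str) -> bool:
    """Return True if the Status column of every data row is 'Not Installed'."""
    for line in table.splitlines():
        line = line.strip()
        # Skip border lines and the header row.
        if not line.startswith("|") or "Service Endpoint" in line:
            continue
        service, status = [cell.strip() for cell in line.strip("|").split("|")]
        if status != "Not Installed":
            print(f"Service not stopped: {service} ({status})")
            return False
    return True

sample = """\
| Controller           | Not Installed |
| Events               | Not Installed |
"""
print(all_not_installed(sample))  # True when all services are stopped
```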
  1. Log in to the Virtual Appliance node and edit the globals.yaml.gotmpl file.
    CODE
    cd /var/appd/config
    vi globals.yaml.gotmpl
    1. Disable self-monitoring for the Virtual Appliance by setting the enableClusterAgent parameter to false.
      CODE
      enableClusterAgent: false
    2. Enable data migration for the Elasticsearch datastores by setting the migration.elasticsearch.fs.enabled parameter to true.
      CODE
      # Migration config for datastores
      migration:
        elasticsearch:
          fs:
            enabled: true
    3. Ensure the license.lic file exists in /var/appd/config.
  2. Run the following command to establish a secure connection and mount the NFS server on both the Classic On-Premises environment and the Virtual Appliance.
    CODE
    python3 ./migration_tool.py setup migration-channel
  3. Log in to the Virtual Appliance node and start the Virtual Appliance services:
    CODE
    appdcli start appd <profile>

    If the Virtual Appliance services are already running, restart them using the following commands:

    CODE
    appdcli stop appd
    appdcli start appd <profile>
  4. (Optional) Verify the NFS channel before starting the migration tool.
    CODE
    python3 ./migration_tool.py validate nfs-channel
  5. Ensure that there is no shard count mismatch for Elasticsearch indices between the Classic On-Premises environment and the Virtual Appliance. If the same index name exists in both environments with different shard counts, data migration fails for those indices. In that case, delete the index on the Virtual Appliance.
    1. Run the following command to list the indices and their shard counts from the classic Elasticsearch instance:
      Classic
      CODE
      ssh -o StrictHostKeyChecking=no -i ~/.ssh/id_common_classic_all appdynamics@10.115.85.187 \
        "curl -s 'http://localhost:9200/_cat/indices?h=index,pri&v' -u elastic:elastic_user_password | sort"
      Virtual Appliance
      1. Log in to the Virtual Appliance and run the following command:
        CODE
        kubectl get svc -n es
        Sample output:
        CODE
        NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
        appd-es-http            ClusterIP   10.152.183.210   <none>        9200/TCP   6d19h 
        appd-es-internal-http   ClusterIP   10.152.183.56    <none>        9200/TCP   6d19h
        appd-es-node            ClusterIP   None             <none>        9200/TCP   6d19h
        appd-es-transport       ClusterIP   None             <none>        9300/TCP   6d19h
      2. Run the following command to list the indices and their primary shard counts from the VA Elasticsearch instance:

        CODE
        ssh -o StrictHostKeyChecking=no -i ~/.ssh/id_va_ip1 appduser@10.115.86.89 \
          "curl -s 'http://10.152.183.115:9200/_cat/indices?h=index,pri&v' -u elastic1:appDelastic1@123 | sort"
      Note: Command flag options:
      • _cat/indices - the Elasticsearch cat API for listing indices.
      • h=index,pri - shows only the index name (index) and the primary shard count (pri).
      • v - shows column headers.
      • sort - sorts alphabetically for easy comparison.

    2. Delete the problematic index from the Virtual Appliance (VA).
      CODE
      curl -u username:password -X DELETE "http://10.0.0.1:9200/<index_name>"
      Note: Obtain the username and password for this command from the services.events section of the config.yaml file.
      CODE
      username: va_es_user
      password: va_es_password
      During data migration, the migration tool recreates the deleted indices on the Virtual Appliance with the correct shard configuration from the Classic On-Premises environment.
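Comparing the two shard listings by hand is error-prone. The following Python sketch is illustrative only (the index names and counts are made up); it diffs two `_cat/indices?h=index,pri&v` outputs and lists the indices whose primary shard counts differ, which are the ones to delete on the Virtual Appliance:

```python
# Sketch: find indices whose primary shard counts differ between Classic and VA.
# Input is the text output of `_cat/indices?h=index,pri&v` from each cluster.

def parse_indices(cat_output: str) -> dict:
    """Map index name -> primary shard count, skipping the header row."""
    shards = {}
    for line in cat_output.strip().splitlines():
        parts = line.split()
        if len(parts) != 2 or parts[0] == "index":
            continue
        shards[parts[0]] = int(parts[1])
    return shards

def shard_mismatches(classic_out: str, va_out: str) -> list:
    """Indices present in both clusters whose shard counts differ."""
    classic, va = parse_indices(classic_out), parse_indices(va_out)
    # Only indices present in both environments can conflict during migration.
    return sorted(name for name in classic.keys() & va.keys()
                  if classic[name] != va[name])

classic_sample = "index pri\nanalytics_events 5\nlog_v1 1\n"
va_sample = "index pri\nanalytics_events 1\nlog_v1 1\n"
for name in shard_mismatches(classic_sample, va_sample):
    print(f"delete on VA: {name}")
```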
  6. Shut down the Controller in the Virtual Appliance.
    CODE
    kubectl scale deployment controller-deployment -n cisco-controller --replicas=0
  7. Run the following command to edit the Controller deployment configuration.
    CODE
    kubectl edit deployment controller-deployment -n cisco-controller
  8. Update the livenessProbe, readinessProbe, and startupProbe values.
    CODE
    livenessProbe:
      failureThreshold: 10
      ...
      periodSeconds: 30
      successThreshold: 1
      timeoutSeconds: 120
    
    readinessProbe:
      failureThreshold: 3
      ...
      initialDelaySeconds: 420
      periodSeconds: 30
      successThreshold: 1
      timeoutSeconds: 120
    
    startupProbe:
      failureThreshold: 300
      ...
      initialDelaySeconds: 420
      periodSeconds: 30
      successThreshold: 1
      timeoutSeconds: 120
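As a sanity check on these values: the worst-case time before the kubelet gives up on a probe is initialDelaySeconds + failureThreshold × periodSeconds, so the startupProbe settings above allow the Controller up to 420 + 300 × 30 = 9420 seconds to come up. A minimal sketch of the arithmetic:

```python
# Rough time budget implied by the probe settings above (all values in seconds).

def probe_budget(failure_threshold: int, period: int, initial_delay: int = 0) -> int:
    """Worst-case time before the kubelet treats the probe as failed."""
    return initial_delay + failure_threshold * period

print(probe_budget(300, 30, initial_delay=420))  # startupProbe: 9420
print(probe_budget(10, 30))                      # livenessProbe: 300
```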