Forcing a re-sync of VMAPI

Modified: 28 Apr 2022 01:26 UTC

In rare instances, VMAPI can get out of sync with the actual contents of a compute node. One way this can manifest itself is as an instance showing a state of incomplete.

While this usually clears up automatically, there are times when the situation persists. This procedure explains how to force VMAPI to re-sync its record for a given instance.

Checking VMAPI

Using VMAPI or the Operations Portal, check the state of the instance. In this case, we have an instance named testvm01 which is showing an incomplete state. Because incomplete is a transitory state, the instance should not remain in it. Most likely, this is a failed provision where VMAPI has not been updated properly.

# sdc-vmapi /vms/d112345b17-541c-1234-d6e6-cf15457ffcb4 | json -Ha uuid alias state
d112345b17-541c-1234-d6e6-cf15457ffcb4 testvm01 incomplete

Querying VMAPI with the sdc-vmapi tool above, we see that the instance is currently in a state of incomplete, at least as far as VMAPI knows.
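
If you know the instance's alias but not its UUID, you can usually find it by filtering VMAPI's list endpoint. A minimal sketch, assuming your VMAPI version supports the alias filter on ListVms:

# sdc-vmapi '/vms?alias=testvm01' | json -Ha uuid alias state
d112345b17-541c-1234-d6e6-cf15457ffcb4 testvm01 incomplete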

Checking vmadm
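
Before checking vmadm, you need to know which compute node hosts the instance. One way to find it is to read the server_uuid field from the same VMAPI record, as in this sketch:

# sdc-vmapi /vms/d112345b17-541c-1234-d6e6-cf15457ffcb4 | json -Ha server_uuid
44454c4c-3800-1036-8059-b5c04f395231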

Log into the compute node that hosts the instance and use the vmadm(1m) command to verify the state of the instance on that node:

# vmadm list

UUID                                 TYPE  RAM      STATE             ALIAS
ee53a6fa-9ed4-4f09-a6e5-123456789    OS    1024     running           testvm02
af6ff6e6-2635-cb64-9e7c-987654321    OS    4096     running           testvm03
93ae89f2-9a19-4d12-8bce-8765432456   OS    7168     running           testvm04
b9db9624-b2b5-46fa-8f24-uy76554332   OS    7168     running           testvm05

In the case above, instance testvm01 (uuid: d112345b17-541c-1234-d6e6-cf15457ffcb4) is not present on the compute node; however, it is still showing in VMAPI with a state of incomplete.
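
Rather than scanning the full vmadm list output, you can also ask vmadm for the instance directly. The sketch below uses vmadm lookup with the -1 flag, which returns an error unless exactly one VM matches, so a non-zero exit status here confirms that the instance is absent from this node:

# vmadm lookup -1 uuid=d112345b17-541c-1234-d6e6-cf15457ffcb4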

Synchronizing VMAPI

To correct this, we need to force VMAPI to re-sync its information about that particular instance. To do this, we need to log into the head node and communicate with the VMAPI endpoint. The format of this command is:

sdc-vmapi /vms/<VM UUID>?sync=true

Running the command with our testvm01 instance's UUID results in the following output:

# sdc-vmapi /vms/d112345b17-541c-1234-d6e6-cf15457ffcb4?sync=true

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 3154
Content-MD5: Ny5TwUIsW171zBrXqc3wzg==
Date: Fri, 22 Aug 2014 15:35:11 GMT
Server: VMAPI
x-request-id: e77199f0-2a11-11e4-affd-ab5d44b1809c
x-response-time: 25
x-server-name: 1234567-f56a-4b0c-851c-1ac2143e6972
Connection: keep-alive

{
  "uuid": "d112345b17-541c-1234-d6e6-cf15457ffcb4",
  "alias": "testvm01",
  "autoboot": false,
  "brand": "kvm",
  "billing_id": "567lhre45-2ghf-65gh-56hgf-b56hgfdet567,
  "cpu_cap": 25,
  "cpu_shares": 25,
  "create_timestamp": "2014-08-17T03:24:13.621Z",
  "customer_metadata": {
    "root_authorized_keys": "ssh-rsa ...3r\n"
  },
  "datasets": [],
  "destroyed": "2014-08-22T15:35:11.390Z",
  "firewall_enabled": false,
  "internal_metadata": {},
  "last_modified": "2014-08-17T06:49:49.000Z",
  "limit_priv": "default,-file_link_any,-net_access,-proc_fork,-proc_info,-proc_session",
  "max_locked_memory": 1280,
  "max_lwps": 4000,
  "max_physical_memory": 1280,
  "max_swap": 2048,
  "nics": [
    {
      "interface": "net0",
      "mac": "90:b8:d0:48:6a:a0",
      "vlan_id": 767,
      "nic_tag": "external",
      "gateway": "165.225.154.1",
      "ip": "165.225.155.148",
      "netmask": "255.255.254.0",
      "network_uuid": "1234567-1234-1234-1234-12345555a409c",
      "model": "virtio",
      "primary": true
    },
    {
      "interface": "net1",
      "mac": "90:b8:d0:e0:f5:7a",
      "vlan_id": 862,
      "nic_tag": "internal",
      "gateway": "192.168.24.1",
      "ip": "192.168.30.125",
      "netmask": "255.255.248.0",
      "network_uuid": "1234566-1234d-1234-1234-abcdefb15",
      "model": "virtio"
    }
  ],
  "owner_uuid": "09876543-6543-4567b-34564-a123456f6f53b",
  "platform_buildstamp": "20131218T184706Z",
  "quota": 10,
  "ram": 1024,
  "resolvers": [
    "8.8.8.8",
    "8.8.4.4",
    "216.52.1.1",
    "4.2.2.2"
  "server_uuid": "44454c4c-3800-1036-8059-b5c04f395231",
  "snapshots": [],
  "state": "destroyed",
  "tags": {},
  "zfs_io_priority": 100,
  "zone_state": "destroyed",
  "zpool": "zones",
  "package_name": "g3-standard-1-kvm",
  "package_version": "1.0.0",
  "vcpus": 1,
  "cpu_type": "host",
  "disks": [
      "path": "/dev/zvol/rdsk/zones/d112345b17-541c-1234-d6e6-cf15457ffcb4-disk0",
      "boot": false,
      "media": "disk",
      "image_size": 16384,
      "image_uuid": "9c4003c0-7c6a-6d12-acb3-813e0b087946",
      "image_name": "base-aberrant",
      "zfs_filesystem": "zones/d112345b17-541c-1234-d6e6-cf15457ffcb4-disk0",
      "zpool": "zones",
      "size": 16384,
      "compression": "off",
      "refreservation": 16384,
      "block_size": 8192
      "path": "/dev/zvol/rdsk/zones/d112345b17-541c-1234-d6e6-cf15457ffcb4-disk1",
      "size": 33792,
      "zfs_filesystem": "zones/d112345b17-541c-1234-d6e6-cf15457ffcb4-disk1",
      "refreservation": 0,
  ]
}
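
The full response body can be lengthy. If you only care about the resulting state, you can pipe the response through the same json tool used earlier; its -H flag drops the HTTP header block. The URL is quoted here so the shell does not treat the ? as a glob character:

# sdc-vmapi '/vms/d112345b17-541c-1234-d6e6-cf15457ffcb4?sync=true' | json -H state
destroyed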

Verifying the state change

At this point, we can now verify the state of the instance via the VMAPI endpoint:

# sdc-vmapi /vms/d112345b17-541c-1234-d6e6-cf15457ffcb4 | json -Ha uuid alias state
d112345b17-541c-1234-d6e6-cf15457ffcb4 testvm01 destroyed

As you can see, the output above is now consistent with what vmadm reported: the testvm01 instance is showing as destroyed, which means it does not exist in any form on the compute node.
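
If several instances are out of sync, the same call can be scripted. A minimal sketch, assuming a hypothetical file uuids.txt containing one VM UUID per line:

#!/bin/bash
# Force a VMAPI re-sync for each VM listed in uuids.txt,
# then print the resulting state of each one.
while read -r uuid; do
  sdc-vmapi "/vms/${uuid}?sync=true" > /dev/null
  sdc-vmapi "/vms/${uuid}" | json -Ha uuid alias state
done < uuids.txt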