...

  • modify the IDL files
  • re-stage the IDL files (for example via ./bb -cforce_rebuild xenclient-idl && ./bb xenclient-idl)
  • regenerate boilerplate (./bb -cconfigure xenmgr)
  • compile xenmgr (./bb -ccompile xenmgr), fix errors, add implementations for new methods, etc.

VM templates

There is a set of JSON templates in the "templates" subfolder; on the device they are installed to /usr/share/xenmgr-1.0/templates. There are basically two types of templates:

  • service VM templates, distinguished by the "service-" prefix in the file name. On each boot, the toolstack creates a VM instance based on each of these (visible via the xec-vm control utility). If the same instance was already present, its config is overwritten on each boot, which generally makes for easy upgrades of service VM config files (default NDVM or UIVM)

  • all other templates. VM instances based on these are only created on manual request (either from the UIVM creation wizard, or via one of the xec create-vm-* functions)

It's sometimes handy to turn off the automatic overwriting of service VM settings on each xenmgr start, so that the service VM can be reconfigured via manual xec-vm invocations. This can be achieved via

$ db-write /overwrite-<vmtype>-settings false

for example for NDVM: db-write /overwrite-ndvm-settings false

VM configuration

JSON VM configuration files are stored in /config/vms/, though xenmgr doesn't access them directly but rather through dbd's dbus API. Low-level configuration files consumed by xenvm (and output by xenmgr) reside in /tmp/xenmgr-xenvm-$VM_UUID.

VM properties

VM properties are declared in idl.git, in interfaces/xenmgr_vm.xml. Most of them are documented. Any additional documentation should go into the IDL files, since that is also the place from which the SDK docs are generated.

Dodgy VM properties

There are some VM properties which have non-trivial repercussions and warrant additional clarification:

  • xci-cpuid-signature

Setting it to true changes the CPUID signature reported by Xen to a XenClient-specific one, effectively hiding Xen's presence from the guest. We use it on Linux VMs to prevent the default PV drivers in upstream kernels from activating (so that we can use custom ones). Toggling it off enables the use of upstream drivers, if needed.

  • flask-label

This specifies the SELinux security context under which the VM runs. If mismatched, it can prevent the VM from issuing the hypercalls required for its normal function.

  • provides-network-backend

Tells xenmgr to treat this VM as a network VM; xenmgr will send notifications about the VM and its state to the network daemon.

  • greedy-pciback-bind

This activates greedy seizing, by the pciback driver at OpenXT boot, of any device configured for PCI passthrough by any VM. This behavior differs from upstream, but it is useful for preventing Dom0 drivers from claiming devices which might later be used for passthrough.

The audio driver is a good example: if the audio card is not seized by the pciback driver at boot, the ALSA drivers in Dom0 will take it over, which might prevent passthrough from working correctly later when the PCI passthrough VM is started.

VM hooks

xenmgr supports offloading some of its functionality to either a script or another dbus daemon (possibly running in a different VM). This is done via the following VM hooks (implemented as regular VM properties):

  • run-post-create
  • run-pre-delete
  • run-pre-boot
  • run-insteadof-start
  • run-on-state-change
  • run-on-acpi-state-change

Each hook can be either a script name, for example:

$ xec-vm -n foo set run-insteadof-start /bin/customstart

or a string specifying the dbus RPC call to make, for example:

$ xec-vm -n foo set run-insteadof-start rpc:vm=$SOME_UUID,destination=com.citrix.some.service,interface=com.citrix.iface,member=com.citrix.some.method

Both the script and the RPC call are passed the affected VM's UUID as their sole argument.
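As a concrete illustration, a hook handler obeying that contract could look like the sketch below. The function name, UUID value, and log message are invented for the example; the only behavior taken from the source is that the UUID arrives as the sole argument:

```shell
#!/bin/sh
# Hypothetical hook handler: xenmgr invokes the configured script with the
# affected VM's UUID as its sole argument; we model that contract here.
on_state_change() {
    uuid="$1"
    [ -n "$uuid" ] || { echo "missing vm uuid" >&2; return 1; }
    # React to the event; here we just report which VM changed state.
    echo "state change hook fired for VM $uuid"
}

on_state_change "00000000-0000-4000-8000-000000000001"
```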

PCI passthrough rules

VM passthrough rules are specified by a set of matchers, which are evaluated when the VM starts to find the actual PCI devices that need to be passed through.

VM PCI passthrough rules are managed by the following dbus methods:

  • add_pt_rule - adds a new passthrough rule. Each matcher can take either an ID or the word "any", which matches all devices. Example:

    xec-vm -n somevm add-pt-rule 0x680 any any

  • add_pt_rule_bdf - adds a new passthrough rule using BDF notation. Example:

    xec-vm -n somevm add-pt-rule-bdf 0000:00:16.0

  • delete_pt_rule, delete_pt_rule_bdf - remove passthrough rules

  • list_pt_rules - lists the current passthrough rules
  • list_pt_pci_devices - evaluates all VM passthrough rules and outputs the list of matching PCI devices on the host
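To make the matcher semantics concrete, here is a small sketch of how one class/vendor/device rule is evaluated against one device, with "any" matching every value. The pt_match helper and the device IDs are hypothetical, not xenmgr code:

```shell
# Hypothetical evaluator for one passthrough rule against one PCI device.
# Arguments: rule class, rule vendor, rule device, then the device's
# actual class, vendor and device IDs; "any" in a rule field matches all.
pt_match() {
    r_class=$1; r_vendor=$2; r_device=$3
    d_class=$4; d_vendor=$5; d_device=$6
    { [ "$r_class"  = any ] || [ "$r_class"  = "$d_class"  ]; } &&
    { [ "$r_vendor" = any ] || [ "$r_vendor" = "$d_vendor" ]; } &&
    { [ "$r_device" = any ] || [ "$r_device" = "$d_device" ]; }
}

# The rule "0x680 any any" from the example matches any device of class 0x680.
pt_match 0x680 any any 0x680 0x8086 0x1e10 && echo "matches"
```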

V4V firewall rules

V4V firewall rules are managed by a dbus API, similarly to PCI passthrough rules, and are evaluated when the VM starts. They usually result in a series of calls to the "viptables" command-line program to erect the firewall. On VM shutdown, the entries added during VM start are torn down.

V4V firewall rules can be modified via add_v4v_firewall_rule / delete_v4v_firewall_rule methods. These take a string argument with a rule definition. The format of rule string is as follows:

<source> -> <destination>

the format of each of the source/destination endpoints is

( * | my-stubdom | myself | dom-type=<domaintype> | dom-name=<domainname> ) : <v4v port>

examples:

  • open a connection from the VM to domain 0's port 5555: myself -> 0:5555
  • open a connection from the VM to port 5555 of all domains of type "ndvm": myself -> dom-type=ndvm:5555
  • open a connection from the VM's stubdom to domain 0's port 333: my-stubdom -> 0:333
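A rough sanity check of the rule grammar above can be expressed with a regular expression. This validator is illustrative only; the real parser lives in xenmgr's v4v firewall code (Vm/V4VFirewall.hs) and may accept a slightly different grammar:

```shell
# Illustrative validator for "<source> -> <destination>" v4v rule strings,
# where each endpoint is (* | my-stubdom | myself | dom-type=<t> |
# dom-name=<n> | <domid>), optionally followed by ":<port>".
is_endpoint() {
    echo "$1" | grep -Eq '^(\*|my-stubdom|myself|dom-type=[A-Za-z0-9_-]+|dom-name=[A-Za-z0-9_-]+|[0-9]+)(:[0-9]+)?$'
}

valid_rule() {
    # Split on "->" and strip spaces from each side.
    src=$(echo "${1%%->*}" | tr -d ' ')
    dst=$(echo "${1#*->}"  | tr -d ' ')
    is_endpoint "$src" && is_endpoint "$dst"
}

valid_rule "myself -> dom-type=ndvm:5555" && echo "rule ok"
```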

VM measurement

If the "measured" property in the VM config tree is set, the first disk (ID 0) is evaluated for checksum consistency and its hash is stored in the VM config tree. This is done by mounting the filesystem and computing a SHA-256 hash of its contents (as opposed to hashing the VHD file, because even read-only VHD files are modified due to bookkeeping information such as access timestamps).
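The idea of hashing filesystem contents rather than the container can be sketched as follows. The directory layout and the deterministic file ordering are invented for illustration; the real measurement code mounts the VM's first disk:

```shell
# Sketch: compute a stable SHA-256 over a filesystem's *contents*, so that
# container-level bookkeeping (e.g. VHD access timestamps) cannot change it.
fsroot=$(mktemp -d)
echo "hello" > "$fsroot/a"
mkdir -p "$fsroot/sub"
echo "world" > "$fsroot/sub/b"

# Walk regular files in a deterministic order and hash their concatenation.
fs_hash=$(cd "$fsroot" && find . -type f | sort | xargs cat | sha256sum | cut -d' ' -f1)
echo "$fs_hash"
```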

If a hash inconsistency is detected, a measurement failure action is invoked. It defaults to a forced shutdown of the system and can be overridden via

$ db-write /xenmgr/measure-fail-action <powermanagementaction>

where <powermanagementaction> can be one of the following:

  • sleep
  • hibernate
  • shutdown
  • forced-shutdown
  • reboot
  • nothing

and, as mentioned above, defaults to forced shutdown if not specified.

VM dependencies

xenmgr supports a simple form of dependency tracking between VMs. If a VM is configured such that its network backend is not in Dom0, xenmgr will internally track that dependency. It's possible to list all of a VM's dependencies via

$ xec-vm -n somevm list-dependencies

By default, when a VM is started, xenmgr will ensure all its dependencies are started first. This can be toggled off by setting the "track-dependencies" VM property to false.

There's no automatic shutdown of dependent VMs.
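Starting dependencies first is a topological-sort problem; tsort(1) illustrates the ordering on a hypothetical edge list, where each line reads "dependency dependent" (the NDVM must be up before the guests that use its network backend):

```shell
# Hypothetical dependency edges: ndvm must start before guest1 and guest2.
# tsort emits a start order consistent with those constraints.
order=$(printf '%s\n' "ndvm guest1" "ndvm guest2" | tsort)
echo "$order"
```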

Power Management

Xenmgr supports some configuration of power-related operations:

  • vm property "s3-mode"

This configures how the toolstack handles requests to put a VM into S3. Note that this doesn't affect requests made from within the guest, only requests originating from the UI, closing the laptop lid, etc. It can be one of the following:

  1. ignore - the request to put the VM into S3 is ignored; the toolstack proceeds to the next VM on the list
  2. PV - the toolstack asks the PV driver within the guest to put it into S3 via xenstore. This is the default for most VMs
  3. restart - the toolstack shuts the VM down and starts it again after S3 resume. This is useful for the NDVM.

  • vm property "s4-mode"

Supports exactly the same configuration options as "s3-mode", but specifies the actions taken when the toolstack is supposed to put the VM into hibernation.

  • vm property "control-platform-power-state"

This is useful for some single-VM scenarios. The toolstack tracks the guest power state and tries to put the host into the same state; if the guest goes to S3, the toolstack puts the host into S3 as well.

  • host methods "set_ac_lid_close_action", "set_battery_lid_close_action"

Configure the power action performed when the laptop lid closes, either on battery or on the AC adapter. Can be one of the following:

  • sleep
  • hibernate
  • shutdown
  • forced-shutdown
  • reboot
  • nothing

Code layout

Xenmgr is split into a small library (xenmgr-core) and the main daemon (xenmgr). xenmgr-core currently contains very little, primarily a bit of v4v firewall rule parsing code. The intent was to move code useful for other projects (such as the OVF import tool) out of the xenmgr daemon and into the library.

Custom Libraries Used

There are a number of small Haskell libraries we wrote which are used by toolstack components. They reside in xclibs.git.

  • udbus - Vincent's small dbus library
  • xchv4v - v4v access library
  • xch-rpc - higher-level RPC library based on udbus, also with support for v4v RPC tunnels
  • xchdb - access to database files in /config, through the database daemon
  • xchutils - various useful utility code
  • xchwebsocket - websocket library used by rpc-proxy

xenmgr-core

  • Vm/Uuid.hs - a few functions for handling UUIDs
  • Vm/ProductProperty.hs - handling of OVF product properties (not used much at the moment, besides theoretically for VMs defined by OVF XML)
  • Vm/V4VFirewall.hs - parsing of v4v firewall rules

xenmgr

  • Rpc/Autogen - folder containing all rpc access stubs generated by rpcgen (invoked from bb recipe)
  • Tools/* - various utility code

Vm subtree

  • Vm/Types.hs - definition of important types, such as definition of vm config and vm states
  • Vm/Monad.hs - definition of the "Vm" monad, a simple reader-based monad that implicitly holds the context for a single VM
  • Vm/Config.hs - definition of database location for all vm config properties and code to create lower-level xenvm config files
  • Vm/State.hs - VM state conversions from/to string, as well as the concept of an internal VM state, which is private to xenmgr and more detailed than the generic state exposed by xenvm
  • Vm/Dm.hs, DmTypes.hs - types relating to device model, such as disks, nics, xenstore device states, handling backend/frontend interactions
  • Vm/DomainCore.hs - handling domids/lookup of domains and their stubdoms
  • Vm/Balloon.hs - ballooning service VM memory down (if allowed by the VM config, which it usually isn't)
  • Vm/Pci*.hs - PCI passthrough - parsing pci passthrough rules, handling of binding drivers to pciback
  • Vm/DepGraph.hs - a few graph utilities to solve the dependency graph between VMs (even though it usually boils down to 2-node graphs)
  • Vm/Policies.hs - storage and query of vm policy settings
  • Vm/Monitor.hs - code for monitoring vm events from various sources (xenvm and xenstore) and registering/invoking internal handlers
  • Vm/React.hs - event handlers for vm events coming from Monitor.hs
  • Vm/Templates.hs - finding, categorizing, reading vm and service vm templates (in /usr/share/xenmgr-1.0/templates)
  • Vm/Queries.hs - many passive functions for querying VM states/config. Of particular interest is "getVmConfig", which creates an in-memory config representation based on the database
  • Vm/Actions.hs - active functions for changing VM config, as well as some runtime state manipulation (such as relocating the network backend to a different NDVM), starting/stopping VMs, etc.

XenMgr subtree

  • XenMgr/Connect/* - wrappers to access other daemons in the system
  • XenMgr/Expose/* - entry points for all xenmgr's dbus server rpcs
  • XenMgr/CdLock.hs - relatively new code for handling the AFRL-requested CD drive lock model
  • XenMgr/Config.hs - global xenmgr config storage/query
  • XenMgr/Diagnostics.hs - gathering status reports from vms + other diagnostics
  • XenMgr/Diskmgr.hs - vhd creation
  • XenMgr/Errors.hs - definition of numbered errors reported to the UI
  • XenMgr/Host.hs - lots of host-level query functions (eth0 MAC addresses, BIOS versions, XC versions, update state, etc.)
  • XenMgr/HostOps.hs - host shutdown/sleep/hibernate/reboot entry points
  • XenMgr/PowerManagement.hs - actual implementation of host shutdown/sleep/hibernate/reboot etc plus code to handle lid state changes
  • XenMgr/Notify.hs - wrappers for easier generation of various dbus signals
  • XenMgr/Rpc.hs - definition of Rpc monad used in xenmgr for dbus access
  • XenMgr/XM.hs - definition of the XM monad, a reader-based monad containing the context for all VMs. Useful for cross-VM interactions which require locking/synchronization

Interactions with other daemons

xenmgr interacts with the following daemons on the system:

  • database daemon (dbd) for config storage. Via DBUS only.
  • xenvm for all low-level VM operations / domain creation / shutdown etc. Via DBUS and textual VM config files in /tmp/xenmgr-xenvm-$UUID.
  • input daemon for handling the on-boot authentication screen, VM focus switching, screen lock, and seamless mouse configuration. Via DBUS.
  • network daemon, to notify it about the state of VMs marked with the "provides-network-backend" property. Via DBUS (over v4v into the network domain).
  • surfman, to query it for the list of passthrough GPU devices when using PVMs (the "get_vgpu_mode" surfman RPC). Via DBUS.

apptool

Apptool is an OVF (Open Virtualization Format) import tool. More info at http://wiki.cam.xci-test.com/index.php/Ovf (TODO: that is a bad link - maybe make the page public?)