Toolstack - Haskell/OCaml

xec / xec-vm

These are small Haskell programs. Xec is a generic dbus access utility, similar to dbus-send; it can be used to communicate with any dbus daemon, though it defaults to communicating with xenmgr. Xec-vm is a wrapper around xec which is vm aware. Both have decent "--help" output, so their options are not described further here.
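For illustration, querying a VM property from dom0 could look like the following (a sketch only; the "get" verb and the "state" property name are assumptions here, consult "--help" and the IDL files for the authoritative options):

xec-vm -n myvm get state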

Database daemon

Database daemon (dbd) is a small OCaml/dbus application which handles persistence of xenclient configuration data. The database files are stored in /config/db and /config/vms.

Dbd reads the config files (which are in json format) at startup and creates an in-memory representation of the database, which is primarily a simple tree of strings similar to xenstore. From then on it works on the in-memory representation only, marking it dirty on modification. Dirty trees are flushed to disk at 3-second intervals. Dbd protects against power cuts and partial writes: it always flushes to a temporary file first, then atomically renames the temporary file over the real one.

API presented by dbd

    +  read                         ( path:s, OUT value:s )

reads a string value at given path

    +  read-binary                  ( path:s, OUT value:ay )

reads byte array value at given path

    +  write                        ( path:s, value:s )

writes string value to given path

    +  dump                         ( path:s, OUT value:s )

dumps whole json subtree at given path

    +  inject                       ( path:s, value:s )

writes whole json subtree to given path

    +  list                         ( path:s, OUT value:as )

lists child nodes at given path

    +  rm                           ( path:s )

removes subtree at given path

    +  exists                       ( path:s, OUT ex:b )

checks if a subtree/value at given path exists
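For illustration, the API can be exercised directly with dbus-send from dom0. A minimal sketch, assuming dbd is reachable on the system bus under the destination and interface name com.citrix.xenclient.db (the name used in the rpc-proxy rules later in this document) and exposes its methods on the object path /:

dbus-send --system --print-reply --dest=com.citrix.xenclient.db / com.citrix.xenclient.db.read string:"/some/path"
dbus-send --system --print-reply --dest=com.citrix.xenclient.db / com.citrix.xenclient.db.write string:"/some/path" string:"some value"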

RPC proxy

RPC proxy is a Haskell application which supports proxying and filtering of the RPC traffic used in xenclient. The proxied traffic consists either of binary dbus (for most use cases) or json/websocket based dbus (for conversations with the web browser in the uivm) and is transported over unix domain sockets or v4v sockets.

There are 3 instances of rpc-proxy running by default in dom0, each with a different function:

/usr/bin/rpc-proxy -s

This one forwards data between the default incoming channel (v4v port 5555) and the default outgoing channel (unix socket /var/run/dbus/system_bus_socket). The effect is that any VM can connect to v4v port 5555 of domain 0 and get access to its system bus, in a similar fashion to what is possible locally within a vm by connecting to the unix socket. V4V port 5555 is exposed only to reasonably trusted service vms, such as the ndvm/syncvm, since it is possible to export new services on the dom0 bus by connecting to it. Rpc-proxy decides whether to forward or drop messages based on the default rules file /etc/rpc-proxy.rules.

/usr/bin/rpc-proxy -i v4v:5556 -n com.citrix.xenclient.guest.uuid_$UUID --translate-anonymous-dests

This one is similar to the one above, however slightly more "secure" and therefore exposed to user vms as well. It forwards data between v4v port 5556 and the default outgoing channel (unix socket /var/run/dbus/system_bus_socket). The "-n com.citrix.xenclient.guest.uuid_$UUID" argument causes any attempt to export a named service on dom0 to force a rename of the service to com.citrix.xenclient.guest.uuid_$UUID. Therefore a guest connecting to this port can only export one service. This has been useful in the past for implementing an agent in linux vms exporting guest power management operations, but I believe this has since been replaced by xenstore/pv driver based functionality.

/usr/bin/rpc-proxy -i v4v:8080 --json-in --websockets-in -n com.citrix.xenclient.guest.uuid_$UUID --auto-auth

This one listens on v4v port 8080 and forwards to the default outgoing channel (unix socket /var/run/dbus/system_bus_socket). It expects incoming data in json format (--json-in) wrapped in the websocket protocol (--websockets-in). It limits the ability to export service names on the dom0 bus via "-n com.citrix.xenclient.guest.uuid_$UUID". This is used for communication with the browser used in the uivm. "--auto-auth" is just a shortcut to avoid the browser having to do dbus authentication; rpc-proxy does it on its behalf (we don't use dbus authentication for any security purposes, all it does is state that we are connecting as the root user).

RPC rules format

The default rpc rules are stored in the /etc/rpc-proxy.rules file. Extra rules can be attached to vm config trees and will take effect when the VM is started and be torn down when the VM is stopped. The format of the rules is relatively straightforward and best shown by example:

# nothing can be done by default
deny all
# allow stubdoms to talk to surfman,xenmgr,dbus
allow stubdom true destination com.citrix.xenclient.surfman
allow stubdom true destination com.citrix.xenclient.xenmgr
allow stubdom true destination org.freedesktop.DBus interface org.freedesktop.DBus
# allow guests to call 'gather' on diagnostics interface (required by xc-diag)
allow destination com.citrix.xenclient.xenmgr interface com.citrix.xenclient.xenmgr.diag member gather
# allow anybody to do some vm queries required for switcher bar
allow destination com.citrix.xenclient.xenmgr interface org.freedesktop.DBus.Properties member Get
allow destination com.citrix.xenclient.xenmgr interface com.citrix.xenclient.xenmgr member list_vms
allow destination com.citrix.xenclient.xenmgr interface com.citrix.xenclient.xenmgr.vm member get_db_key
allow destination com.citrix.xenclient.xenmgr interface com.citrix.xenclient.xenmgr.vm member read_icon
allow destination com.citrix.xenclient.xenmgr interface com.citrix.xenclient.xenmgr.vm member switch
allow destination com.citrix.xenclient.input interface com.citrix.xenclient.input member get_focus_domid
allow destination com.citrix.xenclient.xenmgr interface com.citrix.xenclient.xenmgr member find_vm_by_domid
# allow guest to do some requests
allow destination com.citrix.xenclient.xenmgr interface com.citrix.xenclient.xenmgr.guestreq member request_attention
# allow conditional domstore (private db space) access
allow destination com.citrix.xenclient.db interface com.citrix.xenclient.db member read if-boolean domstore-read-access true
allow destination com.citrix.xenclient.db interface com.citrix.xenclient.db member read_binary if-boolean domstore-read-access true
allow destination com.citrix.xenclient.db interface com.citrix.xenclient.db member list if-boolean domstore-read-access true
allow destination com.citrix.xenclient.db interface com.citrix.xenclient.db member exists if-boolean domstore-read-access true
allow destination com.citrix.xenclient.db interface com.citrix.xenclient.db member write if-boolean domstore-write-access true
allow destination com.citrix.xenclient.db interface com.citrix.xenclient.db member rm if-boolean domstore-write-access true
  • Only allow/deny rules are supported. The rule type specification is followed by a dbus message matcher, where "destination <name>", "interface <name>" and "member <name>" can be specified to match on the corresponding fields of the incoming dbus message.
  • stubdom rules

Of some interest are the stubdom rules marked as "stubdom true". They only match on messages coming from stub domains.

  • rules based on configuration of vm sending the message

It is also possible to make rules based on any boolean field in the sending vm's config tree. In the example above, "if-boolean domstore-read-access true" matches only if the VM which sent the message has a "domstore-read-access" boolean config set to true in its tree. Therefore it is possible to disable/enable a VM's domstore access (which boils down to allowing it to access dbd remotely) simply by manipulating its config tree (see the example at the end of this section).

  • rules based on domain type

Sometimes it's useful to grant an rpc permission to all vms of a particular type (such as "ndvm" or "syncvm"). This can be achieved by adding a "dom-type <type>" matcher to the rule, for example:

allow dom-type syncvm destination com.citrix.xenclient.xenmgr interface com.citrix.xenclient.xenmgr.vm member add_disk
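Returning to the if-boolean matchers above, domstore access for a VM could be toggled with the db-write helper used elsewhere in this document (a sketch only; the per-vm config tree is assumed here to live under /vm/$VM_UUID in the database, which should be verified against the actual db layout):

db-write /vm/$VM_UUID/domstore-read-access true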

xenvm

xenvm is a single virtual machine monitor, written in OCaml. It is forked on a per-vm basis and is responsible for that VM's lifecycle and control operations. It interacts directly with Xen via a libxenctrl wrapper. You can find it in toolstack.git.

Which layers of xenvm have state? If xenvm is killed should the domain die? Can xenvm be restarted while a domain continues running? Should xenvm instances exist for VMs which are not running? - Dickon

Tomaszw: Xenvm is quite a stateful daemon:

  • upper layers hold the vm config state in memory
  • upper layers hold the vm state in memory

I believe the lower layers (such as xenops / the device layer) primarily rely on state stored in xenstore, mostly by pv drivers, and don't hold in-memory state (though they do occasionally write, and later read, some small extra state bits to xenstore).

It does run even if the VM is not running, providing configurability of the VM in its "stopped" state. However, as a boot performance optimization, it's only started for the first time when the first request to start the VM it handles arrives; it then continues to run until system shutdown (or VM deletion). Xenmgr copes with this by delaying the initial configuration.

The VM state reported by xenvm is partially taken from the domain state reported by xen and partially figured out by xenvm itself. Xen doesn't report exact domain states when it notifies the dom0 toolstack about a state change (via event channel); it basically just signals that "something has happened", and it is xenvm's job to check whether each VM is still running, has died, etc., update its internal state accordingly, and then forward it to the upper layers of the toolstack, that is xenmgr.

Xenvm cannot be restarted while a domain is running; that is a limitation of how it's coded at the moment (stateful). It can be restarted when the domain is dead, but it does not (and should not) automatically quit when the domain dies.

Code Layout

Libraries

  • libs/common - various utilities
  • libs/eventchn - binding for libxenctrl's eventchannel interfaces
  • libs/json - json parser
  • libs/stdext - utility extensions for OCaml's standard library
  • libs/uuid - uuid generation and handling
  • libs/xc - libxenctrl binding
  • libs/xg - libxenguest binding
  • libs/xs - native ocaml xenstore protocol implementation

Scripts

There are a few udev scripts in the "scripts" subfolder, as well as the udev rules for executing these scripts. They handle the notifications to the toolstack that the backend pv drivers have completed creating network/disk devices, or torn them down.

Xenops

Xenops is the lower-layer part of xenvm, responsible for lower level management of xen domains (via domain ids). It is used internally by xenvm and is also exposed to the user via the "xenops" dom0 utility.

  • xenops/balloon.ml - memory ballooning utilities, not used much in xenclient
  • xenops/device.ml and device_common.ml - important files which are responsible for initialization of pv (or PCI passthrough) device backends (via xenstore)
  • xenops/dm.ml - config construction for qemu
  • xenops/dmagent.ml - communication with dmagent, which is a program used to fork/configure qemu instances (which can be running in another domain)
  • xenops/domain[_common].ml - domain management functions
  • xenops/hotplug.ml - a few utility functions to wait for devices being created by the pv backend, etc.
  • xenops/memory.ml - crazy arithmetic to figure out how much memory is required to boot a VM
  • xenops/netman.ml - a couple of network helper functions
  • xenops/watch.ml - xenstore watch helper functions
  • xenops/xal.ml - low level loop waiting for and parsing pv device events

Xenvm

  • xenvm/misc.ml - as the name says
  • xenvm/tasks.ml - list of rpc tasks xenvm supports
  • xenvm/vmact.ml - high level implementation of vm operations (start/stop etc)
  • xenvm/vmconfig.ml - parsing xenvm config files (in xenclient, placed in /tmp/xenmgr-xenvm-*)
  • xenvm/vmstate.ml - vm state struct
  • xenvm/xenops.ml - xenops dom0 utility entry point
  • xenvm/xenvm.ml - daemon entry point

xenmgr

xenmgr is a Haskell application which exports VM configuration over DBus and translates it into the lower level configuration files consumed by xenvm. Since xenvm is a single virtual machine monitor, xenmgr is responsible for some cross-vm concepts, such as relocating the network pv backend on backend domain reboot, vm dependencies, enforcing cross-vm v4v firewall rules etc.

Please detail the concurrency model (e.g. thread per dbus connection, reactor or whatever). What are the consequences of killing and restarting xenmgr? What normally launches xenmgr? - Dickon

Tomaszw: xenmgr is started by the "bootage" dom0 program, similarly to many other dom0 daemons (configured in /etc/bootage.conf). Xenmgr is fully restartable, since it keeps almost no in-memory vm state. The only consequence is a temporary DoS on its functionality, as well as the possibility of some startup code executing again, which includes for example locking the UIVM with the authentication screen, or performing the boot-time service vm filesystem checksum.

Each incoming RPC call to xenmgr is processed in parallel (in a so-called Haskell IO thread). Because most of the state is kept outside of xenmgr (either in the db or in xenstore), not much synchronisation is needed. Still, because dbd doesn't support transactions, some db writes need to be protected by locking, which xenmgr does.

Unlike incoming calls, incoming notifications are processed serially on a separate IO thread. This is because the ordering of notifications is important (and guaranteed by dbus, therefore we have to process them serially to keep that guarantee). As a consequence, long running notification handlers should fork off a thread so as not to block the queue.

Exported DBus entities

xenmgr exports the following dbus objects:

  • / - root object, contains global configuration and operations
  • /vm/$VM_UUID - per vm dbus object, provides access to vm configuration
  • /vm/$VM_UUID/$DISK_ID - provides access to vm disk configuration
  • /vm/$VM_UUID/$NIC_ID - provides access to vm network interface configuration
  • /host - provides access to host specific configuration and host information

Each of these objects exports one or more dbus interfaces, as defined in the IDL repository (idl.git). The following IDL files are used by xenmgr:

  • xenmgr.xml
  • xenmgr_vm.xml
  • xenmgr_host.xml
  • vm_nic.xml
  • vm_disk.xml

DBus interface boilerplate generation

Most of the dbus boilerplate code, such as the hooks for implementing exported functions as well as the stubs for calling other daemons, is generated from the files in idl.git by a custom-written program, "rpcgen". It takes dbus xml files as input and produces bindings/hooks in a variety of languages as output.

In the case of xenmgr the boilerplate is generated from its BB recipe, in the configure step:

   # generate rpc stubs
   mkdir -p Rpc/Autogen
   # Server objects
   xc-rpcgen --haskell -s -o Rpc/Autogen --module-prefix=Rpc.Autogen ${STAGING_DATADIR}/idl/xenmgr.xml
   xc-rpcgen --haskell -s -o Rpc/Autogen --module-prefix=Rpc.Autogen ${STAGING_DATADIR}/idl/xenmgr_vm.xml

Hence a modification of the IDL usually consists of the following steps:

  • modify IDL files
  • restage IDL files (for example by ./bb -cforce_rebuild xenclient-idl && ./bb xenclient-idl)
  • regenerate boilerplate (./bb -cconfigure xenmgr)
  • compile xenmgr (./bb -ccompile xenmgr), fix errors, add implementation for new methods etc

VM templates

There's a bunch of json templates in the "templates" subfolder; on the device they are installed in /usr/share/xenmgr-1.0/templates. There are basically two types of templates:

  • service vm templates, distinguished by the "service-" prefix in the filename. On each boot, the toolstack will create a VM instance based on each of these (visible via the xec-vm control utility). If the instance was already present, its config will be overwritten on each boot, which generally makes for easy upgrades of the service vm config files (default ndvm or uivm)
  • all other templates. VM instances based on these will only be created by manual request (either from the UI vm creation wizard, or one of the xec create-vm-* functions)

It's sometimes handy to turn off the automatic overwrite of service vm settings on each xenmgr start, so that the service vm can be reconfigured via manual xec-vm invocations. This can be achieved via

db-write /overwrite-<vmtype>-settings false

for example for ndvm: db-write /overwrite-ndvm-settings false

VM configuration

The JSON vm configuration files are stored in /config/vms/, though xenmgr doesn't access them directly but rather through dbd's dbus API. The low level configuration files consumed by xenvm (which are output by xenmgr) reside in /tmp/xenmgr-xenvm-$VM_UUID files.
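To inspect what xenmgr actually hands to xenvm for a given VM, it's enough to look at the generated file (the uuid is a placeholder):

cat /tmp/xenmgr-xenvm-$VM_UUID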

VM properties

VM properties are declared in idl.git, in interfaces/xenmgr_vm.xml. Most of them are documented, so I won't redo that here. Any additional documentation should go into the IDL files since that's also the place from which the sdk docs are generated.

Dodgy VM properties

There are some vm properties which have non-trivial repercussions and warrant additional clarification (see the example at the end of this section):

  • xci-cpuid-signature

Setting it to true changes the CPUID signature reported by xen to a xenclient-specific one, effectively hiding xen's presence from the guest. We use it on linux VMs to prevent the default PV drivers in upstream kernels from activating (so that we can use custom ones). Toggling it off enables the use of the upstream drivers, if needed.

  • flask-label

This specifies the selinux security context under which the VM runs; if mismatched, it can prevent the VM from issuing the hypercalls required for its normal function.

  • provides-network-backend

Tells xenmgr to treat this vm as a network VM; xenmgr will send notifications about that VM's state to the network daemon.

  • greedy-pciback-bind

This causes the pciback driver to greedily seize, at xenclient boot, any device that is configured for PCI passthrough by any vm. This behaviour differs from the upstream one, however it is useful to prevent dom0 drivers from grabbing devices which might later be used for passthrough.

The audio driver is a good example: if the audio card is not seized by the pciback driver at boot, the alsa drivers in dom0 will take it over, which might prevent passthrough from working correctly later when the PCI passthrough vm is started.
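For illustration, these properties are set like any other vm property using the xec-vm set syntax used elsewhere in this document (the vm names are placeholders):

xec-vm -n somevm set xci-cpuid-signature true
xec-vm -n somevm set greedy-pciback-bind false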

VM hooks

xenmgr supports offloading some of its functionality to either a script or another dbus daemon (possibly running in a different vm). This is done via the following VM hooks (implemented as regular vm properties):

  • run-post-create
  • run-pre-delete
  • run-pre-boot
  • run-insteadof-start
  • run-on-state-change
  • run-on-acpi-state-change

Each hook can be either a script path, for example:

xec-vm -n foo set run-insteadof-start /bin/customstart

or a string specifying a dbus rpc call to make, for example:

xec-vm -n foo set run-insteadof-start rpc:vm=$SOME_UUID,destination=com.citrix.some.service,interface=com.citrix.iface,member=com.citrix.some.method

Both the script and the rpc call get passed the affected VM's uuid as their single argument.
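A minimal hook script could therefore look like this (a sketch only; the /bin/customstart path matches the hypothetical example above and the script merely records the uuid it receives):

    #!/bin/sh
    # xenmgr passes the affected VM's uuid as the single argument
    VM_UUID="$1"
    logger "custom start hook invoked for VM $VM_UUID"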

PCI passthrough rules

VM passthrough rules are specified as a set of matchers, which are evaluated when the VM starts to find the actual PCI devices which need to be passed through.

VM PCI passthrough rules are managed by the following dbus methods:

  • add_pt_rule <pciclass> <pcivendor> <pcidevice>

Adds a new passthrough rule. Each of the <pciclass>, <pcivendor> and <pcidevice> matchers can take either an ID or the word "any", which matches all devices. Example:

xec -n somevm add-pt-rule 0x680 any any
  • add_pt_rule_bdf <bdf>

Adds a new passthrough rule using BDF notation. Example:

xec -n somevm add-pt-rule-bdf 0000:00:16.0
  • delete_pt_rule, delete_pt_rule_bdf - removal of passthrough rules
  • list_pt_rules - lists current passthrough rules
  • list_pt_pci_devices - evaluates all vm passthrough rules and outputs the list of matching pci devices on the host
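Mirroring the add-pt-rule example above, and assuming the same underscore-to-dash mapping of method names to command names, the current rules for a vm can be listed with:

xec -n somevm list-pt-rules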

V4V firewall rules

V4V firewall rules are managed by a dbus api, similarly to the PCI passthrough rules, and are evaluated when the VM starts. They usually result in a series of calls to the "viptables" command line program to erect the firewall. On VM shutdown, the entries added during VM start are torn down.

V4V firewall rules can be modified via the add_v4v_firewall_rule / delete_v4v_firewall_rule methods. These take a string argument with a rule definition. The format of the rule string is as follows:

<source> -> <destination>

The format of each of the source/destination endpoints is:

( * | my-stubdom | myself | dom-type=<domaintype> | dom-name=<domainname> ) : <v4v port>

Examples:

  • open connection from the VM to domain 0's port 5555: myself -> 0:5555
  • open connection from the VM to all domains of type "ndvm" port 5555: myself -> dom-type=ndvm:5555
  • open connection from the VM's stubdom to domain 0's port 333: my-stubdom -> 0:333
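These rules are installed via the methods above; assuming the same underscore-to-dash mapping as in the PCI passthrough examples, adding a rule could look like this (the vm name is a placeholder, and the rule string is quoted so the shell doesn't interpret it):

xec-vm -n somevm add-v4v-firewall-rule "myself -> dom-type=ndvm:5555"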

VM measurement

Before a VM is started, if it has the "measured" property set, the first disk (with an ID of 0), and only the first disk, will be checked for checksum consistency against a hash stored in the vm config tree. This is done by mounting the filesystem and computing a sha256 hash of the filesystem contents (as opposed to hashing the VHD file, because even readonly VHD files are modified due to some bookkeeping information such as the access timestamp).

If a hash inconsistency is detected, a measurement failure action will be invoked. It defaults to shutting down the system. It can be overridden via

db-write /xenmgr/measure-fail-action <powermanagementaction>

<powermanagementaction> can be one of the following:

  • sleep
  • hibernate
  • shutdown
  • forced-shutdown
  • reboot
  • nothing

and, as said previously, defaults to forced shutdown if not specified.

VM dependencies

xenmgr supports a simple form of dependency tracking between vms. If a VM is configured such that its network backend is not in dom0, xenmgr will internally track that dependency. It's possible to list all the vms a given vm depends on with:

xec-vm -n somevm list-dependencies

By default when a VM is started, xenmgr will ensure all its dependencies are started first. This can be toggled off by setting the "track-dependencies" vm property to false.
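For example, following the xec-vm set syntax used elsewhere in this document (the vm name is a placeholder):

xec-vm -n somevm set track-dependencies false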

There's no automatic shutdown of dependent vms.

Power Management

Xenmgr supports some configuration of power-related operations (see the example at the end of this section):

  • vm property "s3-mode"

This configures how the toolstack handles requests to put a VM into S3. Note that this doesn't affect requests made from within the guest, but only requests originating from the UI, closing the laptop lid, etc. It can be one of the following:

    • ignore - the request to put the vm into S3 will be ignored; the toolstack will proceed to the next vm on the list
    • pv - the toolstack will ask the PV driver within the guest, via xenstore, to put it into S3. This is the default for most vms
    • restart - the toolstack will shut down the VM and start it again after S3 resume. This is useful for the NDVM.
  • vm property "s4-mode"

Supports exactly the same values as "s3-mode"; specifies the action to be taken when the toolstack is supposed to put the VM into hibernation (S4).

  • vm property "control-platform-power-state"

This is useful for some single-vm scenarios. The toolstack will track the guest power state and try to put the host into the same state, so if the guest goes to S3, the toolstack will put the host into S3 as well.

  • host methods "set_ac_lid_close_action", "set_battery_lid_close_action"

Configure the power action performed when the laptop lid closes, either on battery or on AC power. The action can be one of the following:

  • sleep
  • hibernate
  • shutdown
  • forced-shutdown
  • reboot
  • nothing
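As an example of the s3-mode property above, a network VM could be configured to be restarted around host S3 using the xec-vm set syntax shown earlier (the vm name is a placeholder):

xec-vm -n ndvm set s3-mode restart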

Code layout

Xenmgr is split into a small library (xenmgr-core) and the main daemon (xenmgr). xenmgr-core currently contains very little, primarily just a bit of v4v firewall rule parsing code. The intent was to move code useful for other projects (such as the OVF import tool) out of the xenmgr daemon and into the library.

Custom Libraries Used

There's a bunch of small Haskell libraries we wrote which are used by the toolstack components. They reside in xclibs.git.

  • udbus - Vincent's small dbus library
  • xchv4v - v4v access library
  • xch-rpc - higher level rpc library based on udbus, also with support for v4v rpc tunnels
  • xchdb - access to the database files in /config, through the database daemon
  • xchutils - various random useful utility code
  • xchwebsocket - websocket library used by rpc-proxy

xenmgr-core

  • Vm/Uuid.hs - a few functions related to handling UUIDs
  • Vm/ProductProperty.hs - handling OVF product properties (not used much at the moment, besides theoretically for vms defined by OVF xml)
  • Vm/V4VFirewall.hs - parsing of v4v firewall rules

xenmgr

  • Rpc/Autogen - folder containing all rpc access stubs generated by rpcgen (invoked from bb recipie)
  • Tools/* - various utility code

Vm subtree

  • Vm/Types.hs - definition of important types, such as definition of vm config and vm states
  • Vm/Monad.hs - definition of the "Vm" monad, a simple reader-based monad which implicitly holds the context for a single vm
  • Vm/Config.hs - definition of database location for all vm config properties and code to create lower-level xenvm config files
  • Vm/State.hs - vm state conversions from/to string, as well as the concept of an internal vm state, which is a vm state private to xenmgr and more detailed than the generic state exposed by xenvm
  • Vm/Dm.hs, DmTypes.hs - types relating to device model, such as disks, nics, xenstore device states, handling backend/frontend interactions
  • Vm/DomainCore.hs - handling domids/lookup of domains and their stubdoms
  • Vm/Balloon.hs - ballooning service vm memory down (if allowed by vm config, which it usually isn't)
  • Vm/Pci*.hs - PCI passthrough - parsing pci passthrough rules, handling of binding drivers to pciback
  • Vm/DepGraph.hs - a few graph utilities to solve the dependency graph between vms (even though it usually boils down to 2 node graphs...)
  • Vm/Policies.hs - storage and query of vm policy settings
  • Vm/Monitor.hs - code for monitoring vm events from various sources (xenvm and xenstore) and registering/invoking internal handlers
  • Vm/React.hs - event handlers for vm events coming from Monitor.hs
  • Vm/Templates.hs - finding, categorizing, reading vm and service vm templates (in /usr/share/xenmgr-1.0/templates)
  • Vm/Queries.hs - many passive functions to query vm states/config. Of particular interest is the "getVmConfig" function, which creates the in-memory config representation based on the database
  • Vm/Actions.hs - active functions for changing vm config as well as doing some runtime state manipulation (such as relocating network backend to different ndvm), starting/stopping vms etc

XenMgr subtree

  • XenMgr/Connect/* - wrappers to access other daemons in the system
  • XenMgr/Expose/* - entry points for all xenmgr's dbus server rpcs
  • XenMgr/CdLock.hs - relatively new code for handling the AFRL request cd drive lock model
  • XenMgr/Config.hs - global xenmgr config storage/query
  • XenMgr/Diagnostics.hs - gathering status reports from vms + other diagnostics
  • XenMgr/Diskmgr.hs - vhd creation
  • XenMgr/Errors.hs - definition of numbered errors reported to the UI
  • XenMgr/Host.hs - lots of host level query functions (eth0 mac addresses, bios versions, xc versions, update state etc)
  • XenMgr/HostOps.hs - host shutdown/sleep/hibernate/reboot entry points
  • XenMgr/PowerManagement.hs - actual implementation of host shutdown/sleep/hibernate/reboot etc plus code to handle lid state changes
  • XenMgr/Notify.hs - wrappers for easier generation of various dbus signals
  • XenMgr/Rpc.hs - definition of Rpc monad used in xenmgr for dbus access
  • XenMgr/XM.hs - definition of XM monad based on reader monad containing context for _all_ vms. Useful for doing some cross vm interactions which require locking / synchronization

Interactions with other daemons

Please specify kind of interaction, e.g. exec, sockets, DBUS, etc - Dickon

xenmgr interacts with the following daemons on the system:

  • database daemon (dbd) for config storage. Via DBUS only.
  • xenvm for all low level vm operations / domain creation / shutdown etc. Via DBUS and textual vm config files in /tmp/xenmgr-xenvm-$UUID
  • input daemon for handling on-boot authentication screen, vm focus switching, screen lock, seamless mouse configuration. Via DBUS.
  • network daemon to notify it about the state of vms marked with the "provides-network-backend" property. Via DBUS (over v4v into the network domain)
  • surfman to query it for list of passthrough GPU devices if using PVMs ("get_vgpu_mode" surfman RPC). Via DBUS.