Introduction
This page is intended to document the OpenXT toolstack. Please feel free to contribute heavily.
xec / xec-vm
These are small Haskell programs. xec is a generic dbus access utility, similar to dbus-send; it can be used to communicate with any dbus daemon, though it defaults to talking to xenmgr. xec-vm is a VM-aware wrapper around xec. Both have decent "--help" output.
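A couple of illustrative invocations follow; the option and method names below are assumptions based on typical usage and may differ between builds, so treat "--help" as authoritative:
# ask xenmgr for the list of VMs (xec defaults to the xenmgr service; method naming assumed)
xec list_vms
# operate on a single VM by name via the VM-aware wrapper (flag and property names assumed)
xec-vm -n myvm get state
xec-vm -n myvm start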
Database daemon
Database daemon (dbd) is a small OCaml/dbus application which handles persistence of OpenXT configuration data. The database files are stored in /config/db and /config/vms.
The dbd reads its config files (which are in json format) at startup and builds an in-memory representation of the database, which is essentially a simple tree of strings similar to xenstore. From then on it works on the in-memory representation only, marking it dirty when it is modified. Dirty trees are flushed to disk at 3-second intervals. Dbd features protection against power cuts / partial writes: it always flushes to a temporary file first, then atomically renames the temporary file over the real one.
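The flush is the classic write-to-temporary-file-then-rename pattern; the shell sketch below only illustrates the idea and is not the actual dbd code (the temporary file name is made up):
# illustrative only: crash-safe replacement of the on-disk database
printf '%s' "$SERIALIZED_TREE" > /config/db.tmp   # write the whole new tree to a temporary file
sync                                              # make sure the data reaches the disk
mv /config/db.tmp /config/db                      # atomic rename swaps it into place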
API presented by dbd
+ read ( path:s, OUT value:s ) - reads a string value at given path
+ read-binary ( path:s, OUT value:ay ) - reads byte array value at given path
+ write ( path:s, value:s ) - writes string value to given path
+ dump ( path:s, OUT value:s ) - dumps whole json subtree at given path
+ inject ( path:s, value:s ) - writes whole json subtree to given path
+ list ( path:s, OUT value:as ) - lists child nodes at given path
+ rm ( path:s ) - removes subtree at given path
+ exists ( path:s, OUT ex:b ) - checks if a subtree/value at given path exists
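As a rough illustration, these methods can be called from Dom0 with xec; the service/interface/member names below match the rpc-proxy rules shown later on this page, but the xec option names and example paths are assumptions, so verify against "--help":
# read a single string value (path is only an example)
xec -s com.citrix.xenclient.db -i com.citrix.xenclient.db read "/vm/<uuid>/name"
# list the child nodes of a subtree
xec -s com.citrix.xenclient.db -i com.citrix.xenclient.db list "/vm"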
RPC proxy
RPC proxy is a Haskell application which supports proxying and filtering of the RPC traffic used in OpenXT. The proxied traffic consists of either binary dbus (for most use cases) or json/websocket based dbus (for conversations with a web browser in the UIVM) and is transported over UNIX domain sockets or v4v sockets.
There are 3 instances of rpc-proxy running by default in Dom0, each with a different function:
/usr/bin/rpc-proxy -s
This one forwards data between the default incoming channel (v4v port 5555) and the default outgoing channel (UNIX socket /var/run/dbus/system_bus_socket). The effect is that any VM can connect to v4v port 5555 of Dom0 and get access to its system bus, similar to what a local Dom0 process gets by connecting to the UNIX socket. V4V port 5555 is exposed only to reasonably trusted service VMs, such as the NDVM and SyncVM, since connecting to it makes it possible to export new services on the Dom0 bus. Rpc-proxy decides whether to forward or drop messages based on the default rules file in /etc/rpc-proxy.rules.
/usr/bin/rpc-proxy -i v4v:5556 -n com.citrix.xenclient.guest.uuid_$UUID --translate-anonymous-dests
This one is similar to the one above, but slightly more "secure" and therefore exposed to user VMs as well. It forwards data between v4v port 5556 and the default outgoing channel (UNIX socket /var/run/dbus/system_bus_socket). The "-n com.citrix.xenclient.guest.uuid_$UUID" argument causes any attempt to export a named service on the Dom0 bus to be forcibly renamed to com.citrix.xenclient.guest.uuid_$UUID. Therefore, a guest connecting to this port can only export one service. This has been useful in the past for implementing an agent in Linux VMs exporting guest power management operations; it may have been replaced by xenstore/PV driver based functionality.
/usr/bin/rpc-proxy -i v4v:8080 --json-in --websockets-in -n com.citrix.xenclient.guest.uuid_$UUID --auto-auth
This one listens on v4v port 8080 and forwards to the default outgoing channel (UNIX socket /var/run/dbus/system_bus_socket). It expects incoming data in json format (--json-in) wrapped in the websocket protocol (--websockets-in). It limits the ability to export service names on the Dom0 bus via "-n com.citrix.xenclient.guest.uuid_$UUID". This is used for communication with browsers in the UIVM. "--auto-auth" is just a shortcut to avoid the browser having to do dbus authentication; rpc-proxy performs it on the browser's behalf (we don't use dbus authentication for any security purposes, all it does is state that we are connecting as root).
RPC rules format
The default rpc rules are stored in /etc/rpc-proxy.rules file. Extra rules can be attached to VM config trees and will take effect when the VM is started and be torn down when the VM is stopped. The format of the rules is relatively straightforward, for example:
# nothing can be done by default
deny all

# allow stubdoms to talk to surfman,xenmgr,dbus
allow stubdom true destination com.citrix.xenclient.surfman
allow stubdom true destination com.citrix.xenclient.xenmgr
allow stubdom true destination org.freedesktop.DBus interface org.freedesktop.DBus

# allow guests to call 'gather' on diagnostics interface (required by xc-diag)
allow destination com.citrix.xenclient.xenmgr interface com.citrix.xenclient.xenmgr.diag member gather

# allow anybody to do some vm queries required for switcher bar
allow destination com.citrix.xenclient.xenmgr interface org.freedesktop.DBus.Properties member Get
allow destination com.citrix.xenclient.xenmgr interface com.citrix.xenclient.xenmgr member list_vms
allow destination com.citrix.xenclient.xenmgr interface com.citrix.xenclient.xenmgr.vm member get_db_key
allow destination com.citrix.xenclient.xenmgr interface com.citrix.xenclient.xenmgr.vm member read_icon
allow destination com.citrix.xenclient.xenmgr interface com.citrix.xenclient.xenmgr.vm member switch
allow destination com.citrix.xenclient.input interface com.citrix.xenclient.input member get_focus_domid
allow destination com.citrix.xenclient.xenmgr interface com.citrix.xenclient.xenmgr member find_vm_by_domid

# allow guest to do some requests
allow destination com.citrix.xenclient.xenmgr interface com.citrix.xenclient.xenmgr.guestreq member request_attention

# allow conditional domstore (private db space) access
allow destination com.citrix.xenclient.db interface com.citrix.xenclient.db member read if-boolean domstore-read-access true
allow destination com.citrix.xenclient.db interface com.citrix.xenclient.db member read_binary if-boolean domstore-read-access true
allow destination com.citrix.xenclient.db interface com.citrix.xenclient.db member list if-boolean domstore-read-access true
allow destination com.citrix.xenclient.db interface com.citrix.xenclient.db member exists if-boolean domstore-read-access true
# allow destination com.citrix.xenclient.db interface com.citrix.xenclient.db member write if-boolean domstore-write-access true
allow destination com.citrix.xenclient.db interface com.citrix.xenclient.db member rm if-boolean domstore-write-access true
Only allow/deny rules are supported. The rule type is followed by a dbus message matcher, where "destination <name>", "interface <name>" and "member <name>" can be specified to match the corresponding fields of an incoming dbus message.
stubdom rules
Of some interest are stubdom rules marked as "stubdom true". They only match on messages coming from stub domains.
rules based on configuration of the VM sending the message
It is also possible to write rules based on any boolean field in the config tree of the VM sending the message. In the example above, "if-boolean domstore-read-access true" matches only if the VM which sent the message has the "domstore-read-access" boolean in its config tree set to true. Therefore, it is possible to enable/disable a VM's domstore access (which boils down to allowing it to access dbd remotely) simply by manipulating its config tree.
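As a hedged sketch, read access could be granted by writing that boolean through dbd, e.g. with xec (the db path layout and the xec options are assumptions about a typical install):
# enable domstore read access for one VM (path layout assumed)
xec -s com.citrix.xenclient.db -i com.citrix.xenclient.db write "/vm/<uuid>/domstore-read-access" "true"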
rules based on domain type
Sometimes it's useful to grant an RPC permission to all VMs of a particular type (such as "NDVM" or "SyncVM"). This can be achieved by adding a "dom-type <type>" matcher to the rule, for example:
allow dom-type syncvm destination com.citrix.xenclient.xenmgr interface com.citrix.xenclient.xenmgr.vm member add_disk
xenvm
xenvm is a monitor for a single virtual machine, written in OCaml. One instance is forked per VM and is responsible for that VM's lifecycle and control operations. It interacts directly with Xen via the libxenctrl wrapper. You can find it in toolstack.git.
TODO: Which layers of xenvm have state? If xenvm is killed should the domain die? Can xenvm be restarted while a domain continues running? Should xenvm instances exist for VMs which are not running?
Xenvm is a stateful daemon:
- upper layers hold the VM config state in memory
- upper layers hold the VM state in memory
The lower layers (such as xenops / the device layer) primarily rely on state stored in xenstore, mostly by PV drivers, and don't hold in-memory state (though they do occasionally write, and later read, some small extra state bits in xenstore).
Xenvm runs even if its VM is not running, which makes it possible to configure a VM in the "stopped" state. However, as a boot performance optimization, it is only started for the first time when the first request to start its VM arrives; it then continues to run until system shutdown (or VM deletion). Xenmgr copes with this by delaying the initial configuration.
The state of a VM reported by xenvm is partially taken from the domain state reported by Xen and partially derived by xenvm itself. Xen doesn't report exact domain states when it notifies the Dom0 toolstack about a state change (via an event channel); it basically just signals that "something has happened", and it's the job of xenvm to check whether its VM is still running, has died, etc., update its internal state accordingly, and then forward it to the upper layers of the toolstack, i.e. xenmgr.
Xenvm cannot be restarted while its domain is running; that is a limitation of how it's currently coded (stateful). It can be restarted when the domain is dead, but xenvm does not automatically quit when its domain dies (and shouldn't).
Code Layout
Libraries
- libs/common - various utilities
- libs/eventchn - binding for libxenctrl's eventchannel interfaces
- libs/json - json parser
- libs/stdext - utility extensions for OCaml's standard library
- libs/uuid - uuid generation and handling
- libs/xc - libxenctrl binding
- libs/xg - libxenguest binding
- libs/xs - native OCaml xenstore protocol implementation
Scripts
There are a few udev scripts in the "scripts" subfolder, as well as udev rules for executing them. These handle notifying the toolstack that the backend PV drivers have finished creating network/disk devices, or tearing them down.
Xenops
Xenops is the lower-layer part of xenvm, responsible for low-level management of Xen domains (addressed by domain id). It is used internally by xenvm and is also exposed to the user via the "xenops" Dom0 utility.
- xenops/balloon.ml - memory ballooning utilities, not used much in OpenXT
- xenops/device.ml and device_common.ml - important files which are responsible for initialization of PV (or PCI passthrough) device backends (via xenstore)
- xenops/dm.ml - config construction for qemu
- xenops/dmagent.ml - communication with dmagent, which is a program used to fork/configure qemu instances (which can be running in another domain)
- xenops/domain[_common].ml - domain management functions
- xenops/hotplug.ml - a few utility functions to wait for a device to be created by the PV backend, etc.
- xenops/memory.ml - crazy arithmetic to figure out how much memory is required to boot a VM
- xenops/netman.ml - a couple of network helper functions
- xenops/watch.ml - xenstore watch helper functions
- xenops/xal.ml - low-level loop waiting for and parsing PV device events
Xenvm
- xenvm/misc.ml - as name says
- xenvm/tasks.ml - list of rpc tasks xenvm supports
- xenvm/vmact.ml - high level implementation of vm operations (start/stop etc)
- xenvm/vmconfig.ml - parsing xenvm config files (in OpenXT, placed in /tmp/xenmgr-xenvm-*)
- xenvm/vmstate.ml - VM state struct
- xenvm/xenops.ml - xenops Dom0 utility entry point
- xenvm/xenvm.ml - daemon entry point
xenmgr
xenmgr is a Haskell application which exports VM configuration over DBus and translates it to the lower-level configuration files consumed by xenvm. Since each xenvm instance monitors only a single virtual machine, xenmgr is responsible for cross-VM concerns, such as relocating network PV backends on backend domain reboot, VM dependencies, enforcing cross-VM v4v firewall rules, etc.
TODO: detail the concurrency model (e.g. thread per dbus connection, reactor or whatever). What are the consequenes of killing and restarting xenmgr? What normally launches xenmgr?
Xenmgr is started by the "bootage" Dom0 program, like many other Dom0 daemons (configured in /etc/bootage.conf). Xenmgr can be killed and restarted freely since it keeps almost no in-memory VM state. The only consequences are a temporary outage of its functionality and the possibility of some startup code executing again, which includes, for example, locking the UIVM with the authentication screen or performing the boot-time service VM filesystem checksum.
Each incoming RPC call to xenmgr is processed in parallel (on its own Haskell IO thread). Because most of the state is kept outside of xenmgr (either in the db or in xenstore), there isn't much synchronization needed. Still, because dbd doesn't support transactions, some db writes need to be protected by locking; xenmgr provides this capability.
Unlike incoming calls, incoming notifications are processed serially on a separate IO thread. This is because the ordering of notifications is important (and guaranteed by dbus, therefore we have to process them serially to keep that guarantee). As a consequence, long-running notification handlers should fork off a thread so as not to block the queue.
exported DBus entities
xenmgr exports the following dbus objects (a small query example follows the list):
- / - root object, contains global configuration and operations
- /vm/$VM_UUID - per VM dbus object, provides access to VM configuration
- /vm/$VM_UUID/$DISK_ID - provides access to VM disk configuration
- /vm/$VM_UUID/$NIC_ID - provides access to VM network interface configuration
- /host - provides access to host specific configuration and host information
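As a sketch, these objects can be queried from Dom0 with xec / xec-vm; the member names come from the rules file above, while the option names and the "state" property are assumptions, so check "--help" and the IDL files:
# call list_vms on the root object
xec -s com.citrix.xenclient.xenmgr -o / -i com.citrix.xenclient.xenmgr list_vms
# read a property of a single VM through the VM-aware wrapper
xec-vm -n <vm name> get state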
Each of these objects exports one or more dbus interfaces, as defined in the IDL repository (idl.git). The following IDL files are used by xenmgr:
- xenmgr.xml
- xenmgr_vm.xml
- xenmgr_host.xml
- vm_nic.xml
- vm_disk.xml
DBus interface boilerplate generation
Most of the dbus boilerplate code, such as the hooks implementing exported functions as well as the stubs for calling other daemons, is generated from the files in idl.git by a custom-written program, "rpcgen". It takes dbus xml files as input and produces bindings/hooks in a variety of languages as output.
In the case of xenmgr, the boilerplate is generated from its BB recipe, in the configure step:
# generate rpc stubs
mkdir -p Rpc/Autogen
# Server objects
xc-rpcgen --haskell -s -o Rpc/Autogen --module-prefix=Rpc.Autogen ${STAGING_DATADIR}/idl/xenmgr.xml
xc-rpcgen --haskell -s -o Rpc/Autogen --module-prefix=Rpc.Autogen ${STAGING_DATADIR}/idl/xenmgr_vm.xml
Hence a modification of the IDL usually consists of the following steps:
- modify IDL files
- re-stage IDL files (for example by ./bb -cforce_rebuild xenclient-idl && ./bb xenclient-idl)
- regenerate boilerplate (./bb -cconfigure xenmgr)
- compile xenmgr (./bb -ccompile xenmgr), fix errors, add implementation for new methods etc