...
Xenops is the lower layer of xenvm, responsible for low-level management of Xen domains (addressed by domain id). It is used internally by xenvm and also exposed to the user via the "xenops" Dom0 utility.
- xenops/balloon.ml - memory ballooning utilities, not used much in OpenXT
- xenops/device.ml and device_common.ml - important files responsible for initializing PV (and PCI passthrough) device backends via xenstore
- xenops/dm.ml - config construction for qemu
- xenops/dmagent.ml - communication with dmagent, a program used to fork and configure qemu instances (which can run in another domain)
- xenops/domain[_common].ml - domain management functions
- xenops/hotplug.ml - a few utility functions, e.g. for waiting until a device has been created by the PV backend
- xenops/memory.ml - crazy arithmetic to figure out how much memory is required to boot a VM
- xenops/netman.ml - a couple of network helper functions
- xenops/watch.ml - xenstore watch helper functions
- xenops/xal.ml - low-level loop that waits for and parses PV device events
Xenvm
- xenvm/misc.ml - miscellaneous helpers, as the name says
- xenvm/tasks.ml - list of RPC tasks supported by xenvm
- xenvm/vmact.ml - high-level implementation of VM operations (start, stop, etc.)
- xenvm/vmconfig.ml - parsing of xenvm config files (in OpenXT, placed in /tmp/xenmgr-xenvm-*)
- xenvm/vmstate.ml - VM state struct
- xenvm/xenops.ml - xenops Dom0 utility entry point
- xenvm/xenvm.ml - daemon entry point
xenmgr
...
Unlike incoming calls, incoming notifications are processed serially on a separate IO thread. This is because notification ordering is important and is guaranteed by dbus, so they must be processed serially to preserve that guarantee. As a consequence, a long-running notification handler should fork off a thread so that it does not block the queue.
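A minimal sketch of this dispatch scheme, in Python rather than xenmgr's actual Haskell: a single consumer thread drains the queue serially (preserving the dbus ordering), and a handler known to be slow is forked onto its own thread so it cannot stall the queue. Handler and notification names here are made up for illustration.

```python
import threading
import queue

# Sketch only (xenmgr itself is Haskell): one IO thread consumes
# notifications serially to preserve dbus ordering; a known-slow
# handler is forked onto its own thread so it cannot block the queue.
notifications = queue.Queue()
handled = []          # (name, payload) tuples, in handling order
workers = []          # threads forked for slow handlers

def slow_handler(payload):
    handled.append(("slow", payload))

def dispatch(name, payload):
    if name == "slow":
        # fork off a thread so the notification loop is not blocked
        t = threading.Thread(target=slow_handler, args=(payload,))
        workers.append(t)
        t.start()
    else:
        handled.append((name, payload))

def notification_loop():
    while True:
        item = notifications.get()
        if item is None:          # sentinel: shut down
            break
        dispatch(*item)

io_thread = threading.Thread(target=notification_loop)
io_thread.start()
notifications.put(("vm_state_changed", "uuid-1"))
notifications.put(("slow", "uuid-2"))
notifications.put(("vm_state_changed", "uuid-3"))
notifications.put(None)
io_thread.join()
for t in workers:
    t.join()
```

Note that the fast notifications are still handled in queue order; only the forked handler's completion time is decoupled from the loop.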
exported DBus entities
xenmgr exports the following dbus objects:
- / - root object, contains global configuration and operations
- /vm/$VM_UUID - per VM dbus object, provides access to VM configuration
- /vm/$VM_UUID/$DISK_ID - provides access to VM disk configuration
- /vm/$VM_UUID/$NIC_ID - provides access to VM network interface configuration
- /host - provides access to host specific configuration and host information
Each of these objects exports one or more dbus interfaces, as defined in the IDL repository (idl.git). The following IDL files are used by xenmgr:
- xenmgr.xml
- xenmgr_vm.xml
- xenmgr_host.xml
- vm_nic.xml
- vm_disk.xml
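The object path layout above can be illustrated with a small path-construction sketch. One detail worth noting: the dbus specification only allows `[A-Za-z0-9_]` within an object path element, so a hyphenated VM uuid cannot appear verbatim in `/vm/$VM_UUID`; mangling hyphens to underscores is assumed here. The helper names are hypothetical, not xenmgr's API (xenmgr's real code is generated Haskell).

```python
import re

# Hypothetical helpers illustrating the layout of xenmgr's object
# paths; the uuid mangling (hyphen -> underscore) is an assumption
# driven by the dbus object path grammar, not taken from xenmgr.
def vm_path(vm_uuid):
    return "/vm/" + vm_uuid.replace("-", "_")

def vm_child_path(vm_uuid, child_id):
    # covers both /vm/$VM_UUID/$DISK_ID and /vm/$VM_UUID/$NIC_ID
    return vm_path(vm_uuid) + "/" + str(child_id)

# dbus object path grammar: "/"-separated [A-Za-z0-9_]+ elements
OBJECT_PATH = re.compile(r"^(/[A-Za-z0-9_]+)+$")

p = vm_child_path("00000000-1111-2222-3333-444444444444", "disk_1")
assert OBJECT_PATH.match(p)
```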
DBus interface boilerplate generation
Most of the dbus boilerplate code, such as the hooks implementing exported functions as well as the stubs for calling other daemons, is generated from the files in idl.git by a custom-written program, "rpcgen". It takes dbus XML files as input and produces bindings/hooks in a variety of languages as output.
In the case of xenmgr, the boilerplate is generated from its BB recipe, in the configure step:
```
# generate rpc stubs
mkdir -p Rpc/Autogen
# Server objects
xc-rpcgen --haskell -s -o Rpc/Autogen --module-prefix=Rpc.Autogen ${STAGING_DATADIR}/idl/xenmgr.xml
xc-rpcgen --haskell -s -o Rpc/Autogen --module-prefix=Rpc.Autogen ${STAGING_DATADIR}/idl/xenmgr_vm.xml
```
Hence a typical IDL modification consists of the following steps:
- modify IDL files
- re-stage IDL files (for example by ./bb -cforce_rebuild xenclient-idl && ./bb xenclient-idl)
- regenerate boilerplate (./bb -cconfigure xenmgr)
- compile xenmgr (./bb -ccompile xenmgr), fix errors, add implementations for new methods, etc.
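The steps above can be strung together into one shell sketch. The commands are taken verbatim from this document; `./bb` only exists inside an OpenXT build tree, so the script defaults to a dry run that just prints each command.

```shell
#!/bin/sh
# Sketch of the IDL modification workflow described above.
# DRY_RUN=1 (the default) just prints the commands, since the ./bb
# wrapper is only available inside an OpenXT build tree.
set -e
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# 1. (edit the IDL files in idl.git by hand first)
# 2. re-stage the IDL files
run ./bb -cforce_rebuild xenclient-idl
run ./bb xenclient-idl
# 3. regenerate the dbus boilerplate
run ./bb -cconfigure xenmgr
# 4. recompile xenmgr, then fix errors / implement new methods
run ./bb -ccompile xenmgr
```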