Community Call - 15th December 2016
Minutes by Christopher Clark. Copyright (c) 2016 BAE Systems.
Present:
Christopher Clark, Rich Persaud; BAE
Jim Rauscher and R2
James McKenzie; Bromium
Daniel Smith; Apertus
Ross Philipson, Eric Chanudet; AIS Burlington
Kevin Pearson; AFRL
tboot and UEFI
Ross: have had a request to formalize a plan
Kevin: requesting an RFC document for host UEFI integration to give others a chance to review the approach.
James: happy to write it
Overview of the work:
SMX mode must be entered after performing exit boot services.
Option 1: fold tboot into Xen
Option 2: if desirable to preserve the separation of the projects, compile tboot as a driver for UEFI.
Advantage of Option 1:
Not all BIOSes will measure all modules into PCR4, so some hacking is required to get those measurements;
we have some code for TPM 2.0 and TrEE, and also some for TPM 1.0, but this is not as nice as having the platform just do it for you.
Ross: think the UEFI module is the easier approach.
James: Then there's the handoff after SIPI to handle. Can avoid extra SIPI logic in tboot if linked into Xen.
ACTION: Ross to get together with James to discuss technical details (eg. PCR4 measurement issue)
Daniel: has anyone spoken to upstream?
James: tboot upstream is not very active.
Ross: concur; tboot project health looks bleak.
ACTION: Rich to talk to upstream Xen to understand whether Xen adoption may be viable and if so, what would be involved.
James: One barrier is that TPM 2.0 specs are very hard to obtain.
It is an ISO standard rather than a TCG standard, so an ISO subscription is required. Has been removed from the TCG web site.
Rich: What's the driving hardware requirement behind TPM 2.0 support?
Kevin: Microsoft Surface. "CSM" going away next year or two. Surface Pro, ProBook, Surface Studio.
Ross: With a Xen merge there are both technical and non-technical concerns; not known whether upstream wants it; licensing questions, eg. FreeBSD code in tboot.
James: Should be simpler: In the new implementation, the platform measures rather than tboot code.
ACTION: Ross and James to write a doc so that Rich can start the upstream discussion.
James: the gist is: "Support a module that goes into Xen binary that runs when Xen starts".
icbinn
Software designed to support 2-3 SyncXT VMs, managing a subset of the platform and the transfer of VHDs.
James: Designed to run a filesystem over a stream.
Took SUN-RPC, glued it to a stream protocol, and mapped it to libc functions.
Goal: to be very simple, inspectable and not much code; to run in dom0 or a storage domain, exposing POSIX calls remoted via SUN-XDR.
The difference between icbinn and icbinn_resolve is in the linking: icbinn_resolve is built as a static binary suitable for deployment on any foreign distro, with no shared library dependencies.
The preexisting options at the time were NFS, CIFS, SMB; all are complicated. A kernel-side implementation has security implications: it is harder to apply access control, and the code has special access to the kernel, including inode lookup functions. icbinn is just a usermode process, and so can be confined using standard mechanisms. It is much smaller and more lightweight than CIFS.
There is also a pure Python implementation, so it can run in a language with managed memory. The original Synchronizer VMs used it. Just import a Python module and have access to it as a filesystem, with all existing Python libraries.
icbinn would be usable outside of OpenXT by anyone else: it provides a POSIX filesystem over a socket.
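To illustrate the idea (not icbinn's actual wire protocol or API), a minimal sketch of remoting POSIX calls over a stream: a server executes open/read/close on behalf of a client, with simple length-prefixed framing standing in for SUN-XDR. All opcodes and names here are invented for the demo.

```python
import os
import socket
import struct
import tempfile
import threading

# Hypothetical opcodes -- illustrative only, not icbinn's real protocol.
OP_OPEN, OP_READ, OP_CLOSE = 1, 2, 3

def _recv_exact(sock, n):
    """Read exactly n bytes from the stream."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf

def _send(sock, payload):
    # Length-prefixed framing over the stream.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def _recv(sock):
    (n,) = struct.unpack("!I", _recv_exact(sock, 4))
    return _recv_exact(sock, n)

def serve(sock):
    """Server side: performs the POSIX calls locally for the client."""
    while True:
        msg = _recv(sock)
        op = msg[0]
        if op == OP_OPEN:
            fd = os.open(msg[1:].decode(), os.O_RDONLY)
            _send(sock, struct.pack("!i", fd))
        elif op == OP_READ:
            fd, count = struct.unpack("!iI", msg[1:9])
            _send(sock, os.read(fd, count))
        elif op == OP_CLOSE:
            (fd,) = struct.unpack("!i", msg[1:5])
            os.close(fd)
            _send(sock, b"")
            break

class RemoteFS:
    """Client side: POSIX-like calls forwarded over the stream."""
    def __init__(self, sock):
        self.sock = sock
    def open(self, path):
        _send(self.sock, bytes([OP_OPEN]) + path.encode())
        return struct.unpack("!i", _recv(self.sock))[0]
    def read(self, fd, count):
        _send(self.sock, bytes([OP_READ]) + struct.pack("!iI", fd, count))
        return _recv(self.sock)
    def close(self, fd):
        _send(self.sock, bytes([OP_CLOSE]) + struct.pack("!i", fd))
        _recv(self.sock)

def demo():
    # Exercise the protocol over a local socket pair.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b"hello")
        path = f.name
    server_sock, client_sock = socket.socketpair()
    t = threading.Thread(target=serve, args=(server_sock,))
    t.start()
    fs = RemoteFS(client_sock)
    fd = fs.open(path)
    data = fs.read(fd, 5)
    fs.close(fd)
    t.join()
    os.unlink(path)
    return data
```

Because the remoting happens entirely in a usermode process, the server can be confined with standard OS mechanisms, which is the security argument made above.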
ACTION: Rich to document it to aid in presenting to other potential users.
Release targets for 7.0 and 7.1
Rich: There is interest in a release before June for Kaby Lake support.
Kevin: Current target for a derivative release is at the end of March 2017.
Requirements: Kaby Lake, TPM 2.0 and host UEFI if ready.
Measured Launch should also be included. Having libxl in would make things easier.
Ross: libxl is not in yet but close, looking positive.
Rich: Plan: 7.0 in March, potentially 7.1 in June.
Ross: Eric is working on Linux 4.9 integration, which is the main part of Kaby Lake support.
Kevin: Intel have not released the SINIT ACM for Kaby Lake yet. Our org is doing TPM 2.0 work with meta-measured, and getting ready to post for OpenXT.
Need to get with AIS Rome for the measured launch part.
Ross: May have to work with existing tboot to get TPM 2.0 for now.
Rich: Request that this work is done in public with an RFC describing it. Daniel is the 7.0 release manager. Visibility is important.
Release planning for 7.0 will start in January.
Christopher: Xen 4.6 is already in master and will be included in the release.
Ross: Yes, and qemu, libxl and Linux uprevs too. TPM 2.0 is an open question.
Rich: Host UEFI for 7.0 release in March seems unlikely.
Ross: meta-measured versus OpenXT measured launch: work involved in integration is not known yet.
Need both TPM 1.0 and 2.0 support: should the switch be at install time or at runtime?
Kevin: Need to continue to support TPM 1.0 for existing systems. For flashable TPMs, can treat a TPM version change from 1.0 to 2.0 as a reinstall case, since migration would be very difficult.
ACTION: Ross and AIS Burlington to find out by January if they can sign up to perform QA for a March 7.0 release.
Tracking upstream Xen stable branch
Christopher: Want to avoid maintaining an OpenXT-specific Xen build; use the upstream stable branch, which would currently be stable-4.6 for OpenXT master.
The branch contains XSA fixes plus important corrections, and is not high churn. Think that we could track the tip of that stable branch, since it doesn't appear that there is a minor release for every XSA fix issued. However, would also be fine with using minor releases if the community and maintainers prefer that.
Suspect that if the XSA patches were removed, the OpenXT patch queue would not break between Xen 4.6.1 and the tip of stable-4.6; should go and test that to obtain data to aid this decision.
Eric: Preference for using the upstream minor release tarballs.
Easier for the build system to mirror than using a git repository, and hoping that upstream performs testing on their minor releases.
Have been cases where upstream broke eg. XenStore in a stable branch, functionality that OpenXT depends upon.
Rich: We need to detect that breakage and actively report it upstream.
Ross: Also have a preference for the upstream minor release tarballs.
Daniel: Still want to continue to see XSAs patched in OpenXT immediately when they exit embargo.
Christopher: Well, that would be easiest using the upstream stable branch, and advancing along it to a specified commit, if there is no minor release issued.
ACTION: Christopher + Eric to work on a plan and PR.
64-bit service VMs
Daniel: network-slave is an obstacle: it is Haskell, so it needs to be a 32-bit binary.
The challenge is in getting a mixed 32-bit/64-bit build from a standard OE build.
Replace midori with surf browser
Primary advocate for this not present on the call. No objections to the change expressed.
Discussed; a test cycle for the 7.0 release would be useful to validate a switch.
meta-virtualization
Christopher: meta-virtualization is an upstream OpenEmbedded layer with recipes for virtualization and container technologies and their dependencies,
eg. Docker, Go, lxc, KVM, libvirt, Xen, Linux.
Have a working prototype of OpenXT built with meta-virtualization layer; looks good, should be suitable for review in the New Year.
Rich: will enable collaboration with other users of virtualization in the wider OpenEmbedded community, which exceeds the size of the Xen community.
Last call of the year
Merry Winter Festivals to all!