Muli Ben-Yehuda's journal

April 10, 2011

3rd Workshop on I/O Virtualization, VAMOS and SplitX

The program committee meeting for the 3rd Workshop on I/O Virtualization was held this past Friday. I like the resulting program quite a bit, and not just because two of our submissions, VAMOS and SplitX, were accepted. WIOV is probably my favorite workshop ever, and this year it will once again be co-located with the USENIX Annual Technical Conference, another favorite venue. The full program will be available online in a week or two.

Our two accepted papers, “SplitX: Split Guest/Hypervisor Execution on Multi-Core” (joint work with Alex Landau and Abel Gordon) and “VAMOS: Virtualization Aware Middleware” (joint work with Abel Gordon, Dennis Filimonov, and Maor Dahan), tackle the I/O virtualization problem from two different directions. VAMOS follows the same general line of thought as our earlier Scalable I/O and IsoStack work: raising the level of abstraction of I/O operations (socket calls instead of sending and receiving Ethernet frames, file system operations instead of reading and writing blocks) improves I/O performance because it cuts down the number of protection-domain crossings needed. In VAMOS, we perform I/O at the level of middleware operations: the guest passes database queries to the hypervisor instead of reading and writing disk blocks. This gives a nice boost to performance, as you might expect, and, thanks to the inherent modularity of middleware, it turns out to be fairly easy to do, which to me was a surprising result.
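
To make the idea concrete, here is a minimal guest-side sketch of what such an interface might look like. This is not the actual VAMOS interface: the hypercall number and function names are invented for illustration, and the hypercall itself is stubbed out so the sketch compiles anywhere; a real guest would issue a vmcall/vmmcall instead.

    /* Hypothetical guest-side shim illustrating the VAMOS idea: rather
     * than the database engine reading and writing disk blocks (many
     * protection-domain crossings per query), the guest forwards the
     * whole query to the hypervisor in a single call. */
    #include <stddef.h>
    #include <stdio.h>

    #define VAMOS_HC_QUERY 0x42   /* invented hypercall number */

    /* One crossing: hand the query text and a result buffer to the
     * hypervisor, which runs the middleware operation on the host side. */
    static long vamos_hypercall(unsigned nr, const char *query,
                                char *result, size_t result_len)
    {
        (void)nr;  /* a real guest would execute vmcall/vmmcall here */
        snprintf(result, result_len, "stub result for: %s", query);
        return 0;
    }

    /* Guest-visible entry point: one query, one domain crossing,
     * instead of dozens of block reads and completion interrupts. */
    long vamos_query(const char *sql, char *result, size_t result_len)
    {
        return vamos_hypercall(VAMOS_HC_QUERY, sql, result, result_len);
    }

    int main(void)
    {
        char buf[128];
        vamos_query("SELECT COUNT(*) FROM orders", buf, sizeof buf);
        puts(buf);
        return 0;
    }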

SplitX is a whole other kettle of fish. It has been clear to us for some time that the inherent overhead of x86 machine virtualization is tied to the trap-and-emulate model, as can be seen perhaps most clearly in the Turtles paper. Trap-and-emulate time-multiplexes two different contexts (the guest and the hypervisor) onto the same CPU core, so it incurs both the direct cost of each switch and the indirect cost of dirtying the caches. But what if we could run guests on their own cores, and hypervisors on their own cores, and never the twain shall meet? SplitX presents our initial exploration of this very promising (if I may say so myself) idea.
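
To illustrate the communication model, here is a toy sketch with two pthreads standing in for the dedicated guest and hypervisor cores. This is not the mechanism from the paper: the message layout, reason code, and polling protocol are all invented; the point is only that an exit becomes a message through shared memory rather than a context switch.

    /* Toy SplitX-style exit: the "guest" posts an exit request into a
     * shared-memory slot and spins for the reply, never leaving its
     * core; the "hypervisor" polls, emulates, and answers from its own
     * core. Build with: cc -std=c11 -pthread splitx_sketch.c */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    struct exit_msg {
        atomic_int state;   /* 0 = empty, 1 = posted, 2 = served */
        int reason;         /* invented exit reason code */
        int payload;
    };

    static struct exit_msg slot;   /* the shared "mailbox" */

    /* Guest core: no trap, no context switch; caches stay warm. */
    static void *guest_core(void *arg)
    {
        (void)arg;
        slot.reason = 1;
        slot.payload = 0xAB;
        atomic_store(&slot.state, 1);          /* post the exit */
        while (atomic_load(&slot.state) != 2)
            ;                                  /* spin instead of trapping */
        printf("guest: exit served, payload now 0x%x\n", slot.payload);
        return NULL;
    }

    /* Hypervisor core: poll for posted exits and emulate them. */
    static void *hypervisor_core(void *arg)
    {
        (void)arg;
        while (atomic_load(&slot.state) != 1)
            ;
        slot.payload += 1;                     /* "emulate" the operation */
        atomic_store(&slot.state, 2);          /* answer the guest */
        return NULL;
    }

    int main(void)
    {
        pthread_t g, h;
        pthread_create(&h, NULL, hypervisor_core, NULL);
        pthread_create(&g, NULL, guest_core, NULL);
        pthread_join(g, NULL);
        pthread_join(h, NULL);
        return 0;
    }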

The papers will be available online later; in the meantime, shoot me an email to get the current drafts.
