[py-dev] OpenStack

Brack, Laurent P. lpbrac at dolby.com
Tue Mar 27 18:23:08 CEST 2012


Hi Holger,

No problem about the delayed response. I know you were in the Bay Area a
while back and may have felt that everyone is always in a rush, but that
is not the case for us :)

-----Original Message-----
From: holger krekel [mailto:holger at merlinux.eu] 
Sent: Saturday, March 24, 2012 6:42 AM
To: Brack, Laurent P.
Cc: py-dev at codespeak.net
Subject: Re: [py-dev] OpenStack

Hi Brack,
[Laurent] >> This is my last name :)

sorry for taking a while.  I am still traveling, currently in the Sonoran
Desert.  My answers until the first half of May are thus likely to lag.
(They do have a surprisingly good internet connection here ... better
than in the three-times more expensive hotel in the San Francisco Bay
Area last week).

On Tue, Mar 20, 2012 at 10:27 -0700, Brack, Laurent P. wrote:
> Forking from the [py-dev] pytest-timeout 0.2 e-mail thread.
> 
> > I am interested in OpenStack, but can you detail a bit more what you
> > want to achieve?
> 
> I have attached a rough diagram of what we are building internally
> (hopefully it will not be filtered out). 

thanks for sharing.

> About a year ago, we attended a presentation on OpenStack (when it was
> still driven by Anso Labs, before they got acquired by Rackspace).
> We were (and are) using a private cloud (VMWare) but contemplated the
> idea of scaling up to public clouds. The problem we had was that we had
> to develop custom code for each cloud vendor, whether Amazon EC2,
> Penguin, or VMWare, so OpenStack was really a viable path to avoid
> locking ourselves in with a given vendor.

sounds sensible.

> While the bulk of our tests require embedded devices (in which case
> virtualization makes little sense), a good chunk can be run on standard
> workstations (all OS flavors), and in that case it makes sense to move
> to a virtual environment.
> 
> pytest combined with xdist was a dream come true, as we could focus on
> developing tests meant to run on a single machine and later seamlessly
> parallelize their execution. There is nothing new here.
> 
> We were thinking of writing a plugin to xdist (cxdist - c for cloud)
> that would interface with OpenStack, using the pytest_xdist_setupnodes
> and pytest_configure_node hooks.

ok.

> In those hooks, the plugin would provision the machines (via OpenStack)
> and then make xdist believe that it is dealing with physical slaves.
> Finally, at the end of the run, we would tear down the slaves, leaving
> resources for other tests to run.
> 
> We have not started on this yet, as scalability is not an issue, but
> this is our plan. As the diagram shows, the red boxes are plugins we
> intend to release to the open source community.
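
[Laurent] >> To make this concrete, here is a rough, untested sketch of
what the cxdist conftest could look like. Nothing here exists yet;
provision() stands in for a hypothetical wrapper around the OpenStack
API:

    # conftest.py - cxdist idea: boot one VM per xdist node spec and
    # make xdist treat each VM as if it were a physical slave
    provisioned = {}

    def pytest_xdist_setupnodes(config, specs):
        # xdist calls this before any node is set up; one spec per slave
        for spec in specs:
            provisioned[spec.id] = provision(image="testbed")

    def pytest_testnodedown(node, error):
        # release the VM once its slave is done, freeing resources
        vm = provisioned.pop(node.gateway.id, None)
        if vm is not None:
            vm.destroy()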

Is your general idea to use the OpenStack integration to get
parallelizable test runs with totally controllable environments?

[Laurent] >> It is. We have a fairly large VMWare infrastructure but I
feel we are locked in. Also, there is a point where scaling up this
private cloud will simply make no sense from an economic standpoint.
Finally, an infrastructure like VMWare is of little use to the open
source community.

> Our teams write a lot of "data driven tests" (testing various AV
> codecs with different configuration parameters using very similar
> verification procedures), and as a result we make heavy use of
> funcargs.

Curious, are you using 2.2.3 for this?  There are some plans to further
improve parametrization, on which i would be interested in your feedback.

[Laurent] >> Yes. As a matter of fact we had to make changes to our
plugin to work properly with the new metafunc.parametrize method. Maybe
this is the time to open a small parenthesis. We have simplified the
generation process for our common users, although this doesn't prevent
them from using custom hook implementations or factories.

One thing the plugin does is attach "meta data" to items (which is
carried over from hook to hook). This meta data contains information
about test cases on the TestLink server. One of the challenges with
metafunc was to find a way to attach this information so that it would
not be lost during the test generation done by pytest. In 2.1.3, with
"addcall", we had done this by hijacking the intended use of the "param"
formal parameter (while preserving its functionality). In 2.2.3 the
addcall hack was broken (meta data got lost), so we found another way.
Again a hack, and therefore no guarantee it will work with future
versions.

It would be nice to have a supported way to "attach" data as part of the
parametrize process which is then carried over to the "item" being
generated. The idea is to carry over information generated by one hook
to another hook (whenever it makes sense).
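
To illustrate the 2.1.x-era hack (simplified; testlink_cases() is a
made-up name, not our actual plugin code):

    def pytest_generate_tests(metafunc):
        for case in testlink_cases(metafunc):
            # "param" was designed to reach funcarg factories via
            # request.param; we piggy-back our TestLink meta data on it
            # and unwrap it again in later hooks
            metafunc.addcall(id=case.external_id, param=case)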

> The pytest_testlink plugin is very similar to the pytest Case
> Conductor plugin from the Mozilla folks
> (http://blargon7.com/2012/01/case-conductor-pytest-plugin-proposal/)
> but interacts with TestLink (teamst.org, in which we are currently
> involved - improving performance of web services etc.) as opposed to
> Case Conductor.

Is pytest-testlink a plugin that you plan to work on?
[Laurent] >> Actually we have it working. We have been using it for
about 6 months now and I just made a new internal release (to fix the
issue with 2.1.3+) which also supports "platforms" (a feature of
TestLink) and filtering based on TestLink constructs (outcome, test case
ID, keywords).

The plugin relies on another package (called pytl) which provides an OO
model of the TestLink server built on top of their existing web
services. Those web services are quite buggy and incomplete, but we have
built-in workarounds. In parallel we are working with the TestLink team
to provide a more complete set of web services. We are also working on
restructuring the code base to allow better integration via plugins
(requirements management, reporting - we have a solution using BIRT) as
well as performance improvements, test case libraries (for re-use
between projects), project groups, etc.

While the pytest-testlink plugin is extensively documented and has so
far proven to be pretty robust, I want to clean it up even more before
releasing it (making it available as an open source package was the
intent from the start).
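
For the curious, the web services pytl wraps are TestLink's XML-RPC
API. A rough illustration of talking to it directly (not our pytl code;
the URL, dev key and IDs are placeholders):

    import xmlrpclib

    server = xmlrpclib.ServerProxy(
        "http://testlink.example.com/lib/api/xmlrpc.php")
    # report an execution result against a test plan ("p" = passed)
    server.tl.reportTCResult({
        "devKey": "<your-api-key>",
        "testcaseexternalid": "PROJ-123",
        "testplanid": 42,
        "status": "p",
    })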

> However, in addition to linking a test case ID to a given python test,
> we have implemented a simple mechanism to generate test cases from
> what we call Test Description Records. Those are currently stored in
> excel spreadsheets but will be stored in TestLink as test case
> properties. The object injection is done automatically by the plugin
> using a custom marker indicating which function argument shall be used
> for TDR injection, as well as correlating TDRs with parametrized test
> functions.

right.  Driving things from a spreadsheet, maybe even in Google docs,
sounds good to me, btw :)
[Laurent] >> We considered this but decided otherwise, just because of
the limited functionality offered compared to M$ Office or LibreOffice
(we are actually using xlrd/xlwt). As a matter of fact I wanted to find
a way to abstract the "data source" for the TDRs. Google Docs would be a
perfect exercise.
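
The xlrd side of it is trivial; something along these lines (a
simplified sketch, with the header row giving the TDR field names):

    import xlrd

    def iter_tdrs(path):
        # every row after the header becomes one TDR dict
        sheet = xlrd.open_workbook(path).sheet_by_index(0)
        header = sheet.row_values(0)
        for row in range(1, sheet.nrows):
            yield dict(zip(header, sheet.row_values(row)))

Abstracting that behind a "data source" interface is what would make a
Google Docs backend a drop-in replacement.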

> The point is that by using the funcarg feature offered by pytest, we
> can generate a tremendous amount of test cases with little effort, and
> this will require scalability and use of virtualization over time.
> 
> Sorry for the long description, but I wanted to give a complete
> overview to help comprehend where we are going.
> Our plan is to eventually release all this to the open source
> community (when we think the plugins are mature and robust enough).
> 
> If other people have a need/interest for those plugins (some of them
> are not illustrated), I would be more than happy to accelerate this
> process (getting approval from our Open Source Board). Since
> pytest_cxdist doesn't exist yet, it could be open source from the
> get-go.

I wonder if the provisioning of VMs wouldn't better fit at the level of
"tox", see http://tox.testrun.org, or even yet a level higher.  tox
creates virtual environments at the Python level.  FYI there are plans
to introduce a plugin system to tox and to make pytest-xdist use tox
configuration directly.
[Laurent] >> I need to educate myself more on tox. We are using tox for
our CI system (Jenkins), but environment setup is actually done via
buildout (tox invoking buildout - maybe we would have worked with tox
from the start if we had known about it).
But the tox idea is pretty good, as that is really where the testbed
should be set up, rather than at runtime (pytest) where things can go
wrong.
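
For reference, our current setup looks roughly like the following (an
approximation, not our actual tox.ini; it assumes a buildout.cfg at the
project root):

    [tox]
    envlist = py26,py27

    [testenv]
    deps = zc.buildout
    commands =
        buildout
        py.test --junitxml=junit-{envname}.xml []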

Btw, from May on i (and others) should be available through my company
merlinux for some consulting, in case you need more substantial support
than occasional comments from faraway deserts :)
[Laurent] >> Actually this sounds interesting. Let me think about it.
We are a rather small team, so we can use all the help we can get.
Enjoy, and drink a lot of water :)

Cheers/Laurent
best,
holger


