From eli at water.ca.gov Thu Jan 19 05:55:56 2012
From: eli at water.ca.gov (Ateljevich, Eli)
Date: Wed, 18 Jan 2012 20:55:56 -0800
Subject: [py-dev] xdist and thread-safe resource counting
Message-ID: <67CC54752B3F8D43ACD7E3AD455162630B5714DAA9@mrsbmapp20306.ad.water.ca.gov>

I have a question about managing resources in a thread-safe way across xdist -n.

My group is using py.test as a high-level driver for testing an MPI-based numerical code. Many of our system-level tests wrap a system call to mpirun and then postprocess the results. I have a decorator for the tests that hints at the number of processors needed (usually something like 1, 2, 8).

I would like to launch as much as I can at once given the available processors. For instance, if 16 processors are available there is no reason I couldn't be running a 12-processor and a 4-processor test at the same time. I was thinking of using xdist with some modest number of workers representing the maximum number of concurrent tests. The xdist workers would launch MPI jobs whenever enough processors become available to satisfy the np hint for a test. This would be managed by having the tests "check out" cores and sleep if the cores aren't available yet.

This design requires a thread-safe way to query, acquire and lock the count of available MPI cores. I could use some sort of lock or semaphore from threading, but I thought it would be good to run this by the xdist cognoscenti and find out whether there is a preferred way of doing this, given how xdist itself distributes its work or manages threads.

Thanks much,
Eli


From holger at merlinux.eu Fri Jan 20 09:50:18 2012
From: holger at merlinux.eu (holger krekel)
Date: Fri, 20 Jan 2012 08:50:18 +0000
Subject: [py-dev] xdist and thread-safe resource counting
In-Reply-To: <67CC54752B3F8D43ACD7E3AD455162630B5714DAA9@mrsbmapp20306.ad.water.ca.gov>
References: <67CC54752B3F8D43ACD7E3AD455162630B5714DAA9@mrsbmapp20306.ad.water.ca.gov>
Message-ID: <20120120085018.GL26028@merlinux.eu>

Hi Eli,

interesting problem.
On Wed, Jan 18, 2012 at 20:55 -0800, Ateljevich, Eli wrote:
> I have a question about managing resources in a threadsafe way across xdist -n.
> [...]
> This design requires a threadsafe method to query, acquire and lock the count of available mpi cores. I could use some sort of lock or semaphore from threading, but I thought it would be good to run this by the xdist cognoscenti and find out if there might be a preferred way of doing this given how xdist itself distributes its work or manages threads.

pytest-xdist itself does not provide or use a method to query the number of available processors.

Quick background on xdist: the master process starts a number of worker processes, each of which collects tests (see the output of py.test --collectonly), and the master sees the test ids of all those collections. It then decides the scheduling (Each or Load at the moment; "-n5" implies load-balancing) and sends test ids to the nodes for execution. It pre-loads each node with a few test ids and then waits for completions before sending more test ids to that node. There is no node-to-node communication for co-ordination.
It might be easiest not to try to extend the xdist mechanisms but to implement an independent method which co-ordinates the number of running MPI tests / used processors via a file or so. For example, on posix you can read/write a file with some meta-information and use the atomic os.rename operation. I am not sure about the exact semantics but this should be doable and testable without any xdist involvement.

If you have such a method which helps to restrict the number of MPI processes, you can then use it from a pytest_runtest_setup hook, which can read your decorator attributes/markers and decide whether to wait or to run the test. This method also makes you rather independent of the number of worker processes started with "-nNUM".

HTH,
holger


From eli at water.ca.gov Fri Jan 20 21:56:37 2012
From: eli at water.ca.gov (Ateljevich, Eli)
Date: Fri, 20 Jan 2012 12:56:37 -0800
Subject: [py-dev] xdist and thread-safe resource counting
In-Reply-To: <20120120085018.GL26028@merlinux.eu>
References: <67CC54752B3F8D43ACD7E3AD455162630B5714DAA9@mrsbmapp20306.ad.water.ca.gov>, <20120120085018.GL26028@merlinux.eu>
Message-ID: <67CC54752B3F8D43ACD7E3AD455162630B5714DAB0@mrsbmapp20306.ad.water.ca.gov>

Thanks, Holger. I appreciate the hint about where to do the testing and waiting in pytest_runtest_setup, and I think the atomic file rename idea is an interesting way to set up a signal.

I may not fully understand xdist. I certainly agree that efficient use of mpirun is the crux of doing the job right ... so probably any load balancing offered by xdist is going to be wasted on me.

The main service I was looking for from xdist was the ability to run tests concurrently. As I think you realize, if I have a pool of 16 processors and the first four tests collected require 8, 4, 8, 4 processors, I would want this behavior:
1. the first test to start immediately
2. the second test to start immediately without the first finishing
3. the third test to either wait, or start in a python sense but "sleep" before launching mpi
4. the fourth test to start immediately

Is vanilla py.test able to do this kind of concurrent testing? Or would I need to tweak it to launch tests in threads according to my criterion for readiness?

I think we have settled how I would allocate resources, but your idea implies I might have all the test hints in one place. If I have full control of all the test launches, this might allow me to do some sort of knapsack-problem-ish reorganization to keep everything fully utilized, rather than taking the tests in the order they were collected. For instance, if I had 16 processors and the first four tests take 12-12-4-4, I could do this in the order (12+4 concurrently) (12+4 concurrently). Do I have this level of control?

Thanks,
Eli


From holger at merlinux.eu Fri Jan 20 23:26:48 2012
From: holger at merlinux.eu (holger krekel)
Date: Fri, 20 Jan 2012 22:26:48 +0000
Subject: [py-dev] xdist and thread-safe resource counting
In-Reply-To: <67CC54752B3F8D43ACD7E3AD455162630B5714DAB0@mrsbmapp20306.ad.water.ca.gov>
References: <67CC54752B3F8D43ACD7E3AD455162630B5714DAA9@mrsbmapp20306.ad.water.ca.gov> <20120120085018.GL26028@merlinux.eu> <67CC54752B3F8D43ACD7E3AD455162630B5714DAB0@mrsbmapp20306.ad.water.ca.gov>
Message-ID: <20120120222648.GR26028@merlinux.eu>

On Fri, Jan 20, 2012 at 12:56 -0800, Ateljevich, Eli wrote:
> The main service I was looking for from xdist was the ability to run tests concurrently.
> [...]
> Is vanilla py.test able to do this kind of concurrent testing? Or would I need to tweak it to launch tests in threads according to my criterion for readiness?

A run with pytest-xdist, notably "py.test -nNUM", makes it possible to implement this behaviour, i think.

> I think we have settled how I would allocate resources, but your idea implies I might have all the test hints in one place.
> If I have full control of all the test launches this might allow me to do some sort of knapsack-problem-ish reorganization to keep everything fully utilized rather than taking the tests in the order they were collected. For instance, if I had 16 processors and the first four tests take 12-12-4-4 I could do this in the order (12+4 concurrently) (12+4 concurrently). Do I have this level of control?

I think so, yes. IIRC pytest-xdist distributes the first tests such that they each land at different nodes. So, given the algorithm i hinted at, and running with "py.test -n3", the first sub process would start and run on 12 processors. The second process would see that 12 are in use and wait until 12 become available. The third process would only need 4 and immediately continue, utilizing all 16 processors at that time. When the first one finishes, the second sub process would see that there now are enough and proceed with its testing. This is all fully compatible with pytest-xdist semantics and only needs code at pytest_runtest_setup time, i think.

best,
holger


From eli at water.ca.gov Thu Jan 26 00:30:42 2012
From: eli at water.ca.gov (Ateljevich, Eli)
Date: Wed, 25 Jan 2012 15:30:42 -0800
Subject: [py-dev] xdist and thread-safe resource counting
In-Reply-To: <20120120222648.GR26028@merlinux.eu>
References: <67CC54752B3F8D43ACD7E3AD455162630B5714DAA9@mrsbmapp20306.ad.water.ca.gov> <20120120085018.GL26028@merlinux.eu> <67CC54752B3F8D43ACD7E3AD455162630B5714DAB0@mrsbmapp20306.ad.water.ca.gov> <20120120222648.GR26028@merlinux.eu>
Message-ID: <67CC54752B3F8D43ACD7E3AD455162630B57114387@mrsbmapp20306.ad.water.ca.gov>

Thanks. I really appreciate the help. I have one last question -- is the underlying parallel mechanism threads or mpi or something else? Do I have to worry about things like MPI communicators getting in each other's way?
Cheers,
Eli


From Ronny.Pfannschmidt at gmx.de Thu Jan 26 01:46:04 2012
From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt)
Date: Thu, 26 Jan 2012 01:46:04 +0100
Subject: [py-dev] xdist and thread-safe resource counting
In-Reply-To: <67CC54752B3F8D43ACD7E3AD455162630B57114387@mrsbmapp20306.ad.water.ca.gov>
References: <67CC54752B3F8D43ACD7E3AD455162630B5714DAA9@mrsbmapp20306.ad.water.ca.gov> <20120120085018.GL26028@merlinux.eu> <67CC54752B3F8D43ACD7E3AD455162630B5714DAB0@mrsbmapp20306.ad.water.ca.gov> <20120120222648.GR26028@merlinux.eu> <67CC54752B3F8D43ACD7E3AD455162630B57114387@mrsbmapp20306.ad.water.ca.gov>
Message-ID: <4F20A24C.1020108@gmx.de>

Hi,

the underlying parallel mechanism is processes communicating via execnet.

-- Ronny

On 01/26/2012 12:30 AM, Ateljevich, Eli wrote:
> Thanks. I really appreciate the help. I have one last question -- is the underlying parallel mechanism threads or mpi or something else? Do I have to worry about things like MPI communicators getting in each other's way?
> [...]

_______________________________________________
py-dev mailing list
py-dev at codespeak.net
http://codespeak.net/mailman/listinfo/py-dev


From eli at water.ca.gov Fri Jan 27 06:16:58 2012
From: eli at water.ca.gov (Ateljevich, Eli)
Date: Thu, 26 Jan 2012 21:16:58 -0800
Subject: [py-dev] Can py.test collect a test inside a shared library?
Message-ID: <67CC54752B3F8D43ACD7E3AD455162630B5714DAE2@mrsbmapp20306.ad.water.ca.gov>

Hi all,

If I have a test compiled from C or f2py that is called test_something and is compiled into mymod.so (using Linux as the example), would it get collected ... if you grant for the sake of conversation that I could have something useful to put in it? I realize I can't use the usual assertion-rewriting magic.

I tried creating an __init__.py file:

from mymod import *

and running py.test on the directory, without success. Maybe it needs to be in test_something.py?
I thought I would check in about whether this has any chance of working before I double my efforts.

Thanks,
Eli


From Ronny.Pfannschmidt at gmx.de Fri Jan 27 08:54:22 2012
From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt)
Date: Fri, 27 Jan 2012 08:54:22 +0100
Subject: [py-dev] Can py.test collect a test inside a shared library?
In-Reply-To: <67CC54752B3F8D43ACD7E3AD455162630B5714DAE2@mrsbmapp20306.ad.water.ca.gov>
References: <67CC54752B3F8D43ACD7E3AD455162630B5714DAE2@mrsbmapp20306.ad.water.ca.gov>
Message-ID: <4F22582E.5040905@gmx.de>

Hi Eli,

by default py.test searches for tests in files matching test_*.py; the test functions themselves are supposed to start with test as well.

It might also make sense to split your C-written test function into a few test util functions. That way you can run the C-written part step by step and do assertions in between (to get a more detailed idea of what's wrong, if something goes wrong).

-- Ronny

On 01/27/2012 06:16 AM, Ateljevich, Eli wrote:
> Hi all,
> If I have a test compiled from C or f2py that is called test_something and is compiled to mymod.so (using Linux as the example), would it get collected?
> [...]


From eli at water.ca.gov Fri Jan 27 16:52:43 2012
From: eli at water.ca.gov (Ateljevich, Eli)
Date: Fri, 27 Jan 2012 07:52:43 -0800
Subject: [py-dev] Can py.test collect a test inside a shared library?
In-Reply-To: <4F22582E.5040905@gmx.de>
References: <67CC54752B3F8D43ACD7E3AD455162630B5714DAE2@mrsbmapp20306.ad.water.ca.gov>, <4F22582E.5040905@gmx.de>
Message-ID: <67CC54752B3F8D43ACD7E3AD455162630B5714DAE8@mrsbmapp20306.ad.water.ca.gov>

Thanks Ronny. I will wrap the C/Fortran in python, name it test_*, and expect it to work. I figured if I went that far it would work, since it amounts to just using py.test the normal way on tests that happen to use C library functions to do some work.

I had hoped that python namespaces were being searched for test_* rather than files. What you suggest about utility functions makes sense, and it is my first choice. There is a subtle difference between the more intuitive thing you are suggesting:
1. Code to be tested is in fortran/C, tests are entirely in python, testing system is py.test
and the precise thing I was asking about:
2. Code is in fortran/C, test is in fortran/C (with a helper macro to make it look easy), and the testing system is py.test

That the latter can even be achieved is probably not obvious, but tools such as f2py have a callback system, and I can write a little preprocessor macro for wrapping the test so that it looks native but utilizes a python callback function for the assert (which will be a somewhat special one having to do with mpi). So it isn't that hard to achieve a unit testing framework that looks like it is written entirely in the native languages and has a fairly uniform way of treating parallel asserts (i.e., it performs the all-reduce so that the processors agree on pass/fail).

But possible does not mean well-motivated.
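[The python side of approach #2 could be sketched as follows. Everything here is illustrative: `run_native_test` stands in for an f2py-wrapped fortran/C routine, and the MPI all-reduce is replaced by a plain bool() so the sketch runs without mpi4py.]

```python
def parallel_assert(ok, message=""):
    """Callback handed to native code for its asserts.

    In a real MPI run this would allreduce `ok` with a logical-and
    (e.g. MPI.COMM_WORLD.allreduce(ok, op=MPI.LAND)) so that all
    ranks agree on pass/fail before anyone raises.
    """
    agreed = bool(ok)                 # single-process stand-in
    if not agreed:
        raise AssertionError(message or "native test failed")

def run_native_test(assert_cb):
    # Stand-in for the compiled routine; f2py lets fortran/C call
    # `assert_cb` back as if it were a native procedure.
    residual = 1e-12
    assert_cb(residual < 1e-8, "residual too large: %g" % residual)

def test_native_residual():
    # The shadow test_* function that py.test actually collects.
    run_native_test(parallel_assert)
```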
I am not sure that approach #2 is best even for my requirements, and it certainly would not be best practice in general. It was more attractive when it seemed test collection would become trivially automatic, rather than having to generate a test_* function to shadow each native test. Maybe I could write a custom collection hook to do that, but I don't want to bother anyone further until I run through my requirements and confirm this convoluted approach is actually better at meeting them.

Thanks,
Eli


From Ronny.Pfannschmidt at gmx.de Sat Jan 28 06:07:35 2012
From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt)
Date: Sat, 28 Jan 2012 06:07:35 +0100
Subject: [py-dev] Can py.test collect a test inside a shared library?
In-Reply-To: <67CC54752B3F8D43ACD7E3AD455162630B5714DAE8@mrsbmapp20306.ad.water.ca.gov>
References: <67CC54752B3F8D43ACD7E3AD455162630B5714DAE2@mrsbmapp20306.ad.water.ca.gov>, <4F22582E.5040905@gmx.de> <67CC54752B3F8D43ACD7E3AD455162630B5714DAE8@mrsbmapp20306.ad.water.ca.gov>
Message-ID: <4F238297.2050703@gmx.de>

Hi Eli,

i simply suggested option 1 because it's much more easy and straightforward to do from my pov.

It is entirely possible to implement a collector that gets to participate in finding test functions inside a shared library (just like oejskit does for javascript files, for example), and if the shared lib acts close enough to a module it may even be possible to reuse the Module collector. But you will lose some of the python integration things if you do that, and i'm not sure it's reasonably simple to bring those to a C/Fortran binding.

-- Ronny

On 01/27/2012 04:52 PM, Ateljevich, Eli wrote:
> Thanks Ronny. I will wrap the C/Fortran in python, name it test_* and expect it to work.
> [...]
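[A collector along the lines Ronny describes could be sketched like this. The hook/class layout follows py.test's documented custom-collection pattern (pytest_collect_file returning a pytest.File whose collect() yields pytest.Item objects); the extension-loading details and all names here are assumptions, not a tested recipe.]

```python
# conftest.py -- sketch of a collector that finds test functions
# inside a compiled extension module.
import pytest

def pytest_collect_file(path, parent):
    # Treat compiled modules named like test files as collectable.
    if path.basename.startswith("test_") and path.ext == ".so":
        return SharedLibFile(path, parent)

class SharedLibFile(pytest.File):
    def collect(self):
        # Load the shared library as a python extension module and
        # yield one item per test_* symbol it exposes.
        import importlib.machinery
        loader = importlib.machinery.ExtensionFileLoader(
            self.fspath.purebasename, str(self.fspath))
        mod = loader.load_module()
        for name in dir(mod):
            if name.startswith("test_"):
                yield SharedLibItem(name, self, getattr(mod, name))

class SharedLibItem(pytest.Item):
    def __init__(self, name, parent, func):
        super(SharedLibItem, self).__init__(name, parent)
        self.func = func

    def runtest(self):
        self.func()   # native code reports failure via its callback
```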