Currently the numpy build system(s) support two ways of building numpy: either by compiling a giant concatenated C file, or by the more conventional route of first compiling each .c file to a .o file and then linking those together. I gather from comments in the source code that the former is the traditional method and the latter is the newer "experimental" approach.

It's easy to break one of these builds without breaking the other (I just did this with the NA branch, and David had to clean up after me), and I don't see what value we really get from having both options -- it just doubles the size of the test matrix without adding anything. Now that the separate build seems to be fully supported, maybe it's time to finish the "experiment" and pick one approach to support going forward?

I guess the arguments for each would be:

- The monolithic build in principle allows for some extra inter-procedural optimization, since the whole core becomes a single translation unit. I won't believe this matters until I see benchmarks, though; numpy doesn't have a lot of tiny inline-able function calls or anything like that.

- The separate build is probably more convenient for developers, since touching one .c file only requires recompiling that file, allowing faster rebuilds.

Numpy builds fast enough for me that I'm not too worried about which approach we use, but it definitely seems worthwhile to reduce the number of configurations we have to support, one way or the other.

-N