Hi,
I'd like to share a use pattern for contracts that might have gotten lost in the discussion and which I personally have grown to like more and more. I'm not making any claims; this use pattern works for our team and I can't judge how much of a benefit it would be to others.
Imagine there are two developers, Alice working on a package A, and Betty working on a package B. Package A depends on package B.
Betty tested her package B with some test data D_B.
Alice tests her package A with some test data D_A. Now assume Betty did not write any contracts for her package B. When Alice tests her package, she is in fact performing an integration test. While she controls the inputs that A passes to B, she can only observe the results coming out of B; she cannot tell whether they are correct by coincidence or because B did its job correctly. Let's denote by D'_B the data that B receives from A, derived from Alice's original test data D_A, during her integration testing.
How can she test that package B gives the correct results on D'_B? She needs to record the data manually somehow (by dynamically mocking package B and intercepting what gets passed from A to B?). She might fetch package B's test suite, copy/paste the test cases and append D'_B. Or she could make a pull request and contribute the extra test data directly to package B. Either way, she needs to understand how Betty's unit tests work, see how D'_B fits in there and figure out what needs to be mocked.
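To make the interception step concrete, here is a minimal Python sketch (all package and function names are hypothetical stand-ins, not real libraries) of how Alice could record D'_B by wrapping B's entry point with unittest.mock:

```python
from unittest import mock

# Hypothetical stand-ins for Betty's package B and Alice's package A.
def b_sort(items):
    """'Package B': return a sorted copy of the items."""
    return sorted(items)

def a_median(items):
    """'Package A': use B to compute an (upper) median."""
    ordered = b_sort(items)
    return ordered[len(ordered) // 2]

recorded = []              # will collect D'_B: the data that actually reaches B
_original_b_sort = b_sort  # keep a handle on the real B

def _recording_b_sort(items):
    recorded.append(list(items))       # intercept and record the input...
    return _original_b_sort(items)     # ...then delegate to the real B

# Patch A's reference to B so every call during Alice's test is recorded.
with mock.patch(f"{__name__}.b_sort", new=_recording_b_sort):
    result = a_median([3, 1, 2])

# recorded now holds the inputs B saw -- ready to be copied into B's test suite.
```

Even this tiny sketch shows the friction: Alice must know which of B's functions to patch and how her data flows into them, which is exactly the knowledge she usually lacks.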
All in all, not a trivial task if Alice is not familiar with package B, and even less so if Alice and Betty don't work in the same organization. Most of the time, Alice would not bother to test the dependencies on her test data D_A. She would assume that her dependencies work and just test what comes out of them. If the results make sense, she would tick the task off her to-do list and move on to the next one.
Let's assume now that Betty wrote some contracts in her code. When Alice runs the integration test of her package A, the contracts of B are automatically verified on D'_B. While the contracts might not cover all the cases that were covered in Betty's unit tests, they still cover some of them. Alice can be a bit more confident that at least something was checked on D'_B. Without the contracts, she would have checked nothing on D'_B in most of her everyday programming.
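The mechanism can be sketched in a few lines of Python; the decorator and all package/function names below are hypothetical, not the API of any particular contract library:

```python
import functools

def require(predicate, message):
    """Minimal precondition decorator (a stand-in for a real contract library)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(xs):
            assert predicate(xs), message  # the contract check
            return func(xs)
        return wrapper
    return decorator

# --- Betty's package B (hypothetical) ---
@require(lambda xs: all(x >= 0 for x in xs), "b_mean expects non-negative values")
def b_mean(xs):
    return sum(xs) / len(xs)

# --- Alice's package A (hypothetical) ---
def a_normalize(xs):
    mean = b_mean(xs)  # B's contract is verified on D'_B right here
    return [x / mean for x in xs]

# Alice tests A on her own data D_A; B's precondition is checked for free.
assert a_normalize([1.0, 3.0]) == [0.5, 1.5]
```

If Alice's data violated the precondition (say, a negative value slipped in), the failure would surface at B's boundary as a contract violation instead of silently producing a plausible-looking result.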
You can consider writing contracts as a matter of economy in this story. Betty might not need contracts for maintaining her package B -- she can read her code, she can extend her test cases. However, you can see contracts as a service to the package users, Alice in this case. Betty helps Alice have some integration tests free of charge (free for Alice; Betty of course pays the overhead of writing and maintaining the contracts). Alice does not need to understand how B can be tested, nor does she need to manually record the data that gets passed to B. She merely runs her test code, and the checker library tests B on D'_B automatically.
The utility of this service tends to compound as the dependency tree grows. Imagine we also had Carol with a package C, and the dependencies A -> B -> C. When Carol writes contracts, she does a service not only to her direct users (Betty) but also to the users of B (Alice). I don't see how Alice could practically cover the chain A -> B -> C and test C on D'_C (i.e. test C on the data originating from D_A) without the contracts, unless she really takes her time and gets familiar with the dependencies of all her immediate dependencies.
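The transitive case can be sketched the same way (again with purely hypothetical names): Alice's test of A exercises Carol's contract in C without Alice ever looking at C.

```python
import math

# --- Carol's package C (hypothetical) ---
def c_sqrt(x):
    assert x >= 0.0, "c_sqrt expects a non-negative argument"  # Carol's contract
    return math.sqrt(x)

# --- Betty's package B (hypothetical), depends on C ---
def b_distance(dx, dy):
    return c_sqrt(dx * dx + dy * dy)

# --- Alice's package A (hypothetical), depends on B ---
def a_report(points):
    # C's contract gets verified on D'_C as a side effect of this call chain.
    return [b_distance(dx, dy) for dx, dy in points]

# Alice's test on D_A; the data flows A -> B -> C and C's contract fires on it.
assert a_report([(3.0, 4.0)]) == [5.0]
```

Alice wrote a single assertion about A, yet the data she chose was also checked two levels down the dependency chain.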
We found this pattern helpful in our team, especially during refactorings, where the contracts provide an additional safety net. We don't have time to record and add tests of B for D'_B, and even less so of C for D'_C. The contracts thus work as a good compromise for us (marginal overhead, but better documentation and "free" integration tests rather than none).