[Doc-SIG] Re: Evolution of library documentation

Dinu Gherman gherman@darwin.in-berlin.de
Wed, 14 Mar 2001 14:04:47 +0100


Ka-Ping Yee wrote:
> 
> At the Python conference, a small group of us discussed the possibility
> of merging the external and internal documentation; that is, moving
> the library reference into the module source files.  It would no longer
> be written in TeX so that you wouldn't have to have TeX in order to
> produce documentation.  This would address the duplication problem and
> also keep all of a module's documentation in one place together with
> the module.  To avoid forcing you to page through a huge docstring
> before getting to the source code, we would allow a long docstring to
> go at the end of the file (or maybe collect docstrings from anywhere
> in the file).
> 
> To implement this convention, we wouldn't need to change the core
> because the compiler already throws out string constants if they aren't
> used for anything.  So a big docstring at the end of the file would not
> appear in the .pyc or occupy any memory on import; it would only be
> obtainable from the parse tree, and tools like pydoc could use the
> compiler module to do that.

I know I'm a bit late to jump in on this topic (I guess a few days'
delay can be considered late in a mailing-list thread), but
nevertheless I would like to make one point that I feel has not been
adequately addressed yet.

Following Ping's thoughts, quickly as they move, he is proposing
nothing other than an equivalent of Don Knuth's well-known literate
programming scheme, in Python. Ping, am I right?

I believe the number of literate programming folks who followed their
master into the syntactical challenge of writing code with what was
called the Web system (a combination of TeX with other languages like
C and Pascal) is rather low, precisely because the syntax for mangling
the two together was maybe fine for Knuth but far from easy for most
others. Obviously, Python has something of a promise here...

... but apart from keeping the syntax of docstrings easy to
understand, there is one issue that Web solved and that Python doesn't
(this is where I have to disagree with Ping), at least not right out
of the box.

While it is possible today to write docstrings like this and 
also execute the code below as expected:

  def step1(): print '1!'
  def step2a(): print '2a!'
  def step2b(): print '2b!'

  # the bare strings below act as 'docstrings' for the individual steps
  def foo(bar):
      'step 1'
      step1()
      'step 2'
      if bar > 0:
          'step 2a'
          step2a()
      else:
          'step 2b'
          step2b()

  >>> foo(9)
  1!
  2a!
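
(A small addition of mine to the example: of these four strings only
the first one survives at run time, as foo's docstring; the others are
thrown out by the byte compiler, just as Ping describes above:)

  >>> print foo.__doc__
  step 1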

I think that without some excellent code parsing/analysing support it
will be quite a challenge to implement something that gets hold of all
these 'additional' docstrings and finally produces something close to
the books (or an equivalent hypertext system) that used to be rendered
with Web/TeX in former times (I don't know of any very recent one) or
with Mathematica, the most recent such system, due to appear shortly:

  http://www.wolfram-science.com

Fortunately, at IPC9 tools were announced that analyse Python code
much better and more easily than ever before, like the compiler
module (I think Jeremy gave that presentation). And I'm really
putting some hope into that.
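
Just to make this more concrete, here is a rough sketch of the kind
of tool I have in mind (purely my own guess at a starting point, not
anything Jeremy or Ping proposed, and the compiler.ast details are
from memory): parse the source, walk the resulting tree, and collect
every bare string constant, not just the 'official' docstrings:

  import compiler
  from compiler import ast

  def collect_strings(node, found=None):
      """Collect all bare string constants below 'node', in order."""
      if found is None:
          found = []
      # A bare string statement shows up as a Discard node wrapping a
      # Const node (as far as I recall the compiler.ast layout).
      if isinstance(node, ast.Discard) and \
         isinstance(node.expr, ast.Const) and \
         type(node.expr.value) is type(''):
          found.append(node.expr.value)
      for child in node.getChildNodes():
          collect_strings(child, found)
      return found

  if __name__ == '__main__':
      import sys
      tree = compiler.parse(open(sys.argv[1]).read())
      for s in collect_strings(tree):
          print s

The really hard part, of course, is not collecting the strings but
mapping each one back to the function, branch or loop it is meant to
document, and that is exactly the challenge I mentioned above.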

But finally, we'll probably need to decide how far we'd like to go
down the Web/Mathematica road. Ping, any ideas?

Regards,

Dinu

-- 
Dinu C. Gherman
ReportLab Consultant - http://www.reportlab.com
................................................................
"The only possible values [for quality] are 'excellent' and 'in-
sanely excellent', depending on whether lives are at stake or 
not. Otherwise you don't enjoy your work, you don't work well, 
and the project goes down the drain." 
                    (Kent Beck, "Extreme Programming Explained")