Hi All!

I'm trying to put together a master's thesis and was hoping to do something related to Python's suitability as a language for teaching programming. I was thinking of comparing Python, C++, and Java solutions to common first-year computer science problems in terms of length and readability.

Comparing length is not a problem. "Readability", on the other hand, is not easy to define in any quantifiable way. I was thinking of trying something like "tokens per line".

Any thoughts?

Thanks!

Jeff Elkner
Yorktown High School
Arlington, VA
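One way to operationalize "tokens per line" is with Python's own tokenize module. A sketch only: it measures Python source exclusively (C++ and Java would need their own lexers), and the choice of which token types to skip is my own convention, not an established metric:

import tokenize

# Token types that arguably don't add to a reader's burden: comments,
# blank-line markers, indentation bookkeeping, and the end marker.
SKIP = (tokenize.COMMENT, tokenize.NL, tokenize.INDENT,
        tokenize.DEDENT, tokenize.ENDMARKER)

def tokens_per_line(filename):
    """Average count of substantive tokens per logical line."""
    tokens = 0
    lines = 0
    f = open(filename)
    for tok in tokenize.generate_tokens(f.readline):
        if tok[0] == tokenize.NEWLINE:   # end of a logical line
            lines += 1
        elif tok[0] not in SKIP:
            tokens += 1
    f.close()
    return float(tokens) / max(lines, 1)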
Fun quote:

"We often refer to Perl because it contains the extremes of both good and bad, thus making for poignant examples. We respect its power but also abhor some of its abusability. It can be called the 'Bill Clinton' of programming languages because it has some good ideas and is quite good at some things, but it also has some serious integrity problems."
http://www.geocities.com/tablizer/langopts.htm

Relevant:

"Simplicity: This is an attempt to keep the code simple. A common (but imperfect) measurement is the number of 'tokens' needed. Tokens will be defined variables, objects, methods, keywords, operators, specifiers, etc."
http://www.geocities.com/tablizer/goals.htm

Note: this page is part of a vast anti-OOP site, which is fun to explore if only because the guy is passionate about his anti-OOPiness.
http://www.geocities.com/tablizer/oopbad.htm

(hahah: http://www.geocities.com/tablizer/xbasefan.htm -- I'm an Xbase fan too, although I use an OOPized version of it.)

Also relevant: http://www.cise.ufl.edu/~jnw/COP5555/Lectures/02.html

I like the approach of this latter page, which doesn't make "readability" an atomic measure, but something higher level, with lots of component parts (code size might fit into it).

Readability seems related to the question "What makes a language a VHLL?" Part of the answer: built-in data structures that already do a lot (e.g. Python's dictionary).

Final point: one reason I'd give for why Python is a good first language and/or teaching language is that it has so much in common with many other languages. In other words, I'd emphasize the similarities over the differences. Python is very cosmopolitan and in this sense provides a broad spectrum of analogies, templates, and experiences which the student programmer can draw against when learning whatever next language(s) (i.e. we should never assume that "first language" means "last language").

Kirby
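Kirby's point about built-in data structures doing a lot is easy to demonstrate with the classic word-frequency problem (a toy example, not from the thread):

def word_freq(text):
    """Count word occurrences using nothing but a dictionary."""
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# e.g. word_freq("the cat and the hat") -> {'the': 2, 'cat': 1, 'and': 1, 'hat': 1}

In C++ or Java (circa 2001), the same job needs explicit container declarations and iterator boilerplate; here the dictionary carries most of the program.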
Thanks a 10**6, Kirby!

I had already planned to include Python's commonality with other languages somewhere in the paper, and I can talk about it from personal experience. I'm taking an AI course this semester and we are learning Lisp. I was struck by how many Lisp concepts I was already familiar with thanks to my exposure to Python. My classmates, whose experience was generally limited to C, C++, and Java, did not have this advantage.

I've printed out a few of the references and will begin looking at them tomorrow. The http://www.cise.ufl.edu/~jnw/COP5555/Lectures/02.html site looks particularly helpful.

Thanks again!

Jeff
PYTHON

I love Python for its readability on two levels:

1. consistent syntax and mostly very clear logical flow top to bottom
2. friendly visual syntax, mainly derived from the obligatory but liberating white-space indentation

Scrolling down and scanning large chunks of Python code, I detect a well-tempered rhythm. The blocks, indentations, naming and repetition reveal a similar construct pattern across myriad examples. This is a very 'un-scientific' comment I realize, but one perhaps worthwhile investigating.

I love Python's writability for two main reasons also:

a. executable pseudo-code. To me Python is very sculptural. You start by naming and imagining things and relationships and then proceed to sketch them in. With time and skill you just keep going, implementing details and functionality. Play and exploration are encouraged, as is throwing things away when one has a better insight. I think this stems from the fact that because everything in Python is an object, one must name it, and doing so makes it exist. You might argue that is true for all languages, and I would agree, but Python seems very direct in the workflow from imagining, to modeling, to further implementation, and naming and objects [tokens??] are present in the same manner every step of the way.

b. dictionaries and named arguments. Yes, thank you Guido! I argue named arguments are a major reason why Python is readable and makes a great learning language. You take your semantic token map with you and share it liberally with all who come after. Dictionaries are the engine behind this. What you call something or assign a value/meaning to is more important than where it comes in the sequence, and we people should not be persecuted by having to interpret lists of unidentified symbols. (A small sketch of this point follows my sign-off below.)

Once you know how to 'read' a little Python, you know how to read most Python. Ditto writing. There is a clear design pattern evident here which, apart from a few characterful idiosyncrasies, works very well. This I think makes it especially suitable for teaching programming.

And I agree with Kirby, it is a very cosmopolitan language. I would stress also that any language for people to learn is only as good as its community. Even if a small tribe, is it a living, growing language? I believe the mood and quality of its culture is especially crucial for learning computer programming.

[From age 9, I studied Latin for 5 years, but never was any teacher able to articulate successfully why we should study Latin, despite the daily question from some prankster in class. I am glad I studied Latin, but wish to god that at the time there had been a more cosmopolitan context provided for our study: connections to the past, present and future, common concepts, omnipresent structure, etc. At 12 years old I doubt I would have responded to this vocabulary, but certainly to the meaning behind it.]

As you may have noted, I have been having lots of fun exploring Rebol. It has provided some valuable fresh perspectives.

REBOL notes

Rebol is interesting in parallel to Python because readability is also a prime design goal.
http://www.rebol.org

Rebol puts great emphasis on minimal simplicity of syntax and a readability from left to right which hopes to attain a pragmatic but simplified feeling, inspired by natural language such as English:
send somebody something

send edu-sig@python.org %notes_on_python.txt
send edu-sig@python.org read http://somesite.com/notes_on_python.html
English and almost all human spoken languages are full of oddness, some more than others. But it is undeniable that a verb like 'send' is a pretty universal idea, regardless of protocol, syntax, etc.

Rebol's author Carl Sassenrath was formerly a bigtime Smalltalk fan and credits a mathematical theory of 'Denotational Semantics' with playing a role in the design philosophy underpinning Rebol. I know almost nothing about this yet myself, but following that thread, you might find some research, theory and metrics helpful to your thesis.

Denotational Semantics: understand the basic idea of calculating the meaning of a program as some element of a semantic domain.

Links to papers I found:

http://ase.isu.edu/ase01_07/ase01_07/bookcase/ref_sh/foldoc/9/29.htm
http://www.acm.org/pubs/citations/journals/toplas/1992-14-1/p107-gudeman/
http://www.cogs.susx.ac.uk/lab/nlp/datr/datrnode17.html
http://www.cis.upenn.edu/~bcpierce/types/archives/1991/msg00064.html
http://ptolemy.eecs.berkeley.edu/papers/97/dataflow/
http://www.cs.unc.edu/~stotts/COMP204/assn/densem.html
http://www.cogs.susx.ac.uk/local/teach/spl/list/node3.html

[most are painfully academic to me, ymmv ;-)]

<quote http://www.cs.bham.ac.uk/~mhe/issac/node17.html>
Denotational semantics: On the other hand, from the point of view of the programmer, who, in the case of real number computation, is typically a mathematician, a physicist or an engineer, representation details are mostly irrelevant. Whereas the operational semantics assigns computational mechanisms to program constructs, the denotational semantics assigns mathematical entities to them. These entities are real numbers, functions, sequences etc.
</quote>

I have read Python mythology interviews with Guido which refer to ABC and lessons learned from that. Hopefully there is more in the core Python design history which could help you too.

hth. Good luck!
- Jason
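A tiny illustration of Jason's named-arguments point (the function and its names are invented for the sketch): the call site documents itself, and a dictionary can carry the same 'semantic token map' around:

def enroll(name, course, year=1, audit=False):
    """The keyword names make the calls below readable without looking here."""
    return (name, course, year, audit)

enroll("Ada", "CS101", year=2, audit=True)   # reads almost as prose

options = {"year": 2, "audit": True}         # the map travels as a dict
enroll("Ada", "CS101", **options)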
http://www.idyll.org/~n8gray/code/index.html

<quote>
Announcement

This module provides interactive 2-D plotting capabilities via the Grace package. Why do we need yet another plotting package for Python? Simple. None of the packages out there (that I've tried) currently offer all of the following desirable properties:

- Designed for use at the interactive prompt
- Provide UI access to plot properties and allow changes on-the-fly
- Tight integration with Numeric Python

A typical gracePlot session goes something like this:
from gracePlot import gracePlot
p = gracePlot()                  # A grace session opens
p.plot( [1,2,3,4,5], [10, 4, 2, 4, 10], [1, 0.7, 0.5, 1, 2],
...     symbols=1 )              # A plot with errorbars & symbols
p.title('Funding: Ministry of Silly Walks')
p.ylabel('Funding (Pounds\S10\N)')
p.multi(2,1)                     # Multiple plots: 2 rows, 1 column
p.xlimit(0, 6)                   # Set limits of x-axis
p.focus(1,0)                     # Set current graph to row 1, column 0
p.histoPlot( [7, 15, 18, 20, 21], x_min=1,
...          dy=[2, 3.5, 4.6, 7.2, 8.8])   # A histogram w/errorbars
p.xlabel('Silliness Index')
p.ylabel('Applications/yr')
p.xlimit(0, 6)                   # Set limits of x-axis
</quote>
<FUN>

A few moments of math thru programming, just for the fun of it...

<SETUP>

Definition: A 'derangement' is a rearrangement of n things such that none is in its original position. Starting with the 3 letters abc, the two derangements would be bca and cab. acb would be a permutation certainly, but not a derangement, since the first letter (a) didn't move. n = 3 in this case (3 things).

As you would expect, as n goes higher, the number of possible derangements increases dramatically. Euler found a way to get the total number of possible derangements of n as a function of n (a breakthrough!). The formula is [1]:

   drs(n) = n! * [1/0! - 1/1! + 1/2! - 1/3! + ... + ((-1)^n)/n!]

In other words, we have a series of fractions of the form 1/t! with alternating sign, where t = 0..n. Multiply by the factorial of n and you're done. Recall 0! = 1.

Since 2.2a4 doesn't require worrying about overflow, we can write a simpler factorial program:
import operator

def fact(n):
    """ Return n! """
    if n==0 or n==1:
        return 1
    return reduce(operator.mul, range(1,n+1))
As for the fractions, many have written rational numbers modules, including me. I include a Fraction class in my mathobjects package (along with matrix and polynomial objects)[2]:
from mathobjects import Fraction
F = Fraction
Fraction objects know how to perform basic operations, like addition, taking care of the common denominator issue, reducing to lowest terms, other stuff like that, behind the scenes:
>>> F(1,2) + F(2,3) + F(3,7)   # 1/2 + 2/3 + 3/7
(67/42)
<SIDEBAR>
In my Math Curriculum of Tomorrow, one of the first things students get to do is write a fraction class, as it implements basic math ideas, plus is a great intro to operator overloading. And since the int vs. long dichotomy will be ever less pronounced as we move towards Python 3.0, the coding of such will just get easier.
</SIDEBAR>

</SETUP>

With these preliminaries out of the way, it's time for the main event:
def drs(n):
    """ Return the number of derangements of n things """
    f = 0      # f: running total of fractions
    flip = 0   # sign changer
    for t in range(n+1):   # t = 0 thru n
        if flip:
            f -= F(1,fact(t))
            flip = 0
        else:
            f += F(1,fact(t))
            flip = 1
    return fact(n) * f
>>> drs(1)
0
>>> drs(2)
1
>>> drs(3)
2
>>> drs(4)
9
>>> drs(5)
44
>>> drs(6)
265
>>> drs(12)
176214841
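As a cross-check (not in the original post), a brute-force count over all permutations agrees with Euler's formula for small n. Note that itertools.permutations is a later addition to the standard library:

from itertools import permutations

def drs_brute(n):
    """Count permutations of 0..n-1 with no fixed point, by inspection."""
    return sum(1 for p in permutations(range(n))
               if all(p[i] != i for i in range(n)))

# drs_brute(4) -> 9, drs_brute(6) -> 265, matching drs() above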
<VARIATIONS>

If you don't like the 'flip' mechanism for changing sign, you might test for whether t is even or odd: t/2 has no remainder if t is even, i.e. t%2==0. So a variation of the above would be:
def drs(n):
    """ Return the number of derangements of n things """
    f = 0   # f: running total of fractions
    for t in range(n+1):   # t = 0 thru n
        if t%2==0:
            f += F(1,fact(t))
        else:
            f -= F(1,fact(t))
    return fact(n) * f
Or, if you want to get even more compressed...
from operator import add, sub

def drs(n):
    """ Return the number of derangements of n things """
    f = 0
    for t in range(n+1):
        f = apply([add,sub][t%2], (f, F(1,fact(t))))
    return fact(n) * f
(You could also improve readability by changing the name f to something else. Having f, F and fact all in play is a bit heavy on the f's -- but I'll just leave it be at this point.)

</VARIATIONS>

<NOTES>
[1] William Dunham, 'Euler, The Master of Us All' (MAA, 1999), p. 167
[2] mathobjects package at Oregon Tips: http://www.oregon-tips.com/ under Curricula | Python Programming | Source code
</NOTES>

</FUN>
[Kirby Urner]
... Euler found a way to get the total number of possible derangements of n as a function of n (a breakthrough!). The formula is [1]:
drs(n) = n! * [1/0! - 1/1! + 1/2! - 1/3! + ... + ((-1)^n)/n!]
Note that the sum in brackets consists of the first n+1 terms of the Taylor expansion of e**-1 around 0, so is a good approximation to 1/e (e being 2.71828..., of course). In fact, it's such a good approximation that

    drs(n) == round(n!/e)

where round(x) rounds x to the nearest integer, for all n >= 1.

An old probability teaser asks: "N gentlemen check in their hats at the country club. If the hats are given back to them at random, what's the probability that no gentleman gets back his own hat?". Because of the equality above, the probability approaches 1/e ~= 37% as N increases, and is already within spitting distance of 37% for N==4.

If the students are advanced enough, it's an interesting exercise to find the smallest n for which the limitations of floating-point arithmetic cause round(n!/e) to return a different result than your exact computation using rationals.
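The hat-check teaser also makes a nice simulation exercise (a sketch, not from the thread; names invented):

import random

def hat_check(n, trials=100000):
    """Estimate the probability that no gentleman gets his own hat back."""
    derangements = 0
    for _ in range(trials):
        hats = list(range(n))
        random.shuffle(hats)            # hand the hats back at random
        if all(hats[i] != i for i in range(n)):
            derangements += 1
    return derangements / float(trials)

# hat_check(4) comes out near drs(4)/4! = 9/24 = 0.375 -- already close to 1/e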
...
def drs(n):
    """ Return the number of derangements of n things """
    f = 0      # f: running total of fractions
    flip = 0   # sign changer
    for t in range(n+1):   # t = 0 thru n
        if flip:
            f -= F(1,fact(t))
            flip = 0
        else:
            f += F(1,fact(t))
            flip = 1
    return fact(n) * f
And later you flipped in a different way, by testing the parity of t. A common idiom in numeric programs is to multiply by 1 or -1 instead, like so:

    f = 0
    sign = 1
    for t in range(n+1):
        f += sign * F(1, fact(t))
        sign = -sign
    return fact(n) * f

As a *programming* issue, it's thus interesting to consider whether F.__mul__ should or shouldn't try to recognize the special cases of multiplication by 1 and -1. There isn't a "one size fits all" answer, so it's not for students with low tolerance for uncertainty <wink>.
In fact, it's such a good approximation that
drs(n) == round(n!/e)
where round(x) rounds x to the nearest integer, for all n >= 1.
A very excellent point.
If the students are advanced enough, it's an interesting exercise to find the smallest n for which the limitations of floating-point arithmetic cause round(n!/e) to return a different result than your exact computation using rationals.
Not following this.
def altdrs(n):
    return round(fact(n)/e)
returns the same answer as drs(n) right down to n=1.

A related question (that I understand) would be: how far do we need to push the series in brackets to get 1/e, within the limitations of floating-point arithmetic?
from math import e

def e_neg1():
    f = 0
    sign = 1
    t = 0
    while float(f) <> (1/e):
        f += sign * F(1, fact(t))
        sign = -sign
        t += 1
    return (t-1, f)
>>> e_neg1()
(18, (138547156531409/376610217984000))
I get 18 terms, with the accompanying fraction.

>>> 138547156531409/376610217984000   # (import future division)
0.36787944117144233
>>> 1/e
0.36787944117144233

This is related to the thread wherein we get phi to the limits of floating point, using the continued fraction:

    phi = 1 +       1
              --------------
              1 +     1
                  --------
                  1 +  1
                      ----
                      1 + ...

It's related because evaluating a continued fraction involves recursive use of the Fraction object F:
def evalcf(qs):
    """ Evaluate continued fraction with partial quotients qs """
    if len(qs)==1:
        return F(qs[0],1)
    return F(qs[0],1) + F(1,evalcf(qs[1:]))
>>> evalcf([1,1,1,1,1])
(8/5)
>>> evalcf([1,1,1,1,1,1,1,1,1])
(55/34)
>>> float(evalcf([1,1,1,1,1,1,1,1,1]))
1.6176470588235294
from math import sqrt
phi = (1 + sqrt(5))/2

def getphi():
    qs = []
    v = 0
    while float(v) <> phi:
        qs.append(1)
        v = evalcf(qs)
    return (len(qs), v)
>>> getphi()
(40, (165580141/102334155))
And later you flipped in a different way, by testing the parity of t. A common idiom in numeric programs is to multiply by 1 or -1 instead, like so:
Slaps forehead. Yes, of course! This is what you see most often.
f = 0
sign = 1
for t in range(n+1):
    f += sign * F(1, fact(t))
    sign = -sign
return fact(n) * f
As a *programming* issue, it's thus interesting to consider whether F.__mul__ should or shouldn't try to recognize the special cases of multiplication by 1 and -1. There isn't a "one size fits all" answer, so it's not for students with low tolerance for uncertainty <wink>.
Not following again. Like, of course it should, right?
>>> f = F(1,2)
>>> f
(1/2)
>>> f*1
(1/2)
>>> f*-1
(-1/2)
What other behavior would I want?

Kirby
[Tim]
If the students are advanced enough, it's an interesting exercise to find the smallest n for which the limitations of floating-point arithmetic cause round(n!/e) to return a different result than your exact computation using rationals.
[Kirby Urner]
Not following this.
def altdrs(n):
    return round(fact(n)/e)
returns the same answer as drs(n) right down to n=1.
Then 1 certainly isn't the smallest n for which equality fails <wink>. Think about larger n, not smaller. After all, "e" there has at best 53 bits of precision, and for large enough n "fact(n)" can't even be represented approximately as a float (in 2.2a4 you'll get an OverflowError then; in earlier Pythons *probably* a platform floating Infinity).
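A quick demonstration of that last point (a sketch in present-day Python, where the OverflowError is the documented behavior; math.factorial is a later convenience):

import math

big = math.factorial(200)   # exact: long integers have no size limit
try:
    float(big)              # but 200! ~ 7.9e374 exceeds what a float can hold
except OverflowError as err:
    print(err)              # e.g. "int too large to convert to float"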
A related question (that I understand) would be how far do we need to push the series in brackets to get 1/e within the limitations of floating point arithmetic:
Yes, that is related.
... while float(v) <> phi:
Note that loops like this may never terminate: there's really no a priori guarantee that any approximation will *exactly* equal the value computed by some other approximation (and math.sqrt() and math.e are approximations too).
f = 0
sign = 1
for t in range(n+1):
    f += sign * F(1, fact(t))
    sign = -sign
return fact(n) * f
As a *programming* issue, it's thus interesting to consider whether F.__mul__ should or shouldn't try to recognize the special cases of multiplication by 1 and -1. There isn't a "one size fits all" answer, so it's not for students with low tolerance for uncertainty <wink>.
Not following again. Like, of course it should, right?
>>> f = F(1,2)
>>> f
(1/2)
>>> f*1
(1/2)
>>> f*-1
(-1/2)
What other behavior would I want?
It's a pragmatic ("programming") issue, not a semantic issue: the purpose of special-casing 1 and -1 within F.__mul__ would not be to deliver a different answer, but to return the correct answer *faster*. Even just making a copy of an unbounded rational number can take an unbounded amount of time (depending on how long it takes to copy over all the data), so special-casing multiplication by 1 to simply return "self" can save an unbounded amount of runtime.

On the other hand, testing for 1 also takes time, and that time is wasted whenever the outcome is "nope -- it's not 1". So deciding whether it's a good idea *overall* is a balancing act. For most programs using floats or (machine) ints, it turns out it's usually an overall speed loss to special-case potentially special values. The tradeoffs are much muddier for unbounded numeric types, while the existence of idioms like "multiply by 1 or -1 on alternate iterations" makes it an important pragmatic issue ("who in their right mind would bother multiplying by one?" turns out to be a poor argument <wink>).
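A minimal sketch of what such special-casing could look like -- an illustration only, not the actual mathobjects.Fraction under discussion, and it assumes math.gcd from recent Pythons:

from math import gcd

class Frac:
    """Tiny rational type, just enough to show the 1 and -1 fast paths."""
    def __init__(self, num, den=1):
        g = gcd(num, den)
        self.num, self.den = num // g, den // g
    def __mul__(self, other):
        if other == 1:                  # fast path: no copy, no gcd work
            return self
        if other == -1:                 # fast path: just flip the sign
            return Frac(-self.num, self.den)
        if isinstance(other, int):
            other = Frac(other)
        return Frac(self.num * other.num, self.den * other.den)
    def __repr__(self):
        return "(%d/%d)" % (self.num, self.den)

Whether the two "if" tests pay for themselves depends, as Tim says, on how often the operands actually are 1 or -1.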
At 07:14 PM 10/1/2001 -0400, Tim Peters wrote:
[Tim]
If the students are advanced enough, it's an interesting exercise to find the smallest n for which the limitations of floating-point arithmetic cause round(n!/e) to return a different result than your exact computation using rationals.
[Kirby Urner]
Not following this.
def altdrs(n):
    return round(fact(n)/e)
returns the same answer as drs(n) right down to n=1.
Then 1 certainly isn't the smallest n for which equality fails <wink>.
OK, I get it now:
def notsame():
    n = 1
    while altdrs(n)==drs(n):
        n = n+1
    return n
>>> notsame()
19
>>> drs(19)
44750731559645106
>>> altdrs(19)
44750731559645112.0
I.e. at n=18, we're still in the money:
>>> drs(18)
2355301661033953
>>> altdrs(18)
2355301661033953.0
... while float(v) <> phi:
Note that loops like this may never terminate:
Very true. It would be better to have a loop that detects when there's no difference between successive iterations and cuts it off there -- this assumes the series converges in a nice way. Or you could test abs(target-current) < epsilon, but then you have to choose epsilon with some care.
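One way to code that "no difference between successive iterations" cutoff (a sketch, assuming Python 3's true division; it relies on the terms eventually dropping below the float's precision):

import math

def sum_until_stable(term):
    """Add term(0) + term(1) + ... until the float total stops changing."""
    total = 0.0
    t = 0
    while True:
        new = total + term(t)
        if new == total:        # the term fell below the total's precision
            return total, t
        total = new
        t += 1

# sum_until_stable(lambda t: (-1)**t / math.factorial(t))
# -> (0.36787944117144233, 19): terms 0..18 summed, consistent with
#    the 18 reported by e_neg1() above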
It's a pragmatic ("programming") issue, not a semantic issue: the purpose of special-casing 1 and -1 within F.__mul__ would not be to deliver a different answer, but to return the correct answer *faster*.
OK, I understand. My code doesn't do any case checking for 1, -1, but does a lot of instance checking, as my fraction objects may have matrices or polynomials to deal with. (Which doesn't mean I've implemented rational expressions -- it just means you can multiply a polynomial by (1/2) and have the coefficients be fractions:

>>> p
x**6 + 2*x**5 + 3*x**4 + 4*x**3 + 4*x**2 + 2
>>> p*F(1,2)
(1/2)*x**6 + x**5 + (3/2)*x**4 + 2*x**3 + 2*x**2 - 0*x + 1
                                                  ^^^
Oops, I see a bug I need to deal with.)

Kirby
Here's somethin' I wrote to compute e. Uses 2.2's ability to go with long integers on the fly, no explicit conversion needed:

import math

def gete(d):
    "Return the value of e with d digits"
    i = n = 1
    # approx number of terms needed:
    terms = int(math.ceil(d * 4/math.log10(d)))
    e = val = 10**terms
    while i < terms:
        n *= i
        e += val/n
        i += 1
    val = str(e)
    return val[0]+"."+val[1:d]

Usage:
numb.gete(600) '2.718281828459045235360287471352662497757247093699959574966967627724 076630353547594571382178525166427427466391932003059921817413596629043 572900334295260595630738132328627943490763233829880753195251019011573 834187930702154089149934884167509244761460668082264800168477411853742 345442437107539077744992069551702761838606261331384583000752044933826 560297606737113200709328709127443747047230696977209310141692836819025 515108657463772111252389784425056953696770785449969967946864454905987 931636889230098793127736178215424999229576351482208269895193668033182 52886939849646510582093923982948879332036250944311'
I checked it against a web page going for thousands more digits. So far so good. :-D

The approximation re terms needed, given a request for d digits, could probably use some fine tuning, no doubt.

The approach uses the fact that e = 1/0! + 1/1! + 1/2! + 1/3! + ..., but with every term multiplied by 100000000000000000000000000000000... or whatever, so that computations all happen in positive long integer world. The final step is simply to take the digits in string form, and stick in a decimal point after the 2.

Kirby
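The scaled-integer idea in miniature (a toy check, separate from gete above): pick a scale of 10**20, sum scale//n! term by term, and read the digits of e straight out of the integer:

scale = 10 ** 20
e_scaled = scale             # the 1/0! term, times the scale
n = 1
for i in range(1, 25):       # 24 more terms is plenty at this scale
    n *= i                   # n is now i!
    e_scaled += scale // n   # each flooring loses less than 1
s = str(e_scaled)
print(s[0] + "." + s[1:15])  # 2.71828182845904

The 24 floor operations lose less than 24 from a 21-digit integer, so every digit shown is trustworthy.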
The approximation re terms needed, given request for d digits, could probably use some fine tuning no doubt.
Here's another stab at generating e to more-than-we-usually-get decimals.

There's this algebraic transformation, which I haven't quite figured out yet, that lets you rewrite

    1 + 1/1! + 1/2! + 1/3! + 1/4! + 1/5!

as

    1+(1+(1+(1+(1+1/5)/4)/3)/2)/1

This translates into a very simple algorithm for e (see below), used by Jean-Michel Ferrard in a program for the TI (calculator). He manages to get 600 significant digits out of the thing (a TI-92 or 89). But this algorithm requires knowing in advance how many terms you need in order to produce n significant digits. Jean-Michel knows 294 is the magic number for 600 digits, but the best approximation I was able to find, for the number of terms, was here:

http://numbers.computation.free.fr/Constants/E/e.html

where is written: "To have d decimal digits of e, the first K ~ d log(10)/log(d) terms of the series are sufficient" (Section 7). But that's approximate, and it doesn't work for a lot of the values of d that I try. My earlier approach was to boost it higher -- but how much higher would I need to go? I went too far.

If we're really not sure in advance how many terms to compute, how do we stop from running through too many? I thought a good way to apply the brakes would be to check the least significant bits of the long integer being computed, and to say "if these bits don't change through 10 iterations, then we're done". I also go to a few digits beyond what the user requests, for good measure, but only display the number of decimals (= terms) actually asked for. Here's a program implementing this strategy, along with a different form of the algorithm derived in Section 4 of the above web resource:

def gete(terms):
    e = 10 ** (terms + 5)
    sig = u = n = 1
    cnt = 0
    while cnt < 10:
        e = (e * (u+1))//u
        u = (n+1)*(u+1)
        n += 1
        leastsig = e & 1111
        if leastsig == sig:
            cnt += 1
        else:
            sig = leastsig
            cnt = 0
    se = str(e)
    return se[0]+"."+se[1:terms+1]

I asked for the first 6000 digits of e, and was able to compare them with an online resource to ascertain my result indeed ended with the right digits: ...6110250517100

It goes through about 2090 iterations to reach the terminal condition, whereas the log formula would suggest 1588. Takes about 50 seconds to complete on my 1.2 GHz AMD w/ 640 MB RAM (ymmv).

However, now that I know I should use about 2090 iterations, if I use the algorithm in Jean-Michel Ferrard's paper (it was emailed to me -- not on the web AFAIK), then I get 6000 digits in a blindingly short time, under 1 second! Sheesh, that's a *huge* difference. Here's that algorithm:

def gete2():
    "Return the value of e with 6000 digits"
    u = x = 10**6000
    for k in range(2090,0,-1):
        u = x + u//k
    return str(u)

Clearly, knowing the number of terms in advance, and using the above (which implements the algebraic transformation I mention up top), is the superior method, by a long shot. I need to get a better grip on this "knowing how many terms in advance" issue.

Of course none of these methods are what the professionals are using to compute e to millions or billions of digits. That's a whole other ballgame, using 'binary splitting' and the like (involves Fourier Transforms -- 'Who is Fourier?'). Still, this game is fun enough, if not ground-breaking (and it's educational).

Kirby
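Two footnotes on the above, for whatever they're worth. First, the "mystery" transformation is Horner-style factoring: since 1/k! = (1/k) * 1/(k-1)!, each term folds into the previous one, which is exactly what gete2's u = x + u//k loop exploits. Second, on knowing the number of terms in advance: the tail of the series after K terms is below 1/K!, so it suffices to find the smallest K with K! > 10**d. A sketch of that bound (a back-of-envelope heuristic, not Ferrard's analysis; assumes a Python with math.lgamma):

from math import lgamma, log

def terms_needed(d):
    """Smallest K with log10(K!) > d, i.e. the 1/K! tail is below digit d."""
    K = 1
    while lgamma(K + 1) / log(10) <= d:   # lgamma(K+1) == log(K!)
        K += 1
    return K

# terms_needed(600) -> 295, right next to Jean-Michel's magic 294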
You might find SciGraphica interesting (http://scigraphica.sourceforge.net). The Python interface is still under development, but eventually it will allow you to create interactive 2D/3D/polar plots. In any case, nice job ;-)

Saludos,
<ADRIAN>
participants (5)

- Adrian Feiguin
- Jason Cunliffe
- Jeffrey Elkner
- Kirby Urner
- Tim Peters