topmind at technologist.com
Fri Jun 1 18:37:28 EDT 2001
> In comp.object Topmind <topmind at technologist.com> wrote:
> >> >> Another good example is for
> >> >> instance x, y coordinates you want to pass around, without going to the
> >> >> trouble of dealing with anything more complicated.
> >> > What keeps a dictionary from satisfying this role?
> >> They're too heavy, again. :)
> > Please clarify "heavy".
> I think I did elsewhere. Too heavy in syntax,
The difference is minor between arrays and
tables for a well-tuned table API.
I will take slightly more up-front syntax in exchange
for longer-term adaptability any day.
I already gave an example of where I was burned by
dictionaries about 5 or so messages ago.
> However, as I said before, it's mostly a syntax issue to me
> [not just speed]. There is
> a case to be made for a kind of tuple-like dictionary, though the
> advantages of that are minimal in my opinion; for anything more complex
> I'd be inclined to use a full-fledged object anyway.
But an object still does not give you a full-fledged collection.
I can view a table while the program is running, for example.
All an object gives you beyond a basic array is more columns.
> >> > I think that was my case, more or less. Touples, dictionaries, and
> >> > classes have too much *overlap* in Python. It just seems to me that
> >> > they could have factored the 3 into *one* thing. It keeps the
> >> > language cleaner and the learning curve shorter that way.
> >> I disagree; I think a distinction like this can help sometimes. Look at
> >> Perl and their scalars, which merges things like integers, floats and
> >> strings into 'one thing'.
> > I like that approach. It makes the code leaner and cleaner IMO.
> > Less casting, converting, and declaration clutter. It allows
> > you to look at raw business logic instead of diddling with
> > conversions, casting, and bloated declarations.
> You're confusing things here; you're confusing the effect of static
> type checking (declarations and casting) with that of having different
> datatypes. In Python, you don't need to declare or cast integers, only
> convert when necessary. You need to convert an integer to a string if you
> want to use integers in a string, for instance.
> Because of this, the program stops when you do something silly, instead of
> going on blindly and making a mishmash of your data.
I prefer that the "conversion" be done when comparing. IOW, having
an API that says "compare as numbers" or "compare as strings".
In my pet language, a comparison might look like:
if x %cta> y
The "c" means compare as characters, the "t" means trim, and
the "a" means compare case-sensitively.
Most languages require one to do this:
if trim(uppercase(toString(x))) > trim(uppercase(y)) then .....
I factor these operations into the middle.
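A hedged Python sketch of that factoring: fold the option letters into one comparison helper so the caller never nests trim/uppercase/toString calls. The helper name and option letters are this sketch's invention, loosely mirroring the `%cta>` idea, not any real language's syntax:

```python
def compare(x, y, opts=""):
    """Compare two values with option flags folded into one call.

    Hypothetical option letters (illustrative only):
      'c' - compare as character strings
      't' - trim surrounding whitespace
      'a' - fold case before comparing (the nested example
            in the post uses uppercase() for this)
    """
    if "c" in opts:
        x, y = str(x), str(y)
    if "t" in opts:
        x, y = x.strip(), y.strip()
    if "a" in opts:
        x, y = x.upper(), y.upper()
    return (x > y) - (x < y)   # -1, 0, or 1

# Instead of: trim(uppercase(toString(x))) > trim(uppercase(y))
result = compare(42, "  41 ", "cta")   # compares "42" > "41" -> 1
```

The options ride along in the middle of the comparison, as the post describes, rather than being wrapped around each operand separately.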
> This is *not* the same argument as that for static type checking however;
> it is important to see the distinction. It's an argument for a light-weight
> dynamically checked type (or interface/protocol) system.
> What you seem to be describing as the benefits of the Perl scalar may
> instead be the benefits of the absence of statically checked types.
Perhaps. I still have not gotten the hang of Python's typing
approach yet. I don't like types anymore. Types create "hard
to dissect binary blobs" in my view. I have grown toward the
Unix philosophy of "every interface between systems should
be ASCII" (or Unicode, perhaps) where possible. I now apply this at a
smaller level than just "between systems and applications".
(Except I evolved it up to tables also. The ultimate
transfer protocol: text and tables.)
> > A jillion messages
> > are already devoted to that topic, with no "killer proof"
> > on either side. It may be subjective which is the "best".
> > I grew up on strong typing, but have gravitated toward
> > prefering dynamic typing over the years.
> Me too.
I should change that to "type-free". I grew up on strong
and explicit typing, but have completely reversed
and wish my language of use was completely type free.
My pet language has only one type: a dictionary
array. It is used for *everything*, including scalars.
Scalars are simply a shortcut for something like:
x = 5
x.__value__ = 5 // same
(It does not use underscores, but something similar.)
Well, I should say everything except for internal
structures. It is not like Python that way. I
see no real need for that. I would probably use
tables to pull off what others would use the
Python meta-language tools for.
My pet language in many ways is similar to Python,
but much more minimalist WRT types and collections.
Python has too many syntax variations IMO.
> >> It's funny you should compare tuples with dictionaries and say they
> >> should be conflated; most people complaining about tuples say they're
> >> too much like *lists* (arrays). They're right that they're very much
> >> like lists,
> > That too. Roll 'em all up. Requirements change. I hate recoding
> > from lists to tuples to dictionaries to tables, etc.
> > Make the interfaces the *same*, and only swap the engine, NOT
> > the interface.
> Oh, I'd say make the interfaces different, use the same engine where
Why make the interface different? Then you have to overhaul
everything if your collection needs change. (A Meyerian
continuity problem, you could say.)
It is not just "minimalism", but anti-sub-typing also.
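The "same interface, swap the engine" position can be sketched with Python duck typing; the class and method names below are invented for illustration:

```python
class ListEngine:
    """Stores records in a plain list; positions are the keys."""
    def __init__(self):
        self._rows = []
    def put(self, key, value):
        # Grow the list if needed so any position can be assigned.
        self._rows.extend([None] * (key + 1 - len(self._rows)))
        self._rows[key] = value
    def get(self, key):
        return self._rows[key]

class DictEngine:
    """Stores records in a dictionary; any hashable key works."""
    def __init__(self):
        self._rows = {}
    def put(self, key, value):
        self._rows[key] = value
    def get(self, key):
        return self._rows[key]

def load(engine):
    # Caller code is identical whichever engine is plugged in.
    engine.put(0, "first")
    engine.put(1, "second")
    return engine.get(1)

assert load(ListEngine()) == "second"
assert load(DictEngine()) == "second"
```

Swapping the engine requires no change to `load`, which is the adaptability claim being argued for here.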
> I changed my mind a little about dictionaries; in practice
> they're often used to store lots of homogenous values, not as a kind
> of datatype (in Python, class instances (objects) are used for that).
> I think it's a myth that having a universal rolled-into-one collection
> type helps your program deal with change more easily.
Well, I am pushing that "myth" and have no reason to stop.
> In my Python
> programs, I use lists and tuples and dictionaries and tables in
> rather different places in different idioms.
But do they STAY different?
I find that collections often need more than what they started
out needing. IS-A collections cannot hop IS-A fences very well.
Perhaps an example would help.
> While it is possible I
> change one into the other on occasion, this is the exception, not the rule.
For what I do, it is the rule.
At least frequent enough to want to prevent it up front.
> When such changes do happen so many other changes tend to happen it
> doesn't really matter anymore anyway; the change in collection type is
> probably caused by such a larger change.
I disagree. It might simply be another view of the *same* data,
another new column, etc. One thing about custom business
programming is that many different parts often need the
same data, but with a different view, lookup, join, sort, etc.
IOW, you never know what or who will need data from where next.
> With a universal collection type you lose some of the benefit of these
> separate idioms (which can help with the readability of the program).
I disagree about readability. When collection needs change, trying
to force a linked list or dictionary into something else makes for
a much larger readability problem.
Like I said, array syntax may give you a SLIGHT benefit up
front, but the loss down the road more than makes up for it.
> also may increase errors, as due to the absence of different interfaces
> and idioms you run a higher risk the program will continue after an error
> and mangle your data in unpredictable and hard to track down ways.
I would have to see some examples of this. The "protection" needs
often don't align along the collection type's boundaries
or features. Not allowing dictionaries to be sorted (in place) is
an *arbitrary* limit in my book.
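For what it's worth, Python won't sort a dictionary in place, but it can cheaply produce sorted views of one; a small sketch:

```python
inventory = {"pears": 7, "apples": 12, "bananas": 3}

# A sorted view by key; the dictionary itself is untouched.
by_key = sorted(inventory.items())

# A sorted view by value, descending.
by_count = sorted(inventory.items(), key=lambda kv: kv[1], reverse=True)

assert by_key[0] == ("apples", 12)
assert by_count[0] == ("apples", 12)
```

Whether a view is an acceptable substitute for an in-place sort is exactly the kind of interface-boundary question being debated here.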
> Anyway, as I said before, you're in the minimalist camp here, along with
> Smalltalkers (everything's an object with messages) and Lispers (everything's
> a list).
WRT to collections, yes, but not control structures
(IF, loops, etc.)
> I take the position that syntactic sugar can help with idioms,
> which can help with clarity, readability and error detection.
Well, we will just have to agree to disagree. I have used
both approaches, and don't like collection-type proliferation
one bit.
> >> except that they're immutable (like integers and strings
> >> in Python, but unlike lists and dictionaries and instances). Your
> >> desire to conflate them with dictionaries is in my opinion wrong as well,
> >> but you're more right than those who want to merge them with lists;
> > Show me "wrong".
> Wrong as in "I think there are arguments against this which you are missing
> and I disagree with your evaluation of the tradeoffs". This is a
> subjective issue. I imagine you can do empirical research about programming
> language effectiveness and these issues, but I'm not going to do it.
> Are you?
Nope. I won't challenge any agreement that it is subjective. I
should be happy enough that you agree it is likely subjective. This is
a lot more than I often get out of the pro-OO camp.
I just wish the industry would realize this and knock it off
with the one-paradigm/language-fits-all scenario, such as the
Java-tization of everything.
> If not, you'll have to accept that my efforts in trying to show
> you 'wrong' are as valid as your efforts to show me wrong and yourself right,
> here. The alternative is saying you're doing no such thing, in which
> case I wonder what we're doing. :)
> >> tuples are generally used as 'records' (heterogenous objects) and not
> >> as lists of homogenous objects.
> > Doesn't matter. Needs change. See above. Homo today, hetero tomorrow.
> > Michael Jackson Collections, you could say.
> Heterogenous collections are not going to change into homogenous collection
> and vice versa in by far the most circumstances. If you disagree, you
> should name some cases; I can't think of any.
You would have to give an actual example because "heterogenous" may
depend on how one classifies the world in their head.
> By 'homogenous collection' I mean a collection of 'like' objects
> (English words, files, animals, records with address data, etc).
> By 'heterogenous collection' I mean a collection of significantly
> different objects ("an integer, a string and a list", "a first name,
> a middle name and a last name", "a word and the frequency of its
> occurrence in a text", "an x coordinate and a y coordinate").
> >> Anyway, you're in the LISP and Smalltalk camp here; do a lot with just
> >> a few syntactic (surface semantic) concepts.
> > As far as collections, yes you can say that. (Although Smalltalk's
> > collection APIs are still too taxonomy-happy for my tastes.)
> >> A language like Python
> >> adds more syntactic sugar, and my theory is that this syntactic
> >> sugar *helps* programmers write and read programs.
> > Perhaps it depends on the programmer. Also, there is maintainability.
> > Having dedicated syntax for certain (false) categorizations of
> > collections may make *some* code easier to read, but still makes
> > it harder to change when collection needs grow, morph, or change.
> Yes, there are definitely tradeoffs there and I recognize those
> tradeoffs. I think the tradeoffs for collections weigh into a different
> direction, however. That's not to say I want a huge forest of collections
> that you can see in some statically typed languages, where they have
> arrays for integers, arrays for strings, arrays for floats, and so on
> ad infinitum. While arrays are usually for homogenous collections I think
> the strict specification and checking of such can bog down the programmer
> too much. It's also fine with me if collections share an underlying
> implementation in some cases, if this is easier or more efficient.
> But for me, the balance of the tradeoffs still leans towards more
> collection interfaces than just a single one.
It would be interesting to see some of your designs.
> Anyway, is that the only response you had to my example? I showed you how
> Python was already doing more or less what you said it should be doing.
> An 'oh, cool' or 'huh?' or 'that's not what I mean' would've been worth
> my troubles.
I guess I would have to see it applied. I might do it another way.
> > [snip]
> >> >> > Besides, what is wrong
> >> >> > with regular by-reference parameters?
> >> >>
> >> >> Nothing at all, except that returning multiple values is far more clear
> >> >> by just about any measure you can come up with. :)
> >> > Which would be?
> >> > I suppose you could argue that under the old approach
> >> > one could not tell what was being changed and what was
> >> > not by looking at the caller. However, you might have to check
> >> > the bottom or middle instead of the top of a routine to
> >> > figure out the result parameter interface in Python.
> >> Usually the bottom, yes, unless you document it at the top in
> >> a docstring. Looking for 'return' statements isn't terribly
> >> difficult, either.
> > But harder than looking at the top.
> Yes, but the problem already exists in any dynamically typed language
> where any kind of heterogenous collection can be returned, and you said
> you prefer dynamic typing. It doesn't add to the problem therefore;
> it's just as hard if you're returning a record or dictionary. The
> advantage of tuples is that they can be instantly unpacked after the
> function call.
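The tuple-return idiom defended in that quote looks like this in Python; the function and variable names are made up for illustration:

```python
def min_max(values):
    """Return two results at once as a tuple."""
    lo = min(values)
    hi = max(values)
    return lo, hi          # tuple construction

# The caller unpacks both results immediately;
# no reference parameters are involved.
lowest, highest = min_max([4, 9, 1, 7])

assert (lowest, highest) == (1, 9)
```

The unpacking happens right at the call site, which is the "instantly unpacked" advantage claimed above.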
Another point where a specific example might be helpful.
Your argument has "when X happens..." assumptions in it,
and the way I code/design, X may not happen very often.
> >> > IOW, it might trade caller readability for callee
> >> > readability. At the most it is a wash IMO.
> >> I disagree; caller readability is not significantly affected and
> >> callee readability (in multiple places) is improved. A clear win,
> >> therefore.
> > I am not sure how you are doing your math here. I figure one
> > always has to go [to] the function definition and parameter
> > list *anyhow* to understand the function's interface.
> > Thus, having it defined in the heading is a one-stop deal.
> Without declarations, that's just a name in an argument, and the
> 'changeable reference' indicator. It's true that is a bit more
I am not sure what your point is here.
> > Having to also check return statements is a two-stop deal.
> > (I don't end up looking at return statements very often.)
> The other deal is that I don't have to go look up the function definition
> each time I see a function call I don't know about, just in case this may
> involve reference parameters! That's a huge deal in my opinion. :)
I guess the naming conventions I use for routines are the
primary indicator as to whether it is mostly changing
or using info. I tend to use prefixes like "put" or
"change" or "move" to indicate that a lot of changing
is going on.
> >> > Having the entire interface defined at the top is
> >> > a good thing IMO. (Although "return" is rarely
> >> > at the top, but it is a single item if it
> >> > exists.)
> >> A single item of any kind of complexity, anyway, and a serious tradeoff
> >> in readability for the callee as there are now two different ways you can
> >> return values, one of which (reference parameters) is a hack.
> > Define "hack".
> Mathematical functions, which inspired functions in programming languages,
> don't have 'reference parameters'. They just have inputs. It can therefore
> be presumed originally computer language functions didn't have them
> either, and someone added them to languages in an early hack in order to support
> multiple output values. The hack makes sense if your language is statically
> typed, as you can then define the types of all the output values in the
> same way as you already defined the types of the input values. You don't have
> to think about extra syntax. It doesn't make a lot of sense in a
> dynamically typed language, though.
To say that "math didn't do it" is misleading IMO. What is good
for math may not be good for programming.
> (and of course in Python you can mutate mutable objects passed to a function and
> *any* variable in Python is a reference. But it's better style to avoid
> mutating input if possible, in my opinion. It encourages more independent
> functions which makes for easier to maintain and debug code).
Yip. Complex things passed in are alterable anyhow. For example,
it makes more sense to change a large array *in place* rather
than make a copy and pass it back out.
Thus, you have *two* param changing
conventions floating around in Python.
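Those two parameter-changing conventions can be shown side by side; a minimal sketch:

```python
def scale_in_place(values, factor):
    """Convention 1: mutate the caller's list directly; no copy made."""
    for i in range(len(values)):
        values[i] *= factor    # visible to the caller afterwards

def scaled_copy(values, factor):
    """Convention 2: leave the input alone and return a new list."""
    return [v * factor for v in values]

data = [1, 2, 3]
scale_in_place(data, 10)
assert data == [10, 20, 30]          # original changed

data2 = [1, 2, 3]
result = scaled_copy(data2, 10)
assert data2 == [1, 2, 3]            # original untouched
assert result == [10, 20, 30]
```

For a large array the in-place version avoids a copy, which is the efficiency point being made here; the copy version is the style the quoted poster prefers.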
> >> > Because I am not convinced it is significantly better. As a rule of
> >> > thumb, I say something has to be at least 15 to 30 percent better to
> >> > deviate from tradition. Perhaps if I saw more actual uses for
> >> > it besides foo-bar examples, but I have not.
> >> '15 to 30 percent better': failure to grok error.
> > Perhaps you grok differently than me.
> >> If you mean the amount
> >> of typing, I can see it's far more than 30 percent better. But you
> >> probably don't mean that, and it's fairly meaningless beyond that.
> So, how *do* you arrive at these 15 to 30 percent better figures?
> It implies some kind of objectively measured thing, doesn't it?
Probably. What is "better" is often highly subjective,
often because our different habits and design philosophies
use different features at different frequencies, etc.
> >> > Most "data structures" I deal with are more than 2 positions.
> >> > Thus, I use tables, and perhaps a dictionary-like thing to
> >> > interface to such records. (I prefer to use tables to store
> >> > data instead of dictionaries themselves, other than an interface
> >> > mechanism to specific records.) Perhaps some niches have lots of
> >> > "skinny collections" where tuples may help, but not mine.
> >> Well, I tried to describe such a niche; returning multiple things from
> >> a function. Another niche is indeed the very light weight record
> >> niche; x, y coordinates for instance. Yet another niche, harder to
> >> describe is the 'make a new immutable object from other immutable
> >> objects' niche.
> > I meant industry domains, like business versus embedded systems versus
> > scientific computing, etc. I don't do a lot of X, Y coordinate work, BTW.
> A European example for the industry domain is a 'year/weeknumber' tuple.
> In Europe, industry often works with (ISO) weeknumbers. To calculate
> weeknumbers back to a date (beginning of the week), you need the year as well,
> so it can make sense to pass these around as pairs in one's application.
I prefer to pass dates around as single strings. Formatting it for
different countries is a formatting (output) issue and not an internal
issue. IOW, the internal representation and the external do not
have to be the same.
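For reference, the year/weeknumber-to-date calculation from the quote is a one-liner in modern Python (3.8+, which postdates this thread); a sketch:

```python
from datetime import date

def week_start(year, week):
    """Return the Monday beginning the given ISO year/weeknumber pair.

    The pair genuinely needs both parts: ISO week 1 of 2015 starts
    on 2014-12-29, in the previous calendar year.
    """
    return date.fromisocalendar(year, week, 1)   # 1 = Monday

monday = week_start(2021, 1)
assert monday == date(2021, 1, 4)
```

The fact that the week alone is ambiguous is what makes the (year, week) pair a natural tuple to pass around.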
> [swapping two values with tuples]
> >> But it *is* obvious what is going on as you already understand both
> >> tuple unpacking and tuple construction.
> > But it is just Yet Another Silly Trick To Understand.
> It's not a 'silly trick' like many Perl 'silly tricks' where the trick
> is merely in syntax and not the *consequence* of an orthogonal syntax.
> If your syntax is orthogonal you can reason about it, so it's not
> a silly trick you need to remember in isolation. That's a very different
> thing when you're learning a language.
I guess I don't see a significant net value in tuples. They
just complicate the syntax and are often used for
stuff that can be done other ways.
> >> We're just doing both in a
> >> single line. There's nothing special case about this. It's not *hard*
> >> to understand tuple construction and unpacking. You're clinging to
> >> your traditions here just for argument's sake. :)
> > Nope! I am weighing utility versus complexity, and it flunks
> > in my book. Save the syntax complexity for the *common*
> > stuff.
> I tried to show you how this *is* common stuff. Not swapping variables,
> but collecting a bunch of things together and passing them around as
> a whole, and returning them as a whole, and easily separating them into
> pieces again. It happens frequently in software.
Not in a way that makes much use of tuples. Perhaps you use
tuples the way that I use relational tables, and that is
why my approach has less use for them.
> When the amount of heterogenous objects you're packing together into
> a bundle is large or the situation is complicated (need to do many
> operations on them), it makes sense to use a record or a class.
> In many circumstances it is however not a complicated bunch and it's
> easier not to use such a thing and use the syntactically and semantically
> minimal tuple instead.
Well, I would have to see more examples than your date example,
which I disagreed with.
> Yes, a small record can grow into a large one, and you will have to adapt
> some code when it does (in a dynamically typed language, not a lot).
> There are many cases when this just doesn't happen, though; x, y coordinates
> are an example, so are year/weeknumbers, or 'year/month/day' pairs, or
> 'amount/currency_type' pairs, and so on.
Well, I don't use much X and Y coordinates, and would probably
use tables if I did, since such an app probably has lots of them.
Often in table-land you pass around a record ID or record
reference instead of the record contents. Such a
record reference is similar in concept to a tuple
of X and Y.
> >> > Don't get me wrong, there are languages a lot worse than Python,
> >> > but the poor consolidation of the similar things I mentioned
> >> > kind of bug me.
> >> I see these syntactic issues in a somewhat different philosophical light,
> >> something which I tried to describe above. While I'm all in favor of
> >> semantic minimalism, I'm not a syntactic minimalist. If you're a
> >> syntactic minimalist these subtle differences make no sense, indeed.
> >> Anyway, if I were designing a new language I would indeed attempt to
> >> bring dictionaries and tuples closer together, so we're in agreement
> >> in that sense as well. I'm just defending the special syntax for tuples, though
> >> I also wonder about performance (but we'd just have to profile it) if
> >> all tuples were dictionaries.
> > Performance often only becomes an issue when stupid programmers play with
> > too many features. Thus, reduce the syntax features and you have less
> > playing around with wasteful things and cryptic tricks.
> The tradeoff here is that one thing often doesn't fit all.
But most do. :-)
That is the pristine beauty of tables. They flex like nothing
else I have ever seen in programming. I am trying to spread
the Table Gospel. Maybe we will get our own common language
and widely used buzzwords just like everybody else.