Email conversation between myself and Marshall Cline

Some background: Marshall Cline has served on the ISO C++ and Smalltalk standardisation committees. He maintains the C++ FAQ Lite which is how we ended up conversing.

In the following conversation which required myself to sit down for at least four hours per night for a week, I learned a lot of stuff and possibly Marshall may even have learned one or two things as well. In the following X emails, we cover the following subjects:

You can return to nedprod.com here


From: Niall Douglas <xxx@xxxxxxx.xxx>
To: xxxxx@xxxxxxxxx.xxx
Subject: Comments on your C++ FAQ
Date: Sat, 27 Jul 2002 00:05:45 +0200

Firstly, it looks good, and I learned some things (like delete NULL; 
isn't supposed to cause an exception). Thanks.

However ...

I would prefer if you said making classes final leafs is evil. The 
use of private constructors with static ctors should be evil. 
*Protected* constructors with the same is fine. The reason I say this 
is recently I've had to modify quite a lot of code which kept its 
internal data and constructors private and hence my subclasses 
couldn't access them. My view is that good C++ should never ever 
think it won't be subclassed by someone without the sources - to that 
end I make most private variables protected and I rely on discipline 
not to access them. The same should go for constructors. I of course 
fully support commenting it with "subclass this under threat of death 
signed JS". (I know one solution is for the third party to modify the 
header files but that seems dirty and error prone to me).

Regarding static initialisation problems, for objects the trick is to 
create a helper class which initialises the static objects on 
construction and destructs them appropriately. The static members 
should be pointers or references to the classes. You can either new 
the pointers or else contain the classes in your helper class and 
write the pointers or references in the constructor. Then you simply 
instantiate the helper class first thing in main() and then it 
destructs last thing when main() exits.

Templates are funny things on different platforms. I ran into a 
problem which compiled fine on Linux but MSVC6 fell over where I was 
doing this:
int compareItems(void *a, void *b)
return ((T *) a)<((T *) b);

MSVC complained there was no < operator for T during the template 
definition, not at actual template use to create a class (where I 
think it should throw an error if the T passed doesn't have the 
operator). To get around it, I created a dummy class with the 
operator which I made all T's subclasses thereof and now MSVC shut 
up. Annoying though. My point is, I'd prefer your section on 
templates to point out more the stuff which doesn't work as it should 
- you make it sound like it all works according to spec, which it 
rarely in my experience does.

Lastly, I know you think macros are evil, but I have always used them 
to implement functionality which C++ should have. The biggest of 
these is try...finally without which multithreaded programming is a 
severe pain. Of course, you don't call the macro 'finally' - I've 
used TERRH_TRYF...TERRH_FINALLY...TERRH_ENDTRYF to make it very 
clear. In this context, macros are very useful and I think to be 
encouraged so long as the macro: (a) says where it's defined and (b) 
all caps.

Anyway, hope you don't mind these comments. I would hardly consider 
myself much good at C++, I've only been programming in it for three 
years or so and it still befuddles me at times (unlike in C or 
assembler, where I always know what to do).

Cheers,
Niall Douglas




From: "Marshall Cline" <xxxxx@xxxxxxxxx.xxx>
To: "'Niall Douglas'" <xxx@xxxxxxx.xxx>
Subject: RE: Comments on your C++ FAQ
Date: Fri, 26 Jul 2002 20:01:03 -0500

Hi Niall,

Thanks for your thoughtful note. Unfortunately I disagree with most
of your suggestions (see below for details). But I don't way my
disagreements to detract from my appreciation that you took the time
to write in the first place.

[See details below.]

Niall Douglas wrote:
>I would prefer if you said making classes final leafs is evil. The
>use of private constructors with static ctors should be evil.
>*Protected* constructors with the same is fine. The reason I say
>this is recently I've had to modify quite a lot of code which kept
>its internal data and constructors private and hence my subclasses
>couldn't access them. My view is that good C++ should never ever
>think it won't be subclassed by someone without the sources

Sorry, you're wrong above: good C++ classes very well might (and
often do!) want to make sure they don't get subclassed. There are
lots of reasons for this, including the practical reality that the
constraints on a base class is much more stringent than a leaf
class, and in some cases those constraints make it very expensive,
and perhaps even impossible, to make a class inheritable. When
people inherit anyway, all sorts of bad things happen, such as the
slicing problem (AKA chopped copies), the covariant parameter on the
assignment operator, etc., etc.

So making a class a leaf / final is *not* evil. In some cases *not*
making a class into a leaf / final is evil! In other words, in
cases like these, the programmer is not simply *allowed* to make the
class final, but has an actual fiduciary responsibility to do so.
It would be professionally irresponsible to *not* make the class a
leaf / final.

I recognize that you've had a bad experience trying to inherit from
a class with private stuff. But you must not generalize that bad
experience or take it to mean leaf classes are universally bad.
Your bad experience may mean the base class's programmer did
something wrong, or it may mean you did something wrong. But the
guideline / rule you propose ("final / leaf classes are evil") will
do more harm than good.

>- to
>that end I make most private variables protected and I rely on
>discipline not to access them. The same should go for
>constructors. I of course fully support commenting it with "subclass
>this under threat of death signed JS". (I know one solution is for
>the third party to modify the header files but that seems dirty and
>error prone to me).
>
>Regarding static initialisation problems, for objects the trick is
>to create a helper class which initialises the static objects on
>construction and destructs them appropriately. The static members
>should be pointers or references to the classes. You can either new
>the pointers or else contain the classes in your helper class and
>write the pointers or references in the constructor. Then you simply
>instantiate the helper class first thing in main() and then it
>destructs last thing when main() exits.

Sorry, but this is a very poor solution. It certainly works in a
few cases, but it violates the first premise of pluggable software.
The three solutions described in the FAQ are much better: they
handle all the trivial cases that yours handles, and in addition
they handle the more sophisticated "plugability" cases.


>Templates are funny things on different platforms. I ran into a
>problem which compiled fine on Linux but MSVC6 fell over where I was
>doing this:
>
>int compareItems(void *a, void *b)
>return ((T *) a)<((T *) b);
>
>MSVC complained there was no < operator for T during the template
>definition, not at actual template use to create a class (where I
>think it should throw an error if the T passed doesn't have the
>operator).

There must be something wrong with your code or your description,
because the code you've given above will never generate an error
irrespective of the type 'T'. The '<' is applied between two
pointers, not between two 'T's.

(I'm obviously assuming you fix the compile-time bugs in the code,
e.g., by adding 'template<class T>', and by wrapping the function
body in '{' and '}'.)


>To get around it, I created a dummy class with the
>operator which I made all T's subclasses thereof and now MSVC shut
>up. Annoying though. My point is, I'd prefer your section on
>templates to point out more the stuff which doesn't work as it
>should - you make it sound like it all works according to spec,
>which it rarely in my experience does.

I am interested in this error, but I'm only interested in it if it
is real, that is, if you can get MS VC++ to actually generate an
incorrect error message. I'd suggest rewriting the example, trying
it, and seeing if you can actually create a case that fails with MS
VC++.


>Lastly, I know you think macros are evil, but I have always used
>them to implement functionality which C++ should have. The biggest
>of these is try...finally without which multithreaded programming is
>a severe pain. Of course, you don't call the macro 'finally' - I've
>used TERRH_TRYF...TERRH_FINALLY...TERRH_ENDTRYF to make it very
>clear. In this context, macros are very useful and I think to be
>encouraged so long as the macro: (a) says where it's defined and (b)
>all caps.

Sorry, but I would strongly recommend against the above. That's
just not the C++ way of doing things. To be honest, it makes you
sound like you're a Java programmer who has learned C++ syntax but
still hasn't mastered C++ idioms.


>Anyway, hope you don't mind these comments.

Not at all.

Marshall




From: Niall Douglas <xxx@xxxxxxx.xxx>
To: "Marshall Cline" <xxxxx@xxxxxxxxx.xxx>
Subject: RE: Comments on your C++ FAQ
Date: Sun, 28 Jul 2002 03:42:12 +0200

On 26 Jul 2002 at 20:01, Marshall Cline wrote:

> Sorry, you're wrong above: good C++ classes very well might (and
> often do!) want to make sure they don't get subclassed. There are
> lots of reasons for this, including the practical reality that the
> constraints on a base class is much more stringent than a leaf class,
> and in some cases those constraints make it very expensive, and
> perhaps even impossible, to make a class inheritable. 

I had thought one was meant to encourage reusability in every line of 
C++ you wrote? Where possible obviously.

To that end, and I will admit I have only worked on small C++ 
projects (less than 5k lines), it has always appeared to me not 
hugely difficult to structure your code appropriately to ensure 
maximum reusability.

> When people
> inherit anyway, all sorts of bad things happen, such as the slicing
> problem (AKA chopped copies), the covariant parameter on the
> assignment operator, etc., etc.

Surely chopped copies are an optimisation problem for the compiler? 
You can help it of course by trying to ensure subclasses never undo 
or redo operations the base class did.

I'm afraid I don't understand covariant parameter on the assignment 
operator.

If what you say about inheriting is bad, then I would say my code is 
very foul. I have huge depths of subclassing as a normal part of 
writing my code. It's not just me - I picked up the idea from Qt 
(http://www.trolltech.com/). I've basically learned C++ by copying 
their example.

> I recognize that you've had a bad experience trying to inherit from a
> class with private stuff. But you must not generalize that bad
> experience or take it to mean leaf classes are universally bad. Your
> bad experience may mean the base class's programmer did something
> wrong, or it may mean you did something wrong. But the guideline /
> rule you propose ("final / leaf classes are evil") will do more harm
> than good.

No all he/she did wrong was assume no one would ever want to subclass 
their class. Not only is that arrogant and assumes themselves 
infallible, it seems to me bad practice and against the whole idea of 
software reusability.

> >Regarding static initialisation problems, for objects the trick is to
> >create a helper class which initialises the static objects on
> >construction and destructs them appropriately. The static members
> >should be pointers or references to the classes. You can either new
> >the pointers or else contain the classes in your helper class and
> >write the pointers or references in the constructor. Then you simply
> >instantiate the helper class first thing in main() and then it
> >destructs last thing when main() exits.
> 
> Sorry, but this is a very poor solution. It certainly works in a few
> cases, but it violates the first premise of pluggable software. The
> three solutions described in the FAQ are much better: they handle all
> the trivial cases that yours handles, and in addition they handle the
> more sophisticated "plugability" cases.

I'll review your FAQ's suggestions again. Maybe I missed something, 
but they all seemed to have fairly substantial caveats.

> There must be something wrong with your code or your description,
> because the code you've given above will never generate an error
> irrespective of the type 'T'. The '<' is applied between two
> pointers, not between two 'T's.

Agreed. Here is the actual code:
template<class type> class TEXPORT_TCOMMON TSortedList : public 
QList<type>
{
...
virtual int compareItems( QCollection::Item s1, QCollection::Item s2 
)
{
if(*((type *) s1)==*((type *) s2)) return 0;
return (*((type *) s1)<*((type *) s2) ? -1 : 1 );
}
}

This code generates an error saying operators == and < for class type 
don't exist. Changing type to T or anything else doesn't help. Oddly, 
almost identical code compiles elsewhere plus it worked on Linux last 
time I compiled it (quite some time ago).

That's MSVC 6 SP6.

> I am interested in this error, but I'm only interested in it if it is
> real, that is, if you can get MS VC++ to actually generate an
> incorrect error message. I'd suggest rewriting the example, trying
> it, and seeing if you can actually create a case that fails with MS
> VC++.

Hardly important unless Visual Studio .NET has the same problem. MS 
no longer consider MSVC6 a primary support product.

I have tried changing it around, but in the end I have bigger 
priorties. In the end, if it causes me too many problems, I'll switch 
to GCC.

> >Lastly, I know you think macros are evil, but I have always used them
> >to implement functionality which C++ should have. The biggest of
> >these is try...finally without which multithreaded programming is a
> >severe pain. Of course, you don't call the macro 'finally' - I've
> >used TERRH_TRYF...TERRH_FINALLY...TERRH_ENDTRYF to make it very
> >clear. In this context, macros are very useful and I think to be
> >encouraged so long as the macro: (a) says where it's defined and (b)
> >all caps.
> 
> Sorry, but I would strongly recommend against the above. That's
> just not the C++ way of doing things. To be honest, it makes you
> sound like you're a Java programmer who has learned C++ syntax but
> still hasn't mastered C++ idioms.

God no, I hated Java. I'm actually an assembler programmer originally 
which of course uses loads of macros all the time.

How, might I ask, would you suggest you implement try...finally 
without macros in "the C++ way of doing things"? I am assuming you 
will surely agree multithreaded programming is not fun without 
try...finally (if you don't, I want reasons, it'll be interesting to 
see what you'd come up with).

> >Anyway, hope you don't mind these comments.
> 
> Not at all.

I used to read the reports of the ANSI committee meetings regarding 
C++ as it was still being formalised and it always struck me as being 
an awful hodge-podge. The more I learn about it, the more I realise I 
was correct! I never had much experience with it, but Objective C 
always seemed a lot cleaner. Unfortunately, today is now and the 
world we live in uses C++.

I take it you come from a Smalltalk background? I know this will 
sound approaching blasphemous - and I don't mean at all to be 
offensive, but merely to garner an opinion - but I have always 
considered OO to be a good way of organising maintainable source but 
really crap for designing code. I suppose I still write C++ like I 
did assembler (and thereafter C) in that fashion, and hence our great 
difference in style.

Cheers,
Niall




From: "Marshall Cline" <xxxxx@xxxxxxxxx.xxx>
To: "'Niall Douglas'" <xxx@xxxxxxx.xxx>
Subject: RE: Comments on your C++ FAQ
Date: Sat, 27 Jul 2002 20:26:46 -0500

Niall Douglas wrote:
>On 26 Jul 2002 at 20:01, Marshall Cline wrote:
>
>> Sorry, you're wrong above: good C++ classes very well might (and
often 
>> do!) want to make sure they don't get subclassed. There are lots of 
>> reasons for this, including the practical reality that the
constraints 
>> on a base class is much more stringent than a leaf class, and in some

>> cases those constraints make it very expensive, and perhaps even 
>> impossible, to make a class inheritable.
>
>I had thought one was meant to encourage reusability in every line of 
>C++ you wrote? Where possible obviously.

Nope. In fact, that approach normally causes projects to fail.

Instead of encouraging reusability in every line of C++ code (where
possible), the goal is to be a responsible professional who sometimes
invests effort in a future pay-back (AKA reuse), and sometimes does not.
A responsible professional does not invest in a future pay-back when the
ROI isn't right (return-on-investment) or when doing so would add
unacceptable risk or cost or time to the current project. Balancing the
future with the present is a very subtle task, but that is the task of a
responsible professional.


>To that end, and I will admit I have only worked on small C++ 
>projects (less than 5k lines), it has always appeared to me not 
>hugely difficult to structure your code appropriately to ensure 
>maximum reusability.

Everyone, including you, is to some extent a prisoner of their past.
Your experience with very small projects limits your ability to
understand how things work in the real world with large projects. I
don't fault you for your lack of experience with large projects, but in
a similar way you must not presume that your small-system experiences
are applicable or relevant to the way things happen in that other world.


>> When people
>> inherit anyway, all sorts of bad things happen, such as the slicing 
>> problem (AKA chopped copies), the covariant parameter on the 
>> assignment operator, etc., etc.
>
>Surely chopped copies are an optimisation problem for the compiler? 

No, not at all. They are logical errors - places where the compiler is
*required* by the language to generate code that does "the wrong thing."
So even if the compiler had a *perfect* optimizer, it would still be
required by the language generate code that the programmer would
ultimately consider a bug. But it's not a bug in the compiler; it's a
bug in the programmer's code.


>You can help it of course by trying to ensure subclasses never undo 
>or redo operations the base class did.

No, not at all.

I don't think you understand what the slicing problem (AKA chopped
copies) is all about. I suggest you read the FAQ on that one.


>I'm afraid I don't understand covariant parameter on the assignment 
>operator.

I don't think the FAQ covers this one. I'll try to give a very brief
overview. Suppose someone inherits D from B, and passes a D to a
function expecting a B& (reference-to-B). If that function assigns to
the B&, then only the B part of the D object is changed, and the object
often goes into a nonsensical state. It's kind of like genetic
engineering, where biologists scoop out the chromosomes from one animal,
and replace them with the chromosomes of another. It's real easy to get
garbage when you do that.


>If what you say about inheriting is bad, then I would say my code is 
>very foul. I have huge depths of subclassing as a normal part of 
>writing my code. 

Ouch, that's certainly a very dubious design style. It's a typical
hacker's style, and it comes from the Smalltalk world, but it's
generally inappropriate for C++ or Java or any other statically typed OO
language.


>It's not just me - I picked up the idea from Qt 
>(http://www.trolltech.com/). I've basically learned C++ by copying 
>their example.

If you're building an application framework (especially if you're
building a GUI framework), you might have chosen a reasonable piece of
software to learn from. If you're simply *using* a framework to build
some other app, you've chosen poorly.


>> I recognize that you've had a bad experience trying to inherit from a

>> class with private stuff. But you must not generalize that bad 
>> experience or take it to mean leaf classes are universally bad. Your 
>> bad experience may mean the base class's programmer did something 
>> wrong, or it may mean you did something wrong. But the guideline / 
>> rule you propose ("final / leaf classes are evil") will do more harm 
>> than good.
>
>No all he/she did wrong was assume no one would ever want to subclass 
>their class. Not only is that arrogant and assumes themselves 
>infallible, it seems to me bad practice and against the whole idea of 
>software reusability.

You seem to have a wrong notion of how reusability and inheritance are
supposed to mix. Inheritance is not "for" reuse. One does *not*
inherit from something to reuse that thing.


>> >Regarding static initialisation problems, for objects the trick is
to 
>> >create a helper class which initialises the static objects on 
>> >construction and destructs them appropriately. The static members 
>> >should be pointers or references to the classes. You can either new 
>> >the pointers or else contain the classes in your helper class and 
>> >write the pointers or references in the constructor. Then you simply
>> >instantiate the helper class first thing in main() and then it 
>> >destructs last thing when main() exits.
>> 
>> Sorry, but this is a very poor solution. It certainly works in a few
>> cases, but it violates the first premise of pluggable software. The 
>> three solutions described in the FAQ are much better: they handle all
>> the trivial cases that yours handles, and in addition they handle the
>> more sophisticated "plugability" cases.
>
>I'll review your FAQ's suggestions again. Maybe I missed something, 
>but they all seemed to have fairly substantial caveats.

They all do. But at least they all solve the pluggability problem,
which is typically the core reason for using this sort of syntax /
technique.


>> There must be something wrong with your code or your description, 
>> because the code you've given above will never generate an error 
>> irrespective of the type 'T'. The '<' is applied between two 
>> pointers, not between two 'T's.
>
>Agreed. Here is the actual code:
>template<class type> class TEXPORT_TCOMMON TSortedList : public 
>QList<type>
>{
>...
> virtual int compareItems( QCollection::Item s1,
QCollection::Item s2 
>)
> {
> if(*((type *) s1)==*((type *) s2)) return 0;
> return (*((type *) s1)<*((type *) s2) ? -1 : 1 );
> }
>}

This is an interesting example. Please do three things:

1. Send me the code for template class QList (Qt has a template called
QValueList, but I didn't find one called QList).
2. What is the type of QCollection::Item?
3. What is the '___' in TSortedList<____> that caused this error? Or
are you saying it always generates an error independent of any
TSortedList<___> usage?? If the latter, better send me the whole
TSortedList template.


>This code generates an error saying operators == and < for class type 
>don't exist. Changing type to T or anything else doesn't help. Oddly, 
>almost identical code compiles elsewhere plus it worked on Linux last 
>time I compiled it (quite some time ago).
>
>That's MSVC 6 SP6.
>
>> I am interested in this error, but I'm only interested in it if it is
>> real, that is, if you can get MS VC++ to actually generate an 
>> incorrect error message. I'd suggest rewriting the example, trying 
>> it, and seeing if you can actually create a case that fails with MS
>> VC++.
>
>Hardly important unless Visual Studio .NET has the same problem. MS 
>no longer consider MSVC6 a primary support product.

Not true. Many companies will continue to use MS VC++ 6 for years to
come. I know a company that's still using MS VC++ version 1.62 for some
of their embedded systems programming.


>I have tried changing it around, but in the end I have bigger 
>priorties. In the end, if it causes me too many problems, I'll switch 
>to GCC.
>
>> >Lastly, I know you think macros are evil, but I have always used
them 
>> >to implement functionality which C++ should have. The biggest of 
>> >these is try...finally without which multithreaded programming is a 
>> >severe pain. Of course, you don't call the macro 'finally' - I've 
>> >used TERRH_TRYF...TERRH_FINALLY...TERRH_ENDTRYF to make it very 
>> >clear. In this context, macros are very useful and I think to be 
>> >encouraged so long as the macro: (a) says where it's defined and (b)
>> >all caps.
>> 
>> Sorry, but I would strongly recommend against the above. That's just
>> not the C++ way of doing things. To be honest, it makes you sound 
>> like you're a Java programmer who has learned C++ syntax but still 
>> hasn't mastered C++ idioms.
>
>God no, I hated Java. I'm actually an assembler programmer originally 
>which of course uses loads of macros all the time.
>
>How, might I ask, would you suggest you implement try...finally 
>without macros in "the C++ way of doing things"? 

Use the C++ idiom that "destruction is resource reclamation."


>I am assuming you 
>will surely agree multithreaded programming is not fun without 
>try...finally (if you don't, I want reasons, it'll be interesting to 
>see what you'd come up with).

Constructor/destructor <==> resource acquisition/reclamation.


>> >Anyway, hope you don't mind these comments.
>> 
>> Not at all.
>
>I used to read the reports of the ANSI committee meetings regarding 
>C++ as it was still being formalised and it always struck me as being
>an awful hodge-podge. The more I learn about it, the more I realise I 
>was correct!

Have you ever been on *any* ANSI or ISO standardization committee? If
not, it must be easy for you to sit there with zero experience and throw
insults at the hard work of others who have selflessly sacrificed their
time and money to do something big.


>I never had much experience with it, but Objective C 
>always seemed a lot cleaner.

If ObjC is so much better, why is it so unpopular?


>Unfortunately, today is now and the 
>world we live in uses C++.
>
>I take it you come from a Smalltalk background? 

Not at all. My C++ "smells like" C++, not like Smalltalk or assembler
or anything else. Similarly my C code smells like C code, and it uses C
idioms, etc., and my Java smells like Java, etc. I am language neutral,
e.g., I've been a member of both the ANSI C++ and ANSI Smalltalk
committees.


>I know this will 
>sound approaching blasphemous - and I don't mean at all to be 
>offensive, but merely to garner an opinion - but I have always 
>considered OO to be a good way of organising maintainable source but 
>really crap for designing code.

Another really big error. OO is primarily a design approach. The
concept of "OO programming" is very close to a misnomer, since OO
programming cannot stand on its own - it needs OO *design*.

Marshall




From: Niall Douglas <xxx@xxxxxxx.xxx>
To: "Marshall Cline" <xxxxx@xxxxxxxxx.xxx>
Subject: RE: Comments on your C++ FAQ
Date: Mon, 29 Jul 2002 15:06:30 +0200

On 27 Jul 2002 at 20:26, Marshall Cline wrote:

> >I had thought one was meant to encourage reusability in every line of
> > C++ you wrote? Where possible obviously.
> 
> Nope. In fact, that approach normally causes projects to fail.
> 
> Instead of encouraging reusability in every line of C++ code (where
> possible), the goal is to be a responsible professional who sometimes
> invests effort in a future pay-back (AKA reuse), and sometimes does
> not. A responsible professional does not invest in a future pay-back
> when the ROI isn't right (return-on-investment) or when doing so would
> add unacceptable risk or cost or time to the current project. 
> Balancing the future with the present is a very subtle task, but that
> is the task of a responsible professional.

I think we're actually agreeing here, but it's a case of opinion 
dictating emphasis. I tend to always overengineer because I believe 
15% extra development time halves your debugging time and quarters 
augmentation time and many projects suffer from fuzzy definition, so 
this approach makes sense (and I have used it successfully many 
times).

However, I also completely agree with your statement.

> >To that end, and I will admit I have only worked on small C++ 
> >projects (less than 5k lines), it has always appeared to me not
> >hugely difficult to structure your code appropriately to ensure
> >maximum reusability.
> 
> Everyone, including you, is to some extent a prisoner of their past.
> Your experience with very small projects limits your ability to
> understand how things work in the real world with large projects. I
> don't fault you for your lack of experience with large projects, but
> in a similar way you must not presume that your small-system
> experiences are applicable or relevant to the way things happen in
> that other world.

Remember I have worked on >30k C projects and a number of obscenely 
large all-assembler projects and I've been doing all this 
successfully for a decade now. Now I don't claim for a moment to know 
much about C++, but in the end code is code and while organising C++ 
like assembler might not be a good idea it's still better than no 
organisation at all. Hence, by analogy, I believe my considerable 
prior experience does give me an advantage overall, although I will 
freely admit some things will have to be unlearned for C++. 
Unfortunately, that is something that only comes with experience and 
time - although talking with people like yourself, examining existing 
code and reading resources like your C++ FAQ accelerate the process.

> >You can help it of course by trying to ensure subclasses never undo
> >or redo operations the base class did.
> 
> No, not at all.
> 
> I don't think you understand what the slicing problem (AKA chopped
> copies) is all about. I suggest you read the FAQ on that one.

No I didn't, but I do now. It's strongly related to below.

> >I'm afraid I don't understand covariant parameter on the assignment
> >operator.
> 
> I don't think the FAQ covers this one. I'll try to give a very brief
> overview. Suppose someone inherits D from B, and passes a D to a
> function expecting a B& (reference-to-B). If that function assigns to
> the B&, then only the B part of the D object is changed, and the
> object often goes into a nonsensical state. It's kind of like genetic
> engineering, where biologists scoop out the chromosomes from one
> animal, and replace them with the chromosomes of another. It's real
> easy to get garbage when you do that.

I was under the /strong/ impression references were treated 
syntaxically identical to their non-referenced version. Hence, you 
can't pass anything other than B to a function expecting a B&. I 
distinctly remember twigging I should no longer use C style pointers 
except when I explicitly expect D* and all ptrs to subclasses thereof 
(and where I think in the future I may pass a subclass).

> >If what you say about inheriting is bad, then I would say my code is
> >very foul. I have huge depths of subclassing as a normal part of
> >writing my code. 
> 
> Ouch, that's certainly a very dubious design style. It's a typical
> hacker's style, and it comes from the Smalltalk world, but it's
> generally inappropriate for C++ or Java or any other statically typed
> OO language.

Can you point me to resources explaining why this is bad and not just 
a question of individual style? I would have thought it /better/ for 
statically typed languages because the compiler is given more 
knowledge with which to optimise.

> >It's not just me - I picked up the idea from Qt 
> >(http://www.trolltech.com/). I've basically learned C++ by copying
> >their example.
> 
> If you're building an application framework (especially if you're
> building a GUI framework), you might have chosen a reasonable piece of
> software to learn from. If you're simply *using* a framework to build
> some other app, you've chosen poorly.

I'd like to think I accumulate ideas from whatever existing source I 
look at. It is after all how I originally taught myself how to 
program. Again, I'd like to know precisely why this style would be a 
poor choice for some other app.

> >No all he/she did wrong was assume no one would ever want to subclass
> > their class. Not only is that arrogant and assumes themselves
> >infallible, it seems to me bad practice and against the whole idea of
> > software reusability.
> 
> You seem to have a wrong notion of how reusability and inheritance are
> supposed to mix. Inheritance is not "for" reuse. One does *not*
> inherit from something to reuse that thing.

I really don't get you here. I reread the appropriate sections in 
your FAQ and I *still* don't get this. I can't escape thinking that 
what I and you mean by "reusing code" is not the same thing - for me, 
whenever you inherit something you inherit its structure (API) and 
its code - that, to me, is reusing already written and tested code 
which is a good thing. Hence inheritance = code reuse.

> >> Sorry, but this is a very poor solution. It certainly works in a
> >> few
> >> cases, but it violates the first premise of pluggable software. The
> >> three solutions described in the FAQ are much better: they handle
> >> all
> >> the trivial cases that yours handles, and in addition they handle
> >> the
> >> more sophisticated "plugability" cases.
> >
> >I'll review your FAQ's suggestions again. Maybe I missed something,
> >but they all seemed to have fairly substantial caveats.
> 
> They all do. But at least they all solve the pluggability problem,
> which is typically the core reason for using this sort of syntax /
> technique.

Pluggability = ability to link in or out a "module" of code easily 
yes?

If so, static class constructs worry me because I can't guarantee 
their order before main(). My approach solves this, and hence answers 
the main caveat your FAQ stated.

> This is an interesting example. Please do three things:
> 
> 1. Send me the code for template class QList (Qt has a template called
> QValueList, but I didn't find one called QList).

Yeah Trolltech renamed QList to QPtrList in Qt 3.0.

> 2. What is the type
> of QCollection::Item?

MSVC thinks it's a void *. QGList I included should say more on this.

> 3. What is the '___' in TSortedList<____> that
> caused this error? Or are you saying it always generates an error
> independent of any TSortedList<___> usage?? If the latter, better
> send me the whole TSortedList template.

It's the latter. I've commented out the original code in what I've 
sent to get it to compile. If you compare it to QSortedList.h, the 
two are almost identical (which is intentional) but QSortedList.h 
compiles perfectly whereas mine stops complaining with:

> d:\atoms\tclient\include\tsortedlist.h(57) : error C2678: binary '=='
> : no operator defined which takes a left-hand operand of type 'class
> type' (or there is no acceptable conversion)
> d:\atoms\tclient\include\tsortedlist.h(56) : while compiling
> class-template member function 'int __thiscall
> TSortedList<class type>::compareItems(void *,void *)'
> d:\atoms\tclient\include\tsortedlist.h(58) : error C2678: binary '<' :
> no operator defined which takes a left-hand operand of type 'class
> type' (or there is no acceptable conversion)
> d:\atoms\tclient\include\tsortedlist.h(56) : while compiling
> class-template member function 'int __thiscall
> TSortedList<class type>::compareItems(void *,void *)'

This is at template definition, not at template use.

> >Hardly important unless Visual Studio .NET has the same problem. MS
> >no longer consider MSVC6 a primary support product.
> 
> Not true. Many companies will continue to use MS VC++ 6 for years to
> come. I know a company that's still using MS VC++ version 1.62 for
> some of their embedded systems programming.

Companies may continue to use a product, but it's not in Microsoft's 
commercial interests to encourage them. Based on historical 
precident, it is extremely clear Microsoft take a bug much more 
seriously if it's in the current top-of-the-line product. Bugs in 
older products are more likely to be fixed if (a) new product's fix 
can be retro-engineered easily and (b) if their reputation would 
suffer if they didn't. I'm guessing (a) won't apply given the likely 
substantial redesign to accommodate C#.

> >How, might I ask, would you suggest you implement try...finally
> >without macros in "the C++ way of doing things"? 
> 
> Use the C++ idiom that "destruction is resource reclamation."
> 
> >I am assuming you 
> >will surely agree multithreaded programming is not fun without 
> >try...finally (if you don't, I want reasons, it'll be interesting to
> >see what you'd come up with).
> 
> Constructor/destructor <==> resource acquisition/reclamation.

That is a cracking idea I am kicking myself for not having thought of 
earlier. I was already concerned about the overhead of throwing an 
exception every try...finally, but your approach is far simpler and 
more efficient. It'll require quite a lot of code refitting, but I 
think it's worth it.

Thank you!

> >I used to read the reports of the ANSI committee meetings regarding
> >C++ as it was still being formalised and it always struck me as being
> >an awful hodge-podge. The more I learn about it, the more I realise I
> > was correct!
> 
> Have you ever been on *any* ANSI or ISO standardization committee? If
> not, it must be easy for you to sit there with zero experience and
> throw insults at the hard work of others who have selflessly
> sacrificed their time and money to do something big.

I apologise if you interpreted my words as throwing insults for they 
were not intended as such. I have the utmost respect and admiration 
for any standardisation committee (with possible exception of the 
POSIX threads committee, their poor design really screws C++ stack 
unwinding which is unforgiveable given how recently it was designed).

However, this does not changed my statement that C++ is an awful 
hodge-podge. I am not saying everyone involved in standardisation 
didn't move heaven and earth to make things as good as they could, 
but with an albatross like keeping existing code compatibility with 
AT&T C++ and C there was only so much that could be done. I remember 
the passionate debates about what compromises to strike well.

Put it this way: when you try something which seems logical in C it 
generally works the way you think it should. The same in C++ is much 
less true - I keep finding myself running into limitations which have 
no good reason. For example, the concept of destination type seems to 
have no effect in C++ eg;

TQString foo;
foo="Hello world";

Now TQString is a subclass of QString, and both have const char * 
ctors. The compiler will refuse to compile the above code because 
there are two methods of resolving it. Now, to me, that seems stupid 
because quite clearly the destination type is TQString and the 
shortest route to that is to use the TQString const char * ctor ie; I 
clearly am inferring to use the shortest route. The same sort of 
thing applies to overloading functions - you cannot overload based on 
return type, something I find particularly annoying.

> >I never had much experience with it, but Objective C 
> >always seemed a lot cleaner.
> 
> If ObjC is so much better, why is it so unpopular?

Lots of reasons. If I remember correctly, there were many problems 
with the run-time library on different platforms. There were issues 
regarding Next and Apple and all that. Of course, as well, there were 
culture issues - programmer inclinations. Also, there was good 
competition between many C++ vendors which brought C++ tools to a 
decent quality pretty quickly.

Computer history is strewn with cases of an inferior product 
destroying a superior product. It's hardly unique.

> >I take it you come from a Smalltalk background? 
> 
> Not at all. My C++ "smells like" C++, not like Smalltalk or assembler
> or anything else. Similarly my C code smells like C code, and it uses
> C idioms, etc., and my Java smells like Java, etc. I am language
> neutral, e.g., I've been a member of both the ANSI C++ and ANSI
> Smalltalk committees.

In which case you are a better programmer than I. I essentially 
program the same in any language using an internal methodology and my 
measure of my liking a language is how little it distorts what I 
actually want to do (hence my strong dislike of Java and 
VisualBasic). Nothing I program is what other people call a typical 
style of that language. You may think that irresponsible and arrogant 
of me, but I know it is an innate quality of mine - it's the same 
when I learn human languages (I still retain my own speech formation 
and pronounciation irrespective).

Hence, I am more of a functional programmer than anything else. It is 
my dream to some day design an imperitive/functional hybrid language 
which would perfectly reflect how I like to program.

> >I know this will 
> >sound approaching blasphemous - and I don't mean at all to be 
> >offensive, but merely to garner an opinion - but I have always 
> >considered OO to be a good way of organising maintainable source but
> >really crap for designing code.
> 
> Another really big error. OO is primarily a design approach. The
> concept of "OO programming" is very close to a misnomer, since OO
> programming cannot stand on its own - it needs OO *design*.

No, I must disagree with you there: design is independent of 
language. I have never agreed with OO design as my university 
lecturers found out - I quite simply think it's wrong. Computers 
don't work naturally with objects - it's an ill-fit.

What computers do do is work with data. If you base your design 
entirely around data, you produce far superior programs. Now I will 
agree OO is good for organising source for improved maintainability, 
but as a design approach I think it lacking.

An example: take your typical novice with OO. Tell them the rules and 
look at what they design. Invariably, pure OO as designed against the 
rules is as efficient as a one legged dog. In fact, in my opinion, OO 
experience is actually learning when to break pure OO and experienced 
OO advocates do not realise that they so automatically break the pure 
application of what they advocate.

A practical example: at university, we had to design a program to 
sort post office regional codes. The typical class effort, for which 
they received top marks, sorted the list in about ten to twenty 
seconds. My effort did it so quickly there wasn't a delay in the 
command prompt returing - and may I add, I received a bare pass mark 
because I adopted a data-centric solution and not an OO one. Now I 
couldn't fault that (the story of my entire degree), but it painfully 
reminded me of how OO is fundamentally incorrect for computers - good 
for humans, but not computers.

Anyway, I've enclosed the files you requested plus others I thought 
would aid you. TSortedList.cpp should almost stand alone against Qt 
3.0 - at least, anything not defined should have an obvious 
equivalent.

Cheers,
Niall




From: "Marshall Cline" <xxxxx@xxxxxxxxx.xxx>
To: "'Niall Douglas'" <xxx@xxxxxxx.xxx>
Subject: RE: Comments on your C++ FAQ
Date: Sun, 28 Jul 2002 22:31:36 -0500

Niall Douglas wrote:
>On 27 Jul 2002 at 20:26, Marshall Cline wrote:
>
>>>I had thought one was meant to encourage reusability in every line of
>>> C++ you wrote? Where possible obviously.
>>
>>Nope. In fact, that approach normally causes projects to fail.
>>
>>Instead of encouraging reusability in every line of C++ code (where 
>>possible), the goal is to be a responsible professional who sometimes 
>>invests effort in a future pay-back (AKA reuse), and sometimes does 
>>not. A responsible professional does not invest in a future pay-back 
>>when the ROI isn't right (return-on-investment) or when doing so would
>>add unacceptable risk or cost or time to the current project. 
>>Balancing the future with the present is a very subtle task, but that 
>>is the task of a responsible professional.
>
>I think we're actually agreeing here, but it's a case of opinion 
>dictating emphasis. I tend to always overengineer because I believe 
>15% extra development time halves your debugging time and quarters 
>augmentation time and many projects suffer from fuzzy definition, so 
>this approach makes sense (and I have used it successfully many 
>times).
>
>However, I also completely agree with your statement.

Sounds like the difference may be one of degrees, as you said.

I spoke last time about being a prisoner of our pasts. My past includes
acting as "senior technology consultant" to IBM throughout North
America, which meant advising on product strategy, mentoring, and (most
relevant to this situation) performing internal audits. The audits
included a number of important engagements with IBM's clients, and
required me to perform assessments of people and technology. During
these audits and assessments, I saw a lot of large projects that failed
because of overengineering. Many of the technologists on these sick or
dead projects had a similar perspective to what you articulated above.
Their basic approach was often that overengineering is better than
underengineering, that it's cheaper in the long run, and perhaps cheaper
in the short run, so let's overengineer just in case.

As a result of seeing in excess of one hundred million dollars worth of
effort (and numerous careers) washed down the drain, I tend to make sure
there is a realistic ROI before adding any effort that has a
future-payback.


>>>To that end, and I will admit I have only worked on small C++
>>>projects (less than 5k lines), it has always appeared to me not
>>>hugely difficult to structure your code appropriately to ensure
>>>maximum reusability.
>>
>>Everyone, including you, is to some extent a prisoner of their past. 
>>Your experience with very small projects limits your ability to 
>>understand how things work in the real world with large projects. I 
>>don't fault you for your lack of experience with large projects, but 
>>in a similar way you must not presume that your small-system 
>>experiences are applicable or relevant to the way things happen in 
>>that other world.
>
>Remember I have worked on >30k C projects and a number of obscenely 
>large all-assembler projects and I've been doing all this 
>successfully for a decade now. 

Okay, I didn't realize that earlier. That makes some sense now.


>Now I don't claim for a moment to know 
>much about C++, but in the end code is code and while organising C++ 
>like assembler might not be a good idea it's still better than no 
>organisation at all. Hence, by analogy, I believe my considerable 
>prior experience does give me an advantage overall, 

Certainly true.

>although I will 
>freely admit some things will have to be unlearned for C++. 

Also true.

>Unfortunately, that is something that only comes with experience and 
>time - although talking with people like yourself, examining existing 
>code and reading resources like your C++ FAQ accelerate the process.
>
>>>You can help it of course by trying to ensure subclasses never undo 
>>>or redo operations the base class did.
>>
>>No, not at all.
>>
>>I don't think you understand what the slicing problem (AKA chopped
>>copies) is all about. I suggest you read the FAQ on that one.
>
>No I didn't, but I do now. It's strongly related to below.

Agreed.


>>>I'm afraid I don't understand covariant parameter on the assignment 
>>>operator.
>>
>>I don't think the FAQ covers this one. I'll try to give a very brief 
>>overview. Suppose someone inherits D from B, and passes a D to a 
>>function expecting a B& (reference-to-B). If that function assigns to
>>the B&, then only the B part of the D object is changed, and the 
>>object often goes into a nonsensical state. It's kind of like genetic
>>engineering, where biologists scoop out the chromosomes from one 
>>animal, and replace them with the chromosomes of another. It's real 
>>easy to get garbage when you do that.
>
>I was under the /strong/ impression references were treated 
>syntaxically identical to their non-referenced version. Hence, you 
>can't pass anything other than B to a function expecting a B&. I 
>distinctly remember twigging I should no longer use C style pointers 
>except when I explicitly expect D* and all ptrs to subclasses thereof 
>(and where I think in the future I may pass a subclass).

Hopefully this new understanding about references will help your coding.
In any case, I agree that references should be used more often than
pointers, but for a different reason. The reason is that references are
restricted compared to pointers, and that restriction is (often) a good
thing. A pointer can be NULL, but a reference cannot (legally) be NULL,
so if you have a function that must not be passed NULL, the easy way to
make that explicit is for the function's parameter to be a reference
rather than a pointer. That way there's one less condition to test for
and one less 'if' at the beginning of your function.


>>>If what you say about inheriting is bad, then I would say my code is 
>>>very foul. I have huge depths of subclassing as a normal part of 
>>>writing my code.
>>
>>Ouch, that's certainly a very dubious design style. It's a typical 
>>hacker's style, and it comes from the Smalltalk world, but it's 
>>generally inappropriate for C++ or Java or any other statically typed 
>>OO language.
>
>Can you point me to resources explaining why this is bad and not just 
>a question of individual style? 

Sure no problem. Start with our book ("C++ FAQs", Addison Wesley), then
go to Scott Meyer's books ("Effective C++" and "More Effective C++",
also Addison Wesley), and probably most any other book that deals with
design/programming style in C++.

>I would have thought it /better/ for 
>statically typed languages because the compiler is given more 
>knowledge with which to optimise.

Nope, it's a very Smalltalk-ish style, and it causes lots of problems in
a statically typed OO language since today's statically typed OO
languages (C++, Java, Eiffel, etc.) equate inheritance with subtyping.
In any language that equates inheritance with subtyping, using
inheritance as a reuse mechanism, as opposed to using inheritance
strictly for subtyping purposes, ultimately causes lots of design and
extensibility problems. It can even effect performance.


>>>It's not just me - I picked up the idea from Qt
>>>(http://www.trolltech.com/). I've basically learned C++ by copying
>>>their example.
>>
>>If you're building an application framework (especially if you're 
>>building a GUI framework), you might have chosen a reasonable piece of
>>software to learn from. If you're simply *using* a framework to build
>>some other app, you've chosen poorly.
>
>I'd like to think I accumulate ideas from whatever existing source I 
>look at. It is after all how I originally taught myself how to 
>program. Again, I'd like to know precisely why this style would be a 
>poor choice for some other app.

Mostly because it creates all sorts of problems for users. Take, for
example, your TSortedList class. You have removed the append() and
prepend() methods because you can't implement them properly in your
class. Nonetheless someone might easily pass an object of your derived
class via pointer or reference to its base class, and within that
function the methods you tried to remove are suddenly available again,
only this time with potentially disastrous results. Take, for example,
this function:

void f(QList<Foo>& x)
{
x.prepend(...); // change '...' to some Foo object
x.append(...); // change '...' to some Foo object
}

Now suppose someone passes a TSortedList object to this function:

void g()
{
TSortedList<Foo> x;
f(x);
...what happens here??
}

In the '...what happens here??' part, anything you do to the TSortedList
is likely to cause problems since the list might not be sorted. E.g.,
if f() adds Foo objects to 'x' in some order other than the sorted
order, then the '...what happens here??' part is likely to cause serious
problems.

You can't blame this problem on references, since the same exact thing
would happen if you changed pass-by-reference to pass-by-pointer.

You can't blame this problem on the C++ compiler, because it can't
possibly detect one of these errors, particularly when the functions f()
and g() were part of two different .cpp files ("compilation units") that
were compiled on different days of the week.

You can't blame this problem on the author of g(), because he believed
the contract of TSortedList. In particular, he believed a TSortedList
was a kind-of a QList. After all that is the meaning of subtyping, and
subtyping is equated in C++ with inheritance. The author of g() simply
believed what you said in this line: 'class TSortedList : public QList',
and you can't blame him for believing what you said.

You can't blame this problem on the author of f(), because he believed
the contract of QList. In particular, he believed he can append()
and/or prepend() values in any order onto any QList. Besides, he wrote
and compiled his code long before you even thought of deriving
TSortedList, and by the rules of extensibility (e.g., see the sections
on Inheritance in the C++ FAQ, or any similar chapters in any book on
the subject), he is not required to predict the future - he is supposed
to be able to write code based on today's realities, and have tomorrow's
subclasses obey today's realities. That is the notion of is-a, and is
codified in many places, including the C++ FAQ, Liskov's
Substitutability Principle ("LSP"), and many other places.

So who is at fault? Ans: the author of TSortedList. Why is the author
of TSortedList at fault? Because of false advertising: he said
TSortedList was a kind-of a QList (or, using precise terminology, that
TSortedList was substitutable for QList), but in the end he violated
that substitutability by removing methods that were promised by QList.

To work a more down-to-earth example, suppose all Plumbers have a
fixPipes() method, and further suppose I claim to be a kind-of Plumber
but I don't have a fixPipes() method. My claim is false: I am not a
kind-of Plumber, but am instead a fraud. Any company that contracted
for my services under my false advertising would be in the right by
claiming I have defrauded them, after all, I claimed to be something I
am not. Similarly a used-car salesman that sells a car which lacks
brakes and/or an engine is in the wrong. Society wouldn't blame the
car, the engine, or the driver; they would blame the salesman for
falsely representing that a particular kind of car is substitutable for
the generally agreed-upon definition of "car." The good news is that in
OO, we don't have to rely on "generally agreed upon definitions."
Instead we look at the base class and that *precisely* defines what is
or is not a "car" (or in this case, a QList).


>>>No all he/she did wrong was assume no one would ever want to subclass
>>>their class. Not only is that arrogant and assumes themselves 
>>>infallible, it seems to me bad practice and against the whole idea of
>>>software reusability.
>>
>>You seem to have a wrong notion of how reusability and inheritance are
>>supposed to mix. Inheritance is not "for" reuse. One does *not* 
>>inherit from something to reuse that thing.
>
>I really don't get you here. I reread the appropriate sections in 
>your FAQ and I *still* don't get this. I can't escape thinking that 
>what I and you mean by "reusing code" is not the same thing 

Agreed: we are saying totally different things for reuse. What I'm
saying is that inheritance is NOT "for" reuse. Inheritance is for
subtyping - for substitutability. Ultimately inheritance is so my code
CAN *BE* reused; not so it can reuse. Has-a (AKA composition AKA
aggregation) is for reuse. Is-a (AKA inheritance AKA substitutability)
is for BEING REUSED.

Put it this way: you inherit from "it" to *be* what it *is*, not simply
to have what it has. If you simply want to have what it has, use has-a
(AKA aggregation AKA composition).

> - for me, 
>whenever you inherit something you inherit its structure (API) and 
>its code - that, to me, is reusing already written and tested code 
>which is a good thing. Hence inheritance = code reuse.

We are totally different here. And unfortunately the extended
experience of a vast majority of C++ programmers has proven your
approach is very short-sighted.

(BTW I will quickly add that your approach is perfectly fine in a very
small project, since in very small projects you can control the damage
of "improper" or "bad" inheritance. Some of my colleagues won't agree
and will say your approach is *always* wrong, and in a sense I would
agree. But from a practical basis, your approach doesn't really cost
too much in the way of time, money, or risk with a small enough project.
If you use your approach on a big project, however, everyone seems to
agree, and everyone's experience seems to prove, that your approach is
very dangerous and expensive.)


>
>>>> Sorry, but this is a very poor solution. It certainly works in a 
>>>> few cases, but it violates the first premise of pluggable software.
>>>> The three solutions described in the FAQ are much better: they 
>>>> handle all
>>>> the trivial cases that yours handles, and in addition they handle
>>>> the
>>>> more sophisticated "plugability" cases.
>>>
>>>I'll review your FAQ's suggestions again. Maybe I missed something, 
>>>but they all seemed to have fairly substantial caveats.
>>
>>They all do. But at least they all solve the pluggability problem, 
>>which is typically the core reason for using this sort of syntax / 
>>technique.
>
>Pluggability = ability to link in or out a "module" of code easily 
>yes?

Yes, sort of. The idea is to add a new feature without changing *any*
existing code -- to add something new without changing any existing .cpp
file, .h file, or any other chunk of code anywhere. Think Netscape
plug-ins or Internet Explorer plug-ins and you'll see what I mean:
people don't need to get a new version of Netscape / IE when some
company creates a new plug-in. People can plug the new plug-in into
their old browser without ANY change to ANY line of code within the
browser itself.


>If so, static class constructs worry me because I can't guarantee 
>their order before main(). 

That's why we use things like construct-on-first-use: so we don't rely
on their order before main().


>My approach solves this, and hence answers 
>the main caveat your FAQ stated.

Yes, but the cost of that benefit is to sacrifice plugability.


>>This is an interesting example. Please do three things:
>>
>>1. Send me the code for template class QList (Qt has a template called
>>QValueList, but I didn't find one called QList).
>
>Yeah Trolltech renamed QList to QPtrList in Qt 3.0.
>
>>2. What is the type
>>of QCollection::Item?
>
>MSVC thinks it's a void *. QGList I included should say more on this.
>
>>3. What is the '___' in TSortedList<____> that
>>caused this error? Or are you saying it always generates an error 
>>independent of any TSortedList<___> usage?? If the latter, better 
>>send me the whole TSortedList template.
>
>It's the latter. I've commented out the original code in what I've 
>sent to get it to compile. If you compare it to QSortedList.h, the 
>two are almost identical (which is intentional) but QSortedList.h 
>compiles perfectly whereas mine stops complaining with:
>
>>d:\atoms\tclient\include\tsortedlist.h(57) : error C2678: binary '=='
>>: no operator defined which takes a left-hand operand of type 'class 
>>type' (or there is no acceptable conversion)
>> d:\atoms\tclient\include\tsortedlist.h(56) : while compiling
>> class-template member function 'int __thiscall
>> TSortedList<class type>::compareItems(void *,void *)'
>>d:\atoms\tclient\include\tsortedlist.h(58) : error C2678: binary '<' :
>>no operator defined which takes a left-hand operand of type 'class 
>>type' (or there is no acceptable conversion)
>> d:\atoms\tclient\include\tsortedlist.h(56) : while compiling
>> class-template member function 'int __thiscall
>> TSortedList<class type>::compareItems(void *,void *)'
>
>This is at template definition, not at template use.

Very, very strange. Are you sure the code was compiled exactly as you
sent it to me? I.e., the definition for compareItems() was within the
class body as you had it in the .h file? I don't have MS VC++ 6
installed right now or I'd check it myself, but on the surface this
seems to be a totally bizarre error message since 'type' is the template
parameter, and therefore the compiler could never check to see if it had
an operator == or < or anything else (including simple assignment!).

In any case, your solution will, unfortunately, cause serious problems
if you use a TSortedList<Foo> when Foo doesn't inherit from
TSortedListItem. If you use TSortedList<Foo> when Foo doesn't inherit
from TSortedListItem, or if Foo multiply inherits from something along
with TSortedListItem, then it will end up calling a rather random
virtual function and it will end up doing rather random things. I would
consider this to be a very fragile piece of code, at best. The compiler
shouldn't stop you from doing what you want it to do.

**DING** I just found your bug. In TSortedList.cpp, all your methods
are listed like this:

int TSortedList<class type>::find( const type *d )
{
...
}

But instead they should be listed like this:

template<class type>
int TSortedList::find( const type *d )
{
...
}

Your syntax ("TSortedList<class type>::") explains everything, including
the bizarre use of 'class type' within the error messages. What
happened is that the compiler saw you using a TSortedList<class type>,
and it therefore tried to compile all the virtual methods within
TSortedList<class type>. When it saw that 'type' really isn't a genuine
class type, it complained (eventually) that the class called 'type'
doesn't have an == or < operator.

When you fix this problem, you will end up with additional problems,
mainly because you have moved template code into a .cpp file. The C++
FAQ covers this issue; suggest you read that for details. (It's
probably in the section on templates and/or containers.)


>>>Hardly important unless Visual Studio .NET has the same problem. MS 
>>>no longer consider MSVC6 a primary support product.
>>
>>Not true. Many companies will continue to use MS VC++ 6 for years to 
>>come. I know a company that's still using MS VC++ version 1.62 for 
>>some of their embedded systems programming.
>
>Companies may continue to use a product, but it's not in Microsoft's 
>commercial interests to encourage them. Based on historical 
>precident, it is extremely clear Microsoft take a bug much more 
>seriously if it's in the current top-of-the-line product. Bugs in 
>older products are more likely to be fixed if (a) new product's fix 
>can be retro-engineered easily and (b) if their reputation would 
>suffer if they didn't. I'm guessing (a) won't apply given the likely 
>substantial redesign to accommodate C#.

My point is that the issue is (was) important to thousands and thousands
and thousands of C++ programmers. Yes Microsoft might choose to ignore
it, but that doesn't mean the issue is no longer relevant or important.


>>>How, might I ask, would you suggest you implement try...finally 
>>>without macros in "the C++ way of doing things"?
>>
>>Use the C++ idiom that "destruction is resource reclamation."
>>
>>>I am assuming you
>>>will surely agree multithreaded programming is not fun without 
>>>try...finally (if you don't, I want reasons, it'll be interesting to
>>>see what you'd come up with).
>>
>>Constructor/destructor <==> resource acquisition/reclamation.
>
>That is a cracking idea I am kicking myself for not having thought of 
>earlier. 

I'm glad I could help.

>I was already concerned about the overhead of throwing an 
>exception every try...finally, but your approach is far simpler and 
>more efficient. It'll require quite a lot of code refitting, but I 
>think it's worth it.
>
>Thank you!

No problem. BTW I consider this an idiom of C++. Part of being
competent in using a language is knowing the syntax and semantics, but
another critical part is knowing the idioms of the language. You're an
expert (I assume) in certain varieties of assembler, and perhaps also in
C. As a competent C programmer, you know the C idioms, such as

while (*dest++ = *src++)
;

This, of course, is the idiom that copies an array of things pointed to
by 'src' into an array pointed to by 'dest', and it stops copying after
it copies the item whose value is zero. If the arrays are arrays of
'char', this is equivalent to strcpy(), since it copies everything
including the terminating '\0'. Obviously other types are similar.

Other idioms in C abound, such as Duff's device:

while (n >= 0) {
switch (n) {
default: xyzzy;
case 7: xyzzy;
case 6: xyzzy;
case 5: xyzzy;
case 4: xyzzy;
case 3: xyzzy;
case 2: xyzzy;
case 1: xyzzy;
}
n -= 8;
}

If you replace 'xyzzy' with some piece of code, this applies that piece
of code exactly 'n' times, but it is much faster than the equivalent:

while (n-- > 0) {
xyzzy;
}

Since the latter executes 'n' decrements, 'n' comparisons, and 'n'
conditional jumps, whereas Duff's device executes only 1/8'th as many
decrements, comparisons, or conditional jumps. Of course the key is
that there is no 'break' statement after each 'case' -- each case "falls
through" to the next case. The other key, obviously, is that the cases
are listed in backwards order.

The point, of course, is that this is another idiom of C, and competent
C programmers know these sorts of things. As you become better and
better at C++, you will learn the idioms of C++, and this is one of
them.


>>>I used to read the reports of the ANSI committee meetings regarding
>>>C++ as it was still being formalised and it always struck me as being
>>>an awful hodge-podge. The more I learn about it, the more I realise I
>>>was correct!
>>
>>Have you ever been on *any* ANSI or ISO standardization committee? If
>>not, it must be easy for you to sit there with zero experience and 
>>throw insults at the hard work of others who have selflessly 
>>sacrificed their time and money to do something big.
>
>I apologise if you interpreted my words as throwing insults for they 
>were not intended as such. 

Apology accepted. I was a little ticked off, but mainly because I get
frustrated when people who really don't know what it's like to steer a
very popular programming language assume they could do a better job.
I'm better now :-)

>I have the utmost respect and admiration 
>for any standardisation committee (with possible exception of the 
>POSIX threads committee, their poor design really screws C++ stack 
>unwinding which is unforgiveable given how recently it was designed).

Not knowing any better, I'd guess they were dealing with very subtle
constraints of existing code or existing practice. Most people on those
sorts of committees are competent, plus the entire world (literally) has
a chance to comment on the spec before it gets finalized, so if there
were gaping holes *that* *could* *be* *fixed* (e.g., without breaking
oodles and oodles of existing code), I'm sure *someone* *somewhere* in
the world would have pointed it out.

Like I said, I don't know the details of this particular situation, but
I would guess that they were fully aware of the problem, that they
investigated all the alternatives, and that they chose the "least bad"
of the alternatives.

>
>However, this does not changed my statement that C++ is an awful 
>hodge-podge. I am not saying everyone involved in standardisation 
>didn't move heaven and earth to make things as good as they could, 
>but with an albatross like keeping existing code compatibility with 
>AT&T C++ and C there was only so much that could be done. I remember 
>the passionate debates about what compromises to strike well.

Yes, existing code puts everyone in a very difficult position, and often
causes compromises. But that's the nature of the beast. The cost of
breaking existing code is much, much greater than canonizing it. Yes
there are compromises with purity, but without those compromises, no one
would use standard C++. After all, there is no law that requires
vendors to implement compilers or libraries that conform to the
standard, so the only way for things to work is for everyone (including
the standardization committees, the compiler vendors, the library
vendors, etc.) to do everything possible to avoid forcing everyone to
rewrite their code. If even 10% of the world's C++ code had to get
rewritten, I predict the C++ standard would get rejected by large
companies and therefore those large companies would ask their vendors to
support the old-fashioned, non-standard syntax/semantics, and all the
good that would have come as a result of having a standard would be for
naught.

>
>Put it this way: when you try something which seems logical in C it 
>generally works the way you think it should. 

Really? No less a light as Dennis Ritchie bemoans the precedence of
some of the operators, and certainly the rather bizarre use of 'static'
has caused more than one C programmer to wonder what's going on. Plus
the issue of order of evaluation, or aliasing, or any number of other
things has caused lots of consternation.

But I guess I agree to this extent: C++ is larger than C, and as such
C++ has more confusing issues. I believe that C99 is causing some of
those same problems, however, since C99 is much bigger than its
predecessor. The same thing will be true of C++0x: it will be bigger
and have more compromises.


>The same in C++ is much 
>less true - I keep finding myself running into limitations which have 
>no good reason. For example, the concept of destination type seems to 
>have no effect in C++ eg;
>
>TQString foo;
>foo="Hello world";
>
>Now TQString is a subclass of QString, and both have const char * 
>ctors. The compiler will refuse to compile the above code because 
>there are two methods of resolving it. 

I may not understand what you mean by "two methods of resolving it," but
I don't understand why the compiler doesn't do what you think it should
above. If TQString has a const char* ctor, then I think that should
promote "Hello world" to a TQString and then use TQString's assignment
operator to change 'foo'.

>Now, to me, that seems stupid 
>because quite clearly the destination type is TQString and the 
>shortest route to that is to use the TQString const char * ctor ie; I 
>clearly am inferring to use the shortest route. The same sort of 
>thing applies to overloading functions - you cannot overload based on 
>return type, something I find particularly annoying.

Another C++ idiom lets you do just that. I'll have to show that one to
you when I have more time. Ask if you're interested.

>
>>>I never had much experience with it, but Objective C
>>>always seemed a lot cleaner.
>>
>>If ObjC is so much better, why is it so unpopular?
>
>Lots of reasons. If I remember correctly, there were many problems 
>with the run-time library on different platforms. There were issues 
>regarding Next and Apple and all that. Of course, as well, there were 
>culture issues - programmer inclinations. Also, there was good 
>competition between many C++ vendors which brought C++ tools to a 
>decent quality pretty quickly.
>
>Computer history is strewn with cases of an inferior product 
>destroying a superior product. It's hardly unique.

I agree. I guess my point is simply this: any popular language is going
to have warts that an unpopular language will not. Take Eiffel for
example. Way back when Eiffel was very young, Bertrand Meyer derided
C++'s 'friend' construct, claiming it violated encapsulation. Then he
began to get real users who were building real systems using Eiffel, and
suddenly he began to see how something like the 'friend' construct
actually *improves* encapsulation. So he added it to Eiffel. At first
the language seemed cleaner and simpler, then gradually it added more
stuff as it became more practical.

C++ is saddled with three basic goals: it tries to be a good procedural
programming language ("C++ as a better C"), and at the same time a good
OO language, and at the same time a good language for programming with
"generics." Trying to be a jack of all trades is difficult, and
ultimately involves compromises. However if you pointed out any
particular compromise, I could probably tell you why it was done and in
fact could (I hope!) make you realize that "cleaning up" that compromise
would cause more harm than good.

In any case, I agree that good products don't always win in the
marketplace.


>>>I take it you come from a Smalltalk background?
>>
>>Not at all. My C++ "smells like" C++, not like Smalltalk or assembler
>>or anything else. Similarly my C code smells like C code, and it uses
>>C idioms, etc., and my Java smells like Java, etc. I am language 
>>neutral, e.g., I've been a member of both the ANSI C++ and ANSI 
>>Smalltalk committees.
>
>In which case you are a better programmer than I. I essentially 
>program the same in any language using an internal methodology and my 
>measure of my liking a language is how little it distorts what I 
>actually want to do (hence my strong dislike of Java and 
>VisualBasic). Nothing I program is what other people call a typical 
>style of that language. You may think that irresponsible and arrogant 
>of me, but I know it is an innate quality of mine - it's the same 
>when I learn human languages (I still retain my own speech formation 
>and pronounciation irrespective).
>
>Hence, I am more of a functional programmer than anything else. 

Do you really mean "functional" or "procedural" here? The Functional
style is rather difficult to do in C++ (think Scheme). Functional
programming means never allowing any changes to any piece of data, so
instead of inserting something into a linked list, one creates a new
linked list and returns the new linked list that contains the new item.

>It is 
>my dream to some day design an imperative/functional hybrid language 
>which would perfectly reflect how I like to program.
>
>>>I know this will
>>>sound approaching blasphemous - and I don't mean at all to be 
>>>offensive, but merely to garner an opinion - but I have always 
>>>considered OO to be a good way of organising maintainable source but
>>>really crap for designing code.
>>
>>Another really big error. OO is primarily a design approach. The 
>>concept of "OO programming" is very close to a misnomer, since OO 
>>programming cannot stand on its own - it needs OO *design*.
>
>No, I must disagree with you there: design is independent of 
>language. 

Nope, not true at all. A design that works for Functional languages is
horrible for Procedural languages, and vice versa. And both those
designs are wholly inappropriate for OO languages, Logic-oriented
languages, or Constraint-oriented languages. In short, the paradigm
*very* much effects the design.

Try your belief out sometime. Try implementing your favorite program in
Prolog (logic-oriented) or Scheme (function-oriented) and see what
happens. Guaranteed that if your program is nontrivial, a radically
different design will emerge. Either that or you'll constantly be
fighting with the paradigm and the underlying language, trying to force,
for example, Prolog to be procedural.

I'll go further: design isn't even independent of language *within* a
paradigm. In other words, a design that is appropriate for Smalltalk is
typically inappropriate for C++, even when you are trying very hard to
use OO thinking throughout.

>I have never agreed with OO design as my university 
>lecturers found out - I quite simply think it's wrong. Computers 
>don't work naturally with objects - it's an ill-fit.
>
>What computers do do is work with data. If you base your design 
>entirely around data, you produce far superior programs. 

In your experience, this may be true. But trust me: it's a big world
out there, and in the *vast* majority of that world, your view is very
dangerous.

Be careful: you are painting yourself into a very narrow corner. You
may end up limiting your career as a result.


>Now I will 
>agree OO is good for organising source for improved maintainability, 
>but as a design approach I think it lacking.

You really should read "OO Design Patterns" by Gamma, et al (also
published by Addison Wesley). Read especially chapter 2. I think
you'll see a whole world of OO design -- and you'll see ways to use OO
at the design level that are totally different (and, I dare say, totally
superior) to the approach you are describing here.

Or take the example of IBM's AS/400. In the early 90s, IBM retained me
to train and mentor all their developers in Rochester MN (and some at
Endicott NY) because they had decided to rewrite the kernel of their
AS/400 operating system using OO design and C++. When we started, they
had a business problem: it took their people 9 months to add certain
features and capabilities. This particular category of "feature and
capability" was added often enough that they wanted to fix that problem.
But being the kernel of an operating system, they couldn't do *anything*
that added *any* overhead.

This project ended up being around half a person-millennium (150-200
developers over a 3 year period). I ended up training and mentoring
them all, and we had lots and lots of design sessions. When they were
finished, the things that used to take 9 months could be done by a
single person in less than a day. The success-story was written up in
Communications of the ACM -- it was the lead article in the Special
Issue on Object-Oriented Experiences. It was also written up in IEEE
Software and perhaps a few other places. (And, by the way, there was no
loss of performance as a result. That was *very* hard to achieve, but
we did it. In the end, customers gained 2x MIPS/dollar.)

The point is that these benefits came as result of OO *design*, not as a
result of programming-level issues.

One more example: UPS (another of my clients; in fact I was there just
last week) has new "rating" and "validation" rules that change every 6
months. For example, if Detroit passes a law saying it's no longer
legal to drive hazardous materials through its downtown area, the code
needs to change to prevent any package containing hazmat from going
through downtown Detroit. In their old system, which was built using
your style of C++, it took 5 months out of every 6 to integrate these
sorts of changes. Then someone created a framework using OO design (not
just C++ programming), and as a result, they could do the same thing in
2 weeks.

Okay, one more - this is the last one -- I promise :-) IBM has an 800
number you can call when you want to buy a ThinkPad or some other
equipment. This division generates *billions* of dollars per year, and
as a result, quality and performance were very important. But
flexibility was also important, because companies like Dell were
promoting their build-to-order systems, and were able to offer
on-the-spot deals that IBM's system couldn't match. Simply put, IBM's
high-performance and high-quality constraints were working against the
system's flexibility, and it was taking IBM wayyyyyy too long to add
promotional deals or other competitive ideas. (Their old approach was
built using non-OO *design* even though it was built using C++.)

When we were done with their system, they could create and install most
changes in minutes. All that without loss of performance and with an
improvement in quality/stability.


>An example: take your typical novice with OO. Tell them the rules and 
>look at what they design. Invariably, pure OO as designed against the 
>rules is as efficient as a one legged dog. 

The way you have learned OO, yes, it will have performance problems.
But the way I am proposing OO should be done, either it won't have
performance problems at all, or if it does, those problems will be
reparable.

>In fact, in my opinion, OO 
>experience is actually learning when to break pure OO and experienced 
>OO advocates do not realise that they so automatically break the pure 
>application of what they advocate.

We agree that purity is never the goal. Pure OO or pure procedural or
pure anything else. The goal is (or *should* be) to achieve the
business objectives. In my experience, OO *design* brings the real
value, and not just programming-level issues.


>A practical example: at university, we had to design a program to 
>sort post office regional codes. The typical class effort, for which 
>they received top marks, sorted the list in about ten to twenty 
>seconds. My effort did it so quickly there wasn't a delay in the 
>command prompt returing - and may I add, I received a bare pass mark 
>because I adopted a data-centric solution and not an OO one. Now I 
>couldn't fault that (the story of my entire degree), but it painfully 
>reminded me of how OO is fundamentally incorrect for computers - good 
>for humans, but not computers.

I agree with everything except your last phrase. OO design is good for
both people and computers.

Marshall




From: Niall Douglas <xxx@xxxxxxx.xxx>
To: "Marshall Cline" <xxxxx@xxxxxxxxx.xxx>
Subject: RE: Comments on your C++ FAQ
Date: Tue, 30 Jul 2002 22:39:24 +0200

On 28 Jul 2002 at 22:31, Marshall Cline wrote:

Firstly, may I ask your permission to distribute a digest of our 
conversation to others? I believe quite a few people could do with 
reading it because (and this may worry you) I am considered one of 
the better C++ people out of our class' graduates. If I didn't know, 
I'm very sure they didn't simply because it was never taught.

> I spoke last time about being a prisoner of our pasts. My past
> includes acting as "senior technology consultant" to IBM throughout
> North America, which meant advising on product strategy, mentoring,
> and (most relevant to this situation) performing internal audits. The
> audits included a number of important engagements with IBM's clients,
> and required me to perform assessments of people and technology. 
> During these audits and assessments, I saw a lot of large projects
> that failed because of overengineering. Many of the technologists on
> these sick or dead projects had a similar perspective to what you
> articulated above. Their basic approach was often that overengineering
> is better than underengineering, that it's cheaper in the long run,
> and perhaps cheaper in the short run, so let's overengineer just in
> case.

I think there are two types of overengineering: controlled and 
uncontrolled. The latter happens when the people doing the design 
aren't really sure what they're doing. The former happens when the 
designers take into proper account the likely extensions in the 
future, possible client changes in specification, ramifications on 
maintainability etc. and balance all of these against time of 
implementation, worth to the project etc. Essentially, what I am 
really saying, is if you spend plenty of time on *design* then your 
project comes in on time and within budget.

BTW, have you heard of extreme programming 
(http://www.extremeprogramming.org/)? Daft name, but it's an 
interesting looking way of managing and overseeing computer projects. 
It certainly is less intrusive than auditing, and establishes more 
trust between customer and provider.

> As a result of seeing in excess of one hundred million dollars worth
> of effort (and numerous careers) washed down the drain, I tend to make
> sure there is a realistic ROI before adding any effort that has a
> future-payback.

Again I think we're saying precisely the same thing with different 
words.

Let me give you a bit of background on myself (helps later on): My 
role in recent years is saving troubled projects. I am brought in 
when things have gone horribly wrong - for example, my last two 
positions were saving a handheld GPS project in Canada and saving a 
EuroFighter component test bench control software project here in 
Spain. Usually, I come in, assess the situation (code, employees and 
most importantly management) and fix it. In both projects, I have 
been spectacularly successful, albeit at the cost of my own job - to 
save a troubled project you need to work on many areas, but the most 
obstinate in my experience is management who employ a "pass the buck" 
methodology whilst firing good programmers to divert the blame. In 
the end, I always come up against what was killing the project 
beforehand, at which stage it's time to move on.

However, my background is in a little British computer called an 
Acorn which ran on ARM processors (nowadays Acorn is liquidated and 
ARM, its offshoot, is one of the bigger UK companies). Acorn's ran an 
OS called RISC-OS which was the last general purpose all-assembler OS 
ever written. And I will tell you now, it was vastly ahead of 
anything else at the time - and I include Unix. Obviously, everything 
in the system was designed around writing in assembler, and hence 
large applications (DTP, editors, spreadsheets, photo-editing, music 
composition etc.) often were entirely written in hand-coded ARM. 
Hence all of us did stuff which most people consider unlikely in 
assembler - for example, we used what you could call an object in 
that some code would have instance data and a ctor and destructor. We 
had the equivalent of virtual functions using API offset tables. Some 
silly people used self-modifying code, which is worse that goto's 
IMHO.

What is important to get from this is that until the US 
multinationals crushed our indigenous European computer industry, we 
were in many ways considerably ahead of the status quo. This is why I 
don't fit easily into boxes others like to assign me to.

> >>Ouch, that's certainly a very dubious design style. It's a typical
> >>hacker's style, and it comes from the Smalltalk world, but it's
> >>generally inappropriate for C++ or Java or any other statically
> >>typed OO language.
> >
> >Can you point me to resources explaining why this is bad and not just
> > a question of individual style? 
> 
> Sure no problem. Start with our book ("C++ FAQs", Addison Wesley),
> then go to Scott Meyer's books ("Effective C++" and "More Effective
> C++", also Addison Wesley), and probably most any other book that
> deals with design/programming style in C++.

Not being able to obtain these books easily (I live in Spain plus 
money is somewhat tight right now), I looked around the web for more 
on this. I specifically found what not to do when inheriting plus how 
deep subclassing usually results in code coupling increasing. Is that 
the general gist?

> >I would have thought it /better/ for 
> >statically typed languages because the compiler is given more 
> >knowledge with which to optimise.
> 
> Nope, it's a very Smalltalk-ish style, and it causes lots of problems
> in a statically typed OO language since today's statically typed OO
> languages (C++, Java, Eiffel, etc.) equate inheritance with subtyping.
> In any language that equates inheritance with subtyping, using
> inheritance as a reuse mechanism, as opposed to using inheritance
> strictly for subtyping purposes, ultimately causes lots of design and
> extensibility problems. It can even effect performance.

In other words, precisely the trap I was falling myself into. I 
should however mention that having examined my code, I was peforming 
this trap only in the areas where Qt wasn't providing what I needed. 
In the code generated entirely by myself, I tend to use a top-down 
approach with an abstract base class defining the reusable parts.

I should mention that much of the subclassing I have had to do will 
disappear with future versions of Qt as they have very kindly mostly 
agreed with my ideas. Hence, in fact, until v4.0, it's mostly stop-
gap code.

> >Again, I'd like to know precisely why this style would be a
> >poor choice for some other app.
> 
> Mostly because it creates all sorts of problems for users. Take, for
> example, your TSortedList class. You have removed the append() and
> prepend() methods because you can't implement them properly in your
> class. Nonetheless someone might easily pass an object of your
> derived class via pointer or reference to its base class, and within
> that function the methods you tried to remove are suddenly available
> again, only this time with potentially disastrous results. Take, for
> example, this function:
> 
> void f(QList<Foo>& x)
> {
> x.prepend(...); // change '...' to some Foo object
> x.append(...); // change '...' to some Foo object
> }
> 
> Now suppose someone passes a TSortedList object to this function:
> 
> void g()
> {
> TSortedList<Foo> x;
> f(x);
> ...what happens here??
> }

Err, prepend and append aren't virtual in the base class, so the base 
class' versions would be called. I had realised previously that it's 
a very bad idea to disable virtual inherited methods - or if you were 
to, you'd want a fatal exception in there to trap during debug.

> In the '...what happens here??' part, anything you do to the
> TSortedList is likely to cause problems since the list might not be
> sorted. E.g., if f() adds Foo objects to 'x' in some order other than
> the sorted order, then the '...what happens here??' part is likely to
> cause serious problems.
> 
> You can't blame this problem on references, since the same exact thing
> would happen if you changed pass-by-reference to pass-by-pointer.
> 
> You can't blame this problem on the C++ compiler, because it can't
> possibly detect one of these errors, particularly when the functions
> f() and g() were part of two different .cpp files ("compilation
> units") that were compiled on different days of the week.
> 
> You can't blame this problem on the author of g(), because he believed
> the contract of TSortedList. In particular, he believed a TSortedList
> was a kind-of a QList. After all that is the meaning of subtyping,
> and subtyping is equated in C++ with inheritance. The author of g()
> simply believed what you said in this line: 'class TSortedList :
> public QList', and you can't blame him for believing what you said.
> 
> You can't blame this problem on the author of f(), because he believed
> the contract of QList. In particular, he believed he can append()
> and/or prepend() values in any order onto any QList. Besides, he
> wrote and compiled his code long before you even thought of deriving
> TSortedList, and by the rules of extensibility (e.g., see the sections
> on Inheritance in the C++ FAQ, or any similar chapters in any book on
> the subject), he is not required to predict the future - he is
> supposed to be able to write code based on today's realities, and have
> tomorrow's subclasses obey today's realities. That is the notion of
> is-a, and is codified in many places, including the C++ FAQ, Liskov's
> Substitutability Principle ("LSP"), and many other places.
> 
> So who is at fault? Ans: the author of TSortedList. Why is the
> author of TSortedList at fault? Because of false advertising: he said
> TSortedList was a kind-of a QList (or, using precise terminology, that
> TSortedList was substitutable for QList), but in the end he violated
> that substitutability by removing methods that were promised by QList.

Worse I think would be redefining inherited methods to do something 
completely different. But yes, I understand now.

The rule should be that subclasses must always behave like their base 
class(es). Code reuse should be done via composition.

Hence, that TSortedList should now derive off QGList which doesn't 
have the append and prepend methods so I can safely ensure it does 
what its parent does.

> Put it this way: you inherit from "it" to *be* what it *is*, not
> simply to have what it has. If you simply want to have what it has,
> use has-a (AKA aggregation AKA composition).

Yes, I definitely understand you now. It is a pity explanations like 
this weren't more universally available, because I know a lot of C++ 
programmers learned from MSVC's online help (eg; initially me - it's 
where I learned C from as well). I however did subscribe to the 
Association of C & C++ Users which is why I know about 
standardisation debates - but even though at the time I subscribed 
for the C coverage, I did read the C++ sections.

A lot of people say pointers are the devil's tool in C and I have met 
a disturbing number of programmers who just don't understand them. 
However, it seems to me pointers are child's play in unenforced 
danger when compared to problems like you and your FAQ have 
mentioned. If more warning were out there, we'd all have less 
problems with other people's code.

> (BTW I will quickly add that your approach is perfectly fine in a very
> small project, since in very small projects you can control the damage
> of "improper" or "bad" inheritance. Some of my colleagues won't agree
> and will say your approach is *always* wrong, and in a sense I would
> agree. But from a practical basis, your approach doesn't really cost
> too much in the way of time, money, or risk with a small enough
> project. If you use your approach on a big project, however, everyone
> seems to agree, and everyone's experience seems to prove, that your
> approach is very dangerous and expensive.)

Unfortunately what I am working on now I expect to exceed 100,000 
lines before I'll consider it reasonably done. I'll explain later.

> [content clipped]
> **DING** I just found your bug. In TSortedList.cpp, all your methods
> are listed like this:
> [code chopped]
> Your syntax ("TSortedList<class type>::") explains everything,
> including the bizarre use of 'class type' within the error messages. 
> What happened is that the compiler saw you using a TSortedList<class
> type>, and it therefore tried to compile all the virtual methods
> within TSortedList<class type>. When it saw that 'type' really isn't
> a genuine class type, it complained (eventually) that the class called
> 'type' doesn't have an == or < operator.

I don't see how the syntax is consistent then. From what I can see 
template<pars> X where X is the code to be parametrised - or are you 
saying I declare the methods in the class definition and move the 
code to inline void TSortedList<class type>::foo() after the template 
class definition?

Either way, this was my first ever template class (yes, in all three 
years of using C++) and I copied heavily off QSortedList.h (which I 
enclosed last time) which might I point out compiles absolutely fine. 
So why my class, almost identical, does not and Qt's one does I do 
not know.

> When you fix this problem, you will end up with additional problems,
> mainly because you have moved template code into a .cpp file. The C++
> FAQ covers this issue; suggest you read that for details. (It's
> probably in the section on templates and/or containers.)

It says you can't move template code into a .cpp file :)

I think you can just put it in the header file though? I'm still not 
sure why it threw an error :(

> >[a bloody good suggestion]
> >Thank you!
> 
> No problem. BTW I consider this an idiom of C++. Part of being
> competent in using a language is knowing the syntax and semantics, but
> another critical part is knowing the idioms of the language. You're
> an expert (I assume) in certain varieties of assembler, and perhaps
> also in C. As a competent C programmer, you know the C idioms, such
> as
> 
> while (*dest++ = *src++)
> ;
> 
> This, of course, is the idiom that copies an array of things pointed
> to by 'src' into an array pointed to by 'dest', and it stops copying
> after it copies the item whose value is zero. If the arrays are
> arrays of 'char', this is equivalent to strcpy(), since it copies
> everything including the terminating '\0'. Obviously other types are
> similar.

That kind of dangerous code brought out a compiler bug in a version 
of GCC and MSVC 5 if I remember correctly. The increments weren't 
always done with the load and store when full optimisation was on. 
Solution: use comma operator.

> Other idioms in C abound, such as Duff's device:
> 
> while (n >= 0) {
> switch (n) {
> default: xyzzy;
> case 7: xyzzy;
> case 6: xyzzy;
> case 5: xyzzy;
> case 4: xyzzy;
> case 3: xyzzy;
> case 2: xyzzy;
> case 1: xyzzy;
> }
> n -= 8;
> }
> 
> If you replace 'xyzzy' with some piece of code, this applies that
> piece of code exactly 'n' times, but it is much faster than the
> equivalent:
> 
> while (n-- > 0) {
> xyzzy;
> }
> 
> Since the latter executes 'n' decrements, 'n' comparisons, and 'n'
> conditional jumps, whereas Duff's device executes only 1/8'th as many
> decrements, comparisons, or conditional jumps. Of course the key is
> that there is no 'break' statement after each 'case' -- each case
> "falls through" to the next case. The other key, obviously, is that
> the cases are listed in backwards order.

This is called loop unrolling in assembler and I thought compilers 
did it for you because modern processors run so much faster than 
system memory that the compsci measurement of execution time is often 
way way off - smaller code on modern processors goes faster than 
bigger code, even with loads of pipeline flushes from the conditional 
branches because the L1 cache is 10x system memory speed.

> The point, of course, is that this is another idiom of C, and
> competent C programmers know these sorts of things. As you become
> better and better at C++, you will learn the idioms of C++, and this
> is one of them.

Actually, using a switch() statement is bad on modern deep pipeline 
processors. It's better to use a function pointer table and calculate 
the index because then the data cache effectively maintains a branch 
history for you.

If you ask me about embedded systems, I don't doubt I'm as good as 
they get. All this high-level stuff though I must admit is beyond me 
a bit. But more on that later.

> >I have the utmost respect and admiration 
> >for any standardisation committee (with possible exception of the
> >POSIX threads committee, their poor design really screws C++ stack
> >unwinding which is unforgiveable given how recently it was designed).
> 
> Not knowing any better, I'd guess they were dealing with very subtle
> constraints of existing code or existing practice. 

Twas a completely new API AFAIK.

> Most people on
> those sorts of committees are competent, plus the entire world
> (literally) has a chance to comment on the spec before it gets
> finalized, so if there were gaping holes *that* *could* *be* *fixed*
> (e.g., without breaking oodles and oodles of existing code), I'm sure
> *someone* *somewhere* in the world would have pointed it out.

From the usenet posts I've read, people did point out POSIX thread 
cancellation did not offer C++ an opportunity to unwind the stack, 
but they ignored it and went with a setjump/longjmp solution instead. 
Now if your platform's setjmp implementation unwinds the stack - 
fantastic. If not, severe memory leakage. All they needed was the 
ability to set a function to call to perform the cancellation - under 
C++, that would be best done by throwing an exception. Still, we can 
hope it will be added in the near future.

> Like I said, I don't know the details of this particular situation,
> but I would guess that they were fully aware of the problem, that they
> investigated all the alternatives, and that they chose the "least bad"
> of the alternatives.

The above support for C++ (and other languages) costs maybe an hour 
to add and is completely portable. I can't see how it wasn't done by 
competent and fair designers. I have read rabid posts on usenet why 
you shouldn't be writing mulithreaded C++ and such bollocks - not 
sure if that's involved. The whole issues of threads gets many Unix 
people's knickers in a right twist - they seem to think they're 
"wrong", much like exceptions are "wrong" in C++ for some people. 
Weird.

> Yes, existing code puts everyone in a very difficult position, and
> often causes compromises. But that's the nature of the beast. The
> cost of breaking existing code is much, much greater than canonizing
> it. [clipped rest]

Have you noticed the world's most popular programming languages tend 
to be evolved rather than designed? ;)

> >Put it this way: when you try something which seems logical in C it
> >generally works the way you think it should. 
> 
> Really? No less a light as Dennis Ritchie bemoans the precedence of
> some of the operators, and certainly the rather bizarre use of
> 'static' has caused more than one C programmer to wonder what's going
> on. Plus the issue of order of evaluation, or aliasing, or any number
> of other things has caused lots of consternation.

Yeah, I've read his comments. There's one operator in particular - is 
it &&? - which is very suspect in precedence.

However, that said, once you get used to the way of C logic it stays 
remarkably consistent. I personally recommend sticking "static" 
before everything you want to be static and don't rely on default 
behaviour - people's confusion with static clears up remarkably 
quickly if you do that.

> But I guess I agree to this extent: C++ is larger than C, and as such
> C++ has more confusing issues. I believe that C99 is causing some of
> those same problems, however, since C99 is much bigger than its
> predecessor. The same thing will be true of C++0x: it will be bigger
> and have more compromises.

It's also a case of history. K&R C was 90% done by just a few guys. 
C++ is a collection of different enhancements over C by completely 
different people with different intentions, and then with a good 
dollop of academic theory thrown in for good measure. Hence its non-
uniformity and lack of consistency.

Note that I have not yet found an academic who thinks C is an 
excellent example of a procedural language :)

> >TQString foo;
> >foo="Hello world";
> >
> >Now TQString is a subclass of QString, and both have const char *
> >ctors. The compiler will refuse to compile the above code because
> >there are two methods of resolving it. 
> 
> I may not understand what you mean by "two methods of resolving it,"
> but I don't understand why the compiler doesn't do what you think it
> should above. If TQString has a const char* ctor, then I think that
> should promote "Hello world" to a TQString and then use TQString's
> assignment operator to change 'foo'.

I completely agree. However, MSVC wants you to put a TQString("Hello 
world") around every const char * :(

I'm running into similar problems with the << and >> operators - I've 
subclassed QDataStream with TQDataStream because QDataStream is 
default big endian and doesn't provide support for 64 bit integers. 
Every single time I use << or >> I'm an ambiguous resoluton error 
when clearly the source or destination object is a TQDataStream.

Most of my C++ problems are arising from "repairing" Trolltech's 
code. While they will fix things similarly in future Qt's as a result 
of my suggestions, it still leaves now.

> > The same sort of
> >thing applies to overloading functions - you cannot overload based on
> > return type, something I find particularly annoying.
> 
> Another C++ idiom lets you do just that. I'll have to show that one
> to you when I have more time. Ask if you're interested.

Is that like this:
bool node(TQString &dest, u32 idx)
bool node(TKNamespaceNodeRef &ref, u32 idx)
...

> >Computer history is strewn with cases of an inferior product 
> >destroying a superior product. It's hardly unique.
> 
> I agree. I guess my point is simply this: any popular language is
> going to have warts that an unpopular language will not. Take Eiffel
> for example. Way back when Eiffel was very young, Bertrand Meyer
> derided C++'s 'friend' construct, claiming it violated encapsulation. 
> Then he began to get real users who were building real systems using
> Eiffel, and suddenly he began to see how something like the 'friend'
> construct actually *improves* encapsulation. So he added it to
> Eiffel. At first the language seemed cleaner and simpler, then
> gradually it added more stuff as it became more practical.

Still, Eiffel is considered much cleaner than C++ - however, it's not 
as popular. cf. my statement above about popular languages not being 
designed.

> However if you
> pointed out any particular compromise, I could probably tell you why
> it was done and in fact could (I hope!) make you realize that
> "cleaning up" that compromise would cause more harm than good.

Ok:
1. Why didn't C++ have separated support for code reuse and subtyping 
(like Smalltalk)?
2. Why don't return types determine overload?
3. Why can't the compiler derive non-direct copy construction? eg;
class A { A(B &); } class B { B(C &}; } class C { C(const char *); }
A foo="Hello";

In C++, you must rewrite that as A foo(B(C("Hello"))); - it's not 
done for you, nor is there any way of fixing it except modifying A to 
have a copy constructor taking const char * - which isn't possible if 
you don't have the source to A or B.

> Do you really mean "functional" or "procedural" here? The Functional
> style is rather difficult to do in C++ (think Scheme). Functional
> programming means never allowing any changes to any piece of data, so
> instead of inserting something into a linked list, one creates a new
> linked list and returns the new linked list that contains the new
> item.

I mean "functional" in terms of I tell you what to do not how to do 
it. 

Also above, the new linked list isn't created, merely a potential for 
a separate new linked list is. You're right that it's "as if".

> >>Another really big error. OO is primarily a design approach. The
> >>concept of "OO programming" is very close to a misnomer, since OO
> >>programming cannot stand on its own - it needs OO *design*.
> >
> >No, I must disagree with you there: design is independent of 
> >language. 
> 
> Nope, not true at all. A design that works for Functional languages
> is horrible for Procedural languages, and vice versa. And both those
> designs are wholly inappropriate for OO languages, Logic-oriented
> languages, or Constraint-oriented languages. In short, the paradigm
> *very* much effects the design.

I would have said it affects the /implementation/ rather than the 
design. You're right that say doing a sort in Haskell is completely 
different than doing it in C - but I would call that a difference in 
implementation because (a) the algorithm used is identical and (b) 
the two solutions give identical output.

> Try your belief out sometime. Try implementing your favorite program
> in Prolog (logic-oriented) or Scheme (function-oriented) and see what
> happens. Guaranteed that if your program is nontrivial, a radically
> different design will emerge. Either that or you'll constantly be
> fighting with the paradigm and the underlying language, trying to
> force, for example, Prolog to be procedural.

No, a radically different /implementation/ will emerge. That's simply 
because implementing the design is best done one way in one language 
and differently in a different language.

> I'll go further: design isn't even independent of language *within* a
> paradigm. In other words, a design that is appropriate for Smalltalk
> is typically inappropriate for C++, even when you are trying very hard
> to use OO thinking throughout.

I'm getting a feeling that once again it's a disagreement about 
terminology rather than opinion. I would treat the word "design" as 
that in its purest sense - algorithms. Everything after that has 
increasing amounts of implementation - so, for example, the object 
structure would involve some amount of implementation detail.

> >I have never agreed with OO design as my university 
> >lecturers found out - I quite simply think it's wrong. Computers
> >don't work naturally with objects - it's an ill-fit.
> >
> >What computers do do is work with data. If you base your design
> >entirely around data, you produce far superior programs. 
> 
> In your experience, this may be true. But trust me: it's a big world
> out there, and in the *vast* majority of that world, your view is very
> dangerous.

I have applied my skills to many projects: public, private and 
personal and I have not found my data-centric approach to have failed 
yet. It has nothing to do with code maintainability nor much other 
than efficiency - but that's why I use an impure OO for 
maintainability - but if you rate superiority of a program based on 
its excellence in functioning, my approach works very well. I 
contrast with OO designed projects and quite simply, on average they 
do not perform as well.

Now regarding the TCO of the code, I would personally say my code is 
extremely maintainable using my OO-like source filing system. You, I 
would imagine, would say how can I sleep at night when performing 
such atrocities to commonly held standards? (you wouldn't be the 
first to ask this).

Of course, in all this, I am referring to C and assembler and what 
I'd call C+ because I mostly wrote C with some small C++ extras. This 
project I'm working on now is the first to use multiple inheritance 
and templates and a fair bit more.

> Be careful: you are painting yourself into a very narrow corner. You
> may end up limiting your career as a result.

Possibly, but I would doubt it. I may have some unique opinions on 
this but what the customer cares about is (a) will it work and (b) 
can we look after it well into the future. My case history strongly 
supports both of these criteria, so a priori I'm on the right path.

> >Now I will 
> >agree OO is good for organising source for improved maintainability,
> >but as a design approach I think it lacking.
> 
> You really should read "OO Design Patterns" by Gamma, et al (also
> published by Addison Wesley). Read especially chapter 2. I think
> you'll see a whole world of OO design -- and you'll see ways to use OO
> at the design level that are totally different (and, I dare say,
> totally superior) to the approach you are describing here.

Is that about a vector graphics editor called Lexi? I have written 
two vector graphic editors, the latter in OPL for a Psion Series 3 
(OPL is quite like BASIC - no objects). Interestingly, the approach 
Gamma follows is almost identical to my own - I used dynamic code 
loading to load tool modules with a fixed API thus permitting 
infinite extensibility. Encapsulation of the API plus building a 
portable framework are two things I have done many times - I wrote my 
first framework library in 1992 some four years before going 
professional.

That Gamma book amused me - it attaches lots of fancy names to real 
cabbage and thistle programming. However, his conclusion is valid - 
in modern times, most programmers wouldn't know half that book, and 
that's worrying - hence the need for such a book.

> This project ended up being around half a person-millennium (150-200
> developers over a 3 year period). I ended up training and mentoring
> them all, and we had lots and lots of design sessions. When they were
> finished, the things that used to take 9 months could be done by a
> single person in less than a day. The success-story was written up in
> Communications of the ACM -- it was the lead article in the Special
> Issue on Object-Oriented Experiences. It was also written up in IEEE
> Software and perhaps a few other places. (And, by the way, there was
> no loss of performance as a result. That was *very* hard to achieve,
> but we did it. In the end, customers gained 2x MIPS/dollar.)
> 
> The point is that these benefits came as result of OO *design*, not as
> a result of programming-level issues.

I'm sure OO design greatly improved the likely wasp's nest of 
spaghetti that existed in there previously. But I'm not seeing how OO 
design is better than any other approach from this example - there 
are many methods that could have been employed to achieve the same 
result.

> One more example: UPS (another of my clients; in fact I was there just
> last week) has new "rating" and "validation" rules that change every 6
> months. For example, if Detroit passes a law saying it's no longer
> legal to drive hazardous materials through its downtown area, the code
> needs to change to prevent any package containing hazmat from going
> through downtown Detroit. In their old system, which was built using
> your style of C++, it took 5 months out of every 6 to integrate these
> sorts of changes. Then someone created a framework using OO design
> (not just C++ programming), and as a result, they could do the same
> thing in 2 weeks.

Any good framework here, OO or not, would have solved most of their 
dynamic change problem. In fact, I'd plug in some sort of scripting 
capability so such items were easy to change.

> >An example: take your typical novice with OO. Tell them the rules and
> > look at what they design. Invariably, pure OO as designed against
> >the rules is as efficient as a one legged dog. 
> 
> The way you have learned OO, yes, it will have performance problems.
> But the way I am proposing OO should be done, either it won't have
> performance problems at all, or if it does, those problems will be
> reparable.

ie; You're bending OO to suit real-world needs, which is precisely 
what I said experienced OO people do.

> >In fact, in my opinion, OO 
> >experience is actually learning when to break pure OO and experienced
> > OO advocates do not realise that they so automatically break the
> >pure application of what they advocate.
> 
> We agree that purity is never the goal. Pure OO or pure procedural or
> pure anything else. The goal is (or *should* be) to achieve the
> business objectives. In my experience, OO *design* brings the real
> value, and not just programming-level issues.

Has it not occurred to you that it's merely a /consequence/ of OO 
rather than innate quality that it has these beneficial effects?

> I agree with everything except your last phrase. OO design is good
> for both people and computers.

Right, firstly, before I start this section, I'd like to thank you 
for your time and patience - I've noticed some of what I didn't know 
and you explained to me was already online in your FAQ, so I 
apologise for wasting your time in this regard. Furthermore, I should 
mention that if you give me permission to distribute this 
correspondence, you will not only have done me a great favour but 
also the same to others. Certainly, if it takes you as long to reply 
as me, you're investing considerable time which a busy man as 
yourself surely cannot easily spare.

I, as I have already mentioned, come from a rather unique programming 
background. We were probably most comparable to the Unix culture 
except we were more advanced and we always had a very strong free 
software tradition where we released code and source into the public 
domain - furthermore, many commercial apps came with source too. 
Hence, there was great chance for learning off others, and much of 
this recent furore about OO etc. in my humble opinion is merely fancy 
names for a collection of old techniques.

Now as I mentioned a number of times, I believe a data-centric 
approach is superior to OO because it more accurately fits the way a 
computer works. This is not to say many of the advantages of OO do 
not still hold - in fact, I daresay many OO experts actually are data-
centric too without realising it. My criticism of OO therefore is 
that it isn't /intuitively/ "correct" ie; pure OO is rarely the 
optimal solution.

I had an idea back in 1994 for advancing procedural programming to 
the next level (this was independent of OO - honestly, I barely even 
knew what it was at the time) - I effectively wanted to do what OO 
has done in knocking us onwards a notch - however, as it would be, I 
considered then and still do today that my solution is superior.

Basically, it revolves entirely around data. Responsibility for data, 
whether in memory, disc or across a network is devolved entirely to 
the kernel. One may create data streams between data in an arbitrary 
fashion - how it actually is peformed (translations etc.) is however 
the kernel sees fit. Data is strongly typed so you can't stick 
incompatible types of data together - however data can be converted 
from one type to another via convertors which are essentially 
specialised plug ins which can be installed. Often, conversion is 
implicitly performed for you although either you can choose a route 
or it can dynamically create one based on best past performances. Of 
course, converters can offer out their input in more than one format 
or indeed offer a compound document as some or all of its subdatas.

Now the next part of the picture is components - these are tiny 
programs which do one thing and one thing well to data. A good 
analogy would be "more" or "grep" in Unix - but it goes way beyond 
that because components are much like a COM object or Qt Widget in 
that you can just plonk them somewhere and they do their thing. Then, 
the theory is, to build any application, you merely create a *web* of 
simple data processing components. For example, a spell checker 
component would accept text data and check it either with the user or 
with the component managing the data - there is no concept of data 
ownership in my proposal (kernel owns everything)

This model, I believe, compares extremely well to OO. You get lots of 
code reuse, a dynamic and extremely flexible linking mechanism, a web 
rather than a hierarchy and automatic distribution across multiple 
processors (and indeed machines). It's clearly functionally biased 
because it simply sets up the data relations and the kernel works out 
the best way to actually perform the processing. You get lots of 
stuff for free eg; OLE, data recovery in case of program crash and 
indeed limited graphical programming like some of those UML editors. 
You get the advantages of dynamic linking without business' dislike 
of source exposure as with Java or VB.

Furthermore, you get automatic /data/ reuse as well as code reuse - 
data just as much as code can be distributed across multiple machines 
for performance and/or security reasons. And of course, maintainence 
costs are low because the component set you use are as individual or 
fine-grained as you like them.

Now hopefully you'll be agreeing with me that this is all good - 
however, if you're like the other experts I've proposed this to, your 
first question will be "oh but how to implement it?" because the 
balancing act between all the different requirements means severe 
inefficiency. And you'd be right - I've made two prior attempts at 
this and failed both times - and right now, I'm making my third 
attempt which I'm self-financing myself for six months. The theory 
goes, produce a technology demonstration, if it runs at all 
reasonably then obtain venture capital, start a company and two years 
later we have a product. Five years later it's more or less complete. 
If anything goes wrong, return to working on whatever pays a lot for 
a while, then try again in a few years. Either way, the spin off 
benefits of each past attempt have been enormous, so really I can't 
lose.

So, thoughts? I'm particularly interested in what you see as design 
flaws - I know MIT did research into this for a while but stopped. 
Would you agree it's a viable future? I've had Carl Sassenrath (he 
did much the OS for the Commodore Amiga) and Stuart Swales (did much 
of RISC-OS I mentioned earlier) both agree it's probably right, but 
both wondered about implementation. I should be especially interested 
in seeing what a static OO based person things - neither Carl nor 
Stuart are static code nor OO advocates hugely.

Furthermore, any advice about soliciting venture capital in Europe 
would be useful (yes, I know it's like squeezing blood from a stone 
here) - ever since the indigenous industry withered and died here, 
it's been very hard to obtain capital for blue-sky projects without 
the Americans buying them up. I'm unable to obtain a work visa to the 
US (on the banned list), so that's out - and besides, as far as I can 
see, only IBM out of the big US software companies would be 
interested as only IBM's goals would be advanced by such a project. 
Oh BTW, did I mention it runs on Win32/64, Linux and MacOS X when 
they get the new FreeBSD kernel in - and yes, all computer 
irrespective of endian automatically work in unison. I'd also like it 
to stay in Europe so it (or rather I) stays free from software 
patents.

Anyway, any comments you may like to offer would be greatly 
appreciated. You've already earned yourself an acknowledgement in the 
projects docs for helpful tips and suggestions.

Cheers,
Niall




From: "Marshall Cline" <xxxxx@xxxxxxxxx.xxx>
To: "'Niall Douglas'" <xxx@xxxxxxx.xxx>
Subject: RE: Comments on your C++ FAQ
Date: Tue, 30 Jul 2002 02:14:13 -0500

Niall Douglas wrote:
>On 28 Jul 2002 at 22:31, Marshall Cline wrote:
>
>Firstly, may I ask your permission to distribute a digest of our 
>conversation to others? I believe quite a few people could do with 
>reading it because (and this may worry you) I am considered one of 
>the better C++ people out of our class' graduates. If I didn't know, 
>I'm very sure they didn't simply because it was never taught.

Sure, no prob.


>>I spoke last time about being a prisoner of our pasts. My past 
>>includes acting as "senior technology consultant" to IBM throughout 
>>North America, which meant advising on product strategy, mentoring, 
>>and (most relevant to this situation) performing internal audits. The
>>audits included a number of important engagements with IBM's clients, 
>>and required me to perform assessments of people and technology. 
>>During these audits and assessments, I saw a lot of large projects 
>>that failed because of overengineering. Many of the technologists on 
>>these sick or dead projects had a similar perspective to what you 
>>articulated above. Their basic approach was often that overengineering
>>is better than underengineering, that it's cheaper in the long run, 
>>and perhaps cheaper in the short run, so let's overengineer just in 
>>case.
>
>I think there are two types of overengineering: controlled and 
>uncontrolled. The latter happens when the people doing the design 
>aren't really sure what they're doing. The former happens when the 
>designers take into proper account the likely extensions in the 
>future, possible client changes in specification, ramifications on 
>maintainability etc. and balance all of these against time of 
>implementation, worth to the project etc. Essentially, what I am 
>really saying, is if you spend plenty of time on *design* then your 
>project comes in on time and within budget.

Agreed.


>BTW, have you heard of extreme programming 
>(http://www.extremeprogramming.org/)? 

Yes, though I tend to prefer its umbrella concept, Agile programming.


>Daft name, but it's an 
>interesting looking way of managing and overseeing computer projects. 
>It certainly is less intrusive than auditing, and establishes more 
>trust between customer and provider.
>
>>As a result of seeing in excess of one hundred million dollars worth 
>>of effort (and numerous careers) washed down the drain, I tend to make
>>sure there is a realistic ROI before adding any effort that has a 
>>future-payback.
>
>Again I think we're saying precisely the same thing with different 
>words.
>
>Let me give you a bit of background on myself (helps later on): My 
>role in recent years is saving troubled projects. I am brought in 
>when things have gone horribly wrong - for example, my last two 
>positions were saving a handheld GPS project in Canada and saving a 
>EuroFighter component test bench control software project here in 
>Spain. Usually, I come in, assess the situation (code, employees and 
>most importantly management) and fix it. In both projects, I have 
>been spectacularly successful, albeit at the cost of my own job - to 
>save a troubled project you need to work on many areas, but the most 
>obstinate in my experience is management who employ a "pass the buck" 
>methodology whilst firing good programmers to divert the blame. 

Sigh - what a shame. I've seen that too many times.

>In 
>the end, I always come up against what was killing the project 
>beforehand, at which stage it's time to move on.
>
>However, my background is in a little British computer called an 
>Acorn which ran on ARM processors (nowadays Acorn is liquidated and 
>ARM, its offshoot, is one of the bigger UK companies). Acorn's ran an 
>OS called RISC-OS which was the last general purpose all-assembler OS 
>ever written. And I will tell you now, it was vastly ahead of 
>anything else at the time - and I include Unix. Obviously, everything 
>in the system was designed around writing in assembler, and hence 
>large applications (DTP, editors, spreadsheets, photo-editing, music 
>composition etc.) often were entirely written in hand-coded ARM. 
>Hence all of us did stuff which most people consider unlikely in 
>assembler - for example, we used what you could call an object in 
>that some code would have instance data and a ctor and destructor. We 
>had the equivalent of virtual functions using API offset tables. Some 
>silly people used self-modifying code, which is worse that goto's 
>IMHO.

:-)


>What is important to get from this is that until the US 
>multinationals crushed our indigenous European computer industry, we 
>were in many ways considerably ahead of the status quo. This is why I 
>don't fit easily into boxes others like to assign me to.
>
>>>>Ouch, that's certainly a very dubious design style. It's a typical 
>>>>hacker's style, and it comes from the Smalltalk world, but it's 
>>>>generally inappropriate for C++ or Java or any other statically 
>>>>typed OO language.
>>>
>>>Can you point me to resources explaining why this is bad and not just
>>>a question of individual style?
>>
>>Sure no problem. Start with our book ("C++ FAQs", Addison Wesley), 
>>then go to Scott Meyer's books ("Effective C++" and "More Effective
>>C++", also Addison Wesley), and probably most any other book that
>>deals with design/programming style in C++.
>
>Not being able to obtain these books easily (I live in Spain plus 
>money is somewhat tight right now), I looked around the web for more 
>on this. I specifically found what not to do when inheriting plus how 
>deep subclassing usually results in code coupling increasing. Is that 
>the general gist?

That's a start. But coupling between derived and base class is a
relatively smaller problem than what I'm talking about. Typically deep
hierarchies end up requiring a lot of dynamic type-checking, which boils
down to an expensive style of coding, e.g., "if the class of the object
is derived from X, down-cast to X& and call method f(); else if it's
derived from Y, down-cast to Y& and call g(); else if ...<etc>..." This
happens when new public methods get added in a derived class, which is
rather common in deep hierarchies. The if/else if/else if/else style of
programming kills the flexibility and extensibility we want to achieve,
since when someone creates a new derived class, they logically need to
go through all those if/else-if's and add another else-if. If they
forget one, the program goes into the "else" case, which is usually some
sort of an error message. I call that else-if-heimer's disease
(pronounced like "Alzheimer's" with emphasis on the word "forget").

Take this as an example. Suppose someone creates a HashTable class that
has methods insert(const Item&), contains(const Item&), remove(const
Item&), and size(). Suppose the author of HashTable prepared the class
to be a base class, e.g., virtual methods, protected data, etc.

Next someone comes along and wants to create class Bag (AKA MultiSet),
which is an unordered container into which you can insert more than one
copy of an Item. The methods of HashTable are almost perfect for Bag,
so they inherit Bag from HashTable. That solution makes a lot of sense
*if* you have the mindset that inheritance is for reuse, but it is
dubious (at best) if you take my approach.

So far we have Bag inheriting from HashTable, with minimal overrides.
Next someone comes along and wants to create Set, which semantically is
a "specialized Bag." Set is specialized in the sense that it contains
at most one copy of any Item. They draw a Venn diagram and prove that
*every* Set is already a Bag, so the "specialization" concept seems to
make a lot of sense, so they go ahead and inherit Set from Bag. This
again makes sense in an "inheritance is for reuse" mindset, but it is
increasingly dubious from my perspective. Note that to make the
"specialization" work, they override the insert(const Item&) method so
it makes sure the Set never gets duplicates.

Then someone comes along and wants to create a Dictionary (AKA Map)
class, which is conceptually a set-of-associations, that is, a
set-of-key/value-pairs. So they inherit Association from Item (after
all, they reason, an Association really is a special kind of Item), and
they inherit Dictionary from Set (after all, a Dictionary really is a
set-of-associations). They might add a method or two, e.g., to perform
the mapping from Key to Value, and they might provide insert(const
Association&) and perhaps even privatize insert(const Item&), just to
make sure no one accidentally inserts a non-Association Item into the
Dictionary. Again all this makes sense *if* you believe inheritance is
for reuse, and coincidentally you end up with a somewhat tall hierarchy,
but it's really strange and dubious from the perspective I'm espousing.

Now let's see how *each* of those three hierarchies will cause problems.

The problem with the first inheritance (Bag from HashTable) is
performance. For example, suppose we later discover that HashTable is
not the ideal data structure for constructing a Bag. If we had used
aggregation / has-a, we would have been able to change the internal data
structure to something else (e.g., skip-list, AVL tree, 2-3 tree,
red-black tree, etc., etc.) with almost zero ripple effect. (I use the
term "ripple effect" to describe the number of extra changes that are
needed. E.g., if we change this, we'll also have to change that; and if
we change that then we'll have to change this other thing. That's a
ripple-effect.)

However since we used inheritance, we're pretty much stuck with
HashTable forever. The reason is that inheritance is a very "public"
thing -- it tells the world what the derived class *is*. In particular,
users throughout our million-line-of-code system are passing Bags as
HashTables, e.g., converting a Bag* to HashTable* or Bag& to HashTable&.
All these conversions will break if we change the inheritance structure
of Bag, meaning the ripple effect is much higher.

The problem with the second inheritance (Set from Bag) is a logical
mistake, and sooner or later, it will cause the application to generate
errors. Even though a Venn diagram will prove that *every* Set actually
is a Bag, and even though conceptually Set is a "specialized" Bag, the
inheritance is improper.

A derived class's methods are allowed to weaken requirements
(preconditions) and/or strengthen promises (postconditions), but never
the other way around. In other words, you are free to override a method
from a base class provided your override requires no more and promises
no less than is required/promised by the method in the base class. If
an override logically strengthens a requirement/precondition, or if it
logically weakens a promise, it is "improper inheritance" and it will
cause problems. In particular, it will break user code, meaning it will
break some portion of our million-line app. Yuck.

The problem with Set inheriting from Bag is Set weakens the
postcondition/promise of insert(Item). Bag::insert() promises that
size() *will* increase (i.e., the Item *will* get inserted), but
Set::insert() promises something weaker: size() *might* increase,
depending on whether contains(Item) returns true or false. Remember:
it's perfectly normal and acceptable to weaken a
precondition/requirement, but it is dastardly evil to strengthen a
postcondition/promise.

To see how this weakening will break existing code, imagine that code
throughout our million-line system passes Set objects to functions
expecting Bag-references or Bag-pointers. Those other functions call
methods of the Set object, though the functions only know the interfaces
described by Bag. (Nothing unusual here; what I just described is the
very heart of dynamic binding.) Some of that code inserts 2 copies of a
particular Item, then removes it once, and divides something by
contains(Item) (knowing contains(Item) will be at least 1!), yet
contains(Item) will actually be 0!

Please don't assume the solution is to make insert(Item) non-virtual.
That would be jumping from the frying pan into the fire, since then
Bag::insert() would get called on a Set object, and there actually could
be 2 or 3 or more copies of the same Item inside a Set object!! No, the
real problem here isn't the override and it isn't the virtualness of the
method. The real problem here is that the *semantics* of Set are not
"substitutable for" those of Bag.

The solution here is (again) to use has-a rather than inheritance. Set
might have-a Bag, and Set::insert() would call its Bag's insert()
method, but would first check its Bag's contains() method. This would
be perfectly safe for everyone.

The third inheritance (Dictionary from Set) is another "improper
inheritance," though this time because it strengthens a
precondition/requirement. In particular, Set::insert() can accept any
Item but Dictionary::insert() has a stronger precondition: the parameter
must be a kind-of Association. The rules of C++ seem to help a little,
since Dictionary::insert(Association) doesn't override
Set::insert(Item), but that doesn't really solve anything since the
insert(Item) method is still accessible on a Dictionary object via a
Set& or Set*. Again making Set::insert() non-virtual will only make
things worse, since then that method will actually get called on a
Dictionary object provided it is called via a Set& or Set*, and that
method lets users insert *any* kind-of Item (not just Associations) into
the Dictionary. That insertion will undoubtedly cause a crash when an
Item is down-casted to a Association, e.g., to access the Association's
Key or Value.

As before, aggregation would be perfectly safe and reasonable here:
Dictionary could have-a Set, could insert Association objects (which
would automatically be up-casted to Item&), and when it accessed/removed
those Items, Dictionary could down-cast them back to Association&. The
latter down-cast is ugly, but at least it is logically safe --
Dictionary *knows* those Items actually are Associations, since no other
object anywhere can insert anything into the Set.

The message here is NOT that overrides are bad. The message here is
that tall hierarchies, particularly those built on the "inheritance is
for reuse" mantra, tend to result in improper inheritance, and improper
inheritance increases time, money, and risk, as well as (sometimes)
degrading performance.


>>>I would have thought it /better/ for
>>>statically typed languages because the compiler is given more 
>>>knowledge with which to optimise.
>>
>>Nope, it's a very Smalltalk-ish style, and it causes lots of problems 
>>in a statically typed OO language since today's statically typed OO 
>>languages (C++, Java, Eiffel, etc.) equate inheritance with subtyping.
>>In any language that equates inheritance with subtyping, using 
>>inheritance as a reuse mechanism, as opposed to using inheritance 
>>strictly for subtyping purposes, ultimately causes lots of design and 
>>extensibility problems. It can even effect performance.
>
>In other words, precisely the trap I was falling myself into. I 
>should however mention that having examined my code, I was peforming 
>this trap only in the areas where Qt wasn't providing what I needed. 
>In the code generated entirely by myself, I tend to use a top-down 
>approach with an abstract base class defining the reusable parts.

Good.

Remember, on a small enough project, you can use inheritance in pretty
much any way you want since you can easily fit the entire system into
your head at once. The problem is with larger systems, where very few
programmers can remember all the constraints. In large systems,
inheritance really has to mean subtyping, since then programmers can't
hurt themselves if they do what C++ is designed to do: pass a Derived
object via a Base& or Base*, then access methods on the Derived object
via those Base& or Base*.


>I should mention that much of the subclassing I have had to do will 
>disappear with future versions of Qt as they have very kindly mostly 
>agreed with my ideas. Hence, in fact, until v4.0, it's mostly stop- gap
code.
>
>>>Again, I'd like to know precisely why this style would be a poor 
>>>choice for some other app.
>>
>>Mostly because it creates all sorts of problems for users. Take, for 
>>example, your TSortedList class. You have removed the append() and
>>prepend() methods because you can't implement them properly in your 
>>class. Nonetheless someone might easily pass an object of your 
>>derived class via pointer or reference to its base class, and within 
>>that function the methods you tried to remove are suddenly available 
>>again, only this time with potentially disastrous results. Take, for 
>>example, this function:
>>
>> void f(QList<Foo>& x)
>> {
>> x.prepend(...); // change '...' to some Foo object
>> x.append(...); // change '...' to some Foo object
>> }
>>
>>Now suppose someone passes a TSortedList object to this function:
>>
>> void g()
>> {
>> TSortedList<Foo> x;
>> f(x);
>> ...what happens here??
>> }
>
>Err, prepend and append aren't virtual in the base class, so the base 
>class' versions would be called. 

As I pointed out below, that is worse than if they were virtual. In
particular, "the list might not be sorted" (see below) precisely because
f() could call x.prepend() and x.append() in random order. The
un-sorted-ness of the TSortedList could cause all sorts of problems in
the binary search routines. Try it and you'll see: if you have an
unsorted list and you apply binary search anyway, the results are random
at best.


>I had realised previously that it's 
>a very bad idea to disable virtual inherited methods - or if you were 
>to, you'd want a fatal exception in there to trap during debug.

Actually doing it with a virtual is about the only time you have a
prayer of being right. But even then, all the above options are bad if
the new behavior is disallowed by the "contract" in the base class (the
"contract" is the explicit preconditions and postconditions):

* If the base class's method, say Base::f(), says it might throw a Foo
or Bar exception, then an override is allowed to throw a Foo or Bar or
any class that inherits from either of those, BUT NOTHING ELSE.

* If Base::f() says it never throws an exception, the derived class must
never throw any exception of any type.

* If Base::f() says it might "do nothing," then overriding the behavior
with "Derived::f() { }" is perfectly legitimate. However if Base::f()
guarantees it always does something, then "{ }" in the derived class is
bad - a weakened promise.

However redefining ("overriding") a non-virtual is almost always bad,
since then the behavior of the object would depend on the type of the
pointer instead of simply on the type of the object. The goal for
objects to respond based on what they really are independent of who is
looking at them. In the real world, if I am pointing at a Mercedes
ES-500 but I simply call it a Car, it still drives the same way -- the
behavior of the object doesn't depend on the type of the pointer.


>>In the '...what happens here??' part, anything you do to the 
>>TSortedList is likely to cause problems since the list might not be 
>>sorted. E.g., if f() adds Foo objects to 'x' in some order other than
>>the sorted order, then the '...what happens here??' part is likely to 
>>cause serious problems.
>>
>>You can't blame this problem on references, since the same exact thing
>>would happen if you changed pass-by-reference to pass-by-pointer.
>>
>>You can't blame this problem on the C++ compiler, because it can't 
>>possibly detect one of these errors, particularly when the functions
>>f() and g() were part of two different .cpp files ("compilation
>>units") that were compiled on different days of the week.
>>
>>You can't blame this problem on the author of g(), because he believed
>>the contract of TSortedList. In particular, he believed a TSortedList
>>was a kind-of a QList. After all that is the meaning of subtyping, 
>>and subtyping is equated in C++ with inheritance. The author of g() 
>>simply believed what you said in this line: 'class TSortedList : 
>>public QList', and you can't blame him for believing what you said.
>>
>>You can't blame this problem on the author of f(), because he believed
>>the contract of QList. In particular, he believed he can append() 
>>and/or prepend() values in any order onto any QList. Besides, he 
>>wrote and compiled his code long before you even thought of deriving 
>>TSortedList, and by the rules of extensibility (e.g., see the sections
>>on Inheritance in the C++ FAQ, or any similar chapters in any book on 
>>the subject), he is not required to predict the future - he is 
>>supposed to be able to write code based on today's realities, and have
>>tomorrow's subclasses obey today's realities. That is the notion of 
>>is-a, and is codified in many places, including the C++ FAQ, Liskov's 
>>Substitutability Principle ("LSP"), and many other places.
>>
>>So who is at fault? Ans: the author of TSortedList. Why is the 
>>author of TSortedList at fault? Because of false advertising: he said
>>TSortedList was a kind-of a QList (or, using precise terminology, that
>>TSortedList was substitutable for QList), but in the end he violated 
>>that substitutability by removing methods that were promised by QList.
>
>Worse I think would be redefining inherited methods to do something 
>completely different. But yes, I understand now.
>
>The rule should be that subclasses must always behave like their base 
>class(es). 

Right.

>Code reuse should be done via composition.

Right.

>Hence, that TSortedList should now derive off QGList which doesn't 
>have the append and prepend methods so I can safely ensure it does 
>what its parent does.

What you really ought to do is check the *semantics* of QGList's
methods, in particular, read the preconditions and postconditions for
those methods. (I put these in the .h file for easy access, then use a
tool to copy them into HTML files; Qt seems to put them in separate
documentation files; either way is fine as long as they exist
somewhere.) Inheritance is an option if and only if *every* method of
TSortedList can abide by the corresponding preconditions and
postconditions in QGList.

Again, in a small enough project, you can use "improper inheritance" if
you want, but you must be very sure that no one ever uses a Base& or
Base* to point to a Derived object. (Personally I never use improper
inheritance, since the down-side cost is unlimited. In contrast, most
"bad programming practices" have a bounded cost, e.g., a "goto" might
increase the maintenance cost of its method, but it can never screw up
any other method, so its cost is bounded by the number of lines in the
method. However the down-side cost for improper inheritance goes on and
on: the more users use your code, the more places that can break.)


>>Put it this way: you inherit from "it" to *be* what it *is*, not 
>>simply to have what it has. If you simply want to have what it has, 
>>use has-a (AKA aggregation AKA composition).
>
>Yes, I definitely understand you now. It is a pity explanations like 
>this weren't more universally available, 

Glad I could help.

>because I know a lot of C++ 
>programmers learned from MSVC's online help (eg; initially me - it's 
>where I learned C from as well). I however did subscribe to the 
>Association of C & C++ Users which is why I know about 
>standardisation debates - but even though at the time I subscribed 
>for the C coverage, I did read the C++ sections.
>
>A lot of people say pointers are the devil's tool in C and I have met 
>a disturbing number of programmers who just don't understand them. 
>However, it seems to me pointers are child's play in unenforced 
>danger when compared to problems like you and your FAQ have 
>mentioned. 

Agreed. The reason improper inheritance has larger consequences than,
say, pointers is that improper inheritance is a *design* error, but
pointer problems are "merely" programming errors. (I recognize I'm
using the word "design" differently from you. Play along with my lingo
for a moment and pretend your inheritance hierarchies really are part of
your design.) Obviously the cost of fixing a design error are greater
than the cost of fixing a programming error, and that is my way of
explaining your (correct) observation above.


>If more warning were out there, we'd all have less 
>problems with other people's code.

Agreed. I get on my stump and shake my fist in the air every chance I
get. You are hereby deputized to do the same. Go get 'em!!

Seriously, proper use of inheritance really is important, and knowing
the difference is critical to success in OO, especially in large
systems.


>>(BTW I will quickly add that your approach is perfectly fine in a very
>>small project, since in very small projects you can control the damage
>>of "improper" or "bad" inheritance. Some of my colleagues won't agree
>>and will say your approach is *always* wrong, and in a sense I would 
>>agree. But from a practical basis, your approach doesn't really cost 
>>too much in the way of time, money, or risk with a small enough 
>>project. If you use your approach on a big project, however, everyone 
>>seems to agree, and everyone's experience seems to prove, that your 
>>approach is very dangerous and expensive.)
>
>Unfortunately what I am working on now I expect to exceed 100,000 
>lines before I'll consider it reasonably done. I'll explain later.
>
>>[content clipped]
>>**DING** I just found your bug. In TSortedList.cpp, all your methods
>>are listed like this: [code chopped]
>>Your syntax ("TSortedList<class type>::") explains everything,
>>including the bizarre use of 'class type' within the error messages. 
>>What happened is that the compiler saw you using a TSortedList<class
>>type>, and it therefore tried to compile all the virtual methods
>>within TSortedList<class type>. When it saw that 'type' really isn't 
>>a genuine class type, it complained (eventually) that the class called
>>'type' doesn't have an == or < operator.
>
>I don't see how the syntax is consistent then.

Just trust me and type it in. If the compiler gives you errors, then
tell me and I'll help you debug it.

>From what I can see 
>template<pars> X where X is the code to be parametrised - or are you 
>saying I declare the methods in the class definition and move the 
>code to inline void TSortedList<class type>::foo() after the template 
>class definition?

No, just change
void TSortedList<class type>::f() { ... }
to
template<class type> TSortedList::f() { ... }


>Either way, this was my first ever template class (yes, in all three 
>years of using C++) and I copied heavily off QSortedList.h (which I 
>enclosed last time) which might I point out compiles absolutely fine. 

I'm sure (without having seen it!) that QSortedList uses the syntax I
described above. Either that or they moved all their method definitions
into the body of the class.


>So why my class, almost identical, does not and Qt's one does I do 
>not know.

Trust me: change "void TSortedList<class type>::f() { ... }" to
"template<class type> TSortedList::f() { ... }".


>>When you fix this problem, you will end up with additional problems, 
>>mainly because you have moved template code into a .cpp file. The C++
>>FAQ covers this issue; suggest you read that for details. (It's 
>>probably in the section on templates and/or containers.)
>
>It says you can't move template code into a .cpp file :)

Oops, I thought I gave the other ideas. It's really simple: if you want
to create TSortedList's of Foo, Bar, and Baz, just add these lines at
the bottom of TSortedList.cpp:

template class TSortedList<Foo>;
template class TSortedList<Bar>;
template class TSortedList<Baz>;

Or, if you'd rather not change TSortedList.cpp itself (e.g., if you want
to reuse without changes that on other projects), then create
MyProjectTSortedList.cpp which says

#include "TSortedList.cpp" // note: not ".h"!!!
#include "Foo.hpp"
#include "Bar.hpp"
#include "Baz.hpp"

template class TSortedList<Foo>;
template class TSortedList<Bar>;
template class TSortedList<Baz>;

Note: the first line includes a .cpp file, not a .h file!!

The point is that MyProjectTSortedList.cpp would be compiled as part of
your project (e.g., in your makefile), but TSortedList.cpp would not.


>I think you can just put it in the header file though? I'm still not 
>sure why it threw an error :(

It's because the syntax you used ("void TSortedList<class type>::f() {
... }") is **NOT** the C++ way of defining a template member function.
That is the C++ way of saying you want to specialize the particular
template "TSortedList<type>", meaning the name 'type' will be
interpreted as a real typename as opposed to a formal parameter for the
template. In other words, when the compiler saw the syntax you used, it
tried to actually look for a type named 'type', and it tried to create
'TSortedList<type>'. When it did that, it noticed it needed to create
TSortedList<type>::compareItems() (again where 'type' here means a real
type name, not a formal parameter to the template), so it went through
the code and noticed you were casting a 'void*' to a 'type*' (which it
can do even if it doesn't know anything about the type 'type'), then you
dereferenced that 'type*' (which means the type of the thing is now
'type&'; again this is a legal type even if the compiler doesn't know
anything about the type named 'type'), then you used '==' or '<' to
compare two 'type' objects. At this point the compiler *should* have
said 'type' was incomplete, since that would have given you a better
hint, but instead it said 'type' doesn't have an '==' operator.

Is that more clear?


>>>[a bloody good suggestion]
>>>Thank you!
>>
>>No problem. BTW I consider this an idiom of C++. Part of being 
>>competent in using a language is knowing the syntax and semantics, but
>>another critical part is knowing the idioms of the language. You're 
>>an expert (I assume) in certain varieties of assembler, and perhaps 
>>also in C. As a competent C programmer, you know the C idioms, such 
>>as
>>
>> while (*dest++ = *src++)
>> ;
>>
>>This, of course, is the idiom that copies an array of things pointed 
>>to by 'src' into an array pointed to by 'dest', and it stops copying 
>>after it copies the item whose value is zero. If the arrays are 
>>arrays of 'char', this is equivalent to strcpy(), since it copies 
>>everything including the terminating '\0'. Obviously other types are 
>>similar.
>
>That kind of dangerous code brought out a compiler bug in a version 
>of GCC and MSVC 5 if I remember correctly. The increments weren't 
>always done with the load and store when full optimisation was on. 
>Solution: use comma operator.

I'll take your word for it, and I really don't want to argue over it,
but I'm *very* surprised that any C compiler ever shipped any version
that couldn't correctly compile the above. That snippet is an idiom
that is used in many, many C programs, and is probably part of the
compiler's test suite.


>>Other idioms in C abound, such as Duff's device:
>>
>> while (n >= 0) {
>> switch (n) {
>> default: xyzzy;
>> case 7: xyzzy;
>> case 6: xyzzy;
>> case 5: xyzzy;
>> case 4: xyzzy;
>> case 3: xyzzy;
>> case 2: xyzzy;
>> case 1: xyzzy;
>> }
>> n -= 8;
>> }
>>
>>If you replace 'xyzzy' with some piece of code, this applies that 
>>piece of code exactly 'n' times, but it is much faster than the
>>equivalent:
>>
>> while (n-- > 0) {
>> xyzzy;
>> }
>>
>>Since the latter executes 'n' decrements, 'n' comparisons, and 'n' 
>>conditional jumps, whereas Duff's device executes only 1/8'th as many 
>>decrements, comparisons, or conditional jumps. Of course the key is 
>>that there is no 'break' statement after each 'case' -- each case 
>>"falls through" to the next case. The other key, obviously, is that 
>>the cases are listed in backwards order.
>
>This is called loop unrolling in assembler and I thought compilers 
>did it for you because modern processors run so much faster than 
>system memory that the compsci measurement of execution time is often 
>way way off - smaller code on modern processors goes faster than 
>bigger code, even with loads of pipeline flushes from the conditional 
>branches because the L1 cache is 10x system memory speed.

I suppose some optimizers might unroll some loops, but the real problem,
as you obviously know, is cache misses. Plus compilers can't
necessarily know that any given loop will actually be a bottleneck, and
as you know, performing this sort of optimization on a non-bottleneck
loop would make space-cost worse without any improvement in overall
time-cost. If a program has 1,000 loops, how could a compiler guess
which of those are the bottlenecks? As has been demonstrated by
software engineering study after study over the years, programmers don't
even know where their own bottlenecks are, so I expect it will be a long
time before compilers can know.


>>The point, of course, is that this is another idiom of C, and 
>>competent C programmers know these sorts of things. As you become 
>>better and better at C++, you will learn the idioms of C++, and this 
>>is one of them.
>
>Actually, using a switch() statement is bad on modern deep pipeline 
>processors. It's better to use a function pointer table and calculate 
>the index because then the data cache effectively maintains a branch 
>history for you.
>
>If you ask me about embedded systems, I don't doubt I'm as good as 
>they get.

Sounds like it. I should remember that in case I get more embedded
systems work from TI or UPS or HP. In the mean time, learn proper
inheritance so you'll be ready for my phone call! :-)

(Don't sit by the phone waiting for it to ring - I don't have anything
'hot' right now.)

>All this high-level stuff though I must admit is beyond me 
>a bit. 

As you know, embedded systems programming is highly technical, and
presents enough of a challenge that the weak tend to get killed off -
they end up programming way up at the apps level using something soft
and gushy like Visual Basic or JavaScript. So the only survivors in
embedded systems are technically tough enough, at least at the
programming level.

Unfortunately most embedded systems programming doesn't also force
people to be really good at the design level. Most embedded systems
work is intense at the binary-level, always trying to squeeze 10
kilograms of stuff in a bag meant to hold only 5 kilograms. I think
that's especially true in the hand-held environment, but either world
tends to produce hot-shot programmers who can program their way out of
most situations, and aren't necessarily great at the high-level stuff.
But you'll make it - I can tell. You've already embraced the key
elements of good OO design (except for understanding that OO design
really means the structure of your inheritance relationships, and that
your algorithms are pluggable/replaceable and end up getting buried in
derived classes; more on that later).

>But more on that later.
>
>>>I have the utmost respect and admiration
>>>for any standardisation committee (with possible exception of the
>>>POSIX threads committee, their poor design really screws C++ stack
>>>unwinding which is unforgiveable given how recently it was designed).
>>
>>Not knowing any better, I'd guess they were dealing with very subtle 
>>constraints of existing code or existing practice.
>
>Twas a completely new API AFAIK.

Sigh. That's very sad. Oh well, I gave them the benefit of the doubt,
but apparently they weren't worthy of it.

>>Most people on
>>those sorts of committees are competent, plus the entire world
>>(literally) has a chance to comment on the spec before it gets 
>>finalized, so if there were gaping holes *that* *could* *be* *fixed* 
>>(e.g., without breaking oodles and oodles of existing code), I'm sure
>>*someone* *somewhere* in the world would have pointed it out.
>
>From the usenet posts I've read, people did point out POSIX thread 
>cancellation did not offer C++ an opportunity to unwind the stack, 
>but they ignored it and went with a setjump/longjmp solution instead. 
>Now if your platform's setjmp implementation unwinds the stack - 
>fantastic. If not, severe memory leakage. All they needed was the 
>ability to set a function to call to perform the cancellation - under 
>C++, that would be best done by throwing an exception. Still, we can
>hope it will be added in the near future.
>
>>Like I said, I don't know the details of this particular situation,
>>but I would guess that they were fully aware of the problem, that they
>>investigated all the alternatives, and that they chose the "least bad"
>>of the alternatives.
>
>The above support for C++ (and other languages) costs maybe an hour 
>to add and is completely portable. I can't see how it wasn't done by 
>competent and fair designers. I have read rabid posts on usenet why 
>you shouldn't be writing mulithreaded C++ and such bollocks - not 
>sure if that's involved. The whole issues of threads gets many Unix 
>people's knickers in a right twist - they seem to think they're 
>"wrong", much like exceptions are "wrong" in C++ for some people. 
>Weird.
>
>>Yes, existing code puts everyone in a very difficult position, and
>>often causes compromises. But that's the nature of the beast. The
>>cost of breaking existing code is much, much greater than canonizing
>>it. [clipped rest]
>
>Have you noticed the world's most popular programming languages tend 
>to be evolved rather than designed? ;)

Yes, and I think there's a reason. You might not like my reasoning, but
it goes like this: Businesses choose programming languages based on
business issues first and technology issues second. This is not a bad
thing. In fact, I believe businesses *ought* to worry primarily about
business issues such as acquiring skilled people and tools. Can we hire
programmers that already know this language? Are there a glut of
programmers, or are we going to have to pay enormous salaries, signing
bonuses, and relocation fees? Are the area universities churning out
bodies that know this language? Are the programmers any good? Is there
a ready supply of programmers we can "rent" (AKA contractors) so we
don't have to hire everyone? Are there outsourcing firms we can bring
in to finish the work as a contingency plan? Is there a ready supply of
consultants who can advise us on nuances of using this language? Those
are examples of the people-questions; next come a similar pile of
questions about tools, multiple vendors, long-term support, development
environments, maturity of tools, companies who can train our people in
using the tools, etc., etc.

And after all these questions are answered, somewhere down on the list
are things like the relative "cleanliness" of the language. Are the
constructs orthogonal? Is there appropriate symmetry? Are there
kludges in the syntax? Those things will effect the cost of the
software some, to be sure, but they aren't life-and-death issues like
whether we can buy/rent programmers or whether we can buy/license good
tools. I have a client that is using (foolishly) a really clean,
elegant language that almost nobody uses. Most programmers who use that
language for more than a month absolutely love it. But the client can't
buy or rent programmers or tools to save its life, and its multi-million
dollar project is in jeopardy as a result.

So far all I've said is that most businesses choose programming
languages based primarily on business considerations, not primarily on
technical considerations. There are some exceptions (such as the
company I just mentioned), and perhaps you even experienced one or two
exceptions, but I think almost anyone would agree that the basic premise
("most businesses choose...") is correct. I further assert that that is
a good thing, and you are free to disagree on that point, of course.
However I have to believe you agree with me regarding how things *are*,
even if you disagree with me about how things *ought* to be.

The conclusion of the argument is simple: Go back through the
business-level questions I mentioned above, and most if not all of the
answers would be "okay" if the language was an incremental extension of
some well-known, mature language. That means using an "evolved"
language lowers business risk, even if it adds technical warts or
reduces technical elegance. (At least it's *perceived* to lower
business risk, but business people make decisions based on perception
rather than reality anyway, so the perception of a reduced business risk
is a powerful argument in favor of an "evolved" language.)


>>>Put it this way: when you try something which seems logical in C it
>>>generally works the way you think it should. 
>>
>>Really? No less a light as Dennis Ritchie bemoans the precedence of
>>some of the operators, and certainly the rather bizarre use of
>>'static' has caused more than one C programmer to wonder what's going
>>on. Plus the issue of order of evaluation, or aliasing, or any number
>>of other things has caused lots of consternation.
>
>Yeah, I've read his comments. There's one operator in particular - is 
>it &&? - which is very suspect in precedence.
>
>However, that said, once you get used to the way of C logic it stays 
>remarkably consistent. I personally recommend sticking "static" 
>before everything you want to be static and don't rely on default 
>behaviour - people's confusion with static clears up remarkably 
>quickly if you do that.

Yes, C is closer to the machine, since its mantra is "no hidden
mechanism." C++ *strongly* rejects the no-hidden-mechanism mantra,
since its goal is ultimately to hide mechanism - to let the programmer
program in the language of the *problem* rather than in the language of
the *machine*. The C++ mantra is "pay for it if and only if you use
it." This means that C++ code can be just as efficient as C code,
though that is sometimes a challenge, but it also means that C++ code
can be written and understood at a higher level than C code -- C++ code
can be more expressive -- you can get more done with less effort. Of
course it is very hard to achieve *both* those benefits (more done with
less effort, just as efficient as C) in the same piece of code, but they
are generally achievable given a shift in emphasis from programming to
design (using my lingo for "design"). In other words, OO software
should usually put proportionally more effort into design than non-OO
software, and should have a corresponding reduction in the coding
effort. If you're careful, you can have dramatic improvements in
long-term costs, yet keep the short-term costs the same or better as
non-OO.

People who don't understand good OO design (my definition, again; sorry)
tend to screw things up worse with OO than with non-OO, since at least
with non-OO they don't *try* to achieve so many things at once -- they
just try to get the thing running correctly and efficiently with
hopefully a low maintenance cost. In OO, they try to use OO design (my
defn) in an effort to achieve all those *plus* new constraints, such as
a dramatic increase in software stability, a dramatic reduction in
long-term costs, etc. But unfortunately, after they spend more
time/money on design, they have a mediocre design at best, and that
mediocre design means they *also* have to pay at least as much
time/money on the coding stage. They end up with the worst of both
worlds. Yuck.

The difference, of course, is how good they are at OO design (using my
defn).


>>But I guess I agree to this extent: C++ is larger than C, and as such
>>C++ has more confusing issues. I believe that C99 is causing some of
>>those same problems, however, since C99 is much bigger than its
>>predecessor. The same thing will be true of C++0x: it will be bigger
>>and have more compromises.
>
>It's also a case of history. K&R C was 90% done by just a few guys. 
>C++ is a collection of different enhancements over C by completely 
>different people with different intentions, and then with a good 
>dollop of academic theory thrown in for good measure. Hence its non-
>uniformity and lack of consistency.

In his first presentation to the ANSI/ISO C++ committees, Bjarne
Stroustrup made a statement that would guide the rest of the committee's
work: "C++ is not D." The idea was that we were *not* free to break C
compatibility unless there was a compelling reason. So in addition to
everything you said above, C++ ends up trying to be a better C. It
basically wants to be successful at 3 distinct programming paradigms:
procedural, object-oriented, and generic. The crazy thing is, if you're
willing to put up with some little edge effects that were designed for
one of the other paradigms, it actually succeeds at all three!


>Note that I have not yet found an academic who thinks C is an 
>excellent example of a procedural language :)
>
>>>TQString foo;
>>>foo="Hello world";
>>>
>>>Now TQString is a subclass of QString, and both have const char *
>>>ctors. The compiler will refuse to compile the above code because
>>>there are two methods of resolving it. 
>>
>>I may not understand what you mean by "two methods of resolving it,"
>>but I don't understand why the compiler doesn't do what you think it
>>should above. If TQString has a const char* ctor, then I think that
>>should promote "Hello world" to a TQString and then use TQString's
>>assignment operator to change 'foo'.
>
>I completely agree. However, MSVC wants you to put a TQString("Hello 
>world") around every const char * :(

It shouldn't. Try this code and see if it causes any errors:

=================================================
class BaseString {
public:
BaseString(const char* s);
};

class DerivedString : public BaseString {
public:
DerivedString(const char* s);
};

int main()
{
DerivedString foo;
foo = "Hello world";
return 0;
}
=================================================

I think that properly represents the problem as you stated it:
>>>TQString foo;
>>>foo="Hello world";
>>>
>>>Now TQString is a subclass of QString, and both have const char *
>>>ctors. The compiler will refuse to compile the above code because
>>>there are two methods of resolving it. "

Let me know if the above compiles correctly. (It won't link, of course,
without an appropriate definition for the various ctors, but it ought to
compile as-is.)

If the above *does* compile as-is, let's try to figure out why you were
frustrated with the behavior of TQString.


>I'm running into similar problems with the << and >> operators - I've 
>subclassed QDataStream with TQDataStream because QDataStream is 
>default big endian and doesn't provide support for 64 bit integers. 

Again, I'd suggest trying something different than subclassing
QDataStream, but I don't know enough to know exactly what that should
be.


>Every single time I use << or >> I'm an ambiguous resoluton error 
>when clearly the source or destination object is a TQDataStream.
>
>Most of my C++ problems are arising from "repairing" Trolltech's 
>code. 

And, I would suggest, mostly because you are repairing it via
inheritance.


>While they will fix things similarly in future Qt's as a result 
>of my suggestions, it still leaves now.
>
>>> The same sort of
>>>thing applies to overloading functions - you cannot overload based on
>>> return type, something I find particularly annoying.
>>
>>Another C++ idiom lets you do just that. I'll have to show that one
>>to you when I have more time. Ask if you're interested.
>
>Is that like this:
>bool node(TQString &dest, u32 idx)
>bool node(TKNamespaceNodeRef &ref, u32 idx)
>...

Nope, I'm talking about actually calling different functions for the
following cases:

int i = foo(...);
char c = foo(...);
float f = foo(...);
double d = foo(...);
String s = foo(...);


>>>Computer history is strewn with cases of an inferior product 
>>>destroying a superior product. It's hardly unique.
>>
>>I agree. I guess my point is simply this: any popular language is
>>going to have warts that an unpopular language will not. Take Eiffel
>>for example. Way back when Eiffel was very young, Bertrand Meyer
>>derided C++'s 'friend' construct, claiming it violated encapsulation. 
>>Then he began to get real users who were building real systems using
>>Eiffel, and suddenly he began to see how something like the 'friend'
>>construct actually *improves* encapsulation. So he added it to
>>Eiffel. At first the language seemed cleaner and simpler, then
>>gradually it added more stuff as it became more practical.
>
>Still, Eiffel is considered much cleaner than C++ - however, it's not 
>as popular. cf. my statement above about popular languages not being 
>designed.

Eiffel was considered high-risk. The low-risk alternatives were
Objective-C and C++. Objective-C was higher risk because the OO part of
it was completely separated from the non-OO part -- they weren't
integrated into a seamless whole, and that meant you couldn't easily
"slide" back and forth between OO and non-OO. Naturally technical
people are a little concerned about the ability to "slide" back and
forth between OO and non-OO, but from a business standpoint, that
ability reduces risk -- it gives us the contingency plan of
buying/renting C programmers/tools/etc.


>>However if you
>>pointed out any particular compromise, I could probably tell you why
>>it was done and in fact could (I hope!) make you realize that
>>"cleaning up" that compromise would cause more harm than good.
>
>Ok:
>1. Why didn't C++ have separated support for code reuse and subtyping 
>(like Smalltalk)?

Because C++ is statically typed. The thing that lets Smalltalk separate
the two is that Smalltalk attaches *no* semantic meaning to inheritance.
For example, if you have a Smalltalk function that calls methods
'push()' and 'pop()' on an object passed in as a parameter, you might
call that parameter 'aStack'. But there is *no* requirement that you
pass an object of a class that actually inherits from Stack. In fact,
if I create a class Employee that doesn't inherit from Stack but happens
to have 'push()' and 'pop()' methods (pretend the boss 'pushes' an
employee to make him work harder and 'pops' an employee when he makes a
mistake), then I could pass an object of Employee to your function, and
the Smalltalk runtime system would be perfectly happy: as long as my
object actually has methods called 'push' and 'pop', the runtime system
won't object.

The previous example showed how inheritance is not required for is-a,
and in fact is-a is determined solely based on the methods an object
happens to have. This next example shows how the compiler *never*
complains about the type of a parameter. Suppose you have a function
that calls method 'f()' and 'g()' on its parameter-object, but suppose
for some reason it only calls 'g()' on Tuesdays. Now suppose I create a
class that has a method f() but doesn't have a method g(). I can pass
an object of that class to your function, and provided I don't happen to
pass it on a Tuesday, everything should be fine. Note that I can
inherit from anything I want, and what I inherit from has nothing to do
with the type of parameter you take, since all (*all*, **all**,
***all***, ****ALL****) parameters are of type 'Object', and
*everything* (including an instance of a class, a class itself, an
integer, everything) is an object.

Because Smalltalk has *no* compile-time type-safety (e.g., if you try to
add two things, the compiler never knows if they're actually addable,
and only at run-time is the addition known to be safe or unsafe),
Smalltalk lets you inherit anything from anything in a random fashion.
You *can* equate inheritance with subtyping if you want, but you don't
have to, and certainly most "interesting" Smalltalk programs do not.
You can even (effectively) remove a method that was inherited from a
base class. You can even have an object change its class on-the-fly.
It's not as unconstrained as self-modifying code, but it's very, very
flexible.

I liken the difference to the difference between an all-terrain-vehicle
(guys love these in the USA; they drive through streams and over
mountains, the works) and a Ferrari. The Ferrari will get you there a
lot faster, but you *must* stay on the road. In C++ (and any other
statically typed OO language) you *must* stay on the road -- you must
restrict your use of inheritance to that which is logically equivalent
to subtyping. In Smalltalk, there is no restriction to staying on the
road, and in fact it's a lot more fun if you drive through your
neighbor's lawn, through the rose bush, through the flower bed, etc.,
etc. Of course you might get stuck in a rut on-the-fly, and of course
there are no guarantees you'll actually get to where you're wanting to
go, but it's a lot more fun trying!

So if C++ wanted to be like Smalltalk, it could do what you want. But
given that C++ wants compile-time type-safety, it can't do what you
want.


>2. Why don't return types determine overload?

Because things like this would be ambiguous:

int f();
float f();
char f();

int main()
{
f();
...
}

Worse, if the three 'f()' functions were compiled in different
compilation units on different days of the week, the compiler might not
even know about the overloads and it not notice that the call is
ambiguous.

There's an interesting example in Bjarne's "Design and Evolution of C++"
that shows how type safety would commonly be compromised if C++ did what
you want. Suggest you get that book and read it -- your respect for the
language and its (seemingly random) decisions will go up a few notches.


>3. Why can't the compiler derive non-direct copy construction? eg;
>class A { A(B &); } class B { B(C &}; } class C { C(const char *); }
>A foo="Hello";

This was done to eliminate programming errors. The problem is to avoid
surprising the programmer with bizarre chains of conversions that no
human would ever think of on his own. For example, if someone
accidentally typed this code:

A x = cout;

and if your rule was allowed, that might actually compile to something
really bizarre such as constructing an iostream from the ostream, then
constructing an iostream::buffer from the iostream, then constructing a
std::string from the iostream::buffer, then constructing a Foo from the
std::string, then calling the 'operator const char*()' method on that
Foo, then constructing a C from that 'const char*', then constructing a
B from that C, then constructing the A from the B.

No programmer would think that is intuitively obvious. Put it this way:
most programmers find the automatic conversion/promotion "magical," and
are somewhat afraid of them as a result. The idea of limiting the
number of levels is to put a boundary on how much magic is allowed. We
don't mind hiding mechanism from the C++ programmer, but we don't want
to hide so much mechanism that no programmer could ever figure out
what's going on.

Code must be predictable, therefore magic must be limited.


>In C++, you must rewrite that as A foo(B(C("Hello"))); - it's not 
>done for you, nor is there any way of fixing it except modifying A to 
>have a copy constructor taking const char * - which isn't possible if 
>you don't have the source to A or B.
>
>>Do you really mean "functional" or "procedural" here? The Functional
>>style is rather difficult to do in C++ (think Scheme). Functional
>>programming means never allowing any changes to any piece of data, so
>>instead of inserting something into a linked list, one creates a new
>>linked list and returns the new linked list that contains the new
>>item.
>
>I mean "functional" in terms of I tell you what to do not how to do 
>it. 
>
>Also above, the new linked list isn't created, merely a potential for 
>a separate new linked list is. You're right that it's "as if".
>
>>>>Another really big error. OO is primarily a design approach. The
>>>>concept of "OO programming" is very close to a misnomer, since OO
>>>>programming cannot stand on its own - it needs OO *design*.
>>>
>>>No, I must disagree with you there: design is independent of 
>>>language. 
>>
>>Nope, not true at all. A design that works for Functional languages
>>is horrible for Procedural languages, and vice versa. And both those
>>designs are wholly inappropriate for OO languages, Logic-oriented
>>languages, or Constraint-oriented languages. In short, the paradigm
>>*very* much effects the design.
>
>I would have said it affects the /implementation/ rather than the 
>design. You're right that say doing a sort in Haskell is completely 
>different than doing it in C - but I would call that a difference in 
>implementation because (a) the algorithm used is identical and (b) 
>the two solutions give identical output.
>
>>Try your belief out sometime. Try implementing your favorite program
>>in Prolog (logic-oriented) or Scheme (function-oriented) and see what
>>happens. Guaranteed that if your program is nontrivial, a radically
>>different design will emerge. Either that or you'll constantly be
>>fighting with the paradigm and the underlying language, trying to
>>force, for example, Prolog to be procedural.
>
>No, a radically different /implementation/ will emerge. That's simply 
>because implementing the design is best done one way in one language 
>and differently in a different language.
>
>>I'll go further: design isn't even independent of language *within* a
>>paradigm. In other words, a design that is appropriate for Smalltalk
>>is typically inappropriate for C++, even when you are trying very hard
>>to use OO thinking throughout.
>
>I'm getting a feeling that once again it's a disagreement about 
>terminology rather than opinion. I would treat the word "design" as 
>that in its purest sense - algorithms. Everything after that has 
>increasing amounts of implementation - so, for example, the object 
>structure would involve some amount of implementation detail.

Agree that this is mainly a difference in terminology.

HOWEVER there is an important conceptual difference as well. (I'll use
the term "software development" to avoid using either term "design" or
"implementation.")

In OO software development, the inheritance hierarchies are more
fundamental and more foundational than the algorithms or data
structures. That may seem strange to you, but if so, it's mainly
because of the *way* you've tended to use inheritance in the past.

Here's why I say that: start with an OO system that has a base class
'Foo'. Base class Foo is abstract. In fact, it is PURE abstract: it
has no data structures and algorithms -- it is a pure "interface" -- all
its methods are pure virtual. Next we create, over time, 20 different
derived classes, each of which has a different data structure and
algorithm. The methods have a similar purpose, since their contracts
are similar, but there is *zero* code reuse between these derived
classes since all 20 inherit directly from 'Foo'.

So the algorithms and data structures are wholly replaceable. In fact,
we intend to use all 20 different data structures and algorithms in the
same program at the same time (not implying threading issues here; "same
time" simply means "in the same "run" of the program, all 20 classes are
used more-or-less continuously).

In OO systems that smell even remotely like this, the core algorithms
and data structures are very secondary to the design, and in fact can be
ignored during early design. During late design, someone will need to
carefully select the best algorithms and data structures, but during
early design all that matters is the inheritance structure, the method
signatures in the base class 'Foo', and the "contracts" for those
methods. If the contracts are set up right, then all 20 derived classes
will be able to require-no-more, promise-no-less (AKA "proper
inheritance"), and the derived classes can totally bury the algorithms
and data structures.

It's almost like specifying an API then later implementing it. When
you're specifying the API, all you care about is that the parameters and
specifications ("contracts") are correct, complete, and consistent. If
your API is clean enough, you actually *want* to be able to ignore the
specific algorithm / data structure that will be used behind that API,
since if you can bury that information behind the API, you know the
algorithm / data structure can be scooped out and replaced if someone
later comes up with a better one. The difference is that with OO, we
have an API that has 20 different implementations and they're all
pluggable, meaning the guy who is using the API never knows which
implementation he's working with. That forces us to "do the right
thing" by designing our API (i.e., methods in 'Foo', parameters, and
especially contracts) in a way that all 20 derived classes are "proper"
and that none of the 20 "leak" any of their private info (algorithms and
data structures) to the user.

If you're still with me, then inheritance is more foundational than
algorithms and/or data structures. Your past style of inheritance
equated inheritance with data structure, after all, inheritance was just
a way to group two chunks of software together. But now that you see
the above dynamic-binding-intensive approach, perhaps you see that
inheritance is an earlier lifecycle decision than algorithm. That's why
I call the inheritance graph a critical (*the* critical) part of design.
Get that right and the rest is replaceable.


>>>I have never agreed with OO design as my university 
>>>lecturers found out - I quite simply think it's wrong. Computers
>>>don't work naturally with objects - it's an ill-fit.
>>>
>>>What computers do do is work with data. If you base your design
>>>entirely around data, you produce far superior programs. 
>>
>>In your experience, this may be true. But trust me: it's a big world
>>out there, and in the *vast* majority of that world, your view is very
>>dangerous.
>
>I have applied my skills to many projects: public, private and 
>personal and I have not found my data-centric approach to have failed 
>yet. It has nothing to do with code maintainability nor much other 
>than efficiency - but that's why I use an impure OO for 
>maintainability - but if you rate superiority of a program based on 
>its excellence in functioning, my approach works very well. I 
>contrast with OO designed projects and quite simply, on average they 
>do not perform as well.

Re your last sentence, most OO software sucks because most OO designers
suck.

:-)

One other thing: I re-read what you wrote before and would like to
separate it into two things. You said,

>>>I have never agreed with OO design as my university 
>>>lecturers found out - I quite simply think it's wrong.

I agree with this part wholeheartedly. University lecturers typically
don't know the first thing about how to actually get something done with
OO. They have silly examples, and they tend to teach "purity" as if
some customer actually cares if the code using their particular
guidelines, rules, etc. Most of the time their guidelines are wrong,
and even when the guidelines are right, they are, after all, just
guidelines. The goal is still defined in business terms, not technical
terms. E.g., the schedule, the cost, the functionality, the user
acceptance testing, etc. have nothing to do with purity.

In fact, I disagree with most university lecturers at a deeper level
since I really hate the one-size-fits-all approach. Too many
technologists already know the answer before they hear the question.
They know that Professor Know It All said we should use this technique
and that approach, and even though they don't yet know anything about
the problem, they already know how to solve it! I always find it's
safer to listen before I speak, to ask before I answer. Having a
solution that's looking for a problem is dangerous. Lecturers who
(often implicitly) teach that approach to their students ultimately
cause their students to have just one bullet in their gun. If the
problem at hand *happens* to match the style, approach, and guidelines
that the student believes/uses, everything works great; if not, the
student will do a mediocre job.


>>>Computers
>>>don't work naturally with objects - it's an ill-fit.
>>>
>>>What computers do do is work with data. If you base your design
>>>entirely around data, you produce far superior programs. 

This is the part I was disagreeing about. You can see why, perhaps, in
the example I gave above (the 'Foo' class with 20 derived classes each
of which had its own distinct data structure and algorithm).


>Now regarding the TCO of the code, I would personally say my code is 
>extremely maintainable using my OO-like source filing system. You, I 
>would imagine, would say how can I sleep at night when performing 
>such atrocities to commonly held standards? (you wouldn't be the 
>first to ask this).
>
>Of course, in all this, I am referring to C and assembler and what 
>I'd call C+ because I mostly wrote C with some small C++ extras. This 
>project I'm working on now is the first to use multiple inheritance 
>and templates and a fair bit more.
>
>>Be careful: you are painting yourself into a very narrow corner. You
>>may end up limiting your career as a result.
>
>Possibly, but I would doubt it. I may have some unique opinions on 
>this but what the customer cares about is (a) will it work and (b) 
>can we look after it well into the future. My case history strongly 
>supports both of these criteria, so a priori I'm on the right path.

Well, if you're pretty convinced you're already using the right overall
approach, then you'll have no reason to question that overall approach
and that means it will be a lot harder for you to learn new overall
approaches. To me, the most important area to stay teachable is the big
stuff. The first step in learning something new (especially something
new at the paradigm level, i.e., a new way of approaching software as
opposed to merely a new wrinkle that fits neatly within your current
overall approach) is to a priori decide you might be on the wrong track.
But that's up to you, of course.

(BTW I learn something new at the paradigm level every year or so. I
can't take it more often than that, but I really try to emotionally rip
up *all* my preconceptions and start over every year or so. My buddy
and I get together every year or so, write a paper to tell the world how
great we were and what great things we'd done, then sit back and say to
each other, "That was then, this is now. We're dinosaurs but we don't
know it yet. The weather has probably already changed but we had our
heads down and didn't notice. How has the weather changed? How do we
need to adapt-or-die?" After keeping that up for a few minutes, we
actually begin to believe it, and pretty soon we're looking scared and
worried - like we really could miss the train that's leaving the
station. Ultimately we'd end up reinventing ourselves, coming up with a
new way of approaching projects, stretching, trying out radically new
ideas, and very much not thinking we're a priori on the right path.
It's always be painful and humbling, but it's made me who I am today.
And who I am today is, of course, not good enough any more, so I'll have
to change again! :-)


>>>Now I will 
>>>agree OO is good for organising source for improved maintainability,
>>>but as a design approach I think it lacking.
>>
>>You really should read "OO Design Patterns" by Gamma, et al (also
>>published by Addison Wesley). Read especially chapter 2. I think
>>you'll see a whole world of OO design -- and you'll see ways to use OO
>>at the design level that are totally different (and, I dare say,
>>totally superior) to the approach you are describing here.
>
>Is that about a vector graphics editor called Lexi? I have written 
>two vector graphic editors, the latter in OPL for a Psion Series 3 
>(OPL is quite like BASIC - no objects). Interestingly, the approach 
>Gamma follows is almost identical to my own - I used dynamic code 
>loading to load tool modules with a fixed API thus permitting 
>infinite extensibility. Encapsulation of the API plus building a 
>portable framework are two things I have done many times - I wrote my 
>first framework library in 1992 some four years before going 
>professional.
>
>That Gamma book amused me - it attaches lots of fancy names to real 
>cabbage and thistle programming. However, his conclusion is valid - 
>in modern times, most programmers wouldn't know half that book, and 
>that's worrying - hence the need for such a book.

Yes it does attach some fancy names to blue-collar programming ideas.
The thing I wanted you to see from it was how inheritance is not used as
a reuse mechanism - how you inherit from something to *be* *called* by
(the users of) that thing, not so you can *call* that thing.


>>This project ended up being around half a person-millennium (150-200
>>developers over a 3 year period). I ended up training and mentoring
>>them all, and we had lots and lots of design sessions. When they were
>>finished, the things that used to take 9 months could be done by a
>>single person in less than a day. The success-story was written up in
>>Communications of the ACM -- it was the lead article in the Special
>>Issue on Object-Oriented Experiences. It was also written up in IEEE
>>Software and perhaps a few other places. (And, by the way, there was
>>no loss of performance as a result. That was *very* hard to achieve,
>>but we did it. In the end, customers gained 2x MIPS/dollar.)
>>
>>The point is that these benefits came as result of OO *design*, not as
>>a result of programming-level issues.
>
>I'm sure OO design greatly improved the likely wasp's nest of 
>spaghetti that existed in there previously. But I'm not seeing how OO 
>design is better than any other approach from this example - there 
>are many methods that could have been employed to achieve the same 
>result.

Two things:

1. If what you said in the last sentence is true, where's the beef? If
these other approaches could do the same thing, why didn't they?

2. I think you've missed the point I was making. The point was that
this project used inheritance the way I'm proposing it should be used,
and that's very different from the "inheritance is for reuse" approach.
It's not about OO vs. non-OO. It's about how the two different styles
of OO produce different results.


>>One more example: UPS (another of my clients; in fact I was there just
>>last week) has new "rating" and "validation" rules that change every 6
>>months. For example, if Detroit passes a law saying it's no longer
>>legal to drive hazardous materials through its downtown area, the code
>>needs to change to prevent any package containing hazmat from going
>>through downtown Detroit. In their old system, which was built using
>>your style of C++, it took 5 months out of every 6 to integrate these
>>sorts of changes. Then someone created a framework using OO design
>>(not just C++ programming), and as a result, they could do the same
>>thing in 2 weeks.
>
>Any good framework here, OO or not, would have solved most of their 
>dynamic change problem. In fact, I'd plug in some sort of scripting 
>capability so such items were easy to change.

Again I think you're missing the point. The point was similar to the
above: inheritance-to-be-reused vs. inheritance-for-reuse.


>>>An example: take your typical novice with OO. Tell them the rules and
>>> look at what they design. Invariably, pure OO as designed against
>>>the rules is as efficient as a one legged dog. 
>>
>>The way you have learned OO, yes, it will have performance problems.
>>But the way I am proposing OO should be done, either it won't have
>>performance problems at all, or if it does, those problems will be
>>reparable.
>
>ie; You're bending OO to suit real-world needs, 

Not necessarily. I'm certainly not a believer in purity of any sort,
and I'm certainly willing to eject *anybody's* definition of purity to
achieve some greater goal. But I've also used OO *to* *do* performance
tuning. That is, I've used various OO idioms and design approaches as a
way to dynamically select the best algorithm for a particular task. The
OO part of that is the selection, and is also the pluggability of new
algorithms as they become known, and is also the fact that we can mix
and match pieces of the problem using one algorithm and other pieces
using another (as opposed to having one function 'f()' that has a single
algorithm inside). I'm not saying none of this could be done without
OO; I am rather saying that the performance tuning was, in some cases,
within the spirit of what OO is all about.


>which is precisely 
>what I said experienced OO people do.

Certainly I'm both willing and have done so. But I don't seem to view
that like you seem to view it. I don't see it as if something is
lacking in OO. Rather I see it like I have a toolbox, and one of those
tools says "OO," and I end up choosing the right combination of tools
for the job. When I combine OO with non-OO, that doesn't bother me or
indicate something is wrong with either OO or non-OO.

In contrast, it feels like what you're saying is, "If you can't do the
*whole* thing using OO, something is wrong with OO." 


>>>In fact, in my opinion, OO 
>>>experience is actually learning when to break pure OO and experienced
>>> OO advocates do not realise that they so automatically break the
>>>pure application of what they advocate.
>>
>>We agree that purity is never the goal. Pure OO or pure procedural or
>>pure anything else. The goal is (or *should* be) to achieve the
>>business objectives. In my experience, OO *design* brings the real
>>value, and not just programming-level issues.
>
>Has it not occurred to you that it's merely a /consequence/ of OO 
>rather than innate quality that it has these beneficial effects?

I don't think either one is true.

1. Re "a /consequence/ of OO": I think the benefits are a consequence of
clear thinking, not OO. OO helps organize the design, and helps by
providing snippets of design that have been used elsewhere. And OO
programming helps if and only if the design uses OO. But any given OO
hierarchy could be designed in two totally different ways, one sideways
from the other, basically swapping methods for derived classes, so
therefore the benefits are not a consequence to OO itself - they are a
consequence of clear thinking and proper use of software technology in
general.

2. Re an "innate quality that it has": OO certainly does NOT innately
have any beneficial effects. You can write FORTRAN code in any
language, including C++ or Java or Smalltalk or Eiffel, and certainly a
misuse of OO (or of an OO programming language) can produce problems
that are worse than any benefits they might accrue.


>>I agree with everything except your last phrase. OO design is good
>>for both people and computers.
>
>Right, firstly, before I start this section, I'd like to thank you 
>for your time and patience - I've noticed some of what I didn't know 
>and you explained to me was already online in your FAQ, so I 
>apologise for wasting your time in this regard. Furthermore, I should 
>mention that if you give me permission to distribute this 
>correspondence, you will not only have done me a great favour but 
>also the same to others. 

Go for it.

>Certainly, if it takes you as long to reply 
>as me, you're investing considerable time which a busy man as 
>yourself surely cannot easily spare.
>
>I, as I have already mentioned, come from a rather unique programming 
>background. We were probably most comparable to the Unix culture 
>except we were more advanced and we always had a very strong free 
>software tradition where we released code and source into the public 
>domain - furthermore, many commercial apps came with source too. 
>Hence, there was great chance for learning off others, and much of 
>this recent furore about OO etc. in my humble opinion is merely fancy 
>names for a collection of old techniques.
>
>Now as I mentioned a number of times, I believe a data-centric 
>approach is superior to OO because it more accurately fits the way a 
>computer works. This is not to say many of the advantages of OO do 
>not still hold - in fact, I daresay many OO experts actually are data-
>centric too without realising it. My criticism of OO therefore is 
>that it isn't /intuitively/ "correct" ie; pure OO is rarely the 
>optimal solution.

Here again, you seem to be saying that if OO isn't optimal for 100% of
the solution, then something's wrong with it. I take the opposite tact,
mainly because I am *not* a promoter for any given language or paradigm.
In fact, I would be highly suspicious if someone (including you) claimed
to have a technique that is optimal for 100% of the solution to any
given problem, and especially if it was optimal for 100% of the solution
of 100% of the problems. I simply do not believe that there exists any
one-size-fits-all techniques, including OO, yours, or anybody else's.


>I had an idea back in 1994 for advancing procedural programming to 
>the next level (this was independent of OO - honestly, I barely even 
>knew what it was at the time) - I effectively wanted to do what OO 
>has done in knocking us onwards a notch - however, as it would be, I 
>considered then and still do today that my solution is superior.
>
>Basically, it revolves entirely around data. Responsibility for data, 
>whether in memory, disc or across a network is devolved entirely to 
>the kernel. One may create data streams between data in an arbitrary 
>fashion - how it actually is peformed (translations etc.) is however 
>the kernel sees fit. Data is strongly typed so you can't stick 
>incompatible types of data together - however data can be converted 
>from one type to another via convertors which are essentially 
>specialised plug ins which can be installed. Often, conversion is 
>implicitly performed for you although either you can choose a route 
>or it can dynamically create one based on best past performances. Of 
>course, converters can offer out their input in more than one format 
>or indeed offer a compound document as some or all of its subdatas.
>
>Now the next part of the picture is components - these are tiny 
>programs which do one thing and one thing well to data. A good 
>analogy would be "more" or "grep" in Unix - but it goes way beyond 
>that because components are much like a COM object or Qt Widget in 
>that you can just plonk them somewhere and they do their thing. Then, 
>the theory is, to build any application, you merely create a *web* of 
>simple data processing components. For example, a spell checker 
>component would accept text data and check it either with the user or 
>with the component managing the data - there is no concept of data 
>ownership in my proposal (kernel owns everything)
>
>This model, I believe, compares extremely well to OO. You get lots of 
>code reuse, a dynamic and extremely flexible linking mechanism, a web 
>rather than a hierarchy and automatic distribution across multiple 
>processors (and indeed machines). It's clearly functionally biased 
>because it simply sets up the data relations and the kernel works out 
>the best way to actually perform the processing. You get lots of 
>stuff for free eg; OLE, data recovery in case of program crash and 
>indeed limited graphical programming like some of those UML editors. 
>You get the advantages of dynamic linking without business' dislike 
>of source exposure as with Java or VB.
>
>Furthermore, you get automatic /data/ reuse as well as code reuse - 
>data just as much as code can be distributed across multiple machines 
>for performance and/or security reasons. And of course, maintainence 
>costs are low because the component set you use are as individual or 
>fine-grained as you like them.
>
>Now hopefully you'll be agreeing with me that this is all good - 
>however, if you're like the other experts I've proposed this to, your 
>first question will be "oh but how to implement it?" because the 
>balancing act between all the different requirements means severe 
>inefficiency. And you'd be right - I've made two prior attempts at 
>this and failed both times - and right now, I'm making my third 
>attempt which I'm self-financing myself for six months. The theory 
>goes, produce a technology demonstration, if it runs at all 
>reasonably then obtain venture capital, start a company and two years 
>later we have a product. Five years later it's more or less complete. 
>If anything goes wrong, return to working on whatever pays a lot for 
>a while, then try again in a few years. Either way, the spin off 
>benefits of each past attempt have been enormous, so really I can't 
>lose.
>
>So, thoughts? I'm particularly interested in what you see as design 
>flaws 

Please compare and contrast with web-services. Obviously you're not
married to XML like most web-services are, but they also have a concept
of components / services through which data flows. Is there some
similarity? Even at the conceptual level?


>- I 
>know MIT did research into this for a while but stopped. Would you 
>agree it's 
>a viable future? 

Actually my first question about viability is not about how to implement
it efficiently, but instead *assuming* you find a way to implement it
efficiently, will anybody buy into it anyway? Large companies pay
roughly half a year's salary per programmer to train their people in a
new style of programming, such as going from non-OO to OO. They also
pay roughly 1/3 that for tools and other support stuff. Plus the
managers have to learn enough about what's going on to manage it
effectively, and that's scary for them, particularly the older guys who
are close to retirement ("Why should I bother learning something new? I
might not be any good at it! I'm in control now, so even if what we're
doing isn't maximally efficient to the company, changing it would risk
the delicate balance that keeps me employed, so what's in it for me to
change?").

If we figure $200K (US) per programmer (burdened salary including
hardware, software, benefits, management overhead, building, air
conditioning, etc.), and if we figure a medium sized organization with
100 programmers, then we're talking about $130M to embrace this new
paradigm. Will they really derive a benefit that is equivalent to at
least $100M? What's the pay-back? And if these medium sized companies
don't ante up for your new approach, why should the little guys do it?

I'm not trying to discourage you - just trying to ask if you know what
the pay-back really is. I'm also trying to remind you about how none of
the front runners in OO survived, and ultimately it took a couple of
decades before that paradigm took hold.


>I've had Carl Sassenrath (he did much the OS for the 
>Commodore Amiga) and Stuart Swales (did much of RISC-OS I mentioned 
>earlier) both agree it's probably right, but both wondered about 
>implementation. I should be especially interested in seeing what a 
>static OO 
>based person things - neither Carl nor Stuart are static code nor OO 
>advocates hugely.

Please explain what you mean by "static code."


>Furthermore, any advice about soliciting venture capital in Europe 
>would be 
>useful (yes, I know it's like squeezing blood from a stone here) - 
>ever since 
>the indigenous industry withered and died here, it's been very hard 
>to obtain 
>capital for blue-sky projects without the Americans buying them up. 

It's also hard here. There really has to be a decent business case.
Honestly, if you're wanting to make money, you're better off using the
Harvard Business School model: sell it first, *then* figure out how to
deliver it. Building a better mousetrap and hoping somebody out there
cares is an extremely risky business. Sometimes it works, but I think
most of those are historical accidents - they happened to accidentally
build the right thing at the right time.

Seems to me you have two goals: stretch yourself via your idea (and in
the process learn something new), and build a business that helps you
keep the lights on, and possibly more. If the latter is your primary
goal, seriously thinking about starting with a business case or even
some sales attempts. If the former, then ignore everything else I've
said and have a wonderful six months -- and may be, just may be, you'll
hit the jackpot and actually sell the thing when you're done.


>I'm 
>unable to obtain a work visa to the US (on the banned list), so 
>that's out - and 
>besides, as far as I can see, only IBM out of the big US software 
>companies 
>would be interested as only IBM's goals would be advanced by such a 
>project. Oh BTW, did I mention it runs on Win32/64, Linux and MacOS X 
>when they get the new FreeBSD kernel in - and yes, all computer 
>irrespective 
>of endian automatically work in unison. I'd also like it to stay in 
>Europe so it (or rather I) stays free from software patents.
>
>Anyway, any comments you may like to offer would be greatly 
>appreciated. You've already earned yourself an acknowledgement in the 
>projects docs for helpful tips and suggestions.

Thanks for the acknowledgements.

Okay, here's another comment for you to think about (no need to reply):
What specific application area will it be most suited for? Will it be
most suited for embedded systems? handhelds? web servers? apps
servers? client-side programs? thin-client? Similarly, what specific
industries are you seeing it fit best? Banking, insurance,
transportation, etc.

May be these questions don't make sense, because may be I'm giving the
wrong categories. E.g., may be the real question is whether it will be
developing software development tools, database engines, or other
"horizontal" apps, as opposed to the various vertical markets. Whatever
the correct categorization is, think about whether you know who would
use it and for what (and please forgive me if I missed it above; it's
very late). Ultimately if you have a reasonably crisp sense of who
would buy it, you will be better able to target those programmers.
Programmers nowadays want highly customized tools. You mentioned GUI
builders earlier. I personally despise those things, since they have
dumbed down programming and turned it into something more akin to
brick-laying, but I recognize that programmers nowadays really crave
those easy-to-use tools. Many programmers nowadays don't seem to like
writing code. They don't mind dragging the mouse on the screen, and
clicking on a pull-down list of options, but many of them really don't
want to actually type in code. I'm the opposite. Give me an Emacs and
a command-line compiler, then stay out of my way. Which really means I
no longer "fit in" anywhere. Oh well, that's life I guess.

I wandered - let me get back on track. The point was that programmers
nowadays don't want generic tools that can be customized for their
industry. They want specific tools that have already been customized
for their industry. They want tools that are brain-dead easy to use,
tools that don't require them to write much code. Will your vision be
able to serve those sorts of programmers? If so, is the plan to do the
customization with the VC money?

My not-so-hidden-agenda here is to help you begin to think like a
"packaged software" company. Unlike custom software companies and
end-user shops that build apps based on a spec from a well-defined
"customer," the packaged-software companies spend *enormous* amounts of
time and money doing market research before they write anything. Here
again it's the Harvard B-school approach: they're figuring out what will
sell, then they build that. So think about that: what will sell? is
there any latent frustration out there that would spur companies into
changing their toolset and programmers? what is that frustration?

(I wish I could help you by answering some of these questions, but
frankly the frustration I'm seeing these days isn't, "We're having
trouble delivering value because _____", but is instead, "We're having
trouble because we're being asked to do the same tasks as always but
with 25% fewer people," or "We're having trouble because we're not
viewed as adding any business value to the company." In other words,
I'm just not seeing a great groundswell of frustration from technical
people and/or first-line managers who are moaning in unison, "Our tools
and/or languages suck; if we only had better tools and/or languages,
life would be wonderful." I don't see why a new tool and a new language
(and *especially* a new language within a new paradigm) would be viewed
by the typical manager as a cost-saving measure. Like I said earlier,
I'm not trying to throw a wet towel on your idea; more just being honest
about what I do and don't see in the marketplace today.)

Marshall




From: Niall Douglas <xxx@xxxxxxx.xxx>
To: "Marshall Cline" <xxxxx@xxxxxxxxx.xxx>
Subject: RE: Comments on your C++ FAQ
Date: Thu, 1 Aug 2002 01:12:45 +0200

On 30 Jul 2002 at 2:14, Marshall Cline wrote:

> >Not being able to obtain these books easily (I live in Spain plus
> >money is somewhat tight right now), I looked around the web for more
> >on this. I specifically found what not to do when inheriting plus how
> > deep subclassing usually results in code coupling increasing. Is
> >that the general gist?
> 
> That's a start. But coupling between derived and base class is a
> relatively smaller problem than what I'm talking about. Typically
> deep hierarchies end up requiring a lot of dynamic type-checking,
> which boils down to an expensive style of coding, e.g., "if the class
> of the object is derived from X, down-cast to X& and call method f();
> else if it's derived from Y, down-cast to Y& and call g(); else if
> ...<etc>..." This happens when new public methods get added in a
> derived class, which is rather common in deep hierarchies. The
> if/else if/else if/else style of programming kills the flexibility and
> extensibility we want to achieve, since when someone creates a new
> derived class, they logically need to go through all those
> if/else-if's and add another else-if. If they forget one, the program
> goes into the "else" case, which is usually some sort of an error
> message. I call that else-if-heimer's disease (pronounced like
> "Alzheimer's" with emphasis on the word "forget").

When you say dynamic type checking, you mean using typeid()? I had 
thought that was to be avoided - in fact, any case where a base class 
needs to know about its subclasses indicates you've got the 
inheritance the wrong way round?

> [stuff about bad class design deleted]
> However since we used inheritance, we're pretty much stuck with
> HashTable forever. The reason is that inheritance is a very "public"
> thing -- it tells the world what the derived class *is*. In
> particular, users throughout our million-line-of-code system are
> passing Bags as HashTables, e.g., converting a Bag* to HashTable* or
> Bag& to HashTable&. All these conversions will break if we change the
> inheritance structure of Bag, meaning the ripple effect is much
> higher.

One of the things I look for when designing my class inheritances is 
whether I could say, chop out one of the base classes and plug in a 
similar but different one. I'm just trying to think where I learned 
that lesson - I think it was that cellular automata game I wrote to 
refresh my C++ (http://www.nedprod.com/programs/Win32/Flow/) and I 
realised it's best when the base class knows nothing at all about its 
subclasses except that it has some and furthermore than subclasses 
know as little as possible about what they inherit off (ie; they know 
the inherited API obviously, but as little as possible about what 
/provides/ the API). I think, if memory serves, it had to do with the 
tools in the game. Of course, officially this is called reducing 
coupling.

> A derived class's methods are allowed to weaken requirements
> (preconditions) and/or strengthen promises (postconditions), but never
> the other way around. In other words, you are free to override a
> method from a base class provided your override requires no more and
> promises no less than is required/promised by the method in the base
> class. If an override logically strengthens a
> requirement/precondition, or if it logically weakens a promise, it is
> "improper inheritance" and it will cause problems. In particular, it
> will break user code, meaning it will break some portion of our
> million-line app. Yuck.
> 
> The problem with Set inheriting from Bag is Set weakens the
> postcondition/promise of insert(Item). Bag::insert() promises that
> size() *will* increase (i.e., the Item *will* get inserted), but
> Set::insert() promises something weaker: size() *might* increase,
> depending on whether contains(Item) returns true or false. Remember:
> it's perfectly normal and acceptable to weaken a
> precondition/requirement, but it is dastardly evil to strengthen a
> postcondition/promise.

This is quite ephemeral and subtle stuff. Correct application appears 
to require considering a lot of variables.

> Please don't assume the solution is to make insert(Item) non-virtual.
> That would be jumping from the frying pan into the fire, since then
> Bag::insert() would get called on a Set object, and there actually
> could be 2 or 3 or more copies of the same Item inside a Set object!! 
> No, the real problem here isn't the override and it isn't the
> virtualness of the method. The real problem here is that the
> *semantics* of Set are not "substitutable for" those of Bag.

This is quite a similar point to what you made two replies ago or so. 
I'm not sure of the distinction between this example and the one 
which you explained to me a few emails ago - ie; this example is 
supposed to prove deep inheritance trees are evil but yet it would 
seem you are proving the same point as before regarding bad 
inheritance.

Or are you saying that the deeper the tree, the much greater chance 
it's a symptom of bad design?

> As before, aggregation would be perfectly safe and reasonable here:
> Dictionary could have-a Set, could insert Association objects (which
> would automatically be up-casted to Item&), and when it
> accessed/removed those Items, Dictionary could down-cast them back to
> Association&. The latter down-cast is ugly, but at least it is
> logically safe -- Dictionary *knows* those Items actually are
> Associations, since no other object anywhere can insert anything into
> the Set.
> 
> The message here is NOT that overrides are bad. The message here is
> that tall hierarchies, particularly those built on the "inheritance is
> for reuse" mantra, tend to result in improper inheritance, and
> improper inheritance increases time, money, and risk, as well as
> (sometimes) degrading performance.

So, let me sum up: inheritance trees should be more horizontal than 
vertical because in statically typed languages, that tends to be the 
better design? Horizontal=composition, vertical=subclassing.

No, I've got what you mean and I understand why. However, the point 
is not different to what I understood a few days ago although I must 
admit, Better = Horizontal > Vertical is a much easier rule of thumb 
than all these past few days of discussion. You can use that rule in 
your next book if you want :)

> * If Base::f() says it never throws an exception, the derived class
> must never throw any exception of any type.

That's an interesting one. I understand why already, I can infer it 
from above. However, if the documentation of my subclass says it can 
throw an exception and we're working with a framework which 
exclusively uses my subclasses, then all framework code will 
ultimately have my code calling it ie; bottom of the call stack will 
always be my code. Hence, in this situation, it is surely alright to 
throw that exception?

I say this because my data streams project can throw a TException in 
any code at any point in time (and there are explicit and loud 
warnings in the documentation to this regard). Of course, one runs 
into problems if the base class when calling its virtual methods does 
not account for the possibility of an exception.

> >Hence, that TSortedList should now derive off QGList which doesn't
> >have the append and prepend methods so I can safely ensure it does
> >what its parent does.
> 
> What you really ought to do is check the *semantics* of QGList's
> methods, in particular, read the preconditions and postconditions for
> those methods. (I put these in the .h file for easy access, then use
> a tool to copy them into HTML files; Qt seems to put them in separate
> documentation files; either way is fine as long as they exist
> somewhere.) Inheritance is an option if and only if *every* method of
> TSortedList can abide by the corresponding preconditions and
> postconditions in QGList.

Actually, to tell you the truth, I had already looked through QList 
and QGList to ensure my altering of TSortedList wouldn't cause 
problems - those disabled methods weren't called internally within 
QGList, so I was fairly safe the list would always remain sorted.

However, I hadn't been impressed with the gravity of decisions like 
that. That's different now.

> Again, in a small enough project, you can use "improper inheritance"
> if you want, but you must be very sure that no one ever uses a Base&
> or Base* to point to a Derived object. (Personally I never use
> improper inheritance, since the down-side cost is unlimited. In
> contrast, most "bad programming practices" have a bounded cost, e.g.,
> a "goto" might increase the maintenance cost of its method, but it can
> never screw up any other method, so its cost is bounded by the number
> of lines in the method. However the down-side cost for improper
> inheritance goes on and on: the more users use your code, the more
> places that can break.)

It seems to me that one should avoid these cases as well - but if you 
do have to do it, then a great big flashing neon warning sign in the 
docs is in order.

> >If more warning were out there, we'd all have less 
> >problems with other people's code.
> 
> Agreed. I get on my stump and shake my fist in the air every chance I
> get. You are hereby deputized to do the same. Go get 'em!!
> 
> Seriously, proper use of inheritance really is important, and knowing
> the difference is critical to success in OO, especially in large
> systems.

Already have a page planned called "programming guidelines". It's 
mostly about applying the data-centric approach correctly and 
efficienctly, but it'll have a section about writing good C++ as 
well. I'll put a recommendation in for your book - I notice my old 
buddies at ACCU (it's UK ACCU, quite distinct from US ACCU) gave your 
book the thumbs-up which is very rare (http://www.accu.org/cgi-
bin/accu/rvout.cgi?from=0ti_cp&file=cp000371a). They normally slam 
almost everything they review - and in fact, your book didn't make 
the recommended reading list 
(http://www.accu.org/bookreviews/public/reviews/0hr/advanced_c__.htm).

> >So why my class, almost identical, does not and Qt's one does I do
> >not know.
> 
> Trust me: change "void TSortedList<class type>::f() { ... }" to
> "template<class type> TSortedList::f() { ... }".

I feel slightly sheepish now. It's quite obvious really :( - still, 
last time I can't tell you the joy I felt when it finally linked for 
the first time!

> >> while (*dest++ = *src++)
> > [...]
> >That kind of dangerous code brought out a compiler bug in a version
> >of GCC and MSVC 5 if I remember correctly. The increments weren't
> >always done with the load and store when full optimisation was on.
> >Solution: use comma operator.
> 
> I'll take your word for it, and I really don't want to argue over it,
> but I'm *very* surprised that any C compiler ever shipped any version
> that couldn't correctly compile the above. That snippet is an idiom
> that is used in many, many C programs, and is probably part of the
> compiler's test suite.

Only on full optimisation. What happens is the loop exits without 
incrementing either dest or src correctly - often, this is 
unimportant as rarely you'd use dest or src afterwards in cases like 
above. Very architecture specific - a RISC architecture is more 
likely to have this problem than a CISC.

> I suppose some optimizers might unroll some loops, but the real
> problem, as you obviously know, is cache misses. Plus compilers can't
> necessarily know that any given loop will actually be a bottleneck,
> and as you know, performing this sort of optimization on a
> non-bottleneck loop would make space-cost worse without any
> improvement in overall time-cost. If a program has 1,000 loops, how
> could a compiler guess which of those are the bottlenecks? As has
> been demonstrated by software engineering study after study over the
> years, programmers don't even know where their own bottlenecks are, so
> I expect it will be a long time before compilers can know.

It knows using a "make the loop code x% of the processor L1 cache" 
approach. If there's lots of code in loop, it gets unrolled much 
less. Obviously, also, the writer gives hints with #pragma's.

More recently, we've seen the introduction of intelligent branch 
predictors (old days they did a dest - PC compare, if negative it 
assumed the branch would be taken) which effectively cache branch 
decisions and assign a history profile. That saves those tricks I 
mentioned before about removing if...else's and routing the decision 
through effectively a state-table.

Of course, Intel went ever further with the Pentium 4 and had the 
thing execute the code either side of the branch and throw whichever 
not taken away. We really really need to get away from x86 - the ARM 
is some obscene number of times more efficient per clock cycle than 
any x86 processor and it was designed around 1986 (on the back of an 
envelope down the Wrestlers pub in Cambridge if you're curious)! 
Amazingly for a guy drinking a pint of beer, he did a better job than 
DEC did with the Alpha instruction set despite them spending millions 
:)

> >If you ask me about embedded systems, I don't doubt I'm as good as
> >they get.
> 
> Sounds like it. I should remember that in case I get more embedded
> systems work from TI or UPS or HP. In the mean time, learn proper
> inheritance so you'll be ready for my phone call! :-)
> 
> (Don't sit by the phone waiting for it to ring - I don't have anything
> 'hot' right now.)

Well, given I can't obtain a US work visa, I doubt I'd be a suitable 
candidate anyway! However, I will tell you something - in my early 
days I worked with some very high level engineers who I learned 
bucket-loads off. Since then, *I've* been the high level engineer 
doling out the buckets and to be honest, I could do with getting back 
under the wing of someone really good. Otherwise, as you mentioned 
below, it's easy to stale because you're not getting challenged in 
your presumptions enough.

Problem, as always, is finding a good wing. They're a rare prize 
nowadays.

> >All this high-level stuff though I must admit is beyond me 
> >a bit. 
> 
> As you know, embedded systems programming is highly technical, and
> presents enough of a challenge that the weak tend to get killed off -
> they end up programming way up at the apps level using something soft
> and gushy like Visual Basic or JavaScript. So the only survivors in
> embedded systems are technically tough enough, at least at the
> programming level.
> 
> Unfortunately most embedded systems programming doesn't also force
> people to be really good at the design level. Most embedded systems
> work is intense at the binary-level, always trying to squeeze 10
> kilograms of stuff in a bag meant to hold only 5 kilograms. I think
> that's especially true in the hand-held environment, but either world
> tends to produce hot-shot programmers who can program their way out of
> most situations, and aren't necessarily great at the high-level stuff.

Actually, those embedded systems are becoming disgustingly powerful - 
that GPS receiver was a 80Mhz 32 bit processor with 16Mb of RAM and 
optionally 64Mb of ROM. On that, you can write your applications in 
Visual Basic and it'll actually go. Of course, that's what Windows CE 
is all about.

Embedded systems programming as we knew it is dying out. When 
desktops went all powerful, a lot of us assembler guys went into tiny 
systems but now they've gone all powerful, it's a rapidly shrinking 
market. The writing is definitely on the wall - move onto OO and C++ 
and such or else become unemployable.

> But you'll make it - I can tell. You've already embraced the key
> elements of good OO design (except for understanding that OO design
> really means the structure of your inheritance relationships, and that
> your algorithms are pluggable/replaceable and end up getting buried in
> derived classes; more on that later).

Aw, thanks for the show of confidence! I think anyone who is flexible 
enough to always be learning how to improve inevitably will succeed. 
As the chinese say, it's all about the reed bending in the wind.

> >Have you noticed the world's most popular programming languages tend
> >to be evolved rather than designed? ;)
> 
> Yes, and I think there's a reason. You might not like my reasoning,
> but it goes like this: Businesses choose programming languages based
> on business issues first and technology issues second. This is not a
> bad thing. In fact, I believe businesses *ought* to worry primarily
> about business issues such as acquiring skilled people and tools. Can
> we hire programmers that already know this language? Are there a glut
> of programmers, or are we going to have to pay enormous salaries,
> signing bonuses, and relocation fees? Are the area universities
> churning out bodies that know this language? Are the programmers any
> good? Is there a ready supply of programmers we can "rent" (AKA
> contractors) so we don't have to hire everyone? Are there outsourcing
> firms we can bring in to finish the work as a contingency plan? Is
> there a ready supply of consultants who can advise us on nuances of
> using this language? Those are examples of the people-questions; next
> come a similar pile of questions about tools, multiple vendors,
> long-term support, development environments, maturity of tools,
> companies who can train our people in using the tools, etc., etc.

Have you heard that someone somewhere has decided to lower the cost 
of IT to industry by farming it out to the third world like 
semiconductors etc? Possibly it's mostly UK, but see 
http://www.contractoruk.co.uk/news040702.html. There'll be an article 
from me in the same periodical soon where I urge specialisation to 
prevent the inevitable massive job losses and heavy reduction in 
earnings.

> And after all these questions are answered, somewhere down on the list
> are things like the relative "cleanliness" of the language. Are the
> constructs orthogonal? Is there appropriate symmetry? Are there
> kludges in the syntax? Those things will effect the cost of the
> software some, to be sure, but they aren't life-and-death issues like
> whether we can buy/rent programmers or whether we can buy/license good
> tools. I have a client that is using (foolishly) a really clean,
> elegant language that almost nobody uses. Most programmers who use
> that language for more than a month absolutely love it. But the
> client can't buy or rent programmers or tools to save its life, and
> its multi-million dollar project is in jeopardy as a result.

What's the language?

> So far all I've said is that most businesses choose programming
> languages based primarily on business considerations, not primarily on
> technical considerations. There are some exceptions (such as the
> company I just mentioned), and perhaps you even experienced one or two
> exceptions, but I think almost anyone would agree that the basic
> premise ("most businesses choose...") is correct. I further assert
> that that is a good thing, and you are free to disagree on that point,
> of course. However I have to believe you agree with me regarding how
> things *are*, even if you disagree with me about how things *ought* to
> be.
> 
> The conclusion of the argument is simple: Go back through the
> business-level questions I mentioned above, and most if not all of the
> answers would be "okay" if the language was an incremental extension
> of some well-known, mature language. That means using an "evolved"
> language lowers business risk, even if it adds technical warts or
> reduces technical elegance. (At least it's *perceived* to lower
> business risk, but business people make decisions based on perception
> rather than reality anyway, so the perception of a reduced business
> risk is a powerful argument in favor of an "evolved" language.)

Actually, I completely agree with everything you've said. Already had 
twigged it to be so!

And a personal component is that I've tried my hand with designed 
languages eg; Haskell or Java. And I found them annoying (although 
Haskell is seriously seriously powerful) for various reasons. My own 
personal thought is that designed languages may technically be 
perfect, but much of writing software is an art more than engineering 
and hence designed languages are often too sterile for my tastes.

You probably won't like me saying this, but half the reason why I 
like C and C++ is because they permit me to be really really stupid 
if I want to. It's a very personal reason, but I think a lot of 
programmers feel the same.

> Yes, C is closer to the machine, since its mantra is "no hidden
> mechanism." C++ *strongly* rejects the no-hidden-mechanism mantra,
> since its goal is ultimately to hide mechanism - to let the programmer
> program in the language of the *problem* rather than in the language
> of the *machine*. The C++ mantra is "pay for it if and only if you
> use it." This means that C++ code can be just as efficient as C code,
> though that is sometimes a challenge, but it also means that C++ code
> can be written and understood at a higher level than C code -- C++
> code can be more expressive -- you can get more done with less effort.
> Of course it is very hard to achieve *both* those benefits (more done
> with less effort, just as efficient as C) in the same piece of code,
> but they are generally achievable given a shift in emphasis from
> programming to design (using my lingo for "design"). In other words,
> OO software should usually put proportionally more effort into design
> than non-OO software, and should have a corresponding reduction in the
> coding effort. If you're careful, you can have dramatic improvements
> in long-term costs, yet keep the short-term costs the same or better
> as non-OO.

That's an interesting point - that as the languages evolve, more time 
proportionally needs to go into design.

> People who don't understand good OO design (my definition, again;
> sorry) tend to screw things up worse with OO than with non-OO, since
> at least with non-OO they don't *try* to achieve so many things at
> once -- they just try to get the thing running correctly and
> efficiently with hopefully a low maintenance cost. In OO, they try to
> use OO design (my defn) in an effort to achieve all those *plus* new
> constraints, such as a dramatic increase in software stability, a
> dramatic reduction in long-term costs, etc. But unfortunately, after
> they spend more time/money on design, they have a mediocre design at
> best, and that mediocre design means they *also* have to pay at least
> as much time/money on the coding stage. They end up with the worst of
> both worlds. Yuck.
> 
> The difference, of course, is how good they are at OO design (using my
> defn).

I would personally say it's about how good they are at *design* full 
stop period. I still hold that it's unimportant whether you use OO or 
not - it's merely one of the tools in the toolbox and its merit of 
use entirely depends on the situation.

> It shouldn't. Try this code and see if it causes any errors:

Actually, I tried:
--- cut ---
class BaseString {
public:
BaseString(const char* s);
BaseString &operator=(const char *);
};

class DerivedString : public BaseString {
public:
DerivedString();
DerivedString(const BaseString &s);
DerivedString(const char* s);
DerivedString &operator=(const char *);
};

int main()
{
DerivedString foo("foofoo") ;
foo = "Hello world";
return 0;
}
--- cut ---

> I think that properly represents the problem as you stated it:
> >>>TQString foo;
> >>>foo="Hello world";
> >>>
> >>>Now TQString is a subclass of QString, and both have const char *
> >>>ctors. The compiler will refuse to compile the above code because
> >>>there are two methods of resolving it. "
> 
> Let me know if the above compiles correctly. (It won't link, of
> course, without an appropriate definition for the various ctors, but
> it ought to compile as-is.)
> 
> If the above *does* compile as-is, let's try to figure out why you
> were frustrated with the behavior of TQString.

Yes, it compiles fine. And no, I'm not sure why it does when TQString 
does especially when I've faithfully replicated the constructor 
hierarchy above.

> >I'm running into similar problems with the << and >> operators - I've
> > subclassed QDataStream with TQDataStream because QDataStream is
> >default big endian and doesn't provide support for 64 bit integers. 
> 
> Again, I'd suggest trying something different than subclassing
> QDataStream, but I don't know enough to know exactly what that should
> be.
> 
> >Every single time I use << or >> I'm an ambiguous resoluton error
> >when clearly the source or destination object is a TQDataStream.

I got it to work by copying every operator<< and >> into my 
TQDataStream, and have it do:
TQDataStream &operator>>(u8 &i) { return 
static_cast<QDataStream>(*this) >> (s8 &) i, *this; }

... for each one. I figure it's not inheriting the << and >> 
operators from QDataStream correctly? Maybe something similar is 
happening to TQString. I'll investigate.

> >Most of my C++ problems are arising from "repairing" Trolltech's
> >code. 
> 
> And, I would suggest, mostly because you are repairing it via
> inheritance.

That's because (a) Trolltech will make their classes like mine in the 
future (b) I don't want my users having to learn new classes and (c) 
there are a fair few times you need to pass my derived classes into 
Qt. I've done this by casting up with either static or dynamic casts. 
This is okay given I've mostly followed the rules you gave before 
about extending functionality and not changing existing 
functionality.

> >> [overloading based on return type]
> >>Another C++ idiom lets you do just that. I'll have to show that one
> >>to you when I have more time. Ask if you're interested.
> >
> >Is that like this:
> >bool node(TQString &dest, u32 idx)
> >bool node(TKNamespaceNodeRef &ref, u32 idx)
> >...
> 
> Nope, I'm talking about actually calling different functions for the
> following cases:
> 
> int i = foo(...);
> char c = foo(...);
> float f = foo(...);
> double d = foo(...);
> String s = foo(...);

Ok, I'm interested now. You can point me at a webpage if one exists.

> >1. Why didn't C++ have separated support for code reuse and subtyping
> > (like Smalltalk)?
> [explanation chopped]
> So if C++ wanted to be like Smalltalk, it could do what you want. But
> given that C++ wants compile-time type-safety, it can't do what you
> want.

I personally would probably have had it use static typing when it 
could, but when the compiler didn't know it would complain unless you 
added a modifier to say it was a dynamic cast - then the check gets 
delayed till run time. As it happens, surely that's happened anyway 
(albeit relatively recently) with dynamic_cast<>().

My point is, it could have been made possible to utilise the best of 
both worlds but with a bias toward static typing.

> >2. Why don't return types determine overload?
> 
> Because things like this would be ambiguous:
> 
> int f();
> float f();
> char f();
> 
> int main()
> {
> f();
> ...
> }

That's easy - if there's an f() returning void, it's the correct one 
to call. If there isn't, it's a compile error - you'd need (char) f() 
or something to say which to call.

> Worse, if the three 'f()' functions were compiled in different
> compilation units on different days of the week, the compiler might
> not even know about the overloads and it not notice that the call is
> ambiguous.

That can happen anyway surely if you're talking different scopes?

> There's an interesting example in Bjarne's "Design and Evolution of
> C++" that shows how type safety would commonly be compromised if C++
> did what you want. Suggest you get that book and read it -- your
> respect for the language and its (seemingly random) decisions will go
> up a few notches.

I read Bjarne's original C++ book and found it nearly impenetrable. 
Of course, that was then and this is now, but he didn't seem to me to 
write in an overly clear style. Quite laden with technogrammar.

> >3. Why can't the compiler derive non-direct copy construction? eg;
> >class A { A(B &); } class B { B(C &}; } class C { C(const char *); }
> >A foo="Hello";
> 
> This was done to eliminate programming errors. The problem is to
> avoid surprising the programmer with bizarre chains of conversions
> that no human would ever think of on his own. For example, if someone
> accidentally typed this code:
> [a good example cut]
> No programmer would think that is intuitively obvious. Put it this
> way: most programmers find the automatic conversion/promotion
> "magical," and are somewhat afraid of them as a result. The idea of
> limiting the number of levels is to put a boundary on how much magic
> is allowed. We don't mind hiding mechanism from the C++ programmer,
> but we don't want to hide so much mechanism that no programmer could
> ever figure out what's going on.

Yeah, mechanism hiding is something anyone who used too many macros 
knows well about. It's virtually impossible to debug those.

You've convinced me on that part. I would say however that it would 
be a lot more possible if the compiler gave you debug info on what it 
had done. In fact, I'd say current compilers could do with a lot 
friendlier method of showing you what they'd done other than looking 
at the disassembly as currently :(

> HOWEVER there is an important conceptual difference as well. (I'll
> use the term "software development" to avoid using either term
> "design" or "implementation.")
> 
> In OO software development, the inheritance hierarchies are more
> fundamental and more foundational than the algorithms or data
> structures. That may seem strange to you, but if so, it's mainly
> because of the *way* you've tended to use inheritance in the past.
> 
> Here's why I say that: start with an OO system that has a base class
> 'Foo'. Base class Foo is abstract. In fact, it is PURE abstract: it
> has no data structures and algorithms -- it is a pure "interface" --
> all its methods are pure virtual. Next we create, over time, 20
> different derived classes, each of which has a different data
> structure and algorithm. The methods have a similar purpose, since
> their contracts are similar, but there is *zero* code reuse between
> these derived classes since all 20 inherit directly from 'Foo'.
> 
> So the algorithms and data structures are wholly replaceable. In
> fact, we intend to use all 20 different data structures and algorithms
> in the same program at the same time (not implying threading issues
> here; "same time" simply means "in the same "run" of the program, all
> 20 classes are used more-or-less continuously).
> 
> In OO systems that smell even remotely like this, the core algorithms
> and data structures are very secondary to the design, and in fact can
> be ignored during early design. During late design, someone will need
> to carefully select the best algorithms and data structures, but
> during early design all that matters is the inheritance structure, the
> method signatures in the base class 'Foo', and the "contracts" for
> those methods. If the contracts are set up right, then all 20 derived
> classes will be able to require-no-more, promise-no-less (AKA "proper
> inheritance"), and the derived classes can totally bury the algorithms
> and data structures.
> 
> It's almost like specifying an API then later implementing it. When
> you're specifying the API, all you care about is that the parameters
> and specifications ("contracts") are correct, complete, and
> consistent. If your API is clean enough, you actually *want* to be
> able to ignore the specific algorithm / data structure that will be
> used behind that API, since if you can bury that information behind
> the API, you know the algorithm / data structure can be scooped out
> and replaced if someone later comes up with a better one. The
> difference is that with OO, we have an API that has 20 different
> implementations and they're all pluggable, meaning the guy who is
> using the API never knows which implementation he's working with. 
> That forces us to "do the right thing" by designing our API (i.e.,
> methods in 'Foo', parameters, and especially contracts) in a way that
> all 20 derived classes are "proper" and that none of the 20 "leak" any
> of their private info (algorithms and data structures) to the user.
> 
> If you're still with me, then inheritance is more foundational than
> algorithms and/or data structures. Your past style of inheritance
> equated inheritance with data structure, after all, inheritance was
> just a way to group two chunks of software together. But now that you
> see the above dynamic-binding-intensive approach, perhaps you see that
> inheritance is an earlier lifecycle decision than algorithm. That's
> why I call the inheritance graph a critical (*the* critical) part of
> design. Get that right and the rest is replaceable.

It's odd you know because having thought about it, I do place 
algorithms I think important to performance in a neutral API 
container so they can be changed later on. For example, in that 
EuroFigher test bench, I had every input and output in the system 
given a unique name which a central device database looked up and 
mapped appropriately (with application of the correct calibration, 
metrification into units etc.). Now that central database did its 
searches using a binary search but I always had thought that if the 
item count exceeded about fifty, it'd be better to move to a hash 
table.

So, it's quite possible I was doing all along what your opinion is 
without realising it.

> >I have applied my skills to many projects: public, private and 
> >personal and I have not found my data-centric approach to have failed
> > yet. It has nothing to do with code maintainability nor much other
> >than efficiency - but that's why I use an impure OO for
> >maintainability - but if you rate superiority of a program based on
> >its excellence in functioning, my approach works very well. I
> >contrast with OO designed projects and quite simply, on average they
> >do not perform as well.
> 
> Re your last sentence, most OO software sucks because most OO
> designers suck.

Heh, that's agreed. However, can you see my point that when a newbie 
designs OO they tend to get it wrong? Hence my point that good OO 
isn't intuitive, and hence my point that there is something wrong 
with OO because a better system would be intuitive ie; complete 
newbie has a good chance of generating a good design?

> One other thing: I re-read what you wrote before and would like to
> separate it into two things. You said,
> 
> >>>I have never agreed with OO design as my university 
> >>>lecturers found out - I quite simply think it's wrong.
> 
> I agree with this part wholeheartedly. University lecturers typically
> don't know the first thing about how to actually get something done
> with OO. They have silly examples, and they tend to teach "purity" as
> if some customer actually cares if the code using their particular
> guidelines, rules, etc. Most of the time their guidelines are wrong,
> and even when the guidelines are right, they are, after all, just
> guidelines. The goal is still defined in business terms, not
> technical terms. E.g., the schedule, the cost, the functionality, the
> user acceptance testing, etc. have nothing to do with purity.
> 
> In fact, I disagree with most university lecturers at a deeper level
> since I really hate the one-size-fits-all approach. Too many
> technologists already know the answer before they hear the question.
> They know that Professor Know It All said we should use this technique
> and that approach, and even though they don't yet know anything about
> the problem, they already know how to solve it! I always find it's
> safer to listen before I speak, to ask before I answer. Having a
> solution that's looking for a problem is dangerous. Lecturers who
> (often implicitly) teach that approach to their students ultimately
> cause their students to have just one bullet in their gun. If the
> problem at hand *happens* to match the style, approach, and guidelines
> that the student believes/uses, everything works great; if not, the
> student will do a mediocre job.

I was very surprised to read you saying this. I had always thought 
most people involved with standardisation tend to come from academia 
and hence tend to have a view that they teach purity - the theory - 
which is completely divorced from commericial and indeed practical 
realities. Hence academia's strong dislike of C and indeed C++. They 
seem to prefer Modula-2 and Java respectively.

A little example is calculation of algorithm time based on operations 
eg; a load, a store etc. That may have worked ten years ago, but 
modern compilers have a frightening ability to rework your code and 
furthermore, in modern processors one load may cost up to 1000 times 
more than another load. Ok, so across the program it tends to average 
out, but nevertheless there is a growing chasm between what's good on 
paper and what's good in reality or, put another way, there is a 
growing chasm between the pure theory taught at universities and the 
realities of software engineering. Most companies will readily say 
universities produce poor quality graduates - indeed, there are 
people receiving first class degrees who *cannot* *program* a 
computer!

> >>>Computers
> >>>don't work naturally with objects - it's an ill-fit.
> >>>
> >>>What computers do do is work with data. If you base your design
> >>>entirely around data, you produce far superior programs. 
> 
> This is the part I was disagreeing about. You can see why, perhaps,
> in the example I gave above (the 'Foo' class with 20 derived classes
> each of which had its own distinct data structure and algorithm).

I'm afraid I don't. In your 20 derived classes, each is in fact its 
own autonomous data processor whose only commonality is that they 
share an API. The API is good for the programmer, but doesn't help 
the data processing one jot.

Hence my view that OO is good for organising source (intuitively it 
produces good source organisation) but poor for program design (ok, 
program algorithms in your terms).

> >>Be careful: you are painting yourself into a very narrow corner. 
> >>You may end up limiting your career as a result.
> >
> >Possibly, but I would doubt it. I may have some unique opinions on
> >this but what the customer cares about is (a) will it work and (b)
> >can we look after it well into the future. My case history strongly
> >supports both of these criteria, so a priori I'm on the right path.
> 
> Well, if you're pretty convinced you're already using the right
> overall approach, then you'll have no reason to question that overall
> approach and that means it will be a lot harder for you to learn new
> overall approaches. To me, the most important area to stay teachable
> is the big stuff. The first step in learning something new
> (especially something new at the paradigm level, i.e., a new way of
> approaching software as opposed to merely a new wrinkle that fits
> neatly within your current overall approach) is to a priori decide you
> might be on the wrong track. But that's up to you, of course.

I wouldn't sit down every night writing replies to you for four hours 
if I were not open to new ideas. You are pretty obviously someone I 
can learn off and indeed already have done so. Merely, what I am 
saying, is that on this one point you have not proved to me that I am 
wrong (yet), nor in fact does it seem I am having much success 
proving you are wrong to you.

> (BTW I learn something new at the paradigm level every year or so. I
> can't take it more often than that, but I really try to emotionally
> rip up *all* my preconceptions and start over every year or so. My
> buddy and I get together every year or so, write a paper to tell the
> world how great we were and what great things we'd done, then sit back
> and say to each other, "That was then, this is now. We're dinosaurs
> but we don't know it yet. The weather has probably already changed
> but we had our heads down and didn't notice. How has the weather
> changed? How do we need to adapt-or-die?" After keeping that up for
> a few minutes, we actually begin to believe it, and pretty soon we're
> looking scared and worried - like we really could miss the train
> that's leaving the station. Ultimately we'd end up reinventing
> ourselves, coming up with a new way of approaching projects,
> stretching, trying out radically new ideas, and very much not thinking
> we're a priori on the right path. It's always be painful and humbling,
> but it's made me who I am today. And who I am today is, of course, not
> good enough any more, so I'll have to change again! :-)

:-)

I would suggest you're being a little aggressive with yourself. If 
you took my approach (which is trying to stay at the entrance to as 
many alleyways as possible rather than go right up one to its 
extremity and then have to backtrack) then (a) no one would know you 
for coming up with new ideas and you'd be a lot less rich and well-
known and (b) possibly, you may be less rigid in your approach.

But no, I appreciate the point and fully agree with it. Only through 
embracing change and the constant opportunites to improve that it 
provides can we become our potential. Your method must be quite 
draining psychologically though.

> >>The point is that these benefits came as result of OO *design*, not
> >>as a result of programming-level issues.
> >
> >I'm sure OO design greatly improved the likely wasp's nest of 
> >spaghetti that existed in there previously. But I'm not seeing how OO
> > design is better than any other approach from this example - there
> >are many methods that could have been employed to achieve the same
> >result.
> 
> Two things:
> 
> 1. If what you said in the last sentence is true, where's the beef? 
> If these other approaches could do the same thing, why didn't they?
> 
> 2. I think you've missed the point I was making. The point was that
> this project used inheritance the way I'm proposing it should be used,
> and that's very different from the "inheritance is for reuse"
> approach. It's not about OO vs. non-OO. It's about how the two
> different styles of OO produce different results.

My point was that there are alternative methods of structuring and 
designing your code that have nothing to do with OO whatsoever. 
Furthermore, I believe what you call OO is in fact a composite of a 
number of different approaches many of which exist absolutely fine 
without having objects nor inheritence nor anything like it.

My fundamental point is that I think that you have integrated many 
beneficial and good programming practices into your internal 
conceptualisation of what OO is and means, and you are having 
difficulty separating them and treating them as what they are. I 
personally prefer to treat these things more seperately as I believe 
it offers me a great selection of tools from the toolbox as it were, 
but it's entirely a personal choice.

> >>>An example: take your typical novice with OO. Tell them the rules
> >>>and
> >>> look at what they design. Invariably, pure OO as designed against
> >>>the rules is as efficient as a one legged dog. 
> >>
> >>The way you have learned OO, yes, it will have performance problems.
> >>But the way I am proposing OO should be done, either it won't have
> >>performance problems at all, or if it does, those problems will be
> >>reparable.
> >
> >ie; You're bending OO to suit real-world needs, 
> 
> Not necessarily. I'm certainly not a believer in purity of any sort,
> and I'm certainly willing to eject *anybody's* definition of purity to
> achieve some greater goal. But I've also used OO *to* *do*
> performance tuning. That is, I've used various OO idioms and design
> approaches as a way to dynamically select the best algorithm for a
> particular task. The OO part of that is the selection, and is also
> the pluggability of new algorithms as they become known, and is also
> the fact that we can mix and match pieces of the problem using one
> algorithm and other pieces using another (as opposed to having one
> function 'f()' that has a single algorithm inside). I'm not saying
> none of this could be done without OO; I am rather saying that the
> performance tuning was, in some cases, within the spirit of what OO is
> all about.

Ah ha! - proof of my point above!

I am definitely thinking that if you and I were asked to sit down and 
design a solution to some software problem, our designs would be 
almost identical. How we would describe our own designs however would 
be completely different ie; we are arguing about perception.

> >which is precisely 
> >what I said experienced OO people do.
> 
> Certainly I'm both willing and have done so. But I don't seem to view
> that like you seem to view it. I don't see it as if something is
> lacking in OO. Rather I see it like I have a toolbox, and one of
> those tools says "OO," and I end up choosing the right combination of
> tools for the job. When I combine OO with non-OO, that doesn't bother
> me or indicate something is wrong with either OO or non-OO.

Right - then you must forgive me, because I had interpreted your 
repeated glowing testamonials of the efficacy of OO as you saying 
it's the best generalised solution to all programming problems.

> In contrast, it feels like what you're saying is, "If you can't do the
> *whole* thing using OO, something is wrong with OO." 

That is what I am saying - but *only* because OO is pushed as the 
currently best-known approach to writing programs. I have absolutely 
no problem with people saying OO is useful so long as it's not made 
out to be better than everything else.

> Here again, you seem to be saying that if OO isn't optimal for 100% of
> the solution, then something's wrong with it. I take the opposite
> tact, mainly because I am *not* a promoter for any given language or
> paradigm. In fact, I would be highly suspicious if someone (including
> you) claimed to have a technique that is optimal for 100% of the
> solution to any given problem, and especially if it was optimal for
> 100% of the solution of 100% of the problems. I simply do not believe
> that there exists any one-size-fits-all techniques, including OO,
> yours, or anybody else's.

What then do you feel is problematic with a data-centric approach? 
Why isn't it a better one-size-fits-all approach? Surely you would 
agree that if you base your design on quantities of data and the 
overheads of the media in which they reside, you naturally and 
intuitively produce a much more efficient design?

> >So, thoughts? I'm particularly interested in what you see as design
> >flaws 
> 
> Please compare and contrast with web-services. Obviously you're not
> married to XML like most web-services are, but they also have a
> concept of components / services through which data flows. Is there
> some similarity? Even at the conceptual level?

Good question.

The difference is in orientation. XML builds on top of the existing 
paradigm using existing software and its structure. Hence, the range 
of data it can process and how it processes it is quite limited 
(despite what its advocates might say).

What I propose goes the other way round - the programming is shaped 
by the needs of the data (rather than the other way round with XML). 
Of course, this needs a complete rewrite of all the software, but 
more on that later.

Fundamentally of course, XML is based around separating content from 
structure in order to achieve data portability. Now this is a 
laudable idea (and also one I think a pipedream) and partly of course 
my idea does the same. However, the fact I use much tinier data 
processors (ie; much finer granularity) and very different way of 
interfacing two formats of data I feel makes my solution far 
superior.

Of course, if they take XML much beyond what's already agreed, then I 
could have a problem on my hands. However, I think the same old 
propriatary data problems will raise their head and will subvert the 
possible potential. In the end, my method is completely compatible 
with XML, so I can always bind in XML facilities.

> [all very valid points about business]
> I'm not trying to discourage you - just trying to ask if you know what
> the pay-back really is. I'm also trying to remind you about how none
> of the front runners in OO survived, and ultimately it took a couple
> of decades before that paradigm took hold.

I completely agree with all these very valid points. But then I 
didn't explain my business model to you, only the technical model. 
The idea is to completely avoid business, because they won't buy it. 
The target for the first two years is actually slashdot readers. Let 
me explain:

Have you ever seen or used something that just impressed you with its 
quality? Have you ever really enjoyed programming for a certain 
language or operating system because it was so well designed?

In other words, I'm targeting the 20% of programmers or so who 
actually like programming and do it outside of work for fun. ie; a 
good majority of slashdot readers.

The runtime is obviously free and the SDK will mostly be free too ie; 
investors can't expect a return for the first two years. This is 
because in fact we're building a software base, without which no new 
paradigm stands a chance in hell.

We start making money when we put in the networking code sometime 
into the third year. This is where we start tying together all the 
advantages of this system and *leveraging* it. You get things like 
distributed computing, guaranteed email, integrated scheduling - all 
the stuff businesses like. More importantly, AFAIK only MS products 
do this kind of thing currently so my product would do it for Linux 
and Macs.

One we have a foothold into company intranets and a growing base of 
already skilled programmers, I think you're beginning to see how I'm 
intending to overcome all that which has killed languages, BeOS and 
plenty more. The biggest and most important is that programming for 
my proposed system should be the stuff of dreams (BeOS nearly got 
there, but in the end it didn't run on Windows - that's what killed 
it).

> >I've had Carl Sassenrath (he did much the OS for the 
> >Commodore Amiga) and Stuart Swales (did much of RISC-OS I mentioned
> >earlier) both agree it's probably right, but both wondered about
> >implementation. I should be especially interested in seeing what a
> >static OO based person things - neither Carl nor Stuart are static
> >code nor OO advocates hugely.
> 
> Please explain what you mean by "static code."

As in between Smalltalk and C++, or Objective C and C++, or even to 
extremes between interpreted and compiled languages. Generally I mean 
that Carl and Stuart from what I've observed tend to shift more 
processing into run-time (or what I call "dynamic code").

> >Furthermore, any advice about soliciting venture capital in Europe
> >would be useful (yes, I know it's like squeezing blood from a stone
> >here) - ever since the indigenous industry withered and died here,
> >it's been very hard to obtain capital for blue-sky projects without
> >the Americans buying them up. 
> 
> It's also hard here. There really has to be a decent business case.
> Honestly, if you're wanting to make money, you're better off using the
> Harvard Business School model: sell it first, *then* figure out how to
> deliver it. Building a better mousetrap and hoping somebody out there
> cares is an extremely risky business. Sometimes it works, but I think
> most of those are historical accidents - they happened to accidentally
> build the right thing at the right time.

I have to agree although the HBS method is unethical. Yeah, I know 
that sounds /so/ idealistic, but I am European and we live with the 
influence of Sartre!

> Seems to me you have two goals: stretch yourself via your idea (and in
> the process learn something new), and build a business that helps you
> keep the lights on, and possibly more. If the latter is your primary
> goal, seriously thinking about starting with a business case or even
> some sales attempts. If the former, then ignore everything else I've
> said and have a wonderful six months -- and may be, just may be,
> you'll hit the jackpot and actually sell the thing when you're done.

It was more that I'm sick and tired of fixing other people's 
grandiose mistakes only for them to show no appreciation and boot me 
out. I also don't much like constantly fighting people once their 
desperation phase (ie; when they're open to any new idea) has passed. 
I'd just prefer to do something that I personally enjoy doing and am 
interested in. If there were any blue-sky research happening in OS 
design here in Europe, then I'd be there. But here, most of the work 
is middleware, databases and web page design. Getting anything 
/interesting/ involves a big pay cut and job insecurity.

Now if I can get this project to pay me enough to live off, I'd 
actually forsake a high cash flow for a while. I'd prefer to be 
insecure and poor under my own management than someone else ;)

> >Anyway, any comments you may like to offer would be greatly 
> >appreciated. You've already earned yourself an acknowledgement in the
> > projects docs for helpful tips and suggestions.
> 
> Thanks for the acknowledgements.

You're welcome!

> Okay, here's another comment for you to think about (no need to
> reply): What specific application area will it be most suited for? 

Anything processing data. A game, for example, would be a very poor 
fit.

> Will it be most suited for embedded systems? handhelds? web servers?

None of those three. In fact, it's likely to require very significant 
overheads - it certainly uses a *lot* of processes and threads plus 
it uses a lot of caching, so memory use will be high.

However, I honestly don't know. I look at COM and I think my solution 
is likely to require less overhead. I won't know until I have working 
code to benchmark. I will say though I have gone to great lengths to 
optimise the system.

> apps servers? client-side programs? thin-client? Similarly, what
> specific industries are you seeing it fit best? Banking, insurance,
> transportation, etc.

Ultimately, I forsee it becoming the future standard for all 
operating systems and it linking every computer on the planet 
together ie; it replacing everything we know today as the de facto 
solution. Until something better comes along to replace it of course.

> May be these questions don't make sense, because may be I'm giving the
> wrong categories. E.g., may be the real question is whether it will
> be developing software development tools, database engines, or other
> "horizontal" apps, as opposed to the various vertical markets. 
> Whatever the correct categorization is, think about whether you know
> who would use it and for what (and please forgive me if I missed it
> above; it's very late). Ultimately if you have a reasonably crisp
> sense of who would buy it, you will be better able to target those
> programmers.

As mentioned above, it all starts with the programmers. No business 
will touch a technology without a hiring base - this is why MS has 
all those training courses.

> Programmers nowadays want highly customized tools. You
> mentioned GUI builders earlier. I personally despise those things,
> since they have dumbed down programming and turned it into something
> more akin to brick-laying, but I recognize that programmers nowadays
> really crave those easy-to-use tools. Many programmers nowadays don't
> seem to like writing code. They don't mind dragging the mouse on the
> screen, and clicking on a pull-down list of options, but many of them
> really don't want to actually type in code. I'm the opposite. Give
> me an Emacs and a command-line compiler, then stay out of my way. 
> Which really means I no longer "fit in" anywhere. Oh well, that's
> life I guess.

I'm writing this project using MSVC simply because it has good 
multithreading debug facilities whereas Unix seriously does not. I 
too don't much care for those graphical builder things, but you're 
right - I do know some managers who have learned how to throw stuff 
together using VBA which is of course built into every MS Office. 
Have you seen the development interface? It's hidden under the tools 
menu, but it *is* a free copy of VisualBasic. Almost identical in 
every facet. Why people buy VB nowadays is beyond me.

One point is the universities again - they teach you to use graphical 
tools which means you don't learn how it all works under the bonnet. 
Of course, a lot of industry megabucks goes in there to ensure that 
that will only get worse.

> I wandered - let me get back on track. The point was that programmers
> nowadays don't want generic tools that can be customized for their
> industry. They want specific tools that have already been customized
> for their industry. They want tools that are brain-dead easy to use,
> tools that don't require them to write much code. Will your vision be
> able to serve those sorts of programmers? If so, is the plan to do
> the customization with the VC money?

Initially, we use existing tools much as NT did first days with GCC. 
Then we slowly (and I mean after three to four years) add our own 
customised tools (eg; functional language). I think given our target 
market, that'll work ok.

> My not-so-hidden-agenda here is to help you begin to think like a
> "packaged software" company. Unlike custom software companies and
> end-user shops that build apps based on a spec from a well-defined
> "customer," the packaged-software companies spend *enormous* amounts
> of time and money doing market research before they write anything. 
> Here again it's the Harvard B-school approach: they're figuring out
> what will sell, then they build that. So think about that: what will
> sell? is there any latent frustration out there that would spur
> companies into changing their toolset and programmers? what is that
> frustration?

I'm thinking more why would a programmer choose to use my work. What 
would attract them to it, and keep them with it, and most importantly 
generate lots of freeware for it in their spare time. No code base = 
no killer app = dead product.

> viewed as adding any business value to the company." In other words,
> I'm just not seeing a great groundswell of frustration from technical
> people and/or first-line managers who are moaning in unison, "Our
> tools and/or languages suck; if we only had better tools and/or
> languages, life would be wonderful." I don't see why a new tool and a
> new language (and *especially* a new language within a new paradigm)
> would be viewed by the typical manager as a cost-saving measure. Like
> I said earlier, I'm not trying to throw a wet towel on your idea; more
> just being honest about what I do and don't see in the marketplace
> today.)

People in the western world often don't recognise how much perception 
influences their thoughts. The reason people don't think "why the 
hell won't windows let me do X" is the same reason why people don't 
think "I should live a healthy lifestyle because prevention is better 
than cure". It's a conceptual problem rooted in western culture, and 
particularly may I add in the US ie; an overly narrow perception of 
the causes of a problem or the role of an item - or, you could say 
over-reductionist and not enough considering of systemic problems 
(ie; our problem is not because MS haven't implemented feature X, 
it's because fundamentally Windows is not designed to do features 
like X).

I read an article once in Wired about how the Europeans are 
constantly going off and "reinventing the wheel" as they put it. I 
would instead say that a dissatisfaction with existing wheels meant 
we tried to improve on them by radically redesigning from the ground 
up rather than attempting to evolve existing ideas.

There's plenty plenty more on this problem of perception and asking 
the right questions by Fritjof Capra and others. The point is, I've 
deliberately designed my business model around the fact that you're 
completely right, no one will consider a radical departure from the 
norm until they have it sitting in front of them, demonstrating its 
vast superiority.

Of course, Microsoft will inevitably either try to buy the new idea 
out or generate a competing product. I'll cross that bridge if I come 
to it :)

Cheers,
Niall




From: "Marshall Cline" <xxxxx@xxxxxxxxx.xxx>
To: "'Niall Douglas'" <xxx@xxxxxxx.xxx>
Subject: RE: Comments on your C++ FAQ
Date: Thu, 1 Aug 2002 03:29:42 -0500

Niall Douglas wrote:
>On 30 Jul 2002 at 2:14, Marshall Cline wrote:
>
>>>Not being able to obtain these books easily (I live in Spain plus
>>>money is somewhat tight right now), I looked around the web for more
>>>on this. I specifically found what not to do when inheriting plus how
>>> deep subclassing usually results in code coupling increasing. Is
>>>that the general gist?
>>
>>That's a start. But coupling between derived and base class is a
>>relatively smaller problem than what I'm talking about. Typically
>>deep hierarchies end up requiring a lot of dynamic type-checking,
>>which boils down to an expensive style of coding, e.g., "if the class
>>of the object is derived from X, down-cast to X& and call method f();
>>else if it's derived from Y, down-cast to Y& and call g(); else if
>>...<etc>..." This happens when new public methods get added in a
>>derived class, which is rather common in deep hierarchies. The
>>if/else if/else if/else style of programming kills the flexibility and
>>extensibility we want to achieve, since when someone creates a new
>>derived class, they logically need to go through all those
>>if/else-if's and add another else-if. If they forget one, the program
>>goes into the "else" case, which is usually some sort of an error
>>message. I call that else-if-heimer's disease (pronounced like
>>"Alzheimer's" with emphasis on the word "forget").
>
>When you say dynamic type checking, you mean using typeid()? 

Perhaps, but there are lots of other ways to achieve it. There are two
general forms: the first is called a "capability query," where a
function 'f(Base* p)' asks the object at '*p' if it is capable of
performing some method 'm()'. The second is more like
'dynamic_cast<Derived*>(p)', where 'f(Base* p)' asks the object at '*p'
if its class is derived from 'Derived'.

In either case, the code ends up doing an if-then-else, and that
if-then-else is what causes the problems, since when someone creates a
new derived class, say 'Derived2', all those functions like 'f(Base* p)'
end up needing to get changed: to add another 'else if'.

>I had 
>thought that was to be avoided

Yes, that was my point earlier: deep hierarchies tend to result in
dynamic type checking, and that causes other problems.

>- in fact, any case where a base class 
>needs to know about its subclasses indicates you've got the 
>inheritance the wrong way round?

True.

However what I'm talking about is something different. I really need to
express the core idea more clearly. The idea of extensibility is
usually achieved by structuring your code so 99% (ideally 100%) of your
system is ignorant of the various derived classes. In particular,
pretend 'ABC' is an abstract base class with pure virtual methods but no
data and no code, and pretend 'Der1', 'Der2', ..., 'Der20' are concrete
derived classes that inherit directly from 'ABC'. The goal is for 99%
of the system to NEVER use the token Der1, Der2, ..., Der20 --- to pass
all these objects as 'f(ABC* p)' or 'g(ABC& x)'.

If the vast majority of the system, say 99% or 100% of the code is
ignorant about the names Der1, Der2, etc., then that 99% or 100% of the
system will be stable / unchanged if someone creates Der21 or Der22.

Now consider the case where one of the classes, say 'Der23', needs to
add a new (non-inherited) public method. Obviously the only way for
anyone to call that public method is to know the name 'Der23', since
they need to have parameters and/or locals of type 'Der23' or 'Der23*'
or 'Der23&'. If only a tiny part of the system, say 1% or less, knows
the name Der23, that's not too bad; but if a large portion of the system
knows that name (in order to call the "new public method"), then things
are starting to fall apart, since pretty soon someone will add 'Der24'
and add yet another "new public method" and will need to modify a large
portion of the system.

So "new public methods" cause problems when you use the above structure,
and deep hierarchies often result in derived classes defining new public
methods.

There are other problems with deep hierarchies, most especially the
reality that they often result in "improper inheritance," and that
causes unexpected violations of the base class's contract which
ultimately breaks the code of "the vast majority" of the system. This
proper-inheritance notion is the same as require-no-more,
promise-no-less, which you basically didn't like :-(

Big picture: Let's start over and ask, In addition to meeting the
requirements, what are we trying to achieve? Instead of saying
something vague like "reduce maintenance cost," I'll try to codify that
in terms of software stability: we want the bulk of the changes or
extensions to *not* require changes to the bulk of the system. A cutesy
way to say this is to eliminate the ripple effect. The point is to try
to build stuff in such a way that the bulk (say 99%) of the system is
stable when changes or extensions are made. The above is a partial
solution to this problem.


>>[stuff about bad class design deleted]
>>However since we used inheritance, we're pretty much stuck with
>>HashTable forever. The reason is that inheritance is a very "public"
>>thing -- it tells the world what the derived class *is*. In
>>particular, users throughout our million-line-of-code system are
>>passing Bags as HashTables, e.g., converting a Bag* to HashTable* or
>>Bag& to HashTable&. All these conversions will break if we change the
>>inheritance structure of Bag, meaning the ripple effect is much
>>higher.
>
>One of the things I look for when designing my class inheritances is 
>whether I could say, chop out one of the base classes and plug in a 
>similar but different one.

Interesting. That might be an artifact of the "other" approach to
inheritance, since that's almost exactly the opposite of what happens
with my code. Generally I design things so the base class is forever.
There is no benefit to unplugging it, and in fact it is extremely
unusual for it to be unpluggable since it codifies both the signatures
*and* the contracts that the various derived classes must abide by, and
plugging in another base class would almost always change those in some
way. But again, I emphasize that I never set up unpluggable base
classes *because* of the overall structure of my code, in particular the
structure where 99% of the system uses base-class pointers/references,
and that 99% is ignorant of any derived classes.

BTW another difference between the structure I'm talking about and the
"inheritance is for reuse" approach and/or the tall hierarchies approach
is the question, "Where is the code we're trying to reuse?" With
"inheritance is for reuse," the code we're trying to reuse is in the
base class, and it is typically the same in the tall hierarchies
approach. With the approach I'm talking about, the code we try to reuse
(that is, the code we want to *not* change) is the caller -- the 99% of
the system that is ignorant of the derived classes.

Said another way, the derived classes become application-specific
plug-ins, and the 99% can be thought of as glue code that holds them
together. Another interesting aspect of this is the call-graph
direction: it's the Hollywood model: "don't call us - we'll call you."
That's because, although the derived classes might occasionally call
some method in the 99%, the dominant call direction by far is for the
99% to call methods in the derived classes.

These last two paragraphs aren't describing a new model, but are trying
to give a couple of insights about the model I've been describing all
along.


>I'm just trying to think where I learned 
>that lesson - I think it was that cellular automata game I wrote to 
>refresh my C++ (http://www.nedprod.com/programs/Win32/Flow/) and I 
>realised it's best when the base class knows nothing at all about its 
>subclasses except that it has some and furthermore than subclasses 
>know as little as possible about what they inherit off (ie; they know 
>the inherited API obviously, but as little as possible about what 
>/provides/ the API). I think, if memory serves, it had to do with the 
>tools in the game. Of course, officially this is called reducing 
>coupling.
>
>>A derived class's methods are allowed to weaken requirements
>>(preconditions) and/or strengthen promises (postconditions), but never
>>the other way around. In other words, you are free to override a
>>method from a base class provided your override requires no more and
>>promises no less than is required/promised by the method in the base
>>class. If an override logically strengthens a
>>requirement/precondition, or if it logically weakens a promise, it is
>>"improper inheritance" and it will cause problems. In particular, it
>>will break user code, meaning it will break some portion of our
>>million-line app. Yuck.
>>
>>The problem with Set inheriting from Bag is Set weakens the
>>postcondition/promise of insert(Item). Bag::insert() promises that
>>size() *will* increase (i.e., the Item *will* get inserted), but
>>Set::insert() promises something weaker: size() *might* increase,
>>depending on whether contains(Item) returns true or false. Remember:
>>it's perfectly normal and acceptable to weaken a
>>precondition/requirement, but it is dastardly evil to strengthen a
>>postcondition/promise.
>
>This is quite ephemeral and subtle stuff. Correct application appears 
>to require considering a lot of variables.

I probably didn't describe it well since it's actually quite simple. In
fact, one of my concerns with most software is that it's not soft, and
in particular it has a large ripple effect from most any change. This
means a programmer has to understand how all the pieces fit together in
order to make most any change. In other words, the whole is bigger than
the sum of the parts.

The idea I described above means that the whole is *merely* the sum of
the parts. In other words, an average (read stupid) programmer can look
at a single derived class and its base class, and can, with total and
complete ignorance of how the rest of the system is structured, decide
unequivocally whether the derived class will break any existing code
that uses the base class. To use another metaphor, he can be embedded
in a huge forest, but he can know whether something new will break any
old thing by examining just one other leaf.

I rather like that idea since I have found the average programmer is,
well, average, and is typically unable or unwilling to understand the
whole. That means they screw up systems where the whole is bigger than
the sum of the parts -- they screw up systems where they must understand
a whole bunch of code in order to fully know whether a change will break
anything else.

I call this the "middle of the bell curve" problem. The idea is that
every company has a bell curve, and in the middle you've got this big
pile of average people, and unfortunately these average people can't
handle systems where the whole is bigger than the sum of the parts.
That means we end up relying on the hot-shots whenever we need to change
anything, and all the average guys sit around sucking their thumbs. I
think that's bad. It's bad for the hot-shots, since they can never do
anything interesting - they spend all their time fighting fires; it's
bad for the average people since they're under utilized; and it's bad
for the business since they're constantly in terror mode.

If we don't solve the middle-of-the-bell-curve problem, why have we
bothered with all these fancy tools, paradigms, etc.? In other words,
the hot-shots could *always* walk on water and do amazing things with
code, and they don't *need* OO or block-structured or GUI builders or
any other fancy thing. So if we don't solve the
middle-of-the-bell-curve problem, we might as well not have bothered
with all this OO stuff (or you could substitute "data centric stuff"),
since we really haven't changed anything: with or without our fancy
languages/techniques, the hot-shots can handle the complexity and the
average guys suck their thumbs.

Finding a way to utilize the middle of the bell curve is, in my mind, a
bulls eye for the industry as a whole. And I think the above approach
is a partial solution.


>>Please don't assume the solution is to make insert(Item) non-virtual.
>>That would be jumping from the frying pan into the fire, since then
>>Bag::insert() would get called on a Set object, and there actually
>>could be 2 or 3 or more copies of the same Item inside a Set object!! 
>>No, the real problem here isn't the override and it isn't the
>>virtualness of the method. The real problem here is that the
>>*semantics* of Set are not "substitutable for" those of Bag.
>
>This is quite a similar point to what you made two replies ago or so. 
>I'm not sure of the distinction between this example and the one 
>which you explained to me a few emails ago - ie; this example is 
>supposed to prove deep inheritance trees are evil but yet it would 
>seem you are proving the same point as before regarding bad 
>inheritance.

There were two new aspects here: an illustration of a deep hierarchy
that produced (as is typical) bad inheritance, and an exact codification
of exactly what I mean by "bad inheritance."


>Or are you saying that the deeper the tree, the much greater chance 
>it's a symptom of bad design?

Yes. And I'm also saying that there is a little algorithm that can
always determine if your inheritance is proper or not. It's the
require-no-more, promise-no-less test. It's equivalent to
substitutability, and ultimately it decides whether the derived class
will break any of the 99% of the user-code that uses base-class
pointers/references. (Of course the whole proper-inheritance thing only
has bite if you have base pointers/references referring to derived
objects.)


>>As before, aggregation would be perfectly safe and reasonable here:
>>Dictionary could have-a Set, could insert Association objects (which
>>would automatically be up-casted to Item&), and when it
>>accessed/removed those Items, Dictionary could down-cast them back to
>>Association&. The latter down-cast is ugly, but at least it is
>>logically safe -- Dictionary *knows* those Items actually are
>>Associations, since no other object anywhere can insert anything into
>>the Set.
>>
>>The message here is NOT that overrides are bad. The message here is
>>that tall hierarchies, particularly those built on the "inheritance is
>>for reuse" mantra, tend to result in improper inheritance, and
>>improper inheritance increases time, money, and risk, as well as
>>(sometimes) degrading performance.
>
>So, let me sum up: inheritance trees should be more horizontal than 
>vertical because in statically typed languages, that tends to be the 
>better design? Horizontal=composition, vertical=subclassing.

Yes, so long as you emphasize "tends." I emphasize "tends" and
deemphasize "should" because tall hierarchies are a yellow flag, not a
red flag. Improper inheritance is the red flag; tall hierarchies often
(not always) result in improper inheritance. The other yellow flag is
inheritance from a concrete class: that can cause performance problems
(see yesterday's description of Bag from HashTable), plus it can result
in accidental slicing via both the base class's copy ctor and assignment
operator (described a few days ago).


>No, I've got what you mean and I understand why. However, the point 
>is not different to what I understood a few days ago although I must 
>admit, Better = Horizontal > Vertical is a much easier rule of thumb 
>than all these past few days of discussion. You can use that rule in 
>your next book if you want :)

I've seen hierarchies with up to five levels, although all but the very
last were abstract base classes with almost no data or code. So again,
to me the real culprit is improper inheritance and/or inheritance from a
data structure. The former breaks user code and the later can cause
performance problems (creates a ripple effect when we try to improve
performance by changing to a different data structure).


>>* If Base::f() says it never throws an exception, the derived class
>>must never throw any exception of any type.
>
>That's an interesting one. I understand why already, I can infer it 
>from above. However, if the documentation of my subclass says it can 
>throw an exception and we're working with a framework which 
>exclusively uses my subclasses, 

(this is a structure I try to avoid; see above.)

>then all framework code will 
>ultimately have my code calling it ie; bottom of the call stack will 
>always be my code. Hence, in this situation, it is surely alright to 
>throw that exception?

Yes, because the 99% now knows about the derived class. Again, I try to
avoid this structure, but the big-picture goal is to keep the 99% stable
in the face of changes.


>I say this because my data streams project can throw a TException in 
>any code at any point in time (and there are explicit and loud 
>warnings in the documentation to this regard). Of course, one runs 
>into problems if the base class when calling its virtual methods does 
>not account for the possibility of an exception.

Bingo.


>>>Hence, that TSortedList should now derive off QGList which doesn't
>>>have the append and prepend methods so I can safely ensure it does
>>>what its parent does.
>>
>>What you really ought to do is check the *semantics* of QGList's
>>methods, in particular, read the preconditions and postconditions for
>>those methods. (I put these in the .h file for easy access, then use
>>a tool to copy them into HTML files; Qt seems to put them in separate
>>documentation files; either way is fine as long as they exist
>>somewhere.) Inheritance is an option if and only if *every* method of
>>TSortedList can abide by the corresponding preconditions and
>>postconditions in QGList.
>
>Actually, to tell you the truth, I had already looked through QList 
>and QGList to ensure my altering of TSortedList wouldn't cause 
>problems - those disabled methods weren't called internally within 
>QGList, so I was fairly safe the list would always remain sorted.

Even though QList and QGList don't call the disabled methods, you're not
as safe as you think. Pretend for the moment you're part of a larger
team of programmers. Someone out there could innocently pass a
TSortedList as a QList& or QList*, and then the called function could
(via the QList& or QList*) access the removed methods and screw things
up. For example, they could cause the list's contents to become
unsorted, and that could screw up TSortedList's binary search
algorithms. The only way to insulate yourself from this is to use has-a
or private/protected inheritance.


>Actually, those embedded systems are becoming disgustingly powerful - 
>that GPS receiver was a 80Mhz 32 bit processor with 16Mb of RAM and 
>optionally 64Mb of ROM. On that, you can write your applications in 
>Visual Basic and it'll actually go. Of course, that's what Windows CE 
>is all about.
>
>Embedded systems programming as we knew it is dying out. When 
>desktops went all powerful, a lot of us assembler guys went into tiny 
>systems but now they've gone all powerful, it's a rapidly shrinking 
>market. The writing is definitely on the wall - move onto OO and C++ 
>and such or else become unemployable.

You may have already worked with hand-held systems, but if not, they
might be the last bastion of tight, high-tech coding. Particularly
hand-held systems targeted at the consumer market, since that usually
means the company wants to squeeze the unit cost and extend the battery
life. In those cases, they worry about everything. Wasting memory
means the thing needs more RAM or flash, and that increases unit cost
and reduces battery life. Similarly wasting CPU cycles burns the
battery up pretty fast. So in the end they want it very small and very
fast, and that makes it challenging/fun.


>>And after all these questions are answered, somewhere down on the list
>>are things like the relative "cleanliness" of the language. Are the
>>constructs orthogonal? Is there appropriate symmetry? Are there
>>kludges in the syntax? Those things will effect the cost of the
>>software some, to be sure, but they aren't life-and-death issues like
>>whether we can buy/rent programmers or whether we can buy/license good
>>tools. I have a client that is using (foolishly) a really clean,
>>elegant language that almost nobody uses. Most programmers who use
>>that language for more than a month absolutely love it. But the
>>client can't buy or rent programmers or tools to save its life, and
>>its multi-million dollar project is in jeopardy as a result.
>
>What's the language?

Limbo. It's hosted within Inferno, an OS that was originally by Lucent,
but was sold to a UK company named Vita Nuova.

Limbo was designed by Dennis Ritchie and some other really smart folks
(BTW I had a chance to talk to Dennis on the phone as a result of this
engagement), and everyone involved gives it glowing reviews. But like I
said, my client is having a hard time finding people to work with it
since there simply aren't that many Limbo programmers out there.

Somewhat interesting approach. It's hosted via a virtual machine, and
it's compiled into a byte-code of sorts, but it's very different from
the stack-machine approach used by Java. It's much closer to a
register-machine, so the source code "a = b + c" compiles into one
instruction (pretend they're all of type 'int', which uses the 'w'
suffix for 'word'):

addw b, c, a // adds b+c, storing result into a

The corresponding Java instructions would be something like this:

iload b // pushes b
iload c // pushes c
iadd // pops c then b, adds, pushes the sum
istore a // pops the sum, stores into a

There are two benefits to the Limbo byte-code scheme: it tends to be
more compact, on average, and it's much closer to the underlying
hardware instructions so a JIT compiler is much smaller, faster, uses
less memory, and is easier to write. E.g., a Java JIT has to convert
all these stack instructions to a typical machine-code add, and that
transformation has to happen on the fly, whereas Limbo does most of that
transformation at compile-time.

It also has some very interesting data transmission techniques, based on
CAR Hoare's CSP (communicating sequential processes) model. By tying
their synchronization and data transmission technique to CSP, they
instantly know all sorts of facts that academics have proven about their
language/environment. For example, Hoare showed that CSP can be used to
build any other synchronization primitive (such as semaphores, or
Java-like monitors, or anything else), and a whole bunch of academics
created all sorts of reader/writer scenarios that can be exploited by
the language.

The Limbo implementation of CSP is via channels. You create a channel
in your code, then one thread reads from the channel and another thread
writes to it. Channels aren't normally tied to files, but they can be.
Pretty slick stuff, and somewhat related to what you're doing. For
example, you can have a channel of 'int', or a channel of 'Xyz' where
the latter is a struct that contains all sorts of stuff, or anything in
between.

The language takes some getting used to, primarily because it, unlike C
or C++, has *no* back door to let you do things that are nasty. E.g.,
there are no unchecked pointer casts, there is nothing corresponding to
a 'void*' type, function pointers have to specify all parameters
exactly, there is no is-a conversion or any other way to get a Foo
pointer to point at a Bar object, etc. Obviously the byte-code level
(called "Dis") lets you do these things, but Limbo itself tries to
protect idiots from being idiotic. (As you might guess, I had to write
some Dis code for some things. That was fine, of course, but it was
somewhat bizarre seeing that Limbo offered no alternative.)

You might want to read their articles about CSP. See www.vitanuova.com.
Also check out Lucent's web site.


>>Yes, C is closer to the machine, since its mantra is "no hidden
>>mechanism." C++ *strongly* rejects the no-hidden-mechanism mantra,
>>since its goal is ultimately to hide mechanism - to let the programmer
>>program in the language of the *problem* rather than in the language
>>of the *machine*. The C++ mantra is "pay for it if and only if you
>>use it." This means that C++ code can be just as efficient as C code,
>>though that is sometimes a challenge, but it also means that C++ code
>>can be written and understood at a higher level than C code -- C++
>>code can be more expressive -- you can get more done with less effort.
>> Of course it is very hard to achieve *both* those benefits (more done
>>with less effort, just as efficient as C) in the same piece of code,
>>but they are generally achievable given a shift in emphasis from
>>programming to design (using my lingo for "design"). In other words,
>>OO software should usually put proportionally more effort into design
>>than non-OO software, and should have a corresponding reduction in the
>>coding effort. If you're careful, you can have dramatic improvements
>>in long-term costs, yet keep the short-term costs the same or better
>>as non-OO.
>
>That's an interesting point - that as the languages evolve, more time 
>proportionally needs to go into design.

Not sure if it's the evolution of the language. I think it's the added
goals of a typical OO project. A typical OO project has all the
functional and non-functional goals of a typical non-OO project, but the
OO project often gets additional non-functional goals. For example, an
OO project is somehow supposed to be easier to adapt, generate more
reuse, lower maintenance cost, that sort of thing. So I think the added
design-time comes because we end up trying to do more things at once --
it's harder to achieve 3 goals than 1.

I think the extra design would have achieved at least some of these
things in non-OO languages as well, but I also believe (perhaps unlike
you) that an OO language makes it easier. Here's why I believe that: I
can achieve everything in straight C that I can in C++, since I can
always manually simulate a virtual-pointer / virtual-table mechanism,
hand-mangle function names to simulate function or operator overloading,
manually create the state transition tables used by exception handling,
etc. However there would be a lot of grunt bookkeeping code, and that
would eventually cloud the ultimate goal. Even if I chose a simplified
set of features to simulate, it would still add chaff to the code and
that would make it harder to work with. One real-world example of this
is X (AKA the X Windows System). X was written in C using OO design,
and they had all sorts of macros to handle inheritance and all the rest,
but eventually they lost a grip on the system because it had too much
bookkeeping crap.

My point in all of this is simply this: if you have only one goal, you
don't need much design time at all. If you have two goals, you need a
little more design time. If you have 20 goals you want to achieve at
the same time, you probably need a lot of design time (no matter what
language you are using). I think that might be the reason typical OO
projects have proportionally more design-time and less code-time.


>>People who don't understand good OO design (my definition, again;
>>sorry) tend to screw things up worse with OO than with non-OO, since
>>at least with non-OO they don't *try* to achieve so many things at
>>once -- they just try to get the thing running correctly and
>>efficiently with hopefully a low maintenance cost. In OO, they try to
>>use OO design (my defn) in an effort to achieve all those *plus* new
>>constraints, such as a dramatic increase in software stability, a
>>dramatic reduction in long-term costs, etc. But unfortunately, after
>>they spend more time/money on design, they have a mediocre design at
>>best, and that mediocre design means they *also* have to pay at least
>>as much time/money on the coding stage. They end up with the worst of
>>both worlds. Yuck.
>>
>>The difference, of course, is how good they are at OO design (using my
>>defn).
>
>I would personally say it's about how good they are at *design* full 
>stop period. I still hold that it's unimportant whether you use OO or 
>not - it's merely one of the tools in the toolbox and its merit of 
>use entirely depends on the situation.

I think you just weakened your argument about OO being non-intuitive.
Below you said:

>... can you see my point that when a newbie 
>designs OO they tend to get it wrong? Hence my point that good OO 
>isn't intuitive, and hence my point that there is something wrong 
>with OO because a better system would be intuitive ie; complete 
>newbie has a good chance of generating a good design?

The argument you're making seems reasonable: when a newbie designs OO he
gets it wrong, therefore good OO isn't intuitive, therefore there is
something wrong with OO.

However, once you admit that these people are no good at design full
stop period, the two "therefore"s go away -- the fact that they're no
good at design full stop period provides an alternative explanation for
why they get OO designs wrong.


>>It shouldn't. Try this code and see if it causes any errors:
>
>Actually, I tried:
>--- cut ---
>class BaseString {
>public:
> BaseString(const char* s);
> BaseString &operator=(const char *);
>};
>
>class DerivedString : public BaseString {
>public:
> DerivedString();
> DerivedString(const BaseString &s);
> DerivedString(const char* s);
> DerivedString &operator=(const char *);
>};
>
>int main()
>{
> DerivedString foo("foofoo") ;
> foo = "Hello world";
> return 0;
>}
>--- cut ---
>
>>I think that properly represents the problem as you stated it:
>> >>>TQString foo;
>> >>>foo="Hello world";
>> >>>
>> >>>Now TQString is a subclass of QString, and both have const char *
>> >>>ctors. The compiler will refuse to compile the above code because
>> >>>there are two methods of resolving it. "
>>
>>Let me know if the above compiles correctly. (It won't link, of
>>course, without an appropriate definition for the various ctors, but
>>it ought to compile as-is.)
>>
>>If the above *does* compile as-is, let's try to figure out why you
>>were frustrated with the behavior of TQString.
>
>Yes, it compiles fine. And no, I'm not sure why it does when TQString 
>does especially when I've faithfully replicated the constructor 
>hierarchy above.

Are any of the ctors in either TQString or QString listed as "explicit"?

Are there explicit copy ctors in either/both?

Note that your 'DerivedString' has a ctor that takes a (const
BaseString&), which is not a copy ctor. Is that intentional?

It seems very strange to me that QString would have an operator= that
takes a (const char*), but not one that takes a (const QString&). If it
really takes both, you might want to add them both.

Basically I'm curious and frustrated that I don't understand this one.
If you're willing, keep adding signatures from QString/TQString to
BaseString/DerivedString until the latter breaks. I'd be thrilled if
you can chase this one down, but I'll obviously understand if you can't.
(I *hate* irrational errors, because I'm always afraid I've missed
something else. Like your "template<class type>" bug, adding a pointer
cast made the error go away, but I don't think either of us were
comfortable until we found the real culprit.)


>>>> [overloading based on return type]
>>>>Another C++ idiom lets you do just that. I'll have to show that one
>>>>to you when I have more time. Ask if you're interested.
>>>
>>>Is that like this:
>>>bool node(TQString &dest, u32 idx)
>>>bool node(TKNamespaceNodeRef &ref, u32 idx)
>>>...
>>
>>Nope, I'm talking about actually calling different functions for the
>>following cases:
>>
>> int i = foo(...);
>> char c = foo(...);
>> float f = foo(...);
>> double d = foo(...);
>> String s = foo(...);
>
>Ok, I'm interested now. You can point me at a webpage if one exists.

No prob. To make sure we're on the same page, let's be explicit that
all the 'foo()' functions take the same parameter list, say an 'int' and
a 'double', so the only difference is their return types. I'll first
rewrite the "user code" using these parameters:

void sample(int a, double b)
{
int i = foo(a, b);
char c = foo(a, b);
float f = foo(a, b);
double d = foo(a, b);
String s = foo(a, b);
}

The rules of the game are simple: if we can get a totally separate
function to get called for each line above, we win.

The solution is trivial:

class foo {
public:
foo(int a, double b) : a_(a), b_(b) { }
operator int() const { ... }
operator char() const { ... }
operator float() const { ... }
operator double() const { ... }
operator String() const { ... }
private:
int a_;
double b_;
};

QED


>>>1. Why didn't C++ have separated support for code reuse and subtyping
>>> (like Smalltalk)?
>>[explanation chopped]
>>So if C++ wanted to be like Smalltalk, it could do what you want. But
>>given that C++ wants compile-time type-safety, it can't do what you
>>want.
>
>I personally would probably have had it use static typing when it 
>could, but when the compiler didn't know it would complain unless you 
>added a modifier to say it was a dynamic cast - then the check gets 
>delayed till run time. As it happens, surely that's happened anyway 
>(albeit relatively recently) with dynamic_cast<>().
>
>My point is, it could have been made possible to utilise the best of 
>both worlds but with a bias toward static typing.

I think your goal is admirable. However if you think a little deeper
about how this would actually get implemented, you would see it would
cause C++ to run much slower than the worst Smalltalk implementation,
and to generate huge piles of code for even trivial functions. E.g.,
consider:

void foo(QString& a, QString& b)
{
a = "xyz" + b;
}

Pretend QString's has a typical 'operator+' that is a non-member
function (possibly a 'friend' of QString). It needs to be a non-member
function to make the above legal. Pretend the signature of this
'operator+' is typical:

QString operator+ (const QString& x, const QString& y);

Thus the 'foo()' function simply promotes "xyz" to QString (via a
QString ctor), calls the operator+ function, uses QString's assignment
operator to copy the result, then destructs the temporary QString.

However if your relaxed rules above let someone pass things that are not
a QString (or one of its derived classes) for 'a' and/or 'b', things are
much worse. (And, unfortunately, if your relaxed rules do not allow
this, then I don't think you're getting much if any advantage to your
relaxed rules.)

In particular, if 'a' and/or 'b' might not be QString objects, the
compiler would need to generate code that checked, at run-time, if there
exists any 'operator+' that can take a 'char*' and whatever is the type
of 'a' (which it won't know until run-time). Not finding one, it would
search for valid pointer conversions on the left, e.g., 'const char*',
'void*', 'const void*'. Not finding any of those, it would also search
for any 'operator+' that takes the type of 'b' on the right. Finally,
if we assume 'b' actually is a QString, it would find a match since it
could promote the type of 'b' from 'QString&' to 'const QString&'
(that's called a cv-conversion).

However it's not done yet. To make the only candidate 'operator+' work,
it has to try to convert the left-hand parameter from 'char*' to
whatever is on the left-side of the 'operator+' (which it would discover
at run-time to be 'const QString&'). Eventually it will discover this
can be done in three distinct steps: promote the 'char*' to 'const
char*', call the QString ctor that takes a 'const char*', then bind a
'const QString&' to the temporary QString object. Now if finally has
enough information to call 'operator+'.

But it's still not done, since it then has to perform even more steps
searching for an appropriate assignment operator. (Etc., etc.)

BTW, I've greatly simplified the actual process for function and
operator overloading. In reality, the compiler (and, under your scheme,
the run-time system) is required to find *all* candidate operators that
can possibly match the left-hand-side, and all that can possibly match
the right-hand-side, then union them and get exactly one final match
(there's some paring down as well; I don't remember right now). The
point is that it's nasty hard, and will require a nasty amount of code.


>>>2. Why don't return types determine overload?
>>
>>Because things like this would be ambiguous:
>>
>> int f();
>> float f();
>> char f();
>>
>> int main()
>> {
>> f();
>> ...
>> }
>
>That's easy - if there's an f() returning void, it's the correct one 
>to call. If there isn't, it's a compile error - you'd need (char) f() 
>or something to say which to call.

C++ is not D = we can't add rules that cause legal C programs to
generate compile errors unless there is a compelling reason to do so.

What would happen with this:

void foo(char* dest, const char* src)
{
strcpy(dest, src);
}

Or even the simple hello-world from K&R:

int main()
{
printf("Hello world!\n");
return 0;
}

Would those generate an error message ("No version of
'strcpy()'/'printf()' returns 'void'")?
* If they would cause an error, we break too much C.
* If they don't cause an error, we jump from the frying pan into the
fire: if someone later on created a version of those functions that
overloaded by return type, all those calls would break because suddenly
they'd all start generating error messages ("missing return-type cast"
or something like that). In other words, the programmer would have to
go back through and cast the return type, e.g., (int)printf(...) or
(char*)strcpy(...).

Adding a return-type-overloaded function wouldn't *always* cause an
error message, since sometimes it would be worse - it would silently
change the meaning of the above code. E.g., if someone created a 'void'
version of printf() or strcpy(), the above code would silently change
meaning from (int)printf(const char*,...) to a totally different
function: (void)printf(const char*,...).


>>Worse, if the three 'f()' functions were compiled in different
>>compilation units on different days of the week, the compiler might
>>not even know about the overloads and it not notice that the call is
>>ambiguous.
>
>That can happen anyway surely if you're talking different scopes?

I don't think so. If two functions are in different classes, there's no
way to accidentally forget to include the right header since you need to
call those functions via an object or a class-name. On the other hand,
if they're defined in different namespace scopes, then again I don't
think you can call them without qualification. Here's a way it might
work the way you suggest: if I'm compiling function f() within namespace
xyz, and if my f() calls g() without qualification, that could mean a
g() within xyz or a g() at filescope (that is, outside any namespace).
If someone forget to include the header that declared xyz's g(), the
compiler would make the wrong choice.

But that seems like a pretty obscure example.


>>There's an interesting example in Bjarne's "Design and Evolution of
>>C++" that shows how type safety would commonly be compromised if C++
>>did what you want. Suggest you get that book and read it -- your
>>respect for the language and its (seemingly random) decisions will go
>>up a few notches.
>
>I read Bjarne's original C++ book and found it nearly impenetrable. 

His writing is hard to read by anyone.


>Of course, that was then and this is now, but he didn't seem to me to 
>write in an overly clear style. Quite laden with technogrammar.

D&E (as the book is affectionally called) is a valuable resource for
someone like you, since it explains why things are the way they are.
It's probably not as hard to read as The C++ Programming Language since
it's really a narrative or story of how Bjarne made his decisions and
why. But even if it is hard to read, you still might like it.
(Obviously if you have only one book to buy, buy mine, not his! :-)
(Actually I get only a buck per book so I really have almost no
incentive to hawk the thing.)


>>>>>Computers
>>>>>don't work naturally with objects - it's an ill-fit.
>>>>>
>>>>>What computers do do is work with data. If you base your design
>>>>>entirely around data, you produce far superior programs. 
>>
>>This is the part I was disagreeing about. You can see why, perhaps,
>>in the example I gave above (the 'Foo' class with 20 derived classes
>>each of which had its own distinct data structure and algorithm).
>
>I'm afraid I don't. In your 20 derived classes, each is in fact its 
>own autonomous data processor whose only commonality is that they 
>share an API. The API is good for the programmer, but doesn't help 
>the data processing one jot.

I have no idea what I was thinking above - the logic seems to totally
escape me. Perhaps I was referring to your last sentence only, that is,
to base your design totally around data. Yea, that's what I was
thinking. Okay, I think I can explain it.

In my base class 'Foo', the design of the system was based around 'Foo'
itself and the API specified by Foo. 99% of the system used 'Foo&' or
'Foo*', and only a small percent of the code actually knew anything
about the data, since the data was held in the derived classes and 99%
of the system was ignorant of those. In fact, there are 20 *different*
data structures, one each in the 20 derived classes, and "99% of the
system" is ignorant of all 20.

The point is the vast majority of the code (say 99%) doesn't have the
slightest clue about the data. To me, that means the code was organized
*not* around the data. The benefit of this is pluggability,
extensibility, and flexibility, since one can add or change a derived
class without breaking any of the 99%.

I'm still not sure that addresses what you were saying, but at least I
understand what I was trying to say last night.


>Hence my view that OO is good for organising source (intuitively it 
>produces good source organisation) but poor for program design (ok, 
>program algorithms in your terms).

I think OO has one bullet in its gun: it is good for achieving
non-functional goals, like extensibility, flexibility, etc. If you are
*very* careful, you can achieve those non-functional goals without
sacrificing other non-functionals, such as speed. I think if someone
has a program with no extensibility and no flexibility goals, then OO
adds little genuine value. (Today's hottest tools tend to be built
around C++ and Java, so there is a peripheral benefit to using one of
those languages even if you don't want flexibility / extensibility. But
that peripheral benefit has nothing to do with OO per se; it's merely an
accident of history that today's tool vendors attach their best stuff to
those languages.)


>>>>The point is that these benefits came as result of OO *design*, not
>>>>as a result of programming-level issues.
>>>
>>>I'm sure OO design greatly improved the likely wasp's nest of 
>>>spaghetti that existed in there previously. But I'm not seeing how OO
>>> design is better than any other approach from this example - there
>>>are many methods that could have been employed to achieve the same
>>>result.
>>
>>Two things:
>>
>>1. If what you said in the last sentence is true, where's the beef? 
>>If these other approaches could do the same thing, why didn't they?
>>
>>2. I think you've missed the point I was making. The point was that
>>this project used inheritance the way I'm proposing it should be used,
>>and that's very different from the "inheritance is for reuse"
>>approach. It's not about OO vs. non-OO. It's about how the two
>>different styles of OO produce different results.
>
>My point was that there are alternative methods of structuring and 
>designing your code that have nothing to do with OO whatsoever. 
>Furthermore, I believe what you call OO is in fact a composite of a 
>number of different approaches many of which exist absolutely fine 
>without having objects nor inheritence nor anything like it.
>
>My fundamental point is that I think that you have integrated many 
>beneficial and good programming practices into your internal 
>conceptualisation of what OO is and means, and you are having 
>difficulty separating them and treating them as what they are. I 
>personally prefer to treat these things more seperately as I believe 
>it offers me a great selection of tools from the toolbox as it were, 
>but it's entirely a personal choice.

You're probably right. I'm not an advocate for any given style of
programming, since any advocate for anything ends up being a
one-trick-pony, and they can only be radically successful if their
particular "trick" happens to be a really good fit for the project du
jour. Instead I try to advocate success over all, and that means
intentionally using whatever styles help achieve that success.


>>Here again, you seem to be saying that if OO isn't optimal for 100% of
>>the solution, then something's wrong with it. I take the opposite
>>tact, mainly because I am *not* a promoter for any given language or
>>paradigm. In fact, I would be highly suspicious if someone (including
>>you) claimed to have a technique that is optimal for 100% of the
>>solution to any given problem, and especially if it was optimal for
>>100% of the solution of 100% of the problems. I simply do not believe
>>that there exists any one-size-fits-all techniques, including OO,
>>yours, or anybody else's.
>
>What then do you feel is problematic with a data-centric approach? 

That's easy: one size does not fit all. There's nothing "problematic"
about it, but it is a style, and therefore it will be a good fit for
some problems and a not-so-good fit for others.


>Why isn't it a better one-size-fits-all approach? 

Because there is no one-size-fits-all approach! :-)


>Surely you would 
>agree that if you base your design on quantities of data and the 
>overheads of the media in which they reside, you naturally and 
>intuitively produce a much more efficient design?

Even if what you're saying is true, "a much more efficient design" might
not the top priority on "this" project. All I'm saying is: I prefer to
start with the goals, *then* decide which technologies to use. Anyone
who comes in talking, who already knows which technologies should be
used before understanding the goals, is foolish in my book.

I wouldn't want to assume your data-oriented approach is the answer any
more than I would want to assume OO is the answer. First tell me what
the question is, *THEN* I'll come up with the "most appropriate" answer.

(BTW I think technologists typically get off track when they call one
technology "better" than another. I think they should use words like
"more appropriate for my particular project," since "better" seems to
imply "better in all projects in all industries for all time." I don't
think you said that; it was just an off-the-wall comment.)


>>>So, thoughts? I'm particularly interested in what you see as design
>>>flaws 
>>
>>Please compare and contrast with web-services. Obviously you're not
>>married to XML like most web-services are, but they also have a
>>concept of components / services through which data flows. Is there
>>some similarity? Even at the conceptual level?
>
>Good question.
>
>The difference is in orientation. XML builds on top of the existing 
>paradigm using existing software and its structure. Hence, the range 
>of data it can process and how it processes it is quite limited 
>(despite what its advocates might say).
>
>What I propose goes the other way round - the programming is shaped 
>by the needs of the data (rather than the other way round with XML). 
>Of course, this needs a complete rewrite of all the software, but 
>more on that later.
>
>Fundamentally of course, XML is based around separating content from 
>structure in order to achieve data portability. Now this is a 
>laudable idea (and also one I think a pipedream) and partly of course 
>my idea does the same. However, the fact I use much tinier data 
>processors (ie; much finer granularity) and very different way of 
>interfacing two formats of data I feel makes my solution far 
>superior.
>
>Of course, if they take XML much beyond what's already agreed, then I 
>could have a problem on my hands. However, I think the same old 
>propriatary data problems will raise their head and will subvert the 
>possible potential. In the end, my method is completely compatible 
>with XML, so I can always bind in XML facilities.

You might want to think about this as you go forward. XML's limitations
are the fact that it's text based (speed, perhaps some limitations in
its flexibility), and, paradoxically, it is too good at being
self-describing and therefore there are some security concerns. (If
you're aware of that second point, skip this: if an XML glob has a tag
somewhere that says <CreditCard>....</CreditCard>, then hackers know
just where to look to get the info they want. If it wasn't so
self-describing, it would be harder to hack. People are very aware of
this problem and are working on it. Fortunately (or unfortunately??)
the solution is trivial: encrypt all XML blobs that pass across a
network.)

The point is that XML has some limitations, but it seems like people are
going to be able to get 90% of what they want via XML, and then 95%,
then 97%, etc. That progression makes your stuff less compelling.


>>[all very valid points about business]
>>I'm not trying to discourage you - just trying to ask if you know what
>>the pay-back really is. I'm also trying to remind you about how none
>>of the front runners in OO survived, and ultimately it took a couple
>>of decades before that paradigm took hold.
>
>I completely agree with all these very valid points. But then I 
>didn't explain my business model to you, only the technical model. 
>The idea is to completely avoid business, because they won't buy it. 
>The target for the first two years is actually slashdot readers. Let 
>me explain:
>
>Have you ever seen or used something that just impressed you with its 
>quality? Have you ever really enjoyed programming for a certain 
>language or operating system because it was so well designed?
>
>In other words, I'm targeting the 20% of programmers or so who 
>actually like programming and do it outside of work for fun. ie; a 
>good majority of slashdot readers.
>
>The runtime is obviously free and the SDK will mostly be free too ie; 
>investors can't expect a return for the first two years. This is 
>because in fact we're building a software base, without which no new 
>paradigm stands a chance in hell.
>
>We start making money when we put in the networking code sometime 
>into the third year. 

That's a *very* hard sell to a VC guy. They have lots of people
claiming to deliver a 100% or 1000% return in the first year. To admit
you're a money pit that won't even generate any revenue for 3 years (and
won't probably generate profit for many more years) will be a *very*
hard sell.

Think about using Cygnus's business model. Cygnus was Michael Tiemann's
old company (I think they were acquired by Red Hat). They had a similar
goal as yours, only they had much more popular tools, e.g., the GNU C
and C++ compilers, etc., etc. Michael wrote the first version of g++,
then rms (Stallman) got back into the mix and they worked together on a
brand new GCC that combined C, C++, Objective C, Java, FORTRAN, and
probably Swahili into the same compiler. The point is that the GNU
tools are free, but Cygnus got revenue by charging corporations for
support. Big companies in the US are afraid of free software. They
want *somebody* they can call and say, "I need you to fix this bug NOW!"
So Michael offered them a one-day turn-around on bugs (or something like
that) for $20,000/year (or something like that). He told them he'd
distribute the bug-fix to everyone free of charge, but they didn't care:
they wanted to know THEIR programmers wouldn't get hung up on bugs, so
it made business sense.


>>Will it be most suited for embedded systems? handhelds? web servers?
>
>None of those three. In fact, it's likely to require very significant 
>overheads - it certainly uses a *lot* of processes and threads plus 
>it uses a lot of caching, so memory use will be high.
>
>However, I honestly don't know. I look at COM and I think my solution 
>is likely to require less overhead. I won't know until I have working 
>code to benchmark. I will say though I have gone to great lengths to 
>optimise the system.

Think about that - it might be worthwhile to create a few "sample apps"
and a few "sample app programmers," that way you "start with the end in
mind." I'm sure you have a few sample apps already, so I'm talking
about some things like COM-ish things, XML/web services, etc.

Marshall




From: Niall Douglas <xxx@xxxxxxx.xxx>
To: "Marshall Cline" <xxxxx@xxxxxxxxx.xxx>
Subject: RE: Comments on your C++ FAQ
Date: Fri, 2 Aug 2002 02:57:35 +0200

On 1 Aug 2002 at 3:29, Marshall Cline wrote:

> However what I'm talking about is something different. I really need
> to express the core idea more clearly. The idea of extensibility is
> usually achieved by structuring your code so 99% (ideally 100%) of
> your system is ignorant of the various derived classes. In
> particular, pretend 'ABC' is an abstract base class with pure virtual
> methods but no data and no code, and pretend 'Der1', 'Der2', ...,
> 'Der20' are concrete derived classes that inherit directly from 'ABC'.
> The goal is for 99% of the system to NEVER use the token Der1, Der2,
> ..., Der20 --- to pass all these objects as 'f(ABC* p)' or 'g(ABC&
> x)'.

That's actually what I meant previously about being able to replace a 
class in a hierarchy with a completely different one and have little 
if no knock-on effects.

> There are other problems with deep hierarchies, most especially the
> reality that they often result in "improper inheritance," and that
> causes unexpected violations of the base class's contract which
> ultimately breaks the code of "the vast majority" of the system. This
> proper-inheritance notion is the same as require-no-more,
> promise-no-less, which you basically didn't like :-(

No, I didn't like the /phrase/, not its meaning. I understood the 
meaning three emails ago.

> Big picture: Let's start over and ask, In addition to meeting the
> requirements, what are we trying to achieve? Instead of saying
> something vague like "reduce maintenance cost," I'll try to codify
> that in terms of software stability: we want the bulk of the changes
> or extensions to *not* require changes to the bulk of the system. A
> cutesy way to say this is to eliminate the ripple effect. The point
> is to try to build stuff in such a way that the bulk (say 99%) of the
> system is stable when changes or extensions are made. The above is a
> partial solution to this problem.

Agreed.

> >One of the things I look for when designing my class inheritances is
> >whether I could say, chop out one of the base classes and plug in a
> >similar but different one.
> 
> Interesting. That might be an artifact of the "other" approach to
> inheritance, since that's almost exactly the opposite of what happens
> with my code. Generally I design things so the base class is forever.
> There is no benefit to unplugging it, and in fact it is extremely
> unusual for it to be unpluggable since it codifies both the signatures
> *and* the contracts that the various derived classes must abide by,
> and plugging in another base class would almost always change those in
> some way. But again, I emphasize that I never set up unpluggable base
> classes *because* of the overall structure of my code, in particular
> the structure where 99% of the system uses base-class
> pointers/references, and that 99% is ignorant of any derived classes.

No no, I meant in terms of keeping coupling low, not at all that I 
actually intended to be swapping base classes around (that would 
indicate I hadn't designed the thing right).

> These last two paragraphs aren't describing a new model, but are
> trying to give a couple of insights about the model I've been
> describing all along.

You don't seem to believe me when I say I've integrated your wisdom! 
Trust me, I absolutely 100% understand what you have taught me - I 
learn quickly!

> >>A derived class's methods are allowed to weaken requirements
> >>(preconditions) and/or strengthen promises (postconditions), but
> >>never the other way around. In other words, you are free to
> >>override a method from a base class provided your override requires
> >>no more and promises no less than is required/promised by the method
> >>in the base class. If an override logically strengthens a
> >>requirement/precondition, or if it logically weakens a promise, it
> >>is "improper inheritance" and it will cause problems. In
> >>particular, it will break user code, meaning it will break some
> >>portion of our million-line app. Yuck.
> >>
> >>The problem with Set inheriting from Bag is Set weakens the
> >>postcondition/promise of insert(Item). Bag::insert() promises that
> >>size() *will* increase (i.e., the Item *will* get inserted), but
> >>Set::insert() promises something weaker: size() *might* increase,
> >>depending on whether contains(Item) returns true or false. 
> >>Remember: it's perfectly normal and acceptable to weaken a
> >>precondition/requirement, but it is dastardly evil to strengthen a
> >>postcondition/promise.
> >
> >This is quite ephemeral and subtle stuff. Correct application appears
> > to require considering a lot of variables.
> 
> I probably didn't describe it well since it's actually quite simple. 
> In fact, one of my concerns with most software is that it's not soft,
> and in particular it has a large ripple effect from most any change. 
> This means a programmer has to understand how all the pieces fit
> together in order to make most any change. In other words, the whole
> is bigger than the sum of the parts.

Well this is typical of any increasingly complex system - more and 
more, it is less the mere sum of its parts.

> The idea I described above means that the whole is *merely* the sum of
> the parts. In other words, an average (read stupid) programmer can
> look at a single derived class and its base class, and can, with total
> and complete ignorance of how the rest of the system is structured,
> decide unequivocally whether the derived class will break any existing
> code that uses the base class. To use another metaphor, he can be
> embedded in a huge forest, but he can know whether something new will
> break any old thing by examining just one other leaf.

That's an admirable desire, but do you think it's really possible? If 
I've learned anything from quantum mechanics and biology, it's that 
there will *always* be knock-on effects from even the tiniest change 
in any large system. Good design and coding is about minimising 
those, but as you've mentioned before all you need is one bad 
programmer to muck it all up.

> I rather like that idea since I have found the average programmer is,
> well, average, and is typically unable or unwilling to understand the
> whole. That means they screw up systems where the whole is bigger
> than the sum of the parts -- they screw up systems where they must
> understand a whole bunch of code in order to fully know whether a
> change will break anything else.

Hence the usefulness of pairing programmers.

> I call this the "middle of the bell curve" problem. The idea is that
> every company has a bell curve, and in the middle you've got this big
> pile of average people, and unfortunately these average people can't
> handle systems where the whole is bigger than the sum of the parts.
> That means we end up relying on the hot-shots whenever we need to
> change anything, and all the average guys sit around sucking their
> thumbs. I think that's bad. It's bad for the hot-shots, since they
> can never do anything interesting - they spend all their time fighting
> fires; it's bad for the average people since they're under utilized;
> and it's bad for the business since they're constantly in terror mode.

OTOH, as many BOFH's know, enhancing a company's dependence on you 
increases your power. Right at the start those expert's under whose 
wing I was which I mentioned, they would do things like turn up late 
when they felt like it and declare their own vacation time with about 
eight hours notice. I must admit, I've used my own hot-shot status 
occasionally as well - while I don't like the consequences of it 
professionally, it's an easy vicious circle to fall into.

> If we don't solve the middle-of-the-bell-curve problem, why have we
> bothered with all these fancy tools, paradigms, etc.? In other words,
> the hot-shots could *always* walk on water and do amazing things with
> code, and they don't *need* OO or block-structured or GUI builders or
> any other fancy thing. 

No, that's not true. If collected some hot-shots together and wrote 
say X ground-up in assembler - yes, absolutely, it could be done and 
done well but in productivity terms it would be a disaster.

Or, put more simply, all this OO and GUI builders and such just as 
much enhance productivity of hot-shots than the average guy. In fact, 
I'd say the more abstract we make it, the *greater* the difference in 
productivity between best and worse because the harder it is for the 
average guy to know why what he wants to do doesn't work.

> So if we don't solve the
> middle-of-the-bell-curve problem, we might as well not have bothered
> with all this OO stuff (or you could substitute "data centric stuff"),
> since we really haven't changed anything: with or without our fancy
> languages/techniques, the hot-shots can handle the complexity and the
> average guys suck their thumbs.

No, I'd have to disagree with you here. In many many ways the modern 
task of software engineering is harder than it was in the 1980's. At 
least then, you wrote your code and it worked. Nowadays, it's a much 
more subtle task because your code depends directly on millions of 
lines of other people's code, much of which wasn't written with a 
single purpose in mind. I know I've wasted days on stupid problems 
with undocumented malfunctioning - and I can only imagine for the 
less technically able (does the phrase "horrible nasty workaround" 
come to mind?)

> Finding a way to utilize the middle of the bell curve is, in my mind,
> a bulls eye for the industry as a whole. And I think the above
> approach is a partial solution.

I think there's some chance so long as the average programmer stays 
in one environment eg; just Java. As soon as they want to say tie 
Java in with COM, then all hell can break loose. And the more 
disperate technologies you bring together to try and get them to work 
as a cohesive whole, the harder it gets.

I'll put it this way: there is a definite problem in modern software 
engineering with documentation. That game I wrote for DirectX had me 
pounding my head for days because of some of the worst docs I have 
seen in recent times. Unix in general is even worse - you get your 
man or info pages which vary widely in quality. AFAICS they're not 
making the guys who write the code write the documentation, and 
that's bad.

I'll just mention RISC-OS had fantastic documentation (even with 
custom designed manuals which automatically perched on your lap). 
It's a difference I still miss today, and it's why my project has 
excellent documentation (I wrote a lot of it before the code).

> >No, I've got what you mean and I understand why. However, the point
> >is not different to what I understood a few days ago although I must
> >admit, Better = Horizontal > Vertical is a much easier rule of thumb
> >than all these past few days of discussion. You can use that rule in
> >your next book if you want :)
> 
> I've seen hierarchies with up to five levels, although all but the
> very last were abstract base classes with almost no data or code. So
> again, to me the real culprit is improper inheritance and/or
> inheritance from a data structure. The former breaks user code and
> the later can cause performance problems (creates a ripple effect when
> we try to improve performance by changing to a different data
> structure).

What would your thoughts be then on Qt, which does make some use of 
data, more data, some more data; in its class hierarchies? 

> For example, they could cause the list's contents to become
> unsorted, and that could screw up TSortedList's binary search
> algorithms. The only way to insulate yourself from this is to use
> has-a or private/protected inheritance.

Surely private or protected inheritance affects the subclass only? 
ie; you could still pass the subclass to its base class?

> >Embedded systems programming as we knew it is dying out. When 
> >desktops went all powerful, a lot of us assembler guys went into tiny
> > systems but now they've gone all powerful, it's a rapidly shrinking
> >market. The writing is definitely on the wall - move onto OO and C++
> >and such or else become unemployable.
> 
> You may have already worked with hand-held systems, but if not, they
> might be the last bastion of tight, high-tech coding. Particularly
> hand-held systems targeted at the consumer market, since that usually
> means the company wants to squeeze the unit cost and extend the
> battery life. In those cases, they worry about everything. Wasting
> memory means the thing needs more RAM or flash, and that increases
> unit cost and reduces battery life. Similarly wasting CPU cycles
> burns the battery up pretty fast. So in the end they want it very
> small and very fast, and that makes it challenging/fun.

No, that's not hugely true anymore. I worked alongside the Windows CE 
port to the ARM as well as Psion's Symbian OS and the predominant 
view was to write it much as for a desktop. After all, handhelds will 
get faster and have more memory just like a desktop. What they do is 
produce a beta copy for the development prototype which is usually 
way over-spec (ie; spec in two to three years), and then work out the 
least they can put into the production models and optimise from there 
(ie; how low can they push the clock speed + hardware features for 
the software). It's definitely not ground-up anymore, and if it comes 
down to the flash image being too big to fit they just stick more ROM 
in (it's negligible in price and battery consumption).

In fact, between two thirds and three quarters of battery power goes 
on the screen. Regarding cost, most tends to go with your chosen 
screen/chipset/peripherals whereas memory is quite cheap.

Put it this way: Symbian and WinCE are entirely C++. WinCE lets you 
port your windows app through a special recompile and removal of some 
of the more esoteric APIs. The days of assembler hacking are over.

> Limbo. It's hosted within Inferno, an OS that was originally by
> Lucent, but was sold to a UK company named Vita Nuova.
> 
> Limbo was designed by Dennis Ritchie and some other really smart folks
> (BTW I had a chance to talk to Dennis on the phone as a result of this
> engagement), and everyone involved gives it glowing reviews. But like
> I said, my client is having a hard time finding people to work with it
> since there simply aren't that many Limbo programmers out there.
> 
> Somewhat interesting approach. It's hosted via a virtual machine, and
> it's compiled into a byte-code of sorts, but it's very different from
> the stack-machine approach used by Java. It's much closer to a
> register-machine, so the source code "a = b + c" compiles into one
> instruction (pretend they're all of type 'int', which uses the 'w'
> suffix for 'word'):
> 
> addw b, c, a // adds b+c, storing result into a
> 
> The corresponding Java instructions would be something like this:
> 
> iload b // pushes b
> iload c // pushes c
> iadd // pops c then b, adds, pushes the sum
> istore a // pops the sum, stores into a
> 
> There are two benefits to the Limbo byte-code scheme: it tends to be
> more compact, on average, and it's much closer to the underlying
> hardware instructions so a JIT compiler is much smaller, faster, uses
> less memory, and is easier to write. E.g., a Java JIT has to convert
> all these stack instructions to a typical machine-code add, and that
> transformation has to happen on the fly, whereas Limbo does most of
> that transformation at compile-time.

In other words, Limbo is doing a full compile to a proper assembler 
model (which just happens not to have a processor which can run it, 
but one could be easily designed). Java is really mostly interpreted 
in that the source is pretty easy to see in the byte code. I've seen 
some reverse compilers and their output is awfully similar to the 
original - whereas no reverse compiler would have a hope 
reconstituting C++ (or even C).

> It also has some very interesting data transmission techniques, based
> on CAR Hoare's CSP (communicating sequential processes) model. By
> tying their synchronization and data transmission technique to CSP,
> they instantly know all sorts of facts that academics have proven
> about their language/environment. For example, Hoare showed that CSP
> can be used to build any other synchronization primitive (such as
> semaphores, or Java-like monitors, or anything else), and a whole
> bunch of academics created all sorts of reader/writer scenarios that
> can be exploited by the language.
> 
> The Limbo implementation of CSP is via channels. You create a channel
> in your code, then one thread reads from the channel and another
> thread writes to it. Channels aren't normally tied to files, but they
> can be. Pretty slick stuff, and somewhat related to what you're doing.
> For example, you can have a channel of 'int', or a channel of 'Xyz'
> where the latter is a struct that contains all sorts of stuff, or
> anything in between.

That's interesting, that's very similar to what I do - channels are 
lighter finer granularity form of my data streams and I've also used 
a lazy data locking approach to coordinate multiple access to 
distributed data. Mine runs like a p2p system because unfortunately, 
POSIX does not define many inter-process synchronisation mechanisms.

> The language takes some getting used to, primarily because it, unlike
> C or C++, has *no* back door to let you do things that are nasty. 
> E.g., there are no unchecked pointer casts, there is nothing
> corresponding to a 'void*' type, function pointers have to specify all
> parameters exactly, there is no is-a conversion or any other way to
> get a Foo pointer to point at a Bar object, etc. Obviously the
> byte-code level (called "Dis") lets you do these things, but Limbo
> itself tries to protect idiots from being idiotic. (As you might
> guess, I had to write some Dis code for some things. That was fine,
> of course, but it was somewhat bizarre seeing that Limbo offered no
> alternative.)
>
> You might want to read their articles about CSP. See
> www.vitanuova.com. Also check out Lucent's web site.

Actually, I went and downloaded a prebuilt version for VMWare - I'm 
sitting on its desktop right now. I must admit to being slightly 
miffed that they've also "stolen" my idea for a unified namespace, 
although theirs merely includes windows. They're also come up 
remarkably with quite a few things I had thought of independently - 
like overloading mouse button presses. I'm doing it in a way though 
that won't scare people (unlike Plan 9)

Maybe I should go apply for a job at Bell Labs? Nah, that US visa 
thing getting in the way again ...

Thanks for pointing me towards Plan 9, I wouldn't have known my ideas 
are so agreed upon by (eminent) others without it!

> I can achieve everything in straight C that I can in C++, since I can
> always manually simulate a virtual-pointer / virtual-table mechanism,
> hand-mangle function names to simulate function or operator
> overloading, manually create the state transition tables used by
> exception handling, etc. However there would be a lot of grunt
> bookkeeping code, and that would eventually cloud the ultimate goal. 
> Even if I chose a simplified set of features to simulate, it would
> still add chaff to the code and that would make it harder to work
> with. One real-world example of this is X (AKA the X Windows System).
> X was written in C using OO design, and they had all sorts of macros
> to handle inheritance and all the rest, but eventually they lost a
> grip on the system because it had too much bookkeeping crap.

No I twigged this with my C fairly early on. It only makes sense to 
munge non language supported features in so far as doing so saves you 
cost later on. Going overboard makes things worse.

That is why I chose C++ for my project, not C or assembler (like my 
previous two attempts).

> I think you just weakened your argument about OO being non-intuitive.
> Below you said:
> 
> >... can you see my point that when a newbie 
> >designs OO they tend to get it wrong? Hence my point that good OO
> >isn't intuitive, and hence my point that there is something wrong
> >with OO because a better system would be intuitive ie; complete
> >newbie has a good chance of generating a good design?
> 
> The argument you're making seems reasonable: when a newbie designs OO
> he gets it wrong, therefore good OO isn't intuitive, therefore there
> is something wrong with OO.
> 
> However, once you admit that these people are no good at design full
> stop period, the two "therefore"s go away -- the fact that they're no
> good at design full stop period provides an alternative explanation
> for why they get OO designs wrong.

No not at all - the reason partly why they are no good at design is 
because they aren't using the right schemas. When someone must 
practice something non-intuitive, they must build helper schemas to 
manage it. If they have difficulty with this, the result is poor 
understanding and especially application.

My suggestion is to give them a better schema base with which 
intuition can help them more.

> >Yes, it compiles fine. And no, I'm not sure why it does when TQString
> > does especially when I've faithfully replicated the constructor
> >hierarchy above.
> 
> Are any of the ctors in either TQString or QString listed as
> "explicit"?

No. In fact I didn't know there was an explicit keyword till now.

> Note that your 'DerivedString' has a ctor that takes a (const
> BaseString&), which is not a copy ctor. Is that intentional?
> 
> It seems very strange to me that QString would have an operator= that
> takes a (const char*), but not one that takes a (const QString&). If
> it really takes both, you might want to add them both.
> 
> Basically I'm curious and frustrated that I don't understand this one.
> If you're willing, keep adding signatures from QString/TQString to
> BaseString/DerivedString until the latter breaks. I'd be thrilled if
> you can chase this one down, but I'll obviously understand if you
> can't. (I *hate* irrational errors, because I'm always afraid I've
> missed something else. Like your "template<class type>" bug, adding a
> pointer cast made the error go away, but I don't think either of us
> were comfortable until we found the real culprit.)

I did play around some more with that but couldn't replicate the 
error. Unfortunately, I added casts to the six or so errors and now I 
can't find them anymore (I really need to put this project into CVS). 
So I am afraid it's lost for the time being - sorry.

Furthermore, I found why << and >> weren't working. It seems they 
didn't like being concantated eg; ds << keyword << metadata where 
keyword was a QString and metadata a struct with public operator 
overloads designed to stream it. I understand now QString doesn't 
know what a TQDataStream is, so it was implicitly casting up and then 
my metadata struct couldn't handle the output QDataStream instead of 
TQDataStream.

> >Ok, I'm interested now. You can point me at a webpage if one exists.
> 
> No prob. To make sure we're on the same page, let's be explicit that
> all the 'foo()' functions take the same parameter list, say an 'int'
> and a 'double', so the only difference is their return types. I'll
> first rewrite the "user code" using these parameters:
> 
> void sample(int a, double b)
> {
> int i = foo(a, b);
> char c = foo(a, b);
> float f = foo(a, b);
> double d = foo(a, b);
> String s = foo(a, b);
> }
> 
> The rules of the game are simple: if we can get a totally separate
> function to get called for each line above, we win.
> 
> The solution is trivial:
> 
> class foo {
> public:
> foo(int a, double b) : a_(a), b_(b) { }
> operator int() const { ... }
> operator char() const { ... }
> operator float() const { ... }
> operator double() const { ... }
> operator String() const { ... }
> private:
> int a_;
> double b_;
> };
> 
> QED

Let me get this: you're overloading the () operator yes? In which 
case, that's quite ingenious. I'm not sure it would prove regularly 
useful though - seems too roundabout a solution except in quite 
specific instances.

Thanks nevertheless.

> >I personally would probably have had it use static typing when it
> >could, but when the compiler didn't know it would complain unless you
> > added a modifier to say it was a dynamic cast - then the check gets
> >delayed till run time. As it happens, surely that's happened anyway
> >(albeit relatively recently) with dynamic_cast<>().
> >
> >My point is, it could have been made possible to utilise the best of
> >both worlds but with a bias toward static typing.
> 
> I think your goal is admirable. However if you think a little deeper
> about how this would actually get implemented, you would see it would
> cause C++ to run much slower than the worst Smalltalk implementation,
> and to generate huge piles of code for even trivial functions. E.g.,
> consider:
> 
> void foo(QString& a, QString& b)
> {
> a = "xyz" + b;
> }
> 
> Pretend QString's has a typical 'operator+' that is a non-member
> function (possibly a 'friend' of QString). It needs to be a
> non-member function to make the above legal. Pretend the signature of
> this 'operator+' is typical:
> 
> QString operator+ (const QString& x, const QString& y);
> 
> Thus the 'foo()' function simply promotes "xyz" to QString (via a
> QString ctor), calls the operator+ function, uses QString's assignment
> operator to copy the result, then destructs the temporary QString.

That's how it works currently, yes.

> However if your relaxed rules above let someone pass things that are
> not a QString (or one of its derived classes) for 'a' and/or 'b',
> things are much worse. (And, unfortunately, if your relaxed rules do
> not allow this, then I don't think you're getting much if any
> advantage to your relaxed rules.)
> 
> In particular, if 'a' and/or 'b' might not be QString objects, the
> compiler would need to generate code that checked, at run-time, if
> there exists any 'operator+' that can take a 'char*' and whatever is
> the type of 'a' (which it won't know until run-time). Not finding
> one, it would search for valid pointer conversions on the left, e.g.,
> 'const char*', 'void*', 'const void*'. Not finding any of those, it
> would also search for any 'operator+' that takes the type of 'b' on
> the right. Finally, if we assume 'b' actually is a QString, it would
> find a match since it could promote the type of 'b' from 'QString&' to
> 'const QString&' (that's called a cv-conversion).

Firstly, I was thinking that the compiler would produce an error 
without a special keyword which limits the overall possibilities of 
casting ie; a strong hint to limit the total number of varieties. 
Hence then much of the above searching is unnecessary.

> However it's not done yet. To make the only candidate 'operator+'
> work, it has to try to convert the left-hand parameter from 'char*' to
> whatever is on the left-side of the 'operator+' (which it would
> discover at run-time to be 'const QString&'). Eventually it will
> discover this can be done in three distinct steps: promote the 'char*'
> to 'const char*', call the QString ctor that takes a 'const char*',
> then bind a 'const QString&' to the temporary QString object. Now if
> finally has enough information to call 'operator+'.
> 
> But it's still not done, since it then has to perform even more steps
> searching for an appropriate assignment operator. (Etc., etc.)
> 
> BTW, I've greatly simplified the actual process for function and
> operator overloading. In reality, the compiler (and, under your
> scheme, the run-time system) is required to find *all* candidate
> operators that can possibly match the left-hand-side, and all that can
> possibly match the right-hand-side, then union them and get exactly
> one final match (there's some paring down as well; I don't remember
> right now). The point is that it's nasty hard, and will require a
> nasty amount of code.

I think actually your point is that doing this requires duplication 
of effort - compile-time and run-time and the two don't quite mesh 
together perfectly.

Ok, fair enough. Still, out of the OO languages I know, they seem to 
strongly tend towards either static or dynamic with no attempts to 
run a middle route. I probably am saying this out of ignorance 
though.

> C++ is not D = we can't add rules that cause legal C programs to
> generate compile errors unless there is a compelling reason to do so.

I'm not seeing that this would.

> What would happen with this:
> 
> void foo(char* dest, const char* src)
> {
> strcpy(dest, src);
> }
> 
> Or even the simple hello-world from K&R:
> 
> int main()
> {
> printf("Hello world!\n");
> return 0;
> }
> 
> Would those generate an error message ("No version of
> 'strcpy()'/'printf()' returns 'void'")?

Only if there is another overload. If there's one and one only 
strcpy(), it gets called irrespective of return just like C. If 
there's more than one, it uses the void return otherwise it generates 
an error (without a cast).

> * If they would cause an error, we break too much C.
> * If they don't cause an error, we jump from the frying pan into the
> fire: if someone later on created a version of those functions that
> overloaded by return type, all those calls would break because
> suddenly they'd all start generating error messages ("missing
> return-type cast" or something like that). In other words, the
> programmer would have to go back through and cast the return type,
> e.g., (int)printf(...) or (char*)strcpy(...).

No I think my solution preserves existing code.

> Adding a return-type-overloaded function wouldn't *always* cause an
> error message, since sometimes it would be worse - it would silently
> change the meaning of the above code. E.g., if someone created a
> 'void' version of printf() or strcpy(), the above code would silently
> change meaning from (int)printf(const char*,...) to a totally
> different function: (void)printf(const char*,...).

In this particular case, yes. I would have the message "demons abound 
here" stamped in red ink on that. My point is that C and C++ put lots 
of power into the hands of the programmer anyway, so I don't think 
the fact you can break lots of code by introducing a void return 
variant of an existing function is all that bad. There are worse 
potentials for error in the language.

> >I read Bjarne's original C++ book and found it nearly impenetrable. 
> 
> His writing is hard to read by anyone.

Ah thank god, I was thinking I was stupid!

> >Of course, that was then and this is now, but he didn't seem to me to
> > write in an overly clear style. Quite laden with technogrammar.
> 
> D&E (as the book is affectionally called) is a valuable resource for
> someone like you, since it explains why things are the way they are.
> It's probably not as hard to read as The C++ Programming Language
> since it's really a narrative or story of how Bjarne made his
> decisions and why. But even if it is hard to read, you still might
> like it. (Obviously if you have only one book to buy, buy mine, not
> his! :-) (Actually I get only a buck per book so I really have almost
> no incentive to hawk the thing.)

I'm guessing you get a 10% commission then, halved between the two of 
you. Yeah, it's not a lot ...

No, my reading time is fully occupied with philosophy, psychology and 
other humanities. I force myself to read for a half hour a day, but 
even still other things quickly eat up my time (eg; project, visiting 
people etc.).

> >I'm afraid I don't. In your 20 derived classes, each is in fact its
> >own autonomous data processor whose only commonality is that they
> >share an API. The API is good for the programmer, but doesn't help
> >the data processing one jot.
> 
> I have no idea what I was thinking above - the logic seems to totally
> escape me. Perhaps I was referring to your last sentence only, that
> is, to base your design totally around data. Yea, that's what I was
> thinking. Okay, I think I can explain it.
> 
> In my base class 'Foo', the design of the system was based around
> 'Foo' itself and the API specified by Foo. 99% of the system used
> 'Foo&' or 'Foo*', and only a small percent of the code actually knew
> anything about the data, since the data was held in the derived
> classes and 99% of the system was ignorant of those. In fact, there
> are 20 *different* data structures, one each in the 20 derived
> classes, and "99% of the system" is ignorant of all 20.
> 
> The point is the vast majority of the code (say 99%) doesn't have the
> slightest clue about the data. To me, that means the code was
> organized *not* around the data. The benefit of this is pluggability,
> extensibility, and flexibility, since one can add or change a derived
> class without breaking any of the 99%.
> 
> I'm still not sure that addresses what you were saying, but at least I
> understand what I was trying to say last night.

No that addresses programmability and maintainability. It does not 
address program efficiency, which was my point.

> >Hence my view that OO is good for organising source (intuitively it
> >produces good source organisation) but poor for program design (ok,
> >program algorithms in your terms).
> 
> I think OO has one bullet in its gun: it is good for achieving
> non-functional goals, like extensibility, flexibility, etc. If you
> are *very* careful, you can achieve those non-functional goals without
> sacrificing other non-functionals, such as speed. I think if someone
> has a program with no extensibility and no flexibility goals, then OO
> adds little genuine value. 

Err, does this mean you are agreeing with me? :)

> >My fundamental point is that I think that you have integrated many
> >beneficial and good programming practices into your internal
> >conceptualisation of what OO is and means, and you are having
> >difficulty separating them and treating them as what they are. I
> >personally prefer to treat these things more seperately as I believe
> >it offers me a great selection of tools from the toolbox as it were,
> >but it's entirely a personal choice.
> 
> You're probably right. I'm not an advocate for any given style of
> programming, since any advocate for anything ends up being a
> one-trick-pony, and they can only be radically successful if their
> particular "trick" happens to be a really good fit for the project du
> jour. Instead I try to advocate success over all, and that means
> intentionally using whatever styles help achieve that success.

Ah, agreement also. Good.

> >What then do you feel is problematic with a data-centric approach? 
> 
> That's easy: one size does not fit all. There's nothing "problematic"
> about it, but it is a style, and therefore it will be a good fit for
> some problems and a not-so-good fit for others.
> 
> >Why isn't it a better one-size-fits-all approach? 
> 
> Because there is no one-size-fits-all approach! :-)

Ok, how about a better starting approach?

> >Surely you would 
> >agree that if you base your design on quantities of data and the
> >overheads of the media in which they reside, you naturally and
> >intuitively produce a much more efficient design?
> 
> Even if what you're saying is true, "a much more efficient design"
> might not the top priority on "this" project. All I'm saying is: I
> prefer to start with the goals, *then* decide which technologies to
> use. Anyone who comes in talking, who already knows which
> technologies should be used before understanding the goals, is foolish
> in my book.
> 
> I wouldn't want to assume your data-oriented approach is the answer
> any more than I would want to assume OO is the answer. First tell me
> what the question is, *THEN* I'll come up with the "most appropriate"
> answer.

Ok, I think we're escaping the fundamental core of this thread. 
Basically, what I am saying, is that across all the software projects 
in all the world, people are mostly applying an OO-based solution as 
a primary leader. I feel this produces worse quality software because 
of the problems with lack of intuition and furthermore, if a data-
centric approach were taken instead as a primary leader, better 
quality software would emerge precisely because there is better 
chance of intuition leading you correctly. In this, I am not negating 
the use of OO at all, just saying it should not be the primary 
tackling methodology - and of course, any combination of various 
techniques should be used depending on what's the best solution.

> [My Data Centric ideas]
> >>>So, thoughts? I'm particularly interested in what you see as design
> >>>flaws 
> >>
> >>Please compare and contrast with web-services. Obviously you're not
> >>married to XML like most web-services are, but they also have a
> >>concept of components / services through which data flows. Is there
> >>some similarity? Even at the conceptual level?
>
> You might want to think about this as you go forward. XML's
> limitations are the fact that it's text based (speed, perhaps some
> limitations in its flexibility), and, paradoxically, it is too good at
> being self-describing and therefore there are some security concerns. 
> (If you're aware of that second point, skip this: if an XML glob has a
> tag somewhere that says <CreditCard>....</CreditCard>, then hackers
> know just where to look to get the info they want. If it wasn't so
> self-describing, it would be harder to hack. People are very aware of
> this problem and are working on it. Fortunately (or unfortunately??)
> the solution is trivial: encrypt all XML blobs that pass across a
> network.)
> 
> The point is that XML has some limitations, but it seems like people
> are going to be able to get 90% of what they want via XML, and then
> 95%, then 97%, etc. That progression makes your stuff less
> compelling.

True, but it depends greatly where they are taking XML. AFAICS, 
effectively all they are doing is making propriatery file formats 
public and furthermore, they're guaranteeing a certain minimum 
structure.

You still require a DTD. You still require something which can work 
with that DTD. Of course, the DOM API takes much of your generalised 
work away, but you're still lumbered with either licensing an 
interpreter for the data or building your own one in order to make 
use of the data.

Contrast with my solution: mine leverages existing facilities in that 
if I want to pull data out of say an Excel spreadsheet, my data 
converter just invokes COM and has it done. Furthermore, because of 
colaborative processing, if my Sparc box wants to root around inside 
an Excel file it merely uses a data converter running on a windows 
box. End result: identical.

Are we beginning to see business advantages yet? Hence why I said the 
networking code is the key to revenue.

> >Have you ever seen or used something that just impressed you with its
> > quality? Have you ever really enjoyed programming for a certain
> >language or operating system because it was so well designed?
> >
> >In other words, I'm targeting the 20% of programmers or so who 
> >actually like programming and do it outside of work for fun. ie; a
> >good majority of slashdot readers.
> >
> >The runtime is obviously free and the SDK will mostly be free too ie;
> > investors can't expect a return for the first two years. This is
> >because in fact we're building a software base, without which no new
> >paradigm stands a chance in hell.
> >
> >We start making money when we put in the networking code sometime
> >into the third year. 
> 
> That's a *very* hard sell to a VC guy. They have lots of people
> claiming to deliver a 100% or 1000% return in the first year. To
> admit you're a money pit that won't even generate any revenue for 3
> years (and won't probably generate profit for many more years) will be
> a *very* hard sell.

Yes but the people making those claims have been proven to be liars 
by the dot com death. They were all along, because if Amazon can't 
make a profit in how many years then sure as hell no claims of 100% 
return in the first year have any foundation in truth whatsoever.

Regarding being a money pit, if I took me and another programmer of 
my choosing (a 10x programmer like from DEC's Mythical Man Month) on 
a 20k salary with a strong percentage in company shares, we should 
only need 150,000 for the first two years.

Other way round: we can take donations during the first two years, 
say 20 euro each one. Having watched BeOS and internet radio 
stations, I genuinely think that could cover 20% of our costs - but 
only into the second year onwards.

> Think about using Cygnus's business model. Cygnus was Michael
> Tiemann's old company (I think they were acquired by Red Hat). They
> had a similar goal as yours, only they had much more popular tools,
> e.g., the GNU C and C++ compilers, etc., etc. Michael wrote the first
> version of g++, then rms (Stallman) got back into the mix and they
> worked together on a brand new GCC that combined C, C++, Objective C,
> Java, FORTRAN, and probably Swahili into the same compiler. The point
> is that the GNU tools are free, but Cygnus got revenue by charging
> corporations for support. Big companies in the US are afraid of free
> software. They want *somebody* they can call and say, "I need you to
> fix this bug NOW!" So Michael offered them a one-day turn-around on
> bugs (or something like that) for $20,000/year (or something like
> that). He told them he'd distribute the bug-fix to everyone free of
> charge, but they didn't care: they wanted to know THEIR programmers
> wouldn't get hung up on bugs, so it made business sense.

Already well ahead of you on that one. Worst comes to worst and I 
fail to attract a penny - well, then it's time to GPL the lot and 
pray I can make it into something I can consult on in the future.

> >However, I honestly don't know. I look at COM and I think my solution
> > is likely to require less overhead. I won't know until I have
> >working code to benchmark. I will say though I have gone to great
> >lengths to optimise the system.
> 
> Think about that - it might be worthwhile to create a few "sample
> apps" and a few "sample app programmers," that way you "start with the
> end in mind." I'm sure you have a few sample apps already, so I'm
> talking about some things like COM-ish things, XML/web services, etc.

Yeah I threw together a JPEG=>Image converter to test the 
"funability" of writing for my project. I'm a big advocate of test 
suites, so a few of those will be written.

A question your expertise may be able to answer: is there a non-GPL 
portable Unix shell because I've looked *everywhere* and can't find 
one? I would like to modify it to use my project's namespace as the 
filing system instead. Failing this, I'll take a copy of Flex and get 
it to spit out some parser C++ from which I'll make a simple shell.

The reason I need one is that a shell will be invoked a lot. I've 
borrowed off RISC-OS the concept that data type associations, icons 
etc. should be built dynamically instead of using any kind of 
registry or central database. The shell will coordinate these 
actions.

Failing that, how about a free non-GPL functional language which 
isn't a pain to use? I've experimented with Hugs (a Haskell 
interpreter) but Haskell is too esoteric for me.

Cheers,
Niall




From: "Marshall Cline" <xxxxx@xxxxxxxxx.xxx>
To: "'Niall Douglas'" <xxx@xxxxxxx.xxx>
Subject: RE: Comments on your C++ FAQ
Date: Sat, 3 Aug 2002 03:28:21 -0500

Niall Douglas wrote:
>On 1 Aug 2002 at 3:29, Marshall Cline wrote:
>
>>However what I'm talking about is something different. I really need
>>to express the core idea more clearly. The idea of extensibility is
>>usually achieved by structuring your code so 99% (ideally 100%) of
>>your system is ignorant of the various derived classes. In
>>particular, pretend 'ABC' is an abstract base class with pure virtual
>>methods but no data and no code, and pretend 'Der1', 'Der2', ...,
>>'Der20' are concrete derived classes that inherit directly from 'ABC'.
>> The goal is for 99% of the system to NEVER use the token Der1, Der2,
>>..., Der20 --- to pass all these objects as 'f(ABC* p)' or 'g(ABC&
>>x)'.
>
>That's actually what I meant previously about being able to replace a 
>class in a hierarchy with a completely different one and have little 
>if no knock-on effects.
>
>>There are other problems with deep hierarchies, most especially the
>>reality that they often result in "improper inheritance," and that
>>causes unexpected violations of the base class's contract which
>>ultimately breaks the code of "the vast majority" of the system. This
>>proper-inheritance notion is the same as require-no-more,
>>promise-no-less, which you basically didn't like :-(
>
>No, I didn't like the /phrase/, not its meaning. I understood the 
>meaning three emails ago.

Regarding "three emails ago," we seem to have had a small communication
problem. I re-explained things after you already "got it," and I
apologize for frustrating you that way. I certainly did not want to
imply you are stupid or thick-headed or something, since it is quite
clear (to me, anyway) that you are not.

However I think we both played a part in this communication problem.
For example, when I first explained the "require no more and promise no
less" idea in my previous email, you replied, "This is quite ephemeral
and subtle..." Although it is clearly subtle at times, I see it as the
opposite of ephemeral, and since, perhaps as a back-handed compliment to
you (e.g., "I *know* this guy is bright, so if he thinks it is
ephemeral, I must not have explained it very well"), I re-explained it.

There have been several other times throughout our conversation when you
said something that made me think, "He still doesn't see what I'm
seeing." I see now I was wrong, so I'm not trying to convince you that
you don't get it. I'm simply trying to help you see why I re-explained
things too many times.

For example, when you explained that you had already checked to make
sure 'prepend()' and 'append()' were not called within the base class's
code, and that that gave you confidence there wouldn't be any errors
resulting from your redefining those methods in TSortedList, I thought
to myself, "He doesn't get it yet; checking the base class itself is
necessary but not sufficient." So I (erroneously) explained it again.

Similarly when you said, "One of the things I look for when designing my
class inheritances is whether I could say, chop out one of the base
classes and plug in a similar but different one." This is a very
strange concept to me, and I don't think it makes sense when inheritance
is used as I have described, so I (erroneously) thought you didn't get
it and explained it again. :-)

Clearly *some* of the things you said made it seem like you got it, but
hopefully you can see how I (perhaps misinterpreted) others in a way
that made you seem like you didn't get it. As I said before, I'm not
accusing you of not getting it. I'm simply explaining why I erroneously
thought you didn't get it, and why I therefore explained it again (and
again ;-).

Put it this way: stupid people say random things. They are incapable of
coming up with a cohesive perspective on complex things, so their
statements are often inconsistent with each other. You said some things
that (I thought!) were inconsistent with each other, but you're not
stupid (or if you are, you sure fooled me ;-) If I had thought you were
stupid, I probably would have politely ended the conversation and
quietly written you off as a lost cause (sorry if that is condescending,
but we both have better things to do with our lives than pour hours and
hours into people we can't actually help). So instead of writing you
off, I figured, "Just one more email and he'll *really* get it!"

Hopefully that explains why (I believe) we both played a role in me
being a broken-record. And also, hopefully it shows you that I didn't
repeat myself because I thought you were dumb, or because I was dumb,
but instead because I thought you were bright enough to get it, and if
you saw it from just one more vantage point, you'd get it.

Okay, so you get it. Now we can move on!


>>These last two paragraphs aren't describing a new model, but are
>>trying to give a couple of insights about the model I've been
>>describing all along.
>
>You don't seem to believe me when I say I've integrated your wisdom! 
>Trust me, I absolutely 100% understand what you have taught me - I 
>learn quickly!

Okay, I believe you now. Hit me with a big enough hammer and I stop
repeating myself. I think.


>>>>A derived class's methods are allowed to weaken requirements
>>>>(preconditions) and/or strengthen promises (postconditions), but
>>>>never the other way around. In other words, you are free to
>>>>override a method from a base class provided your override requires
>>>>no more and promises no less than is required/promised by the method
>>>>in the base class. If an override logically strengthens a
>>>>requirement/precondition, or if it logically weakens a promise, it
>>>>is "improper inheritance" and it will cause problems. In
>>>>particular, it will break user code, meaning it will break some
>>>>portion of our million-line app. Yuck.
>>>>
>>>>The problem with Set inheriting from Bag is Set weakens the
>>>>postcondition/promise of insert(Item). Bag::insert() promises that
>>>>size() *will* increase (i.e., the Item *will* get inserted), but
>>>>Set::insert() promises something weaker: size() *might* increase,
>>>>depending on whether contains(Item) returns true or false. 
>>>>Remember: it's perfectly normal and acceptable to weaken a
>>>>precondition/requirement, but it is dastardly evil to strengthen a
>>>>postcondition/promise.
>>>
>>>This is quite ephemeral and subtle stuff. Correct application appears
>>> to require considering a lot of variables.
>>
>>I probably didn't describe it well since it's actually quite simple. 
>>In fact, one of my concerns with most software is that it's not soft,
>>and in particular it has a large ripple effect from most any change. 
>>This means a programmer has to understand how all the pieces fit
>>together in order to make most any change. In other words, the whole
>>is bigger than the sum of the parts.
>
>Well this is typical of any increasingly complex system - more and 
>more, it is less the mere sum of its parts.
>
>>The idea I described above means that the whole is *merely* the sum of
>>the parts. In other words, an average (read stupid) programmer can
>>look at a single derived class and its base class, and can, with total
>>and complete ignorance of how the rest of the system is structured,
>>decide unequivocally whether the derived class will break any existing
>>code that uses the base class. To use another metaphor, he can be
>>embedded in a huge forest, but he can know whether something new will
>>break any old thing by examining just one other leaf.
>
>That's an admirable desire, but do you think it's really possible? 

Yes, in the sense that I've seen big projects get 80% or 90% of the way
there. The key, of course, is to find low-budget ways to get the first
80% of the value, and given the 80/20 rule, that turns out to be
possible. I call it the low-hanging fruit. It's not perfect, but it's
much better than if we go with the status quo.

The low-hanging fruit involves just a few disciplines, including
"programming by contract" (you may be familiar with this; if not, sneak
a peek at chapters 1-4 of Bertrand Meyers's book, Object-Oriented
Software Construction; or I could explain it to you) and it requires
design reviews where the contracts in base classes are carefully
evaluated based on some pretty straightforward criteria. Since many
programmers don't have self-discipline, even when it would be in their
own best interest, project leaders must enforce the above by putting
them into the project's process. In the end, these things actually will
happen if they get tracked and reviewed (read "enforced") by management,
and they really do contribute a great deal toward the above goal.

There are a few other design and programming ideas I use to help achieve
that goal (more low-hanging fruit). For example, simple (but often
overlooked) things like creating an explicit architecture with explicit
APIs in each subsystem, wrapping the API of each OO subsystem so the
other subsystems can't "see" the OO subsystem's inheritances or how it
allocates methods to objects (in my lingo, so they can't see it's
design), etc.


>If 
>I've learned anything from quantum mechanics and biology, it's that 
>there will *always* be knock-on effects from even the tiniest change 
>in any large system. Good design and coding is about minimising 
>those, 

Agreed: bad design has a huge ripple effect, which can be thought of as
chaos (a tiny change in one places causes a very large change somewhere
else).


>but as you've mentioned before all you need is one bad 
>programmer to muck it all up.

All I can say for sure is that that doesn't seem to be a problem in
practice. Perhaps the added "process" smooths out the effects of the
bad programmers, I don't know. May be that's another argument in favor
of pairing programmers (though I can only make that argument
theoretically: since I haven't done a project with pair programming, I
can't say that the above disciplines are necessary and/or sufficient to
achieve the goal of the-whole-is-merely-the-sum-of-the-parts).


>>I rather like that idea since I have found the average programmer is,
>>well, average, and is typically unable or unwilling to understand the
>>whole. That means they screw up systems where the whole is bigger
>>than the sum of the parts -- they screw up systems where they must
>>understand a whole bunch of code in order to fully know whether a
>>change will break anything else.
>
>Hence the usefulness of pairing programmers.

Perhaps you're right. I always worry about know-it-all programmers who
insist on their own way even when they're wrong, but like I said, that's
more of a theoretical concern about pair programming since I haven't
experienced it in practice.


>>I call this the "middle of the bell curve" problem. The idea is that
>>every company has a bell curve, and in the middle you've got this big
>>pile of average people, and unfortunately these average people can't
>>handle systems where the whole is bigger than the sum of the parts.
>>That means we end up relying on the hot-shots whenever we need to
>>change anything, and all the average guys sit around sucking their
>>thumbs. I think that's bad. It's bad for the hot-shots, since they
>>can never do anything interesting - they spend all their time fighting
>>fires; it's bad for the average people since they're under utilized;
>>and it's bad for the business since they're constantly in terror mode.
>
>OTOH, as many BOFH's know, enhancing a company's dependence on you 
>increases your power. Right at the start those expert's under whose 
>wing I was which I mentioned, they would do things like turn up late 
>when they felt like it and declare their own vacation time with about 
>eight hours notice. I must admit, I've used my own hot-shot status 
>occasionally as well - while I don't like the consequences of it 
>professionally, it's an easy vicious circle to fall into.

Yes, but I've often found the hot-shots willing to go along with this
approach. I've never tried to fix the 8-hour-notice-for-vacation
problem, and probably couldn't if I tried, but I have had success with
the particular situation of helping empower the
middle-of-the-bell-curve.

Perhaps that success has been because I usually present some of these
ideas in front of a group/seminar, and I usually (verbally) beat my
chest and say something like, "Anybody who hurts their company to
benefit themselves is unprofessional and deserves to be fired."
Basically I shame the hot-shots into realizing they can't hold the
company hostage just for their own personal job security. In fact, I'll
be giving that sort of speech at a development lab of UPS on Tuesday,
and I'll probably mention the recent corporate scandals. If I do, I'll
talk about those disreputable people who hurt the stockholders in order
to line their own pockets. Nobody wants to be compared to those guys,
which brings out a righteous streak in everybody (including the
hot-shots).

(Sometimes, when I really need to bring out a big hammer, I compare the
"wrong" attitude to terrorism. It's a stretch, but I'm pretty good in
front of a crowd. The key insight is that nobody should be allowed to
hurt their company just to help themselves. For example, I say
something like, "If you hold company assets in between your earlobes,
they don't belong to you - they belong to the company. If you refuse to
write them down just to protect your own job, you're no better than a
terrorist, since you're basically saying, "Give me my way or I'll hurt
you." Amazingly I haven't been lynched yet.)

Like I said, I doubt any of this saber-rattling actually changes
people's hearts, and therefore it won't change the guy who gives 8 hours
notice before vacation, but it does seem to force the hot-shots into
supporting the plan, and invariably they buy-in.


>>If we don't solve the middle-of-the-bell-curve problem, why have we
>>bothered with all these fancy tools, paradigms, etc.? In other words,
>>the hot-shots could *always* walk on water and do amazing things with
>>code, and they don't *need* OO or block-structured or GUI builders or
>>any other fancy thing. 
>
>No, that's not true. If collected some hot-shots together and wrote 
>say X ground-up in assembler - yes, absolutely, it could be done and 
>done well but in productivity terms it would be a disaster.

You're probably right about what you're saying, but that's a totally
different topic from what I was *trying* to say. I'm not talking about
productivity differences (either between those who use assembler vs.
high-level tools or between the hot-shots and the dolts). I'm talking
about the ability to understand the whole ripple effect of a change in
their heads. In other words, the *average* architect is broad but
shallow (they know a little about the whole system, but often don't know
enough to reach in and change the code), and the *average* "coder" is
deep but narrow (they have deep understanding of a few areas within the
system, but most can't tell you how all the pieces hold together - they
don't have the breadth of an architect). But knowing the full ripple
effect of a change to the system often requires both broad and deep
knowledge that very few possess. In one of my consulting gigs, there
was a guy (Mike Corrigan) who was both broad and deep. Management
called him a "system-wide expert." Everyone believed he could visualize
the entire system in his head, and it seemed to be true. It was a huge
system (around two million lines below a major interface, and around 14
million lines above that). When someone would propose a change, he
would go into never-never land, and perhaps a week later he would
explain why it couldn't be done or would provide a list of a dozen
subsystems that would be effected.

When I wrote the above, I was visualizing the Mike Corrigans of the
world. After they get enough experience with a system, they are both
deep and broad, and in particular, they can see the entire ripple effect
in their heads. That's what I was trying to say (though you can't see
the connection without seeing the paragraph prior to the paragraph that
begins, "If we don't solve the middle-of-the-bell-curve problem...")


>>So if we don't solve the
>>middle-of-the-bell-curve problem, we might as well not have bothered
>>with all this OO stuff (or you could substitute "data centric stuff"),
>>since we really haven't changed anything: with or without our fancy
>>languages/techniques, the hot-shots can handle the complexity and the
>>average guys suck their thumbs.
>
>No, I'd have to disagree with you here. In many many ways the modern 
>task of software engineering is harder than it was in the 1980's. At 
>least then, you wrote your code and it worked. Nowadays, it's a much 
>more subtle task because your code depends directly on millions of 
>lines of other people's code, much of which wasn't written with a 
>single purpose in mind. I know I've wasted days on stupid problems 
>with undocumented malfunctioning - and I can only imagine for the 
>less technically able (does the phrase "horrible nasty workaround" 
>come to mind?)

Perhaps you would agree with this: In most companies, those in the
middle-of-the-bell-curve have a difficult time reliably changing large
systems. Companies that solve this problem will be better off.


>I'll put it this way: there is a definite problem in modern software 
>engineering with documentation. That game I wrote for DirectX had me 
>pounding my head for days because of some of the worst docs I have 
>seen in recent times. Unix in general is even worse - you get your 
>man or info pages which vary widely in quality. AFAICS they're not 
>making the guys who write the code write the documentation, and 
>that's bad.

(If you're already familiar with programming-by-contract, you can ignore
this.)

This is an example of programming-by-contract, by the way, only with
programming-by-contract, we normally don't think of the software being
used as if it was shipped by a third party. I.e., in
programming-by-contract, I would write a good contract / specification
for a function I wrote, even if that function was going to be used only
by my department.

So in a sense, programming-by-contract is a small-granular version of
what you'd like to see from these third-party vendors.


>I'll just mention RISC-OS had fantastic documentation (even with 
>custom designed manuals which automatically perched on your lap). 
>It's a difference I still miss today, and it's why my project has 
>excellent documentation (I wrote a lot of it before the code).

Bingo. Good stuff. I do the same thing: write the specs as best I can
ahead of time, and invariably add to them as I get further into the
project, and sometimes (carefully) changing what I wrote when it was
wrong or inconsistent (or when I simply discover a better way).

When "selling" companies on this idea, I even use a different term than
documentation, since most people think of documentation as something you
write that *describes* what the code already does. But these
contracts/specifications *prescribe* what the code *should* do, hence
the different name: contracts or specifications.


>>>No, I've got what you mean and I understand why. However, the point
>>>is not different to what I understood a few days ago although I must
>>>admit, Better = Horizontal > Vertical is a much easier rule of thumb
>>>than all these past few days of discussion. You can use that rule in
>>>your next book if you want :)
>>
>>I've seen hierarchies with up to five levels, although all but the
>>very last were abstract base classes with almost no data or code. So
>>again, to me the real culprit is improper inheritance and/or
>>inheritance from a data structure. The former breaks user code and
>>the later can cause performance problems (creates a ripple effect when
>>we try to improve performance by changing to a different data
>>structure).
>
>What would your thoughts be then on Qt, which does make some use of 
>data, more data, some more data; in its class hierarchies? 

Don't know enough about Qt to comment.

Generally speaking, good C++ (or Java or Eiffel) class hierarchies are
short and fat with very little data in the base classes. The
fundamental reason for this is that these languages exhibit an assymetry
between data and code. In particular, they let a derived class replace
some code in the base class (virtual functions), but they don't let a
derived class do the same with data (they don't have "virtual data").
Once a base class has a certain data structure, all derived classes
forever and ever are forced to have that data structure. If a given
derived class doesn't "want" that data structure, it has to carry it
around anyhow, and that typically makes it bigger or slower than
optimal.

For example, suppose we have a hierarchy representing shapes on a 2D
Euclidean plane, with class Square inheriting from Rectangle inheriting
from Polygon inheriting from Shape. Assume Shape is abstract with no
data whatsoever, but suppose we decide to make Polygon concrete. In
this case a Polygon would have some sort of list of x-y points, say a
GList<Point>. Unfortunately that would be a very bloated representation
for a Rectangle, since all we really need for a rectangle is a single
point, a width, a height, and perhaps a rotation angle. It's even worse
for Square, since Square needs only a single point plus a width (and
possibly a rotation angle).

This illustrates another peculiarity about OO, which is not typically
discussed, as far as I know. The peculiarity is that the data hierarchy
and the is-a hierarchy go in opposite directions. Put it this way:
inheritance lets you *add* data to whatever was declared in the base
class, but in many cases you really want to subtract that data instead.
For example, take Square inheriting from Rectangle. From a data
perspective (that is, when we think of inheritance as a reuse
mechanism), it makes a lot more sense to inherit the thing backwards,
after all, Rectangle (the derived class) could simply augment its base
class's 'width_' datum with its own distinct 'height_' datum, and it
could then simply override the 'height()' method so it returns 'height_'
instead:

class Square {
public:
Square(double size) : width_(size) { }
virtual double width() const { return width_; }
virtual double height() const { return width_; }
virtual double area() const { return width() * height(); }
virtual void draw() const { /*uses width() & height()*/ }
private:
double width_;
};

// This is backwards from is-a, but it
// makes sense from data and reuse...

class Rectangle : public Square {
public:
Rectangle(double width, double height)
: Square(width), height_(height) { }
virtual double height() const { return height_; }
private:
double height_;
};


Obviously this is backwards semantically, and is therefore bad based on
everything we've discussed forever (which you already get; I know). But
it is interesting how the "inheritance is for reuse" idea pushes you in
exactly the wrong direction. It is also interesting (and somewhat sad)
that the proper use of inheritance ends up requiring you to write more
code. That would be solvable if today's OO languages had virtual data,
but I'm not even sure how that could be implemented in a typesafe
language.

I actually do know of another way to have-your-cake-and-eat-it-too in
the above situation. Too late to describe this time; ask me someday if
you're interested. (And, strangely enough, although C++ and Java and
Eiffel don't support this better approach, OO COBOL of all languages
does. So if anybody ever asks you which is the most advanced OO
language, tell them COBOL. And snicker with an evil, Boris Karlov-like
laugh.)


>>For example, they could cause the list's contents to become
>>unsorted, and that could screw up TSortedList's binary search
>>algorithms. The only way to insulate yourself from this is to use
>>has-a or private/protected inheritance.
>
>Surely private or protected inheritance affects the subclass only? 
>ie; you could still pass the subclass to its base class?

I have no idea what you're asking - sorry, I simply can't parse it.

Here's the deal with private/protected inheritance. Where public
inheritance "means" is-substitutable-for, private and protected
inheritance "mean" has-a (AKA aggregation or composition). For example,
if TSortedList privately inherited from GList, then it would be
semantically almost the same as if TSortedList had-a GList as a private
member. In both cases (private inheritance and has-a) the methods of
GList (including the now infamous prepend() and append()) would *not*
automatically be accessible to users of TSortedList, and in both cases
users of TSortedList could not pass a TSortedList to a function that is
expecting a GList (which, as you know, is the start of all our "improper
inheritance" problems).

There are a few differences between private inheritance and has-a, but
the differences are buried in the code of TSortedList. In particular,
if TSortedList privately inherited from GList, the derived class's code
(its member and friend functions) are allowed to convert a TSortedList&
to a GList& (and similarly for pointers), and TSortedList can use a
special syntax to make selected public methods of GList public within
TSortedList. For example, if there is a 'size()' method within GList
that TSortedList wants to make public, then in the public section of
TSortedList simply say "using GList<T>::size;". Note: the '()' were
omitted intentionally. See the FAQ for this 'using' syntax. In fact,
the FAQ has a whole section on private and protected inheritance - you
might want to read it since it will solve your "bad inheritance"
problem, at least with TSortedList.


>>>Embedded systems programming as we knew it is dying out. When 
>>>desktops went all powerful, a lot of us assembler guys went into tiny
>>> systems but now they've gone all powerful, it's a rapidly shrinking
>>>market. The writing is definitely on the wall - move onto OO and C++
>>>and such or else become unemployable.
>>
>>You may have already worked with hand-held systems, but if not, they
>>might be the last bastion of tight, high-tech coding. Particularly
>>hand-held systems targeted at the consumer market, since that usually
>>means the company wants to squeeze the unit cost and extend the
>>battery life. In those cases, they worry about everything. Wasting
>>memory means the thing needs more RAM or flash, and that increases
>>unit cost and reduces battery life. Similarly wasting CPU cycles
>>burns the battery up pretty fast. So in the end they want it very
>>small and very fast, and that makes it challenging/fun.
>
>No, that's not hugely true anymore.

We probably have different experience bases, since most of the companies
I've worked with have such tight coding constraints that bloatware like
WinCE are unthinkable. The folks I've worked with try to squeeze the
whole shebang, including OS, apps, data, drivers, wireless stuff,
everything, into 4MB, sometimes less. They often end up with much
smaller, proprietary OSs.


>I worked alongside the Windows CE 
>port to the ARM as well as Psion's Symbian OS and the predominant 
>view was to write it much as for a desktop. After all, handhelds will 
>get faster and have more memory just like a desktop. 

That makes sense, and once you're willing to throw enough hardware at
it, e.g., 32MB and 400MHz, you can say that sort of thing.

But in the world of, say, calculators, customers expect the thing to run
for a year on an AA battery or two. The screens on those things aren't
backlit, but they have oodles of code in them. I remember working with
an HP team that said they had half a million lines of code in one of
their calculators. It solves differential equations, plots various
kinds of graphs, is programmable, etc., etc., and the amount of memory
it has is important to them.

I have a Palm V. Doesn't do too much, has a lousy 160x160 LCD
black-and-green screen, and it came with only 4MB of memory, but it's
tiny, inexpensive, weighs only 4(?) ounces, and runs for a week between
charges. I could have purchased a WinCE box, but it's much heavier,
more expensive, and needs to be charged much more often.

Now the world is slowly moving toward higher power processors and
backlit screen, which ultimately means people will begin to expect less
and less wrt battery life. In the mean time, there will always be
companies like HP, Texas Instruments, UPS, FedEx, Brooklyn Union Gas,
etc., that are up against memory limitations, and they'll always be
fighting to add functionality without increasing hardware. I think that
stuff is fun.


>>Limbo. It's hosted within Inferno, an OS that was originally by
>>Lucent, but was sold to a UK company named Vita Nuova.
>>
>>Limbo was designed by Dennis Ritchie and some other really smart folks
>>(BTW I had a chance to talk to Dennis on the phone as a result of this
>>engagement), and everyone involved gives it glowing reviews. But like
>>I said, my client is having a hard time finding people to work with it
>>since there simply aren't that many Limbo programmers out there.
>>
>>Somewhat interesting approach. It's hosted via a virtual machine, and
>>it's compiled into a byte-code of sorts, but it's very different from
>>the stack-machine approach used by Java. It's much closer to a
>>register-machine, so the source code "a = b + c" compiles into one
>>instruction (pretend they're all of type 'int', which uses the 'w'
>>suffix for 'word'):
>>
>> addw b, c, a // adds b+c, storing result into a
>>
>>The corresponding Java instructions would be something like this:
>>
>> iload b // pushes b
>> iload c // pushes c
>> iadd // pops c then b, adds, pushes the sum
>> istore a // pops the sum, stores into a
>>
>>There are two benefits to the Limbo byte-code scheme: it tends to be
>>more compact, on average, and it's much closer to the underlying
>>hardware instructions so a JIT compiler is much smaller, faster, uses
>>less memory, and is easier to write. E.g., a Java JIT has to convert
>>all these stack instructions to a typical machine-code add, and that
>>transformation has to happen on the fly, whereas Limbo does most of
>>that transformation at compile-time.
>
>In other words, Limbo is doing a full compile to a proper assembler 
>model (which just happens not to have a processor which can run it, 
>but one could be easily designed). Java is really mostly interpreted 
>in that the source is pretty easy to see in the byte code. I've seen 
>some reverse compilers and their output is awfully similar to the 
>original - whereas no reverse compiler would have a hope 
>reconstituting C++ (or even C).

Yes Limbo does a full compile to an abstract machine. But I don't agree
with one of your implications: The fact that Java uncompilers do a
pretty good job does not mean that Java is mostly interpreted. In fact,
the real reason Java uncompilers do a pretty good job is because of all
the "meta data" Java .class files are required to carry around (they
have to contain enough meta data that "reflection" can work; e.g., Java
lets you poke around in the meta data of another class, including
determining at runtime what methods it has, the parameters/return types
of those methods, which methods are private, protected, public; etc.,
etc.). If a Limbo binary (a .dis file) contained the meta-data that a
Java .class file contained, a Limbo uncompiler could do about as good a
job as a Java uncompiler.

Agree that it would be hard to uncompile C or C++, even if there was
meta-data, since it would be much harder to guess what the original code
looked like from the instruction stream. But I think that's an artifact
of the fact that a "real" processor has very small granular instructions
and the optimizer is expected to reorganize things to make them
fast/small/whatever. In contrast, the byte-codes for Limbo and Java do
more work (e.g., both have a "call" instruction that does a *whole* lot
more than a "real" hardware call), so they expect the virtual machine
itself to be optimized for the particular hardware platform.

FYI Java has a number of processors that run it. In that case, wouldn't
the assembly language look just like Java byte-codes?


>>The language takes some getting used to, primarily because it, unlike
>>C or C++, has *no* back door to let you do things that are nasty. 
>>E.g., there are no unchecked pointer casts, there is nothing
>>corresponding to a 'void*' type, function pointers have to specify all
>>parameters exactly, there is no is-a conversion or any other way to
>>get a Foo pointer to point at a Bar object, etc. Obviously the
>>byte-code level (called "Dis") lets you do these things, but Limbo
>>itself tries to protect idiots from being idiotic. (As you might
>>guess, I had to write some Dis code for some things. That was fine,
>>of course, but it was somewhat bizarre seeing that Limbo offered no
>>alternative.)
>>
>>You might want to read their articles about CSP. See
>>www.vitanuova.com. Also check out Lucent's web site.
>
>Actually, I went and downloaded a prebuilt version for VMWare - I'm 
>sitting on its desktop right now. I must admit to being slightly 
>miffed that they've also "stolen" my idea for a unified namespace, 

What's that expression about great minds / great men stealing ideas.


>although theirs merely includes windows. They're also come up 
>remarkably with quite a few things I had thought of independently - 
>like overloading mouse button presses. I'm doing it in a way though 
>that won't scare people (unlike Plan 9)

The interesting thing to me is that Inferno is built *around* Limbo.
For example, whereas pid in Unix is a process ID, in Inferno it's a
thread-ID, and each thread runs a different .dis file. In other words,
running an application in Unix forks a process with its own address
space, but in Inferno it simply spawns a thread in the shared address
space, and runs a .dis file along with all the other .dis files that are
currently running.

I'm not doing it justice. I'll try again: Inferno is nothing but one
big Limbo interpreter. Every program in Inferno, including the shell
itself, utilities like 'ls', and all the rest, are Dis threads that are
runing in the same address space by the same Dis "virtual machine." The
shared address space thing seems horribly error prone, and perhaps it
is, but you've already seen how namespaces can be trimmed for individual
applications ("threads"), so apparently that's how they keep these
different apps separated. It seems like a slick, lightweight idea.

(BTW you mentioned Plan 9. Is that what you got? Or did you get
Inferno? I've been discussing Inferno. I believe Inferno stole the
namespace idea from Plan 9, but I don't know that Plan 9 is built around
a Dis virtual machine the way Inferno is. Also I'm not sure about Limbo
on Plan 9.)


>Maybe I should go apply for a job at Bell Labs? Nah, that US visa 
>thing getting in the way again ...

Vita Nuova is in the UK, and they own the rights to Plan 9 and Inferno.


>>(BTW if you correspond with Vita Nuova about Limbo, please don't say,
>>"Marshall Cline said..." since I don't want to cause any hurt feelings
>>by calling their baby ugly. Thanks.)
>
>Heh, I am the master of discretion in these matters! I'll even modify 
>the online posting I was going to do of this commentary so those 
>sections are removed.

Thanks.


>Thanks for pointing me towards Plan 9, I wouldn't have known my ideas 
>are so agreed upon by (eminent) others without it!

You're famous and you didn't know it.


>>Note that your 'DerivedString' has a ctor that takes a (const
>>BaseString&), which is not a copy ctor. Is that intentional?
>>
>>It seems very strange to me that QString would have an operator= that
>>takes a (const char*), but not one that takes a (const QString&). If
>>it really takes both, you might want to add them both.
>>
>>Basically I'm curious and frustrated that I don't understand this one.
>>If you're willing, keep adding signatures from QString/TQString to
>>BaseString/DerivedString until the latter breaks. I'd be thrilled if
>>you can chase this one down, but I'll obviously understand if you
>>can't. (I *hate* irrational errors, because I'm always afraid I've
>>missed something else. Like your "template<class type>" bug, adding a
>>pointer cast made the error go away, but I don't think either of us
>>were comfortable until we found the real culprit.)
>
>I did play around some more with that but couldn't replicate the 
>error. Unfortunately, I added casts to the six or so errors and now I 
>can't find them anymore (I really need to put this project into CVS). 
>So I am afraid it's lost for the time being - sorry.
>
>Furthermore, I found why << and >> weren't working. It seems they 
>didn't like being concantated eg; ds << keyword << metadata where 
>keyword was a QString and metadata a struct with public operator 
>overloads designed to stream it. I understand now QString doesn't 
>know what a TQDataStream is, so it was implicitly casting up and then 
>my metadata struct couldn't handle the output QDataStream instead of 
>TQDataStream.

Of course! The expression (out << xyz) usually returns a stream
reference that refers to 'out' itself, that way they can be cascaded,
e.g., out << xyz << pqr << abc. But if out is a base-class stream, then
the type of (out << xyz) is base-class-stream-reference, and then the
'<< pqr' part won't make sense to the compiler unless it too is defined
on the base-class-stream.

If your << functions can work with the base-class-stream on the
left-hand-side, then change your << methods to friend functions and
insert an explicit left-hand-side to be base-class-ref. E.g., change
this *member* function:

TQDataStream& operator<< (const Foo& xyz);

and its corresponding definition:

TQDataStream& TQDataStream::operator<< (const Foo& xyz)
{
...
return *this;
}

to the following friend function declaration:

friend QDataStream& operator<< (QDataStream& s, const Foo& xyz);

and its corresponding definition:

QDataStream& operator<< (QDataStream& s, const Foo& xyz)
{
...
return s;
}


>>>Ok, I'm interested now. You can point me at a webpage if one exists.
>>
>>No prob. To make sure we're on the same page, let's be explicit that
>>all the 'foo()' functions take the same parameter list, say an 'int'
>>and a 'double', so the only difference is their return types. I'll
>>first rewrite the "user code" using these parameters:
>>
>> void sample(int a, double b)
>> {
>> int i = foo(a, b);
>> char c = foo(a, b);
>> float f = foo(a, b);
>> double d = foo(a, b);
>> String s = foo(a, b);
>> }
>>
>>The rules of the game are simple: if we can get a totally separate
>>function to get called for each line above, we win.
>>
>>The solution is trivial:
>>
>> class foo {
>> public:
>> foo(int a, double b) : a_(a), b_(b) { }
>> operator int() const { ... }
>> operator char() const { ... }
>> operator float() const { ... }
>> operator double() const { ... }
>> operator String() const { ... }
>> private:
>> int a_;
>> double b_;
>> };
>>
>>QED
>
>Let me get this: you're overloading the () operator yes? 

Close. It's called the cast-operator. It's used in cases when you want
to allow your object of class Foo to be converted to an object of class
Bar. For example, the statement:

if (cin >> xyz) {
...
}

says that (cin >> xyz) must be some sort of boolean-ish thing. But we
know the type of (cin >> xyz) is 'istream&', since we know that these
can be cascaded, e.g., cin >> xyz >> pqr >> abc. So how could
'istream&' appear in a boolean context? Answer: the cast operator.
'istream' has a method called 'operator boolean() const' that returns
'true' if the stream is in a good state (meaning it didn't dected any
errors in preceding input operations), or 'false' otherwise. (The
actual operator is 'operator void*() const' or something like that,
since that prevents someone from accidentally saying, for example, 'int
x = cin').

So basically what happens is this: the expression 'foo(a, b)' is just
like 'String("xyz")': it constructs a temporary, unnamed object of class
'foo' by storing the values 'a' and 'b' within the object itself. Then
that function-like object is converted to an int, char, float, double,
or String, and the conversion is what triggers the right 'operator
<type>()' function to be called. Finally the 'foo' object is
destructed, which in this case does nothing.


>In which 
>case, that's quite ingenious. I'm not sure it would prove regularly 
>useful though - seems too roundabout a solution except in quite 
>specific instances.

It's an *idiom*. Idioms are always more roundabout than features that
are directly supported in the language proper. But there's no need for
a language feature since there's an easy to use idiom that does the job
quite well.

For example, there was talk in the C++ committee of adding a new keyword
'inherited' or 'super', so that single-inheritance hierarchies, which
are quite common, could access their base class's stuff without having
to name the base class explicitly. In other words, if class 'Der' is
derived from class 'Base', and 'Der::f()' wants to call 'Base::f()', it
would be nice if 'Der::f()' could say 'super::f()' or 'inherited::f()'
instead of having to name the base class explictily, e.g., 'Base::f()',
since 'super::f()' would reduce cut-and-paste errors when copying from a
different class, and is generally simpler.

However someone mentioned an idiom that allows programmers to do just
that, and the proposal/discussion died instantly. The idea is for 'Der'
to add this line:

class Der : public Base {
typedef Base super; <===***
public:
...
};

That's more roundabout, but it gets the job done. Same with my
"overloaded return type idiom."


>>>I personally would probably have had it use static typing when it
>>>could, but when the compiler didn't know it would complain unless you
>>> added a modifier to say it was a dynamic cast - then the check gets
>>>delayed till run time. As it happens, surely that's happened anyway
>>>(albeit relatively recently) with dynamic_cast<>().
>>>
>>>My point is, it could have been made possible to utilise the best of
>>>both worlds but with a bias toward static typing.
>>
>>I think your goal is admirable. However if you think a little deeper
>>about how this would actually get implemented, you would see it would
>>cause C++ to run much slower than the worst Smalltalk implementation,
>>and to generate huge piles of code for even trivial functions. E.g.,
>>consider:
>>
>> void foo(QString& a, QString& b)
>> {
>> a = "xyz" + b;
>> }
>>
>>Pretend QString's has a typical 'operator+' that is a non-member
>>function (possibly a 'friend' of QString). It needs to be a
>>non-member function to make the above legal. Pretend the signature of
>>this 'operator+' is typical:
>>
>> QString operator+ (const QString& x, const QString& y);
>>
>>Thus the 'foo()' function simply promotes "xyz" to QString (via a
>>QString ctor), calls the operator+ function, uses QString's assignment
>>operator to copy the result, then destructs the temporary QString.
>
>That's how it works currently, yes.
>
>>However if your relaxed rules above let someone pass things that are
>>not a QString (or one of its derived classes) for 'a' and/or 'b',
>>things are much worse. (And, unfortunately, if your relaxed rules do
>>not allow this, then I don't think you're getting much if any
>>advantage to your relaxed rules.)
>>
>>In particular, if 'a' and/or 'b' might not be QString objects, the
>>compiler would need to generate code that checked, at run-time, if
>>there exists any 'operator+' that can take a 'char*' and whatever is
>>the type of 'a' (which it won't know until run-time). Not finding
>>one, it would search for valid pointer conversions on the left, e.g.,
>>'const char*', 'void*', 'const void*'. Not finding any of those, it
>>would also search for any 'operator+' that takes the type of 'b' on
>>the right. Finally, if we assume 'b' actually is a QString, it would
>>find a match since it could promote the type of 'b' from 'QString&' to
>>'const QString&' (that's called a cv-conversion).
>
>Firstly, I was thinking that the compiler would produce an error 
>without a special keyword which limits the overall possibilities of 
>casting ie; a strong hint to limit the total number of varieties. 
>Hence then much of the above searching is unnecessary.

I may not understand what exactly you were thinking for your special
keyword, but was assuming you would want to add it at the point where
the funny cast was made, not in the declaration of the function proper.
In other words, when you want an Xyz* to point at a Pqr object, and when
those types are not related through inheritance, then at that very
instant you need to tell the compiler, "I know what I'm doing, it's
okay, this pointer will simply use more dynamic type-checking."

If my assumption was similar to your thinking, then the function itself
would have no way of knowing whether it was being called by one of those
funny pointers/references, or a normal pointer/reference. In fact, if
it was called from 10 places, it might be a mixture of the two. That's
why I thought the function would need to make all those extra checks.

The more I think about it, the more I think my way is the most intuitive
way. It wouldn't even make sense to decorate the function itself, after
all, what criteria could a programmer ever use when creating a function
for knowing whether to add the keyword that says, "This parameter is
allowed to be something wildly different, in which case we'll use
dynamic type checking on this parameter." I honestly don't think I
myself could know when to use that keyword and when not (if the keyword
decorated either the function as a whole or the individual parameters of
a function).

What if my little foo() function is called from 10 places and only one
passes a reference that needs dynamic type-checking? Do we use it then?
Seems to me that we have to. But what if all 10 use the normal C++
approach, then someday someone comes along and wants to do something
dynamic. Must we really go back and change the function's declaration?
What if the function is shipped in a library from a third party? Should
they use the "dynamic type checking" keyword "just in case"?

Again, I may be wrong about what you were thinking. But I *bet* if you
forced the keyword to go into the function declaration itself, then
there would be all sorts of other problems.

Oh yea, what about function pointers? Function pointers have *types*.
If you put the keyword in the function or parameter list, then you
presumably have to change the *type* of the function pointer, otherwise
someone could accidentally have a static-typing-only function-pointer
that pointed at a dynamically-typed function, and vice versa. Would
that cause problems?

And if the function itself or its parameters are decorated, what about
allowing function overloading based on that tag? E.g., what if you
wanted two foo() functions, one that takes a statically-typed QString&
and the other takes a dynamically-typed QString&.

And what about when the 'this' pointer itself is supposed to be
dynamically typed? Do we add another keyword out where 'const' goes?
(After the paramter list?) And would it be possible to have method
overloading based on that just like we can have method overloading based
on const?

(Some of these are very subtle and require a lot of compromise, and all
you need is for one of these to "not work out right" and you have a
worse wart than we already have. Adding language features like this is
much harder than it looks, as I'm sure you know.)


>>However it's not done yet. To make the only candidate 'operator+'
>>work, it has to try to convert the left-hand parameter from 'char*' to
>>whatever is on the left-side of the 'operator+' (which it would
>>discover at run-time to be 'const QString&'). Eventually it will
>>discover this can be done in three distinct steps: promote the 'char*'
>>to 'const char*', call the QString ctor that takes a 'const char*',
>>then bind a 'const QString&' to the temporary QString object. Now if
>>finally has enough information to call 'operator+'.
>>
>>But it's still not done, since it then has to perform even more steps
>>searching for an appropriate assignment operator. (Etc., etc.)
>>
>>BTW, I've greatly simplified the actual process for function and
>>operator overloading. In reality, the compiler (and, under your
>>scheme, the run-time system) is required to find *all* candidate
>>operators that can possibly match the left-hand-side, and all that can
>>possibly match the right-hand-side, then union them and get exactly
>>one final match (there's some paring down as well; I don't remember
>>right now). The point is that it's nasty hard, and will require a
>>nasty amount of code.
>
>I think actually your point is that doing this requires duplication 
>of effort - compile-time and run-time and the two don't quite mesh 
>together perfectly.

Yes, duplication of effort. And duplication of code-size, which means a
little function could grow huge since it has to do almost as much work
as the compiler had to do when compiling the thing. And duplication of
CPU cycles, which means a little function will go much, much, much
slower.

I think it would be tantamount to running C++ as a language that
interprets its expressions every time control passes over them. (Much
slower than, say, Java, since that typically compiles things down to a
reasonably fast byte-code.)


>Ok, fair enough. Still, out of the OO languages I know, they seem to 
>strongly tend towards either static or dynamic with no attempts to 
>run a middle route. I probably am saying this out of ignorance 
>though.

There certainly aren't *many* in the middle. CLOS (Common Lisp Object
System) was one.

BTW want to talk about what's missing from C++ and Java and the rest,
CLOS has something very slick called "multi methods." Basic ideas is
this: in C++, the expression 'a.f(b)' uses dynamic binding on the type
of object referred to by 'a', but *not* based on the type of object
referred to by 'b'. This is another assymetry: why is the 'this' object
"special" in that way?

There really is no good answer, and CLOS solved it by saying you could
use dynamic binding on both (or "all N") parameters, not just the 'this'
paramter. For example, consider a hierarchy of Number, including
Integer, Double, Rational, InfinitePrecisionReal, BigNum, etc.
Everything seems great until you try to define "multiply." There are N
classes, so there are O(N^2) different algorithms, e.g., Integer*Double
uses different binary code from Rational*Integer, etc. And how to you
dispatch dynamically on those N^2 algorithms? You can't use
'a.multiplyBy(b)' since that will dynamically dispatch based on the type
of 'a' alone: there are only N different choices.

CLOS had a direct solution: define your N*N functions, and let CLOS
figure it out at runtime. It wasn't super fast, and the rules were
pretty involved (e.g., in tall hierarchies, you can have "close" and
"not so close" matches; what if one of your functions matches parameter
#1 closely and #2 not so close, and another function is the opposite;
which do you choose?) But it worked, and it was useful, at least in
some cases.

Here's another motivating example: suppose you had a hierarchy of N
Shapes, and you wanted to define a method called "equivalent()", e.g.,
'a.equivalent(b)'. The meaning of 'equivalent' was that the shapes
*appeared* the same. That means an Ellipse could be equivalent to a
Circle, but not to a Rectangle (except when both are zero-sized). A
Polygon could be equivalent to a Square or Triangle, etc. Seems
reasonable until you actually try to write that sucker. How can you get
N*N algorithms if you can only dispatch on the object to the left of the
".".

If you're interested, I can show you another idiom (emphasis) that lets
you do this in C++ and Java.


>>C++ is not D = we can't add rules that cause legal C programs to
>>generate compile errors unless there is a compelling reason to do so.
>
>I'm not seeing that this would.
>
>>What would happen with this:
>>
>> void foo(char* dest, const char* src)
>> {
>> strcpy(dest, src);
>> }
>>
>>Or even the simple hello-world from K&R:
>>
>> int main()
>> {
>> printf("Hello world!\n");
>> return 0;
>> }
>>
>>Would those generate an error message ("No version of
>>'strcpy()'/'printf()' returns 'void'")?
>
>Only if there is another overload. If there's one and one only 
>strcpy(), it gets called irrespective of return just like C. 

FYI there are two 'strcpy()'s - one with a const and one without.

>If 
>there's more than one, it uses the void return otherwise it generates 
>an error (without a cast).
>
>>* If they would cause an error, we break too much C.
>>* If they don't cause an error, we jump from the frying pan into the
>>fire: if someone later on created a version of those functions that
>>overloaded by return type, all those calls would break because
>>suddenly they'd all start generating error messages ("missing
>>return-type cast" or something like that). In other words, the
>>programmer would have to go back through and cast the return type,
>>e.g., (int)printf(...) or (char*)strcpy(...).
>
>No I think my solution preserves existing code.

I don't think so. I'll explain my second bullet with an example.
Suppose you have a function
int foo(int x);

Someone writes a million lines of code using foo(int), and a lot of the
time they ignore the return value, e.g., like how most people call
'printf()'.
foo(42);

Then later someone creates this function:
double foo(int x);

I believe your rules cause all those calls to
foo(42);
to generate an error message.


>>Adding a return-type-overloaded function wouldn't *always* cause an
>>error message, since sometimes it would be worse - it would silently
>>change the meaning of the above code. E.g., if someone created a
>>'void' version of printf() or strcpy(), the above code would silently
>>change meaning from (int)printf(const char*,...) to a totally
>>different function: (void)printf(const char*,...).
>
>In this particular case, yes. I would have the message "demons abound 
>here" stamped in red ink on that. My point is that C and C++ put lots 
>of power into the hands of the programmer anyway, so I don't think 
>the fact you can break lots of code by introducing a void return 
>variant of an existing function is all that bad. There are worse 
>potentials for error in the language.

I suppose you're right about the power-in-the-programmers-hand part.
After all, we're not talking about a programming language that
*prevents* idiots from shooting themselves in the foot!! If anything,
it separates the men from the boys. Perhaps not as bad as juggling
chain-saws, but it certainly has its "sharp pointy things" that will
make your program bleed if you screw up.


>>>Of course, that was then and this is now, but he didn't seem to me to
>>> write in an overly clear style. Quite laden with technogrammar.
>>
>>D&E (as the book is affectionally called) is a valuable resource for
>>someone like you, since it explains why things are the way they are.
>>It's probably not as hard to read as The C++ Programming Language
>>since it's really a narrative or story of how Bjarne made his
>>decisions and why. But even if it is hard to read, you still might
>>like it. (Obviously if you have only one book to buy, buy mine, not
>>his! :-) (Actually I get only a buck per book so I really have almost
>>no incentive to hawk the thing.)
>
>I'm guessing you get a 10% commission then, halved between the two of 
>you. Yeah, it's not a lot ...

Actually I don't even think it's 10%. It might be a buck a book
*divided* between us. I really don't know (and obviously don't pay much
attention to it). They send me a check every once in a while, but it's
not enough to send the kids to college so it doesn't really hit my radar
screen.


>>>I'm afraid I don't. In your 20 derived classes, each is in fact its
>>>own autonomous data processor whose only commonality is that they
>>>share an API. The API is good for the programmer, but doesn't help
>>>the data processing one jot.
>>
>>I have no idea what I was thinking above - the logic seems to totally
>>escape me. Perhaps I was referring to your last sentence only, that
>>is, to base your design totally around data. Yea, that's what I was
>>thinking. Okay, I think I can explain it.
>>
>>In my base class 'Foo', the design of the system was based around
>>'Foo' itself and the API specified by Foo. 99% of the system used
>>'Foo&' or 'Foo*', and only a small percent of the code actually knew
>>anything about the data, since the data was held in the derived
>>classes and 99% of the system was ignorant of those. In fact, there
>>are 20 *different* data structures, one each in the 20 derived
>>classes, and "99% of the system" is ignorant of all 20.
>>
>>The point is the vast majority of the code (say 99%) doesn't have the
>>slightest clue about the data. To me, that means the code was
>>organized *not* around the data. The benefit of this is pluggability,
>>extensibility, and flexibility, since one can add or change a derived
>>class without breaking any of the 99%.
>>
>>I'm still not sure that addresses what you were saying, but at least I
>>understand what I was trying to say last night.
>
>No that addresses programmability and maintainability. It does not 
>address program efficiency, which was my point.

Well I for one use flexibility as a performance tuning technique. At
least sometimes. In other words, I bury data structures in derived
classes, and sometimes end up selecting derived classes based on
performance considerations. For example, "This Bag is good for big
piles of data and has good average cost, but is occasionally individual
queries go really slow; this other one never has any really slow look-up
costs, but its average case is a little worse; this third one is the
fastest choice if you have less than 10 elements; etc." That way I can
pick and choose based on what each individual Bag (or whatever)
needs/wants/has. And the 99% of the system is ignorant of which kind of
Bag it's working with - it just knows it's using the Bag abstraction.

That's quite different from the old Abstract Data Type (ADT) idea. Both
ADTs and the thing I just described hide the detailed data structure
from the client, but with ADTs there was exactly one data structure and
with the thing I'm talking about, every individual location in the
source code that creates a Bag object could conceivably use a different
derived class == a different data structure, and the 99% of the system
would work with all these different data structurse (more or less)
simultaneously.


>>>Hence my view that OO is good for organising source (intuitively it
>>>produces good source organisation) but poor for program design (ok,
>>>program algorithms in your terms).
>>
>>I think OO has one bullet in its gun: it is good for achieving
>>non-functional goals, like extensibility, flexibility, etc. If you
>>are *very* careful, you can achieve those non-functional goals without
>>sacrificing other non-functionals, such as speed. I think if someone
>>has a program with no extensibility and no flexibility goals, then OO
>>adds little genuine value. 
>
>Err, does this mean you are agreeing with me? :)

Depends on what I'm agreeing to!! :-)
That OO is imperfect? Yes.
That OO isn't the best choice for every task? Yes.
That OO's imperfections means it is bad? No.

I think imperfect tools are good since there are no alternatives. In
other words, I think *all* tools are imperfect, and that *none* of them
pass the one-size-fits-all test. OO (and your data-oriented approach)
included.


>>>My fundamental point is that I think that you have integrated many
>>>beneficial and good programming practices into your internal
>>>conceptualisation of what OO is and means, and you are having
>>>difficulty separating them and treating them as what they are. I
>>>personally prefer to treat these things more seperately as I believe
>>>it offers me a great selection of tools from the toolbox as it were,
>>>but it's entirely a personal choice.
>>
>>You're probably right. I'm not an advocate for any given style of
>>programming, since any advocate for anything ends up being a
>>one-trick-pony, and they can only be radically successful if their
>>particular "trick" happens to be a really good fit for the project du
>>jour. Instead I try to advocate success over all, and that means
>>intentionally using whatever styles help achieve that success.
>
>Ah, agreement also. Good.

Perhaps. But you seem to be more of an advocate than me. (Meaning you
seem to be an advocate for the data-oriented approach more than I am for
OO or anything else.) But I guess it's okay for you to be an advocate,
after all, you're actually trying to convince other people that your
thing is good and they should embrace it. I, on the other hand, have
the luxury of floating above that - I don't need to promote any
technology, and therefore I "get" to be agnostic - to promote
business-level goals like "success" or whatever. You can (and should)
also use those terms, but what you really end up doing is saying,
"You'll be more successful using *my* thingy."

Naturally any decision-maker knows to listen to the guy who's not
selling anything, which is why consultants like me try to be
technology-neutral.


>>>Why isn't it a better one-size-fits-all approach? 
>>
>>Because there is no one-size-fits-all approach! :-)
>
>Ok, how about a better starting approach?

You misunderstand me. It *can't* be a better starting approach than
what I start with, since what I start with is a question-mark. In other
words, I don't start with OO and then move on from there. I start by
examining the business situation.

Example, there's a telecom company in town called White Rock Networks.
The CEO lives down the street - I pass his house all the time. He and I
and our wives went to the Symphony together a while back. His company
uses C and is afraid of C++ because of performance considerations. I
think they're wrong, but I don't care, since the fact that they have 150
C programmers who hate C++ and Java means that the *best* language for
them is C. It doesn't matter whether their reason for hating C++ or
Java is right or wrong; it only matters *that* they hate C++ and Java,
and therefore trying to get them to program in C++ or Java would cause
the best of them to jump ship - to quit and move to a different company.

I do the same with programming approaches, e.g., structured programming
vs. object-based vs. full object-oriented vs. this thing you're cooking
up. I'd like to learn more about your thing so I can use it someday,
but ultimately I'll need to find a business and technical spot where
it's a good fit.


>>>Surely you would 
>>>agree that if you base your design on quantities of data and the
>>>overheads of the media in which they reside, you naturally and
>>>intuitively produce a much more efficient design?

I honestly don't know enough about what you're doing to agree or
disagree.

***HOWEVER*** even if I agreed fully with that statement, I still don't
think it has much to do with whether your stuff should be used on a
given project. I honestly believe language- and technique-selection
should be based on things like who the programmers are, what they know,
whether the universities are churning out more programmers for us,
whether we're going to be able to replace the ones we have if they quit,
etc., in addition to the technical aspects you mentioned above. Just
because "X" is a better mousetrap than "Y" doesn't mean we should use
"X". We should use "X" if and only if "X" will reduce the overall
company time, cost, and risk. And that includes the time and cost for
retraining, the risk of losing the people we have who don't make the
transition, and the risk of being held hostage by our programmers (e.g.,
if we choose a technology where there are only a few competent
programmers, we might end up having to pay through the nose just to keep
the ones we have).


>>Even if what you're saying is true, "a much more efficient design"
>>might not the top priority on "this" project. All I'm saying is: I
>>prefer to start with the goals, *then* decide which technologies to
>>use. Anyone who comes in talking, who already knows which
>>technologies should be used before understanding the goals, is foolish
>>in my book.
>>
>>I wouldn't want to assume your data-oriented approach is the answer
>>any more than I would want to assume OO is the answer. First tell me
>>what the question is, *THEN* I'll come up with the "most appropriate"
>>answer.
>
>Ok, I think we're escaping the fundamental core of this thread. 
>Basically, what I am saying, is that across all the software projects 
>in all the world, people are mostly applying an OO-based solution as 
>a primary leader. I feel this produces worse quality software because 
>of the problems with lack of intuition

Which may be true. But the fact that "people are mostly applying an
OO-based solution as a primary leader" means there are a lot of
programmers out there, and there will be a lot of tool vendors and
compiler vendors to choose from, and we'll be able to get off-the-shelf
libraries, and we'll be able to get consultants to help out in a pinch,
and we'll have the choice whether to rent or buy our programmers, and,
and, and.

Actually I really like your spunk and determination, and I really
shouldn't try to throw a wet towel on your fire. You *need* your fire
since otherwise you won't be able to finish what you've started.

Tell you what: let's not talk about being language neutral any more,
since it will not help you. Instead, please tell me about your
paradigm. Show me some examples.

I really need to get some sleep - sorry I can't finish these responses.

Marshall

PS:
>A question your expertise may be able to answer: is there a non-GPL 
>portable Unix shell because I've looked *everywhere* and can't find 
>one?

Sorry, don't know.



From: Niall Douglas <xxx@xxxxxxx.xxx>
To: "Marshall Cline" <xxxxx@xxxxxxxxx.xxx>
Subject: RE: Comments on your C++ FAQ
Date: Sun, 4 Aug 2002 18:49:45 +0200

On 3 Aug 2002 at 3:28, Marshall Cline wrote:

> >>This proper-inheritance notion is the same as require-no-more,
> >>promise-no-less, which you basically didn't like :-(
> >
> >No, I didn't like the /phrase/, not its meaning. I understood the
> >meaning three emails ago.
> 
> Regarding "three emails ago," we seem to have had a small
> communication problem. I re-explained things after you already "got
> it," and I apologize for frustrating you that way. I certainly did
> not want to imply you are stupid or thick-headed or something, since
> it is quite clear (to me, anyway) that you are not.

Oh good. I'm glad you don't think me stupid - many others do. I put 
it down to personality incompatibilities.

> However I think we both played a part in this communication problem.
> For example, when I first explained the "require no more and promise
> no less" idea in my previous email, you replied, "This is quite
> ephemeral and subtle..." Although it is clearly subtle at times, I
> see it as the opposite of ephemeral, and since, perhaps as a
> back-handed compliment to you (e.g., "I *know* this guy is bright, so
> if he thinks it is ephemeral, I must not have explained it very
> well"), I re-explained it.

In which case, I must explain myself as well - why I said that in 
that fashion wasn't just because of you, but also for the benefit of 
the others following this conversation. I *do* however think it 
subtle because, like you said before, it's not a programming error.

> There have been several other times throughout our conversation when
> you said something that made me think, "He still doesn't see what I'm
> seeing." I see now I was wrong, so I'm not trying to convince you
> that you don't get it. I'm simply trying to help you see why I
> re-explained things too many times.

Well, my father will often tell me the same story maybe three or four 
times before I point out to him he's already told me three or four 
times. Also, given some of the people I associated with at university 
who I shall describe as following alternative lifestyles, quite 
severe weirdness is something I'm used to.

> For example, when you explained that you had already checked to make
> sure 'prepend()' and 'append()' were not called within the base
> class's code, and that that gave you confidence there wouldn't be any
> errors resulting from your redefining those methods in TSortedList, I
> thought to myself, "He doesn't get it yet; checking the base class
> itself is necessary but not sufficient." So I (erroneously) explained
> it again.

No, I was referring to before when I had ever talked to you (which I 
thought I made clear at the time). In current code, TSortedList isn't 
even related to QList anymore to partly fix the issues you 
illustrated to me.

> Put it this way: stupid people say random things. They are incapable
> of coming up with a cohesive perspective on complex things, so their
> statements are often inconsistent with each other. You said some
> things that (I thought!) were inconsistent with each other, but you're
> not stupid (or if you are, you sure fooled me ;-) If I had thought
> you were stupid, I probably would have politely ended the conversation
> and quietly written you off as a lost cause (sorry if that is
> condescending, but we both have better things to do with our lives
> than pour hours and hours into people we can't actually help). So
> instead of writing you off, I figured, "Just one more email and he'll
> *really* get it!"
> 
> Hopefully that explains why (I believe) we both played a role in me
> being a broken-record. And also, hopefully it shows you that I didn't
> repeat myself because I thought you were dumb, or because I was dumb,
> but instead because I thought you were bright enough to get it, and if
> you saw it from just one more vantage point, you'd get it.
> 
> Okay, so you get it. Now we can move on!

I should mention you've caused me to go off and rewrite quite a lot 
of classes which has then caused quite a lot of code to break which I 
think is the best possible proof of my understanding. For example, my 
security descriptor class I had implemented as a public derivation of 
QList which is now clearly bad. I've since made it private 
inheritence with selected methods made public with "using" (a full 
reimplementation of that class would basically mean a start from 
scratch given how often it is used - so I'm afraid I compromised). I 
also fixed up the TKernelPath class which is a string with extra 
bells and whistles - previously I had changed some methods to do 
different behaviour, now I've reverted them and put the new behaviour 
in new methods. And etc. etc. it goes on and on ...

I do want to make it clear though that I am very grateful for the 
tips. You have saved me lots of time in the long run.

> [minimising ripple effect]
> >That's an admirable desire, but do you think it's really possible?
> 
> Yes, in the sense that I've seen big projects get 80% or 90% of the
> way there. The key, of course, is to find low-budget ways to get the
> first 80% of the value, and given the 80/20 rule, that turns out to be
> possible. I call it the low-hanging fruit. It's not perfect, but
> it's much better than if we go with the status quo.
> 
> The low-hanging fruit involves just a few disciplines, including
> "programming by contract" (you may be familiar with this; if not,
> sneak a peek at chapters 1-4 of Bertrand Meyers's book,
> Object-Oriented Software Construction; or I could explain it to you)
> and it requires design reviews where the contracts in base classes are
> carefully evaluated based on some pretty straightforward criteria. 
> Since many programmers don't have self-discipline, even when it would
> be in their own best interest, project leaders must enforce the above
> by putting them into the project's process. In the end, these things
> actually will happen if they get tracked and reviewed (read
> "enforced") by management, and they really do contribute a great deal
> toward the above goal.

Ah, you mean to write out a set of rules which all programmers must 
follow to the letter, and I do spot checks to ensure they're doing 
it. I actually came up with that on my own too - I made the basic 
inviable on pain of death rules into a half A4 sheet and then 
explanations why for another fifteen pages. I then stuck the half A4 
sheet on the monitor of every programmer and got very annoyed if they 
removed it :)

> There are a few other design and programming ideas I use to help
> achieve that goal (more low-hanging fruit). For example, simple (but
> often overlooked) things like creating an explicit architecture with
> explicit APIs in each subsystem, wrapping the API of each OO subsystem
> so the other subsystems can't "see" the OO subsystem's inheritances or
> how it allocates methods to objects (in my lingo, so they can't see
> it's design), etc.

Surely that's a technique since C days where you abstracted all the 
implementation detail into the .c file? Well, not quite the same as 
the header file space tends to be unified, but I get what you mean. 
Again, you're lessening coupling.

> >If 
> >I've learned anything from quantum mechanics and biology, it's that
> >there will *always* be knock-on effects from even the tiniest change
> >in any large system. Good design and coding is about minimising
> >those, 
> 
> Agreed: bad design has a huge ripple effect, which can be thought of
> as chaos (a tiny change in one places causes a very large change
> somewhere else).

That's another reason behind my project BTW - I'm heavily deriving my 
design from quantum theory. Basically, in the quantum world 
everything exists because it has a relation to something else and yet 
order and self-containment emerges easily.

Go further: in the human brain, there are billions of interconnected 
nodes all talking to each other. You can kill or interfere with a 
substantial number of those and yet everything automatically adapts. 
I could go now into fractals and non-linear math examples, but I 
think you get the idea.

Basically, my project aims to make all the components tiny because in 
theory the total cost of changing a part of the overall system should 
rapidly decrease. In theory, you should be able to modify or 
interfere with substantial parts of the web with merely a lessening 
of functionality, not a complete stop. Obviously, this is some 
decades away yet, but my project is the first step down that road.

> >>I rather like that idea since I have found the average programmer
> >>is, well, average, and is typically unable or unwilling to
> >>understand the whole. That means they screw up systems where the
> >>whole is bigger than the sum of the parts -- they screw up systems
> >>where they must understand a whole bunch of code in order to fully
> >>know whether a change will break anything else.
> >
> >Hence the usefulness of pairing programmers.
> 
> Perhaps you're right. I always worry about know-it-all programmers
> who insist on their own way even when they're wrong, but like I said,
> that's more of a theoretical concern about pair programming since I
> haven't experienced it in practice.

Psychology theory tells us people are much less likely to 
individualise when they have someone looking over their shoulder. 
Pair programming should theroretically cause a lazy programmer to 
sharpen up.

However (and I did spend some time talking with the webmaster at 
www.extremeprogramming.org about this), my experience is that certain 
types of personality form a lazy pair who spend all of their time 
talking and mucking around. Furthermore, there are programmers I know 
of who would hate being paired (they're the same ones who hate people 
looking over their shoulders) and of course there are also people who 
just dislike each other. Lastly, I have difficulty imagining myself 
being paired - at school, when we had to go to these programming 
classes, I always raced ahead of the guy with me - I did try to hold 
his hand, but he really wanted to defer to me. I can definitely see 
that unequally skilled programmers wouldn't pair either - one would 
defer everything to the other.

> [about motivation your programming team]
> Perhaps that success has been because I usually present some of these
> ideas in front of a group/seminar, and I usually (verbally) beat my
> chest and say something like, "Anybody who hurts their company to
> benefit themselves is unprofessional and deserves to be fired."
> Basically I shame the hot-shots into realizing they can't hold the
> company hostage just for their own personal job security. In fact,
> I'll be giving that sort of speech at a development lab of UPS on
> Tuesday, and I'll probably mention the recent corporate scandals. If
> I do, I'll talk about those disreputable people who hurt the
> stockholders in order to line their own pockets. Nobody wants to be
> compared to those guys, which brings out a righteous streak in
> everybody (including the hot-shots).

That's a very US attitude about business being somehow holdable to 
some higher standard. I mean, in Europe it's naturally accepted every 
company will do the most evil possible to increase profits - hence 
our very red-tape heavy legistlature. But then of course, we're 
comfortable with Marx here.

> (Sometimes, when I really need to bring out a big hammer, I compare
> the "wrong" attitude to terrorism. It's a stretch, but I'm pretty
> good in front of a crowd. The key insight is that nobody should be
> allowed to hurt their company just to help themselves. For example, I
> say something like, "If you hold company assets in between your
> earlobes, they don't belong to you - they belong to the company. If
> you refuse to write them down just to protect your own job, you're no
> better than a terrorist, since you're basically saying, "Give me my
> way or I'll hurt you." Amazingly I haven't been lynched yet.)

Again, the word "terrorism" doesn't carry anything like the same 
weight here (sometimes I must admit to laughing at CNN's import of 
gravitas on it). We tend here to look at terrorism in its context, 
not its acts - after all, modern terrorism was invented by European 
colonialism and governments as well as NGO's have used it frequently 
in both internal and external causes. For example, the IRA are 
freedom fighters to many but terrorists to the English. All the great 
revolutionary leaders I was taught about in school were so highly 
held by massacring every important Englishman to set food in Ireland, 
along with their wives and children (it sent a "better" message). 
They all went on to become government ministers and highly respected 
internationally, including by the English.

Anyway, I digress. I get quite annoyed by the actions of the current 
Bush administration :(

BTW, if you want to improve your performance in front of a crowd, 
studying debating techniques or even toastmastering is very 
worthwhile. You can pick up all the techniques professional 
politicians use.

> Like I said, I doubt any of this saber-rattling actually changes
> people's hearts, and therefore it won't change the guy who gives 8
> hours notice before vacation, but it does seem to force the hot-shots
> into supporting the plan, and invariably they buy-in.

No, but psychologically it reduces the individual's ability to 
rationalise non-conformance ie; make excuses for not toeing the line.

> You're probably right about what you're saying, but that's a totally
> different topic from what I was *trying* to say. I'm not talking
> about productivity differences (either between those who use assembler
> vs. high-level tools or between the hot-shots and the dolts). I'm
> talking about the ability to understand the whole ripple effect of a
> change in their heads. In other words, the *average* architect is
> broad but shallow (they know a little about the whole system, but
> often don't know enough to reach in and change the code), and the
> *average* "coder" is deep but narrow (they have deep understanding of
> a few areas within the system, but most can't tell you how all the
> pieces hold together - they don't have the breadth of an architect). 

Well, to most software engineers their job is to provide money - they 
don't do it at home for fun.

> But knowing the full ripple effect of a change to the system often
> requires both broad and deep knowledge that very few possess. In one
> of my consulting gigs, there was a guy (Mike Corrigan) who was both
> broad and deep. Management called him a "system-wide expert." 
> Everyone believed he could visualize the entire system in his head,
> and it seemed to be true. It was a huge system (around two million
> lines below a major interface, and around 14 million lines above
> that). When someone would propose a change, he would go into
> never-never land, and perhaps a week later he would explain why it
> couldn't be done or would provide a list of a dozen subsystems that
> would be effected.

Ah yes, the sudden revelations you have while in the shower. You give 
yourself a task to think about, then carry a little notebook 
everywhere to go to write down ideas which suddenly pop into your 
head. You can get remarkable results using this technique.

> When I wrote the above, I was visualizing the Mike Corrigans of the
> world. After they get enough experience with a system, they are both
> deep and broad, and in particular, they can see the entire ripple
> effect in their heads. That's what I was trying to say (though you
> can't see the connection without seeing the paragraph prior to the
> paragraph that begins, "If we don't solve the middle-of-the-bell-curve
> problem...")

Of course, the above type of guy is worth their weight in gold and 
ultimately, their use cannot be completely eliminated. But it can be 
minimised, and more importantly if he were to leave/retire, the 
ensuing chaos can also be minimised.

> Perhaps you would agree with this: In most companies, those in the
> middle-of-the-bell-curve have a difficult time reliably changing large
> systems. Companies that solve this problem will be better off.

Yes, absolutely.

> >I'll just mention RISC-OS had fantastic documentation (even with
> >custom designed manuals which automatically perched on your lap).
> >It's a difference I still miss today, and it's why my project has
> >excellent documentation (I wrote a lot of it before the code).
> 
> Bingo. Good stuff. I do the same thing: write the specs as best I
> can ahead of time, and invariably add to them as I get further into
> the project, and sometimes (carefully) changing what I wrote when it
> was wrong or inconsistent (or when I simply discover a better way).

Heh, definitely the story of this project. I was of course severely 
limited because I didn't know what C++ would let me or not let me do. 
But still, so far, it's been all detail changes eg; my TProcess class 
had to be made abstract because the kernel and client libraries share 
99% of the same code so I passed all the differences out to base 
classes.

> When "selling" companies on this idea, I even use a different term
> than documentation, since most people think of documentation as
> something you write that *describes* what the code already does. But
> these contracts/specifications *prescribe* what the code *should* do,
> hence the different name: contracts or specifications.

Good idea. I usually use the word "specification" myself, but I could 
use the concept of contracts to obtain for myself more design time. 
Companies generally like to see you start coding immediately - I 
usually do grunt work initially into the coding period, leaving me 
more subconcious time to think of problems with the design.

> Generally speaking, good C++ (or Java or Eiffel) class hierarchies are
> short and fat with very little data in the base classes. The
> fundamental reason for this is that these languages exhibit an
> assymetry between data and code. In particular, they let a derived
> class replace some code in the base class (virtual functions), but
> they don't let a derived class do the same with data (they don't have
> "virtual data"). Once a base class has a certain data structure, all
> derived classes forever and ever are forced to have that data
> structure. If a given derived class doesn't "want" that data
> structure, it has to carry it around anyhow, and that typically makes
> it bigger or slower than optimal.

Hmm. I'm not seeing a need for virtual data given you could use a 
virtual method to implement the virtual data. However, that's an 
idiom.

> Obviously this is backwards semantically, and is therefore bad based
> on everything we've discussed forever (which you already get; I know).
> But it is interesting how the "inheritance is for reuse" idea pushes
> you in exactly the wrong direction. It is also interesting (and
> somewhat sad) that the proper use of inheritance ends up requiring you
> to write more code. That would be solvable if today's OO languages
> had virtual data, but I'm not even sure how that could be implemented
> in a typesafe language.

I don't think it possible in a statically typed language, but it 
could be done in a dynamically typed language. Effectively, it would 
be an extension of the dynamic type system.

> I actually do know of another way to have-your-cake-and-eat-it-too in
> the above situation. Too late to describe this time; ask me someday
> if you're interested. (And, strangely enough, although C++ and Java
> and Eiffel don't support this better approach, OO COBOL of all
> languages does. So if anybody ever asks you which is the most
> advanced OO language, tell them COBOL. And snicker with an evil,
> Boris Karlov-like laugh.)

I think I may be ferociously set upon and horribly mauled if I tried 
that :)

> >>For example, they could cause the list's contents to become
> >>unsorted, and that could screw up TSortedList's binary search
> >>algorithms. The only way to insulate yourself from this is to use
> >>has-a or private/protected inheritance.
> >
> >Surely private or protected inheritance affects the subclass only?
> >ie; you could still pass the subclass to its base class?
> 
> I have no idea what you're asking - sorry, I simply can't parse it.

Actually you answered it below :)

> Here's the deal with private/protected inheritance. Where public
> inheritance "means" is-substitutable-for, private and protected
> inheritance "mean" has-a (AKA aggregation or composition). For
> example, if TSortedList privately inherited from GList, then it would
> be semantically almost the same as if TSortedList had-a GList as a
> private member. In both cases (private inheritance and has-a) the
> methods of GList (including the now infamous prepend() and append())
> would *not* automatically be accessible to users of TSortedList, and
> in both cases users of TSortedList could not pass a TSortedList to a
> function that is expecting a GList (which, as you know, is the start
> of all our "improper inheritance" problems).
> 
> There are a few differences between private inheritance and has-a, but
> the differences are buried in the code of TSortedList. In particular,
> if TSortedList privately inherited from GList, the derived class's
> code (its member and friend functions) are allowed to convert a
> TSortedList& to a GList& (and similarly for pointers), and TSortedList
> can use a special syntax to make selected public methods of GList
> public within TSortedList. For example, if there is a 'size()' method
> within GList that TSortedList wants to make public, then in the public
> section of TSortedList simply say "using GList<T>::size;". Note: the
> '()' were omitted intentionally. See the FAQ for this 'using' syntax.
> In fact, the FAQ has a whole section on private and protected
> inheritance - you might want to read it since it will solve your "bad
> inheritance" problem, at least with TSortedList.

Have read it many times. You see, in order not to appear stupid 
during these emails, I usually research every point before I make it.
Very time-consuming, but also an excellent way of learning.

> [embedded systems]
> >No, that's not hugely true anymore.
> 
> We probably have different experience bases, since most of the
> companies I've worked with have such tight coding constraints that
> bloatware like WinCE are unthinkable. The folks I've worked with try
> to squeeze the whole shebang, including OS, apps, data, drivers,
> wireless stuff, everything, into 4MB, sometimes less. They often end
> up with much smaller, proprietary OSs.

That's often because they've committed themselves to using one 
particular piece of hardware (for whatever reason). If you're talking 
sub-CE capable hardware (eg; electronic barometer etc.), then 
*currently* you're right, there is still a need for proper assembler 
hackers.

But I must point out, think what it will be like in ten years when 
Bill's vision of having every embedded system running Windows comes 
true? That day is coming, and it's coming quicker than many realise. 
Just as the US multinationals crushed Europe's indigenous desktop 
industry, they are now targeting the embedded market. The result in 
either case is vastly more powerful embedded systems, and much less 
assembler programmers.

> But in the world of, say, calculators, customers expect the thing to
> run for a year on an AA battery or two. The screens on those things
> aren't backlit, but they have oodles of code in them. I remember
> working with an HP team that said they had half a million lines of
> code in one of their calculators. It solves differential equations,
> plots various kinds of graphs, is programmable, etc., etc., and the
> amount of memory it has is important to them.

Yeah I remember talking to a guy at ARM during lunch about scientific 
calculators.

> I have a Palm V. Doesn't do too much, has a lousy 160x160 LCD
> black-and-green screen, and it came with only 4MB of memory, but it's
> tiny, inexpensive, weighs only 4(?) ounces, and runs for a week
> between charges. I could have purchased a WinCE box, but it's much
> heavier, more expensive, and needs to be charged much more often.

I would have bought a Psion Netbook if I ever had the money. They had 
a 150Mhz StrongARM with 32Mb of RAM and ran Linux (proper desktop 
version) like a dream. You could reflash them you see with any OS you 
wanted (they come with Symbian OS). Ran for 12-16 hours continuously 
before needing a recharge, despite its 640x480 full colour display.

> Now the world is slowly moving toward higher power processors and
> backlit screen, which ultimately means people will begin to expect
> less and less wrt battery life. In the mean time, there will always
> be companies like HP, Texas Instruments, UPS, FedEx, Brooklyn Union
> Gas, etc., that are up against memory limitations, and they'll always
> be fighting to add functionality without increasing hardware. I think
> that stuff is fun.

Oh it is fun. And in many ways, I wouldn't mind my old job at ARM 
back except I had no power there and unfortunately in ARM, the more 
power you have the less you program :(

> >In other words, Limbo is doing a full compile to a proper assembler
> >model (which just happens not to have a processor which can run it,
> >but one could be easily designed). Java is really mostly interpreted
> >in that the source is pretty easy to see in the byte code. I've seen
> >some reverse compilers and their output is awfully similar to the
> >original - whereas no reverse compiler would have a hope
> >reconstituting C++ (or even C).
> 
> Yes Limbo does a full compile to an abstract machine. But I don't
> agree with one of your implications: The fact that Java uncompilers do
> a pretty good job does not mean that Java is mostly interpreted. In
> fact, the real reason Java uncompilers do a pretty good job is because
> of all the "meta data" Java .class files are required to carry around
> (they have to contain enough meta data that "reflection" can work;
> e.g., Java lets you poke around in the meta data of another class,
> including determining at runtime what methods it has, the
> parameters/return types of those methods, which methods are private,
> protected, public; etc., etc.). If a Limbo binary (a .dis file)
> contained the meta-data that a Java .class file contained, a Limbo
> uncompiler could do about as good a job as a Java uncompiler.

I didn't know that. Fair enough. I'm still thinking though the Java 
bytecode was not the best design it could be though.

> Agree that it would be hard to uncompile C or C++, even if there was
> meta-data, since it would be much harder to guess what the original
> code looked like from the instruction stream. But I think that's an
> artifact of the fact that a "real" processor has very small granular
> instructions and the optimizer is expected to reorganize things to
> make them fast/small/whatever. In contrast, the byte-codes for Limbo
> and Java do more work (e.g., both have a "call" instruction that does
> a *whole* lot more than a "real" hardware call), so they expect the
> virtual machine itself to be optimized for the particular hardware
> platform.

Heh, the ARM doesn't have even a call subroutine instruction (nor a 
stack). It's proper RISC :)

> FYI Java has a number of processors that run it. In that case,
> wouldn't the assembly language look just like Java byte-codes?

I was under the impression there were processors which natively 
executes /some/ of the byte code, but a majority was too high-level 
and so remained being interpreted. ARM do just such a processor in 
fact - you tell it to go execute the Java at address X and it calls a 
vector everytime it finds something it doesn't understand. Provided a 
12x speedup if I remember (I was involved in the design).

> >although theirs merely includes windows. They're also come up 
> >remarkably with quite a few things I had thought of independently -
> >like overloading mouse button presses. I'm doing it in a way though
> >that won't scare people (unlike Plan 9)
> 
> The interesting thing to me is that Inferno is built *around* Limbo.
> For example, whereas pid in Unix is a process ID, in Inferno it's a
> thread-ID, and each thread runs a different .dis file. In other
> words, running an application in Unix forks a process with its own
> address space, but in Inferno it simply spawns a thread in the shared
> address space, and runs a .dis file along with all the other .dis
> files that are currently running.

Surely they have data consistency errors then if faulty code corrupts 
the shared data? Wouldn't be a very secure system then.

> I'm not doing it justice. I'll try again: Inferno is nothing but one
> big Limbo interpreter. Every program in Inferno, including the shell
> itself, utilities like 'ls', and all the rest, are Dis threads that
> are runing in the same address space by the same Dis "virtual
> machine." The shared address space thing seems horribly error prone,
> and perhaps it is, but you've already seen how namespaces can be
> trimmed for individual applications ("threads"), so apparently that's
> how they keep these different apps separated. It seems like a slick,
> lightweight idea.

As in, copy on write?

I should mention I didn't have much success with Plan 9. Their user 
interface is *extremely* minimalist and their documentation could be 
described as minimalist as well :(

As I've mentioned before, I really don't like anything that isn't 
intuitive. I should be able to use without a manual, but /excel/ with 
a manual.

> (BTW you mentioned Plan 9. Is that what you got? Or did you get
> Inferno? I've been discussing Inferno. I believe Inferno stole the
> namespace idea from Plan 9, but I don't know that Plan 9 is built
> around a Dis virtual machine the way Inferno is. Also I'm not sure
> about Limbo on Plan 9.)

According to Vita Nueva, Plan 9 is merely an industrial version of 
Inferno. The two will talk together via 9P and in most intents and 
purposes they're identical.

> >Maybe I should go apply for a job at Bell Labs? Nah, that US visa
> >thing getting in the way again ...
> 
> Vita Nuova is in the UK, and they own the rights to Plan 9 and
> Inferno.

Well, if I ever return to the UK, I may give them a call. They're in 
northern england as well which is a major brownie point (I can't 
stand the south - too crowded).

BTW if they own it, why is Bell still doing all the development work?

> >Thanks for pointing me towards Plan 9, I wouldn't have known my ideas
> > are so agreed upon by (eminent) others without it!
> 
> You're famous and you didn't know it.

No I'm like Leibniz who invented calculus and didn't get the credit 
for it. When I start getting invited to give speeches, *then* I'm 
famous! (Futhermore, it'd be nice to have a bit more free cash)

> Close. It's called the cast-operator. It's used in cases when you
> want to allow your object of class Foo to be converted to an object of
> class Bar. For example, the statement:

I've just used that (the cast operator) to implement a template class 
permitting thread local storage. You literally do:

TThreadLocalStorage<MyLocalData *> locdata=new MyLocalData;
...
locdata->foo=5;

> However someone mentioned an idiom that allows programmers to do just
> that, and the proposal/discussion died instantly. The idea is for
> 'Der' to add this line:
> 
> class Der : public Base {
> typedef Base super; <===***
> public:
> ...
> };
> 
> That's more roundabout, but it gets the job done. Same with my
> "overloaded return type idiom."

That's a good idea. May use that myself - I had been doing a search 
and replace within a selection.

> (Some of these are very subtle and require a lot of compromise, and
> all you need is for one of these to "not work out right" and you have
> a worse wart than we already have. Adding language features like this
> is much harder than it looks, as I'm sure you know.)

I did some investigation into points to support my arguments, but 
unfortunately found plenty of points against my argument :) - much of 
which followed what you said.

So, I relinquish my point. However, see below for my proposal for my 
own language which you got me thinking about in bed last night.

> >Ok, fair enough. Still, out of the OO languages I know, they seem to
> >strongly tend towards either static or dynamic with no attempts to
> >run a middle route. I probably am saying this out of ignorance
> >though.
> 
> There certainly aren't *many* in the middle. CLOS (Common Lisp Object
> System) was one.
> 
> BTW want to talk about what's missing from C++ and Java and the rest,
> CLOS has something very slick called "multi methods." Basic ideas is
> this: in C++, the expression 'a.f(b)' uses dynamic binding on the type
> of object referred to by 'a', but *not* based on the type of object
> referred to by 'b'. This is another assymetry: why is the 'this'
> object "special" in that way?
> 
> There really is no good answer, and CLOS solved it by saying you could
> use dynamic binding on both (or "all N") parameters, not just the
> 'this' paramter. For example, consider a hierarchy of Number,
> including Integer, Double, Rational, InfinitePrecisionReal, BigNum,
> etc. Everything seems great until you try to define "multiply." There
> are N classes, so there are O(N^2) different algorithms, e.g.,
> Integer*Double uses different binary code from Rational*Integer, etc. 
> And how to you dispatch dynamically on those N^2 algorithms? You
> can't use 'a.multiplyBy(b)' since that will dynamically dispatch based
> on the type of 'a' alone: there are only N different choices.
> 
> CLOS had a direct solution: define your N*N functions, and let CLOS
> figure it out at runtime. It wasn't super fast, and the rules were
> pretty involved (e.g., in tall hierarchies, you can have "close" and
> "not so close" matches; what if one of your functions matches
> parameter #1 closely and #2 not so close, and another function is the
> opposite; which do you choose?) But it worked, and it was useful, at
> least in some cases.

It's interesting you mention Lisp. I studied Logo while I was at 
school, became quite good at it and came fourth in an international 
competition (as usual only because my solutions were a bit whacky). 
Either way, whilst lying in bed last night I had sets floating around 
in my head and visions of a language which fufilled my three 
criteria: (i) that I program what to do not how to do it (ii) that it 
centres around data and (iii) it is compilable and interpretable, 
with the preference on the interpretable.

Basically, sets as you know are a collection of data, whether that is 
other sets, numbers, structures etc. I had a vision of basically the 
programmer preparing a set and executing code on it - with the 
interesting proviso that of course, code is data so quite 
realistically a set could be a collection of code. Of course, there 
is a kind of OO with the ability to attach code to data and arguably 
you can perform inheritence by unioning two sets of data - hence 
their code unions as well - and you could attach more code to have 
the two sets work together. However, it's definitely not pure OO.

Regarding compiling the thing, you can ask it to spit out C++ - which 
it will - and link against a run-time library. Performance would only 
be marginally better than interpreting it, but that's fine by me - I 
only want the ability to compile for those who don't like giving away 
their sources.

I looked into Lisp, and found no one has bothered doing much with it 
in five years now. Pity, because while I remember Logo didn't always 
do things as well as it could (I remember some non-intuitive syntax), 
it was pretty powerful. I'd like to do with this language the same as 
the rest of my project - is easy to use on the outside, but gets 
exponentially more powerful the deeper you go - and always always 
intuitively.

Of course, the language would directly work with my project's data in 
its unified dataspace. You merely run around connecting stuff 
together and making it go. However, I want it easily powerful enough 
you could write your entire application in it and indeed, that you 
would *want* to always write in it.

Anyway, it's not of much import. Given I can't find a shell anywhere, 
I'll have to write my own and hence I need some gameplan for design 
as it will become someday a full blown language.

> Here's another motivating example: suppose you had a hierarchy of N
> Shapes, and you wanted to define a method called "equivalent()", e.g.,
> 'a.equivalent(b)'. The meaning of 'equivalent' was that the shapes
> *appeared* the same. That means an Ellipse could be equivalent to a
> Circle, but not to a Rectangle (except when both are zero-sized). A
> Polygon could be equivalent to a Square or Triangle, etc. Seems
> reasonable until you actually try to write that sucker. How can you
> get N*N algorithms if you can only dispatch on the object to the left
> of the ".".

That's sounds horrendous. I'd personally redesign :)

> If you're interested, I can show you another idiom (emphasis) that
> lets you do this in C++ and Java.

Go on then. I've already used most of your examples in some way.

> >Only if there is another overload. If there's one and one only 
> >strcpy(), it gets called irrespective of return just like C. 
> 
> FYI there are two 'strcpy()'s - one with a const and one without.

Thought that was on its input parameter? Besides, a const <type> is 
merely a strong indicator that type's data should not be modified - 
otherwise, it acts the same. I see no difference.

> >No I think my solution preserves existing code.
> 
> I don't think so. I'll explain my second bullet with an example.
> Suppose you have a function
> int foo(int x);
> 
> Someone writes a million lines of code using foo(int), and a lot of
> the time they ignore the return value, e.g., like how most people call
> 'printf()'.
> foo(42);
> 
> Then later someone creates this function:
> double foo(int x);
> 
> I believe your rules cause all those calls to
> foo(42);
> to generate an error message.

Absolutely. But what happens currently? Currently, under C++, you get 
an error about not permitting overload based on return type.

My solution merely offers the programmer the ability to overload on 
return type. It's not without caveat nor danger, and furthermore 
probably only newly written code could use it properly - however, it 
does not break existing code *unless* the programmer does something 
stupid.

I can't see any reason why the next version of C++ shouldn't support 
this. As your idiom for overloading return types shows, it can be 
useful and certainly I would have found it useful during this 
project.

> I suppose you're right about the power-in-the-programmers-hand part.
> After all, we're not talking about a programming language that
> *prevents* idiots from shooting themselves in the foot!! If anything,
> it separates the men from the boys. Perhaps not as bad as juggling
> chain-saws, but it certainly has its "sharp pointy things" that will
> make your program bleed if you screw up.

Precisely. Given the lack of damage to existing code and the handy 
possible benefits, it should be submitted for approval IMHO.

> >Err, does this mean you are agreeing with me? :)
> 
> Depends on what I'm agreeing to!! :-)
> That OO is imperfect? Yes.
> That OO isn't the best choice for every task? Yes.
> That OO's imperfections means it is bad? No.
> 
> I think imperfect tools are good since there are no alternatives. In
> other words, I think *all* tools are imperfect, and that *none* of
> them pass the one-size-fits-all test. OO (and your data-oriented
> approach) included.

Ah good, I can fully agree with all of the above.

> >Ah, agreement also. Good.
> 
> Perhaps. But you seem to be more of an advocate than me. (Meaning
> you seem to be an advocate for the data-oriented approach more than I
> am for OO or anything else.) But I guess it's okay for you to be an
> advocate, after all, you're actually trying to convince other people
> that your thing is good and they should embrace it. I, on the other
> hand, have the luxury of floating above that - I don't need to promote
> any technology, and therefore I "get" to be agnostic - to promote
> business-level goals like "success" or whatever. You can (and should)
> also use those terms, but what you really end up doing is saying,
> "You'll be more successful using *my* thingy."
> 
> Naturally any decision-maker knows to listen to the guy who's not
> selling anything, which is why consultants like me try to be
> technology-neutral.

I suppose part of where I'm coming from is because I've read lots of 
psychology and I know there is not a single person on this planet who 
is objective. Everyone brings their prejudices and preconceptions to 
the table. In fact, I would even go so far as to say that overly 
attempting to be objective does you a lot of harm.

My view is that in order to make the best choices, one needs to 
accept ones partiality because only through that are you aware of 
your biases and thus your ability to be flexible is greatly enhanced. 
How you *handle* your prejudices is far far more important than the 
prejudice itself.

A bit of a different take I know, but I can produce studies to 
support this. Of course, eastern philosophy has taken this position 
for millennia, as indeed did the west until Descartes.

> >>>Why isn't it a better one-size-fits-all approach? 
> >>
> >>Because there is no one-size-fits-all approach! :-)
> >
> >Ok, how about a better starting approach?
> 
> You misunderstand me. It *can't* be a better starting approach than
> what I start with, since what I start with is a question-mark. In
> other words, I don't start with OO and then move on from there. I
> start by examining the business situation.

Note I don't believe anyone starts with a tabula rasa. Everyone 
brings their history of experience (it's what you're paid for!) and 
with that comes a lack of objectivity.

> It doesn't matter whether their
> reason for hating C++ or Java is right or wrong; it only matters
> *that* they hate C++ and Java, and therefore trying to get them to
> program in C++ or Java would cause the best of them to jump ship - to
> quit and move to a different company.

Good luck to them in finding pure ANSI C work nowadays. Certainly 
here in Europe, the highest demand is for Java, followed by various 
Microsoft and database technologies, then C++ and way way down the 
list is old fashioned C.

> I do the same with programming approaches, e.g., structured
> programming vs. object-based vs. full object-oriented vs. this thing
> you're cooking up. I'd like to learn more about your thing so I can
> use it someday, but ultimately I'll need to find a business and
> technical spot where it's a good fit.

Unfortunately, AFAICS there aren't the tools for my philosophy out 
there because work in my way of thinking stopped a few years ago. 
Indeed, Unix up until the kernel point is good (pretty fixed a good 
few years ago), set (ie; data) based languages seem to have mostly 
died after 1994-1996, good old RISC-OS was dead post-1996 - well, all 
those things I could place hand on heart and say "wonderful, this is 
good", they aren't being taken in their logical directions anymore - 
you could say, their lines of thinking have been mostly abandoned.

See http://www.paulgraham.com/noop.html - I will say I can see plenty 
of point to OO and I have used it successfully many times. However, I 
still don't think it's the best approach for most problems in the 
form it is currently used - and I *do* agree about the popular OO 
mania currently in effect (and we've discussed the causes of that 
mania).

> >>>Surely you would 
> >>>agree that if you base your design on quantities of data and the
> >>>overheads of the media in which they reside, you naturally and
> >>>intuitively produce a much more efficient design?
> 
> I honestly don't know enough about what you're doing to agree or
> disagree.
> 
> ***HOWEVER*** even if I agreed fully with that statement, I still
> don't think it has much to do with whether your stuff should be used
> on a given project. I honestly believe language- and
> technique-selection should be based on things like who the programmers
> are, what they know, whether the universities are churning out more
> programmers for us, whether we're going to be able to replace the ones
> we have if they quit, etc., in addition to the technical aspects you
> mentioned above. Just because "X" is a better mousetrap than "Y"
> doesn't mean we should use "X". We should use "X" if and only if "X"
> will reduce the overall company time, cost, and risk. And that
> includes the time and cost for retraining, the risk of losing the
> people we have who don't make the transition, and the risk of being
> held hostage by our programmers (e.g., if we choose a technology where
> there are only a few competent programmers, we might end up having to
> pay through the nose just to keep the ones we have).

As you said before, many a better technology have fallen by the 
wayside throughout the years. I think we've covered a good proportion 
of the reasons why in this dialogue - the big question now is can one 
man change the world? :)

> >Ok, I think we're escaping the fundamental core of this thread.
> >Basically, what I am saying, is that across all the software projects
> > in all the world, people are mostly applying an OO-based solution as
> > a primary leader. I feel this produces worse quality software
> >because of the problems with lack of intuition
> 
> Which may be true. But the fact that "people are mostly applying an
> OO-based solution as a primary leader" means there are a lot of
> programmers out there, and there will be a lot of tool vendors and
> compiler vendors to choose from, and we'll be able to get
> off-the-shelf libraries, and we'll be able to get consultants to help
> out in a pinch, and we'll have the choice whether to rent or buy our
> programmers, and, and, and.

However, there are always grass-roots movements. Maybe we believe in 
those more here in Europe than the US. There are many contributory 
factors, and indeed even if it fails it doesn't matter if you 
significantly improved yourself and the lives of others along the way 
eg; Marx's teachings haven't had much success, but can you imagine 
the world without them?

> Actually I really like your spunk and determination, and I really
> shouldn't try to throw a wet towel on your fire. You *need* your fire
> since otherwise you won't be able to finish what you've started.

Not at all. I greatly desire intelligent criticism, otherwise I am 
doomed to waste a great deal of my time on fool's errands.

> Tell you what: let's not talk about being language neutral any more,
> since it will not help you. Instead, please tell me about your
> paradigm. Show me some examples.

TWindow main;
TDataText clock(main, "dc:time");
TDataImage image(main, "/Storage/C/foo.jpg;5");
main.show();

That sticks a clock with the current time plus a view of version five 
of c:\foo.jpg in a window and shows it. If you made TDataText clock a 
TDataVector clock you'd get a graphical clock instead (because the 
type of data is determined by compatibility of provided interfaces). 
Literally, that's all the code you need.

TDataStream 
c1=TDataStream::connect("/Storage/C/myfile.txt","dc:grep");
c1.dest().setMetadata("Parameters", "grep pars - I forget");
TWindow main;
TDataText input(main, c1.dest())
main.show();

That greps the file c:\myfile.txt for whatever and stuffs the results 
in a window. No processing occurs whatsoever until the main.show(). 
Basically, if you use your imagination, I'm sure you can see how 
everything else fits together.

My only major problem is that of active non-simple types eg; a HTML 
file - because it requires as a basic part of its functioning direct 
interaction with the user. Now of course, a traditional custom 
written component can handle this, but that doesn't encourage code 
reuse so I'll need to come up with a way of making QWidget work 
across process boundaries. This will not be an easy technical 
challenge :( (I'm thinking I'll wait till version 2).

Obviously, what takes three lines above can be put into one much less 
obtruse line in my own custom language. I think you can clearly see 
functional tendencies already appearing in the class design, but my 
own language would remove all the grunge code as well.

Cheers,
Niall




From: "Marshall Cline" <xxxxx@xxxxxxxxx.xxx>
To: "'Niall Douglas'" <xxx@xxxxxxx.xxx>
Subject: RE: Comments on your C++ FAQ
Date: Sun, 4 Aug 2002 14:54:38 -0500

Hi Niall,

I am packing for a business trip, so don't have time to go through a
long reply. (I can hear your sigh of relief already!) I had time to
read it completely, but not enough time to reply.

A couple of quick comments:

======================================================================

1. Re a lack of tabula rasa, agreed that we are all influenced by our
particular experiences. I try to keep an open mind wrt OO vs.
procedural vs. functional vs. logic-oriented vs. constraint-oriented,
but certainly there are limits to how even-keeled any of us can actually
be.

======================================================================

2. Re what you said about embedded and handheld systems getting so
powerful that they don't need assembly makes a lot of sense. For
wireless devices, we may even get to the point where we don't constantly
worry about squeezing functionality into these tiny boxes, both because
the boxes won't be so tiny, and also because the functionality might end
up getting loaded on-demand (either that, or the functionality runs on
servers in the ether, and the device merely communicates with them).
The point is that the world is changing, and it's ultimately going to
dumb-down the kind of coding that happens on just about all computing
device everywhere. 

(One of the trends I've noticed, and you alluded to it earlier, is how
the skills required to be a decent programmer have changed over the
years. 20 years ago, the key was the ability to solve ill-formed
problems - to actually be able to figure things out. Today you need a
good memory, since the key is to know the packages that are available.
In other words, we've moved from a world where most of the code in a
system was written by our own programmers to one where our programmers
merely glue pre-written pieces together. I personally find that sad,
probably because I'm pretty good at the old way of doing things and I
don't have the patience to actually *study* the latest tools written up
in rags like BYTE magazine. But regardless of how it effects you or me,
there does seem to be a clear trend toward knowledge / information, and
a corresponding de-emphasis on insight / problem solving skills /
ability to think out of the box.)

======================================================================

3. A brief comment about this exchange:

Marshall:
>>However I think we both played a part in this communication problem.
>>For example, when I first explained the "require no more and promise
>>no less" idea in my previous email, you replied, "This is quite
>>ephemeral and subtle..." Although it is clearly subtle at times, I
>>see it as the opposite of ephemeral, and since, perhaps as a
>>back-handed compliment to you (e.g., "I *know* this guy is bright, so
>>if he thinks it is ephemeral, I must not have explained it very
>>well"), I re-explained it.
>
Niall:
>In which case, I must explain myself as well - why I said that in 
>that fashion wasn't just because of you, but also for the benefit of 
>the others following this conversation. I *do* however think it 
>subtle because, like you said before, it's not a programming error.

I don't know what you meant by "it" in the last sentence, but I'm
assuming (perhaps wrongly) that "it" refers back to the original example
that you said was ephemeral and subtle, namely improper inheritance. In
that case, I'm not sure why you said it's not a programming error. I
see a few possibilities:
A) Perhaps what you meant it's not a programming error that is caught by
the compiler -- that causes a diagnostic message from the compiler. If
that's what you meant, then of course, you're correct (and that goes
along with what you said about it being subtle).
B) Perhaps you meant it's not a programming error in the sense that it's
more of a design or conceptual error. I suppose that's also correct: it
is an error that primarily started at the design or conceptual level.
It clearly shows up in the code, and therefore is *also* a programming
error, but perhaps you meant to emphasize the design/conceptual error.
C) But if you meant to imply that proper inheritance is merely a "best
practice" that doesn't really effect the code's correctness (e.g., if it
effects only the code's maintainability, programmer productivity, or any
other non-functional goal), then I must disagree. Improper causes all
sorts of correctness problems in the code, and in that sense it is a
programming error.

======================================================================

4. WRT your functional style, you might want to look into using
generic-programming (in C++) instead of OO. From what I've seen of your
ideas, generic programming via C++ might let you achieve some (most?) of
the benefits of having your own language without the associated costs.
It's really a different way of programming, but some of the things you
described at the very end (in your two examples) seem to match pretty
closely with the generic programming idea. I'd suggest starting with
the Lambda library at www.boost.org (there's a link to the Lambda
library on boost's opening page).

======================================================================

I wish you the best.

Marshall




From: Niall Douglas <xxx@xxxxxxx.xxx>
To: "Marshall Cline" <xxxxx@xxxxxxxxx.xxx>
Subject: RE: Comments on your C++ FAQ
Date: Mon, 5 Aug 2002 19:55:23 +0200

On 4 Aug 2002 at 14:54, Marshall Cline wrote:

> I am packing for a business trip, so don't have time to go through a
> long reply. (I can hear your sigh of relief already!) I had time to
> read it completely, but not enough time to reply.

Nah it's cool. I would have had to have ended it anyway end of this 
week as I go on holiday.

> (One of the trends I've noticed, and you alluded to it earlier, is how
> the skills required to be a decent programmer have changed over the
> years. 20 years ago, the key was the ability to solve ill-formed
> problems - to actually be able to figure things out. Today you need a
> good memory, since the key is to know the packages that are available.
> In other words, we've moved from a world where most of the code in a
> system was written by our own programmers to one where our programmers
> merely glue pre-written pieces together. 

Absolutely.

> I personally find that sad,
> probably because I'm pretty good at the old way of doing things and I
> don't have the patience to actually *study* the latest tools written
> up in rags like BYTE magazine. But regardless of how it effects you
> or me, there does seem to be a clear trend toward knowledge /
> information, and a corresponding de-emphasis on insight / problem
> solving skills / ability to think out of the box.)

Yeah the knowledge vs. skill balance is definitely tilting.

> 4. WRT your functional style, you might want to look into using
> generic-programming (in C++) instead of OO. From what I've seen of
> your ideas, generic programming via C++ might let you achieve some
> (most?) of the benefits of having your own language without the
> associated costs. It's really a different way of programming, but some
> of the things you described at the very end (in your two examples)
> seem to match pretty closely with the generic programming idea. I'd
> suggest starting with the Lambda library at www.boost.org (there's a
> link to the Lambda library on boost's opening page).

Ah good old lambda algebra! That toolkit you mentioned is an amazing 
example of what can be done in C++, and even better an example of how 
good compilers can optimise (ie; the GCC 3.0 benchmarks).

> I wish you the best.

Yeah you too. If you ever need an assembler programmer outside the 
US, give me a call. I'd take a pay cut to work with competent people 
especially if the work is challenging or interesting.

Two things:
1. For your FAQ, am I right in thinking it's a good idea to either 
make base class copy constructors protected or virtual? The first 
stops copy slicing by giving a compile error if you make a base class 
copy. The second forces use of the derived class' copy constructor.

2. Can you suggest the following next time C++ wants new features:

class String
{
Mutex mutex;
public:
pre String() { mutex.lock(); }
post String() { mutex.unlock(); }
...
};

The idea being that the compiler inserts pre and post code before and 
after every access to String. The use is for multithreading but could 
be useful for other kludges too. You can of course make pre and post 
virtual.

Cheers,
Niall




To: "Niall Douglas" <xxx@xxxxxxx.xxx>
Date: Tue, 06 Aug 2002 11:35:16 -0500
From: "Marshall Cline" <xxxxxxxxxxxxx@xxxxxxxxx.xxx>
Subject: RE: Comments on your C++ FAQ

Hi Niall,

Re assignment, yes that's the right idea. Most of the 
time they can be protected, since most of the time the 
derived classes aren't assignment compatible. E.g., 
if the base class is Fruit, it doesn't make sense to 
assign an Apple with an Orange, so it should be 
protected. If the base class is Set, with derived 
classes SetUsingHashTable and SetUsingBinaryTree, 
etc., then it makes sense to assign them so it should 
probably be virtual, and perhaps pure virtual (since 
it needs probably to be overridden in the derived 
class's anyway; although you *might* be able to 
implement it in the base class by calling virtual 
functions in both 'this' (to insert elements) and in 
the other Set object (to access elements)).

Re your suggestion, good idea - thanks.

Marshall

fin

blog comments powered by Disqus