
(FC-Devel) Re: Argo & architecture

Hi Jason + others,

> By "components" I assume you mean pieces of code running in different
> processes.  I am totally in favor of good modularization, but having a
> lot of processes gets really sticky.

That is what I meant, but I don't necessarily agree that it needs to get
sticky. If we use a good (pre-existing) foundation like CORBA we should be
free to totally ignore IPC issues, process management, etc. Any problems with
IPC would be handled using exception mechanisms, so IPC would not need to
intrude into standard control flow at all. Of course this is the ideal, but I
would hope that in practice we could get close. This may be an unrealistic
expectation on my part, in which case I would agree that it is not a good
idea. I haven't had a lot of experience in using nice systems like this (I
have used some primitive ones, yuck!) Has anyone here had much experience
designing and coding using CORBA, ACE or any of our other potential
environments?
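To make the "exceptions keep IPC out of the control flow" idea concrete, here
is a minimal sketch. All the names (CodeGenerator, RemoteCallException, the
local implementation) are hypothetical, not real CORBA API; a generated CORBA
stub would simply be another implementation of the same interface.

```java
// Hypothetical sketch: IPC failures surface as exceptions, so the calling
// code reads the same whether the implementation is in-process or remote.

class RemoteCallException extends Exception {
    RemoteCallException(String msg) { super(msg); }
}

interface CodeGenerator {
    String generate(String className) throws RemoteCallException;
}

// In-process implementation; a CORBA stub could satisfy the same interface.
class LocalCodeGenerator implements CodeGenerator {
    public String generate(String className) {
        return "public class " + className + " {\n}\n";
    }
}

public class IpcSketch {
    public static void main(String[] args) {
        CodeGenerator gen = new LocalCodeGenerator(); // could be a remote stub
        try {
            System.out.print(gen.generate("Foo"));
        } catch (RemoteCallException e) {
            // IPC problems land here, outside the normal control flow
            System.err.println("code generation unavailable: " + e.getMessage());
        }
    }
}
```

The caller never mentions processes or sockets; only the catch clause betrays
that the call might cross a process boundary.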

> There are a number of trade-offs here.  Component-based systems are
> much easier to develop if you have a good, stable architecture and
> interface specs.  Developing such an architecture is best done by
> people who have substantial experience with similar systems.  It seems
> that the FreeCASE team has a lot of experience in a lot of things, but
> I think this would be the first component based CASE tool for all of
> us (including me), and it is unlikely that we would get interfaces
> right on the first few tries.  Even if we do it as well as current CASE
> tools, we might limit ourselves to the feature-set of current tools.

Well, first I have to admit that it would be a first for me too :) I would
expect that rather than limiting ourselves to a specific feature set, we would
actually make it easier to expand on the tool's functionality. Again, that may
be a naive assumption on my part. You are certainly right that we would need
good interface specs and a decent architecture, but I would regard that as
essential for a good design no matter what sort of system we are producing
(and I am sure that you would too).

> One really helpful thing is the UML standard.  If we can agree on a
> common data repository and data representation then we could probably
> be able to do a star-topology system where everything goes through the
> repository.  If you look at the history of CASE standards, most of
> them rely on shared repositories.  We would be relying on standards
> such as OMG's MOF to determine what the interface to the repository
> should be.  An alternative would be to rely on a standard streaming
> format, like XMI or UXF and a common representation of a UML document
> in memory.  File based integrations are not so flashy, but they work,
> they are easier to test, and it would make it easier to integrate with
> existing development utilities, e.g., CVS.

I agree that this is one thing we need to figure out and get right as soon as
possible. A shared repository definitely sounds like a good idea to me.
Whether it is implemented using streaming to flat files in CVS, or a flashy
distributed bells-and-whistles OO database, should hopefully be completely
irrelevant to anything other than the repository itself. I can't comment on
the appropriate internal data representation for the system, as I have not
looked at the alternatives. Can someone knowledgeable summarise the pros and
cons of the different formats?
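The "irrelevant to anything other than the repository itself" point amounts
to hiding the backend behind one interface. A minimal sketch, with made-up
names (ModelRepository, InMemoryRepository) and XMI fragments stored as
plain strings purely for illustration:

```java
// Sketch: clients depend only on this interface, so a flat-file/CVS backend
// and an OO-database backend are interchangeable. Names are hypothetical.

import java.util.HashMap;
import java.util.Map;

interface ModelRepository {
    void store(String elementId, String xmi);   // e.g. an XMI fragment
    String load(String elementId);
}

// Trivial in-memory stand-in; a file-backed version would write the same
// fragments into files that CVS could then version.
class InMemoryRepository implements ModelRepository {
    private final Map<String, String> elements = new HashMap<>();
    public void store(String elementId, String xmi) { elements.put(elementId, xmi); }
    public String load(String elementId) { return elements.get(elementId); }
}

public class RepositorySketch {
    public static void main(String[] args) {
        ModelRepository repo = new InMemoryRepository();  // swap backends here
        repo.store("class:Foo", "<UML:Class name='Foo'/>");
        System.out.println(repo.load("class:Foo"));
    }
}
```

Only the line that constructs the repository would change between a CVS-backed
and a database-backed build; every other component codes to the interface.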

One obvious point worth making: we should be careful that whatever format we
choose decouples the models (semantics) from their graphical representations
(syntax), and similar good stuff. Although I know we are focussing on UML at
the moment, I would hope that we would eventually support other methodologies
and graphical diagrams. As you mention below, a UML meta-model should be
flexible enough to cope with the semantics of other methodologies, but if we
tie model analysis features (for example) to syntactic elements we will dig
ourselves into a deep, dark hole right from the start.
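The decoupling can be sketched in two classes. These names (UmlClass,
ClassFigure) are illustrative, not any tool's actual types: the semantic
element knows nothing about drawing, and one element can back several
figures on several diagrams.

```java
// Hypothetical sketch of keeping semantics separate from presentation.

import java.util.ArrayList;
import java.util.List;

class UmlClass {                      // semantic element (the "model")
    final String name;
    UmlClass(String name) { this.name = name; }
}

class ClassFigure {                   // graphical element (the "view")
    final UmlClass element;           // refers to, never owns, the semantics
    int x, y;
    ClassFigure(UmlClass element, int x, int y) {
        this.element = element; this.x = x; this.y = y;
    }
}

public class DecouplingSketch {
    public static void main(String[] args) {
        UmlClass foo = new UmlClass("Foo");
        // The same semantic class appears on two diagrams at different spots;
        // analysis code would consume 'foo' and never touch the figures.
        List<ClassFigure> figures = new ArrayList<>();
        figures.add(new ClassFigure(foo, 10, 20));
        figures.add(new ClassFigure(foo, 200, 50));
        System.out.println(figures.get(0).element == figures.get(1).element); // true
    }
}
```

Model analysis written against UmlClass keeps working even if we later add a
second notation with entirely different figure classes.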

> As far as I know, no one has implemented a truly distributed CASE
> tool, other than simply placing a repository on a different host.
> Some tools, like (older versions of?) Software Through Pictures and
> Cyanne's ObjectTeam seem to have a shell application and separate
> applications for diagram editing, code generation, reporting, and
> (limited) model analysis.  However, I think all of these components
> execute on the user's desktop.  Maybe someone in the project knows
> more about these tools?

That sounds close to the model I was thinking of. Of course, regardless of
the implementation, the user would see the system as one integrated
application (unless there was benefit in exposing its modular nature in some
cases).

> However Argo is no "thin client" by any means.  Argo will probably
> always be a memory and CPU hog.  Luckily memory and CPU time are cheap
> and plentiful.  Some of the best features of Argo come from its
> relatively tight integration of model with analysis, model with code
> generation, and model with reverse engineering.  Design critics are
> active all the time and constantly interacting with the model and user
> interface.  Code generation is (intended to be) done in a very
> incremental and interactive way so that the user can see how each
> model change affects the code immediately.  Likewise, reverse
> engineering (little more than parsing) should be integrated tightly
> enough that users can modify the code fragments shown in the "Source"
> tab and see the effects in the diagram or property tab immediately.
> Relying on IPC or launching external programs for analysis and code
> generation would almost certainly make Argo's current features too
> slow.

Hmm, I am a bit confused; maybe we are having terminology problems (or maybe
I need to ease up on the coffee). I think that what you are referring to as
the model I am thinking of as the meta-model. Isn't the model the thing that
is produced using a CASE tool (expressed as one or more views on the model,
i.e. diagrams)? Anyway, I agree that a tight coupling is unavoidable between
the UML meta-model and the various components. I don't necessarily agree that
IPC would be too slow for model analysis or code generation, though. What
sort of delay is introduced by a remote method invocation using CORBA,
compared to the same method being called in-process? In the case where the
method is serviced by a different process on the same machine (the most
likely scenario for us), or even a machine connected to the same LAN (also
likely), I would have thought the delay would be totally insignificant,
especially considering the speed of interpreted Java today (no offence,
anyone :) Has anyone out there got these sorts of figures, or even an
experienced guess? And of course making a (potentially) remote call gives us
the option to code the method in other languages, perhaps making a big
difference in the speed of the method itself.
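For the same-machine case the question can be half-answered with a loopback
socket: compare an in-process call with a one-byte round trip over
localhost. This is a rough sketch, not a benchmark; the numbers vary by JVM
and OS, and a real CORBA invocation would add marshalling on top of the raw
socket cost.

```java
// Rough comparison: in-process method calls vs loopback-socket round trips.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class LatencySketch {
    static int inProcess(int x) { return x + 1; }

    public static void main(String[] args) throws Exception {
        // Baseline: 1000 in-process calls
        long t0 = System.nanoTime();
        int acc = 0;
        for (int i = 0; i < 1000; i++) acc = inProcess(acc);
        long local = System.nanoTime() - t0;

        // 1000 loopback round trips: one byte out, the same byte echoed back
        try (ServerSocket server = new ServerSocket(0)) {
            Thread echo = new Thread(() -> {
                try (Socket s = server.accept()) {
                    InputStream in = s.getInputStream();
                    OutputStream out = s.getOutputStream();
                    int b;
                    while ((b = in.read()) != -1) { out.write(b); out.flush(); }
                } catch (Exception ignored) { }
            });
            echo.start();
            try (Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
                OutputStream out = client.getOutputStream();
                InputStream in = client.getInputStream();
                long t1 = System.nanoTime();
                for (int i = 0; i < 1000; i++) { out.write(1); out.flush(); in.read(); }
                long remote = System.nanoTime() - t1;
                System.out.println("in-process ns: " + local + ", loopback ns: " + remote);
            }
            echo.join();
        }
    }
}
```

Whatever the exact ratio turns out to be, the interesting question is whether
the absolute per-call cost is noticeable next to human reaction times.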

> The implementation might be changed a little to break things apart,
> but not too much because the main features all need direct and
> immediate access to the UML model and the user interface.  For
> example, critics and code generation need to be running on the user's
> desktop to give good interactive response times.  I have just added a
> module dependency diagram to the web site.

I am sorry, but I fail to see why this cannot be provided using components
running as separate processes. Firstly, the model: if you mean the UML
meta-model, it would be compiled into all the processes. If you mean the
actual models that the user is working on creating and modifying, access will
have to be co-ordinated through the repository anyway, no? Every process can
have direct access to its own read-only version, and be registered for change
notifications on the parts it is interested in (with some sort of diff sent
with the notification if requested, maybe?). Don't the critics suggest
changes, rather than implement them directly? And if they do implement them
automatically, wouldn't we need some sort of contention mechanism anyway
between the editor and the critic?

Secondly, the interface. Why can't the critics (etc.) communicate with the
user interface process using IPC? If the response time is the only issue then
I think that we should take a look at it a bit more closely before we assume
it is unworkable. IPC may be slow compared to CPU speeds, but it can be
blindingly fast to us poor limited humans :)

> You can customize UML to add meaning to the diagrams.  In fact, I
> published a conference paper on one way to do that.  See my home page
> for the paper "Integrating Architecture Description Languages with a
> Standard Design Method".  Different graphical diagrams is a related
> issue.  I would rather see us focus on UML than try to spread
> ourselves too thin on UML, OMT, Booch, Fusion, etc.

Thanks for the reference, I'll check it out. I agree that we should focus on
UML, but I also think we should try to avoid making decisions that will make
it a lot harder for us to implement other methodologies/notations later on.

> Thanks.  I like the critics and I am glad that other people like the
> idea.  One thing that I would love to see happen is for people other
> than me to start writing critics.  The framework is there and it is
> pretty simple, but can people codify their ideas about good design in
> a way that helps others?

Hah! Excellent question! I am sure that SOME people can (isn't that kind of
the definition of a methodology?) Luckily for all of us, it should be
possible for lesser mortals to translate their codified ideas from their
English and diagrammatic representations into whatever language we build
critics in. Hmm, thinking about it, we had better have a simple mechanism for
disabling and prioritising critics; I have a very strong feeling that not
everyone will be receptive to all suggestions, or even that all critics will
necessarily agree!
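The disable/prioritise mechanism could be as simple as a flag and a number on
each critic, with the runner filtering and sorting before presenting
suggestions. This sketch is hypothetical and is not Argo's actual critic
framework; all names are made up.

```java
// Sketch: each critic carries a priority and an enabled flag; the runner
// skips disabled critics and orders the rest by priority.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

abstract class Critic {
    final String name;
    final int priority;        // lower number = more urgent
    boolean enabled = true;
    Critic(String name, int priority) { this.name = name; this.priority = priority; }
    abstract String critique(String model);   // returns a suggestion, or null
}

class CriticRunner {
    static List<String> run(List<Critic> critics, String model) {
        List<Critic> active = new ArrayList<>();
        for (Critic c : critics) if (c.enabled) active.add(c);
        active.sort(Comparator.comparingInt((Critic c) -> c.priority));
        List<String> suggestions = new ArrayList<>();
        for (Critic c : active) {
            String s = c.critique(model);
            if (s != null) suggestions.add(c.name + ": " + s);
        }
        return suggestions;
    }
}

public class CriticSketch {
    public static void main(String[] args) {
        List<Critic> critics = new ArrayList<>();
        critics.add(new Critic("NamingCritic", 2) {
            String critique(String model) { return "class names should be capitalised"; }
        });
        critics.add(new Critic("NoisyCritic", 1) {
            String critique(String model) { return "everything is wrong"; }
        });
        critics.get(1).enabled = false;  // the user silences the one they disagree with
        for (String s : CriticRunner.run(critics, "model")) System.out.println(s);
    }
}
```

A per-user configuration of flags and priorities would also neatly absorb the
case where two critics disagree: let the user decide which one to hear.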

Please forgive me if I appear to be missing the point of some of your
comments. I think we are still at that confusing, exciting, wonderful stage
where everyone is brimming with ideas, but we haven't yet worked out a shared
ontology/vocabulary to communicate them properly (I certainly am, anyway).


Duane Griffin
Paradigm Technology Ltd
Phone +64 4 4951000