
Re: New package management (fbsd ports)



Erik wrote:

> > But this requires that I have already made a "windowmaker" directory and
> > downloaded the Makefile for it - right?  Either that or every distro has
> > to have directories and Makefiles for ALL the packages there will ever
> > be.
> 
> yes. Fortunately the ports dir is pretty easy to make (I think). the ports
> collection has a pretty good spread of packages.

That's never going to be acceptable - when a new package is announced,
people want to download it immediately.  Nobody is going to want to wait
while someone updates a centralized /usr/ports directory - and then go
to that site, download the new /usr/ports, install *that*, and only then
get that game!

Also, whichever site maintains that directory is going to feel the
'slashdot effect' with frightening regularity.

> If the ports framework is
> usable by both linux and fbsd, then we have both groups actively working on it.
> As far as I can tell, there are less active fbsd developers than linux,

(by about two orders of magnitude!)

> and they have a very respectable ports situation. I don't think populating a
> ports framework with all the fun goodies will be a serious problem :)

I think maintaining it in a timely manner could become a problem - and
it doesn't scale very well.  Eventually, that /usr/ports directory is
going to become VERY large!  Suppose the whole world uses Linux and
Windoze is history; there could quite easily be a million programs out
there that you could download.

You'd need a million directories and a million Makefiles in your
/usr/ports area - and to maintain that you'd need to download perhaps a
Gigabyte from the site that maintains /usr/ports.  OK, you could
organize the /usr/ports site so that you only grab the parts of that
hierarchy that you need to build a specific package - but now you've
just moved the problem to needing to get all the parts of /usr/ports
that your package needs.

There is a political issue too.  Suppose I wrote a piece of software
that the maintainer of /usr/ports didn't approve of?  Suppose they
refused to add my package to it for some reason?

I prefer a scheme that's under my own control.

> > I suppose we could make the autoload script create a directory and a
> > Makefile in the /usr/ports approved way:
> >
> >   eg  windowmaker.al contains:
> >
> >     mkdir -p /usr/ports/x11-wm/windowmaker
> >     cd /usr/ports/x11-wm/windowmaker
> >     cat >Makefile <<HERE
> >     ...stuff...
> >     HERE
> >     make
> 
> how will that guess dependencies?

In the Makefile, presumably - the same way the present /usr/ports does it.
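To make that concrete, here's one way the autoload script could record
dependencies in the Makefile it writes.  This is only a sketch extending
the windowmaker.al example - the DEPENDS and MASTER_SITE variable names
and the dependency paths are invented for illustration, not real ports
syntax (the actual FreeBSD ports system uses variables like LIB_DEPENDS
and MASTER_SITES):

```shell
#!/bin/sh
# Hypothetical expansion of windowmaker.al: the generated Makefile
# carries the dependency list.  A build step could walk DEPENDS and run
# each missing dependency's own .al script first.
PORTSDIR=/tmp/ports-demo            # stand-in for /usr/ports
mkdir -p $PORTSDIR/x11-wm/windowmaker
cd $PORTSDIR/x11-wm/windowmaker
cat >Makefile <<HERE
# DEPENDS names other ports in the same hierarchy (invented examples).
DEPENDS= graphics/libpng x11/libX11
# Where to fetch the source tarball from.
MASTER_SITE= ftp://ftp.windowmaker.org/pub/
HERE
```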

However, the more I think about it, the more I think the scheme I
outlined yesterday is superior.

> I think having a human maintainer in the works somewhere would be best.

I think that's the biggest flaw!

> If this becomes semi-standard, then someone
> could crop up some easy documentation on how to make a port framework and
> hopefully developers themselves (who usually have a pretty good idea of
> dependencies and what the newest version is...) will actively maintain ports
> for their projects. Make several 'central cvs repositories' that are chained to
> balance load, and updating the ports hierarchy is as easy as a cvs update.

You'd give CVS write-access to the /usr/ports server to just anyone? 

Yikes!

> > wget seems a pretty solid tool for this kind of thing. It beats any
> > kind of FTP-like tool because it knows how to get things via http as
> > well as ftp.
> >
> 
> wget is impressive, but not omnipresent just yet.

Hmmm - well perhaps not.

> But it's very small, so I
> wouldn't be opposed to having that handle downloading packages.

Certainly each autoload script could check for wget's existence and
patiently explain how to (manually) download and install it.
Alternatively, I suppose we could use ftp to download it if it's absent.
That's bad news from a portability point of view though, because not all
versions of ftp can be driven from command-line options.
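A minimal sketch of that check - the structure and messages are invented
for the example, and nothing here comes from a real autoload script:

```shell
#!/bin/sh
# Hypothetical autoload preamble: prefer wget, fall back to ftp, and
# otherwise tell the user how to install wget by hand.

have() {
    # succeed if the named program is somewhere on $PATH
    command -v "$1" >/dev/null 2>&1
}

if have wget; then
    fetch() { wget -q "$1"; }
elif have ftp; then
    # Not every ftp client can be scripted non-interactively, so this
    # fallback is unreliable across systems - exactly the portability
    # problem described above.
    fetch() { ftp "$1"; }
else
    fetch() {
        echo "No download tool found - please install wget first" >&2
        echo "(see http://www.gnu.org/software/wget/)" >&2
        return 1
    }
fi
```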

If the autoload mechanism ever became popular, wget would appear on
distros pretty soon.

> A wrapper
> script with some exception handling should be implemented to deal with host
> name lookup failures, route failures, down machines, moved packages, busy
> servers, stoned servers, etc

Yep.
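Such a wrapper could be quite small.  A sketch, with an invented retry
policy - the attempt count, timeout, and pause are illustrative only,
and the mirror URLs are made up:

```shell
#!/bin/sh
# Hypothetical download wrapper: retry a few times, then give up so the
# caller can move on to the next mirror.

fetch_with_retry() {
    cmd=$1          # downloader to run, e.g. "wget -q -T 30"
    url=$2
    tries=${3:-3}   # default to three attempts
    n=1
    while [ "$n" -le "$tries" ]; do
        if $cmd "$url"; then
            return 0                    # download succeeded
        fi
        echo "attempt $n/$tries failed for $url" >&2
        n=$((n + 1))
        sleep 1                         # brief pause before retrying
    done
    return 1        # every attempt failed; caller tries another mirror
}

# Usage sketch - fall through a list of mirrors:
#   fetch_with_retry "wget -q -T 30" ftp://mirror1.example.org/pkg.tar.gz ||
#   fetch_with_retry "wget -q -T 30" http://mirror2.example.org/pkg.tar.gz
```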

> If a cvs network is the way to go (and I feel very strongly that it is), I
> don't think we'll have much problem finding high speed hosts. I bet various
> metalab/sunsite places will agree, companies with vested interest in the free
> *nix communities may agree if approached (ibm, sun, sgi, etc).

But that's *so* much more complex than the autoload mechanism.

-- 
Steve Baker                  http://web2.airmail.net/sjbaker1
sjbaker1@airmail.net (home)  http://www.woodsoup.org/~sbaker
sjbaker@hti.com      (work)