
Re: New package management (fbsd ports)




On 30-Sep-99 Steve Baker wrote:
> Erik wrote:
>> 
>> okie, I got to play with fbsd's "ports" all today trying to get a system up
>> and running. It's very very similar to this 'autoweb' idea, so I guess I'll
>> try to summarize what it does and what I perceive as its deficiencies, as it
>> may be more logical to merge the fbsd ports collection into linux instead of
>> implementing a whole new scheme.
> 
> If there is something out there that does the right thing - then I'm all
> in favor of it.
> 
>> With the distro, there's a /usr/ports hierarchy built. There's a Makefile, a
>> directory called distfiles where the downloaded source is stored, and
>> several directories broken out by category (graphics, devel, archivers, x11,
>> x11-servers, x11-wm, etc. a whole slew). Inside of each directory is a
>> directory for each package. Inside of that is a Makefile, an md5 file, and
>> some other negligible stuff which I'm not too terribly concerned with yet :)
>> 
>> I went to install windowmaker, which depends on several packages. so I did
>> cd /usr/ports/x11-wm/windowmaker
>> make
>> the makefile in the windowmaker directory contains info on where the main
>> source distro is, which version, what it depends upon, and where the
>> appropriate patches are.
> 
> But this requires that I have already made a "windowmaker" directory and
> downloaded the Makefile for it - right?  Either that or every distro has
> to have directories and Makefiles for ALL the packages there will ever
> be.

Yes. Fortunately the ports dir is pretty easy to make (I think). The ports
collection has a pretty good spread of packages. If the ports framework is
usable by both linux and fbsd, then we have both groups actively working on it.
As far as I can tell, there are fewer active fbsd developers than linux ones,
and they have a very respectable ports situation. I don't think populating a
ports framework with all the fun goodies will be a serious problem :)
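
For anyone who hasn't poked at it, the layout I saw looks roughly like this
(package names are just examples, and the exact file names inside each port
directory are from memory, so don't hold me to them):

    /usr/ports/
        Makefile
        distfiles/              <- downloaded tarballs land here
        graphics/
            libjpeg/
            libtiff/
                Makefile        <- version, where to fetch it, dependencies
                md5             <- checksum for the distfile
        x11-wm/
            windowmaker/
                Makefile
                md5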

> 
> I suppose we could make the autoload script create a directory and a
> Makefile in the /usr/ports approved way:
> 
>   eg  windowmaker.al contains:
> 
>     mkdir -p /usr/ports/x11-wm/windowmaker
>     cd /usr/ports/x11-wm/windowmaker
>     cat >Makefile <<HERE
>     ...stuff...
>     HERE
>     make
> 

How will that guess dependencies? I think having a human maintainer in the
works somewhere would be best. If this becomes semi-standard, then someone
could put together some easy documentation on how to make a port framework,
and hopefully developers themselves (who usually have a pretty good idea of
dependencies and what the newest version is...) will actively maintain ports
for their projects. Make several 'central cvs repositories' that are chained
to balance load, and updating the ports hierarchy is as easy as a cvs update.
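
To make that concrete, the "...stuff..." in a port's Makefile would basically
be a handful of declarations plus an include of the shared magic makefile.
Going from memory of the fbsd macros, it might look something like this (the
variable names, library versions, master site URL and maintainer address are
all approximate or made up, so treat it as a sketch, not gospel):

    # quote the heredoc delimiter so ${PORTSDIR} lands in the Makefile verbatim
    cat >Makefile <<'HERE'
    DISTNAME=      WindowMaker-0.60.0
    CATEGORIES=    x11-wm
    MASTER_SITES=  ftp://ftp.example.org/pub/windowmaker/
    MAINTAINER=    someone@example.org
    # each entry is lib.version:port-dir -- this is where the recursion
    # into graphics/tiff and graphics/jpeg comes from
    LIB_DEPENDS=   tiff.4:${PORTSDIR}/graphics/tiff \
                   jpeg.9:${PORTSDIR}/graphics/jpeg
    .include <bsd.port.mk>
    HERE

The nice property is that a maintainer only has to keep those few lines
current; all the download/patch/build logic lives in the shared include.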

> 
>> The makefile includes some other makefile that does the magic (I haven't
>> looked at it yet). okie, I run "make". It says it can't find
>> windowmaker-0.60.0.tar.gz on the system, so it proceeds to download it,
>> unpack it, and then check the dependencies.
>> It tells me I don't have libtiff installed, so it goes to
>> /usr/ports/graphics/libtiff and runs 'make' there...
> 
> But in our case, that directory/Makefile pair might not exist either, so
> we still need something (preferably something downloaded from the libtiff
> web site) to create that directory and *its* Makefile.
> 

I'm not saying we could instantly pop over to a totally different and new
method :) I think the best approach would be to overhaul and adopt the
existing ports framework, then begin migrating packages over to it.

>>, which proceeds to download
>> libtiff, patch it, and check the dependencies of libtiff.
>> It tells me I don't have libjpeg installed, so it does it again.
>> Then it builds libtiff, then it builds windowmaker (after other
>> dependencies).
>> This was all done automagic, and if an md5sum fails, it stops the entire
>> process right there. Everything was installed into the /usr/local directory
>> and the permissions looked pretty tight. It's possible to over-ride md5sum
>> checks with a parm to make. Running "make clean" in /usr/ports proceeds to
>> walk all the directories and clean things up.
>> 
>> The deficiencies that I perceived are
>> 
>> 1. It uses a program called 'fetch' which craps out on some servers. I
>> ended up installing ncftp3 so I could get the files into
>> /usr/ports/distfiles by hand.
> 
> wget seems a pretty solid tool for this kind of thing. It beats any kind
> of FTP-like tool because it knows how to get things via http as well as
> ftp.
>  

wget is impressive, but not omnipresent just yet. It's very small, though, so
I wouldn't be opposed to having it handle downloading packages. A wrapper
script with some exception handling should be implemented to deal with host
name lookup failures, route failures, down machines, moved packages, busy
servers, stoned servers, etc.
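
Something along these lines is what I have in mind -- purely a sketch: the
script name and the mirror-list handling are made up, and it assumes either
md5sum (linux) or md5 (fbsd) is around for the checksum:

    #!/bin/sh
    # fetch-dist.sh <file> <md5> <url> [url ...]   (hypothetical, not in ports)
    # Try each mirror in turn with wget, then verify the recorded MD5 sum.
    file=$1; sum=$2; shift 2
    cd /usr/ports/distfiles || exit 1
    for url in "$@"; do
        # -c resumes a partial download if a previous mirror died mid-transfer
        wget -c "$url/$file" && break
    done
    [ -f "$file" ] || { echo "all mirrors failed for $file" >&2; exit 1; }
    if command -v md5sum >/dev/null 2>&1; then
        got=`md5sum "$file" | awk '{print $1}'`
    else
        got=`md5 -q "$file"`
    fi
    if [ "x$got" != "x$sum" ]; then
        echo "md5 mismatch for $file -- refusing to build" >&2
        exit 1
    fi

That covers down machines and moved packages crudely (just try the next
mirror); smarter retry/backoff and a "the package moved, ask the central
repository for a new URL" step could be layered on later.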

>> 2. Some packages were outdated. It wanted libgif 3.0, but libgif 4.1.0 is
>> the freshest, and 3.0 was a little difficult to find (esr doesn't seem to
>> have an account where the fbsd ports thought he should). Trying to kludge
>> in 4.1.0 with some makefile editing didn't work; the patch files were a bit
>> sensitive.
> 
> That's the reason I'd like to have the script for actually fetching a
> particular version of a library to be stored on the library's web site.
>  

This is currently how the ports package is implemented. If we do use a central
cvs for the ports framework, then this would be acceptable. New packages would
be merged in as soon as the port maintainer gets to it. As far as I can tell,
the ports package offers no upgrade path between fbsd release versions. I don't
think that's acceptable. I may be wrong, though; I'm just starting in on it :)
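
If it does go the cvs route, keeping a ports tree current should be no harder
than this (the server name and repository path are made up, and anonymous
pserver access is an assumption on my part):

    # first time: grab the tree from whichever mirror is closest
    cd /usr
    cvs -d :pserver:anoncvs@cvs.example.org:/home/ncvs login
    cvs -d :pserver:anoncvs@cvs.example.org:/home/ncvs checkout ports
    # afterwards, freshen it whenever (-d picks up new dirs, -P prunes dead ones)
    cd /usr/ports && cvs update -d -P

The chained mirrors then just have to stay in sync with whatever repository is
upstream of them.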

>> 3. It downloaded the file before recursing into dependencies. This didn't
>> seem like a problem as I was doing it, but as was mentioned, if a low level
>> dependency cannot be met, then that's a whole lot of download for nothing.
> 
> Yes - It seems pretty dumb to download things top-down; the bottom-up
> approach works better since you don't end up downloading a game if for
> some reason you can't run it.
> 
> Another thing I can see as a problem is that my proposal somewhat depends
> on all the library managers adding an autoload script to their web sites.
> There is obviously going to be a period when that won't happen (especially
> if the scheme is slow to take off).
> 
> Hence, the scheme has to allow the autoload script to be stored somewhere
> different from the library it refers to.  Assuming we can manage that,
> there would be the option for the game writer to create his own autoload
> scripts for libraries that don't have such scripts maintained on their own
> sites.
> 
> Another possibility would be for some kind person to provide autoload
> scripts for a LARGE number of libraries and other programs that don't have
> autoload files of their own.
> 
> Ideally though, those files should be distributed across the web so that
> each library maintainer can maintain his or her own autoload files.
>  

See above, multiple cvs repositories :) I think I need to start reading all the
way through emails before I open my mouth. I think as soon as I get free time,
I'm gonna start looking at how they implement it and try to figure out how much
we can steal. I really think this should be viewed as a joining of forces, fbsd
helping linux and linux helping fbsd. I think a symbiotic relationship would be
much more beneficial to everyone than a parasitic one.
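
One of the things worth fixing when we steal it is the top-down download order
Steve mentions above. A bottom-up pass could refuse to fetch anything until
every port directory in the dependency chain at least exists -- a rough sketch,
where 'depends-list' is an invented target name standing in for whatever the
real ports framework provides to print a port's dependency directories:

    #!/bin/sh
    # check_deps <portdir>: walk the dependency tree first and bail out
    # before a single byte is downloaded if anything is missing.
    check_deps() {
        for dir in `(cd "$1" && make -s depends-list)`; do
            if [ ! -d "$dir" ]; then
                echo "no port directory for dependency: $dir" >&2
                return 1
            fi
            check_deps "$dir" || return 1
        done
        return 0
    }
    check_deps /usr/ports/x11-wm/windowmaker &&
        (cd /usr/ports/x11-wm/windowmaker && make)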

If a cvs network is the way to go (and I feel very strongly that it is), I
don't think we'll have much problem finding high-speed hosts. I bet various
metalab/sunsite places will agree, and companies with a vested interest in the
free *nix communities may agree if approached (ibm, sun, sgi, etc).

If anyone is versed in freebsd, please feel totally free to correct or comment.
I'm just beginning with fbsd, and trying to get my feet under me. I'm sure
there are inside tricks that I don't know about and stuff :) When I learn more,
I'll open my big fat mouth again and bore everyone :)

> -- 
> Steve Baker                  http://web2.airmail.net/sjbaker1
> sjbaker1@airmail.net (home)  http://www.woodsoup.org/~sbaker
> sjbaker@hti.com      (work)
> 

        -Erik <br0ke@math.smsu.edu> [http://math.smsu.edu/~br0ke]

The opinions expressed by me are not necessarily opinions. In all
probability, they are random rambling, and to be ignored. Failure to ignore
may result in severe boredom or confusion. Shake well before opening. Keep
Refrigerated.