> JF Martinez wrote:
> > >
> > Partitioning is not difficult in RedHat/Indy, but carving out a hole for
> > Linux is, since FIPS can hardly be called user-friendly.
> Yes, a replacement for FIPS was exactly what I was looking for, but not
> because it isn't user-friendly; I read the code, it is object-oriented
> and it should have been possible to work with. But the real problem
> was that it only resizes FAT filesystems that are empty at the end.
That is, a defragmented FAT filesystem. But the whole process is
painful: defragment under Windows, then go to DOS and run FIPS, then
reboot. We need something better.
> > Does parted allow the growing/shrinking of Linux or Windows partitions?
> Yes. It also creates filesystems.
> I tried to put a DOS partition after all my Linux partitions, on
> a disk that also holds a Windows partition. Windows still has
> problems with that partition, but it is with parted (rather than
> DOS/Windows tools or fdisk/mkdosfs) that there are the fewest problems.
Windows is picky (read: it is an MS tactic to make things harder for
other systems) about where its partitions can go. To begin with, it
won't boot if it is not on the first partition of the first disk.
> Parted can be used in two ways: there is a C library that does all the
> work, and a command-line front end, interactive or not.
> So it is possible to use either the library (its API is documented) or
> the whole thing from scripts.
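The scriptable mode mentioned above can be sketched against a file-backed image instead of a real disk. This uses modern GNU Parted syntax (`-s` for non-interactive), which may differ from the command set of the era discussed here; the image name and sizes are illustrative:

```shell
# Sketch: drive parted non-interactively from a script, on a disk image.
# Modern GNU Parted syntax assumed; sizes and file name are illustrative.
truncate -s 64M disk.img                      # empty 64 MB "disk"
if command -v parted >/dev/null 2>&1; then
    parted -s disk.img mklabel msdos          # DOS partition table
    parted -s disk.img mkpart primary fat16 1MiB 33MiB
    parted -s disk.img print                  # show the result
else
    echo "parted not installed; skipping"
fi
```

The same commands work on a real device node, which is what an installer script would pass instead of `disk.img`.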
> [I told here about a Windows app that automates setup programs ...]
> > Yes, very interesting.
> So, what should we do with it?
Try to integrate it, of course.
> > > Last thing: I can recompile kernels. I already tried to build a kernel
> > > that could run on as many different computers as possible, modularized
> > > for everything possible. I had problems with strange hardware, but I
> > > can dig and ask in the relevant places if needed.
> > >
> > Do you have a fast box? I have found that recompiling a kernel when
> > you really compile everything (be it in modules or in the main body)
> > takes over an hour on a PII 400. And then you need to rebuild for
> > Pentiums, and still a third time for PentiumPros. In 2.2 a single
> > universal kernel means too much performance loss.
> Sure. But this was not meant to be the real kernel, just a kernel used
> to boot the installer.
There is no real difference between a universal kernel and an install
kernel, except that the install kernel has some features stripped out to
save space. Both are compiled for the 386.
In 2.0 there was not much difference between a kernel compiled for
Pentium and one compiled for 386 (most optimizing was done at runtime),
except for selective invalidation of TLB entries, and that made little
difference. In 2.2, what I have seen in the source points to far
greater optimizations, and in addition there is the question of SMP boxes.
Anyway, the method presently used is to boot an install kernel compiled
for the 386, and during the install to look at /proc/cpuinfo and select
the kernel suited to the processor.
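That selection step can be sketched in shell. The flavour names and package file below are illustrative assumptions, not the actual Red Hat naming:

```shell
# Sketch: pick a kernel flavour from the CPU family in /proc/cpuinfo.
# Flavour names and the package file name are illustrative assumptions.
family=$(awk -F': ' '/^cpu family/ { print $2; exit }' /proc/cpuinfo)
case "$family" in
    6|[7-9]|1?) flavour=i686 ;;   # PentiumPro/PII class and later
    5)          flavour=i586 ;;   # Pentium class
    *)          flavour=i386 ;;   # safe fallback, as on the install media
esac
echo "would install kernel-$flavour.rpm"
```

An installer would replace the final `echo` with the actual package installation for the chosen flavour.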
Also, kernels are presently built with -m486 whatever the real
processor (and so use the alignment restrictions of higher processors);
this is probably because old gcc 2.7 did not generate correct code with
-mpentium. However, my benchmarks show a clear improvement if you
compile for the specific processor, so we need to find out whether it is
now safe to do this.
> I don't have a fast box (a Celeron 400), but the compilation could be made
When I spoke of a slow box I was thinking of P75s, not Celeron 400s.
> > My personal method is: go to the Red Hat FTP site, download the SRPM
> > they made from the latest kernel, do an "rpm -U" of it, then go to where
> > the sources got installed (/usr/src/redhat/SOURCES) and edit the files
> > describing what has to be compiled and how. Then "rpm -ba". They have
> > Alan Cox on their team, so I don't think we can improve much on them.
> > But we can add features they considered unimportant, like NTFS support
> > (it is presently read-only).
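The workflow described above, as a dry-run sketch. The SRPM name, paths, and spec file name are illustrative guesses at a Red Hat 6-era layout, not verified:

```shell
# Dry-run sketch of the "rebuild from the Red Hat SRPM" method.
# run() only echoes each step; replace its body with "$@" to execute.
run() { echo "+ $*"; }
run rpm -U kernel-2.2.5-15.src.rpm         # hypothetical SRPM name
run cd /usr/src/redhat/SOURCES             # edit build configs here
run rpm -ba /usr/src/redhat/SPECS/kernel-2.2.spec  # rebuild the RPMs
```

Keeping the steps in a script like this also makes it easy to repeat the rebuild for each target processor.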
> I think NTFS read-write support is still buggy (but I am not sure).
It is. I was referring to including read-only support for NTFS.
> Well, I think a good approach to the kernel question would be that a first
> kernel is used to boot and install, recognizing as much hardware as
> possible. Then another kernel is set up, optimized for the machine
> (which you can create as you explained above).
> Am I right?
> > > Well, one last thing, just something to think about: I am aware of some
> > > systems that only install the very basic system, so perhaps it could be
> > > useful to use this as a tool to save some work. But I don't think it is
> > > a priority at all.
> Well, in the projects I talked about, there were such ideas: recompiling
> for particular hardware. It goes further, as it recompiles everything.
> It is still only a project, but if they get further, it could save us a
> lot of work. I keep watching.
I thought about it years ago. I also had the curiosity to quantify
the memory savings between a properly compiled distribution kernel and
an optimized one. My conclusion was that it is not worth the trouble
on 32 meg boxes, and probably not on 16 meg boxes. 8 meg boxes in
active service are rare now, and Indy will probably not install on
them anyway.
Jean Francois Martinez
Project Independence: Linux for the Masses