Re: XDM... source level
> Now, there are a few things to notice though. First, you can have
> more than one local server (more than one display; currently I think
> this is only VGA and Mono for PCs). So, before actually exiting
> XDM itself, you need to check that it has no displays running (i.e.
> that the daemon is the only process left); that's not difficult, since it
> keeps a list of active displays (XDM itself is kept on the list as a
> display entry when run in daemon mode).
I was actually thinking of keeping XDM running even if the X server fails.
Once RemoveDisplay(d) is run, XDM won't start that server again, so we have
the console back. Since XDM is still running, any other servers (XDMCP
stuff) can keep running, say if the user has an X terminal for whatever
reason (they're cheap, and relatively 'easy' to set up).
Say we have a scenario where the user just upgraded (or downgraded, say
their card fried) their video card. They turn on their machine, XDM
starts, and the X server freaks. .._0.startAttempts would be set to 2 or 3
or something similarly low, and once it fails out completely, the console
would be freed up. A program could then be run to start up a VGA or NVidia
server with a single recovery client, which would just be the X
configuration program from the installer with the 'recovery' flag set, so
the user can try to fix things.
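For reference, that retry limit is just an X resource in xdm-config; assuming it follows the usual DisplayManager pattern for display :0, it might look something like:

```
! xdm-config: give the local server only a couple of tries before
! giving up and freeing the console for the recovery program
DisplayManager._0.startAttempts:	2
```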
Ideally, the user would never leave runlevel 5, and XDM would never be
killed. It would just stop trying the local server when it fails, do the
recovery bit, and when that's done you just killall -HUP xdm, at which
point it will reread the Xservers file and start up the local server again.
The main component of this of course is the mechanism to start up the
recovery process. A fork()/exec() might work, but is likely to be
problematic. Then again, there's no other decent way to do it. system()
is synchronous, which would hose out XDM while the recovery is happening,
which is a Bad Thing(tm), since it *is* a daemon, after all... :-}
As far as the X server trying to drop resolution in stages until it finds
something, that's part of most (all?) servers anyway. Or close to it. I
think it takes each modeline that matches the highest configured
resolution and dotclock, and tries to set it. If that fails, it starts
down the list of
similar (same res/depth) modes, then drops one res in the "Modes" list to
loop through again, etc.
Then again, I'm mostly guessing about the fallback method. One more thing
for the TODO list: we need someone to understand (at the code level) how
the fallback mechanism works within the X server. With that information,
we can make sure XF86Config is written properly (right options, right
order) to make it most effective.
This should really be the first concrete thing this group produces. It's
relatively simple, but very far-reaching. With this in place, we can get a
couple of people concentrating on an X configuration system.
Erik Walthinsen <email@example.com> - SEUL Project system architect
/ \ SEUL: Simple End-User Linux -
| | M E G A Creating a Linux distribution
_\ /_ for the home or office user