
Re: [school-discuss] Intel quad core vs AMD 6-core thin client server



Pardon the top post for a quick summary: thanks, Mike, for an awesomely detailed response; I've already learned a lot and plan to learn more on the 2nd and 3rd reads of this...

Justin, any thoughts about updating our tables of recommended configurations using all the excellent info below and in other recent posts on this thread? Daniel


On 10/12/2010 4:53 PM, Michael Shigorin wrote:
(reply-all posting; my background: server/desktop hardware --
not especially biased in the Intel/AMD question since my systems
did and do run both; I've also done ALTSP, a combined LTSP4/5
technology, and a set of distros around it, two of which are used
throughout Russian schools)


On Sat, Oct 09, 2010 at 10:04:54PM -0400, Daniel Howard wrote:
> Anyone have any experience comparing performance of a 6 core
> AMD CPU-based thin client server to an Intel quad core based
> server?
No, but I have quite a bit of Intel quad hardware at hand
(both C2Q and Xeon 55xx/54xx, have used 53xx too) as well
as some AMD dualcores (Opteron and Athlon64 X2s).

> I'm looking for a new 32-client computer lab server.
You need RAM, cores/sockets, network bandwidth and I/O --
roughly in that order (when LTSP is a reasonable fit), with the
latter two possibly swapping depending on the nature of the apps
and activities.

> I lean to the quad core since it's at least a generation ahead
> architecturally
Someone told you lies: Intel "quad core" is effectively two
dual-core dies lumped together in one package.  It's at least a
generation _behind_ architecturally, in fact.  (Not that I
dislike the Core family; it's fairly nice, especially when
compared with the P4.)

> but wonder if 6 cores gives better LTSP performance when
> students are really just doing OpenOffice and Firefox/Chrome
> 99.9% of the time.
They would.  Especially if more than ~2 flash plugins or ooffice
processes misbehave and consume cycles.


On Sun, Oct 10, 2010 at 02:24:42PM +0800, j. Tim Denny wrote:
> I was just browsing CPU benchmarks...
Usually it's crap.

> you then ask how about LTSP performance...   I wonder... is the
> OS optimized for multicore usage?  or does that matter?
If the kernel doesn't do SMP, it goes to the bitbucket.
Then you get a few hundred processes off those 32 clients.
Go figure.
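
If you want to see that for yourself on a running LTSP server,
a quick sketch with standard procps tools (nothing LTSP-specific
assumed):

    # total number of processes on the box
    ps -e -o pid= | wc -l

    # processes per user, busiest first
    ps -e -o user= | sort | uniq -c | sort -rn | head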

> But then what about GPU?
Nothing special, it won't work for thin clients at all.


On Sun, Oct 10, 2010 at 03:01:43PM +0800, j. Tim Denny wrote:
> Someone did all the work for us...
Nope, the review is quite clearly heavily biased towards Intel.

> on that page they show a $199 quad core intel beating a hexa
> core AMD at $285
Of their runs, only the memory bandwidth test is more or less
relevant to application server workloads.  In the rest, clock
frequency might be the defining factor (I recently hashed through
a few reviews on 2/4/6-core benchmarking).  Which is fine for
gaming, but for a multi-process server it only means more clocks
per second to shuffle tasks around.


On Sun, Oct 10, 2010 at 10:38:38AM -0500, Bart Lauwers wrote:
> A 4-core 1Ghz cpu will beat a 2-core 4Ghz cpu any day when it
> comes to workload. But in benchmarks the 2-core will always win.
> [...]
> Finally, you want to make sure you have enough ram and disk
> bandwidth. If you underbuy disk or ram, the cpu won't matter.
> And yes, I've implemented this in practice.
I second each word.


On Sun, Oct 10, 2010 at 12:56:39PM -0400, Daniel Howard wrote:
> AMD Phenom II X6 1090T Thuban 3.2GHz Six-Core Processor, L2
> Cache: 6 x 512KB, L3 Cache: 6MB, ($266)
> Intel Core i7-950 Bloomfield 3.06GHz Quad-Core Processor, L2
> Cache: 4 x 256KB, L3 Cache: 8MB ($295)
BTW it's better to compare CPU+motherboard kits.

> I'm planning on 8GB RAM, should be more than enough.
Yup.  In a 24-client lab, getting them all booted into KDE3 with
Firefox and an intranet page loaded resulted in 1100 MB of RAM
consumed (ALT Linux 4.0 Terminal, 2x Xeon 5310, 8G RAM, software
RAID10 with 4x SATA HDDs, 2x GigE NICs).

The thing is, you might count something like 512M for "all the
apps" and then figure out a per-user increment on top of that for
a typical user's data set.

> For high disk bandwidth
An app server doesn't need high _bandwidth_ per se, but rather
parallel ("multithreaded") I/O: lots of small concurrent reads
and writes.

> I'm still not clear on what would be best, especially with the
> new solid state drives, any additional thoughts there regarding
> SATA-II, SCSI, and Solid State?
Again, I've used them all (and FC racks too :).

You don't want older SSDs without TRIM (for what it's worth, I
bought a Kingston SNVP325-S2/128GB for $350 this summer and
consider it quite an optimal offer for a notebook; I didn't throw
real server workloads at it, only ran bonnie++ IIRC).
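
A quick way to check whether a given SSD advertises TRIM at all
(hdparm from the standard repos; the device name is just an
example):

    # look for "Data Set Management TRIM supported" in the output
    hdparm -I /dev/sda | grep -i trim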

SCSI is being phased out by SAS; better not to mess with a
separate SCSI adapter and caging/cabling/terminators unless you
already know what you're doing (or buy a second-hand dual-CPU
server with a decent SCSI subsystem, which still requires some
experience).

SAS... you might get it somewhat cheaper integrated on the HP ML
series or, to some extent, on low-end Intel server motherboards,
as they bought LSI (and dumbed down the bulk of its controllers
to hostraid crap along the way).  Still, the drives aren't cheap
and still won't outrun the RAM cache for starting up the
software.

I'd say, go for 2x SATA HDDs (RAID1) for OS and apps,
and 4x SATA HDDs (RAID10) for /home (user data).

Of disk brands, it comes down to Hitachi (the best performer
under heavy load; ftp.linux.kiev.ua runs 8 of them) or WD Black.
Even if you decide to go for the "enterprise" versions, plan on a
3-year replacement policy (see also the Google Labs report on
drive life; my experience correlates with it too).  Avoid Samsung
(they focus on linear rates and almost completely miss out on
faulty-sector recovery; I learned that the hard way) and, to some
extent, Seagate (heating/compatibility/stability problems with
their SATA drives for the last 6 years or so; even ES/ES.2 came
with the unofficial recommendation to partition off the first gig
and leave it alone due to fault-adaptive formatting and an overly
high failure chance on those tracks.  OTOH I run 8x 1TB ES.2 with
this caveat and so far these are OK, but I've already seen the
contrary elsewhere).

Software RAID should be quite adequate.
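
For the 2+4 drive layout above, a minimal software RAID sketch
with mdadm (device names are examples only; adjust to your
drives):

    # OS + apps: RAID1 over two drives
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

    # /home: RAID10 over four drives
    mdadm --create /dev/md1 --level=10 --raid-devices=4 \
        /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

    # watch the initial resync before loading it up
    cat /proc/mdstat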

If you have spare bucks, maybe buy an SSD for the OS/apps volume
and back it up to /home regularly, just in case.

If you have some more bucks, a 3ware or Areca hardware RAID card
for those 4 drives might spare some CPU cache thrashing.

But then, if you have something like $3000 for that system, you
might just go and buy a two-socket 2x4 = 8-core server.

One more time: if you decide on SAS (even more so for SCSI), go
for ready-made systems.  I've done more than a few homegrown SCSI
systems and even one SAS one, and it's quite a bit of hassle to
fit all the parts together properly even if you have spares
(vendors do like some lock-in with the plumbing, heh).


On Sun, Oct 10, 2010 at 12:39:55PM -0500, Bart Lauwers wrote:
> At 8G ram you are basically giving each client a max of 256M
> workspace. That might be a bit tight ....
Nope, it's not 8192/32 for everything, but rather (8192-512)/32,
and that's not for the apps but for per-user data.  For ALT Linux
with a browser plus office running under KDE3, my usual estimate
would be 256 + (40..100)*N megabytes for N users.  For an average
Ubuntu those figures might double (ALT is memory-optimized,
-Wl,--as-needed all over the place), but even then 512 + 200*32
MB still fits into 8G.
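
As a back-of-the-envelope check of those figures (the base and
per-user numbers are just the estimates above, not measurements
from your lab):

    N=32
    echo "ALT-ish worst case:  $((256 + 100*N)) MB"   # 3456 MB
    echo "Ubuntu-ish estimate: $((512 + 200*N)) MB"   # 6912 MB, still under 8192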

> it depends on the application you are running really and how
> much of that application is per instance data and how much
> shared memory and such. However 16G ram would probably not
> hurt, or 2 systems with 8G ... too little ram also means the OS
> wont be able to cache enough IO which can be very bad in a thin
> client setup.
Yep.

> For disk the same principles are true. Solid state disks are
> good up to a certain level, but a raid mirror with a few fast
> sas disks is probably better if you are to do anything
> involving lots disk writes.
Avoid RAID5/6: they deliver more space per drive *but* they lag
at writes, and RAID5 is prone to a double fault, where the next
drive scheduled to die does so under the added load of rebuilding
the array after the first one has been replaced...

(of course, avoid RAID0 like the plague for data that should
persist)

On the filesystem side, I use xfs for heavily I/O-loaded parts
(it *requires* a UPS); ext3 will drive your load average mad but
is safer with regard to power losses.  ext4 so far seems like a
reasonable middle ground; take care to use 2.6.32+ kernels, as
there was significant stabilization between .30 and .32.  The
relatime mount option is more or less requisite (again, a
reasonable compromise between noatime and writing all the
superfluous metadata for no real reason).
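
What that might look like in /etc/fstab (devices, mount points
and the ext4 choice are illustrative only):

    # user data on the RAID10 array, relatime instead of full atime updates
    /dev/md1   /home   ext4   defaults,relatime   0   2

    # xfs alternative for the same volume, if you do run a UPS
    #/dev/md1  /home   xfs    relatime            0   2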

Do care about a decent NIC, and about real expansion slots (not
crap like a slot that's x16 long but only x8 wide) if all you can
get onboard is a Realtek: it will generate lots of interrupts,
unlike Intel parts.

You might need two or more gigabit NICs bonded; better make sure
your switch can do it.
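
A minimal bonding sketch in Debian/Ubuntu style
(/etc/network/interfaces with the ifenslave package; interface
names and the address are made up, and 802.3ad needs matching
LACP configuration on the switch):

    auto bond0
    iface bond0 inet static
        address 192.168.0.1
        netmask 255.255.255.0
        bond-slaves eth0 eth1
        bond-mode 802.3ad     # LACP; the switch must support it
        bond-miimon 100       # link check interval in ms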