Computer Room Requirements for High Density Rack Mounted Servers
Outline
• Why do we need computer rooms?
– Why in the past.
– Why in the future.
• Design of the environment.
– Cooling
– Humidity
– Power
Why do we need them (Past)
• Security
– Equipment is valuable.
• Convenience
– Specialist knowledge is needed to look after them.
– Networking was relatively difficult.
• Bulk
Why do we need them (Future)
• Specialist Environmental Requirements
– High density implies more sensitive.
• Convenience
– Human time cost of software maintenance.
Computer rooms will be needed for the immediate future, but the Grid will reduce the need in the longer term.
Cooling - Then
• Rack mounting was designed to achieve high CPU density – to optimise space usage, given the effort needed to allocate a secure facility.
– Until recently, maximum power usage was about 2-3 kW per rack.
– Air cooling was sufficient, with cool air taken directly from under the floor.
Cooling Now: too much Success!
• Modern 1U servers are 300 W heaters => 12 kW per rack (18 kW for blade servers).
• Rule of thumb: 1000 litres/sec of cool air can handle 12 kW.
– In detail, a Dell 1750 uses 1200 l/min.
• For 40 racks this is 32,000 l/sec, which in a typical 600 mm duct is an extremely high wind speed (see the sketch below).
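As a rough check of the airflow figures above, here is a minimal Python sketch. The 40 servers per rack and the square 600 mm × 600 mm duct cross-section are assumptions; the slide itself quotes only the per-server airflow, the rack count and the duct width.

```python
# Back-of-envelope airflow for 40 racks of 1U servers.
# Assumptions (not stated on the slide): 40 servers per rack,
# and a square 600 mm x 600 mm duct cross-section.

SERVER_AIRFLOW_L_PER_MIN = 1200     # Dell 1750 figure from the slide
SERVERS_PER_RACK = 40               # assumption
RACKS = 40

per_server_l_per_s = SERVER_AIRFLOW_L_PER_MIN / 60                 # 20 l/s
total_l_per_s = per_server_l_per_s * SERVERS_PER_RACK * RACKS      # 32,000 l/s

duct_area_m2 = 0.6 * 0.6                                           # assumed square duct
velocity_m_per_s = (total_l_per_s / 1000) / duct_area_m2           # roughly 89 m/s

print(f"Total airflow: {total_l_per_s:,.0f} l/s")
print(f"Duct velocity: {velocity_m_per_s:.0f} m/s")
```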
Cooling - Solutions
• Focus on airflow!
– Place racks in rows – hot aisle, cold aisle.
– Leave doors off the racks.
– Identify hotspots statically, or dynamically (HP smart cooling).
Major Problem – no bang for buck
• As processor speeds increase =>
• They get hotter =>
• Fewer can fit per square metre =>
• Overall CPU power in the datacentre goes DOWN.
Cooling Solution II
• Try self-contained systems.
• Try water-cooled units (self-contained or otherwise).
• Use “smarter” systems which actively manage hotspots (e.g. HP smart cooling).
Humidity
• Computers (in a datacentre) have tighter tolerances than humans – 45%-55% (despite manufacturer limits of 8%-80%).
– Too low risks static electricity (the fans in the computers themselves cause this).
– Too high risks localised condensation, corrosion and electrical shorts. Note: Zinc in floor tiles!
• Air conditioning units must be better than for normal offices – how many rooms use conventional units?
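A minimal sketch of how a monitoring script might classify a relative-humidity reading against the bands quoted above; the function name and alert messages are illustrative only.

```python
# Classify a relative-humidity reading against the bands quoted on the slide.
# The 45-55% operating band and 8-80% manufacturer limits come from the slide;
# everything else here is illustrative.

def check_humidity(rh_percent: float) -> str:
    if not 8.0 <= rh_percent <= 80.0:
        return "outside manufacturer limits (8-80%)"
    if rh_percent < 45.0:
        return "too dry: static electricity risk"
    if rh_percent > 55.0:
        return "too humid: condensation/corrosion risk"
    return "within the 45-55% operating band"

for reading in (5.0, 40.0, 50.0, 60.0):
    print(f"{reading:>4.0f}% -> {check_humidity(reading)}")
```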
Power
• All this heat comes from the power supply:
– 1.2 A per server
– 50 A per rack
– 4000 A for a 40-rack centre
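A back-of-envelope sketch of the current budget behind these figures. The 230 V mains voltage and 40 servers per rack are assumptions, and the step from roughly 2000 A of server load to the 4000 A quoted for the centre presumably allows for cooling and other overheads (also an assumption).

```python
# Current budget implied by the slide's figures.
# Assumptions: 230 V mains, 40 servers per rack; the 4000 A total quoted
# for the centre presumably includes cooling and other overheads.

MAINS_VOLTAGE_V = 230.0
SERVER_CURRENT_A = 1.2      # from the slide
SERVERS_PER_RACK = 40       # assumption
RACKS = 40

server_power_w = MAINS_VOLTAGE_V * SERVER_CURRENT_A     # ~276 W, i.e. the ~300 W heater
rack_current_a = SERVER_CURRENT_A * SERVERS_PER_RACK    # ~48 A, i.e. the ~50 A per rack
server_load_a = rack_current_a * RACKS                  # ~1900 A of server load alone

print(f"Per server: {server_power_w:.0f} W")
print(f"Per rack:   {rack_current_a:.0f} A")
print(f"Server load, {RACKS} racks: {server_load_a:.0f} A")
```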
Summary so far….
• Modern machines need a well-designed physical environment to get the most out of them. Most current facilities are no longer well suited – a recent development.
– Intel scrapped 2 chip lines to concentrate on lower-power chips, rather than simply faster ones.
– Sun (and others) are working on chips with multiple cores and lower clock speeds (good for internet servers, not so good for physics!).
Example: 40 Racks for Oxford
• We have an ideal location
– Lots of power
– Underground (no heat from the sun, and very secure).
– Lots of headroom (false floor/ceiling for cooling systems).
– Basement:
• No floor loading limit.
• Does not use up office space.
Bottom Line
• The very basic estimate for the room, given the shell, is £80k.
• Adding fully loaded cooling, UPS, power conditioning, fire protection etc. will probably take this to £400k over time.
• Cost of 40 racks ~ £1.6 million.
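A quick roll-up of the costs on this slide, as a sketch; the per-rack and per-server figures at the end are derived under the assumption of 40 servers per rack and are not stated on the slide.

```python
# Cost figures from the slide, plus derived per-rack/per-server costs.
# The 40-servers-per-rack figure is an assumption.

room_shell_gbp = 80_000      # basic room estimate, given the shell
full_fit_out_gbp = 400_000   # with cooling, UPS, power conditioning, fire protection, over time
racks_gbp = 1_600_000        # 40 racks of servers
RACKS = 40
SERVERS_PER_RACK = 40        # assumption

print(f"Room shell only:         £{room_shell_gbp:,}")
print(f"Fully fitted room:       £{full_fit_out_gbp:,}")
print(f"Room + 40 racks:         £{full_fit_out_gbp + racks_gbp:,}")
print(f"Implied cost per rack:   £{racks_gbp // RACKS:,}")
print(f"Implied cost per server: £{racks_gbp // (RACKS * SERVERS_PER_RACK):,}")
```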
Hang on!
• There are about 50,000 computers already in Oxford University alone.
• Assume 20,000 of them are usable.
• We already have a major data centre, with essentially no infrastructure problems!