It isn't really a question of whether it could work, more a question of how you'd do it. A fully loaded blade chassis is a very dense device with no room to get at the CPU heatsinks without interrupting airflow. In fact, airflow is king when it comes to DC cooling. This is an extract from Cisco's data centre bible to give you an idea of the mainstream (and high budget) way of doing it.
As a rule, ambient air is sucked in at the front of a rack and blown out hot at the back. To cater for this you need to ensure that you have hot and cold aisles within your DC. Your HVACs take in warm air at the top and blow it under the false floor, where it is channelled to vented floor tiles in the cold aisles of your DC. Hot air goes out the back and, with the right airflow, rises to get sucked into the HVACs, cooled and recycled back to the fronts of the racks via the floor vents in the cold aisles.
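To put some rough numbers on why airflow is king, here's a back-of-envelope sketch in Python. The air constants and the 10 kW / 12°C example are my own assumptions, not anything from the Cisco paper, but the arithmetic shows how much cold-aisle air a rack actually needs for a given heat load and front-to-back temperature rise.

```python
# Rough estimate: cold-aisle airflow needed to carry away a rack's heat load.
# Assumes air density ~1.2 kg/m^3 and specific heat ~1005 J/(kg*K).

AIR_DENSITY = 1.2         # kg/m^3, roughly sea level at ~20 C
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

def required_airflow_m3h(heat_load_kw: float, delta_t_c: float) -> float:
    """Volumetric airflow (m^3/h) needed to absorb heat_load_kw with a
    front-to-back temperature rise of delta_t_c degrees C."""
    mass_flow = (heat_load_kw * 1000) / (AIR_SPECIFIC_HEAT * delta_t_c)  # kg/s
    return mass_flow / AIR_DENSITY * 3600

# Example: a 10 kW blade rack with a 12 C front-to-back rise needs roughly
# 2500 m^3/h of cold air through it -- which is why a missing blanking plate
# or a blocked floor tile hurts so much.
print(f"{required_airflow_m3h(10, 12):.0f} m^3/h")
```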
Within each rack you need to be careful not to leave gaps between equipment without blanking plates, because that will fuck with your airflow.
This is the old school way of doing things. More info here:
http://www.cisco.com/c/en/us/soluti.../unified-computing/white_paper_c11-680202.pdf
This is one variation of the newer way of doing things: aisle containment using curtains or sliding doors.
There are loads of ways to do this, some better than others.
I will never permit water-cooled racks in my DCs. It's not that I've ever heard of a problem with them; they work very well. I've just had too many floods from other sources to risk adding another one.
Beyond airflow, N+x redundant cooling units are essential, and airlock doors help in big DCs.
Common oversights are forgetting to factor in the heat given off by lighting and by engineers working in the room. A human at rest gives off 100-200 W of heat, and a bit more if they are working hard loading up SAN chassis and collapsing cardboard boxes!
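A hedged sketch of the sizing arithmetic that follows from the last two points. All the capacities, lighting figures and headcounts here are illustrative assumptions of mine, not vendor numbers: add the often-forgotten lighting and human loads to the IT load, then size the N+x cooling units against the total.

```python
import math

def room_heat_load_kw(it_load_kw: float,
                      lighting_kw: float = 2.0,       # assumed lighting load
                      people: int = 2,
                      watts_per_person: float = 150) -> float:
    """Total heat the HVAC has to remove: IT kit plus lighting plus bodies."""
    return it_load_kw + lighting_kw + (people * watts_per_person) / 1000

def cooling_units_needed(total_load_kw: float,
                         unit_capacity_kw: float = 60,  # assumed per-unit capacity
                         redundancy: int = 1) -> int:
    """Units to install for N+x: enough to carry the load, plus x spares."""
    return math.ceil(total_load_kw / unit_capacity_kw) + redundancy

# Example: 180 kW of IT kit, 3 engineers in the room -> ~182.5 kW to remove,
# which with 60 kW units and N+1 means installing 5 units.
load = room_heat_load_kw(it_load_kw=180, people=3)
print(f"Load: {load:.1f} kW, units for N+1: {cooling_units_needed(load)}")
```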