Adventuring in the realm of cyber-physical systems
Posted on 08 Mar 2016

Last time I introduced you to the wonderful field of Control Engineering, with the promise of telling you a bit about the life of a PhD student as well. Where last time I wrote in a peaceful period, now the engine is at full throttle: deadlines are popping up like mushrooms and approaching at a rapid rate, the main one being the Conference on Decision and Control (CDC) deadline next week. As this is the biggest conference in our field, it would be a major milestone if I were to present my results there. That the conference is held in Las Vegas is only a small detail. These swings in stress levels are a common sensation during a PhD. I actually find it very similar to the natural stress cycle I felt as a student, i.e. relaxing during the quarters and stressing when exams knock at your door. But looking back at the past two months I have seen my project come together bit by bit, and I can't help but get a sense of fulfillment from that realization.
Studying the airflow and job scheduling of data centers is where control engineering and physics meet. Or, as we like to call it, a data center is a cyber-physical system, combining the job scheduling (the cyber part) with the physics of heat transfer. The question I try to answer: how do we process the requested workload while consuming the least amount of power, all while making sure the servers do not overheat? From a mathematical perspective this amounts to a lot of differential equations. To give the gist of the reasoning, we start by understanding the thermodynamics of the data center. The change of temperature is given by
ΔT ∝ Q_in − Q_out + Q_injected
where Q_in is the heat entering the system, e.g. cold air from the cooling system, Q_out is the heat going back to the cooling system, and Q_injected represents the influence of the job scheduling on the system. Then we study optimality conditions, finding an answer to how we should divide the work we have such that we consume the least amount of power. Lastly we devise a controller which tells us what to do in every situation. In this way I have studied a simple data center and its behavior through these different steps. This first analysis will serve as a base case for when we want to complicate our models and understand more exotic data center configurations and problems encountered by data center operators. As stated before, there is a lot of mathematics that comes into play here, but I will spare you the details today.
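To make the heat balance a bit more concrete, here is a minimal toy simulation of that relation for a single server. Everything in it is my own illustrative assumption, not the project's actual model: the thermal coefficient c, the heat values, and the loads are made-up numbers, and the proportionality is discretized as a simple Euler step.

```python
# Toy discretization of the heat balance dT ∝ Q_in - Q_out + Q_injected
# for one server. All constants here are illustrative assumptions.

def step_temperature(T, q_in, q_out, q_injected, c=0.1):
    """Advance the temperature one time step.

    c is a hypothetical thermal coefficient turning the net heat
    flow into a temperature change.
    """
    return T + c * (q_in - q_out + q_injected)

T = 25.0  # assumed starting temperature in degrees Celsius
for load in [0.5, 0.8, 1.0, 0.6]:
    q_injected = 10.0 * load  # heat injected by the scheduled jobs
    # cold supply air brings in little heat (q_in), the cooling
    # system carries more heat away (q_out)
    T = step_temperature(T, q_in=2.0, q_out=8.0, q_injected=q_injected)
```

The point of the sketch is only the sign structure: when the injected heat from the workload exceeds what the cooling loop removes, the temperature climbs, and that is exactly the coupling the scheduler has to respect.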
The most beautiful part of a PhD project is that you are working on the frontiers of what is known. The things you do have not been done before and have not been solved before. This brings a degree of uncertainty which is equivalent to the thrills of an exciting adventure. This excitement is exactly what makes doing a PhD one of the nicest jobs out there, that is to say, if you have the guts to grab the challenge by the balls. The same goes for my project, where the most inventive part lies in the decision making, i.e. the design of a suitable controller. I have seen some other approaches out there, but mostly I am in unexplored waters. Luckily I am not alone in the task, and I have a professor who helps me see the bigger picture whenever I encounter yet another obstacle that obscures my view of the rest of the world.
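To give a flavor of what "divide the work so we consume the least power" can look like, here is a hedged sketch under an assumed quadratic power model P_i = a_i · u_i², which is not the model from my project, just a common textbook stand-in. Under that assumption the optimality conditions have a closed form: each server gets a share of the work inversely proportional to its coefficient a_i.

```python
# Illustrative workload split under an ASSUMED quadratic power model
# P_i = a_i * u_i**2, minimizing total power subject to sum(u_i) = W.
# Setting the derivatives equal (2*a_i*u_i = lambda for all i) gives
# u_i proportional to 1/a_i.

def split_workload(a, W):
    """Return the power-optimal workload shares for coefficients a."""
    inv = [1.0 / ai for ai in a]
    total = sum(inv)
    return [W * x / total for x in inv]

# Three servers; smaller a_i means a more efficient server.
shares = split_workload([1.0, 2.0, 4.0], W=7.0)
# the most efficient server receives the largest share of the work
```

The design choice this illustrates is the middle step from the post: first write down a power model, then let the optimality conditions dictate the schedule, and only then wrap a controller around it to enforce the temperature constraints in real time.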
Written by
-
Tobias Van Damme