Data is critical to facility planning. With data, organizations can plan for the long term, make decisions based on facts, and weigh options against one another. I would say there are three criteria for useful data: availability, reliability, and clarity. All three are necessary: if you have data but can't access it, it's not useful; if the data is unreliable, it's not useful; and if the data is reliable but a convoluted mess, it will be very difficult to discern anything meaningful (i.e. not useful).
Let's look at an example of how data is used in facility renewal planning and building operations at Toronto General Hospital.
We have three chillers in the Toronto General Hospital Central Plant.
- CH1 = 1500 Ton, 15degF delta T, single compressor, no VFD
- CH2 = 1500 Ton, 15degF delta T, single compressor, no VFD
- CH3 = 2400 Ton, 25degF delta T, dual compressor, no VFD
Chiller 3 was designed to be the lead chiller (the first one to come on). The designers were so sure, many years ago, that this was the best chiller that they didn't include a control valve to switch lead operation to another chiller; switching must be done manually every time, including climbing a ladder and cranking a valve by hand. For a long time it was believed that chiller 3 was the most efficient chiller because it had dual compressors and could produce more cold water. Having it as the lead chiller was therefore not a major concern, but operational data wasn't available to either support or refute this. No data, no proof.
Engineers like data; they like proof.
We set out to get the data we needed, setting up trends on existing sensors such as inlet and outlet temperatures, chiller power input, and flows. We studied the design schematics to determine the design intent. We soon discovered that the flow meters were broken, and flow is a key piece of data needed to calculate cooling energy (measured in BTUs or Tons) and efficiency (measured in kW/Ton). The sensors were old turbine-style meters, which are prone to clogging, so we cleaned them. That didn't work. Our controls technicians investigated and found the control board was fried. We replaced it.
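For readers who want the arithmetic behind those two metrics, here is a rough sketch in Python. It assumes chilled water flow in US GPM and temperatures in degF; the example numbers are made up for illustration, not readings from our plant.

```python
# Rough sketch of the chiller energy and efficiency arithmetic (not our actual BAS code).
# Assumes chilled water flow in US GPM and temperatures in degF.

def cooling_tons(flow_gpm, t_return_f, t_supply_f):
    """Cooling delivered, in tons of refrigeration.
    500 ~= 8.34 lb/gal * 60 min/hr * 1 BTU/lb-degF (water); 12,000 BTU/hr = 1 ton."""
    delta_t = t_return_f - t_supply_f
    btu_per_hr = 500.0 * flow_gpm * delta_t
    return btu_per_hr / 12000.0

def kw_per_ton(chiller_kw, tons):
    """Efficiency metric: electrical input per ton of cooling (lower is better)."""
    return chiller_kw / tons if tons > 0 else float("nan")

# Example: 2,400 GPM at a 15 degF delta T drawing 900 kW (illustrative numbers only)
tons = cooling_tons(2400, 54, 39)   # -> 1500 tons
print(kw_per_ton(900, tons))        # -> 0.6 kW/Ton
```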
We finally got our trended power, flows, tons, and temperatures, but it looked like this (hundreds of thousands of rows like the excerpt below)…
[Figure: excerpt of the raw, unsorted trend data]
We sorted this using pivot tables, but that didn't make it usable yet. There were errors and faults in the data: gaps where the BAS cut out, and sensors reporting when they shouldn't have been. We also had to provide context to the data. Where were the sensors located? What were they really measuring? The result is a clean, date-stamped data set, understood in the context of the overall system. We also included data that would serve as a second check: one data set can lie, and basing major decisions on a single data set is risky. So we had two and sometimes three sets to compare, from different sensors on different parts of the system, which could help verify the accuracy of the others. An excerpt of the cleaned-up data is below; the full version is much longer and wider.
[Figure: excerpt of the sorted, cleaned-up trend data]
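To give a flavour of what that clean-up step looks like, here is a simplified sketch using pandas. The file name and column names (chw_flow_gpm, chiller_kw, chws_temp_f, chws_temp_f_backup) are hypothetical stand-ins, not our actual BAS point names.

```python
# Simplified sketch of the trend clean-up step (column names are hypothetical).
import pandas as pd

raw = pd.read_csv("ch3_trends.csv", parse_dates=["timestamp"])

# Pivot the long trend log (one row per point per timestamp) into one row per timestamp.
df = raw.pivot_table(index="timestamp", columns="point_name", values="value")

# Drop BAS dropouts and readings taken while the chiller was actually off.
df = df.dropna(subset=["chw_flow_gpm", "chiller_kw"])
df = df[df["chiller_kw"] > 50]  # ignore near-zero readings from an idle machine

# Second check: compare two independent supply temperature sensors on the same header
# and flag rows where they disagree by more than 1 degF.
df["temp_mismatch"] = (df["chws_temp_f"] - df["chws_temp_f_backup"]).abs() > 1.0
```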
That allowed us to run some analytics (read: calculations), which eventually allowed us to distill the information into 'bin data'. Bin data is a summary of operational metrics at discrete 'bins' of system or environmental conditions.
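Building the bins is straightforward once the data is clean. The sketch below shows one way to do it with pandas; the bin edges and column names are illustrative, and it assumes 'tons' and 'kw_per_ton' columns were already computed with the formulas sketched earlier.

```python
# One way to build bin data from the cleaned trends (a sketch; bin edges are illustrative).
import pandas as pd

# Assume df already has 'tons' and 'kw_per_ton' columns from the earlier calculations.
load_bins = [0, 400, 800, 1200, 1600, 2000, 2400]
df["load_bin"] = pd.cut(df["tons"], bins=load_bins)

bin_data = df.groupby("load_bin").agg(
    hours=("tons", "size"),              # trend interval count (hours if logged hourly)
    avg_kw_per_ton=("kw_per_ton", "mean"),
)
print(bin_data)
```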
The bin data shows that chiller 3 rarely produced more than 2000 Tons (only 23 hours in nine months of data, April to December); in fact it mostly maxed out around 1600 Tons. This makes sense: it is a 2400 Ton chiller only when operating at a 25 degF delta T (the difference between the inlet and outlet temperatures), while the other chillers operate at 15 degF. When the chillers operate in parallel from common headers, they all run at roughly the same delta T. More importantly, the bin data also shows the kW/Ton, our measure of efficiency, and it is comparably terrible for chiller 3. At partial load CH3 uses 50% more power, and at full load about 40% more. Here's this information graphed for clarity:
It’s much easier to see what’s happening when data is displayed in a graph. Some of you might notice that the graphs don’t look as you’d expect – they don’t follow a smooth quadratic curve with efficiency decreasing smoothly between full load and part load. It gave me pause as well. But it’s because our condenser water temperature is changing (i.e. real world operation).
So that brings me to facility planning. Chiller 3 is up for refurbishment, which is an expensive endeavor. Except we now know it's terribly inefficient, and we know how much it costs to operate. By comparing it to a new, efficient magnetic-bearing chiller, we know how much energy we could save and what incentives are available from Toronto Hydro. So we have an opportunity to improve the organization long term and save the hospital money. By replacing chiller 3 we could cut chiller energy use in half, saving the hospital approximately $400,000 a year and millions of dollars over the replacement's lifespan. The new chiller would pay back in roughly four years after including incentives and avoided maintenance. That's amazing for a major piece of equipment like a chiller. The new chiller would also increase reliability and improve our ability to control the plant, with less sensitivity to changes in water flow rates. That's the real impact data and data analysis can have on a facility long term.
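As a sanity check on that payback figure, the simple-payback arithmetic looks something like the sketch below. The capital cost, incentive, and avoided-maintenance numbers are illustrative assumptions, not the actual project figures; only the roughly $400,000 annual energy saving comes from the analysis above.

```python
# Simple-payback sketch for the chiller replacement (illustrative numbers only;
# the actual project costs, incentives, and savings are not reproduced here).
capital_cost        = 2_200_000   # new magnetic-bearing chiller, installed (assumed)
utility_incentive   = 300_000     # e.g. a Toronto Hydro conservation incentive (assumed)
annual_energy_save  = 400_000     # from the bin-data comparison above
avoided_maintenance = 75_000      # refurbishment/maintenance we no longer pay (assumed)

net_cost       = capital_cost - utility_incentive
annual_benefit = annual_energy_save + avoided_maintenance
print(f"Simple payback: {net_cost / annual_benefit:.1f} years")   # ~4 years
```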