
How APM generates value

Everything you want to know about Asset Performance Management

Transcript

00:00:00
So how does asset performance management create value? It's primarily through two mechanisms. One is an increase in availability and reliability, and the second is a decrease in maintenance cost. Now, in addition to those two, a more reliable plant is a safer plant, and a plant that has fewer unplanned shutdowns also has fewer environmental events, whether that's flaring or other releases. So safety and environmental concerns are very valid, but for the purpose of this episode I'm just going to be talking about
00:00:37
how there's an improvement in reliability, and in turn availability, and then a decrease in maintenance cost. The mechanism for hitting those two drivers really comes from three main methods. The first is detecting small problems before they become big problems. The second is detecting issues early enough that you can move from unplanned to planned. And the third is adequately monitoring the condition of, or predicting how long you have with, certain machinery so you can extend their maintenance intervals. I'm going to talk a little bit more about all three of those and then close with a few ways to calculate those improvements. So, on detecting small problems before they become big problems: typically, fixing the small problem is both cheaper and faster. Two examples. One is the quintessential clogged lube oil filter that feeds oil to a larger machine, be it a turbine or a compressor. Now, if that lube oil filter gets sufficiently clogged, it restricts the flow of lubrication to the larger machine, and the journal bearings on that turbine or compressor end up getting starved. They heat up, damage occurs, and then vibration increases to the point that you end up seeing it on the larger machine, and to fix things you have to take things apart and replace the babbitt or the journal material. But had someone gone in and fixed the
00:02:10
lube oil filter, you wouldn't have damaged the larger machine. A second example is looking upstream in a process. The example I'm thinking of is a steam blower where the impeller itself broke apart into pieces and caused a fair bit of downtime, but also significant cost, because the blower itself had to be replaced. Now, that showed itself in high vibration, but in a retrospective analysis of what was actually going on, you could see a decrease in performance a little bit before that increased vibration; really, the impeller was destroying itself. But before that, upstream in the process there was a scrubber, or knockout drum, whose whole purpose was to take the liquid out before the steam went into the blower. That scrubber or knockout drum was poorly controlled a few weeks prior to when the machine went down, and what you would end up seeing was the level dancing all over the place. So what had happened was bits of liquid were getting into the blower itself, damaging the impeller, and ultimately causing the impeller
00:03:28
itself to completely destroy itself. So had the scrubber or knockout drum been adequately controlled, or had operations noticed that it was poorly controlled and corrected it before the damage occurred, we wouldn't have had to fix that. That's fixing the small thing before it becomes the big thing. Then there is the world of detecting issues earlier so you're moving things from unplanned to planned, and in that case it is usually cheaper to do planned maintenance as opposed to unplanned maintenance. Sometimes it's faster, but it's almost always cheaper. An example here would be in a combined cycle power plant: if you need to rewind one of your large generators, that would typically take a little over 20 days, roughly three weeks, and cost a fair bit of money. But if you were surprised and you had to do that
00:04:16
generator rewind in a rush, it turns out it still takes about three weeks to do it. So the time is about the same, but the cost is about two and a half to three times more than that original planned state. Now, this varies machine to machine and failure to failure, but the main point is that if you can plan around these maintenance events or overhauls, it's typically cheaper and sometimes faster. The third method is around life extension, or predicting how long you have until a known failure occurs. This is intertwined with the fact that time-based maintenance is really not the right strategy for the vast majority of failure modes, and in turn, equipment. So if you can adequately characterize the condition of the equipment and predict how long you have, you can extend maintenance intervals. By extending the maintenance intervals, you're able to produce more because you're down less. And if there are elements of infant mortality in those failure modes, then you're actually going to have fewer failures by doing less maintenance. So those are the three main levers for affecting availability, reliability, and maintenance cost. Now let's talk about a couple of ways of calculating those. The highest-level, or lowest-fidelity, way of doing this is akin to napkin math. This would use historical plant reliability, average production, and historical maintenance cost. By reliability I mean the traditional definition: one minus unplanned outage hours divided by 8,760, if you're looking at one year's worth of reliability or unreliability. So you take that and you say, well, that's the number of hours, or the percent of time, that the plant, platform, or factory is down and not producing because of unplanned events. That's what I mean by reliability or unreliability.
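That napkin math can be sketched in a few lines of Python. Every plant figure below (outage hours, production rate, margin) is an illustrative assumption, not data from the episode:

```python
# Napkin-math APM value estimate; all plant figures are illustrative assumptions.
HOURS_PER_YEAR = 8_760

unplanned_outage_hours = 350            # assumed historical unplanned downtime per year
reliability = 1 - unplanned_outage_hours / HOURS_PER_YEAR

# Starting-point assumption: APM removes 20% of unplanned outage hours.
recovered_hours = 0.20 * unplanned_outage_hours

production_rate = 500                   # assumed units produced per hour
margin_per_unit = 12.0                  # assumed profit per unit
added_profit = recovered_hours * production_rate * margin_per_unit

print(f"Baseline reliability: {reliability:.2%}")
print(f"Recovered hours/year: {recovered_hours:.0f}")
print(f"Added profit/year:    ${added_profit:,.0f}")
```

Swapping in your own plant's outage history, throughput, and margin turns this into a first-pass value estimate.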
By definition, that means it is unplanned; availability includes both planned and unplanned. So you take that and you say, well, from an unreliability perspective, some percentage of that is going to be reduced by those three mechanisms I was talking about earlier: finding the small thing before it becomes the big thing, moving from unplanned to planned, or life extension. A starting point here would be to take a 20% reduction in the unplanned outage hours, or that unreliability, and extrapolate that into how many additional hours you can produce on a yearly basis. You can then translate that into increased revenue, or increased profit if you apply your margin. From a maintenance cost perspective, again this is napkin math, but we could just say, hey, if we deployed APM across the entire site, let's assume a conservative
00:07:31
5% reduction in maintenance cost. Now, in reality it can be significantly more, but typically maintenance spend is quite high, so if you're throwing around 10 or 15% reductions, the numbers start to seem superlative or ridiculous. So let's conservatively assume a 5% reduction. That's the first method: napkin math. Another way of arriving at something similar is doing more of a bottom-up estimate. This would be taking the last three years of work orders and historical downtime, going event by event, and assigning a percent impact, ideally to both the maintenance cost and to the duration of downtime. And yes, we know that percent impact is going to be fuzzy or imprecise at best. But by doing it at the event-by-event level, when you tally it all up you get a more tailored view than that 20% reduction in unreliability I was just mentioning. You can then translate it to the same thing:
00:08:41
how many more hours could you produce, and how many fewer maintenance dollars would you spend, applying those impacts? So those are two ways of arriving at it. The third one is focused more on life extension. This is more applicable if you have a fair number of similar assets, either running in parallel or similar enough. You could go in and look at the mean time between failures of those assets, look at the maintenance strategies, and look at the spread on that mean time between failures. Then you could assume, very conservatively, what a percent increase in mean time between failures would do from both a maintenance spend perspective and an aggregate impact to production. Now, this is also where you get into the world of Monte Carlo simulations, because by definition asset performance management's value ends up being lumpy. What I mean by that is it ends up being more of a risk reduction rather than a clear increase in production, because, like I was talking about in the beginning, we're catching problems earlier and we're extending maintenance that would historically have been running at a certain interval. So the main benefit of running simulations is you can sit there and say, okay, an average year for this plant would look like this, but let's run a thousand permutations of a plant year and see what the distribution of that reliability would be, and what the distribution of that maintenance cost would be. That would allow you to be a bit more comprehensive in your analysis of what benefit you would expect to achieve on average, or with a certain level of confidence. So that's APM's value in a nutshell. In summary, APM is really driving financial value by increasing reliability, and in turn availability, and by decreasing maintenance cost. And through both of those, you're increasing both your top line and your bottom line. Thanks.
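The "thousand permutations of a plant year" idea can be sketched in Python. The MTBF, repair durations, and event costs below are illustrative assumptions, not plant data; the point is the shape of the simulation, not the numbers:

```python
import random

# Monte Carlo sketch of simulating many plant years; all inputs are assumptions.
random.seed(42)

HOURS_PER_YEAR = 8_760

def simulate_plant_year(mtbf_hours, mean_repair_hours, mean_event_cost):
    """One simulated plant year: random failure count, downtime, and repair cost."""
    # Each hour carries a small failure probability, so counts vary year to year.
    n_failures = sum(1 for _ in range(HOURS_PER_YEAR)
                     if random.random() < 1 / mtbf_hours)
    downtime = sum(random.expovariate(1 / mean_repair_hours) for _ in range(n_failures))
    cost = sum(random.expovariate(1 / mean_event_cost) for _ in range(n_failures))
    reliability = 1 - min(downtime, HOURS_PER_YEAR) / HOURS_PER_YEAR
    return reliability, cost

# A thousand permutations of a plant year.
years = [simulate_plant_year(mtbf_hours=2_000, mean_repair_hours=36,
                             mean_event_cost=250_000) for _ in range(1_000)]
reliabilities = sorted(r for r, _ in years)
costs = [c for _, c in years]

mean_rel = sum(reliabilities) / len(reliabilities)
p10_rel = reliabilities[int(0.10 * len(reliabilities))]  # 90% of years beat this
print(f"Mean reliability: {mean_rel:.2%} (P10: {p10_rel:.2%})")
print(f"Mean annual unplanned maintenance cost: ${sum(costs) / len(costs):,.0f}")
```

Running the same simulation twice, once with baseline inputs and once with an assumed APM uplift (say, a longer MTBF or shorter repairs), gives the distributions of reliability and cost you can compare on average or at a chosen confidence level.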