Source: Pixabay

Identifying Anomalies in Commercial Energy Consumption


Time series applications to energy consumption and outlier detection

Written by: Jocelyne Walker, Luke Bravo, Ram Kapistalam, India Lindsay

Using time and resources efficiently is essential to the success of energy companies, building managers, and utility users. To use resources efficiently, we must combine machine learning and industry understanding to first model energy consumption and then predict anomalies for human intervention.

The potential for value creation is substantial: typically 15–30% of commercial buildings’ energy usage is wasted due to malfunctioning equipment, improper operation protocol, and faulty construction [1]. Despite efforts to reduce waste, energy consumption in buildings has steadily increased over the last decade, and current fault detection methods for reducing excess consumption are primitive. Our aim is to build a thorough anomaly detection system to reduce excess energy consumption and pave the way for efficient energy management. We will walk you through three of our approaches.

The Data

We obtained time series data from Schneider Electric that contained energy consumption values for power meters in three of their commercial sites. This data was supported with three additional datasets: metadata offering location information and general descriptive features of the meters, a holiday dataset summarizing the occurrences of regional public holidays, and historical weather data containing temperature relative to the building’s location in a time series format.

For our analysis, we merged these four datasets on the meter ID for the energy-related data and on the date-time stamp for the holiday and weather data. We narrowed our dataset to focus on detecting anomalies in energy usage for one laboratory meter located at Site 38. Our data consisted of 49,000 rows and covered the years 2012–2016.

Pre-processing

The energy meters record the cumulative amount of energy used by the system. Because of this cumulative measurement, we differenced the time series in order to predict energy usage between each hourly reading. The plot on the left shows the cumulative energy usage over time, and the plot on the right shows our differenced predictor variable.

Our predictor variable transformation from cumulative to differenced hourly values
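
A minimal sketch of the differencing step, assuming the merged data sits in a pandas DataFrame df with a datetime ‘Timestamp’ column (an assumed name) and the cumulative reading in a ‘Values’ column:

import pandas as pd

# Sort chronologically, then difference the cumulative meter reading
df = df.sort_values('Timestamp')
df['tv.delta'] = df['Values'].diff()  # energy used between consecutive hourly readings

# The first reading has no predecessor to difference against
df = df.dropna(subset=['tv.delta'])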

To prepare to model energy consumption, we first engineered explanatory variables to capture the autocorrelations in energy usage. These included the first and second lags of the differenced consumption, along with a daily and a monthly lag that capture the multiple levels of seasonality in usage. In the data snapshot below, “tv.delta” represents the differenced Values column, and the other columns represent our explanatory variables.
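
Those lags can be sketched with pandas shift (the lag_* names are illustrative; the 24- and 720-step offsets assume one reading per hour):

# Previous and twice-previous hourly changes
df['lag_1'] = df['tv.delta'].shift(1)
df['lag_2'] = df['tv.delta'].shift(2)

# Seasonal lags: same hour the previous day, and roughly one month back
df['lag_daily'] = df['tv.delta'].shift(24)
df['lag_monthly'] = df['tv.delta'].shift(24 * 30)

df = df.dropna()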

This data exhibits clear seasonality, as higher energy consumption occurs at extreme temperature values. When temperatures are extremely low, in months like December and January, or extremely high, in months like July and August, energy consumption peaks. At moderate temperatures, there are few high energy consumption values. The quadratic form of this seasonality in our data can be seen below.

Visualizing the seasonality and temperature impact on differenced energy consumption

Overview: Anomaly Detection

Anomaly detection is a method of labeling data points or observations that deviate from the normal behavior of a dataset. We incorporated three methods of detecting anomalies: unsupervised KNN, Bayesian change points, and residual analysis of a predictive model.

We split our data into train and test sets, with the training set containing years 2012–2014 and the test set containing years 2015–2016. The training set consisted of 33,173 observations. To detect anomalies in the change in energy consumption, we relied upon the following features: the previous and twice-previous changes in energy consumption, daily and monthly lagged consumption, and a binary variable indicating whether the day was a holiday.
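
A minimal sketch of that time-based split, continuing the frame from above (the ‘is_holiday’ column name is illustrative):

features = ['lag_1', 'lag_2', 'lag_daily', 'lag_monthly', 'is_holiday']

# Train on 2012-2014, test on 2015-2016
train = df[df['Timestamp'].dt.year <= 2014]
test = df[df['Timestamp'].dt.year >= 2015]

X_train, y_train = train[features], train['tv.delta']
X_test, y_test = test[features], test['tv.delta']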

Approach 1: K-Nearest Neighbors — PyOD

K-nearest neighbors (KNN) is an algorithm often used for classification or regression problems. In classification, an object is given a label based on a plurality vote of its nearest neighbors. For anomaly detection, anomaly scores are assigned based on how far a point is from its kth nearest neighbor.

This model uses the Python Outlier Detection (PyOD) package — a comprehensive package for detecting anomalies in multivariate datasets. The toolkit contains multiple supervised and unsupervised models for outlier detection that are easy to implement. Models within this package operate by the following similar steps:

  1. Identify a set of features to put in your model
  2. Train an unsupervised model from the PyOD package on training set features
  3. Generate anomaly scores based on a set criteria for the model
  4. Declare anomaly score threshold for outlier classification
  5. Visualize and evaluate model results

For further exploration of PyOD models for anomaly detection, check out Isolation Forest.

We first trained the KNN model with the features built in the pre-processing section. Then, the model was applied to our test data to generate anomaly scores.

import numpy as np
from pyod.models.knn import KNN

# Train the kNN detector
clf_name = 'KNN'
clf = KNN()
clf.fit(X_train)

# Anomaly scores for the training data
y_train_scores = clf.decision_scores_

# An anomaly score is a metric evaluating how "anomalous" a point is,
# based on set criteria. Generate scores for the test set with decision_function
y_test_scores = clf.decision_function(X_test)

Anomaly scores were then visualized to identify an appropriate threshold for labeling. In this case, the automated KNN model generates a score based on how far a point is from its fifth nearest neighbor. All points to the right of the red line were labeled as outliers (6% of data).
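
A rough sketch of that thresholding, using y_test_scores from the snippet above (the 94th percentile mirrors the roughly 6% of points labeled outliers):

import numpy as np

# With about 6% of points labeled outliers, the red-line threshold corresponds
# roughly to the 94th percentile of the anomaly score distribution
threshold = np.percentile(y_test_scores, 94)
outliers = y_test_scores > threshold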

The final step for PyOD anomaly detection is evaluating the results. This was challenging for our dataset because outliers were not previously labeled. Thus, evaluation was a subjective decision based on our knowledge of the data. The following graph displays energy consumption values over time with outlier labels.

One significant issue with this model, and most PyOD models, is that they fail to account for seasonality when generating outlier scores. In the graph above, most points labeled “outliers” were simply in groups where all values were generally high because of seasonality. The outlier points occurred more frequently at the extreme measures of changes in energy consumption. The table below confirms this, revealing the association between outliers and extremely high lagged energy consumption values.

From an energy conservation perspective, these models lack value, as they are unable to identify anomalies that occur independently of seasonality. We moved to other methods that might assess time series data more effectively.

Approach 2: Bayesian Change Points Using Banpei

Our next approach incorporated the Banpei library (which supports Python 3.x and is available on GitHub), primarily for its change point scoring feature. This approach assumes that anomalies in our dataset are characterized by abnormal upward or downward spikes and by abnormal changes in energy consumption preceding and following a data point. Anomalies are labeled as change points.

The Banpei package streamlines the algorithm for change point detection. For each data point K in a time series, it applies the Poisson process with a prior Gamma distribution rate to the subset of data occurring before K, and compares this distribution with the corresponding distribution of the subset of data occurring after K. It requires two simple lines of code to execute:
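
A minimal sketch of those two lines with Banpei’s SST model (the window size w=50 is an illustrative choice, not a value from our analysis):

import banpei

model = banpei.SST(w=50)                       # change point model with sliding window w
results = model.detect(data, is_lanczos=True)  # change point score for every point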

The ‘is_lanczos’ parameter helps increase computational efficiency. The Lanczos eigenvalue algorithm, named after Cornelius Lanczos, is an adaptation of power iteration: given a nondefective (diagonalizable) matrix A, it yields (1) λ, the eigenvalue of A with the largest absolute value, and (2) a non-zero vector v that is an eigenvector for λ, such that Av = λv. This code yields an array called “results” with change point scores, which are interpretable as probabilities derived by Banpei’s SST function from the comparison of Gamma distributions for each point K.

This package requires users to define their own outlier thresholds. We defined an outlier as any data point whose change point probability score was two standard deviations greater or less than the average score. Then, for each data point K, the change point probability was computed and compared with this cutoff value, C. All points exceeding C were classified as anomalies. We additionally introduced a sensitivity hyperparameter, which modified the cutoff by calculating it over a moving window average with an adjustable window size, helping to tune the algorithm to sensitivities in our dataset.
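
A sketch of that cutoff with pandas, using the results array from the snippet above (the window size plays the role of the sensitivity hyperparameter):

import pandas as pd

scores = pd.Series(results)

# Sensitivity hyperparameter: the window over which the cutoff C adapts
window = 24 * 30  # illustrative: roughly one month of hourly points

moving_mean = scores.rolling(window, min_periods=1).mean()
moving_std = scores.rolling(window, min_periods=1).std()

# Points whose score lies more than two standard deviations from the
# moving average score exceed the cutoff C and are flagged as anomalies
anomalies = (scores - moving_mean).abs() > 2 * moving_std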

Our baseline model ignored the sensitivity hyperparameter and calculated C as the cumulative average change point probability plus two standard deviations. Throughout the summer of 2016, this model detected 429 anomalies (marked in red below), similar to those detected with the PyOD approach. The major shortcoming of this model is its low sensitivity to the seasonality of power consumption. Since it calculates C as a cumulative average from the first data point all the way through the Kth data point, regular seasons of higher power consumption (i.e., summer) inflate C such that many change points are likely missed. This problem manifests as an inflated false negative rate (i.e., missed anomalies). The change point probability graph illustrates the problem: the orange line representing C has considerable inertia. To reduce this and improve results, we next tuned our sensitivity hyperparameter.

Our next model turned out to be overly sensitive. This model’s C was calculated as the moving window average over just one day (24 data points) plus two standard deviations (calculated over the same 24-hour period). As observed in the plots below, it overcompensated for the inertia of C in our baseline model. This model detected 2,277 anomalies throughout the summer of 2016.

The final change-point-based model we created sought to achieve a middle-of-the-road sensitivity. This was accomplished by extending the moving window (used to calculate our average and standard deviation for C) from 24 hours to one month. Although the graph of our detected anomalies is similar to the baseline model’s graph, the second plot illustrates the improvements resulting from the tuning of our sensitivity hyperparameter. Our final C dynamically accounts for seasonality yet is stable enough to avoid inflating our false positive rate.

Approach 3: Residual Analysis

This approach was a two-part challenge. We first built a supervised model to predict the typical change in energy consumption for the meter using time series lagged demand and other relevant predictors. We then developed a classification metric to identify usage anomalies and interpret irregular consumption behavior.

Modeling Process

As our training set had sufficient data and contained correlated time series lagged variables, we proceeded with modeling using Random Forest, XGBoost, and neural networks.

We used 3-fold cross validation to identify the optimal hyperparameters and used the RMSE to compare our models’ performances. As we were interested in using our model to detect anomalies, we opted for the RMSE as it penalizes larger errors more.

# Tuning the Random Forest model parameters
est = [10, 100, 200, 300]
leaf = [15, 30, 50, 70]
max_feats = ['auto', 'sqrt', 'log2']
arr = tuning(X_train, y_train, X_test, y_test, est, leaf)
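
Here, tuning is our own helper. For reference, an equivalent 3-fold grid search can be sketched with scikit-learn (RMSE as the scoring metric; in recent scikit-learn versions, max_features=1.0 replaces the legacy ‘auto’):

from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

param_grid = {
    'n_estimators': [10, 100, 200, 300],
    'min_samples_leaf': [15, 30, 50, 70],
    'max_features': [1.0, 'sqrt', 'log2'],  # 1.0 plays the role of the legacy 'auto'
}

search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid,
    cv=3,
    scoring='neg_root_mean_squared_error',  # RMSE penalizes larger errors more
)
search.fit(X_train, y_train)
print(search.best_params_, -search.best_score_)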
Comparison of our model metrics

All of our models performed extremely well. We identified the random forest as the optimal model due to its accuracy and interpretability. On average, this model’s predictions were off by about 15 kWh, and it explains about 87.84% of the variation in the change in energy consumption. The model identified the previous and twice-previous changes in energy consumption as the most significant features.

Our Random Forest model’s training and testing predictions

Anomaly Detection from Random Forest residuals

This final anomaly detection method was inspired by this article outlining strategies for time series anomaly detection. It analyzes the residuals of our random forest model and seeks to identify anomalies by comparing predicted output values to actual values. If the error for a point is three standard deviations greater or less than the two-week moving average error, the point is labeled an outlier. The table below illustrates this calculation for a set of points.

Defining an outlier as ±3 standard deviations from the mean error is only feasible with normally distributed errors
The point labeled in red is labeled an outlier since its error is 3 SDs lower than the two-week mean error
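
A sketch of that labeling rule (rf and the test objects stand in for the fitted model and hold-out set from the modeling step; names are illustrative):

import pandas as pd

# Residuals between actual and predicted changes in consumption
errors = pd.Series(y_test.values - rf.predict(X_test))

window = 24 * 14  # two weeks of hourly observations

moving_mean = errors.rolling(window, min_periods=1).mean()
moving_std = errors.rolling(window, min_periods=1).std()

# Outlier: the error lies more than three standard deviations
# from the two-week moving average error
outliers = (errors - moving_mean).abs() > 3 * moving_std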

This method is very valuable because of how easy it is to visualize for business stakeholders. The graphs below display anomaly identification for the week of March 7–13, 2017.

Errors that exceed the confidence interval are labeled anomalies in red

As with the KNN model, we evaluated results based on visualizations of the resulting data. The following graph displays energy consumption over time with outlier labels from the random forest.

There appears to be a more consistent distribution of outliers with this method. Seasonality still exerts influence over the anomalies, but in a less isolated manner than before. The table below shows the relation between outliers and previous values. Compared with the prior PyOD model, outliers are not as dependent on the lagged variables.

Outlier Visualizations

Higher frequency of outliers at very high or low temperatures
Highest proportion of points are outliers on Sundays; lowest on Saturdays
Higher proportion of points are outliers on holidays

Conclusion

Our final two-step anomaly detection approach performed the best. The first two unsupervised approaches, KNN and Banpei change point analysis, did not offer the most accurate results, as both failed to account for seasonality. By using a predictive model and then analyzing its residuals, our anomaly detection algorithm was attuned to the patterns within the data and insightfully labeled anomalies.

Understanding the relationship between model inaccuracy and data anomalies is essential to training an effective model. It is extremely important to adjust for extreme temperatures when managing energy consumption and anomalies. “Removing” this seasonality and temperature impact allows us to track anomalies according to a baseline, instead of falsely marking high consumption values caused by extreme temperatures as outliers. This allows firms to build a dynamic system for outlier detection that responds to changes in surrounding conditions.

Additional improvements to the model could include understanding the effect of temperature change, along with the age and quality of the meter, on energy use. This would allow building management to respond to changing temperatures (for example, when the weather forecast includes a cold front) and implement preventative maintenance programs for their meters.

Energy modeling and outlier analysis open doors to influence pricing models, projections for building design and costs, and preventative maintenance of meters. By predicting energy consumption, we can effectively store energy and understand anticipated demand to save costs.

Check out our code:

indialindsay/Anomaly_Detection (github.com)

References:

[1] Srinivas Katipamula & Michael R. Brambley (2005) Review Article: Methods for Fault Detection, Diagnostics, and Prognostics for Building Systems — A Review, Part I, HVAC&R Research, 11:1, 3–25, DOI: 10.1080/10789669.2005.10391123
