Publications by Author: Ferrag, Mohamed-Amine

2022
Berghout T, Benbouzid M, Ferrag M-A. Deep Learning with Recurrent Expansion for Electricity Theft Detection in Smart Grids. 48th Annual Conference of the IEEE Industrial Electronics Society, IECON 2022 [Internet]. 2022. Publisher's Version.
Abstract:
The increase in electricity theft has become one of the main concerns of power distribution networks. Indeed, electricity theft not only leads to financial losses but also damages reputation by reducing the quality of supply. With the advanced sensing technologies of metering infrastructures, the collection of electricity-consumption data enables data-driven methods to emerge for non-technical loss detection as an alternative to traditional experience-based, human-centric approaches. In this context, such fraud-prediction problems are generally characterized by missing patterns, class imbalance, and high cardinality, where a single feature can assume many possible values. This article is therefore introduced specifically to solve the data-representation problem and increase the separation between the different data classes. To that end, representations deeper than those of standard deep learning networks are obtained by repeatedly merging the learning models themselves into a more complex architecture, in a sort of recurrent expansion. To verify the effectiveness of the proposed recurrent expansion of deep learning (REDL) approach, a realistic electricity-theft dataset is used. REDL achieves excellent data-mapping results, demonstrated by both visualization and numerical metrics, and shows the ability to separate the different classes with high performance. Another important REDL feature, outlier correction, was also discovered in this study. Finally, comparison with recent works also proves the superiority of the REDL model.
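The recurrent-expansion idea described above can be sketched as follows. This is a minimal, hypothetical illustration only: it uses a closed-form ridge-regression base learner instead of the deep networks used in the paper, and the data, function names, and round count are all assumptions for the sketch. The key point it shows is each round re-training on the original features merged with the previous model's outputs.

```python
import numpy as np

def ridge_fit(X, y, lam=1e-2):
    # Closed-form ridge regression; a simple stand-in for the paper's
    # deep base learners (hypothetical simplification).
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # add bias column
    w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)
    return w

def ridge_predict(X, w):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return Xb @ w

def recurrent_expansion(X, y, rounds=3):
    # Each round re-trains on the original features merged with the
    # previous round's predictions, expanding the representation.
    preds = None
    for _ in range(rounds):
        Z = X if preds is None else np.hstack([X, preds])
        w = ridge_fit(Z, y)
        preds = ridge_predict(Z, w)
    return preds

# Toy imbalanced binary data standing in for theft/normal consumption records
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(float).reshape(-1, 1)
p = recurrent_expansion(X, y, rounds=3)
acc = float(((p > 0.5).astype(float) == y).mean())
print(round(acc, 2))
```

In the sketch, deeper rounds simply concatenate the previous round's predictions to the inputs; the paper's architecture merges entire learned models, which this linear toy cannot capture.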
Berghout T, Bentrcia T, Ferrag M-A, Benbouzid M. A Heterogeneous Federated Transfer Learning Approach with Extreme Aggregation and Speed. Mathematics [Internet]. 2022;10 (19). Publisher's Version.
Abstract:
Federated learning (FL) is a data-privacy-preserving, decentralized process that allows the local edge devices of smart infrastructures to train a collaborative model independently while keeping data localized. FL algorithms, which compute a well-structured average of the training parameters (e.g., the weights and biases resulting from training with stochastic-gradient-descent variants), face several challenges, namely expensive communication, systems heterogeneity, statistical heterogeneity, and privacy concerns. In this context, our paper targets the four aforementioned challenges while focusing on reducing communication and computational costs by employing recursive least squares (RLS) training rules. Accordingly, to the best of our knowledge, this is the first time the RLS algorithm has been modified to fully accommodate non-independent and identically distributed (non-IID) data for federated transfer learning (FTL). Furthermore, this paper also introduces a newly generated dataset capable of emulating such real conditions and of making data investigation feasible on ordinary commercial computers with quad-core microprocessors, with little need for high-end computing hardware. Applications of FTL-RLS to the generated data at different levels of complexity, closely related to different levels of cardinality, lead to a variety of conclusions supporting its performance for future use.
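The two ingredients named in the abstract, an RLS training rule on each client and a server-side average of local parameters, can be sketched as follows. This is a minimal single-round illustration under stated assumptions: it implements the textbook exponentially weighted RLS update and a FedAvg-style weight average, not the paper's FTL-RLS algorithm, and the synthetic non-IID clients and all names are hypothetical.

```python
import numpy as np

class RLS:
    """Textbook recursive least squares for a linear model d = w @ x
    (illustrative; the paper's FTL-RLS variant is more elaborate)."""
    def __init__(self, dim, lam=0.99, delta=100.0):
        self.w = np.zeros(dim)
        self.P = delta * np.eye(dim)   # inverse-correlation estimate
        self.lam = lam                 # forgetting factor

    def update(self, x, d):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)            # gain vector
        self.w = self.w + k * (d - self.w @ x)  # a-priori error correction
        self.P = (self.P - np.outer(k, Px)) / self.lam

def federated_round(clients_data, dim):
    # Each client trains locally with RLS on its own (non-IID) shard,
    # then the server averages the resulting weights (FedAvg-style).
    local_weights = []
    for X, y in clients_data:
        model = RLS(dim)
        for x, d in zip(X, y):
            model.update(x, d)
        local_weights.append(model.w)
    return np.mean(local_weights, axis=0)

# Synthetic clients sharing one true model but with shifted input
# distributions, emulating statistical heterogeneity (non-IID data).
rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0, 0.5])
clients = []
for shift in (0.0, 2.0, -2.0):
    X = rng.normal(loc=shift, size=(300, 3))
    clients.append((X, X @ true_w))
w_global = federated_round(clients, dim=3)
print(np.round(w_global, 2))
```

Because each client streams samples through the RLS update instead of running gradient-descent epochs, only one pass over local data is needed before aggregation, which is the communication/computation saving the abstract alludes to.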