Publications

2022
Benaggoune K, Al-Masry Z, Ma J, Devalland C, Mouss L-H, Zerhouni N. A deep learning pipeline for breast cancer Ki-67 proliferation index scoring. Image and Video Processing (eess.IV) [Internet]. 2022. Publisher's Version. Abstract:
The Ki-67 proliferation index is an essential biomarker that helps pathologists diagnose and select appropriate treatments. However, automatic evaluation of Ki-67 is difficult because nuclei overlap and vary widely in their properties. This paper proposes an integrated pipeline for accurate automatic Ki-67 counting that highlights the impact of nuclei-separation techniques. First, semantic segmentation is performed by combining Squeeze-and-Excitation ResNet and U-Net to extract nuclei from the background. The extracted nuclei are then divided into overlapped and non-overlapped regions based on eight geometric and statistical features. A marker-based watershed algorithm is then applied only to the overlapped regions to separate nuclei. Finally, deep features are extracted from each nucleus patch using ResNet18 and classified as positive or negative by a random forest classifier. The pipeline's performance is validated on a dataset from the Department of Pathology at the Hôpital Nord Franche-Comté.
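The marker-based watershed step described above can be sketched with SciPy alone. This is an illustration, not the authors' implementation: the toy two-circle mask, the peak-detection heuristic, and the 0.8 threshold are assumptions chosen for the demo.

```python
import numpy as np
from scipy import ndimage as ndi

# Toy binary mask: two overlapping circular "nuclei" (assumption for demo).
yy, xx = np.mgrid[0:60, 0:60]
mask = ((yy - 30) ** 2 + (xx - 20) ** 2 < 144) | \
       ((yy - 30) ** 2 + (xx - 40) ** 2 < 144)

# Distance transform: bright at nucleus centers, dark at the overlap "neck".
dist = ndi.distance_transform_edt(mask)

# Markers: strong local maxima of the distance map, ideally one per nucleus.
local_max = ndi.maximum_filter(dist, size=9)
peaks = (dist == local_max) & (dist > 0.8 * dist.max())
markers, n_markers = ndi.label(peaks)

# Watershed on the inverted distance map splits the overlapped region.
elevation = np.uint8(255 * (1 - dist / dist.max()))
seeds = markers.astype(np.int16)
seeds[~mask] = -1  # negative markers are treated as background
labels = ndi.watershed_ift(elevation, seeds)
```

After flooding, `labels` assigns the two touching circles to distinct positive labels, which is the behavior the pipeline relies on before per-nucleus classification.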
Berghout T, Benbouzid M, Ferrag M-A. Deep Learning with Recurrent Expansion for Electricity Theft Detection in Smart Grids. 48th Annual Conference of the IEEE Industrial Electronics Society, IECON 2022 [Internet]. 2022. Publisher's Version. Abstract:
The increase in electricity theft has become one of the main concerns of power distribution networks. Electricity theft not only causes financial losses but also damages reputation by reducing the quality of supply. With the advanced sensing technologies of metering infrastructures, collected electricity-consumption data enables data-driven methods for non-technical loss detection as an alternative to traditional experience-based, human-centric approaches. Such fraud prediction problems are generally characterized by missing patterns, class imbalance, and high cardinality, where a single feature can assume many possible values. This article therefore addresses the data representation problem and increases the separation (sparseness) between data classes. Representations deeper than standard deep learning networks are obtained by repeatedly merging the learning models themselves into a more complex architecture, in a sort of recurrent expansion. To verify the effectiveness of the proposed recurrent expansion of deep learning (REDL) approach, a realistic electricity theft dataset is used. REDL achieves excellent data mapping results, demonstrated by both visualization and numerical metrics, and separates the different classes with high performance. An additional REDL capability, outlier correction, is also observed in this study. Finally, comparison with recent works confirms the superiority of the REDL model.
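The recurrent-expansion idea sketched above — repeatedly feeding a model's own outputs back in as extra input features — can be illustrated with scikit-learn. This is a minimal sketch on synthetic data, not the authors' REDL implementation; logistic regression stands in for the deep learners, and the round count of 3 is an arbitrary choice.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

Ztr, Zte = Xtr, Xte
for _ in range(3):  # each round expands the representation with model outputs
    aux = LogisticRegression(max_iter=1000).fit(Ztr, ytr)
    Ztr = np.hstack([Ztr, aux.predict_proba(Ztr)])
    Zte = np.hstack([Zte, aux.predict_proba(Zte)])

# Final model trained on the fully expanded representation.
final = LogisticRegression(max_iter=1000).fit(Ztr, ytr)
accuracy = final.score(Zte, yte)
```

Each round appends two probability columns, so after three rounds the feature space grows from 10 to 16 columns, mimicking the "deeper than deep" expansion of the representation.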
Berghout T, Benbouzid M. Detecting Cyberthreats in Smart Grids Using Small-Scale Machine Learning. ELECTRIMACS 2022 [Internet]. 2022. Publisher's Version. Abstract:
Due to advanced monitoring technologies that connect the cyber and physical layers to the Internet, cyber-physical systems are more vulnerable than ever to cyberthreats that can damage the system. Consequently, many researchers have devoted themselves to detecting and identifying such threats in order to mitigate their effects. Among the available tools, Machine Learning (ML) has become dominant in the field thanks to its usability, including the availability of black-box models. In this context, this paper addresses the detection of cyberattacks in Smart Grid (SG) networks that use industrial control systems (ICS), through the integration of ML models assembled on a small scale. More precisely, it studies an electric traction substation system used in the railway industry. The main novelty of our contribution lies in studying data that are more realistic than those in traditional studies in the state-of-the-art literature, by investigating more realistic types of attacks. It also emulates data analysis and a larger feature space under the connectivity protocols most commonly used in today's industry, such as S7Comm and Modbus.
Zermane H, Drardja A. Development of an efficient cement production monitoring system based on the improved random forest algorithm. The International Journal of Advanced Manufacturing Technology [Internet]. 2022;120:1853. Publisher's Version. Abstract:
Strengthening production plants and process-control functions contributes to a global improvement of manufacturing systems because of their cross-functional characteristics in the industry. Companies have established various innovative and operational strategies, increasing both their competitiveness and their value. Machine learning (ML) techniques have become an enticing option for addressing industrial issues in the current manufacturing sector since the emergence of Industry 4.0 and the extensive integration of paradigms such as big data and high computational power. Implementing a system able to identify faults early, and thus avoid critical situations in the production line and its environment, is crucial. Therefore, powerful machine learning algorithms are applied for fault diagnosis, real-time data classification, and predicting the operating state of the production line. Random forests proved to be the best classifier, with an accuracy of 97%, compared with 94.18% for the SVM model, 93.83% for the K-NN model, 83.73% for the decision tree model, and 80.25% for the logistic regression model. The excellent experimental results of the random forest model demonstrate the merits of this implementation for production performance, ensuring predictive maintenance and avoiding wasted energy.
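The five-classifier comparison reported above can be reproduced in outline with scikit-learn. The synthetic dataset and default hyperparameters below are placeholders, so the scores will not match the paper's cement-plant figures; the sketch only shows the evaluation pattern.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder data standing in for the plant's process measurements.
X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=8, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(random_state=0),
    "k-NN": KNeighborsClassifier(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
}

# Held-out accuracy per model, the comparison metric used in the paper.
scores = {name: model.fit(Xtr, ytr).score(Xte, yte)
          for name, model in models.items()}
```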
Haouassi H, Mahdaoui R, Chouhal O, Bekhouche A. An efficient classification rule generation for coronary artery disease diagnosis using a novel discrete equilibrium optimizer algorithm. Journal of Intelligent & Fuzzy Systems [Internet]. 2022;43(3):2315-2331. Publisher's Version. Abstract:
Many machine learning-based methods have been applied to Coronary Artery Disease (CAD) diagnosis and achieve high accuracy. However, they are black-box methods that cannot explain the reasons behind a diagnosis. The trade-off between the accuracy and interpretability of diagnosis models is important, especially for human disease. This work proposes an approach for generating rule-based models for CAD diagnosis. Classification rule generation is modeled as a combinatorial optimization problem that can be solved by metaheuristic algorithms. Swarm intelligence algorithms such as the Equilibrium Optimizer Algorithm (EOA) have demonstrated strong performance on various optimization problems. This study introduces a Novel Discrete Equilibrium Optimizer Algorithm (NDEOA) for generating classification rules from a training CAD dataset. The proposed NDEOA is a discrete version of EOA that uses a discrete particle encoding to represent a classification rule; new discrete operators are also defined for the particle's position-update equation to adapt the real-valued operators to the discrete space. The approach is evaluated on the real-world Z-Alizadeh Sani dataset. It generates a diagnosis model composed of 17 rules: five for the class "Normal" and 12 for the class "CAD". Compared with nine black-box and eight white-box state-of-the-art approaches, the generated model is more accurate and more interpretable than all the white-box models and is competitive with the black-box models. It achieves an overall accuracy, sensitivity, and specificity of 93.54%, 80%, and 100%, respectively, showing that the proposed approach can be used to generate efficient rule-based CAD diagnosis models.
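The kind of interpretable rule-based model that NDEOA searches for can be represented very simply. The rule format, feature indices, thresholds, and first-match policy below are illustrative assumptions, not the paper's actual rules or encoding.

```python
# Each rule: a list of (feature_index, operator, threshold) conditions
# plus a predicted class. A rule fires only if all conditions hold.
OPS = {"<=": lambda a, b: a <= b, ">": lambda a, b: a > b}

def rule_matches(rule, x):
    return all(OPS[op](x[i], t) for i, op, t in rule["conditions"])

def classify(rules, x, default="Normal"):
    for rule in rules:  # first matching rule decides the class
        if rule_matches(rule, x):
            return rule["class"]
    return default

# Hypothetical two-rule model (indices and thresholds are made up).
rules = [
    {"conditions": [(0, ">", 0.5), (1, "<=", 2.0)], "class": "CAD"},
    {"conditions": [(0, "<=", 0.5)], "class": "Normal"},
]

print(classify(rules, [0.8, 1.0]))  # first rule fires -> "CAD"
print(classify(rules, [0.2, 3.0]))  # second rule fires -> "Normal"
```

A metaheuristic such as NDEOA then searches over such condition lists, scoring each candidate rule set by its accuracy and coverage on the training data.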
Berghout T, Benbouzid M. EL-NAHL: Exploring labels autoencoding in augmented hidden layers of feedforward neural networks for cybersecurity in smart grids. Reliability Engineering & System Safety [Internet]. 2022;226. Publisher's Version. Abstract:
The reliability and security of power distribution and data traffic in a smart grid (SG) are very important for industrial control systems (ICS). Indeed, SG cyber-physical connectivity is subject to several vulnerabilities through which cyberthreats can damage or disrupt its processes. Today's ICSs experience highly complex data change and dynamism, which increases the difficulty of detecting and mitigating cyberattacks. Accordingly, and since Machine Learning (ML) is widely studied in cybersecurity, the objectives of this paper are twofold. First, for algorithmic simplicity, a small-scale ML algorithm that reduces computational costs is proposed. The algorithm adopts a neural network with an augmented hidden layer (NAHL) to accomplish the learning procedure easily and efficiently. Second, to address the data complexity arising from rapid change and dynamism, a label autoencoding approach is introduced for Embedding Labels in the NAHL (EL-NAHL) architecture, taking advantage of label propagation when separating data scatters. Furthermore, to provide a more realistic analysis addressing real-world threat scenarios, a dataset from an electric traction substation used in the high-speed rail industry is adopted. Compared with existing algorithms and previous works, the results show that the proposed EL-NAHL architecture is effective even under massive, dynamically changing, and imbalanced data.
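One way to read the label-embedding idea — treating the label codes themselves as reconstruction targets so their structure shapes the learned representation — can be sketched as follows. This toy regression-on-one-hot-labels setup is an interpretation for illustration only, not the EL-NAHL architecture, and the single 64-unit hidden layer is an arbitrary choice.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
Y = np.eye(2)[y]  # one-hot label codes used as regression targets

Xtr, Xte, Ytr, Yte = train_test_split(X, Y, random_state=0)

# Single hidden layer; the network learns to reproduce the label codes
# from the inputs, so label structure is embedded in the representation.
net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
net.fit(Xtr, Ytr)

# Decode the reconstructed label code back to a class decision.
pred = net.predict(Xte).argmax(axis=1)
accuracy = (pred == Yte.argmax(axis=1)).mean()
```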
Bellal S-E. Exploration du potentiel de la vision artificielle pour la reconnaissance d'objets en vue d'une conception d'un dispositif intelligent dans un contexte industriel [Exploring the potential of machine vision for object recognition toward the design of an intelligent device in an industrial context]. [Internet]. 2022. Publisher's Version
