Granular cabin: An efficient solution to neighborhood learning in big data

K Liu, T Li, X Yang, X Yang, D Liu, P Zhang, J Wang - Information Sciences, 2022 - Elsevier
Neighborhood Learning (NL) is a paradigm covering theories and techniques of
neighborhood, which facilitates data organization, representation and generalization. While …
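
A central construct in neighborhood learning is the δ-neighborhood of a sample, the set of samples within a chosen radius. A minimal NumPy sketch, assuming Euclidean distance and an illustrative radius `delta` (neither is specified in the snippet):

```python
import numpy as np

def delta_neighborhood(X, i, delta):
    """Indices of all samples lying within Euclidean distance `delta` of sample i."""
    dists = np.linalg.norm(X - X[i], axis=1)
    return np.where(dists <= delta)[0]

# toy usage
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]])
print(delta_neighborhood(X, 0, delta=0.5))  # -> [0 1]
```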

BIM log mining: Learning and predicting design commands

Y Pan, L Zhang - Automation in Construction, 2020 - Elsevier
This paper develops a framework to learn and predict design commands based upon
building information modeling (BIM) event log data stored in Autodesk Revit journal files …
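
The snippet only says that design commands are learned and predicted from Revit journal logs; as a generic baseline (not the framework proposed in the paper), a first-order Markov model over the logged command sequence already gives a next-command predictor. The command names below are hypothetical:

```python
from collections import Counter, defaultdict

def fit_bigram(commands):
    """Count command -> next-command transitions in a logged sequence."""
    transitions = defaultdict(Counter)
    for cur, nxt in zip(commands, commands[1:]):
        transitions[cur][nxt] += 1
    return transitions

def predict_next(transitions, current):
    """Return the most frequent follower of `current`, or None if unseen."""
    followers = transitions.get(current)
    return followers.most_common(1)[0][0] if followers else None

log = ["Wall", "Door", "Wall", "Door", "Window", "Wall", "Door"]  # hypothetical commands
model = fit_bigram(log)
print(predict_next(model, "Wall"))  # -> 'Door'
```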

Multilabel Prototype Generation for data reduction in K-Nearest Neighbour classification

JJ Valero-Mas, AJ Gallego, P Alonso-Jiménez… - Pattern Recognition, 2023 - Elsevier
Prototype Generation (PG) methods are typically considered for improving the efficiency of
the k-Nearest Neighbour (kNN) classifier when tackling high-size corpora. Such …
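
As a minimal illustration of the prototype-generation idea (not the multilabel method the paper proposes), one can replace each class by its centroid and classify with a nearest-prototype rule:

```python
import numpy as np

def class_centroids(X, y):
    """One prototype per class: the mean of its training samples."""
    labels = np.unique(y)
    protos = np.array([X[y == c].mean(axis=0) for c in labels])
    return protos, labels

def nearest_prototype(protos, labels, x):
    """Classify x by the closest prototype (1-NN on the reduced set)."""
    d = np.linalg.norm(protos - x, axis=1)
    return labels[np.argmin(d)]

X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
P, L = class_centroids(X, y)
print(nearest_prototype(P, L, np.array([0.1, 0.0])))  # -> 0
```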

A self-training method based on density peaks and an extended parameter-free local noise filter for k nearest neighbor

J Li, Q Zhu, Q Wu - Knowledge-Based Systems, 2019 - Elsevier
The self-training method is one of the relatively successful methodologies of semi-supervised
classification. It can exploit both labeled and unlabeled data to train a satisfactory …
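
The generic self-training loop behind such methods, leaving out the paper's density-peak seeding and noise filter, is short: train on the labeled pool, label the unlabeled samples, and move the most confident predictions into the labeled pool. The nearest-centroid base classifier and distance-margin confidence below are illustrative choices (and assume at least two classes):

```python
import numpy as np

def nearest_centroid_predict(Xl, yl, Xu):
    """Predicted label and a simple confidence (distance margin) for each unlabeled sample."""
    labels = np.unique(yl)
    cents = np.array([Xl[yl == c].mean(axis=0) for c in labels])
    d = np.linalg.norm(Xu[:, None, :] - cents[None, :, :], axis=2)   # (n_unlabeled, n_classes)
    order = np.argsort(d, axis=1)
    pred = labels[order[:, 0]]
    rows = np.arange(len(Xu))
    margin = d[rows, order[:, 1]] - d[rows, order[:, 0]]             # larger margin = more confident
    return pred, margin

def self_train(Xl, yl, Xu, n_rounds=5, per_round=1):
    """Iteratively move the most confidently predicted unlabeled samples into the labeled set."""
    Xl, yl, Xu = Xl.copy(), yl.copy(), Xu.copy()
    for _ in range(n_rounds):
        if len(Xu) == 0:
            break
        pred, margin = nearest_centroid_predict(Xl, yl, Xu)
        best = np.argsort(margin)[::-1][:per_round]
        Xl = np.vstack([Xl, Xu[best]])
        yl = np.concatenate([yl, pred[best]])
        Xu = np.delete(Xu, best, axis=0)
    return Xl, yl
```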

Early and extremely early multi-label fault diagnosis in induction motors

M Juez-Gil, JJ Saucedo-Dorantes, Á Arnaiz-González… - ISA Transactions, 2020 - Elsevier
The detection of faulty machinery and its automated diagnosis is an industrial priority
because efficient fault diagnosis implies efficient management of the maintenance times …

A fast instance selection method for support vector machines in building extraction

M Aslani, S Seipel - Applied Soft Computing, 2020 - Elsevier
Training support vector machines (SVMs) for pixel-based feature extraction from
aerial images requires selecting representative pixels (instances) as a training dataset. In …
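
A rough sketch of that workflow using scikit-learn; the selection strategy here, k-means representatives per class, is an illustrative stand-in for the paper's method:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def select_representatives(X, y, per_class=50, seed=0):
    """Per class, keep the training samples closest to k-means cluster centers."""
    keep = []
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        k = min(per_class, len(idx))
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X[idx])
        for center in km.cluster_centers_:
            # nearest actual sample to each center
            keep.append(idx[np.argmin(np.linalg.norm(X[idx] - center, axis=1))])
    return np.unique(keep)

# usage sketch: fit the SVM only on the reduced set
# sel = select_representatives(X_train, y_train, per_class=200)
# clf = SVC(kernel="rbf").fit(X_train[sel], y_train[sel])
```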

Data reduction via multi-label prototype generation

S Ougiaroglou, P Filippakis, G Fotiadou, G Evangelidis - Neurocomputing, 2023 - Elsevier
A very common practice to speed up instance-based classifiers is to reduce the size of their
training set, that is, to replace it with a condensing set, hoping that their accuracy will not worsen …
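
The classic single-label form of condensing is Hart's Condensed Nearest Neighbour rule; a sketch of it (the paper itself addresses the multi-label case) is:

```python
import numpy as np

def condense(X, y, passes=3):
    """Hart-style condensing: add a sample only if the current 1-NN store misclassifies it."""
    store = [0]                                   # seed the condensing set with one sample
    for _ in range(passes):
        changed = False
        for i in range(len(X)):
            if i in store:
                continue
            d = np.linalg.norm(X[store] - X[i], axis=1)
            if y[store[int(np.argmin(d))]] != y[i]:
                store.append(i)
                changed = True
        if not changed:
            break
    return np.array(sorted(store))
```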

A parameter-free hybrid instance selection algorithm based on local sets with natural neighbors

J Li, Q Zhu, Q Wu - Applied Intelligence, 2020 - Springer
Instance selection aims to search for the best patterns in the training set; the main instance
selection methods include condensation methods, edition methods and hybrid methods …
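
Of the three families named in the snippet, edition is the easiest to illustrate: a Wilson-style edited nearest neighbour pass (not the local-set, natural-neighbour method of the paper) drops every sample that disagrees with the majority of its own k neighbours:

```python
import numpy as np
from collections import Counter

def wilson_edit(X, y, k=3):
    """Keep a sample only if the majority vote of its k nearest neighbours matches its label."""
    keep = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                             # exclude the sample itself
        nn = np.argsort(d)[:k]
        vote = Counter(y[nn]).most_common(1)[0][0]
        if vote == y[i]:
            keep.append(i)
    return np.array(keep)
```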

Extensions to rank-based prototype selection in k-Nearest Neighbour classification

JR Rico-Juan, JJ Valero-Mas, J Calvo-Zaragoza - Applied Soft Computing, 2019 - Elsevier
The k-nearest neighbour rule is commonly considered for classification tasks given its
straightforward implementation and good performance in many applications. However, its …
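
The rule itself is indeed only a few lines; a plain NumPy sketch with Euclidean distance and majority voting:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    return Counter(y_train[nn]).most_common(1)[0][0]
```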

Combining multi-label classifiers based on projections of the output space using evolutionary algorithms

JM Moyano, EL Gibaja, KJ Cios, S Ventura - Knowledge-Based Systems, 2020 - Elsevier
The multi-label classification task has gained a lot of attention in the last decade thanks to its
applicability to many real-world problems where each object can be attached to …
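
A RAkEL-style sketch conveys the idea of projecting the output space onto label subsets and combining the resulting classifiers; the random subset choice and decision-tree base learner below are illustrative, and the evolutionary selection of projections described in the paper is not reproduced:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_label_subsets(X, Y, n_models=5, k=3, seed=0):
    """Train one label-powerset classifier per random k-label subset of the output space."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        subset = rng.choice(Y.shape[1], size=k, replace=False)
        # encode the k-label combination of each sample as a single class string, e.g. "010"
        codes = np.array(["".join(map(str, row)) for row in Y[:, subset]])
        models.append((subset, DecisionTreeClassifier(random_state=0).fit(X, codes)))
    return models

def predict_label_subsets(models, X, n_labels, threshold=0.5):
    """Decode each model's prediction back to its labels and average the votes."""
    votes = np.zeros((len(X), n_labels))
    counts = np.zeros(n_labels)
    for subset, clf in models:
        pred = clf.predict(X)                                  # strings like "010"
        bits = np.array([[int(b) for b in p] for p in pred])
        votes[:, subset] += bits
        counts[subset] += 1
    return (votes / np.maximum(counts, 1)) >= threshold
```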