A Brain-Computer Interface (BCI), integrated with the Internet of Medical Things (IoMT) and based on electroencephalogram (EEG) technology, allows users to control external devices by decoding brainwave patterns. Advanced deep-learning BCIs, especially those exploiting sensorimotor rhythms (SMRs), have emerged as a direct communication channel between the brain and external devices. In SMR paradigms, users imagine limb movements to induce characteristic activity changes in the motor cortex. Despite this progress, some users struggle with BCIs because of weak signals, inter-subject variability, and limited task applicability. This study introduces an unsupervised EEG preprocessing pipeline for SMR-based BCIs. An EEG dataset recorded during finger movements is evaluated with two cleaning methods: an investigator-dependent (supervised) pipeline and the proposed unsupervised method. Two feature datasets are then generated by converting the cleaned EEG into spectrogram images: one from data cleaned with the supervised pipeline and one from data cleaned with the proposed unsupervised pipeline. Five transfer-learning convolutional neural network (TL-CNN) models are extensively assessed on these datasets for distinguishing Motor Imagery (MI) from finger movements (Mex). A novel probability-fusion technique is developed to enhance TL-CNN classification of Mex versus MI finger-pinching actions. Results show that the fusion-based method outperforms the other approaches when applied to the unsupervised EEG data, achieving 97.9% accuracy, 93.4% precision, 95% recall, and a 93.2% F1-score. The combination of the unsupervised preprocessing pipeline and the fusion-based CNN method thus marks significant progress in distinguishing MI from Mex activity, and could lead to more effective and user-friendly BCI systems.
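The abstract does not specify the fusion rule; a common instantiation of probability fusion across classifiers is (weighted) averaging of each model's softmax outputs before taking the arg-max. A minimal sketch, assuming equal weights and hypothetical logits for the two classes (MI vs. Mex), not the paper's actual models or data:

```python
import numpy as np

def softmax(logits):
    """Convert raw per-class scores to probabilities along the last axis."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_probabilities(model_logits, weights=None):
    """Fuse class probabilities from several models by weighted averaging.

    model_logits: list of (n_trials, n_classes) logit arrays, one per
    TL-CNN model (hypothetical stand-ins here). Returns the fused class
    labels and the fused probability matrix.
    """
    probs = np.stack([softmax(l) for l in model_logits])  # (n_models, n, c)
    if weights is None:
        weights = np.full(len(model_logits), 1.0 / len(model_logits))
    fused = np.tensordot(weights, probs, axes=1)          # (n_trials, c)
    return fused.argmax(axis=-1), fused

# Example: three hypothetical models scoring two trials (class 0 = MI, 1 = Mex).
logits = [np.array([[2.0, 0.5], [0.2, 1.5]]),
          np.array([[1.2, 0.8], [0.4, 2.0]]),
          np.array([[1.8, 0.3], [1.1, 0.9]])]
labels, fused = fuse_probabilities(logits)
```

Averaging probabilities (rather than hard votes) lets a confident minority model temper an uncertain majority, which is one plausible reason a fusion step can outperform any single TL-CNN.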