Machine Learning 2022/23 (24h, 6 ECTS, Laboratory classes)

Data/feature pre-processing

🡢 Data cleaning: missing, inconsistent, and noisy data

🡢 Missing values: univariate vs multivariate features, nearest neighbour imputation
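
A minimal sketch of univariate vs nearest-neighbour imputation, assuming the labs use scikit-learn (the outline does not name a toolkit):

```python
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

X = np.array([[1.0, 2.0], [3.0, np.nan], [5.0, 6.0], [np.nan, 8.0]])

# Univariate: fill each column independently with its own mean
X_uni = SimpleImputer(strategy="mean").fit_transform(X)

# Multivariate: fill from the 2 nearest rows in feature space
X_knn = KNNImputer(n_neighbors=2).fit_transform(X)
```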

🡢 Feature scaling: standard, min-max, max absolute scaling, uniform/Gaussian distribution mapping
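
The four scaling strategies listed above could be compared as follows (a scikit-learn sketch; the toy data is made up):

```python
import numpy as np
from sklearn.preprocessing import (MaxAbsScaler, MinMaxScaler,
                                   QuantileTransformer, StandardScaler)

X = np.array([[1.0], [2.0], [3.0], [10.0]])  # note the outlier

X_std = StandardScaler().fit_transform(X)    # zero mean, unit variance
X_mm = MinMaxScaler().fit_transform(X)       # mapped into [0, 1]
X_ma = MaxAbsScaler().fit_transform(X)       # divided by the max |value|

# Map to an approximately uniform distribution; output_distribution="normal"
# would target a Gaussian instead
X_q = QuantileTransformer(n_quantiles=4).fit_transform(X)
```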

🡢 Feature normalisation
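
Unlike the scalers above, which operate per feature, normalisation rescales each sample to unit norm. A short scikit-learn sketch:

```python
import numpy as np
from sklearn.preprocessing import Normalizer

X = np.array([[3.0, 4.0], [1.0, 0.0]])

# Each row is divided by its own L2 norm, e.g. [3, 4] -> [0.6, 0.8]
X_norm = Normalizer(norm="l2").fit_transform(X)
```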

🡢 Feature encoding/embedding
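
Two common encodings for categorical features, sketched with scikit-learn (the colour data is illustrative):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

X = np.array([["red"], ["green"], ["blue"], ["green"]])

# Ordinal: one integer per category (imposes an arbitrary order)
X_ord = OrdinalEncoder().fit_transform(X)

# One-hot: one binary column per category (order-free, but wider)
X_oh = OneHotEncoder().fit_transform(X).toarray()
```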

🡢 Discretisation: k-bins, feature binarisation
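
Both discretisation variants in one scikit-learn sketch:

```python
import numpy as np
from sklearn.preprocessing import Binarizer, KBinsDiscretizer

X = np.array([[0.1], [0.4], [0.6], [0.9]])

# k-bins: cut the range into 2 equal-width bins, encoded as ordinal integers
X_bins = KBinsDiscretizer(n_bins=2, encode="ordinal",
                          strategy="uniform").fit_transform(X)

# Binarisation: threshold at 0.5 -> {0, 1}
X_bin = Binarizer(threshold=0.5).fit_transform(X)
```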

🡢 Label balancing: up-sampling, down-sampling, advanced balancing methods
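
Up- and down-sampling can be sketched with scikit-learn's resample utility (the toy class sizes are made up); advanced methods such as SMOTE live in the separate imbalanced-learn package:

```python
import numpy as np
from sklearn.utils import resample

# Imbalanced toy set: 6 majority-class samples vs 2 minority-class samples
X_maj = np.arange(12.0).reshape(6, 2)
X_min = np.array([[100.0, 101.0], [102.0, 103.0]])

# Up-sample the minority class to the majority size (with replacement)
X_min_up = resample(X_min, replace=True, n_samples=len(X_maj), random_state=0)

# Down-sample the majority class to the minority size (without replacement)
X_maj_down = resample(X_maj, replace=False, n_samples=len(X_min), random_state=0)
```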

Evaluating a model

🡢 Confusion matrix in binary vs multi-class classification
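
The binary vs multi-class contrast in a scikit-learn sketch (toy labels):

```python
from sklearn.metrics import confusion_matrix

# Binary case: a 2x2 matrix [[TN, FP], [FN, TP]]
y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0]
cm_bin = confusion_matrix(y_true, y_pred)

# Multi-class case: one row per true class, one column per predicted class
y_true_mc = [0, 1, 2, 2, 1]
y_pred_mc = [0, 2, 2, 2, 1]
cm_mc = confusion_matrix(y_true_mc, y_pred_mc)
```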

🡢 Classification metrics: micro/macro/weighted averages
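
The three averaging schemes, sketched on F1 with scikit-learn (toy labels); note that micro-F1 equals accuracy in single-label multi-class problems:

```python
from sklearn.metrics import f1_score

y_true = [0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 1, 1, 2, 2]

f1_micro = f1_score(y_true, y_pred, average="micro")        # global TP/FP/FN
f1_macro = f1_score(y_true, y_pred, average="macro")        # unweighted class mean
f1_weighted = f1_score(y_true, y_pred, average="weighted")  # mean weighted by support
```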

🡢 Training and test data splitting: pitfalls, cross-validation, k-fold stratified cross-validation
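
Stratified k-fold cross-validation in a scikit-learn sketch (the classifier choice is illustrative); stratification preserves class proportions in every fold, avoiding the pitfall of folds that under-represent a class:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = load_iris(return_X_y=True)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
```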

Ensembles and Neural Networks

🡢 Ensembles: bagging meta-estimator, forests (random forests and extremely randomised trees)
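
The three tree ensembles side by side, sketched with scikit-learn (the dataset is illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import (BaggingClassifier, ExtraTreesClassifier,
                              RandomForestClassifier)
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Bagging meta-estimator: many trees on bootstrap samples, majority vote
bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10,
                        random_state=0).fit(X, y)

# Random forests add per-split feature subsampling;
# extremely randomised trees also randomise the split thresholds
rf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
et = ExtraTreesClassifier(n_estimators=10, random_state=0).fit(X, y)
```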

🡢 AdaBoost
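
A minimal AdaBoost sketch with scikit-learn (dataset chosen for illustration):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier

X, y = load_breast_cancer(return_X_y=True)

# Each new weak learner (a depth-1 tree by default) focuses on the samples
# the previous ones misclassified, via sample re-weighting
ada = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
train_acc = ada.score(X, y)
```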

🡢 Stacked ensembles
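
A stacked ensemble sketch with scikit-learn (the choice of base and meta learners is illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# The base learners' cross-validated predictions become the features
# that the final (meta) estimator is trained on
stack = StackingClassifier(
    estimators=[("svm", SVC()), ("tree", DecisionTreeClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
).fit(X, y)
```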

🡢 Multilayer Perceptron
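
A multilayer perceptron sketch with scikit-learn (architecture and dataset are illustrative):

```python
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X = StandardScaler().fit_transform(X)  # MLPs are sensitive to feature scale

# One hidden layer of 32 ReLU units, trained with the Adam optimiser
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                    random_state=0).fit(X, y)
```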

Latent representation and embeddings

🡢 The MNIST dataset as a benchmark

🡢 Using BAE to build simple and boosting-based autoencoders

🡢 Variational Autoencoders (VAEs)

🡢 Vector Quantized Variational Autoencoders (VQ-VAEs)

🡢 Exemplar Autoencoder

🡢 Attention mechanisms and Transformers
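
The core of the attention mechanism, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V, in a NumPy sketch (shapes are illustrative):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return softmax(Q K^T / sqrt(d_k)) V and the attention weights."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise query-key similarities
    # Numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))          # 3 queries of dimension 4
K = rng.normal(size=(5, 4))          # 5 keys
V = rng.normal(size=(5, 4))          # 5 values
out, w = scaled_dot_product_attention(Q, K, V)
```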

Hyperparameter tuning

🡢 Grid search: drawbacks
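
Grid search's main drawback is visible in a scikit-learn sketch: the number of fits grows multiplicatively with every hyperparameter axis (here 3 × 3 = 9 candidates, each cross-validated):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

grid = GridSearchCV(SVC(),
                    {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
                    cv=3).fit(X, y)
```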

🡢 Randomised search: drawbacks
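
Randomised search trades the exhaustive grid for a fixed budget of draws from distributions (a scikit-learn sketch); it is cheaper, but may miss the optimum and, unlike Bayesian optimisation, ignores past results:

```python
from scipy.stats import loguniform
from sklearn.datasets import load_iris
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

search = RandomizedSearchCV(
    SVC(),
    {"C": loguniform(1e-2, 1e2), "gamma": loguniform(1e-3, 1e1)},
    n_iter=8,          # fixed budget, regardless of the search-space size
    cv=3,
    random_state=0,
).fit(X, y)
```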

🡢 Bayesian optimisation: advantages and drawbacks
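
A from-scratch sketch of Bayesian optimisation on a made-up 1-D objective, using a Gaussian-process surrogate and the expected-improvement acquisition (dedicated libraries such as Optuna or scikit-optimize do this more robustly):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):
    # Hypothetical "validation score" to maximise; optimum at x = 2
    return -(x - 2.0) ** 2

candidates = np.linspace(0, 4, 101).reshape(-1, 1)
X_obs = np.array([[0.0], [3.0]])              # two initial evaluations
y_obs = objective(X_obs).ravel()

for _ in range(5):
    # Surrogate model of the objective from the points evaluated so far
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True,
                                  alpha=1e-6).fit(X_obs, y_obs)
    mu, sigma = gp.predict(candidates, return_std=True)

    # Expected improvement: trade off exploiting high mu vs exploring high sigma
    best = y_obs.max()
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

    x_next = candidates[np.argmax(ei)].reshape(1, -1)
    X_obs = np.vstack([X_obs, x_next])
    y_obs = np.append(y_obs, objective(x_next).ravel())
```

The advantage over grid/randomised search is that each new evaluation is chosen using all previous ones; the drawbacks are the cost of fitting the surrogate and its sensitivity to the kernel choice.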

Complex algorithms for Electronic Health Record (EHR) data

🡢 Vanilla-LSTM


🡢 Pre-trained BERT-like solutions (PubMedBERT)

Bardh Prenkaj
Computer Scientist, PhD
