Faculty of Mathematics

University of Belgrade

Serbia

MLA@MATF is a group of researchers and students interested in various fields of machine learning and its applications. The group is based at the Faculty of Mathematics, University of Belgrade, but it also includes researchers, students, and practitioners from other research institutions and from industry. Fields of application we are especially interested in include natural language processing, bioinformatics, network analysis, automated reasoning, and software verification.

The group aims to establish closer communication between researchers and students interested in machine learning in order to:

- intensify research on machine learning models suitable for application in the diverse fields in which we have expertise, which either rely on or are increasingly influenced by machine learning,
- share experience and knowledge in the field among participants, with a special focus on students,
- apply machine learning in practice.

Each meeting of MLA@MATF consists of two talks, each being a tutorial, a presentation of a published research paper, or a presentation of a research idea under development. In 2020, meetings are held on Wednesdays at 19:00 in classroom 706, according to the schedule below. To participate, please contact Dr. Mladen Nikolić.

- Petar Veličković (DeepMind): Graph Representation Learning for Algorithmic Reasoning

- Marko Milanović (MATF/Everseen): Understanding Black-Box Predictions via Influence Functions
- Mladen Nikolić (MATF): An Adversarial Approach to Fairness in Machine Learning

- Filip Panjević (YDRIVE): Machine Learning in Geometric Computer Vision Part 2
- Vukašin Ranković (MDCS): ICCV 2019 Short Overview

- Vladimir Perović (INN Vinča): Prediction of Protein Functions and Protein-Protein Interactions Using Machine Learning
- Anđelka Zečević (MATF): Determinantal Point Processes

- Nikola Popović (ETF): Introduction to Gaussian Processes for Machine Learning

- Miloš Jordanski (MATF): Reinforcement Learning: An Introduction
- Miloš Jordanski (MATF): Meta-Learning

- Filip Panjević (YDRIVE): Machine Learning in Geometric Computer Vision
- Andrija Petrović (MF/FON): GCRF for Classification, Fast Approximation, and Applications

- Marko Knežević (Nordeus): Introduction to Object Tracking
- Milan Ilić (MATF/Everseen): Auto-Encoding Variational Bayes

- Milan Čugurović (MATF): A Neural Algorithm of Artistic Style
- Kosta Grujčić (MATF): Simultaneous Localization and Mapping

- Anđelka Zečević (MATF): Summarization with Pointer-Generator Networks
- Uroš Stegić (MATF): Few Shot Learning

- Filip Broćić (MATF): Introduction to Persistent Homology
- Filip Jekić (MATF/ASW): Deep Learning with Topological Signatures

- Uroš Stegić (MATF): Object Detection through Machine Learning
- Vladisav Jelisavčić (MI SANU): Introduction to Probabilistic Graphical Models

- Michał Warchalski (Nordeus): Deep Learning for Football Video Analysis
- Predrag Tadić (ETF): Introduction to Boosting

- Momčilo Vasilijević (MDCS): Recurrent Neural Networks: Introduction and Applications

- Veljko Milutinović (ETF/Maxeler), Miloš Kotlar (ETF/ABB): DataFlow SuperComputing for Big Data and Tensor Calculus
- Miloš Nešić (MATF/Totient): Wasserstein GAN

- Miloš Jordanski (MATF/SBG): Generative Adversarial Networks
- Aleksandar Šuka (MATF/RT-RK): Binarized Neural Networks

- Jovana Mitrović (University of Oxford/DeepMind): Causal Discovery via Kernel Deviance Measures
- Jovana Mitrović (University of Oxford/DeepMind): DR-ABC: Approximate Bayesian Computation with Kernel-Based Distribution Regression

- Mladen Nikolić (MATF): Gaussian Conditional Random Fields for Regression
- Andrija Petrović (MF): Gaussian Conditional Random Fields for Classification

- Mladen Nikolić (MATF): About Machine Learning and Applications Group
- Miloš Jovanović (FON): Regularization for Multi-Task Learning
- Miloš Jordanski (MATF/SBG): Generative Models: An Introduction

Relevant conferences:

- NIPS - Neural Information Processing Systems
- ICML - International Conference on Machine Learning
- ECML-PKDD - The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases
- ICLR - International Conference on Learning Representations
- COLT - Conference on Learning Theory
- KDD - SIGKDD Conference on Knowledge Discovery and Data Mining
- AAAI - The AAAI Conference on Artificial Intelligence
- IJCAI - International Joint Conference on Artificial Intelligence
- ECAI - European Conference on Artificial Intelligence

Relevant journals:

- Journal of Machine Learning Research
- Machine Learning
- Journal of Artificial Intelligence Research
- IEEE Transactions on Neural Networks and Learning Systems
- Knowledge and Information Systems

Recommended books:

- C. Bishop, Pattern Recognition and Machine Learning
- K. Murphy, Machine Learning: A Probabilistic Perspective
- T. Hastie, R. Tibshirani, J. Friedman, The Elements of Statistical Learning
- S. Shalev-Shwartz, S. Ben-David, Understanding Machine Learning: From Theory to Algorithms
- V. Vapnik, Statistical Learning Theory
- I. Goodfellow, Y. Bengio, A. Courville, Deep Learning
- R. Sutton, A. Barto, Reinforcement Learning: An Introduction
- D. Koller, N. Friedman, Probabilistic Graphical Models
- S. Boyd, L. Vandenberghe, Convex Optimization

Selected papers:

- O. Bousquet et al., Introduction to Statistical Learning Theory
- A. van den Oord et al., WaveNet: A Generative Model for Raw Audio
- M. Diligenti et al., Semantic-based regularization for learning and inference
- F. Costa, Learning an efficient constructive sampler for graphs
- T. Shi et al., Data Spectroscopy: Eigenspaces of Convolution Operators and Clustering
- E. Saad, Bridging the Gap between Reinforcement Learning and Knowledge Representation: A Logical Off- and On-Policy Framework
- S. Loos et al., Deep Network Guided Proof Search
- M. Richardson, P. Domingos, Markov Logic Networks
- O. Vinyals et al., Matching Networks for One Shot Learning
- R. Socher et al., Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank
- J. Pennington et al., GloVe: Global Vectors for Word Representation
- K. Cho et al., Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
- J. Weston et al., Memory Networks
- S. Zheng et al., Conditional Random Fields as Recurrent Neural Networks
- K. Xu et al., Show, Attend and Tell: Neural Image Caption Generation with Visual Attention
- A. Karpathy, L. Fei-Fei, Deep Visual-Semantic Alignments for Generating Image Descriptions
- M. Jaderberg et al., Spatial Transformer Networks
- K. He et al., Deep Residual Learning for Image Recognition
- D. P. Kingma, M. Welling, Auto-Encoding Variational Bayes
- K. Gregor et al., DRAW: A Recurrent Neural Network For Image Generation
- G. Hinton et al., Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups
- D. Silver et al., Mastering the game of Go with deep neural networks and tree search
- Y. Wu et al., Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
- A. Graves et al., Hybrid computing using a neural network with dynamic external memory
- M. Andrychowicz et al., Learning to learn by gradient descent by gradient descent
- S. Ioffe, C. Szegedy, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
- J. L. Ba et al., Layer Normalization
- Y. Gal, Z. Ghahramani, Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
- Y. A. LeCun et al., Efficient BackProp
- D. M. Blei et al., Latent Dirichlet Allocation
- J. Lafferty et al., Conditional random fields: Probabilistic models for segmenting and labeling sequence data
- K. Kawaguchi, Deep Learning without Poor Local Minima
- J. Wang et al., Parametric Local Metric Learning for Nearest Neighbor Classification
- J. Lee et al., LLORMA: Local Low-Rank Matrix Approximation
- T. S. Jaakkola, M. I. Jordan, Bayesian Parameter Estimation via Variational Methods
- S. Roweis, Z. Ghahramani, A Unifying Review of Linear Gaussian Models
- T. Kraska et al., The Case for Learned Index Structures
- D. George et al., A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs
- A. Esteva et al., Dermatologist-level classification of skin cancer with deep neural networks
- M. D. Tissera, M. D. McDonnell, Deep extreme learning machines: supervised autoencoding architecture for classification
- A. Faust et al., Automated aerial suspended cargo delivery through reinforcement learning
- V. Mnih et al., Playing Atari with Deep Reinforcement Learning
- Z. Wang et al., Dueling Network Architectures for Deep Reinforcement Learning
- H. van Hasselt et al., Deep Reinforcement Learning with Double Q-learning
- H. van Hasselt, Reinforcement Learning in Continuous State and Action Spaces
- T. Schaul et al., Prioritized Experience Replay
- W. Dabney et al., Distributional Reinforcement Learning with Quantile Regression
- M. Ghavamzadeh et al., Bayesian Reinforcement Learning: A Survey
- G. Ostrovski et al., Count-Based Exploration with Neural Density Models
- C. Finn et al., A Connection between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models
- J. Schulman et al., Trust Region Policy Optimization
- V. Mnih et al., Asynchronous methods for deep reinforcement learning
- S. Levine, V. Koltun, Guided policy search
- J. Schulman et al., Proximal policy optimization algorithms
- J. Schulman et al., High-dimensional continuous control using generalized advantage estimation
- S. Gu et al., Q-Prop: sample-efficient policy gradient with an off-policy critic
- O. Nachum et al., Bridging the gap between value and policy based reinforcement learning
- H. J. Kappen et al., Optimal control as a graphical model inference problem
- T. P. Lillicrap et al., Continuous Control with Deep Reinforcement Learning

Datasets:

- UCI Machine Learning Repository
- Kaggle
- MNIST
- CIFAR
- Chars74K
- Enron
- Google Books Ngrams
- Boston Housing
- YouTube-8M
