The Applied Intelligence Research Centre has a diverse team.
Ph.D. student in Explainable Artificial Intelligence (XAI), sponsored by Science Foundation Ireland through the SFI Centre for Research Training in Machine Learning (18/CRT/6183). Recurrent Neural Network (RNN) variants have proved to be quite proficient at dealing with time-series data. However, if there is an anomaly or bias in the data, the predictions made may inherit that flaw. It is therefore important to gain insight into how an RNN functions and to analyse why it makes the predictions it does, so that those predictions can be justified. For example, if an RNN trained on historical data predicts that the next pandemic will occur in 2025, epidemiologists need to be confident not only in the efficacy of the model but also in the reasoning behind the prediction. I am working to extract concise, correct rules from RNNs trained on complex time-series data, to represent those rules as Finite State Automata, and to analyse how human-understandable the resulting representations are.
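As a rough illustration of what rule extraction can look like, the sketch below follows one common family of approaches: run the trained RNN over its inputs, cluster the hidden states into a small number of discrete states, and tally the observed (state, input symbol) transitions into an automaton. Everything here is an illustrative assumption, not the project's actual method: the toy tanh RNN, the state count, and the `extract_fsa` helper are all made up for demonstration.

```python
import numpy as np

def extract_fsa(rnn_step, h0, sequences, n_states=4, seed=0):
    """Sketch of clustering-based FSA extraction from an RNN.

    Runs the RNN over the sequences, collects hidden states,
    clusters them with a few iterations of k-means, then tallies
    (cluster, input symbol) -> cluster transitions into an automaton.
    """
    rng = np.random.default_rng(seed)
    # 1. Collect hidden-state trajectories (h_t, x_t, h_{t+1}).
    traces, states = [], [h0]
    for seq in sequences:
        h = h0
        for x in seq:
            h_next = rnn_step(h, x)
            traces.append((h, x, h_next))
            states.append(h_next)
            h = h_next
    S = np.array(states)
    # 2. Simple k-means (Lloyd iterations) over all hidden states.
    centers = S[rng.choice(len(S), n_states, replace=False)]
    for _ in range(10):
        labels = np.argmin(((S[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_states):
            if (labels == k).any():
                centers[k] = S[labels == k].mean(axis=0)
    def cluster(h):
        return int(np.argmin(((centers - h) ** 2).sum(-1)))
    # 3. Tally observed transitions, then keep the majority next state
    #    per (state, symbol) to get a deterministic automaton.
    counts = {}
    for h, x, h_next in traces:
        key = (cluster(h), x)
        counts.setdefault(key, {}).setdefault(cluster(h_next), 0)
        counts[key][cluster(h_next)] += 1
    return {k: max(v, key=v.get) for k, v in counts.items()}

# Toy RNN: a tanh cell with fixed random weights over a binary alphabet.
rng = np.random.default_rng(1)
W = rng.normal(size=(8, 8)) * 0.5
U = rng.normal(size=(8, 2)) * 0.5
def step(h, x):
    return np.tanh(W @ h + U @ np.eye(2)[x])

seqs = [rng.integers(0, 2, size=12).tolist() for _ in range(20)]
fsa = extract_fsa(step, np.zeros(8), seqs, n_states=3)
# fsa maps (abstract state, input symbol) -> next abstract state.
```

The interesting research questions start where this sketch ends: whether the extracted automaton is faithful to the RNN, how concise it can be kept, and whether domain experts actually find it understandable.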