George Cevora PhD

Recent research projects

Instability in Deep Learning is a problem attracting a lot of attention within the AI community right now. Oliver Turnbull and I are trying to figure out why the instability happens and how it can be prevented.

Using the example of workforce reskilling, Dr Evan Hurwitz and I demonstrated how even very small datasets can be used to improve decision-making and policy.

Following an unusual request from a team of rowers preparing for a transatlantic race, Dr Mate Hartstein and I developed a general method for routing rowing boats across oceans, aiming to find the optimal compromise between the shortest path and the most favourable winds.
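The core trade-off can be sketched as a graph search whose edge costs blend distance with wind favourability. This is a minimal illustration of the idea only, not the actual method from the project: the grid, the `plan_route` name, the wind-favourability scores, and the `alpha` trade-off parameter are all my own assumptions.

```python
# A minimal sketch (my illustration, not the project's actual method):
# plan a route over a coarse grid of waypoints with Dijkstra's algorithm,
# where each step's cost blends distance with how favourable the wind is.
import heapq

def plan_route(wind, alpha=0.5):
    """Dijkstra over a grid. wind[r][c] in [0, 1] is a (hypothetical)
    wind-favourability score, 1 = perfect tailwind. alpha trades distance
    against wind: alpha = 0 gives the pure shortest path."""
    rows, cols = len(wind), len(wind[0])
    start, goal = (0, 0), (rows - 1, cols - 1)
    best = {start: 0.0}
    prev = {}
    todo = [(0.0, start)]
    while todo:
        cost, node = heapq.heappop(todo)
        if node == goal:
            break
        if cost > best[node]:
            continue  # stale queue entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # each step costs 1 unit of distance, discounted by wind
                step = 1.0 - alpha * wind[nr][nc]
                ncost = cost + step
                if ncost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ncost
                    prev[(nr, nc)] = node
                    heapq.heappush(todo, (ncost, (nr, nc)))
    # walk the predecessor links back from the goal
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

wind = [[0.0, 0.0, 0.0],
        [0.9, 0.9, 0.9],
        [0.0, 0.0, 0.0]]
route = plan_route(wind, alpha=0.8)  # hugs the windy middle row
```

With `alpha = 0` the route is the plain shortest path; raising `alpha` lets the planner accept longer routes through more favourable winds, which is the compromise described above.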

AI/ML Education

"Error" is one of the fundamental concepts in Machine Learning, yet most people fail to understand it, read about some of the complexities:

How to slash the error of Covid-19 tests with a sharpie
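As a toy illustration of one such complexity (with hypothetical numbers, not those from the article): a test's headline accuracy says surprisingly little on its own, because the error a user actually experiences depends on the base rate of the condition.

```python
# Toy illustration (hypothetical numbers): why a "99%-accurate" test can
# still be wrong half the time for a rare condition -- the error you
# experience depends on the base rate, not just on the test itself.
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(condition | positive test), via Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# 1% prevalence, 99% sensitivity and 99% specificity:
ppv = positive_predictive_value(0.01, 0.99, 0.99)
# only about half of the positive results are genuine
```

The same test applied in a high-prevalence setting gives a far more trustworthy positive result, which is why "the error of a test" is a subtler quantity than a single percentage suggests.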

Algorithms are fair and exact, so how does discrimination happen? Read about proxy variables here:

How Discrimination occurs in Data Analytics and Machine Learning: Proxy Variables
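A tiny synthetic demonstration of the proxy-variable effect (my own toy example, not taken from the article): even when the protected attribute is withheld entirely, a decision rule built on a correlated variable reproduces the disparity.

```python
# Toy demo on synthetic data: removing a protected attribute does not
# remove discrimination when a correlated proxy variable remains.
import random

random.seed(0)
rows = []
for _ in range(10_000):
    group = random.random() < 0.5   # protected attribute (never shown to the rule)
    # "postcode" is a proxy: it matches the protected attribute 90% of the time
    postcode = group if random.random() < 0.9 else not group
    rows.append((group, postcode))

def approve(postcode):
    # a "blind" decision rule that only ever sees the proxy
    return postcode

rate_a = sum(approve(p) for g, p in rows if g) / sum(1 for g, p in rows if g)
rate_b = sum(approve(p) for g, p in rows if not g) / sum(1 for g, p in rows if not g)
# approval rates end up near 0.9 vs 0.1: a large disparity between the
# two groups, despite `group` never being used in the decision
```

This is the mechanism in miniature: fairness cannot be achieved simply by deleting the sensitive column, because its statistical footprint survives in the proxies.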

Check out my criticism of the hype around the biological inspiration behind AI:

The relationship between Biological and Artificial Intelligence

Watch my talk about the need for Explainability in AI at the ODSC conference 2018.

Fair ML

I'm proud to introduce Rosa, a system that fights discrimination in any Data Analytics pipeline. A limited version of Rosa is currently available online, free to all.

If you are interested in how Rosa works under the hood, you may wish to read my paper explaining the underlying methodology, Fair Adversarial Networks, which I have developed over the last two years.

If you want to know how Rosa can help you fight discrimination in your Data Science project, see my paper providing practical examples of using Rosa to fight discrimination.

Please watch my talk about Rosa at the King's College London Data Science Society, which summarises the current issues with discrimination in Data Analytics and Machine Learning, introduces the methodology Rosa is based on, and demonstrates Rosa's performance in real-world scenarios.


Check out my PhD thesis about the role of prediction error in learning associations:

The role of Prediction Error in Probabilistic Associative Learning

Read my argument for why the current evidence suggesting that people learn by correcting their errors may in fact be inconclusive:

Reconsidering the Imaging Evidence Used to Implicate Prediction Error as the Driving Force behind Learning