Image “Cloaking” for Personal Privacy


2020 is a watershed year for machine learning. It has seen the true arrival of commoditized machine learning, where deep learning models and algorithms are readily available to Internet users. GPUs are cheaper and more plentiful than ever, and new training methods like transfer learning make it possible to train powerful deep learning models on much smaller datasets.
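To make the transfer-learning point concrete, here is a minimal toy sketch of the idea: a fixed ("pretrained") feature extractor is reused unchanged, and only a small linear head is trained on a handful of labeled examples. Everything in this snippet (the random "frozen" weights, the tiny dataset, the learning rate) is hypothetical and chosen for illustration, not taken from any real model.

```python
import numpy as np

# Toy illustration of transfer learning: reuse frozen features, train only
# a small classifier head on a small dataset. All values are hypothetical.
rng = np.random.default_rng(0)

# Frozen feature extractor: stands in for a pretrained network's layers.
W_frozen = rng.normal(size=(16, 4))

def extract_features(x):
    # These weights are never updated during fine-tuning.
    return np.tanh(x @ W_frozen)

# Tiny labeled dataset -- the point is needing far less data than
# training a full network from scratch.
X = rng.normal(size=(20, 16))
y = (X[:, 0] > 0).astype(float)

# Trainable head: a single logistic-regression layer.
w = np.zeros(4)
b = 0.0

for _ in range(500):  # gradient descent on the head only
    feats = extract_features(X)
    p = 1 / (1 + np.exp(-(feats @ w + b)))
    grad = p - y
    w -= 0.1 * feats.T @ grad / len(X)
    b -= 0.1 * grad.mean()

acc = ((p > 0.5) == y).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

Because only the 5 head parameters are updated, the optimization needs very little data; real transfer learning applies the same idea with a large pretrained network in place of the random projection here.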

But accessible machine learning has its downsides as well. A recent New York Times article by Kashmir Hill profiled an unregulated facial recognition service that has downloaded over 3 billion photos of people from the Internet and social media, using them to build facial recognition models for millions of citizens without their knowledge or permission. The service demonstrates just how easy it is to build invasive tools for monitoring and tracking using deep learning.

So how do we protect ourselves against unauthorized third parties who build facial recognition models to recognize us wherever we go? Regulations can and will help restrict the use of machine learning by public companies, but they will have negligible impact on private organizations, individuals, or even other nation-states with similar goals.