/assets/blog/2020-02-06-ai-trends-what-to-expect-in-2020/2020-02-06-AI-trends-what-to-expect-in-2020-8f9f5846b9.png
blog
AI trends: what to expect in 2020

Thu, Feb 6, 2020

Over the past decade, we have witnessed notable breakthroughs in Artificial Intelligence (AI), thanks in large part to the development of deep learning approaches. Healthcare, finance, human resources, retail: there is no field in which AI has not proven to be a game-changer.

Who would have thought just a few years ago that there would be autonomous vehicles on public roads, that large-scale facial recognition would no longer be science fiction, or that fake news could have such a social, economic, and political impact?

Some statistics related to AI are dizzying. According to Forbes, 75 countries are currently using AI technology for surveillance purposes via smart city platforms, facial recognition systems, and smart policing. In investment terms, the percentage of companies spending over $50 million on big data and AI rose from just 39.7% in 2018 to 64.8% in 2020. From a geopolitical point of view, China is preparing to take the lead by 2030: it graduates more engineers every year than any other country (two to three times more) and generates 10 times more data than the United States.

All the indicators reveal that we are well on our way to a total popularization of AI, with an unprecedented, profound, and cross-cutting impact on all areas of society. And although the field is vast and often unpredictable, some crucial issues are certain to be addressed this year.

Let's briefly review the best in AI of 2019, as well as some of the most important trends for 2020.

2019 AI outcomes

Last year was fruitful for AI and deep learning, especially in the Natural Language Processing (NLP) and Computer Vision (CV) fields.

Transformer-based language models such as BERT have had a significant impact on NLP, replacing RNNs in standard deep learning architectures. Perhaps the most impressive case is the GPT-2 model, trained by OpenAI researchers on a dataset of 8 million web pages. The model's objective is very simple: predict the next word in a piece of text given all of the preceding words. Despite the simplicity of this objective, the results are impressive.

Example of text automatically generated by GPT-2.
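
For the curious, here is a minimal sketch of that next-word objective in practice, using the open-source Hugging Face transformers library (our choice for illustration; not OpenAI's original code):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Over the past decade, artificial intelligence has"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# For every position, the model outputs a score over the whole vocabulary:
# its guess at the next word given all preceding words.
with torch.no_grad():
    logits = model(input_ids)[0]
next_id = int(logits[0, -1].argmax())
print("most likely next word:", tokenizer.decode([next_id]))

# Sampling from those guesses word after word yields whole passages.
generated = model.generate(input_ids, max_length=50, do_sample=True, top_k=40)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```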

Generative Adversarial Networks (GANs) continue to show their enormous potential. Probably the most shocking outcome, and the one that has created the most controversy, pertains to deepfakes. In May 2019, researchers from the Samsung AI Center Moscow published a scientific paper outlining how they had managed to create talking heads from a single standard photo of a person (or even from portrait paintings).

Example of a deepfake in which the Mona Lisa talks.
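
The mechanics behind such results are easiest to see on a toy problem. Below is a minimal, self-contained PyTorch sketch of the adversarial setup: a generator learns to imitate a simple 1-D Gaussian while a discriminator learns to tell its samples from real ones (purely illustrative; real deepfake pipelines are far more elaborate):

```python
import torch
import torch.nn as nn

# Toy 1-D "real data": samples from N(4, 1.25) the generator must imitate.
def real_batch(n):
    return torch.randn(n, 1) * 1.25 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to separate real from generated samples.
    fake = G(torch.randn(64, 8)).detach()
    real = real_batch(64)
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    loss_g = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

with torch.no_grad():
    print(G(torch.randn(1000, 8)).mean())  # should approach 4.0
```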

Regarding machine learning and deep learning frameworks, the release of TensorFlow 2.0, with its very interesting Keras integration, was a landmark for researchers, data scientists, and developers. That said, in academia, researchers seem to be quickly abandoning TensorFlow in favor of PyTorch. The same does not apply to industry, where TensorFlow remains the preferred platform.
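
As a taste of that integration, Keras is now the high-level API bundled inside TensorFlow itself; a minimal sketch of defining and compiling a model through tf.keras:

```python
import tensorflow as tf

# Keras layers and training utilities ship with TensorFlow 2.0 itself.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```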

Facebook's release of PyTorch 1.3, with quantization and Google Cloud TPU support, was also a high point of the year. The team is focusing on meeting the needs of researchers without abandoning the high demands of production use.
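
Quantization is one of those production-oriented features: it shrinks a trained model by storing weights as 8-bit integers. A minimal sketch of the post-training dynamic quantization API that shipped around this release (toy model, illustrative only):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Post-training dynamic quantization: weights are stored as int8, and
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized)
```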

In a nutshell, 2019 was a key year for AI in which technology disseminators, decision-makers, and the general public became more aware of the enormous opportunities and potential dangers involving AI technology.

Let's look at the impact this will have on AI trends in 2020. 🔍

AI Explainability

Having explainable AI systems is crucial to the future of artificial intelligence. It is important to be able to evaluate, from a human perspective, whether the reasoning behind a result is valid. Such explainable AI will build trust among users.

Journalists, politicians, and decision-makers have increasingly emphasized the need to "explain AI's algorithms." The right to explanation is the right of an individual to receive the reasoning behind the output of an algorithm. Why was my bank loan application rejected? Why did the self-driving car turn instead of stopping? Why, as a doctor, should I recommend this treatment to my patient? "Because the system says so" is a response that is hardly acceptable to those impacted by the decision, is illegal in many cases, and fosters general distrust of AI-based systems.

Our minds are not perfect. Optical illusions, for example, are an unavoidable phenomenon, no matter how hard we try to resist them. As humans, however, we can explain what we see, how we interpret it, and where we go wrong: we know water cannot magically flow uphill.

Waterfall, by M. C. Escher.

AI systems are also imperfect. In some cases, they make significant and unusual mistakes, like confusing a turtle with a rifle. Adversarial images (i.e., pictures manipulated to deliberately deceive computer vision systems) are the machine equivalent of optical illusions.
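
To make the idea concrete, here is a sketch of one well-known way such images are crafted: the Fast Gradient Sign Method (FGSM). The article does not name a specific technique, so take this as a representative example rather than the method behind the turtle result:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Nudge every pixel slightly in the direction that most
    increases the classifier's loss on the true label."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The perturbation is imperceptible to humans but can flip the prediction.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Usage (assuming `model` is any image classifier returning logits):
# adversarial = fgsm_attack(model, image_batch, true_labels)
```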

Unfortunately, despite the incredible advances in deep neural networks, explaining why a deep learning model returns a particular result for a given input remains a challenge. Neural networks function opaquely and are unable to explain their output, a problem known as the "AI black box."

Big companies are already working on explainable AI. Google, for example, has developed a set of tools to help detect and mitigate bias (among other problems) and investigate how models behave. However, these tools are currently limited to certain machine learning models, and their results are intended for data scientists. Other big players, such as Microsoft and IBM, are also pursuing explainable AI.
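
To give a flavor of what such tooling does, here is one simple, model-agnostic explainability technique, permutation importance, sketched with scikit-learn (our illustrative choice; not Google's toolset):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle one feature at a time and measure the drop in accuracy:
# the features whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```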

Explainable AI will be one of the main themes in 2020, creating many business opportunities. Examples such as Fiddler Labs ($10.2 million in Series A funding in 2019) or Kyndi ($20 million in Series B funding in 2019) are a testament to the growing interest in explainable AI.

Predictive analytics

In the world of analytics, predictive analytics will be one of 2020’s hottest topics.

The use of machine learning, particularly deep learning and NLP, to retrieve and process data has had a massive impact on systems that detect trends and predict future events based on existing data. This practice, known as augmented analytics, has become an essential activity for many organizations in recent years, with applications in areas as diverse as supply chain optimization, recruitment, energy consumption estimation, price optimization, and customer service.
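
At its core, predictive analytics means turning historical data into a forecast. The sketch below shows the standard recipe on hypothetical monthly demand figures: reframe a time series as a supervised learning problem and fit a model to it:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical monthly demand series (units sold).
demand = np.array([120, 132, 129, 143, 151, 148, 160, 172, 169, 181, 190, 195])

# Supervised framing: predict month t from the three preceding months.
window = 3
X = np.array([demand[i:i + window] for i in range(len(demand) - window)])
y = demand[window:]

model = LinearRegression().fit(X, y)
next_month = model.predict(demand[-window:].reshape(1, -1))
print(f"forecast for next month: {next_month[0]:.0f} units")
```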

Predictive analytics revenues / market size worldwide, 2016 to 2022 (in billions of U.S. dollars).

The predictive analytics market has recorded 21% compound annual growth since 2016 and is expected to reach $11 billion by 2022.
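
As a quick sanity check on those figures (our own back-of-the-envelope arithmetic, not from the cited source), a market reaching $11 billion in 2022 after six years of 21% compound growth implies a 2016 base of roughly $3.5 billion:

```python
# 2016 -> 2022 is six years of compounding at 21% per year.
base_2016 = 11e9 / 1.21 ** 6
print(f"implied 2016 market size: ${base_2016 / 1e9:.1f}B")  # ~ $3.5B
```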

Special attention should be paid to the retail market, which is probably the sector with the most significant use of predictive analytics. Surveillance applications will continue to make strides, but it is in the analysis of customer behavior that we will see the most considerable progress. Finally, although it has not taken off as expected, predictive maintenance remains a promising branch that could make unexpected headway this year.

Highly sophisticated fraud detection

All technology can be used for fraudulent purposes. We have seen how fake news can affect us. What would happen if fake news could be generated automatically, with human-like fluency, as demonstrated by OpenAI's GPT-2 model?

As for fabricated videos, how big of an impact will deepfakes have on society, and how can we combat them? Presidential elections take place in the USA next November, and it is certain that this issue, which endangers democratic processes in particular, will get a lot of attention.

Martin Scorsese's latest film, The Irishman, gives us an idea of the gravity of the situation. Netflix spent millions of dollars to digitally de-age the characters played by Robert De Niro, Al Pacino, and Joe Pesci. The results are proportional to the money invested: actors in their seventies seem to be thirty years old in certain parts of the film. However, a YouTuber called iFake claims to have achieved better results in just seven days, using free deepfake software.

The Irishman De-Aging: Netflix Millions VS. Free Software! (de-aging comparison video)

At least two approaches to countering the problem are emerging and will continue to expand in 2020.

On the one hand, some researchers have decided not to share their results, thus preventing their methodologies from being used for fraudulent purposes. This was the case with OpenAI's GPT-2, for example: the complete model was not open-sourced at first (the full version was finally released in November 2019). While this may help in the short term, the cat is already out of the bag in certain cases, and withholding results runs against the scientific community's culture of openness.

Alternatively, another strategy is to face the problem head-on by devising legal and technological countermeasures. For example, as of 2020, China treats deepfakes published without a clear disclosure of their inauthenticity as a crime. In addition to the obvious requirement that videos be signed, there are techniques that attempt to detect fake videos; this will probably be the prevailing approach in the future. Initiatives such as the Deepfake Detection Challenge (DFDC) will get a lot of attention and will most likely be replicated in the coming years.

Less data, accelerated training, better results

While systems based on deep learning can produce amazing results, large volumes of data are generally required to train such models well. The availability of large datasets such as ImageNet or Amazon Reviews, and the development in recent years of startups dedicated to collecting and annotating quality data, have filled a great void. Despite this, the lack of large, high-quality datasets remains an issue in many cases.

Transfer learning approaches possess immense potential and will continue to improve in 2020. The key idea is to adapt a model trained on huge amounts of data to a particular domain in which relatively little data is available. In CV, for example, it is possible to create an object detection system with only a few images, thanks to pre-trained models. In NLP, Transformer-based language models such as BERT or GPT-2 will be confirmed as the new standard. It is even possible that the quality of these new models could help fulfill the initial promise of more personable chatbots.
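
In computer vision, the recipe typically looks like the sketch below: reuse a network pre-trained on ImageNet and retrain only a small classification head on the new task (the class count and freezing strategy here are illustrative choices):

```python
import torch.nn as nn
import torchvision.models as models

# Start from a ResNet-18 pre-trained on ImageNet.
model = models.resnet18(pretrained=True)

# Freeze the pre-trained feature extractor...
for param in model.parameters():
    param.requires_grad = False

# ...and replace the classification head for our small, domain-specific dataset.
num_classes = 5  # hypothetical: e.g., five product categories
model.fc = nn.Linear(model.fc.in_features, num_classes)
# Only model.fc's parameters are now trainable.
```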

Although not a novel idea, reinforcement learning will be a central issue throughout the year. The objective is to enable a system to learn from its own mistakes. While much research effort has been committed, the level of standardization needed for large-scale adoption has not been reached. 2020 may be the year in which reinforcement learning is finally implemented in industry, for example in intelligent autonomous robots.
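
The "learning from its own mistakes" idea is easiest to see in tabular Q-learning, the textbook entry point to reinforcement learning. A self-contained toy example (our illustration, unrelated to any particular industrial deployment):

```python
import numpy as np

# A tiny corridor MDP: states 0..4; reaching state 4 yields reward 1.
# Actions: 0 = left, 1 = right. The agent learns purely by trial and error.
n_states, n_actions = 5, 2
Q = np.ones((n_states, n_actions))  # optimistic init encourages exploration
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    for _ in range(100):  # safety cap on episode length
        # Epsilon-greedy: mostly exploit the current estimates, sometimes explore.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s_next == 4 else 0.0
        # Q-learning update: correct the estimate using the observed outcome.
        target = r if s_next == 4 else r + gamma * Q[s_next].max()
        Q[s, a] += alpha * (target - Q[s, a])
        if s_next == 4:
            break
        s = s_next

print(Q)  # moving right should dominate in every state
```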

Lastly, data augmentation and synthetic data generation techniques are gaining noticeable traction as they facilitate scaling approaches based on deep learning in environments where data is scarce. The Generative Teaching Networks presented by Uber AI Labs last December represent a good example of what can be achieved. Their approach consists of a learning algorithm that automatically generates training data, learning environments, and curricula to help AI agents rapidly learn. The point is to train a data-generating network that will produce data that another neural network will use to train itself for a specific target task. This is similar to the GANs strategy, although instead of competing, the two networks collaborate with each other.
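
Classic data augmentation is the simplest member of this family of ideas: transform each training image on the fly so the model never sees exactly the same example twice. A sketch using torchvision transforms (unlike Uber's Generative Teaching Networks, nothing here is learned):

```python
import torchvision.transforms as T

# Each pass over the dataset sees a slightly different version of every image,
# effectively multiplying the training data at no labeling cost.
augment = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomRotation(degrees=10),
    T.ColorJitter(brightness=0.2, contrast=0.2),
    T.ToTensor(),
])
# Typically passed as the `transform` argument of a torchvision dataset.
```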

Popularization of facial recognition

Facial recognition is one of the fastest-growing technologies, and it will gain tremendous ground in 2020. Its application in the public sphere, mainly by state organizations for security and law enforcement purposes, is only the tip of the iceberg.

In healthcare, for example, facial analysis can be harnessed to detect genetic diseases or track a patient's use of medication. In retail, it can be used to identify VIP customers or catch known shoplifters as soon as they enter a store. Japan's Seven Bank is experimenting with facial recognition in ATMs to confirm that the owner of the card is the person using it at that moment. Delta Air Lines is currently offering optional facial recognition boarding at Atlanta International Airport and has confirmed that it will expand the practice to additional airports.
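
To illustrate the verification use case, here is a sketch using the open-source face_recognition library (an illustrative stand-in; we do not know which systems Seven Bank or Delta actually use, and the file paths are hypothetical):

```python
import face_recognition

# Encode the face on file for the account holder (assumes one face per image)...
enrolled = face_recognition.load_image_file("enrolled_photo.jpg")  # hypothetical path
enrolled_enc = face_recognition.face_encodings(enrolled)[0]

# ...and the face captured by the camera at the ATM or boarding gate.
live = face_recognition.load_image_file("camera_capture.jpg")  # hypothetical path
live_enc = face_recognition.face_encodings(live)[0]

# compare_faces thresholds the distance between the two 128-d embeddings.
match = face_recognition.compare_faces([enrolled_enc], live_enc, tolerance=0.5)[0]
print("identity confirmed" if match else "identity rejected")
```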

In 2020, we will engage in a particularly intense debate over security versus privacy, since facial recognition will be implemented for the first time at important events. For example, the 2020 Tokyo Olympics will be the first to utilize a large-scale facial recognition system to verify the identity of athletes, officials, and media representatives.

We will also witness the emergence of hundreds of applications that rely on facial recognition to allow for biometric identification and financial transactions under various circumstances involving banks, public institutions, shops, and so on.

Facial recognition also provokes reluctance and concern among many. While various European governments are planning to employ these technologies in the near future, the EU could temporarily ban their use in public places (for three to five years).

One of the main drawbacks is false positives (e.g., an innocent individual being mistaken for a terrorist). Many efforts will surely be made in 2020 to improve the accuracy of facial recognition technology.
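
A back-of-the-envelope calculation (hypothetical numbers, our own) shows why false positives are such a serious drawback at scale:

```python
# Base-rate sketch: even a very accurate matcher produces many false alarms
# when scanning large crowds.
crowd = 1_000_000            # faces scanned
watchlist_hits = 10          # genuine matches actually present
false_positive_rate = 0.001  # 0.1% of non-matches flagged incorrectly

false_alarms = (crowd - watchlist_hits) * false_positive_rate
print(false_alarms)  # ~1,000 innocent people flagged for every ~10 real matches
```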

Final thoughts

Artificial intelligence will continue to contribute to, and have a consequential effect on, all aspects of society. Despite impressive advances in recent years, we still have a long way to go. While approaches based on deep neural networks remain the state of the art, their results still largely defy explanation.

Biases that perpetuate discrimination have also surfaced, especially when it comes to gender and ethnicity; they must be fixed. The proliferation of deepfakes and the inappropriate use of facial recognition are sources of legitimate concern that must be addressed from both legal and technological perspectives.

We think that in 2020, responses to these issues will be central to AI.

At the beginning of the 19th century, the Luddites, a secret organization of English textile workers, destroyed textile machinery as a form of protest. The machines were seen as being used in a "fraudulent and deceitful manner" to skirt standard labor practices, and it was feared they would replace human workers. Time has shown that such a strategy of combat and denial is not sustainable.

We prefer a user-centric philosophy, one that considers the many ways in which AI can benefit society and lets us explore new, responsible AI practices.

Are we missing anything? Share your prediction in our comments section; we’ll review it and unveil our own prophecies.

Wondering how AI can help you?