
Deep Learning: How to Remain Human in a Data-Driven Future

20.6.2023
Photo: Images created by a generative artificial intelligence (StableDiffusion)

Today, you may have ridden in a self-driving car, applied a new filter to an Instagram story, or enjoyed a series on your smart TV with very good picture quality. What you may not have realized is that you were using, and even contributing to the training of Deep Learning models.

 

[Pablo Iáñez Picazo (Granada, 1995) is a PhD student in Biomedicine at ISGlobal with a fellowship from the 'la Caixa' Foundation. He is part of ISGlobal's Biomedical Data Science group, led by Paula Petrone. He holds a degree in Biochemistry from the University of Granada and a Master's in Bioinformatics from the University of Copenhagen. He is particularly interested in applying machine learning and Deep Learning algorithms to large volumes of biological data, paving the way for precision medicine].

 

Many of us have read George Orwell's 1984, in which an omnipresent entity known as Big Brother constantly monitors the citizens of a dystopian Oceania. The Black Mirror episode "Crocodile" (S4E3) introduces a device that can extract data from visual memories as if they were video. The episode explores video surveillance and privacy in a world where every eye can be a recording device and any past event involving more than one person can be reconstructed with photographic resolution. Although this scenario is still science fiction, our current technology is not far from enabling a world in which reality is constantly recorded and analyzed. This is what Eric Sadin, in his book "Artificial Intelligence or the Challenge of the Century", calls the "automated invisible hand" of a data-driven society. Deep Learning models have infiltrated such routine tasks that we are unaware of their presence. You may have ridden in a self-driving car today, applied a new filter to an Instagram story, or enjoyed a series on your smart TV with remarkably good picture quality. What you may not have realized is that you were using, and even contributing to the training of, Deep Learning models.

Deep learning models have infiltrated such routine tasks that we are unaware of their presence

Deep Learning models are not anthropomorphic

Deep Learning models can be schematised as complex architectures of multiple ordered and connected matrices, through which information flows from the input data to the output of the algorithm. Their original architecture mimics the connections between neurons in the brain, which led to the term "neural networks". The "deep" in Deep Learning refers to the number of hidden layers between the input and output layers: the further along a layer sits in this hierarchy, the more complex the features it learns from the input data. Although the first neural networks were developed in a primitive form by Frank Rosenblatt in 1957, it is only in recent years that we have had the computing power and sufficient amounts of data to run these algorithms. When we say that these models "learn", what they are really doing is adjusting the values within these matrices each time they are presented with a new piece of data, in order to improve their performance on a very specific task. Learning is usually "supervised", meaning that the task and the machine learning ecosystem have been carefully designed by a human. The development of an Artificial General Intelligence, or AGI for short, is still a long way off.
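The "adjusting the values within these matrices" described above can be sketched in a few lines of code. The toy example below is purely illustrative and not taken from the article: it uses NumPy to stack two weight matrices between input and output (a tiny network with one hidden layer, no bias terms) and nudges their values by gradient descent on a small supervised task.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, W2):
    # Information flows from the input through each matrix in order
    h = sigmoid(X @ W1)           # hidden layer
    return h, sigmoid(h @ W2)     # output layer

# Toy supervised task: four labelled examples (the XOR function)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The "network" is just two connected matrices of adjustable values
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

_, out = forward(X, W1, W2)
loss_start = float(np.mean((out - y) ** 2))

lr = 1.0
for _ in range(5000):
    h, out = forward(X, W1, W2)
    # "Learning": nudge every value in both matrices to reduce the error
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    W1 -= lr * X.T @ grad_h

_, out = forward(X, W1, W2)
loss_end = float(np.mean((out - y) ** 2))
print(loss_start, "->", loss_end)  # error before vs. after training
```

Real Deep Learning frameworks automate exactly this loop, just with many more layers, millions of adjustable values and far larger datasets.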

 

Images created by a generative artificial intelligence (StableDiffusion).

The black box, explained

These Artificial Intelligence (AI) models are shrouded in a dense fog that obscures our understanding. Interestingly, we are at a point in history where we have developed an extremely efficient technology but have not yet had time to fully understand why it is so efficient. That is why these models are regarded as black boxes: we know the data that goes in and the predictions that come out, but we do not really grasp the mechanisms that govern the rules of their learning. And, in an anticlimactic manner, when we zoom in on these architectures, all we find are millions and millions of values grouped in a quirky series of matrices. These models could make crucial decisions for citizens, such as who gets a bank loan or who is diagnosed with lung cancer. It is therefore important to understand why these algorithmic decisions are made, opening the door to the field of Explainable Artificial Intelligence (XAI).
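One simple flavour of XAI can be sketched in code: occlusion sensitivity, where we blank out one input feature at a time and watch how much the black box's prediction moves. The snippet below is a hypothetical illustration, with random matrices standing in for a trained network; it is not from the article and only shows the idea.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A "black box": two matrices of opaque values standing in for a trained network
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

def model(x):
    return sigmoid(sigmoid(x @ W1) @ W2)

x = rng.normal(size=(1, 4))   # one input with 4 features
base = model(x)

# Occlusion: zero out one feature at a time and measure the prediction shift
importance = []
for i in range(4):
    x_perturbed = x.copy()
    x_perturbed[0, i] = 0.0
    importance.append(float(abs(model(x_perturbed) - base)))

print(importance)  # larger value = feature the black box relied on more
```

Features whose removal shifts the output the most are the ones the model leaned on, which is one crude but intuitive way to peer inside the box.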

These models could make crucial decisions for citizens, such as who gets a bank loan or who is diagnosed with lung cancer. It is therefore important to understand why these algorithmic decisions are made

Our future as an AI-augmented society

Deep Learning models are having a disruptive impact on our societies. One of the first success stories came in 2016, when the AlphaGo model developed by DeepMind, Google's AI research unit, beat the world Go champion, Lee Sedol, in a five-game match. Later, in 2021, DeepMind launched AlphaFold, a model capable of predicting the three-dimensional folding of proteins from their amino acid sequence with unprecedented accuracy. The model expanded protein structure databases to cover almost all known proteins, leaving many labs that had been working on the problem for decades without much to do. More recently, the introduction last year of Large Language Models (LLMs) such as ChatGPT has shown that they can rival humans at writing text, summarizing content and even producing code in different programming languages.

While it is still difficult to predict the full extent of the impact of these models, they will certainly change the way we learn, work and live. It is predicted that 38% of current jobs in the US could be automated by AI by 2030

While it is still difficult to predict the full extent of the impact of these models, they will certainly change the way we learn, work and live. It is predicted that 38% of current jobs in the US could be automated by AI by 2030. In fact, many people fear being professionally replaced by tools like ChatGPT. The truth is that we will have to learn to live with these models as personalized assistants that make suggestions and help us take decisions. Ideally, they will help us complete our professional tasks faster, producing the same results in significantly less time. Within a good political framework, we could start to see a reduction in our working hours, leaving us more time for our well-being and leisure. But we need sound policies if we are to adapt gradually to this change.

 

Images created by a generative artificial intelligence (StableDiffusion).

Another artist in the room

Deep Learning models are also revolutionizing the contemporary art scene with the advent of generative artificial intelligence. These models are capable of generating new data that has never been observed before: ChatGPT is a generative text model, while DALL-E is a generative image model. Over time, it will become easier and cheaper to generate data with AI, leading to a revalorization of human artistic creations. We will place a much higher value on a picture painted by a human hand than on one created in seconds by a generative network. However, it will be increasingly difficult to tell which image was human-made and which was machine-made, blurring the sense of what is real and what is not. In fact, the advent of generative adversarial networks, which popularised generative models, has filled social networks with fake accounts and images created for illicit purposes, known as deepfakes. Recently, Boris Eldagsen won a prestigious prize in the creative category of the Sony World Photography Awards 2023. However, he declined the award after revealing that the winning image was, in fact, a synthetic creation made by an AI. With this in mind, Google suggests that images should always be tagged with metadata so that it can be verified whether they were created by a human or a machine.

Over time, it will become easier and cheaper to generate data with AI, leading to a revalorization of human artistic creations. We will place a much higher value on a picture painted by a human hand than on one created in seconds by a generative network

Data Wars

Many of the apps we use today track our every digital gesture and movement. This personal information is stored by macro-corporations on their digital platforms, which exploit it to build ever more efficient and sophisticated deep learning systems. Such huge quantities of data can only be processed by models with millions of parameters, far too large to train on personal computers. For this reason, the most advanced systems rely on macro-clusters of GPUs and TPUs to train these models. The companies with the most computing power will be able to develop the best models, increasing their competitiveness and profits. Data has become the oil of the 21st century. Appropriate legal frameworks need to be put in place to regulate the use of data, such as the General Data Protection Regulation (GDPR). These measures should curb the power of macro-corporations. For example, Meta was recently fined €1.2 billion by the European Union for transferring European users' personal data to the United States in breach of the GDPR.

Many of the apps we use today track our every digital gesture and movement. This personal information is stored by macro-corporations on their digital platforms, which exploit it to create more efficient and sophisticated deep learning systems

In conclusion

Deep Learning models are abruptly changing every aspect of our society: political, economic, artistic and cultural. We must be prepared to accept and integrate them into our daily lives and, as with any other technology, learn to use them in an ethical and responsible manner. Governments must act quickly to regulate the moral aspects of the application of these models, because otherwise they could deepen social inequalities and have a profound existential impact on our species. The very code of these models can carry our own mistakes and beliefs, magnifying the biases of their creators: we humans, imperfect machines. But even this task may soon no longer be ours.

*This text has been translated using a Deep Learning model and then supervised by the author.