What will the future of humanity look like amid all the new technologies entering our lives? We spoke to Alexander Popov, part of the team behind ShadowDance magazine, and futurologist Mariana Todorova about “superintelligence” and our possible futures.
"William Gibson wrote in one of his stories that 'the street finds its own uses for things,' and this is still true – it’s up to us how technologies will develop and what they will be used for," says Alexander Popov. He is one of the founders of the magazine for fiction, culture and futurology ShadowDance, he does research in the field of computer linguistics and artificial intelligence, and teaches science fiction in the English Department at Sofia University "St. Kliment Ohridski." We met with him to discuss the topic of the future of humanity and computer technology. "The bottom line is that this is largely a political issue that will be decided by a clash of different interests. The sooner we realize this, the better our chance is to have technology working for us and not the other way around," Alexander continues. According to him, on a more philosophical level, the big question is whether we will find a way to use technology to enhance our creative potential, or whether we will ourselves become peripheral devices.
What happens when technology takes over more and more human activities? "The process of automation is not at all new, and it does not necessarily follow that humans will become redundant – but it may follow that human work will change in ways that are difficult to predict," says Popov. "Nearly a century ago, the economist John Maynard Keynes, known as the architect of the post-war recovery of the Western world, predicted that in the future people would have ever shorter work days and would face the challenge of finding ways to realize their own potential with so much free time. In reality, we work a lot more today, although automation should have taken a huge part of this burden off us. The reason is not really technological but economic: we live in a system that requires constant growth. It uses technology, including artificial intelligence, to optimize and speed up human labor even more.”
However, everything could change with the emergence of a so-called "strong" artificial intelligence, one able to mimic all human activities, including emotions and authentic creativity. In theory, such an artificial intelligence could begin to improve itself until it reaches a state that Swedish philosopher Nick Bostrom calls "superintelligence.” "It would be beyond our capabilities to understand how and why such a superintelligence operates, just as the extraterrestrial Zone in "Roadside Picnic" by the Strugatsky brothers is completely incomprehensible to humans, and the ocean in "Solaris" by Stanisław Lem is impervious to science," Popov explains. In this scenario, man could become redundant. "But for now, we can still count on our own talents and try to develop this technology so that it allows us to be more and more human, instead of the other way around," he reassures us.
In fact, the "dumber" kinds of artificial intelligence also pose significant dangers. Bostrom proposes thought experiments along the following lines – an artificial intelligence tasked with eradicating cancer could at some stage try to kill all cancer patients. "In order to produce truly intelligent machines, we must know how to instill in them values compatible with our human values. This would practically mean creating people. It is clear to everyone that this opens up a whole new Pandora's box of moral dilemmas. Therefore, it is reasonable to assume that research into artificial intelligence and human research will increasingly converge in the future.” That is why Popov thinks it is extremely important to write powerful, brave and critical speculative fiction that interrogates the present and poses persistent and interesting questions about the border between the human and non-human (the latter category includes more than just machines). If more and more people ask these questions, this could lead to the creation of technologies that work to reduce suffering – both human and non-human.
New technologies have long since entered the realm of art – writing stories, novels, and games built on hypertext links was already popular in the 1990s. "Today, the digital environment makes it even easier and more effective to experiment with the mixing of forms – novels with a non-linear structure (like Arcadia by Iain Pears, designed specifically for the iOS platform), digital comics that use the screen not just as a glowing sheet of paper but as a medium with its own set of rules, collaborative artistic worlds that take the form of internet encyclopedias," says Popov. Novels co-authored by an artificial intelligence and a human already exist – for example, Pharmako-AI by K Allado-McDowell, written in collaboration with the language model GPT-3. As artificial intelligence enters the world of art, the resulting dangers are likely to be as numerous as the opportunities. Biologist David Krakauer talks about two types of technology that interact with human cognition: "complementary" and "competing" ones. Complementary technologies, like the calculator, enhance abilities we already have. Competing technologies do the opposite: they replace our skills and lead to their atrophy. For example, we increasingly buy products offered to us by social network algorithms, to the point where our ability to make decisions ourselves may start to erode. The same processes could unfold (and are probably already unfolding) in the arts. "That's why we need to take a critical approach and seek a deeper understanding of these forms, how they work and what they can do. The humanities and social sciences are no less important for our future than the engineering and science disciplines," Popov concludes.
Futurologist Mariana Todorova is the founder and head of the Bulgarian branch of the global think-tank Millennium Project, an associate professor at the Institute of Philosophy and Sociology at the Bulgarian Academy of Sciences, and is currently working on her third book, which explores the technological and geopolitical future of the world and the new world order. Alongside the enormous significance of artificial intelligence, she points to the advent of quantum computing as a turning point for the human future. "It would take computers to a whole new level in terms of speed and complexity of problem solving," she says. The key question is whether the state or corporation that succeeds in creating such a device would retain a monopoly on the technology or share it with the rest of the world. "Here, we can already talk about an extreme new advantage, a rapid fifty- or hundred-year leap ahead in our evolution, and even about the emergence of a new human race."
Another way in which human-machine relationships could develop is the brain-computer interface. Up until 2015–2016, Todorova says, Elon Musk believed that in order to be fair and balanced, artificial intelligence should be open to all kinds of data input. However, experiments with self-learning chatbots linked to social networks show the opposite result: aggression and extreme attitudes begin to dominate. "Musk has never said it directly, but it seems to me that in recent years he has started to believe that the only way for humanity to continue to exist alongside artificial intelligence is through the technological progress of humanity itself. This is how the concept of the brain-computer interface emerged," she explains. Musk's company Neuralink began experimenting on pigs and chimpanzees, implanting devices into their skulls. In 2021, some of the animals were shown playing and winning simple computer games. "The main problem with this type of interaction is complying with the laws of thermodynamics: keeping the protein from losing its shape, overheating, and frying while information is transferred from pure protein to silicon, the material of most of these devices. Therefore, engineers will probably work on a device that is not implanted directly into the brain but placed outside the neurocranium," Todorova says, stressing that we are crossing a border into what will be utopia for some and dystopia for others.
Is there a danger that people may become redundant in this brave new world? "Since the Renaissance, we have been used to man being at the center of the universe, but now he could become a secondary element. Even if we control artificial intelligence, we could lose meaning and existential purpose," she reasons, because many human occupations would disappear as the corporate sector, and capitalism more generally, seek to optimize production through the use of technology. And if a strong artificial intelligence or a superintelligence emerges, we could lose all control. "For me, the ideal formula would be to follow the recommendations of the American engineer Eric Drexler, a central figure in the development and popularization of nanotechnology, who has been talking a lot about artificial intelligence lately. He thinks we should keep it at its current level – use it for a range of services in all kinds of areas, but not for anything that could change our very nature and force us to adapt to technology rather than use technology for our purposes.”
You can learn more about ShadowDance, a magazine about all forms and manifestations of the fantastic, at shadowdance.info.