There is a tendency and a push to simplify any kind of argument and to try to answer any kind of question with a "yes" or a "no": "Should AI be allowed to exist?", "Is communism bad?", "Are immigrants good for our country?", "Should taxes be lower?".
It is as if someone or something is constantly trying to push us into a "please press yes or no" situation, and we are not even allowed the courtesy of a "please".
The interesting fact about the US Congress (and this will probably be true of any society that is halfway governed by what we colloquially call "democracy") is that a) the people elected to represent the people are mostly better educated than the population they represent, and b) almost all of them were trained as lawyers and economists before becoming lifelong politicians.
Personally, I find this to be a problem, one that is best described with a few analogies:
1) Survival in a physical world requires physical skills. In other words, it is important that enough people in society know how to make a hunting tool out of a flint stone, or are capable of creating a system that produces a useful outcome - and I would like to define "useful outcome" for this purpose as "something that benefits the survival and happiness of the human race" - in that order. But people (like lawyers and economists) who may believe that the crafting of words can change physical reality are hardly in a position to understand the implications of new technologies, or even to ask meaningful questions of someone with a technical background - someone who can then run circles around those ignorant but arrogant lawmakers.
2) In engineering, most of the time it is possible to predict the outcome of a particular physical experiment, or the output of a machine: if you try to use a car outside of predefined parameters, you are likely to kill yourself and most likely other people around you. Or, if the car is not built to specified standards, then the car itself will do its best to kill you. This situation is the result of decades of effort by millions of people to define what a safe car actually is. And although the definition of a safe car has changed over time (remember the debates in the 1960s about seat belts), it has always been possible to assess the safety of a car in a relatively straightforward way, precisely because the definition of a safe car has been made as clear and as unambiguous as possible at any given time.
3) Our economic and political systems lack this kind of straightforward definition. For example, what do you mean by the word "capitalism"? In the minds of many people in the Western hemisphere, "capitalism" and "democracy" are synonymous - but in any vocabulary, even in the vocabulary books of societies that live a narrative equating these two terms, there is a clear distinction. Similarly, what do you mean when you say "communism"? Originally, the word "socialism" meant something like "people who work in the same place, with the same purpose, and who can decide for themselves what exactly to do with the results of their work because they collectively own the means of production". I would argue that this definition applies to any kind of co-op. However, the distinction between "socialism" and "communism" is blurred in many people's minds. Today, the term "communism" covers very different concepts, from the gulags of the Soviet Union in the 1930s to any form of universal health care. This situation is not helpful if you (for instance) want to make a decision for or against communism.
What does all this talk about communism, capitalism and the importance of clear definitions have in common with artificial intelligence? Well, when someone puts you in a position where you have to say yes or no, you have to be very precise with your terminology: you have to have a clear idea of what you mean when you say "capitalism", "socialism" or "communism". Or what the term "artificial intelligence" actually means.
So, in the context of artificial intelligence, it really boils down to these two questions:
- What is your understanding of what AI is?
- How do you want to integrate AI into human society?
The answer to the first question is relatively simple: we don't have a universally accepted definition of AI, although you can find some useful overviews online. Of course, almost anyone with a phone and a bit of curiosity has recently started playing with ChatGPT and similar large language models. But there are other applications and other useful approaches to AI beyond large language models. Large language models exist for the simple reason that big tech companies (as well as government agencies) are sitting on huge piles of data, while constantly promising more useful predictions for the purpose of advertising and narrative control. So they have to produce some magic for investors, and the fact that this magic has only been produced after countless inputs from real people working in conditions that can best be described as slavery has had little impact on anyone wanting to buy into the AI spell.
Therefore, the second question (how do we use something) is inevitably intertwined with the first (what do we create). And whenever a new technology is born, the history of quantum physics should be at the forefront of our minds. Born in a few German universities in the 1920s, quantum physics aimed to explain the laws of nature at the atomic and subatomic level. And what was the first practical application of this new science? Oh, of course, we started making bombs with it.
So what AI technology will emerge, and how will it be used in the future? At this stage, we can only project past events into the unknown future: just as the first combustion-engine cars were just carriages with motors, the AI models of the future (which are already upon us) will be used to enforce the narratives of the present: control, commercialisation and the wholesale commodification of existing human resources. There is no reason to believe that AI is some inevitable, terrible fate that we as a society have no control over - I don't buy the OpenAI CEO's argument that he is afraid of the new technology that has been unleashed upon us like a Frankenstein monster. Quantum mechanics exists independently of us, beyond technology and humanity; it can be used to create transistors and microchips, or to destroy mankind in a nuclear apocalypse. Artificial intelligence, by contrast, is only a technology, not a law of nature. The direction and refinement of existing AIs can take any possible turn - because there has never been a law of nature to guide the development and application of a new technology.
Ultimately, we as humanity need to have a discussion about what benefits we expect from this new toy, and how we can mitigate any unwanted effects that may arise. We cannot leave the definition and scrutiny of AIs in the hands of a powerful but incompetent elected leadership. We all need to define what the term "artificial intelligence" actually means. If we leave this new toy in the hands of techno-nerds, financial markets and unchecked government agencies working for competing nation-states, then we deserve what will happen to us.