Our idiot AI overlords

2020/04/25

Tags: tech

I've always been a bit reserved regarding artificial intelligence. When I was a kid I saw many movies about scary AIs that gained sentience and, for one reason or another, decided to exterminate us lesser, biological beings. The fear there was science run amok, the pace of technological development exceeding our ability to control it.

To this day I remain skeptical of AI, but not necessarily because I'm worried about a Skynet coming to destroy us. Highly intelligent AI, while somewhat scary, is not really a realistic threat at this point in time. Extremely stupid AI, however, is another story.

How AI works, in a simplified sense

I'll preface this by saying that while I am a computer science major, I haven't specialized in AI. Our university has many apparently quite high-quality AI courses and a respected AI master's track. I haven't taken these courses myself, so in that sense my knowledge of the specifics is limited.

I do know some of the basics, which I have gathered by watching CGP Grey's video on AI and Code Bullet's and carykh's AI experiments. I have naturally also run into the topic on a basic level in some of my CS courses.

Basically, to my understanding, many of our AI models work by giving our AI program a bunch of data as input and, through the power of mathematics, the AI spits out some kind of result. This result is usually used to classify the data in some particular way, like separating bees from threes. Initially the AI absolutely sucks at this and the classifications are largely random. We then train the program by slightly modifying the AI model and retesting it until it can classify the data with a desired level of competence. This modification can be automated by adjusting the model's parameters somewhat randomly, trying multiple AI models in parallel, and keeping only the best ones. Over time, through this evolutionary process, more and more competent models arise.

The hope is that by subjecting our AI model to a diverse enough set of input data and repeatedly testing its ability on this training data, we will end up with a computational model that is able to spot patterns in the data which would be extremely difficult to describe in traditional code.
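
To make that loop concrete, here is a toy sketch in Python. It is entirely my own illustration, not any real AI framework: it "evolves" a tiny linear classifier on made-up data by randomly nudging its parameters and keeping whichever candidate scores best on the training set.

    import random

    # Toy "training data": points (x, y) labelled 1 if they sit above the
    # line y = 2x + 1, and 0 otherwise. Real systems would use images,
    # audio or video metadata instead of two numbers.
    random.seed(0)

    def true_label(x, y):
        return 1 if y > 2 * x + 1 else 0

    data = [(random.uniform(-5, 5), random.uniform(-15, 15)) for _ in range(200)]
    labels = [true_label(x, y) for x, y in data]

    # A "model" here is just three numbers (a, b, c); it predicts 1 whenever
    # a*x + b*y + c > 0. Note that the training loop below never looks at the
    # rule it is trying to learn -- it only sees how well each candidate scores.
    def accuracy(model):
        a, b, c = model
        correct = 0
        for (x, y), label in zip(data, labels):
            prediction = 1 if a * x + b * y + c > 0 else 0
            correct += prediction == label
        return correct / len(data)

    def mutate(model):
        # Nudge every parameter slightly and randomly.
        return tuple(p + random.gauss(0, 0.3) for p in model)

    # Start with random models, keep the best one of each generation and
    # refill the population with mutated copies of it.
    population = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(20)]
    for generation in range(50):
        best = max(population, key=accuracy)
        population = [best] + [mutate(best) for _ in range(19)]

    print(f"best accuracy: {accuracy(best):.2f}, parameters: {best}")

Even in this toy case the training loop only ever reports a score, never an explanation. Scale the same idea up to millions of parameters and you get the situation described in the next section.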

The problem

To someone who is interested in software freedom, this method has one pretty glaring problem. Since the AI model is built entirely by a machine from repeated exposure to data and retesting, what we end up with is basically a black box: a magic machine that seemingly does what we want it to, but whose inner workings we don't actually understand. And in this case it is not just us users and consumers who don't know how the magic box works; not even the people who built the model know.

This means that on some level, the AI is basically just a guessing machine. It might be incredibly good at guessing, but we cannot really be sure what it bases its guesses on. We can only keep testing it to ensure it keeps making guesses that line up with our expectations.

I personally find this relatively scary. Our traditional code is obviously not immune to making wrong decisions, but when it does, we can usually inspect the logic and find where we made a wrong assumption or didn't take some parameter into account. Even humans, who can also be somewhat arbitrary arbiters, can usually provide some level of rational explanation for why they made a particular mistake or error in judgement.

Our overlords

I rag on AI here a bit, but don't get me wrong: AI, in its current non-Skynet form, is a perfectly usable tool. Although I view traditional code as more transparent and thus more reliable from an epistemological point of view, sometimes it just isn't enough to express very complicated patterns. In this sense an AI that is only rarely wrong can help us classify and process data we would otherwise struggle with.

My contention is with cases where we no longer use these AI systems as tools, but instead set them up to be our overlords; where we set them loose on a particular problem and give them increasing amounts of authority.

If you've been consuming content on YouTube lately, you may have noticed that content creators are replacing certain words with euphemisms. This is because YouTube has unleashed an AI on its users which aims to classify content as either friendly or unfriendly to advertisers, after their previous AI placed various adverts next to Islamic State propaganda. The idea itself is not malicious: YouTube is just trying to appease its advertising partners, and due to YouTube's size, AI is just about the only way to even try to tackle the issue.

However, while the motivations might not be malicious, the results have not been entirely positive. Lots of people who have built their careers making videos of all sorts on YouTube have been accidentally targeted by this demonetization AI because it saw patterns which suggested a video might not be advertiser friendly.

The number of false positives this demonetization AI has exhibited has pushed those fearful of losing their income either to seek alternative methods of financing their efforts (e.g. doing advertisements for unethical games and services like RAID: Shadow Legends) or to self-censor to a ridiculous degree to minimize the chance of the AI going haywire. And even if you are not worried about advertising revenue, YouTube will push advertiser-friendly content more, since that is what makes them money. So, if you are worried about losing visibility or about how you will pay your rent, you will need to alter your speech and possibly what you show.

I recently watched a video on Philosophy Tube where they too had to resort to euphemisms to deal with actual issues in a way that wouldn't get the video soft-blocked. Our ability to communicate real-world ideas that matter is being harmed here.

Note that this isn't a free speech issue. YouTube is entitled to determine who gets to speak, and about what, on their platform. Nobody is owed a soapbox, and I believe there are forms of speech that we are morally obligated to block. The issue is that in many of these cases YouTube doesn't actually have a problem with the things being said. YouTube has repeatedly acknowledged that their AI is broken and that they are trying to tune it to reduce the number of false positives. But these changes are slow and the appeals processes are clogged up. The overlord remains out of its creators' control.

Conclusions

Like I said previously, AI is a tool among tools. It can be used or abused. But as with all of our tools, we should be their masters and not the other way around. I believe AI, as it currently exists, is an extreme idiot masquerading as an intelligence. And this appearance of intelligence has caused us to put it into roles where it rules us instead of assisting us.

I believe this is worthy of criticism. For sites as big as YouTube, AI might be the only way forward, but if that is the case, we should consider whether sites like YouTube are viable at all. Perhaps spaces for humans can only reasonably be led by humans.

When it comes to classifying content as acceptable or unacceptable, I believe we can do so without creating impossible-to-understand magic boxes. I prefer a model similar to that of Mastodon and the Fediverse, where instead of building massive silos, which become impossible to moderate, we build interconnected communities. These communities are responsible for moderating themselves, and communities that fail to do so in ways we deem necessary get cast out of our federation. Not only does this allow disagreement about the standards we set upon ourselves, it also distributes the moderation burden.

But in general, I believe we should inspect very carefully what kind of power we put in the hands of AI. We should naturally also be critical of what power we give to traditional software, but there at least the issues of responsibility and the ability to fix problems are clearer. In my view, the optimal way to use AI is to leave it hard but ultimately not very important tasks, where the potential harm caused by a wrong decision is minimal. In cases where decisions have potentially harmful consequences, a human should at least monitor and verify them, if only so that we have a human to ask for an explanation after mistakes are made.
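
As a rough sketch of what that last suggestion could look like in practice (the labels, names and thresholds here are made up for illustration), an automated system could apply only the low-stakes, high-confidence calls on its own and queue everything else for a human:

    from dataclasses import dataclass

    @dataclass
    class Decision:
        item_id: str
        label: str          # e.g. "advertiser_friendly" or "not_advertiser_friendly"
        confidence: float   # 0.0 - 1.0, as reported by the model

    REVIEW_THRESHOLD = 0.9                            # below this, a human checks the call
    HIGH_STAKES_LABELS = {"not_advertiser_friendly"}  # decisions that can cost someone their income

    def route(decision, review_queue):
        """Apply low-stakes, confident decisions automatically; escalate the rest."""
        if decision.label in HIGH_STAKES_LABELS or decision.confidence < REVIEW_THRESHOLD:
            review_queue.append(decision)   # a human verifies before anything takes effect
            return "pending_human_review"
        return "applied_automatically"

    queue = []
    print(route(Decision("video-123", "advertiser_friendly", 0.97), queue))      # applied_automatically
    print(route(Decision("video-456", "not_advertiser_friendly", 0.99), queue))  # pending_human_review

The exact threshold matters less than the principle: for the calls that can actually hurt someone, a person, not the model, is the one who answers for the decision.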
