Sentient AI, by Midjourney

Are open source LLMs too dangerous?

Last week I was asked this question by Tom Luechtefeld. I thought it was a great question that deserved a post of its own by a respected author. Since I don't know any, I thought I would give it a shot, and I got a bit carried away. My objective is to ride the current wave of AI and ensure that I and my clients are not left behind. However, the future is uncertain.

What I write in this article is a departure from my stated objective of focusing on practical B2B applications for small businesses, but I can't back down from a good challenge. Also, I am fascinated by this topic, and by the opportunity to pretend to be an intellectual. 🤓

The Shift to Cybergenic Content

Allow me to be so bold as to invent a new word: cybergenic. Here is my definition:

Cybergenic, adjective: generated by computation, such as with artificial intelligence. The cyber- comes from the Greek kybernetikos, popularized as "cybernetics" by Norbert Wiener in the 1940s, and -genic from the Greek gennan, meaning "to produce" or "to generate."

AI has no feelings; it is not animated by emotions. AI only calculates, coldly. To AI, misinformation, disinformation, and information are all one and the same. AI has no purpose other than executing an algorithm too complex for humans to understand, so we stopped calling it an "algorithm" and started anthropomorphizing it instead.

To AI, there is no "good" content or "bad" content. There is just "content."

The amount of content generated by generative AI, notably systems built on LLMs, will be massive, and it will dwarf human-generated content.

Welcome to the Cybergenic Web.

The Inconvenient Dangers of the Cybergenic Web

The Cybergenic Web will pose a number of dangers that, compared with the existential threat discussed later, are less of a threat to humanity as a whole. That is why I am calling them "inconvenient" dangers.

I am referring to topics such as propagating hate, bias, and other social injustices. The Cybergenic Web will certainly make life more difficult for a portion of society, just as the introduction of any new technology has, but we will either vanquish these problems or learn to live with them, as we always have.

People have written enough about these problems already, so let's move on.

Upending Existing Power Structures

I am certain that AI will upend power structures. It is not a question of "if," but of "when."

We have already seen the damage that people can cause with (mis|dis)information. Leaders have been deposed and revolutions sparked as a result of Twitter or Facebook. Now, just for fun, let's automate all of that and see what happens.

The Cybergenic Web will drown out content as we currently know it. What that will look like is anybody's guess, but the impact will be immense.

Let's use email as an analogy. It is likely a very poor analogy, but it's the closest one I could think of. Email started out as a curiosity, used only by computer geeks in a very closed context. Eventually, it caught on as a useful tool for society at large, until it was co-opted by bad actors. Now, spam (and phishing and all those goodies) is a fact of life.

We have invented a number of workarounds to the problem of bad actors who abuse email, but the struggle just seems to continue, with entrepreneurs on both sides profiting, and the "normal" users taking the hits.

If we extrapolate from the struggle between "good" and "bad" uses of email, it is likely we will see the same dynamic continue in the Cybergenic Web.

One could argue that "good" and "bad" are just different points of view, so to remain on somewhat more solid ground, let's stick to the quantitative. Let's consider ownership, and by implication, power.

AI is creating a chasm between the "haves" and the "have-nots". One potential mitigation of this concentration of power is communal ownership of AI. Perhaps the best form of communal ownership is open source, but perhaps not.

This leads to some interesting topics to explore:

  1. Is communal ownership of LLMs desirable?
  2. If we assume that communal ownership of LLMs is desirable, is open source the best way to achieve it?
  3. If LLMs are open sourced to mitigate the concentration of power, could that openness be abused, making the solution worse than the problem it is trying to solve?

Note that these questions appear, at least to me, to be analogous to political considerations. As an exercise, replace "LLM" with "land": is communal ownership of land desirable?

The Existential Threat

As a society, we will have to continue to grapple with the problems of ownership and power. I don't know how this will play out, but the struggle is not new.

What is new is the potential of AI as an existential threat. It is stunning that a language model could hold the key to our future.

As a passive thing, the Cybergenic Web has the potential to upend society, but it probably won't kill us.

If the Cybergenic Web becomes agentic and power-seeking, we are potentially doomed. There are plenty of gripping sci-fi movies that ponder how.

Welcome to the Cybergenic Web.