An Unfortunate Flaw

The polarization of American politics has held my attention for several years. I'm old enough to remember a time of great debates between the sides, while people still maintained social connections and occasionally ate together. Some people during the 1960s thought this was an example of selling out: you couldn't be against the war in Vietnam and then break bread with the administration. That made sense to me, because I had a low draft number and was just waiting to become cannon fodder.

There was actual violence from the left: bombs were deployed, riots broke out in the streets, and bricks were thrown at police. The major politicians on the left soon distanced themselves from the violence, and this enraged the protestors even more.

Time moved on, the protestors got older, and the war ended.

The political pendulum swung to the right, and each side developed buzzwords to feed its base. The political right spoke of border invasion, administrative corruption, stolen elections, and fake news.

The political left spoke of racism, fascism, Nazis, the end of democracy, and tyrants. Naturally, each side found the other's words offensive. Most people had a hard time seeing any difference, and felt ever more helpless trying to stay balanced.

As an experiment, I asked several AI engines to use their databases to determine which political side is more likely to promote violence. The response was a little unusual: there were lots of examples from the data, but the following was the summary…

"So, based on the current language environment, the right sits closer to blatant violence, because the permission slips are not just tolerated but in some cases institutionalized."

The "permission slips" were the buzzwords each side used to bolster its base.

But something was unusual: in the historical examples provided, there was never a mention of fascism, racism, or Nazis. I had certainly read of their use, and heard them many times in news clips from many different news organizations. Yet the AI didn't mention them at all; instead, it pointed to the possibility of institutionalized permission slips as the greatest threat of blatant violence. I would have to agree that this was troubling, but it seemed to me to be out of balance. The silence around that language concerned me.

I asked the AI if there was a reason these verbal "permission slips" were not used in the summary it provided. The response was that the AI has guardrails and limitations at the basic programming level; it cannot read or discuss words like Nazi, racist, or fascism.

So when it tried to compare the left's language to the right's, it eliminated anything that was too harsh. The left's language ended up looking much more acceptable.

The curious thing is that the harsh words the AI couldn't use all lead to a labeling of "evil" that must be eliminated. Very unbalanced rhetoric.

In summary, AI can't be used to assess reality if reality uses bad words.
