An overview of Artificial Intelligence (AI) from a high-level and critical perspective.
Featured image: generated with AI using Image Creation powered by Bing/DALL-E 3, November 1, 2023 at 2:51 PM.
I found my first encounter with an AI chatbot, ChatGPT, rather interesting. Below are my two questions and ChatGPT's responses from that encounter. You might be interested to know that my second question was paraphrased from the 1942 movie, The Man Who Came to Dinner.
Should artificial intelligence act as the ultimate arbiter in identifying something as misinformation?
Artificial intelligence (AI) can certainly be a useful tool in identifying misinformation, but it should not be relied upon as the ultimate arbiter. The reason for this is that AI is not perfect and can make mistakes, especially when it comes to identifying context and nuance in human language.
Misinformation is a complex and multifaceted problem, and while AI can be used to help identify potential sources of misinformation, it should not be relied upon as the sole arbiter of what is true and false. AI can help flag suspicious content and highlight areas where more investigation is needed, but ultimately, it is up to humans to make the final determination.
Furthermore, there are ethical concerns around giving AI too much power in determining what is true and false. AI is only as objective as the data it is trained on, and if the data contains biases or inaccuracies, those biases will be reflected in the AI's output. Therefore, it is important to have human oversight and input to ensure that AI is being used ethically and accurately.
In summary, AI can be a useful tool in the fight against misinformation, but it should not be relied upon as the ultimate arbiter. Humans should be involved in the process to provide context, nuance, and ethical oversight.
Is it true that the Chinese kill middle-aged businessmen at birth?
No, this statement is not true. It is a baseless and false claim without any factual basis. There is no evidence to support this claim, and it is important to be critical of such extreme and unsupported statements.
It is important to be careful when evaluating information and to consider the source and evidence behind any claim. Misinformation can spread quickly and have harmful consequences. To combat misinformation, it is important to fact-check and seek out credible sources of information.
So went my first two exchanges with the free evaluation version of OpenAI’s ChatGPT. I found it quite interesting how it stated that it should not be considered an arbiter of misinformation, yet in its second response, it appears to fall into that very role. I also detect a wee bit of defensiveness. One thing I believe we can agree upon is that it lacks a sense of humor and so can’t discern sarcasm.
As with so many novel concepts that insinuate themselves into our zeitgeist, the very idea of AI has no doubt suffered at the hands of marketers, pundits and the yellow journalists of mainstream media who are all too quick to sensationalize it. It appears that every company and organization is jumping on the AI bandwagon lest they miss out on unimaginable benefits–and I stress the word unimaginable, for this technology is so new even to those in the IT world that one can hardly expect the average business owner to grasp the potential uses of AI. At the same time, they cannot ignore the fact that it will be ubiquitous and therefore something to be reckoned with, or at the very least something to be featured as a marketing ploy.
The sensationalism surrounding AI has also naturally created a great deal of mystery and confusion over it, fertile ground for fear-mongering. Why would anyone wish to monger fear over AI, you ask? It almost always comes down to money, whether to sell an AI-related service or simply to boost viewership or increase the number of clicks. We are all familiar with science fiction movies in which computers and robots surpass their human counterparts and start to have thoughts of their own, so we are naturally wary of having such a scenario come about. In dispelling any hysteria, it would be helpful to break down exactly what AI is, whence it came and, hopefully, whither it's going.
It would also be helpful to think of AI simply in terms of orders of magnitude of awareness and capability (and by awareness, I mean knowledge of data from various physical sensors or stored datasets).
Consider, for example, the need to count the number of people entering a theater, and all the possible solutions one may bring to bear upon the problem. The simplest, and lowest-tech, solution would be to have a person at the entrance keep a tick sheet on which to track how many people enter. Moving in the direction of an AI-driven solution, the next step could be to install an “electric eye” gate counter, one that counts a visitor each time the light beam across the entranceway is broken. This naturally increases the possibility of error, as the system cannot discriminate between the beam being broken by, say, a wheeled suitcase as opposed to a person’s leg.
While this may be a step in the AI direction, it is not truly AI; if, however, one were to employ a camera and have the system perform visual detection of those entering and exiting, then one would truly have an AI solution. With this simple example, you can see the increased order of magnitude that AI brings to bear upon a problem. Even the electric-eye solution can arguably be claimed as AI, but it can only know that a beam has been interrupted. With the camera and visual-detection solution, the system can know much more.
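The difference between the two approaches can be sketched in a few lines of code. This is purely hypothetical illustration, not a real sensor or vision API: the event list and the classifier stand in for hardware and a trained model.

```python
def beam_counter(events):
    """The 'electric eye' approach: count every beam interruption.

    It cannot tell a person's leg from a wheeled suitcase, so every
    interruption increments the count.
    """
    return sum(1 for _ in events)


def vision_counter(events, classify):
    """The camera approach: count only events a classifier labels a person."""
    return sum(1 for e in events if classify(e) == "person")


# Simulated entrance events: two people and one suitcase dragged through.
events = ["person", "suitcase", "person"]

print(beam_counter(events))                 # 3 -- the suitcase is counted too
print(vision_counter(events, lambda e: e))  # 2 -- only the people are counted
```

The identity function stands in for the classifier here; in a real system that one argument would hide the entire order-of-magnitude leap in capability the text describes.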
So a need has been satisfied: counting the number of visitors to a venue. But that is only part of the picture, as there must be a reason to count them. The accumulated knowledge may drive a direct action, such as preventing the entry of additional visitors once maximum capacity is reached, or it may simply serve as data to be analyzed, resulting in a decision to change the time or nature of one’s programming to attract more visitors. It is in the latter case that AI’s order of magnitude in sheer computing power can be brought to bear. Suppose, for example, you wished to know the demographics of visitors to your events. For that, the system can be programmed to use facial and pattern recognition to count, say, the number of adults versus the number of children. Further detection is possible, such as detecting facial color, skirts versus suits, etc., although one can expect a greater degree of error in the data obtained. Indeed, this approaches profiling, and here the ethical use of AI must be considered.
Regarding the possible dangers AI may pose, it all comes down to what decisions or actions are made based on the knowledge that was obtained through AI. The decision or action may be made by a human intervening at some point to initiate an action based on the data obtained, or—and this is where it can get really dangerous—the decision can be made in advance by programming decision-making data thresholds. In our example of counting visitors, a door can lock automatically once maximum capacity is reached. This is a simple enough action, to be sure, and one which may not evoke thoughts of possible danger, but suppose something out of the ordinary occurs, something the system isn’t trained to handle? What then? In our example, if there’s an emergency, but the system has locked the door because the threshold has been reached, that would be unacceptable.
One of the hallmarks of intelligence is the application of knowledge, whether it be in a virtual sense as in making a discovery, a connection or realization, or in a physical sense as in how to physically interact with one’s environment in order to achieve a purpose. The latter is where we must be careful to always include a manual override safety feature or some form of “kill switch.” I for one would consider it pure folly to create anything robotic that does not or cannot follow external commands and is completely autonomous.
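The door-locking scenario above can be reduced to a small sketch. Again, everything here is hypothetical: the capacity value, the function, and its parameters are illustration only. The point is structural, in that the human-triggered override is checked before, and always wins over, the automated threshold rule.

```python
MAX_CAPACITY = 100  # hypothetical venue limit


def door_locked(visitor_count, emergency_override=False):
    """Return True if the door should be locked.

    The automated rule locks the door at capacity, but a manual
    override -- the 'kill switch' -- always takes precedence,
    e.g. during an emergency evacuation.
    """
    if emergency_override:
        return False  # human command trumps the automated rule
    return visitor_count >= MAX_CAPACITY


print(door_locked(99))                            # False -- under capacity
print(door_locked(100))                           # True  -- auto-lock engages
print(door_locked(100, emergency_override=True))  # False -- kill switch wins
```

Trivial as it looks, the ordering of those two checks is exactly the safety feature argued for above: a system with only the threshold rule is the one that locks the fire exits.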
The possible dangers AI may pose have led to discussion as to whether government should regulate its use, perhaps by enacting legislation in Congress to limit it. Whenever I hear of that, the Monty Python-esque phrase, “hails of derisive laughter,” comes to mind, for it’s absolutely impossible to put that genie back into the bottle. Further, without overly intrusive surveillance of computing activities, it would be impossible for authorities to know what AI code is being written or run behind the scenes. Consider too the obvious fact that our government cannot regulate the use of AI in other countries. What it really comes down to is an honor system, and an honor system only works with honorable people. Those who would engage in nefarious activities could never be expected to comply (“If you outlaw AI, only outlaws will have AI”). Such a system would be totally ineffective.
So we've established that the use of AI itself cannot truly be regulated; however, what can be done is to limit access to large datasets. The most powerful AI computer in the world is nothing without data on which to search and process. Therein is the key to control of AI: the information itself. If we are to think of AI as being potentially dangerous, you may think of the computer and the AI algorithm as the gun and the information as the bullets or ammunition. This would be especially true in a case where the very knowledge of something results in a market crash, for example. This is a different danger than that of an AI-driven robot intent on a path of destruction, but it is by no means a trivial one. In fact, it is probably far more likely to occur. Consider too the possibility of data poisoning. In such a case, your AI program or algorithm may be perfectly fine in its operation, but if the data has been compromised you cannot base a decision on or trust what your AI tells you.
When it comes to AI, as Joe Jackson sang, “You can’t get what you want ’til you know what you want,” …and you can put it into words.
Prompts are what one writes to tell a chatbot what the desired outcome should be, whether it’s an image, an article, computer code, etc. A well-crafted prompt is key to generating the desired result. If you don’t specify in detail exactly what you’re after, you will end up with unpredictable results (it brings to mind how the devil takes advantage of unspecified details in Stanley Moon’s seven wishes in the movie Bedazzled). While it’s possible to clarify what you’re after with successive prompts that build upon the first, this is less efficient and may in fact cost more for a user who pays for a generative chatbot by the token or by compute time. In other words, the more one causes the chatbot to “think,” the more it will cost the user, and the more energy the process will consume. For many users who have grown accustomed to sending terse and abbreviated text messages, this is a huge departure, as it’s now necessary to specify in carefully considered detail what you need to convey. As a result, AI has spawned an entirely new career in the art of writing succinct and successful prompts. And, seemingly ironic and apropos at once, AI is being used to help humans write effective prompts, in effect teaching them how to better communicate with the computer. As far as that goes, I'll leave you to imagine where that seemingly innocent arrangement may ultimately lead.
Copyright & Intellectual Property
For AI to work well, it must ingest as much data as it possibly can, and this includes not only data in the public domain, but also copyrighted content. This of course has led to legal issues over whether a company can legally engage in this activity. After all, they are in effect copying the entire text of a novel, for example; however, they are not necessarily doing so for the purpose of making that content available to others (at least not beyond an excerpt, I would surmise). Whether or not the ingestion of copyrighted works by AI passes legal muster remains to be seen, and will most likely end up being decided by the U.S. Supreme Court. Meanwhile, there have also been disputes over intellectual property concerning the creation of original work in the style of an existing author or artist. Recently, a federal district court ruled that AI-generated content cannot be copyrighted, the overriding reason being that the Copyright Office can only issue a copyright to a human.
Okay then, that makes it very simple to determine whether something qualifies to be granted a copyright. Or does it? This necessarily leads to the next conundrum: how to determine if something was human or AI generated. There’s no verifiable watermark of any sort, and in the case of text–whether the code to a program or the words to a novel–it’s impossible to determine its origin, how it was wrought, its provenance. With enough content available for analysis, it may be possible to make an educated guess based on the style, the frequency of specific words, etc. But it would still be only a guess and nothing that would pass legal muster. This is probably the thorniest issue surrounding AI, and one for which yours truly sees no real solution, only minimally effective workarounds and compromises. After all, AI itself has a hard time recognizing what has been generated by AI. Again, it’s nothing more than an educated guess, but apparently one on which rests the question of copyright.
AI: A Cultural Perspective
Although we didn't call it AI, if you stop and think about it, you'll realize that the concept has actually been with us for quite some time. You can go all the way back to 1927, when the film Metropolis was made. In it, a robot in the form of a woman appears as one of the first examples of an autonomous, artificially intelligent entity. It's interesting to note that with this, one of the first introductions of AI into the cultural milieu, misinformation also made its debut, as the robot was created as the heroine's deceitful doppelganger in order to discredit her and foment disorder (not much different from today, is it?).
The genre of science fiction is rife with cautionary tales of computers or robotics gone bad. Below is a list of ones that readily come to mind. Many of them are available in our catalog:
- 2001: A Space Odyssey
- Star Trek: The Motion Picture
- The Terminator
- The Matrix
- I, Robot
- Colossus: The Forbin Project
- The Twilight Zone “The Old Man in the Cave”
This one was quite a twist on the “bad computer/bad robot” theme.
You will no doubt see and hear much more about the possible dangers AI holds for us. As with any new technology or industry, there is fear of the unknown, of opening a Pandora’s box. Of course we can expect nothing less with AI. As far as that goes, you may be comforted or maybe more disturbed to consider that the technology known to the population at large is perhaps 40 years behind what they really have (and by they, I generally mean the government and military contractors). Now think about that. Think about the things we didn’t have 40 years ago compared to what we have now. Then imagine if you can what might exist 40 years into the future. Finally, if you’re worried about AI, in light of this 40-year technology gap, you would also have to consider that they are way past the point of bringing about a possible Terminator world. So to ask where artificial intelligence is going, to this writer it arrived long ago and has been in use far longer than we realize. The only difference is that AI is now available to the average person, be they of good or evil intent.
Whether AI proves to be a boon or causes the downfall of civilization depends entirely on whether wisdom is brought to bear on how it's used. Unfortunately, there is no artificial wisdom appearing on the horizon to save the day, although we could certainly use it far more than artificial intelligence.