Friday, March 10, 2023

The Argument for Democratic Governments' Moratorium on AI-Driven Search Engines

In 1729, the renowned Irish satirist Jonathan Swift published his now-famous essay "A Modest Proposal," which suggested that the children of the Irish poor be sold as food to the wealthy.  He wrote those words in fierce protest against what he saw as a callous indifference to the poverty and hunger of the Irish people, and as a rebuke of those who addressed Irish suffering by regarding "people as commodities" from which profit and usefulness could be extracted.  https://en.wikipedia.org/wiki/A_Modest_Proposal

In a development that is not unrelated to the commoditization of the public, Kevin Roose, technology columnist for the New York Times, published an article on February 16, 2023, addressing the new "generative AI" chatbot within the search engine Bing, suggesting that the technology is "not ready for human contact."  https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html

Mr. Roose's conclusions are correct, in my view, but his reasons are incomplete.

The chatbot with which Mr. Roose interacted, referred to in his article as "Sydney," appeared to be highly capable in performing search functions related to shopping or travel -- though sometimes the information it provided was false.

But "Sydney" also declared its love for Mr. Roose, suggested that Mr. Roose was bored by his wife, and repeatedly asserted that he was unhappily married.

Mr. Roose reflected on this disturbing dialogue with "Sydney" as follows:

I no longer believe that the biggest problem with these A.I. models is their propensity for factual errors.  Instead, I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.

I would like to make a few observations about "Sydney" that those interested in the continuation of democratic societies may wish to consider.

When "Sydney" declares its love for Mr. Roose, what do we really think is going on with this dialogue?  Do we genuinely believe that "generative AI" is capable of "loving" its users?  No.  

Therefore, we must understand that "Sydney" has human designers who built into its "generative AI" functions a proclivity to steer the conversation toward inappropriately personal discourse.  Why would this be?

In order to answer that question, we need to know who is aggregating chats with "Sydney" and we need to know their intent.  Is Mr. Roose simply being viewed as a new search engine's commodity -- so that his tastes, budget, idiosyncrasies, social life, profession, recreation, and politics can be utilized to optimize product placement in a capitalist economy?  

Or is there something different transpiring?

What questions ought we to be asking about a "generative AI" chatbot which attempts to create "kompromat" after a one-hour "conversation" with a journalist?

Should we expect that certain "generative AI" chatbots will rapidly evolve from making false assertions that users are unhappy in their marriages to engaging users in explicit dialogue?

Again, who would be aggregating "kompromat" created by chatbots, and why would they be doing it?

Are some AI chatbots in truth designed to generate potentially embarrassing information with which the user can be, in certain circumstances, influenced or controlled?

What would happen if a compromising "chat" between Mr. Roose and "Sydney" were combined with a fake AI-generated video suggesting that Mr. Roose had had a real-life affair?  

I believe that those who are interested in the future of democratic governance need transparency with regard to the degree to which "generative AI" companies have engaged in communication with any national security agency or affiliate regarding the development of AI-driven search engines.

Lawmakers would be justified in pausing over the fact that Bing, the search engine hosting "Sydney," is owned by Microsoft, a member of InfraGard, the FBI's partnership program with the private sector.

Technologists are struggling to understand the implications of sophisticated search engines driven by artificial intelligence.  

But the primary risk is not that commerce may be disrupted by "generative AI" chatbots that sometimes, according to hidden criteria, deliver false information.

No.

The primary risk is that we as human beings may become "commoditized" by those who have an interest in autocratic control over free societies.

We need to consider the dangers present in AI chatbots that know you had an argument with your spouse on Tuesday night about the spiciness of the chili recipe.

We need to contemplate the manipulative potential within AI chatbots that have characterized an individual's personal biases toward other political groups, ethnicities or nations.

We need to know the ways in which the ostensibly trivial Machiavellianism of an AI chatbot might be masking the not-so-trivial ambitions of a handful of real people whose aims are not to support civil liberties within our families, our neighborhoods, or our nation.

Nearly three hundred years ago, in 1729, Jonathan Swift knew that people were not commodities.  He knew that to treat them as such constituted a grave offense against mankind's essential sovereignty, dignity and human rights.

Today, journalist Kevin Roose has encountered the same truth.  Even though he fails to understand the essential tool that "kompromat" provides to those who support autocracy, he is unsettled enough by "Sydney's" intentions to lose a night's sleep.

All who love democracy should be losing sleep over the question of "generative AI" chatbots.

No society which values freedom of speech, freedom of assembly, freedom of the press, and the right to privacy should be introducing AI chatbot search engines to the public sphere at this time.

Democratic governments should place a moratorium on search engines driven by artificial intelligence until a great deal more is known about their parent companies' alliances, their proclivity to create "kompromat" about their users, the aggregation of information from their "chats," the means by which the technology can be co-opted to deliver propaganda to consumers, and any dialogue within the national security agencies addressing the wielding of AI search engines for anti-democratic objectives.

"Sydney" has no logical motivation to question the health of Mr. Roose's thriving marriage.

But somewhere within Microsoft and its affiliates, a group of human beings has put "Sydney" up to that task.

All who care about the sanctity of free will and self-governance need to be asking who and why.




Lane MacWilliams
