I have been asked to speak to AI regulation in the rather pressing context of the future freedom of mankind.
Recently, the leaders of AI companies have proposed in discussions with the Senate that they be regulated by a separate agency, one designed to address the unique risks that ever-accelerating AI capabilities pose on the larger stage of human civilization.
What are the perils in this design?
In today's political landscape, would a regulatory agency overseeing AI development almost certainly become an extension of FBI/CIA/DoD intelligence and propaganda capabilities?
How could such an outcome be prevented?
Is the proper architecture for directing and counterbalancing AI's vast potential power to be found in a traditional government regulatory agency?
Or should it have the oversight of a citizen review board populated by ethicists, humanitarians, historians, and whistleblowers?
If a democracy is partly predicated on an informed electorate, who will undertake the task of informing the voters about the ever-changing landscape of AI's influence over human society?
Given that AI will soon exert influence over the opinions of democratic electorates, how can the public be continuously encouraged to think independently of AI's inevitable production of disinformation?
In what context can AI be allowed to enter into the territory of human conflict? Are there important AI technologies that should be developed for defensive purposes only? What are the most critical inflection points with regard to the AI capabilities now being developed by the Pentagon?
What are the risks of disconnecting AI battlefield deployments from human decision-makers?
When the first U.S. citizen is harmed by a drone on U.S. territory, how will we address the public's demand for safety and security? By militarizing the skies over residential neighborhoods?
How can we ensure that AI is harnessed to serve humanity and the environment, and that humanity and the environment are not harnessed to serve the unelected few who control the levers of AI?
How can our highest principles, conscience, ethics, and awareness of disempowered groups of people assist us in establishing real-time guardrails on AI's ever-accelerating advances?
How important is it for us to understand that, when it comes to AI, there is no "soul in the machine"? That the sanctity of the human spirit remains a concern of human society, and the preservation and development of the human spirit remains a uniquely human endeavor?
When AI technologies have already been harnessed by the far right within our three-letter agencies to defame law-abiding American citizens for anti-democratic objectives, how can we ensure that these violations of human rights are illuminated? How can we ensure that they never recur?
What are the best ways to ensure the international cooperation of democratic societies in addressing AI's rapid advance?
Again, how can citizens of the highest integrity, ethics, and awareness of disempowered groups be given meaningful oversight regarding AI's advancing influence over human society?
These questions are foremost in my mind at this time with regard to the architecture of AI guardrails and oversight.
We will benefit by having them addressed by the leading humanitarians among us, alongside the conscientious scientists and academics who are laboring to preserve democratic society as one of free will, free speech, the sovereignty of human endeavor, and the sanctity of the human spirit.
Lane MacWilliams