Opinion

News you may have missed

Friday, April 7, 2023

As I’m sure you are all aware, Mr. Trump had his day in court earlier this week, and every talking head on the planet is either preaching or prognosticating about the matter. Are we turning into a third-world country where we imprison our political rivals? Or are we a good and decent country that has fallen under the spell of a megalomaniacal sociopath who needs to be reined in? There are sincere arguments for both, but what distresses me more than the arrest of a former president is the number of people on both sides who have already made up their minds about the outcome. Personally, I think we’re watching the first act of a very long play, and the corpulent soprano is still in the green room.

In a rush to quench the public thirst for drama, many news organizations have either minimized or entirely overlooked a few other important stories. Some of those news items could have a greater impact on our futures than the plight of our first orange-American president. In the past week, the European Union issued a strong statement condemning China’s support for Russia. Then Finland, with its 800-mile border with Russia, was admitted to NATO. Did anyone notice that it was a bad week for Mr. Putin?

The story that really grabbed my attention was an open letter from a group calling itself the “Future of Life Institute” (futureoflife.org), signed by Elon Musk, Steve Wozniak, Andrew Yang, and a who’s who of academic and industrial leaders. The letter calls for a six-month “pause” on the development of artificial intelligence. The sponsors of the statement contend that “AI” is moving faster than our ability to consider its cultural, legal, and economic implications, and they suggest that it’s time for us all to tap on the brakes.

The letter opens, saying, “AI systems with human-competitive intelligence can pose profound risks to society and humanity” and “could represent a profound change in the history of life on Earth.” AI implementation, they argue, “should be planned for and managed with commensurate care and resources.”

The letter then describes an “out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” They conclude with an appeal, saying, “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

A contingent of equally notable individuals, not least of whom is Wozniak’s former partner Bill Gates, has rebutted the statement, describing it as everything from naive to fraudulent. They have characterized it as overblown hype and say that it reinforces negative stereotypes about technology. A few of the original signers are even claiming that they didn’t sign. But somewhere in between those arguments lie serious concerns.

The Q&A that follows the letter mentions job displacement, the spread of deliberate misinformation, political bias, weaponization, and potential environmental impacts, all of which are becoming increasingly plausible. One issue currently moving its way through the legal system is the question of copyrights. If I were to direct an AI platform to create a work of art, who should hold the rights to that artwork? Me? The designer of the software? No one at all? These and other easily overlooked issues are in play, yet we are only beginning to understand the questions and are nowhere near the answers.

As I think about the issues raised in the letter, a couple of observations come to mind. First, I can’t imagine that any competitive, for-profit R&D organization is going to give the pause a moment’s consideration. They may recommend it to their competitors, but the free market doesn’t reward those who hesitate. The same logic would apply to military contractors, if not more so.

My other thought is a bit broader in perspective. We scoffed at the keynote speaker at my high school graduation when he said that “someday” computers would be a part of our everyday lives. We had no idea how quickly that would happen. I also think back to higher learning experiences when we discussed the concept of “singularity,” the hypothetical point where technology becomes uncontrollable. It was fun to talk about, but it required a great deal of imagination and I didn’t consider it as something that would happen in my lifetime.

I would imagine that people felt the same way in 1942 when Isaac Asimov penned his three laws of robotics: First, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Second, “A robot must obey orders given by human beings, except where such orders would conflict with the first law.” The third law was “A robot must protect its own existence as long as such protection does not conflict with the first and second laws.” In 1942, that probably sounded silly and dramatic, but here we are….

At this point, I don’t know if I should be excited about the future or if I should just feel old. Concepts that were considered science fiction not too long ago are now materializing at an accelerating rate. Like it or not, this will affect all of us, our children, and our grandchildren, perhaps even a bit more than hush money paid to a stripper.
