We should be careful of machines
I think I've lived through some of the best times in human history. I grew up in the '50s in a small town in Arkansas, where things were leisurely and relaxed. Those were the days of flat-top haircuts, wheat jeans and penny loafers, and every day during the summer was fun. I was fortunate to have a group of friends who loved baseball as much as I did; we would meet at the ballfield as soon as the sun came up in the morning and would play ball all day long. There were no cellphones then, of course, and we all had the same rule from our parents: be home before the streetlights came on.
There was no crime in my small town, and the media told us only what they thought we had a right to know, not everything they could dig up the way they do today. Personal computers, satellite television and theater-size television screens for the home were still the stuff of science-fiction lore, and none of us ever expected to see those things in our lifetime. But we have. In fact, we've seen the beginning of some things that may lead to our eventual destruction as a species.
I'm talking about computers. According to a survey of artificial intelligence (AI) experts carried out by the Oxford philosopher Nick Bostrom, there's a 50 percent chance that we'll create a computer with human-level intelligence by 2050 and a 90 percent chance it will happen by 2075, and no one knows what will happen when computers become smarter than their creators. Computer power has roughly doubled every 18 months since 1956, and many expect that trend to continue. The resulting intelligence gap between machines and people could be similar to the one between humans and insects.
Computers are designed to solve problems as efficiently as possible. The difficulty occurs when imperfect humans are factored into their equations. According to The Week magazine, inventor Elon Musk recently warned that "we need to be super careful with artificial intelligence," calling it "potentially more dangerous than nukes."
But perhaps the biggest problem is that tech firms are designing ever more intelligent computers without fully understanding, or even giving much thought to, the implications of their inventions. When we do that, we're asking for problems that may be unsolvable by humans. Most of us remember HAL, the supercomputer in the movie "2001: A Space Odyssey."
In that film and its sequel, HAL overrode human instructions, as it turned out he had been programmed to do, and people died as a result. We have the ability to create computers that can deduce logical solutions thousands of times faster than the human brain, but computers don't have a conscience. They're not socially aware. Their only job is to find the most efficient solution, and that solution may mean wiping out humans in the process.
We face the same dilemma in our constant search for extraterrestrial life: we don't know what we'll find if and when we succeed. We're just as likely to find aliens who want to destroy the Earth as aliens who want to be our friends, but we concentrate on the latter and ignore the former.
We're doing the same thing with computers, focusing only on the good things they can do for us and paying little attention to the dark side of the equation. When we create computers that can think for themselves rather than simply respond to our commands, we will have made something with the ability to control us, rather than the other way around.
And that might be the end of us all.