Should we fear Frankenstein monsters from AI and genetic engineering?
Posted on March 3, 2020 by Jack Balkwill
Unless we change the direction in which our medical scientists and computer engineers seem to be headed, mankind may be doomed to extinction, whether by robots running careless artificial intelligence or by genetic engineering of the human genome. Either could be the end of us even before we destroy ourselves with nuclear weapons or pollute our environment.
Artificial intelligence fears
We have already seen machines with bad computer code kill humans. Exhibit one is the Boeing 737 Max, whose flight-control software took over from the pilots under certain circumstances and refused to relinquish control, even as the plane headed toward certain disaster; two such planes crashed, killing 346 people.
While that programming is restricted to the specific function of flying, it has assumed what was once a human skill: that of pilot. With breakthroughs in artificial intelligence, we can expect far more complex robotics in the future.
As a boy I was convinced that we would control the robots we created. The great science fiction writer Isaac Asimov persuaded me that we would program his 1940s-era “Laws of Robotics” into future intelligent machines, thereby rendering them safe. His First Law holds that a robot may not injure a human being, and under his scheme a robot must satisfy this law before it can perform any other function.
Asimov also recognized that a robot could let a human being die simply by doing nothing, so the First Law forbids harm through inaction as well: if a robot saw that a human being was about to step over a cliff, it would be obligated to try to stop it from happening.
There were further laws, which together were meant to prevent any harm coming to humans from their robotic inventions: the Second Law requires a robot to obey human orders except where they would conflict with the First, and the Third requires it to protect its own existence except where that would conflict with the first two. It seemed logical to me that human designers would take heed and program these laws into any artificial intelligence their machines employed.
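To make the strict priority ordering concrete, here is a toy sketch of my own devising in Python, not anything Asimov or any actual robot designer wrote. All of the names and the convenient true/false flags are hypothetical; judging in the real world whether an action would harm a human is precisely the hard, unsolved part.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    injures_human: bool   # First Law: would injure a human being
    allows_harm: bool     # First Law: would let a human come to harm through inaction
    disobeys_human: bool  # Second Law: would violate an order given by a human

def permitted(a: Action) -> bool:
    # Laws are checked in strict priority order: obedience (Second Law)
    # and self-preservation (Third Law, not modeled in this sketch)
    # can never override human safety.
    if a.injures_human or a.allows_harm:  # First Law is absolute
        return False
    if a.disobeys_human:                  # Second Law yields only to the First
        return False
    return True

# A combat machine's core task fails the very first check, which is
# why a military designer would omit these laws rather than code them in.
strike = Action("fire on enemy soldier", injures_human=True,
                allows_harm=False, disobeys_human=False)
rescue = Action("pull a bystander back from a cliff edge", injures_human=False,
                allows_harm=False, disobeys_human=False)
print(permitted(strike))  # False
print(permitted(rescue))  # True
```

Even this toy version shows the conflict coming: a weapon's core function fails the very first check.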
But recent research by our military has convinced me that this will not happen. Military machines will be programmed to kill human enemies on purpose, so the Laws of Robotics would be anathema to them. Intelligent machines are being designed as you read this that will operate on a battlefield to kill an enemy, with no laws of robotics to protect innocents.
Intelligent machines will communicate with one another in the future (some computers already do), and it will be perilous if military machines, able to pass their code to others through the Internet, encourage those machines to kill people.
Without Asimov’s laws coded into all artificial intelligence, mankind will be at risk. One can imagine an intelligent machine concluding that humans are illogical and should be eliminated in the best interest of an ordered society, passing this conclusion on to other robots through the Internet, and ordering military robots to finish off inefficient Homo sapiens.
Homo Genius
Even as we delve deeper into artificial intelligence, scientists are simultaneously at work on genetic engineering. Many in this field have the best of intentions, including the making of a better model human: one, for example, engineered so that inherited diseases are eliminated and can no longer be passed on.
One potential problem is that there may be pressure to improve the current species in other ways, such as increasing intelligence, before we know what we are doing.
Genetic engineering is a young science, and it begins with a human organism millions of years in the making. Humans have changed gradually over millennia to adapt to Earth’s environment, and our DNA is far more complex than we understand today. Because a single gene often influences many traits, eliminating an undesirable one may alter other traits unbeknownst to us.
Altering intelligence may, for example, alter conscience. We could create a superintelligent human without a sense of morality, one that would look at us much as we look at our pets today, but that might see a better world as one in which we were eliminated, since we use up the limited resources needed to support the new Homo Genius.
We simply don’t know enough to begin playing god in a quest to create a better human. I believe we should pursue research in this field and learn all we can, but move with caution toward ever altering the genome itself. So far, the world’s scientists largely agree, and for the most part they are restricting such research through international agreements.
We need international treaties requiring that war machines employing artificial intelligence follow Asimov’s laws. We should likewise keep genetic engineering at the research level and agree not to alter the human genome until we know far more about it than we do today.
We don’t know whether some rogue government is already experimenting with genetic engineering in secret.
We should be intelligent enough to keep from destroying ourselves. But I’m not so sure, given that our species has built thousands of nuclear weapons beyond what it would take to destroy civilization as we know it, and has polluted the planet to a point at which we may already have sealed our doom.
Jack Balkwill has been published everywhere from the little-read Rectangle, the magazine of the English Honor Society, to USA Today with its (then) millions of readers, as well as in many progressive publications and websites such as Z Magazine, In These Times, Counterpunch, This Can’t Be Happening, Intrepid Report, and Dissident Voice. He is the author of “An Attack on the National Security State,” about peace activists in prison.