Artificial intelligence is no big deal — until it is

There is a lot of hype surrounding artificial intelligence, also known as A.I.  That hype has been fed by old movies such as Colossus: The Forbin Project and by less fictional but still spectacular predictions of a technological singularity.

The Forbin Project is science fiction, and like the best science fiction, it contains a kernel of truth that makes it worth considering as a precaution: it speculates that computer networks may become cruel masters of humanity.  The technological singularity is a theory, not a wild guess, predicting that computer intelligence may suddenly (and the key word is suddenly) spring out of human control.  Taking the two together, one can understand why so much nervous attention is being paid to A.I.

We have always had sudden, dramatic advances in technology that changed the world.  At the beginning of written history is the invention of writing itself, without which we could not have the complex civilization we enjoy today.  Even so, there were ancient wise men who decried that invention as one that would weaken the human mind.  More recently, nuclear power altered the course of history: the genuinely revolutionary atomic bombs are credited with ending World War Two, yet they also threaten humanity with extinction.  Add to these such innovations as the wheel, the control of fire, and electric power, among many, many others.

This is different.

According to the Forbes article linked above, "the scale, scope and complexity of the impact of intelligence evolution in machines is unlike anything humankind has experienced before. ... [There is] no historical precedent and [it] is fundamentally disrupting everything in the human ecosystem."  

That may sound overly dramatic, but Forbes articles are known more for sober analysis than for hyperbole.  

That said, we need to tap the brakes in our examination of A.I.  I am not the first to point out that computers are not genuinely intelligent.  They mimic intelligence, and they may even give the outward appearance of being alive and conscious.  They are none of that.  They are "stuff," wires and diodes, etc., made by people.  In that sense, they are like any other tool.  Yes, they are complex, too complex for most of us to understand, and getting more complex all the time.  Even so, the real threat comes not from computers and A.I., but from the people who build, program, and use them.

A.I. is made by biased people who program their biases, political and social alike, right into those wires and diodes, into the lines of code, into the algorithms that make A.I. a useful tool — but a tool as subject to abuse as guns are.

Perhaps the first red flag concerning the abuse of A.I. came with the production of self-driving military vehicles.  Those vehicles, which use computers to navigate difficult terrain, can be fitted with weapons: machine guns and the like.  Just as the vehicle's traffic decisions are made by computers, so can its targeting decisions be.  A mindless bundle of wires and diodes can select a target, such as a human, and kill it.  If the final decision of whether to shoot is made by a human, then we, not the machine, still have control.  However, as with all things in life, nothing stays so simple for long.  It seems that some decisions cannot always be left to humans.

For example, aircraft such as the B-2 stealth bomber have computers that constantly make critical aeronautic corrections to control surfaces at a rate of twenty or more per second.  No human could ever react that quickly.  Likewise, decisions of whether or not to shoot, in cases where collateral damage is highly possible, might require instantaneous decision-making.  Tragic "friendly fire" fatalities are not as rare as we would like them to be.  In a hostage situation, for example, the decision not to shoot can be as fatal to the hostage as an erroneous decision to shoot.

Therefore, in at least some cases, leaving decisions entirely to computers is unavoidable.

We have already crossed the technological Rubicon.  There is no turning back.  We are stuck with a flight path taking us into the fog of an unmapped future, where "here be dragons."  

It behooves us, then, to recognize A.I. as the useful tool it is — to use it, but not depend on it for making the moral decisions that only we, not it, can make.  Instead of the artificial variety, we must use our natural, God-given intelligence.
