Machine learning

Trained by our machines

Our chief consultant realizes how effective our machines have been at training us to perform tasks efficiently and exactly.

Our chief consultant writes:

One of our staff has recently begun to learn a new computer language.  He has learned others in the course of his career, each for a specific project; this time he may wind up teaching it also.  As often occurs, the first attempt at running a sample program did not work.  The problem proved to be a semicolon out of place.
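
A small illustration, in C rather than whatever language he was learning: a single stray semicolon after a loop header quietly becomes the entire loop body, so the program runs but prints one line instead of three.

    #include <stdio.h>

    int main(void) {
        int i;
        for (i = 0; i < 3; i++);    /* the stray semicolon is the whole loop body */
            printf("pass %d\n", i); /* runs once, after the loop, printing "pass 3" */
        return 0;
    }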

Well, he had experienced the like before; it was quickly fixed.  But it reminded us of how unforgiving computer programs are.  To work at all, much less deliver the results one intends, the programmer must be exceedingly careful and knowledgeable.  In effect, the machine trains the programmer, and the training must be exact and thorough.  In the pre-digital age, when a calculation was given to a "computer" it was given to a person, and a person could query any apparent mistake in the directions and supply assumptions or information that had been left out.

This kind of machine training did not begin with computers.  When personal transport depended on the horse, there was another mind in the loop.  The rider could make mistakes and still get where he needed to go.  Indeed, if the horse was familiar with the route, the rider needed only to stay on.  Cars need much more active handling, and mistakes can be fatal.

So while our machines have immeasurably increased the things we can accomplish, we have to be more thoroughly and carefully trained to use them.  They haven’t the context and background to correctly interpret a misspelled sign, for example, as we’ve noted.

But it seems that may be changing.  To use a computer in the old days, one had to type in commands in an exact format.  Nowadays one clicks on a button, drags a mouse, or even speaks to a digital assistant.  (These are still very restricted actions compared with how we deal with other people, though.)  “Machine learning” now means the computer assimilating your own habits and desires, in order to anticipate them.  And several groups are working on driverless cars, where people need only be passengers; almost as good an invention as the horse.

For most people, then, these tasks have become less exacting.  But that only shifts the burden to others: the people who write the learning code, and the programmers who try to anticipate situations on the road.  If the machine-learning program doesn’t work, that may only mean inconvenience for some users; but it might make a device (or several) useless.  Or it can open the user to many kinds of cyber-attack.  A problem with a driverless-car program could result in not just one crash, but many.

As our machines become more capable and more powerful, someone has to be more careful and thorough.  The process of being trained by our creations proceeds.
