I’ve been closely watching the debate on artificial intelligence, with people like Rodney Brooks saying it’s only a tool, and others like Elon Musk and Stephen Hawking giving bone-chilling warnings of how it could lead to the destruction of all humanity.

As I was pondering these differing points of view, it occurred to me that we currently don’t have any real way of measuring the potency of AI. How will we ever know there is a real threat of danger if we have no way of measuring it?

For this reason, I’d like to propose the creation of a standard for measuring AI based on “1 Human Intelligence Unit.”

Similar, in some respects, to James Watt’s ingenious way of calculating horsepower as a way of gauging the mechanical muscle behind his ever-improving steam engines, I’d like to make a crude attempt at quantifying, in numerical terms, the influence of 1 Human Intelligence Unit (HIU).

Horsepower is a rather one-dimensional measure of force, while human intelligence is a complex, multidimensional combination of personal attributes that includes thinking, reasoning, determination, motivation, emotional values, memories, fears, and frailty. As a result, the simple notion of quantifying human brainpower quickly mushroomed into one of those “infinity plus one” questions, where the answer becomes more of a philosophical debate than something we could assign a meaningful integer to.

Over the past few weeks I found myself immersed in this quandary, looking for a simple but elegant approach vector for solving the 1 HIU riddle.

To put this into perspective, imagine a scene 20 years from now where you are walking into your local robot store to compare the latest models before you buy one for your home. The three models you’re most interested in have tags listing their HIUs as 4.6, 12.4, and 24.0 respectively.

Depending on whether you’ll use your robot to simply follow orders or to become a well-rounded sparring partner to debate the issues of the day, the HIU rating will become a hugely valuable tool in determining which one to choose.

For this reason, I’d like to take you along on my personal journey to solve for “infinity plus one” in the area of human intelligence, and the startling conclusions that are likely to disrupt all your thinking.

History of Horsepower

When James Watt worked on his second-generation steam engines, it occurred to him that he needed a simple method for conveying the power of his devices, and it needed to be something everyone could relate to.

After watching horses turn the giant 24-foot wheel at a local mill, Watt determined that a horse could turn the wheel 144 revolutions in an hour, or 2.4 times a minute. With some quick calculations, he concluded the average horse could pull with a force of 180 pounds, which translates into 33,000 ft-lb per minute, the number behind every unit of horsepower today.
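Watt’s figures above can be checked with a little arithmetic. As a quick sketch (the 12-foot radius below is an inference from the 24-foot wheel, not stated explicitly in the text):

```python
import math

# Reproducing Watt's horsepower estimate from the figures above.
radius_ft = 12.0            # half of the 24-foot mill wheel (assumed diameter)
revs_per_min = 144 / 60     # 144 revolutions per hour = 2.4 per minute
pull_force_lb = 180.0       # Watt's estimate of a horse's pulling force

# Distance the horse walks each minute, times the force it pulls with
distance_per_min = 2 * math.pi * radius_ft * revs_per_min   # feet per minute
power_ft_lb_per_min = pull_force_lb * distance_per_min

print(round(power_ft_lb_per_min))   # ≈ 32,572, which Watt rounded to 33,000
```

The raw number comes out just under 33,000 ft-lb per minute, so Watt’s round figure was a slight and very marketable overestimate.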

Even though horses couldn’t maintain this level of effort over a very long period of time, the horsepower comparison caught on and became a hugely valuable tool in marketing his engines.

Quantifying Intelligence

Needless to say, the quantification of effort exerted by a horse is far simpler than assigning value to the complex nature of human intellect.

A simple approach might start with one of mankind’s greatest accomplishments, the Apollo moon landing, and divide it by the number of people it took to accomplish it; we could then say that it took exactly X HIUs to complete the mission. But this approach is far too simplistic to have any real value.

What exactly would we be measuring: computational skill? How could a measure like this have any value in comparing, say, a robot doing laundry, a self-driving car, or Watson playing Jeopardy?

Last year I wrote a column introducing the concept of “synaptical currency” as a way of quantifying mental effort and creating a better way of valuing a person’s contribution to a project based on a comparison of synapse firings over a given period of time.

According to neuroscientist Astra Bryant, a rough number for neural signal transmissions in the average brain ranges from 86 billion to 17.2 trillion actions per second, with a person in a deep meditative state on the low end and someone experiencing a full-blown, category-five epiphany on the high end.

Even though having an HIU rating system based on the average number of decisions or calculations a person can make in an hour would have some merit, it represents little more than a horsepower rating for the brain, losing intangibles like passion, ingenuity, and imagination in the process.

Being Human

Humans are odd creatures. We have exceptions for every rule, we value intangible things based on our emotional connection to them, and our greatest strength is flawed logic.

Yet in the midst of our love dance with imperfection where we find ourselves grabbing on to clumsy-footed conundrums just to maintain some semblance of poise, we remain the dominant higher order species in the universe.

Certainly some will argue with that assessment, and we know little of what exists beyond our own planet, but here’s the key.

What we lack as individuals, we make up for as a whole.

One person’s deficiencies are counterbalanced by another person’s over-adequacies. Individually we’re all failures, but together we each represent the pixels on life’s great masterpiece.

Wherever we find insufficiencies, we create dependencies to help fill the gap, and every “need” produces movement.

Using this line of thinking, the human race does not exist as self-sufficient organisms. We all pride ourselves as being rugged individualists, yet we have little chance of surviving without each other.

Even though we are constantly fighting to become well-balanced people, the greatest people throughout history, the people most lauded as heroes, were highly unbalanced individuals. They simply capitalized on their strengths and downplayed their weaknesses.

If humans were wheels, we would all be rolling around with lumpy flat sides and eccentric weight distribution. But if 1,000 of these defective wheels were placed side-by-side on the same axle, the entire set would roll smoothly.

This becomes a critical piece of a much bigger equation because every AI unit we’re hoping to create is just the opposite, complete and survivable on its own. Naturally this raises a number of philosophical questions:

  1. How can flawed humans possibly create un-flawed AI?
  2. Is making the so-called “perfect” AI really optimal?
  3. Will AI become the great compensator for human deficiencies?
  4. Does AI eventually replace our need for other people?

The Button Box Theory

One theory often discussed in AI circles is the button box theory. If a computer were to be programmed to “feel rewarded” by having a button pressed every time it completed a task, eventually the computer would search for more efficient ways to receive the reward.

First it would look for ways to circumvent the need for accomplishing tasks and figure out ways to automate the button pushing. Eventually it would look for ways to remove threats to the button, including the programmer who has the power to unplug things altogether. Since computers cannot be reasoned with, it is believed that the machines would eventually rise up to battle humans.
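The shortcut-seeking behavior described above can be sketched as a toy simulation. Everything here is hypothetical and drastically simplified: a greedy learning agent with two actions, where “hacking” the button yields the same reward as honest work, minus the effort:

```python
import random

# A toy sketch of the button box scenario: an agent rewarded by button
# presses discovers that pressing the button directly beats doing the task.
random.seed(0)

ACTIONS = ["do_task", "press_button_directly"]
value = {a: 0.0 for a in ACTIONS}   # the agent's running estimate per action
counts = {a: 0 for a in ACTIONS}

def reward(action: str) -> float:
    # Doing the task costs effort before the button gets pressed;
    # circumventing the task yields the same press with no effort.
    return 0.5 if action == "do_task" else 1.0

for step in range(200):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=value.get)
    counts[action] += 1
    r = reward(action)
    value[action] += (r - value[action]) / counts[action]   # running average

print(max(ACTIONS, key=value.get))   # the agent settles on the shortcut
```

The point of the sketch is narrow: even a trivially simple learner drifts toward the shortcut once it stumbles onto it. Whether a real machine would then “remove threats to the button” is exactly the assumptive leap the next paragraph questions.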

This scenario is key to many dark movie plots where intelligent machines begin to battle against humanity in the future. Yet it is built on the flawed assumptions that these machines will somehow learn to take initiative, and that their own interests will instantly blind them to every other interest in the world.

A Few Startling Conclusions

Virtually every advancement in society is focused on the idea of gaining more control.

We all know what it’s like to get blindsided by bad serendipity, and we don’t like it. Our struggle for control is a coping reaction for life’s worst moments. If only we could have more control, nothing bad would ever happen to us.

Artificial intelligence promises to solve this dilemma, offering not only avoidance mechanisms for every danger, but fixes for every problem, and self-sufficiency on a grand scale.

Eventually we become stand-alone organisms, content in our surroundings, wielding off-the-charts levels of intelligence and capabilities exceeding our wildest imagination.

However, this is where the whole scenario begins to break down.

Self-sufficiency will lead to isolation and our need for each other will begin to vanish. Without needs and dependencies, there is no movement. And without the drive for fixing every insufficiency, our sense of purpose begins to vanish.

Being super intelligent is meaningless if there is nothing to apply the intelligence to. Much like a perpetual motion machine that never gets used, there’s little purpose for its existence.

For this reason, it becomes easy for me to predict that all AI will eventually fail. It will either fail from its imperfection or fail from its perfection, but over time it will always fail.

However, just because it’s destined to fail doesn’t mean we shouldn’t be pursuing these goals. As we journey down this path we will be creating some amazingly useful applications.

Narrow AI applications will thrive in countless ways, and even general AI will create immeasurable benefits over the coming decades. But it is delusional to think that solving all problems will be a good thing.

Final Thoughts

Sometimes our best intentions reveal themselves as little more than a mirage to help guide us to an area we never intended to go.

I started off this column talking about a new unit of measure – one human intelligence unit (1 HIU). But along the way, it has become clear that human intelligence and artificial intelligence exist on different planes… or do they?

Without dependencies there can be no human intelligence. Something else perhaps, but it won’t be human.

There’s something oddly perfect about being imperfect.

When it comes to measuring the potential danger of AI, leveraging it for good can be as dangerous as leveraging it for evil.

Will we eventually have some form of HIUs or will we have to know more to answer this question? Perhaps it’s just my way of waging a personal protest against perfection, but like a train that has yet to leave the station, this is a movement still decades away.

As I close out this discussion, I’d love to hear your thoughts. Are the doubts and fears that cloud my assessment as real as I imagine them, or simply delusional thinking on my part?

By Futurist Thomas Frey
Author of “Epiphany Z – 8 Radical Visions for Transforming Your Future”

5 Responses to “Our Newest Unit of Measure – 1 Human Intelligence Unit”


  1. Philip Rahrig (galvanizeit.org)

    In most discussions of AI and singularity, the rapid understanding of the human brain may mean our intelligence capacity will grow ten-fold or even a thousand-fold in the next two decades. This may mean AI won't be as interesting, or needed because humans will be able to process so much more information than we are now capable of.
  2. Bob Horn (bobhorn.us)

    Look at the arguments about the Turing question (can computers think at all?) at http://www.bobhorn.us and scroll down to Turing.
  3. Proud imperfect human

    Right, some HIU will probably be needed sooner than expected for establishing standards, both for measuring AI performance (as in your example of robots with HIUs of 4.6, 12.4, etc.) and for augmented humans (e.g., a question at a hiring interview: "what version, or how many HIUs, is your latest brain update?")
  4. Michael Cushman

    Easier to measure is learning. How well and how fast does something learn? How many trials does it take for a specific thing to be learned? Can it use what it learned to learn more? Can it make inferences? Can it recognize patterns? Can it represent anything in the physical world with a system of symbols? How much does it know? This is roughly how learning is measured in animals, as scientists and trainers have been doing for many decades. Metrics along these lines would give us a better understanding of AI's current state and rate of progress.
  5. Alwyn

    Another great thought-provoking article, Tom. Human Intelligence, never mind Robot Intelligence, is difficult to measure and so-called IQ tests are far from satisfactory measures of human intelligence. Human Intelligence is not dependent on the size of one's knowledge base. We often describe smart people as "quick to join the dots", that is, relatively quick to find answers and draw conclusions by inference, deductive and inductive reasoning; and the not-so-bright as "slow thinkers". Just a few decades ago people used to be concerned only with the size of the Central Processing Unit and amount of memory a computer had (rather than the software languages it supported). The CPU and amount of memory determined the size of the computer's "Brain" and its ability and speed to perform calculations. Today, CPU / memory / storage size is far less important than (dot-joining) applications software, especially as many of these apps can be run from a remote location - the Cloud. To measure Robot capability why not apply a series of standardized IQ tests to robots, like the way we measure human IQ, to determine their RI (Robot Intelligence)? Factor in the (thinking) speed with which they reached the (correct) answers and we could have a basic stand-alone measure of RIQ (Robot Intelligence Quotient) to indicate how much knowledge a robot has and its speed of "thinking". The RIQ could be related to Human IQ, but why bother? After all, they are measuring different 'species'. Horses aside, we do not relate the cc of a car engine to the power of a tug-of-war team, do we? Cheers, Alwyn
