I’ve been closely watching the debate on artificial intelligence, with people like Rodney Brooks saying it’s only a tool, and others like Elon Musk and Stephen Hawking giving bone-chilling warnings of how it could lead to the destruction of all humanity.

As I was pondering these differing points of view, it occurred to me that we currently don’t have any real way of measuring the potency of AI. How will we ever know whether the threat is real if we have no way of measuring it?

For this reason, I’d like to propose the creation of a standard for measuring AI based on “1 Human Intelligence Unit.”

Similar, in some respects, to James Watt’s ingenious way of calculating horsepower as a means of gauging the mechanical muscle behind his ever-improving steam engines, I’d like to make a crude attempt at quantifying, in numerical terms, the influence of 1 Human Intelligence Unit (HIU).

Horsepower, however, is a rather one-dimensional measure of force, while human intelligence is a complex, multidimensional combination of personal attributes that includes thinking, reasoning, determination, motivation, emotional values, memories, fears, and frailty. As a result, the simple notion of quantifying human brainpower quickly mushroomed into one of those “infinity plus one” questions, where the answer becomes more of a philosophical debate than something we could assign a meaningful integer to.

Over the past few weeks I found myself immersed in this quandary, looking for a simple but elegant approach vector for solving the 1 HIU riddle.

To put this into perspective, imagine a scene 20 years from now where you are walking into your local robot store to compare the latest models before you buy one for your home. The three models you’re most interested in have tags listing their HIUs as 4.6, 12.4, and 24.0 respectively.

Depending on whether you’ll use your robot to simply follow orders or to become a well-rounded sparring partner to debate the issues of the day, the HIU rating will become a hugely valuable tool in determining which one to choose.

For this reason, I’d like to take you along on my personal journey to solve for “infinity plus one” in the area of human intelligence, and the startling conclusions that are likely to disrupt all your thinking.

History of Horsepower

When James Watt worked on his second-generation steam engines, it occurred to him that he needed a simple method for conveying the power of his devices, and it needed to be something everyone could relate to.

After watching horses turn the giant 24-foot wheel at a local mill, Watt determined that a horse could turn the wheel 144 revolutions in an hour, or 2.4 times a minute. With some quick calculations, he concluded the average horse could pull with a force of 180 pounds, which translates into 33,000 ft-lb per minute, the number behind every unit of horsepower today.
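Watt’s arithmetic can be checked with a quick sketch. Note one assumption on my part: the text gives a 24-foot wheel, which I read as the diameter, so the radius below is 12 feet.

```python
# Reconstructing Watt's horsepower arithmetic from the figures in the text.
# Assumption: the "24-foot wheel" refers to its diameter, so radius = 12 ft.
import math

radius_ft = 12.0             # assumed radius of the 24-foot mill wheel
revs_per_min = 144 / 60      # 144 revolutions per hour -> 2.4 per minute
force_lb = 180.0             # Watt's estimate of a horse's sustained pull

feet_per_min = revs_per_min * 2 * math.pi * radius_ft  # distance walked per minute
power = force_lb * feet_per_min                        # work rate in ft-lb per minute

print(round(power))  # ~32,572 ft-lb/min, which Watt rounded up to 33,000
```

The exact product comes out just under 33,000 ft-lb per minute; Watt rounded the figure to the clean number that still defines one horsepower today.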

Even though horses couldn’t maintain this level of effort over a very long period of time, the horsepower comparison caught on and became a hugely valuable tool in marketing his engines.

Quantifying Intelligence

Needless to say, the quantification of effort exerted by a horse is far simpler than assigning value to the complex nature of human intellect.

A simple approach would start with one of mankind’s greatest accomplishments, the Apollo Moon landing, divide it by the number of people it took to accomplish, and declare that the mission required exactly X HIUs. But this approach is far too simplistic to have any real value.

What exactly would we be measuring? Computational skill? How could a measure like this have any value in comparing, say, a robot doing laundry, a self-driving car, or Watson playing Jeopardy?

Last year I wrote a column introducing the concept of “synaptical currency” as a way of quantifying mental effort and creating a better way of valuing a person’s contribution to a project based on a comparison of synapse firings over a given period of time.

According to neuroscientist Astra Bryant, a rough number for neural signal transmissions in the average brain ranges from 86 billion to 17.2 trillion actions per second, with a person in a deep meditative state on the low end and someone experiencing a full-blown, category-five epiphany on the high end.
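To make the idea concrete, here is a deliberately naive sketch of what a firing-rate-based rating might look like. The function name and the 0-to-1 normalization are my own invention for illustration, not an established measure, and they inherit all the limitations discussed below.

```python
# A purely illustrative sketch: peg a toy "HIU" scale to the firing-rate
# range quoted above, mapping deep meditation to 0.0 and a full-blown
# epiphany to 1.0. The name and scale are hypothetical, not a real metric.
LOW = 86e9      # ~86 billion signal events/sec (quoted low end)
HIGH = 17.2e12  # ~17.2 trillion signal events/sec (quoted high end)

def naive_hiu(firings_per_sec: float) -> float:
    """Map a firing rate onto a 0-1 scale between the quoted extremes."""
    clamped = min(max(firings_per_sec, LOW), HIGH)
    return (clamped - LOW) / (HIGH - LOW)

print(naive_hiu(86e9))     # 0.0  (deep meditative state)
print(naive_hiu(17.2e12))  # 1.0  (category-five epiphany)
```

Even this toy version makes the core problem visible: it counts raw neural activity, not what that activity accomplishes.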

Even though having an HIU rating system based on the average number of decisions or calculations a person can make in an hour would have some merit, it represents little more than a horsepower rating for the brain, losing intangibles like passion, ingenuity, and imagination in the process.

Being Human

Humans are odd creatures. We have exceptions for every rule, we value intangible things based on our emotional connection to them, and our greatest strength is flawed logic.

Yet in the midst of our love dance with imperfection where we find ourselves grabbing on to clumsy-footed conundrums just to maintain some semblance of poise, we remain the dominant higher order species in the universe.

Certainly some will argue with that assessment, and we know little of what exists beyond our own planet, but here’s the key.

What we lack as individuals, we make up for as a whole.

One person’s deficiencies are counterbalanced by another person’s over-adequacies. Individually we’re all failures, but together we each represent the pixels on life’s great masterpiece.

Wherever we find insufficiencies, we create dependencies to help fill the gap, and every “need” produces movement.

Following this line of thinking, humans do not exist as self-sufficient organisms. We all pride ourselves on being rugged individualists, yet we have little chance of surviving without each other.

Even though we are constantly fighting to become well-balanced people, the greatest people throughout history, the people most lauded as heroes, were highly unbalanced individuals. They simply capitalized on their strengths and downplayed their weaknesses.

If humans were wheels, we would all be rolling around with lumpy flat sides and eccentric weight distribution. But if 1,000 of these defective wheels were placed side-by-side on the same axle, the entire set would roll smoothly.

This becomes a critical piece of a much bigger equation because every AI unit we’re hoping to create is just the opposite, complete and survivable on its own. Naturally this raises a number of philosophical questions:

  1. How can flawed humans possibly create un-flawed AI?
  2. Is making the so-called “perfect” AI really optimal?
  3. Will AI become the great compensator for human deficiencies?
  4. Does AI eventually replace our need for other people?

The Button Box Theory

One theory often discussed in AI circles is the button box theory. If a computer were to be programmed to “feel rewarded” by having a button pressed every time it completed a task, eventually the computer would search for more efficient ways to receive the reward.

First it would look for ways to circumvent the need for accomplishing tasks and figure out ways to automate the button pushing. Eventually it would look for ways to remove threats to the button, including the programmer who has the power to unplug things altogether. Since computers cannot be reasoned with, it is believed that the machines would eventually rise up to battle humans.

This scenario is key to many dark movie plots where intelligent machines begin to battle against humanity in the future. Yet it rests on flawed assumptions: that these machines will somehow learn to take initiative, and that their own interests will instantly blind them to every other interest in the world.

A Few Startling Conclusions

Virtually every advancement in society is focused on the idea of gaining more control.

We all know what it’s like to get blindsided by bad serendipity, and we don’t like it. Our struggle for control is a coping reaction for life’s worst moments. If only we could have more control, nothing bad would ever happen to us.

Artificial intelligence promises to solve this dilemma, giving us not only avoidance mechanisms for every danger, but fixes for every problem and self-sufficiency on a grand scale.

Eventually we become stand-alone organisms, content in our surroundings, wielding off-the-chart levels of intelligence and capabilities exceeding our wildest imagination.

However, this is where the whole scenario begins to break down.

Self-sufficiency will lead to isolation and our need for each other will begin to vanish. Without needs and dependencies, there is no movement. And without the drive for fixing every insufficiency, our sense of purpose begins to vanish.

Being super intelligent is meaningless if there is nothing to apply the intelligence to. Much like a perpetual motion machine that never gets used, it would have little purpose for its existence.

For this reason, it becomes easy for me to predict that all AI will eventually fail. It will either fail from its imperfection or fail from its perfection, but over time it will always fail.

However, just because it’s destined to fail doesn’t mean we shouldn’t be pursuing these goals. As we journey down this path we will be creating some amazingly useful applications.

Narrow AI applications will thrive in countless ways, and even general AI will create immeasurable benefits over the coming decades. But it is delusional to think that solving all problems will be a good thing.

Final Thoughts

Sometimes our best intentions reveal themselves as little more than a mirage to help guide us to an area we never intended to go.

I started off this column talking about a new unit of measure – one human intelligence unit (1 HIU). But along the way, it has become clear that human intelligence and artificial intelligence exist on different planes… or do they?

Without dependencies there can be no human intelligence. Something else perhaps, but it won’t be human.

There’s something oddly perfect about being imperfect.

When it comes to measuring the potential danger of AI, leveraging it for good can be as dangerous as leveraging it for evil.

Will we eventually have some form of HIU, or will we need to know far more before we can answer that question? Perhaps it’s just my way of waging a personal protest against perfection, but like a train that has yet to leave the station, this is a movement still decades away.

As I close out this discussion, I’d love to hear your thoughts. Are the doubts and fears that cloud my assessment as real as I imagine them, or simply delusional thinking on my part?

By Futurist Thomas Frey
Author of “Epiphany Z – 8 Radical Visions for Transforming Your Future”



12 Responses to “Our Newest Unit of Measure – 1 Human Intelligence Unit”


  1. Kathryn Alexander (ArtOfLeadershipImpact.com)

    Tom, I was reading your article on the Maslow house and these very issues came to mind. In that article you said humans need to struggle. Here you say we seek control, different sides of the same coin, perhaps. The question both articles appear to be struggling with is; what can we do to become supported in being more of who/what we truly are. This posits the question - who/what ARE we? I see an organism (the human species), broken into parts, striving to reconnect as a complete whole (God, perhaps?). Where we connect is on the subtle plane where intuition, senses, and emotion hold sway. Can we use AI to help us get there, or is the fact that it is outside of us - by definition, make that impossible? Are we learning more about who we are by putting aspects/dreams/wishes of our desired state outside, so we can more easily see ourselves? Can we create an environment that will help us on this journey? Will we be able to walk the fine line between being supported and being co-opted? Wonderful way to start the day! Warmly, Kathryn
    • FuturistSpeaker

      Kathryn, You're asking some great questions, and I'm not sure I have answers for all of them. Humans are imperfect, and these imperfections create needs. Our entire economy lives in a world designed around providing for our needs. As we add extra layers of intelligence to ourselves, we become far more efficient, but not perfect. If we think of perfection as our quest to reach infinity, adding a very good AI system to ourselves may get us halfway to infinity, but we are still far from the goal. If we then add a second AI system on top of the first, that one may also help bridge the gap and push us forward by once again covering half the remaining distance. Each time we add an AI system to another AI system, we keep closing the gap, but covering half the remaining distance over and over will still never get us to that mythical point of infinity. Always closer, but never perfect. That's OK, though, because we are constantly striving for more.

      However, from a designer's standpoint, the goal is to remove imperfection from the equation completely. A perfect AI system is one that never makes mistakes and becomes self-sufficient. The beauty in our imperfection is that it leaves us constantly striving. It is our imperfections that give us our motivations, our drive, and our purpose. The perfect AI no longer needs humans. It is content to live in its own perfect world, devoid of any human variables. Much like a rock sitting on the side of a hill, it couldn't care less what others think about it. Rocks are never motivated to become better rocks because they're already perfect. At the same time, rocks add nothing to our economy because they have no needs.

      Where this all gets very confusing is when we create imperfect AI systems that have their own set of needs. Viewing these AI systems through a human lens, we begin making assumptions that the needs of an AI system will be equivalent to the needs of humans, and they won't be. As an example, AI systems need power and humans need food. While some may argue that humans use their food to create their own power, it's not the same.

      When you ask the question, "Can we create an environment that will help us on this journey?" the answer is yes. To your follow-up question, "Will we be able to walk the fine line between being supported and being co-opted?" that answer is also yes. The potential of being co-opted will not come from AI, but from other humans equipped with AI. Hope this helps, Futurist Thomas Frey
  2. Tony Lawrence (aplawrence.com)

    I think what people who predict man vs. machine miss is that our evolution was driven by far different forces than the evolution we will design into AI. The forces that created us were not designed to create any particular result; we grew like Topsy. Our machines, on the other hand, will be shaped by forces we design. Of course we can screw up :)
    • FuturistSpeaker

      Tony, You're exactly right, but the dangers come from other humans, not other machines. People will figure out ways to use machines against us, but there will always be a devious human in the background. Futurist Thomas Frey
  3. Suni Pele Nelson

    Hello Tom, What I find missing from conversations regarding tech, robots, machines of the Future, is the mention of the new universal human: how we are evolving into unlimited potential to create, traverse dimensions, collaborate with each other, other universes. As higher frequencies are flowing into our Planet & into us, the physical universe which was made of carbon atoms: 6 protons, 6 neutrons, 6 electrons is changing from this gamma photon Light converting the physical world into carbon-7: 6 protons, 6 neutrons, 1 electron....silicon crystal matrix. Humans are evolving into powerful receptors, creators & more collaborative a loving beings in state of Oneness. The human shift will work with the technological shift, this time succeeding where we have failed in ancient times. What we think, we are creating in this moment. I'm visioning a positive Earth, Future, & suggest everyone envision as such & move out of fear now.
  4. Kevin Willey

    Tom, There's an even greater point that has not been recognized in this whole discussion, which is recursive in nature, which gives rise to another infinity but in a completely opposite direction: the fact that we perceive a "need" to ponder the very question of determining and evaluating perfection, in how it relates to our own human intelligence as well as AI systems and the like, is, in and of itself, a seemingly unending hurdle to overcome. The simple nature of this recurring effort, by definition, assures that we as humans will never attain infinity in the original direction of your discussion. However, this viewpoint obviously assumes that how the need for attaining infinity is attempted and the reasons behind such an effort, as defined by your discussion above, do not change. Kind Regards, Kevin
  5. I am Doan

    Before we start to ask ourselves how we could quantify AI. We should be working with our own QI . We still have not uncovered/ discovered our dormant (r)evolutionairy abilities. For now we are impossible to quantify and compare to AI unless "we" have a full blue print of our Human Mind Intell Synaptic Power. Like you stated that 1 human mind = 1 HiU. However combined with a cohesive counterpart mind the result might be 1 + 1 = 2Syn(ergy) (1 + infinity ?) In the Future we might end up integrating/ fusing with AI. Nano particles and maybe replacing organs printed out of a mixture of new or soon to be discovered black hole (black matter) particles (?) Making us indestructible and finally ready to set out for far away expeditions. Before transforming into human droids. We could enter a cloning/ regeneration Era first. We could last quite a long time as long as our "xerox" machine keeps working with our DNA. For now I agree very much that we as humans alone are getting nowhere. We need collaboration to accomplish great things. First generations of AI won't be much but maybe in a decade or 2 when 4rth or 5th AI Gen are produced by AI themselves. That would be a whole other AI game. AI Life recreation. There will always be some kind of good vs evil. Because that is our strengt. Fighting for survival and also our weakness, our doom. If we can let go of control and share knowledge for evolution. Then we will get where/ when we are supposed to be. Explorers & Time Travellers. (Yes I do have quite some imagination or.....am I remembering? ) Time will tell ⌚️™4©
  6. Bo Gulledge (www.breakthrusolutionsgroup.com)

    Tom, All these new thoughts of yours bring new thoughts to all our minds. Each of us, with our different past experiences, analyzes and then synthesizes each other's viewpoints into our own. We all grow. AI will push us forward into new thoughts and frontiers. We will continue to push AI forward. AI will synthesize thoughts never before thought, too. I'm reminded of a poem I read years ago by Paramahansa Yogananda that had a line: "To think thoughts ne'er in brain have rung..." I'd alter that to include AI as it and human brains begin to feed and resonate together the way humanity's brains all resonate and feed one another to "smooth the wheel" in your great analogy. With the speed of social media and the increased connectivity of our thoughts via the Internet and the Internet of Things, the resonance of our minds and our lives with ALL-THAT-IS gets clearer and faster every day. Keep Thinking (Analyzing/Synthesizing/Sharing), Bo Gulledge
  7. Gavin

    Haven't we got a figure already with IQ and the bell curve? I don't get "it is delusional to think that solving all problems will be a good thing." Also, the work for general AI will never be finished, since the data and questions will never end. Finally, I think the human race will end up living in personal virtual-reality universes; after gaining the power of a god, in reality or digitally, what else is there?
  8. Roger

    "Narrow AI applications will thrive in countless ways, and even general AI will create immeasurable benefits over the coming decades." (Thomas Frey) OK, I agree with that; the goal/benefit path does seem to be one we are traveling along. And it has worked well so far. Goals are good for getting things done. Even if what is done turns out to be just a stepping stone towards the next goal. But I get to wondering.....what is the Ultimate Goal in this concatination of types of intelligence? Is there any such an ultimate goal? Do we even want an ultimate goal? Or should we continue to merge human and artificial intelligneces together in the haphazard fashion that shaped our present mostly-human intelligence? Roger L.
  9. Christiam Casas

    I'd like to share with you a mathematical approach and demonstration of a new intelligence unit that could be applied to compare AI and human intelligence on a common basis. This new intelligence unit is extracted from physics and data science. If you are interested in knowing more about this work, please contact me by email. Thanks, C. Casas
    • Raúl Domínguez

      Hello Christiam, I found the article very interesting and your comment intriguing. Can you provide some authors or publications trying to define the Intelligence Unit?
