I installed my Nest Thermostat a little over a year ago. This “learning” machine was billed as being able to study the habits of people and adjust the settings to optimize both temperature and energy usage.

But ever since then I’ve found myself in a constant battle with my thermostat. It’s cooling things down when I need heat, warming things up when I’d rather be cool, and the amount of energy it’s saved is far less than the loss of productivity I’ve experienced from being uncomfortable.

The same has been true of my other “smart” devices.

My washing machine still doesn’t understand the fabrics it’s trying to wash. My smart door lock still doesn’t know who I am. And our home security system does a far better job of keeping the good guys in than keeping the bad guys out.

Much of the “smartness” we’ve added to our lives has been in meager doses, slightly better than before, but not much.

That said, the level of intelligence in our homes, cars, clothes, and offices is about to move quickly up the exponential learning curve as connected devices combine remote processing power with everything around us.

Our orange juice bottles, cans of soup, and boxes of crackers will all have a way of reordering themselves when inventories get low. Toasters will soon be toasting reminders onto the sides of our bread so we won’t forget birthdays and anniversaries.

Biometric coffee makers will know exactly how much caffeine to put into our coffee, and our fireplace will even know what color of flame we’re in the mood for.

If I’m feeling ill, not only will my devices know what’s wrong, they’ll be able to scan my home and give me a quick recipe for a cure.

“Add 2 oz of turpentine from the garage, 3 tablespoons of shoe polish, four capfuls of Listerine, and 2 cough drops to a cup of boiling water, and what floats to the top will fix your problem.”

I refer to this as “MacGyvering medicine.”

Our learning machines will pave the way for a hyper-individualized world where everything around us syncs perfectly with our personal needs and desires. But that’s the point where the train begins to derail, and all our best intentions start to work against us. Here’s why.

Some Background on Machine Learning

Machine learning is an offshoot of the early work done on expert systems, neural networks, and artificial intelligence in the 1980s.

Since then we’ve figured out how to connect devices so one can talk to another, and we’ve added a pervasive Internet that attaches remote processing power to embedded chips. As a result, today’s machine learning has morphed into something far different from anything researchers dreamed of in the 1980s.

With algorithms that can “learn” from past data, modern machine learning uses sophisticated forms of predictive analytics and decision trees to closely simulate the human decision-making process.
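To make “learning from past data” concrete, here is a minimal sketch of the simplest possible decision tree: a one-level “stump” that picks the temperature threshold best separating past thermostat decisions. The data, labels, and threshold rule are all invented for illustration; a real system would use many more features and a deeper model.

```python
# A toy decision stump: "learn" the temperature threshold that best
# reproduces past heat-on/heat-off decisions. All data is made up.

def fit_stump(samples):
    """samples: list of (temperature, label) pairs, label in {"heat", "off"}.
    Returns the threshold minimizing misclassifications when we predict
    "heat" below the threshold and "off" at or above it."""
    best_threshold, best_errors = None, float("inf")
    for threshold in sorted({t for t, _ in samples}):
        errors = sum(
            1 for t, label in samples
            if (label == "heat") != (t < threshold)
        )
        if errors < best_errors:
            best_threshold, best_errors = threshold, errors
    return best_threshold

def predict(threshold, temperature):
    return "heat" if temperature < threshold else "off"

# Hypothetical history of past decisions: (outdoor temp in F, action)
history = [(30, "heat"), (40, "heat"), (55, "heat"),
           (65, "off"), (70, "off"), (80, "off")]

threshold = fit_stump(history)
print(predict(threshold, 50))  # a cold morning -> "heat"
print(predict(threshold, 75))  # a warm afternoon -> "off"
```

Real decision-tree learners apply this same split-selection idea recursively over many features, which is what lets them approximate human-style if/then reasoning.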

As the number of sensors grows and the amount of data increases, the human-machine relationship will become more refined, and our ability to delineate between personal decisions and machine decisions will become an increasingly fine line.

At the same time, machine learning creates a number of quandaries or paradoxes for us to contend with.

Paradox #1 – Optimized humans will become less human 

If every smart device were able to tap into the mood of people it came into contact with, it could easily make decisions for them, and in the process, optimize their performance.

I’ve always been drawn to the idea of walking into a building and having the building recognize me. Parking spaces magically appear; the pathway to where I’m going lights up; music in the air perfectly matches my mood; temperature, humidity, and environmental conditions instantly sync with my body; and impeccably prepared food supernaturally appears whenever I’m the least bit hungry.

This utopian dream of living the easy life certainly has its appeal, but grossly oversimplifies our need for obstacles to overcome, problems to wrestle with, and adversarial challenges for us to tackle.

When life becomes too simple, we become less durable.

Without the need to struggle, we become less resilient. If we were to find ourselves living the soft, cushy life on easy street, every new danger would leave us cowering in fear, unable to muster a response to the hazards ahead.

Paradox #2 – Originality becomes impossible when all possible options can be machine generated 

Humans place great value on creativity, originality, and discovery. History books are filled with talented people who figured out how to “zig left” when everyone else “zagged right.”

Recently, a company called Qentis offhandedly claimed its computers were in the process of generating every possible combination of words, and preemptively copyrighting all creative text.

It will also be possible for them to generate every possible combination of musical notes to enable them to claim first rights to every “new” musical score.

Similarly, Cloem is a company that has developed software capable of linguistically manipulating the claims on a patent filing, substituting keywords with synonyms, reordering steps, and rephrasing core concepts in order to generate tens of thousands of potentially patentable “new” inventions.
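The mechanics of this kind of variant generation are easy to sketch. The snippet below is a toy illustration, not Cloem’s actual software: a single claim template whose keywords are swapped for synonyms, with every combination enumerated. The template and the synonym table are invented for this example.

```python
# Toy "claim variant" generator: substitute each keyword with synonyms
# and enumerate every combination. Template and synonyms are invented.
from itertools import product

claim = "A {device} that {action} a {target}."
synonyms = {
    "device": ["machine", "mechanism", "system"],
    "action": ["heats", "warms", "raises the temperature of"],
    "target": ["beverage", "liquid", "fluid"],
}

variants = [
    claim.format(device=d, action=a, target=t)
    for d, a, t in product(*synonyms.values())
]
print(len(variants))   # 3 * 3 * 3 = 27 variants from one claim
print(variants[0])     # "A machine that heats a beverage."
```

Because the variant count multiplies with every substitutable keyword, a real patent claim with dozens of terms yields the “tens of thousands” of permutations described above.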

In much the same way computers are capable of generating every possible combination of lottery numbers to guarantee a win, patent and copyright trolls will soon have the ability to play their game of “fleecing the innovators” at an entirely new level.
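The combinatorics behind that lottery claim are worth spelling out. For a standard pick-6-of-49 lottery, the complete set of tickets is C(49, 6), a number that is enormous for a person buying tickets but trivial for a machine to enumerate:

```python
# How many tickets cover every outcome of a pick-6-of-49 lottery?
import math
from itertools import combinations, islice

total = math.comb(49, 6)
print(total)  # 13983816 possible tickets

# Enumerating them is trivial for a computer; here are the first three:
for ticket in islice(combinations(range(1, 50), 6), 3):
    print(ticket)
```

Roughly 14 million tickets is a cost barrier for a human, not a computational one, which is exactly why exhaustive generation favors the troll over the innovator.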

More importantly, this tactic muddies the concept of originality and compromises the contribution of the individual if a version of every “new” idea already exists.

Paradox #3 – Perfection eliminates dependencies, removes our sense of purpose, and will destroy our economy

Humans are odd creatures. We have exceptions to every rule, we value intangible things based on our emotional connection to them, and our greatest strength is flawed logic.

One person’s deficiencies are counterbalanced by another person’s over-adequacies. Individually we’re all failures, but together we each represent the pixels on life’s great masterpiece.

Wherever we find insufficiencies, we create dependencies to help fill the gap, and every “need” produces economic value.

Following this line of thinking, human beings cannot exist as self-sufficient organisms. We all pride ourselves on being rugged individualists, yet we have little chance of surviving without each other.

Machine learning comes with the promise that we’ll soon become stand-alone organisms, content in our surroundings, wielding off-the-chart levels of intelligence and capabilities exceeding our wildest imagination.

However, this is where the whole scenario begins to break down.

Self-sufficiency will lead to isolation and our need for each other will begin to vanish. Without needs and dependencies, there is no economy. And without the drive for fixing every insufficiency, our sense of purpose begins to vanish.

Having a superintelligent machine is meaningless if there is nothing to apply its intelligence to. Like a perpetual motion machine that never gets used, it has little purpose for existing.

How do we make the best possible decision?

Final Thoughts 

Yes, I love the idea of having a laundry soap dispenser that is connected to sensors in the washing machine and able to mix multiple channels of organic ingredients dynamically to suit the conditions of the wash and optimize the cleaning process.

I also love the idea of not having to make so many decisions. Until now, every new device has seemed to add more decision points to my daily routine, not fewer.

However, we need to be aware of the quandaries ahead. Not all change is for the better, and many times simple little shifts have far-reaching ripple effects that force us to rethink our systems, our communities, and our way of life.

Sometimes our best intentions are little more than a mirage, leading us to a place we never intended to go.

Machine learning is neither good nor bad. It’s up to us to decide which it becomes.

By Futurist Thomas Frey

Author of “Communicating with the Future” – the book that changes everything

3 Responses to “Three Great Machine Learning Paradoxes”


  1. Bo Gulledge (www.breakthrusolutionsgroup.com)

    I highly recommend the book "Antifragile" by Nassim Nicholas Taleb. When asked what the future will look like in 25 years, he responded that people will still wear natural fabrics, drink coffee and wine, and read printed books. He gives reasons for this which I won't recount here. His main point is that the opposite of fragile is not robust but anti-fragile. Anti-fragile means something that grows stronger through stress. The proposed future that eliminates stress through machines is actually fragile. And it increases the fragility of humans by eliminating the stress necessary for humans to grow. It's a well-known fact that bones become stronger through stress and exercise. Our immune system grows stronger through stress. Hopefully our helpful machines of the future will "know" about the demands for stress that help us grow and be strong, and therefore will not remove all stress from our lives.
  2. Russ Derickson

    Like Bo, I heartily recommend Taleb's book Antifragile. Taleb is the author of the books "Fooled by Randomness" and "The Black Swan," both worth a read, as well. In a nutshell, we humans are poor with probabilities and understanding extreme events. These "deficiencies" are based on the fact that our survival as a species did not depend strongly on these faculties. In the case of extreme events, they tend not to be global, so some part of us, somewhere, will tend to survive. Those rare extreme events that are global do, however, lead to mass extinctions, as evident in the fossil record. But I want to bring my thoughts back to our "human" state of imperfection (as Tom discusses) and what it means for our future. Over the years I have read the works of E.O. Wilson, Carl Jung, and Rene Dubos, and most recently watched the new Cosmos series with Neil deGrasse Tyson. This is too much to say here, so I will perpetrate a bit of an intellectual transgression and make a few statements that give things more than somewhat short shrift. Rather than focus so strongly on technology (but certainly not abandoning it by any means), we need to learn more about symbiosis, systems thinking, and cybernetics. Not as academic nerds, but as vital practitioners that engage in life in a more comprehensive, directed way. That does not preclude our flights of fancy, our barroom fights, our rounds of poetry, but it does preclude war and atrocities toward other humans. Certain things are encoded in our DNA that allowed us to survive in our evolutionary past but work against us in current times. We should not, however, reengineer our genetics; rather, we need to supersede any limitations they currently impose on us with what the Dalai Lama says we should pursue: "Critical thought, followed by action."
  3. David Schatsky

    You raise important issues about the future of artificial intelligence and its impact on our lives and our humanity. In case you find it useful, I will point you to two articles I have recently published about artificial intelligence. They provide some grounding in where the field is now. We are a long way from intelligent machines running our lives, in my opinion. Here's an overview of AI technologies: http://dupress.com/articles/what-is-cognitive-technology/?icid=hp:ft:01 Here's an analysis of the current opportunities in AI for businesses: http://dupress.com/articles/cognitive-technologies-business-applications/ Best regards, David Schatsky
