
Artificially Inflated Future?

Every time I hear someone use the term “AI”, I cringe a little.

You say tomaytoes, I say tomahtoes.

They say potaytoes, I say a pair of King Edwards to that.

AI is NOT all the same thing.

Many people use the term to cover wildly different things – from Siris to C-3POs to HAL 9000s – and in the process conflate current capability with something that doesn’t actually exist (yet).

As it stands, AI has 3 very different meanings:

  • Artificial Superintelligence (ASI) and its accompanying Singularity event – think Skynet
  • General AI – let’s say your typical sci-fi robots – and
  • Narrow AI – which is (much much) better called Machine Learning.

Machine Learning (ML) is the current “lowest” so-called AI rung and pretty much the only one we’re on – so let’s not transmogrify it.  More on this here…

For Narrow AI, Moravec’s Paradox is key:

“It is comparatively easy to make [Machine Learning] computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”

So when we say AI, we’re being preeeeetty generous in the use of the descriptor “intelligence”.

And as for Turing’s test? Filip Piekniewski puts it nicely:

“In his famous formulation Turing confined intelligence as a solution to a verbal game played against humans. This in particular sets intelligence as a (1) solution to a game, and (2) puts human in the judgement position. This definition is extremely deceptive and has not served the field well. Dogs, monkeys, elephants and even rodents are very intelligent creatures but are not verbal and hence would fail the Turing test.”

The Future: Coping with the Hype

So if we can agree (please?) that we’re talking about Machine Learning, how’s it doing and what’s the future like?

Machine Learning is doing very well – it can say “thank you very much”, but it doesn’t mean it.

Here’s what machines can currently do – with the caveat: from what I’ve read and currently understand…

A machine can be trained by humans to recognise certain instances/symbols and make inferences about what it is “seeing”.

But it has to be taught what these things are.

And its answers to any question are quite binary: 1 or 0, yes or no.

It can’t make leaps of thinking.

When fed large amounts of data, very large or deep neural networks can recognize subtle patterns. Give a deep neural network lots of pictures of dogs, for instance, and it will figure out how to spot a dog in just about any image. But there are limits to what deep learning can do, and some radical new ideas may well be needed to bring about the next leap forward. For example, a dog-spotting deep-learning system doesn’t understand that dogs typically have four legs, fur, and a wet nose. And it cannot recognize other types of animals, or a drawing of a dog, without further training. [source]
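
To make that concrete, here’s a tiny sketch of my own (invented features and labels, nothing to do with the quoted source) showing just how narrow this is – a trained model can only ever answer with the labels a human taught it:

```python
# A toy "animal spotter", trained only on dogs and cats.
# Features are invented for illustration: [has_fur, leg_count, snout_length_cm]
from sklearn.neighbors import KNeighborsClassifier

X_train = [
    [1, 4, 10],  # dog
    [1, 4, 12],  # dog
    [1, 4, 4],   # cat
    [1, 4, 3],   # cat
]
y_train = ["dog", "dog", "cat", "cat"]  # the only answers it can ever give

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)

# Show it an elephant (no fur, four legs, a very long trunk).
# It was never taught "elephant", so it is forced to answer "dog" or "cat" --
# it cannot say "I don't know", let alone make a leap of thinking.
print(model.predict([[0, 4, 150]]))  # -> ['dog']
```

The algorithm here (k-nearest neighbours) is beside the point; whatever the model, every answer is bounded by the human-supplied labels.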

Though certain machines have been known to take calculated (statistical) risks – such as AlphaGo surprising the world with Move 37 in the second game of its 2016 match against Lee Sedol, one of the world’s best Go players.

But as one of my colleagues put it:

“Machines don’t have intelligence to do mundane tasks. Expecting them to handle tasks that require broad understanding is a decade or so away. A machine at best can annotate/index content for easy access for the humans to make decisions. Using correlations without any understanding of causation for making decisions is highly fallible. Humans, in absence of reason (having run out of, or not having any), also do this. It is called instinct, and it is the place where biases reside. Machines devoid of reasoning are always going to mirror the same set of biases.”

And he is right about that instinct part – Kahneman explained this in Thinking, Fast and Slow. The reality is that there is no System 2 (yet) in AI: the hard, slow part – the deliberate thinking that helps us change and adapt, that makes us human. You do know what Watch Me Think does as its day job? 🙂
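
To illustrate the correlation-without-causation bit with a toy sketch of my own (invented data, not my colleague’s): two completely independent random walks will often look strongly “related”, and a machine deciding on correlation alone would be taken in:

```python
import random
import statistics  # statistics.correlation needs Python 3.10+

random.seed(42)  # arbitrary seed, just for repeatability

def random_walk(n):
    """A sequence of cumulative random steps -- related to nothing else."""
    x, out = 0.0, []
    for _ in range(n):
        x += random.gauss(0, 1)
        out.append(x)
    return out

a = random_walk(500)  # e.g. "ice cream sales"
b = random_walk(500)  # e.g. "shark attacks" -- generated totally independently
print(statistics.correlation(a, b))  # often far from 0, despite zero causation
```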

It’s what you don’t see that counts

I have a story which should, I hope, illustrate my point…

During WW2, a group of mathematicians and statisticians in the US was asked where armour should be placed on bomber planes in order to increase their chances of returning from bombing raids.

To aid the analysis, the US Military sent the scientists statistics on where the most bullet holes were found on these returning planes.

The inference being that armour should go where the most holes were found (the fuselage), while the areas with the fewest holes (the engines) could be ignored.

A gentleman named Abraham Wald pointed out the mistaken inference.

“The reason planes were coming back with fewer hits to the engine is that planes that got hit in the engine weren’t coming back. Whereas the large number of planes returning to base with a thoroughly Swiss-cheesed fuselage is pretty strong evidence that hits to the fuselage can (and therefore should) be tolerated.”

In the above story, a machine would very efficiently calculate, based on what it sees (the data it was given by humans), where to place the armour.

But it could not (yet) make the leap, as Wald did, to say that what was not being seen mattered most.
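
Here’s a minimal sketch of that gap, with invented numbers (the real analysis used actual hit densities):

```python
# Hypothetical bullet holes per square foot on planes that RETURNED.
returning_hits = {"fuselage": 1.8, "wings": 1.5, "fuel_system": 1.1, "engines": 0.6}

# What a machine trained only on this data concludes: armour the most-hit part.
naive = max(returning_hits, key=returning_hits.get)
print(f"Naive answer: armour the {naive}")   # -> armour the fuselage

# Wald's leap: enemy fire lands roughly evenly, so the "missing" engine hits
# belong to planes that never came back. The least-hit section in the
# surviving sample is the most lethal place to be hit.
wald = min(returning_hits, key=returning_hits.get)
print(f"Wald's answer: armour the {wald}")   # -> armour the engines
```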

Machine Learning and MRX

So given current machine limitations, and the problems quantum computing still faces (which is where the leap to General AI may eventually occur), how could ML affect the world going forward?

In Market Research, I’d have to say that anything that can simply be counted, and from which generalised conclusions can be drawn, is looking like a rather dodo-ish part of anyone’s job description right now.

Machines can learn the rules of what to look out for – and they can use their own output to reinforce that learning – but beware mistakes that get repeated and reinforced (Mickey Mouse and his magic broom come to mind, for some reason).
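
Here’s a toy sketch of that magic broom (my invention, not a real ML pipeline) – a model that retrains on its own output amplifies a small initial labelling error instead of correcting it:

```python
from collections import Counter

# Human-labelled seed data containing one small mistake:
# a cat was accidentally tagged "dog", so "dog" is slightly over-represented.
data = ["cat", "cat", "cat", "dog", "dog", "dog", "dog"]

for generation in range(4):
    majority = Counter(data).most_common(1)[0][0]  # the "model": predict the majority
    print(generation, Counter(data))
    # Its predictions are fed straight back in as new "training" data,
    # so the initial skew snowballs instead of washing out.
    data = data + [majority] * len(data)
# dog share: 4/7 -> 11/14 -> 25/28 -> 53/56 ... the mistake reinforces itself
```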

Machines can count (obvs). And they can make inferences – but only based on what a human has told them.

So to all those feeling in any way threatened: BE THAT HUMAN.

Do what machines can’t currently do: find the unusual and the unseen.

Make the leaps and the connections.

Be human, really.

Cotton machines didn’t make the masses unemployable. Automation (which is essentially what ML is) just moved people into jobs helping the machines work better, into new industries supporting the machines, or into adapted industries making the best use of what the machines could now mass-produce for even bigger markets.

We’re the product of many evolutions – we can roll with this, people.
