As AI accelerates and the singularity approaches, the robots might just kick back.

We could live to see three revolutions in our lifetimes.

The first one has already happened — the Internet revolution. Every computer is already online.

Next, we will see everything else go online. That's the second revolution, the IoT revolution. It's already begun. Eventually, if it uses electricity, it will go online. Everything will be "smart." And you know what that means. I'm the father of Hypponen's Law: If something is "smart," it's vulnerable. Smart watch, smart TV, smart city ... you get the idea.


And if we go jogging, eat salad and lead healthy lives, we may live long enough to see the third revolution, which will be the real artificial intelligence revolution.

Just imagine that one of the projects to replicate the full human brain succeeds in our lifetime. If we come up with a mechanism that can move actual memories and processes from real brains to computers, you could live forever.

There are only two ways the AI revolution will go: very, very good, or very, very bad.

We know that superior artificial intelligence will arrive at some point. What we don't know is what will happen when humans become the second most intelligent beings on the planet. If it goes well, we'll have an intelligence that can solve all our problems. If it goes wrong, we'll have “Terminator 2.”


Of course, all three of the technological revolutions have implications for our safety, our security and our privacy.

In cyber security, F-Secure Labs is on the front lines of this transformation. We first put machine learning to work in 2006, implementing a neural network to identify malicious applications. That was early for our industry, but late by other standards: the concept of neural networks had by then been around for more than 60 years.

Today, the merging of man and machine is crucial to our protection. We analyze hundreds of thousands of malware samples a day. On a good day, we'll do half a million samples. On a bad day, 700,000. There's no way humans can analyze every sample. So for the last eight years we've been building what we now call the Machine Learning Project. It's a system that learns what's good and what's bad. This is one benefit of AI that already exists.
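To make "a system that learns what's good and what's bad" concrete, here is a deliberately tiny sketch of learned binary classification. This is not F-Secure's actual system; the feature names, training data and the simple perceptron are all invented for illustration. A real pipeline extracts thousands of static and behavioral features per sample.

```python
# Toy sketch only: a perceptron that "learns" to separate malicious from
# clean samples. Features and data are hypothetical, for illustration.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights for binary classification (1 = malicious, 0 = clean)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # 0 when correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(w, b, x):
    """Score a new sample with the learned weights."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical features: [packs_itself, writes_autostart_key, signed_by_known_vendor]
train_x = [[1, 1, 0], [1, 0, 0], [0, 0, 1], [0, 1, 1]]
train_y = [1, 1, 0, 0]  # 1 = malicious, 0 = clean
w, b = train_perceptron(train_x, train_y)
```

The point of the sketch is the workflow, not the algorithm: humans label samples, the system generalizes, and new samples get classified automatically at a volume no human team could match.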

In our industry, we have no choice but to pursue AI and machine learning. Our job is to stay ahead of criminals, and you can bet that as soon as someone can make money using these technologies in an attack, they will. And there will never be fewer "smart" targets vulnerable to cyberattacks than there are today.

We are the first generation that can be tracked throughout our lives. And it's only going to get worse.

Moore's Law, read in reverse, says that the price of a given amount of computing power halves every 18 months. This means the price of turning a "dumb" thing into a "smart" thing is plummeting. Eventually, those chips are going to cost two cents or one cent or half a cent.

With prices that low, why wouldn't appliance manufacturers start putting a chip into everything?
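As a back-of-the-envelope illustration of that halving curve (the 18-month halving period is a rule of thumb, and the $1 starting price is invented):

```python
# Illustrative only: project the price of a fixed amount of computing power,
# assuming it halves every 18 months (Moore's Law read in reverse).

def projected_price(start_price_cents, years, halving_months=18):
    """Price in cents after `years`, halving every `halving_months`."""
    halvings = (years * 12) / halving_months
    return start_price_cents / (2 ** halvings)

# A chip that costs $1 (100 cents) today:
for years in (0, 3, 6, 9):
    print(years, "years:", round(projected_price(100, years), 2), "cents")
```

Under that assumption a $1 chip falls to 25 cents in three years and under two cents within a decade, which is the whole economic argument for putting one in everything.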

Before we know it, we’ll lose track of which appliances go online and which don't. And they won't need passwords before they connect. By the time this happens, the real AI revolution will be ready to take off, if it hasn't already.

This is why the rules we create to guide AI before it becomes a superior intelligence are so crucial. We need to define ethics and boundaries for machines now, because we cannot do it afterwards.


What concerns me most is that we are in a race. Google wants to be first. IBM wants to be first. So do Facebook, Amazon and Apple. And when you are in a race, you don't really stop to ask "Are we doing this safely?"

This drive to be first leads to an inevitable conclusion: Autonomous robots will become a reality, probably even before we've considered the consequences.

Look at drones. Today's drones are not autonomous. Each is operated by a human, perhaps someone in Nevada attacking someone in Syria. The obvious weakness is the link from drone to human, because that link can be disrupted, cut or spied on. Removing that weakness is simple: make the drone smart enough to work without the human. It's going to happen, and it is scary as hell.

So whatever you do, don't kick the robots. Why?

Because one day those robots will be smart enough to go to YouTube and watch the videos. So whatever you do, please don't kick the robots.

Mikko Hypponen is chief research officer of F-Secure, a 30-year-old cyber security company whose technology combines the power of machine learning with human expertise. 
