Currently, AI is primarily shaped by the priorities of the technology industry. Now this is not a problem because the technology industry has a lot of money, and, as a result, artificial intelligence is quickly developing. But in the long run, this might become a problem because one industry’s priorities are unlikely to be optimal for all other industries.

We already observe it’s happening. The main reason we’re noticing progress in AI is because giants such as Google and Facebook are constantly announcing new features that use machine learning for things like face identification in photos.

This makes other areas feel a little behind. Hence, they feel pressure on them to implement machine learning as well. But this pressure can be dangerous. Machine learning does a great job on tasks that have been done before, for example, face recognition, language translation, and driving cars. However, it could be better at inventing completely new tasks for itself or with tasks that require imagination or common sense.

Narrow and general – two different AI types

Narrow AI (sometimes it is also called “weak” AI) performs one specific task, such as playing a game, solving mathematical equations, or recognizing objects in an image.

Anyone who has used Siri knows about narrow AI, and this is precisely what most people think of when they hear the term “artificial intelligence”. But it’s actually a relatively simple technology: using a lot of processing power and access to a lot of data, you can efficiently train machine learning algorithms that are very good at one particular task.

Artificial Intelligence

It is much more challenging to create general AI (sometimes it is called “strong” AI), which involves creating computers that can perform not only specific tasks but also have generic human-like intelligence that can be applied to any problem.

A computer is capable of thinking abstractly like a human, as opposed to solving problems based on the rules given to it by humans. We have yet to achieve this, and probably never will – many researchers think it will never be possible – but if we did it, it would be a big step for humanity.

There are two main ways to make AI work:

  1. Formation: you reward AI for good actions, punish it for bad ones, and then hope it finally learns to do the right thing.
  2. Imprinting: you show AI what the right actions are by your example.

In order to learn, AI must be able to perform random actions and see their consequences. This process is impossible if there is a risk of catastrophic failure. The path of least resistance is often the local minimum – this means a choice that may be good enough in the short term but will eventually lead to a disaster.

For example, an AI-controlled robot might decide that the best way to avoid shutting down is to kill its operators before they have a chance to turn it off; or an AI embedded in a pacemaker might decide that the best way to get more power is to turn the human into a battery farm.

The most obvious way around this problem is not to give too much power to artificial intelligence at the beginning so that it can’t do much harm, even if its behavior is not quite optimal. But that means not giving AI too much autonomy, which in turn means that it won’t be able to learn quickly.