

We'd Like To See Deep Learning Hand In Hand With Deep Feeling


Hardly anyone could find a point in the space-time of the Heidelberg Laureate Forum without hearing about Artificial Intelligence, Machine Learning and Deep Learning. The truth is that most of the lecturing mathematicians were fascinated by the allure of algorithms. The more conservative ones have already transformed computing machines into self-developing systems operating within a limited field of knowledge, while some braver ones design broader AI applications. One of the few exceptions to the rule was Sir Michael Francis Atiyah. Defying the skepticism of his peers, he claimed to have solved the Riemann Hypothesis, which has remained unsolved for more than 160 years, making it a million-dollar problem. In the rare case that his claim is borne out, we might witness a total transformation of mathematics similar to the Golden Age of Physics at the dawn of the 20th century.

Invaluable data and the power of information

Metaxourgio in Athens is one of the most visited sites by Greeks and foreigners alike. But this was not the case twenty years ago, when the area was flagged red because of its high crime rate. The whole landscape changed when some influential investors, potentially holding relevant inside information, took advantage of low-cost real estate and the proximity to the city center and revived the whole area. The initial investments have since soared, while real estate management flourishes. This simple example sheds light on the power of information, which might hold very high value whether it can be summarized in a few sentences or requires terabytes of data to extract, as long as it is relevant to actions that will produce benefits, monetary or otherwise. This is what makes companies worldwide compete for control over such valuable information, which could end up rewarding them with billions, or with other invaluable gains not directly translatable into dollars.

The transition from data to information, and then to knowledge, demands huge computing power and cost-effective, targeted algorithms. Market-driven A.I. re-enters the story revitalized, ready to head towards maturity. The big question to be answered is who will take responsibility for educating A.I.: who will set the ethics for the right or wrong course of action in an ecosystem of close human-machine interaction? For example, would it be morally wrong for A.I. to take the life of a human being, but morally right to destroy a computer system threatening business security? According to the Three Laws of Robotics devised by the science fiction author Isaac Asimov in 1942 (ironically, still the closest thing we have to a legal framework for A.I.), robots must not harm humans, but they may harm other robots.

However, in real life it is often difficult, even for the most intelligent machines of the future, to predict the indirect consequences of actions; an action that does not harm a human directly might do so indirectly. There are also cases where, in order to achieve a greater good (for example, saving ten human lives), it is almost certainly necessary to sacrifice, through action or inaction, a single human life. Interestingly, a frequent scenario in Isaac Asimov's 1950 story collection 'I, Robot' concerns exactly these two cases: in one of its tales, a robot could indeed put a human life at risk if it deduced that the destruction of a robot would harm some people but benefit many more.

Although steps have been taken to establish ethical and legal frameworks in certain areas, such as drones, AI has the power to transform societies at scale and is expected to grow at an unprecedented rate. Societies, academia and industry should call for a shared ethical framework that clearly determines where accountability lies. Towards the end of 2016, six of the biggest technology companies formed the Partnership on AI, one of whose aims is to develop an ethical framework for AI applications. Despite the goodwill of states and organizations, an ethics code setting AI limitations at the national level, or even among a coalition of interested countries, has not yet been established. A strict ethics framework stipulating what is right or wrong might still be miles away.

According to Dr. Nikolaos Mavridis, academic and consultant and a PhD graduate of the Massachusetts Institute of Technology (MIT) Media Lab: 'AI can think whatever, but not do whatever.' He further clarifies: 'And even what AI can "think" depends not only on its algorithms, but on its access to massive data and to real-time sensory information, enabling it to "read" the physical state of the world and the opinions of people. And of course, the capabilities of AI often also depend on massive computational power, especially for the "training" phases of Machine Learning systems. Nowadays, very few organizations worldwide have access to such big data and resources. Also, note that very few AI systems today are connected directly and in non-retractable ways to generalized "actuators" capable of mechanical, visual, or socially effective actions: for example, spreading news or messing up the traffic-light infrastructure of a city.' That means that AI alone cannot just 'press buttons' and start an event that may be critical for human lives and the stability of our societies. The effort is now focused on a framework that regulates the interactions between the components of cyber-physical-social systems, systems consisting of humans and machines connected to the physical and social realms, and that makes way for safety features preventing threatening outcomes, while monitoring the system and always leaving the final 'red button' decision to human judgement.

Emotionally intelligent machines

In 1964 Michael Beldoch introduced the term 'emotional intelligence', which gradually gained popularity after a 1995 book of the same title by Daniel Goleman, author, psychologist and science journalist. Although the scientific community was very harsh on Goleman's theory, it still draws a line between reason and emotion when a decision is taken. In the universe of computers, algorithmic reasoning is the key component driving the decision-making process. The human brain, on the other hand, might at first sight look even more sophisticated; it may also be affected by factors such as emotions or conflicting parameters. Thus, whether we are talking about professional judges and juries or ordinary people in their daily routines, humans try to weigh up a problem considering all the clues that color the interpretation of the law with an array of, sometimes clashing, colors.

The question that arises is how we are going to train AI systems to become emotionally aware in decision-making procedures. In this field, research has started from the opposite direction, since several robots and avatars have already been trained to recognize human emotions: for example, systems based on SoftBank's Pepper robot, or systems able to imitate human facial expressions, such as Charles from the Department of Computer Science and Technology at the University of Cambridge. However, we are still at the early stages of integrating emotions into AI applications, one reason being that the human brain remains a territory that neuroscience has only recently started to chart.
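To give a flavor of what "recognizing" an emotion can mean computationally, here is a deliberately toy sketch: it compares a handful of geometric facial features (all feature names, prototype values and labels are hypothetical, invented for illustration) against emotion "prototypes" using nearest-neighbor distance. Real systems such as those built on Pepper use trained deep networks over camera input; this only illustrates the basic idea of mapping measured features to an emotion label.

```python
# Toy emotion recognition: nearest-neighbor match against prototype
# feature vectors. Features are (mouth_curvature, eye_openness, brow_raise),
# each a made-up value in roughly [-1, 1].
from math import dist

# Hypothetical prototypes for three emotions.
PROTOTYPES = {
    "happy":     (0.8, 0.6, 0.3),
    "sad":       (-0.7, 0.4, 0.1),
    "surprised": (0.1, 0.9, 0.9),
}

def recognize_emotion(features):
    """Return the label whose prototype is closest to the measured features."""
    return min(PROTOTYPES, key=lambda label: dist(features, PROTOTYPES[label]))

print(recognize_emotion((0.7, 0.5, 0.2)))   # nearest to the "happy" prototype
print(recognize_emotion((0.0, 0.85, 0.95))) # nearest to "surprised"
```

The hard part in practice is not this final lookup but extracting robust features from raw video, which is exactly where deep learning comes in.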

Interestingly enough, science fiction has envisioned the possibility of a natural evolution of 'machine feeling' through human interaction: the exotic android Data from Star Trek, initially alienated, finally managed to establish quality communication with the rest of the crew. Regardless of how history will judge, the majority agree that human-machine symbiosis should be governed by mutual trust and respect. However, dissenting voices discern a rather bleak future for mankind.

Ranging between extreme nightmares of dystopia and wishful hopes of utopia, the future will unfold; and what it will look like might, hopefully, be much more in the hands of humanity. Even with Prometheus bound (or only with Prometheus bound?), the power of Fire, destructive as well as constructive, was transferred from Olympus into the hands of Humanity.

Note: Patrick Levy Rosenthal, Founder & CEO at Emoshape, contacted us after reading the article and told us that his company has developed the first emotion chip for AI and robots, allowing the AI to experience more than 64 trillion possible emotional states every 1/10th of a second. 'More importantly, we allow the machine to take decisions based on emotion reasoning,' he added.