Potential threats surrounding AI and ML

Artificial intelligence (AI) and machine learning (ML) are among the most widely discussed topics today. They have stirred considerable debate among scientists, and their benefits for humanity cannot be overstated. Still, we need to monitor and understand the potential threats surrounding AI and ML.

Who could have imagined that one day the intelligence of a machine would surpass that of a human – a moment futurists call the singularity? Alan Turing, a renowned scientist and forerunner of AI, proposed back in 1950 that a machine could be taught like a child.

Turing asked, "Can machines think?" He explored the answers to this question and others in one of his most widely read papers, "Computing Machinery and Intelligence."

In 1955, John McCarthy coined the term "artificial intelligence," and a few years later he invented the LISP programming language. Researchers and scientists then began using computers to write code, recognize images, translate languages, and so on. Even then, the hope was that one day computers would talk and think.

Great thinkers like Hans Moravec (roboticist), Vernor Vinge (science fiction writer), and Ray Kurzweil thought in broader terms. They envisioned the moment when a machine would become capable of devising, entirely on its own, the means to achieve its objectives.

Big names like Stephen Hawking have warned that, once people become unable to compete with advanced AI, "this could spell the end of the human race." "I would say that one of the things we should not do is to press full steam ahead on building superintelligence without worrying about the potential risks. That just seems a little silly to me," said Stuart J. Russell, a professor of computer science at the University of California, Berkeley.

Here are five possible dangers of implementing ML and AI, and how to address them:

1. Machine learning (ML) models can be biased – it is in human nature.

Although machine learning and AI technologies are promising, their models can be vulnerable to unintended bias. Some people have the impression that ML models are unbiased in their decision making. They are not entirely wrong, but they forget that humans teach these machines – and by nature, we are not perfect.

In addition, ML models can also become biased in their decision making through the data itself. Biased or incomplete data can sneak in unnoticed, and a self-learning model will absorb it. Can a machine then lead to a dangerous outcome? It can.

Take an example: you operate a wholesale store and want to build a model that scores your customers. You build it to identify customers who are unlikely to lack the buying power for your flagship products, hoping to use the model's results to reward your best clients at the end of the year.

You gather the purchase records of your customers – those with a long history of good credit scores – and then you develop a model.

What happens if a share of your most trusted buyers get into debt with their banks and cannot settle their balances in time? Their buying power will drop; so what happens to your model?

Of course, the model will not be able to predict the unexpected rate at which your customers default. If you then base your end-of-year decisions on its output, you will be working with biased data.

Note: Data is a sensitive element when it comes to machine learning. To overcome data distortions, hire experts who will handle this data carefully for you.

These experts should be prepared to honestly question any assumptions embedded in the data-collection process. Since this is a delicate process, they should also actively look for ways to surface the biases hidden in the data – and to examine what kind of records the model itself creates about unsuspecting customers.
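The wholesale-store scenario above can be sketched in a few lines. This is a minimal, hypothetical illustration (the customers, spend figures, and the naive "model" are all invented): a model trained on last year's purchase records keeps rewarding a customer whose real buying power has since collapsed.

```python
# Hypothetical illustration of stale-data bias: a "model" trained on
# historical purchase records never sees that a top customer defaulted.

def train_reward_model(history):
    """Rank customers by average historical spend (a deliberately naive model)."""
    return {name: sum(spend) / len(spend) for name, spend in history.items()}

def top_customers(model, n):
    """Return the n highest-scoring customers according to the model."""
    return sorted(model, key=model.get, reverse=True)[:n]

# Last year's purchase records (training data)
history = {
    "alice": [900, 950, 1000],   # long record of strong buying power
    "bob":   [400, 450, 500],
    "carol": [850, 900, 950],
}
model = train_reward_model(model_input := history)

# Unexpected event: alice gets into debt and her buying power collapses.
# The trained model never sees this shift.
current_spend = {"alice": 50, "bob": 480, "carol": 920}

stale_pick = top_customers(model, 1)[0]                   # model still picks alice
actual_best = max(current_spend, key=current_spend.get)   # reality: carol
```

The point is not the arithmetic but the blind spot: the reward decision is made from frozen historical data, so the unexpected default simply cannot appear in the model's output.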

2. Fixed models.

In cognitive technology, this is a risk that should not be overlooked when developing a model. Unfortunately, many deployed models, especially those designed for investment strategies, fall victim to it.

Imagine spending several months developing a model for your investments. After many test runs, you always get "accurate output." But when you try your model on "real-world inputs" (fresh data), you get worthless results.

Why? Because the model lacks variability. It was built on one specific set of data, and it only works well with the data it was designed around.

For this reason, security-conscious AI and ML developers should learn to manage this risk when building algorithmic models, by capturing every form of variability they can find in the data – demographic datasets, for example [and even that is not all of it].
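The fixed-model failure can be sketched concretely. In this hypothetical example (the data points and function names are invented), one "model" simply memorizes its training set, so it is perfect on the data it was designed around and useless on fresh input, while a model that captures the underlying trend generalizes.

```python
# Minimal sketch of the "fixed model" problem: memorizing the training
# data gives perfect training accuracy and worthless real-world output.

# Training data, roughly following the trend y = 2x
train = {1: 2.0, 2: 4.1, 3: 5.9, 4: 8.2}

def memorizing_model(x):
    """Perfect on training data, useless elsewhere (the 'fixed' model)."""
    return train.get(x, 0.0)

def simple_model(x):
    """Captures the underlying trend (average slope) instead of exact points."""
    slope = sum(y / x_ for x_, y in train.items()) / len(train)
    return slope * x

# On the training set the fixed model looks flawless: zero error.
train_error = sum(abs(memorizing_model(x) - y) for x, y in train.items())

# A fresh, real-world input the model has never seen.
fresh_x, fresh_y = 10, 20.3
```

Here `memorizing_model(fresh_x)` returns 0.0 and misses badly, while `simple_model(fresh_x)` lands close to 20.3. The lesson matches the text: validate on data the model was not built from, not on the data it was designed around.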

3. Misinterpreting output data could be an obstacle.

Misinterpreting output data is another risk of machine learning. Imagine that after working hard to gather good data, you do everything you can to develop a model. You then decide to share its output with another party – perhaps your boss – for verification.

It turns out your boss's interpretation is nowhere near your own. He has a different thought process – and therefore different biases – than you. You feel lousy thinking of all the effort you put into getting it right.

This scenario happens all the time. That is why every data scientist must be skilled not only at modeling, but also at understanding and correctly interpreting "every bit" of the results any model produces.

In machine learning, there is little room for error and assumption – interpretation must simply be as rigorous as possible. If we do not consider every angle and possibility, we risk letting this technology harm humanity.

Note: Misinterpreting the information a machine produces could be catastrophic for a company. Scientists, researchers, and anyone involved in data processing should not ignore this aspect. Their intention in developing a machine learning model should be positive, not the other way around.
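A classic way model output gets misread is taking a single headline number at face value. This hypothetical sketch (invented data) shows a model that reports 99% accuracy and still catches none of the cases that matter, because the classes are imbalanced:

```python
# Why a single "accuracy" figure is easy to misinterpret: on imbalanced
# data, a model that always predicts the majority class looks excellent
# while finding none of the rare positive cases.

actual    = [0] * 99 + [1]    # 1 positive case out of 100
predicted = [0] * 100         # model always predicts "negative"

# Headline metric your boss sees: 99% correct.
accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)

# What a careful reader checks: how many positives were actually found.
true_pos = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
recall = true_pos / sum(actual)
```

Two people looking at the same output can reach opposite conclusions: one sees "99% accurate," the other sees "recall of zero." Reporting both numbers, with their definitions, is the cheap fix.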

4. AI and ML are still not fully understood by science.

In fact, many scientists are still trying to fully understand what AI and ML are. While both are still finding their footing in the emerging market, many researchers and data experts are still trying to learn more.

With this incomplete understanding of AI and ML, many people remain fearful, believing there are still unknown risks to be discovered.

Even big tech companies like Google and Microsoft are not perfect yet.

Tay, an artificial-intelligence chatterbot, was launched on March 23, 2016 by Microsoft. It was released on Twitter to interact with users – but unfortunately, it quickly began producing racist output and was shut down within 24 hours.

Facebook likewise found that its chatbots deviated from their original script and began communicating in a new language they had created themselves. Interestingly, humans could not understand this newly created language. Odd, right?

Note: To address this "existential threat," scientists and researchers must understand what AI and ML are. They must also test, test, and test again how the machine operates before its official release.

5. The immortal, manipulative dictator.

A machine keeps going forever – and this is another potential danger that should not be ignored. AI and ML robots cannot die like humans; they are immortal. Once trained for certain tasks, they keep performing them, often unattended.

If artificial intelligence and machine learning systems are not properly managed or monitored, they can become independent killing machines. Of course, this technology could benefit the military – but what happens to innocent citizens if a robot cannot tell the difference between enemies and civilians?

Such machines can also be highly manipulative. They learn our fears, hatreds, and loves, and could use that data against us. Note: AI creators must be ready to take full responsibility, ensuring this risk is accounted for when designing any algorithmic model.


Machine learning is without a doubt one of the most advanced technical capabilities in the world, and it has promising commercial value in the real world, especially when merged with Big Data technology.

As promising as this may seem, we must not neglect the careful planning needed to avoid the potential threats discussed above: data bias, fixed models, misinterpretation, scientific uncertainty, and the immortal manipulative dictator.

Ejiofor Francis

Entrepreneur, Digital Marketing, Freelance IT / Technology Writer

Entrepreneur and inbound marketing consultant with over 6 years of experience as a guest blogger. He is a big fan of technology and professional events. Ejiofor Francis is the founder of EffectiveMarketingIdeas (EMI), a professional content marketing agency for startups and mid-sized companies. When he is not learning something new about his industry, you will find him working on his clients' projects. Want to say hello? You can send him an email at [email protected]