
AI – a big step for mankind, but a misstep for humanity?

AI is on everyone's lips these days. It is the Holy Grail of mankind, it seems. Historically, the Holy Grail refers to a cup or dish with miraculous powers that provide everything from unity with God and eternal youth to happiness and infinite abundance. It has therefore come to signify something worth striving for. Is it? Maybe not without reservations!

The boundary between 'algorithms' and AI is vague. We have been, and still are, subject to choices that algorithms make for us every day. When we open a webpage, algorithms decide which ads we see, and on social media, algorithms also decide which news we see. Hence the concept of 'echo chambers' was born: you are fed only information that reinforces the viewpoints you already hold. Or, as one German put it: what do we talk to our neighbors about when they read different news? Do we have a common platform?

Maybe the most disturbing fact is that we may be denied information about a product or service, private or public, based on the outcome of an algorithm, and that we may have to pay a different price than our neighbor.

It is sinister, because we do not know how these algorithms work, what criteria they use, what information they are fed or how up to date they are. What we do know is that they, for the most part, reinforce stereotypes and biases, known and unknown. The old English phrase "know your station in life" embodies that cultural bias: the assumption that a blue-collar worker should be treated differently from a white-collar worker, should get different job offers, is interested in different sports, travels to different destinations; and the list goes on. Today's algorithms work by finding patterns through big-data analysis. If there is a pattern, prediction is possible: if the sun keeps rising in the east, it will most likely do so tomorrow as well. Algorithmic predictions repeat and reinforce such patterns, and in most circumstances this is not what we want. We want to develop and change, transcend and rise. In short, the goal is to enrich our lives.
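
To see how such pattern-based prediction feeds on itself, consider a minimal, deliberately simplified sketch in Python. The data, names and frequency-based logic here are hypothetical illustrations of the principle, not any real platform's algorithm:

    from collections import Counter

    # Hypothetical click history: the topics a user has engaged with so far.
    clicks = ["politics-left", "politics-left", "sports", "politics-left"]

    def recommend(history, catalog):
        """Recommend the catalog items matching the user's most-clicked topic.

        This is the core of any frequency-based predictor: it projects the
        past forward, so whatever pattern exists gets repeated and amplified.
        """
        if not history:
            return catalog  # no pattern yet, so show everything
        top_topic, _ = Counter(history).most_common(1)[0]
        return [item for item in catalog if item == top_topic]

    catalog = ["politics-left", "politics-right", "sports", "culture"]

    # Each round, the user is shown only the majority topic and clicks it,
    # which strengthens the majority further: the echo chamber closes.
    for _ in range(3):
        shown = recommend(clicks, catalog)
        clicks.append(shown[0])

    print(Counter(clicks))  # "politics-left" now dominates even more strongly

After a few rounds, the majority topic is all the user ever sees: the pattern has been repeated until it has become the only pattern.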

Although I am not sure where one ends and the other begins, learning algorithms bring us to AI. AI will add flexibility and adaptability to algorithms. It will make the decisions taken for us more sophisticated, and it will be easier to use, as coding will no longer be necessary: it will understand natural language and have built-in human decision-making capabilities.

A Swedish venture capitalist has invested in a firm that helps the Danish equivalent of 911 decide how to handle distress calls. By analyzing the words used, the tone of voice, the breathing and the background noise, the AI software Corti can help determine whether a call needs top priority. It's all good, is it not?

The platform Remote Laboratory uses a webcam to track 70 points on your face and can determine whether you are, for example, angry, sad, surprised or happy. It is designed to help teachers better support their students. It's all good, yes?

Now, consider Facebook's new Suicide Watch. Is it all good, too? Seemingly. Since suicide is the second most frequent cause of death among 15- to 29-year-olds, helping to detect and prevent suicides is a noble quest, for sure. A month after the feature was introduced in selected markets, Mark Zuckerberg stated that it had already helped more than 100 people (Source: World Economic Forum, 2018). In those markets, it has been reported that the feature cannot be turned off. But it's all good, really?

This shows what AI is capable of. But what if it is used not to help, but to prey on vulnerable teens? Maybe not so good?

Today, Facebook lets companies market products or services to people based on their state of mind. According to a 23-page document leaked to The Australian, Facebook offers a litany of teen emotional states that the company claims it can estimate based on how teens use the service, including "worthless," "insecure," "defeated," "anxious," "silly," "useless," "stupid," "overwhelmed," "stressed," and "a failure." This includes youngsters down to 14 years of age. (Source: The Australian)

But there is more to come. Facebook has applied for a patent on a proprietary technology that would allow the company to determine your state of mind through your webcam or phone camera. Another patent relates to how 'hard' you type on the keyboard, how large a font you use, and so on.

AI will only need access to data, large amounts of it. Then it will go about reproducing these patterns within each silo, cohort and segment: sociographic, demographic, political, ethnic... you name it.

As we move from algorithmic to AI-based products, services and service delivery, the inherent problems of restricted choice, deepened bias and polarization, and growing discrimination in access to products and services are all reinforced. The tech giants (GAFA in the West and BAT in Asia) have a history of using every piece of information they can get hold of to make more money, with little or no regard for ethics or privacy. Many promises have been broken, seemingly with impunity, and the transgressions are many and severe. It is easy to package AI as 'progress', 'tools for doing good' and 'improved services', but is that really why it is invented and implemented? And, more importantly, is that how it will be used?

Should we do something to stop this? Is it possible, or is it simply a fact, like the sun rising in the east all over the globe? We cannot afford to do nothing! It is possible to influence how AI is used! We still have choices to make, so start making them!

Topics

  • New media

Categories

  • social networks
  • social monitoring
  • social media platforms
  • protecting privacy
  • invasion of privacy
  • integrity
  • big data
