
Privacy is a Human Right; So is Digital Privacy

In most countries, people have an expectation of reasonable privacy. But that privacy is being eroded daily by technology. Facebook and Google record your every activity online. New Internet of Things devices such as Amazon's Alexa record your voice and feed that data back to the manufacturer, supposedly for product improvement purposes. The common sentiment is: if you're not doing anything wrong, why worry about privacy? Well, you're not doing anything wrong when you use the toilet, but you expect a certain amount of privacy. You're not doing anything wrong when you're intimate with your spouse, but there is an expectation of privacy. People have a right to privacy, yet it is slowly being taken from us without our full consent and without a full understanding of the long-term risks and consequences.

Those of us who live in Europe are not far removed from the days of the Soviet bloc in Eastern Europe. In those countries, everything you said or did could be recorded and monitored, even in your own home. There was no sense of privacy. There are people who still remember what it was like to be constantly watched and listened to. The damage to human relationships and individual freedom was immeasurable.

Whether you’re being spied on by a government or a commercial entity, the purpose is the same – to manipulate your behavior and your understanding of the world around you. Facebook, for example, performed psychological experiments with its news feed a few years ago: it wanted to see whether it could manipulate users’ emotions by filling some people’s feeds with happy posts and others’ with sad ones. For people suffering from depression or other mental illness, this was a dangerous experiment, and it was done without users’ consent or knowledge. But the manipulation didn’t end there. After acknowledging the experiment and promising to do better, Facebook allowed user profile data to reach Cambridge Analytica. Cambridge Analytica used those Facebook profiles to create propaganda designed to play on individuals’ fears and emotions. It used individual psychological data to sway voters toward a particular outcome, or even to convince them that voting was futile. The full impact of Cambridge Analytica’s interference in elections around the world has still not been fully assessed.

The problem is we’ve opened a kind of Pandora’s Box. We have no idea how our online profiles and personal data are being used or how it will be used in the future. The data is already out there and has been shared with multiple organizations. Like the old commercial said, “and they told two friends, and they told two friends, and so on.” We don’t know how many times over the data has been reused and sold to other entities. Therefore, there’s no way to put all that has been released back into Pandora’s Box. As Artificial Intelligence (AI) and other machines make use of the data, we have no idea how it can be manipulated by companies, governments and criminals to at minimum persuade us to buy something or at worst, re-shape societies.

We recently learned that Facebook was developing partnerships with hospitals and employers to share user data. Could that mean an impact on insurance or health services? If hospitals can combine your social media profile and your health profile, what wrong conclusions could they reach? If a police department can combine your public records with your social media profile and see you have a friend who was convicted of a crime, will they profile you as a criminal too? Will an AI machine in use at an airport put that data together and deny you entry into a country or the right to board an airplane? Can an employer use the psychological profile of your social media data to deny employment because a computer thinks you may have a baby soon or you might be a drug user?

Technologies use data to build profiles using machine learning algorithms. Most people assume the computer is always right. But the computer and its algorithms are only as good as the people who program them. Even AI learns from human biases, as we saw with Microsoft’s AI chatbot Tay learning racist and sexist terms. We may not fully understand how a machine learning algorithm predicts human behavior, but the consequences can be devastating. If we allow machines to take over the ethical and human decision-making that is inherently our responsibility, machines might make logical, but unethical choices.

For example, when United Airlines recently oversold a flight, its computer systems used an algorithm to determine which customers should be bumped to a later flight, prioritizing by ticket price and frequent-flyer status. None of the humans involved questioned the computer’s decision – not even after learning that the man being removed was a doctor flying to attend to a patient. Instead, United walked into a public relations nightmare when it forcibly removed him from the plane. Our unquestioning trust in computer algorithms is dangerous when computers assume the role that humans are best equipped to play: moral and ethical decision-making.

For companies doing business on the Internet, it’s important to know that your data is being gathered and collected too. If you use Google Drive or Gmail, Google can read your documents and your emails. Even if you don’t use Gmail, when you reply to someone using a Gmail account, Google can still read your email. Your company secrets are not secret. Google can see them and use them however it pleases, perhaps even to build a competing business. You granted it that right when you accepted the free service.

And now, we see Google and Facebook trying to get into the security business. Facebook offered a Virtual Private Network (VPN) under the guise of secure Internet browsing. However, Facebook was able to track everything users did on the Internet while they used that VPN. Google now offers website security certificates, which allow it to track web traffic even on intranets, not just the public Internet. If Google and Facebook own and see all the data on the Internet, and their business models are to sell and exploit that data as broadly as possible, is there such a thing as security? Do we want all traffic on the Internet to be the property of one or two companies?

The technology industry needs to develop ethical guidelines and regulations for the collection and use of data, as well as for how data is used to build future technologies like AI. In the medical industry, ethics guide many practices. For example, just because we can clone a human doesn’t mean we should. Ethical practice, not technical ability, stops the medical community from cloning humans. The same kind of ethics should be applied to technology. Just because we can build an AI-driven, human-like robot doesn’t mean we should, not until we have considered the potential failures, malicious uses and safeguards needed to prevent unintended consequences. Computers and robots should not be allowed to experiment on people without their knowledge, for example, as Facebook did to build psychological profiles of its users. And why do we need face scanning? Consider what could go wrong with a technology that can identify everyone around you by their face: who you meet with, who you spend time with, where you go, what catches your gaze – all of this can be gathered and tracked today, yet how that information is used goes completely unchecked.

One good rule of thumb for consumers and businesses is, if the online service is free, then you are the product, not the customer. Forgoing your right to privacy is your payment.

Technology companies should be held to ethical standards, just like the medical community. They should not be able to collect or scrape data from you without your permission, and they should not be able to share or use data that cannot be contained or controlled. Businesses online will always derive data from their customers. They’ve been doing that even offline since businesses began. But there should be stringent rules for how they can use that derived data.

Digital human rights should include a person’s right to consent to data collection, see what’s being collected, change the data that’s being collected and delete the data collected upon request. Of course, government entities should have the right to collect data, but only through correct legal procedures based on reasonable suspicion of criminal activity.

Until people can control and contain their privacy online, we will continue to see breaches like Facebook’s become more frequent and more damaging. Deeper ethical violations will accelerate as algorithms distort reality with fake news and false associations. The stakes are too high: mental well-being, a functioning society, and democracy itself. It’s time for a digital bill of rights that will end the unfettered collection and manipulation of the data that intimately defines who we are, as individuals and collectively as a society, and return control where it rightfully belongs – with us.

Topics

  • New media

Categories

  • social networks
  • social monitoring
  • social media platforms
  • protecting privacy
  • privacy concerns
  • invasion of privacy
  • messaging
  • integrity
  • big data

Contacts

Elizabeth Perry

Press contact, Chief Marketing Officer, Marketing & Communication
