A Look at the Top 10 Digital Trust Challenges – Part 1

Given the potential of today’s rapidly evolving technology to create major risks as well as enormous opportunities, digital trust has emerged as one of the defining issues of our era. Trust has always sat at the heart of a functioning society, but the institutions that have long protected key societal norms, such as property rights and privacy, are now under huge strain as digital disruption challenges the traditional qualities and behaviors that have long defined trustworthiness.

So how can today’s institutions, and indeed the entire system, adapt in order to create trust within this new and constantly changing context of digital transformation? One of the first steps is to break the concept of digital trust down into its component parts. A recent article from global professional services firm PwC looks at 10 of the most important digital trust challenges that institutions and organizations must develop the capabilities to address. Read on to learn more about the critical questions surrounding the first five of these issues.

  1. Data privacy and usage

Companies of all sizes are now capable of collecting huge amounts of data on everyone they come into contact with, from customers to partners to employees. Similarly, sophisticated analytics tools now allow people to leverage this data to create new knowledge and insights, thereby shaping how the data itself is used. And while the public has been slow to realize just how much personal information is being collected and used by companies, the majority of consumers now rank data privacy and usage as one of their top digital trust concerns.

Against this backdrop, organizations must ask themselves some tough questions about their information governance strategies. What level of transparency, for example, are their customers entitled to? Can they legitimize data access and analysis through a customer application’s legal terms and conditions? How can they educate their customers and the public on managing their technology footprint? The challenge here lies in striking the right balance between the competing interests of individual privacy rights and the use of data for a greater benefit.

  2. The ethics of people data

People analytics is transforming talent management, turning the recruitment and retention of employees into a function that is much more scientific, fact-based, and linked to business strategy and performance than ever before. But what are the rules of behavior in this new operating environment? How much personal employee information is necessary or acceptable for employers to use?

Take, for example, the fact that many people today have little separation online between their personal lives and their professional existence. Are employers entitled to use information shared on social media platforms to make character decisions when hiring or to evaluate whether existing employees pose a risk to their organization? Are new governance guidelines and processes needed to ensure that such data is monitored and used ethically? What new policies could help manage trust issues around managerial use of personal information?

  3. Predictions and profiling

When big data is used to profile individuals and predict behavior, the business benefits, such as strategic workforce planning or targeted customer segmentation, are clear. But equally obvious are the issues that arise in relation to individual rights and privacy, both in a workplace context and in wider arenas like policing and national security.

If data allows us to predict behavior to a highly accurate degree, are institutions entitled to take steps before that behavior is acted on? What if public safety is at stake? Similarly, is it reasonable for employers who use predictive analytics to anticipate future workplace risks to shape policies that could impact specific individuals or groups in a punitive fashion?

  4. Algorithmic regulation

Algorithms and automated rules are rapidly coming to define more and more of our everyday lives. From the “customers who purchased this also purchased these” suggestions on retail websites to automated credit card approvals, digital code is the new law of our era. But when the enormous power of these algorithms is combined with their significant lack of transparency, it’s clear that there is a massive trust issue at stake. Without clear transparency and accountability, for example, how can we be sure what companies are basing algorithmic decisions on? And in cases where algorithms are regulated by other algorithms, to what extent can we trust machines to effectively control other machines?

  5. Creating AI safeguards

Now that algorithms can learn and make predictions from data without the need for any human input, the machines that surround us are becoming smarter. But, science fiction-style speculation aside, is there a real risk that these algorithms might “go rogue” and take over? Do we need override controls that allow us, if necessary, to take back control from our machines? While there’s no clear answer to the contentious artificial intelligence debate, it’s clear that we must further explore the ethical and trust implications of this issue, particularly as collaboration between humans and machines becomes a more common feature of the workplace.
