Today’s Reality – Privacy Controls in Social Media Analysis are a Feature

DigitalStakeout’s analytics leverage public social media data and other public Internet data to help our customers solve serious and unavoidable digital risk situations. In some cases, regulatory and statutory requirements drive the need for our products. In other cases, our products enable a customer to do their job more efficiently and effectively when solving a digital risk challenge. We also provide warning of numerous cyber security and digital threat situations that others cannot.

Because our products access content generated by end-users, privacy and intent naturally become part of the conversation. We understand that we and others in our industry have a responsibility to innovate ways for our customers, especially government subscribers, to use digital tools to do their jobs in a manner that does not infringe on public trust or the public good.

DigitalStakeout has invested a significant amount of time, money and resources in privacy controls. On our new corporate responsibility page, we describe the controls embedded in our risk detection products. More importantly, we describe what we don’t and won’t do with user-generated data. In this post, we’ll demonstrate key controls in our government products that access public social media data and explain how they prevent, detect and deter behavior that would pose a risk to the public, our customers, DigitalStakeout and its data partners.

Understanding Intent

It’s important that we clearly understand what problem a customer is trying to solve and why. When we identify use cases, we can assure external stakeholders that our solution is being used in the scope and manner we represent.

[Image: use case selection control]

In product features that have the ability to query and monitor user-generated social data such as Twitter, government end-users must select the most applicable authorized use case. This is a required field, and the process is executed on a search-by-search basis. Our product support team reviews these settings frequently to ensure compliance with our terms of service.
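
As a rough illustration only (the field names and use-case list below are hypothetical, not DigitalStakeout’s actual schema), this kind of gate can be enforced server-side before any search is created:

```python
# Hypothetical sketch of a per-search use-case gate; the authorized
# use-case list and field names are illustrative, not DigitalStakeout's.
AUTHORIZED_USE_CASES = {
    "threat-to-life",
    "critical-infrastructure-protection",
    "event-security",
    "cyber-incident-response",
}

def create_search(query: str, use_case: str) -> dict:
    """Reject any search that lacks an explicitly authorized use case."""
    if not use_case:
        raise ValueError("use_case is a required field for every search")
    if use_case not in AUTHORIZED_USE_CASES:
        raise ValueError(f"use case {use_case!r} is not authorized")
    # The selection is stored with the search so support staff can audit it.
    return {"query": query, "use_case": use_case}
```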

Preventing Bias Situations

Given the volume of data on the Internet, and the uncertain veracity of much of it, many things can easily be misunderstood. It’s important to guard against many forms of bias. It takes years of experience to understand all the complexities and realities that surround this topic and how it affects factual decision-making. When government customers access features that present user-generated social data, we redact information to help prevent many forms of bias.

Redacted Avatars

People choose avatars for all kinds of reasons, and a chosen avatar doesn’t necessarily mean what the image “could” imply. In a public safety setting, racial or religious bias will have negative impacts on building trust in communities. An image can bias an end-user in all kinds of ways that can lead to negative consequences. While there is an argument for being able to see an avatar, we err on the side of caution.

[Image: redacted avatar example]

All social media avatars are replaced with a benign silhouette when a raw post is presented to an end-user. While government users can see avatars on the social media sites themselves, they cannot see avatars in DigitalStakeout features and products that provide social data monitoring and analysis capabilities.
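
A minimal sketch of that render-time control, assuming a hypothetical post structure with an avatar_url field (not DigitalStakeout’s actual data model):

```python
# Hypothetical sketch of avatar redaction at render time; the field name
# and silhouette asset path are illustrative.
SILHOUETTE_URL = "/static/avatar-silhouette.png"

def redact_avatar(post: dict) -> dict:
    """Return a copy of the post with any avatar replaced by a neutral silhouette."""
    redacted = dict(post)
    if "avatar_url" in redacted:
        redacted["avatar_url"] = SILHOUETTE_URL
    return redacted
```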

Redacted User ID, Screen Name & Person Name from Posts

People choose screen names and create profile names for all kinds of reasons. When a profile has not been verified by the social media source, it could be fake, an imposter or satire. Moreover, a screen name choice doesn’t necessarily mean what the name “could” imply. From a certainty perspective, the “identity” behind a post is not authenticated. It takes a very detailed process to authenticate a social media account to a single person in a manner that would survive a non-repudiation challenge.

[Image: redacted metadata example]

We redact the user ID, screen name and person name elements with a one-way hashing method, making it practically impossible to reverse-engineer the original element.
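
The post doesn’t name the exact algorithm, but a keyed one-way hash is the standard way to achieve this; the sketch below uses HMAC-SHA256 under that assumption, with illustrative field names and a placeholder key:

```python
import hashlib
import hmac

# Hypothetical sketch of one-way identity redaction; the key handling and
# field names are illustrative, and DigitalStakeout's actual method may differ.
SECRET_KEY = b"server-side-secret"  # kept private, never shown to end-users

IDENTITY_FIELDS = ("user_id", "screen_name", "person_name")

def redact_identity(post: dict) -> dict:
    """Replace identity elements with keyed HMAC-SHA256 digests.

    The same input always maps to the same token, so analysts can still see
    that two posts came from the same account, but the original value cannot
    practically be recovered from the token.
    """
    redacted = dict(post)
    for field in IDENTITY_FIELDS:
        if field in redacted:
            mac = hmac.new(SECRET_KEY, str(redacted[field]).encode("utf-8"),
                           hashlib.sha256)
            redacted[field] = mac.hexdigest()
    return redacted
```

A keyed hash matters here: with a plain unsalted hash, common screen names could be recovered simply by hashing candidate values and comparing the results.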

Protecting Peaceful Discourse & Innocent Speech

We do not condone, support or authorize any activity that would be considered politically profiling, tracking or targeting innocent people who are expressing their political opinions online. We’re proud to be the first and only social media analytics company with a patent-pending feature, available in our product now, that protects political and religious speech during social media analysis.

[Image: blacklisted terms]

[Image: blacklisted terms with geographic search]

Terms such as notmypresident, justice4, justice for, black lives matter, blacklivesmatter and blm are just a few examples of hundreds of political and religious terms (and their variants) that are blocked from use in our products. Our research team routinely updates these terms, and as new situations evolve, automated processes discover additional ones. Approximately 2,500 other benign keywords are also excluded from use.
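
In spirit, this control is a blocklist check applied before a search or alert is saved; here is a minimal sketch using only the handful of terms named above (the real list, per this post, spans thousands of terms and variants):

```python
# Hypothetical sketch of a protected-terms filter; only a tiny illustrative
# subset of the blocked-term list is shown.
BLOCKED_TERMS = {
    "notmypresident",
    "justice4",
    "justice for",
    "black lives matter",
    "blacklivesmatter",
    "blm",
}

def validate_query(query: str) -> None:
    """Raise if the query contains any protected political or religious term."""
    normalized = query.lower()
    for term in BLOCKED_TERMS:
        if term in normalized:
            raise ValueError(f"query contains a protected term: {term!r}")
```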

Updated Terms of Service

We’ve updated our Terms of Service with a Responsible Use clause. The following is a non-exhaustive list of practices that would be considered a breach of Responsible Use:

  • Bypassing automated controls that proactively protect against the creation of non-compliant searches, alerts, etc.
  • Manipulating DigitalStakeout features to collect private and protected data.
  • Acting as a data collection proxy for an unidentified city, state or federal government, foreign or domestic.
  • Using DigitalStakeout to conduct criminal profiling, targeting or tracking, or to develop a pattern of life or dossier (i.e. conduct surveillance) on any individual, group, location or event.

Other practices may be relevant in determining Responsible Use, and DigitalStakeout reserves the right to take any unlawful, prohibited, abnormal or unusual activity into account in making its determination.

A Continuous Journey

This post describes just a few of the many controls we deploy to protect privacy, and we hope it demonstrates that DigitalStakeout is leading by example on what can and should be done.

We understand that this topic is a journey without a destination. Work is still needed to create industry standards, definitions and procedures so that all parties with an interest in this topic are on the same page. While we can build control after control into the product, the need for education is paramount. With that education, preventable scenarios with unintended consequences will be avoided, and real, positive, life-saving outcomes will be accomplished more frequently.

While DigitalStakeout, like any company that accesses public social data, has social responsibilities, our social data partners are also taking actions to reduce risk for everyone. Those actions will reduce the reputational risk that comes from misunderstanding innocent and unintended scenarios, prevent abuse by parties who don’t want to play by the rules, and reduce the privacy and digital risk to public end-users.

We live in a data-driven world. We look forward to the day when the hundreds of companies in the social media analytics and big-data industry that require public social data to compete fairly in the marketplace are all viewed in a positive light. There is a lot of work to be done on this front, and it will take the leadership and open minds of many to get it done.