Interset & Responsible AI – Part 1: Well-being, Autonomy, and Privacy

At Interset, we believe AI should support, not replace, human beings in cybersecurity.


This blog series follows my introductory blog, “Interset, Ethics, and Responsible AI.” Be sure to check it out!

In November 2018, the Montréal Declaration for Responsible AI was officially released after a year of discussion, debate, and careful consideration by leading technologists, ethicists, and researchers in Canada. Led by l’Université de Montréal, the Declaration ensures that creators of artificial intelligence systems (AIS) are upholding ethical and moral values, and it’s fueling the global conversation on responsible AI.  

Today, we’re officially sharing the exciting news that Interset signed the Declaration at the start of 2019, formally committing us as an organization to the 10 principles laid out by the document. So what does that commitment look like? To answer that, I’d like to explore how Interset, both our technology and our company as a whole, intentionally aligns with the values of the Declaration. Together, we’ll take a look at each of the 10 principles, starting with the first three, which highlight the role and importance of human well-being, autonomy, and privacy in the development of responsible AI.

Mission Statement
Interset’s mission statement.

Principle 1: Well-being

“The development and use of AIS must permit the growth of the well-being of all sentient beings.”

The first principle is straightforward. It’s designed to ensure that all AIS are created with human welfare in mind, and it calls for outcomes like improved living conditions and health, the ability to pursue individual preferences and exercise mental and physical capacities, and more.

This principle is a great one to start with because it speaks to the foundations on which Interset was started. At the time of our inception, we were a team of analytics and AI experts—not cybersecurity experts. Yet we saw an opportunity to make a positive impact on society by applying our analytics engine to cybersecurity. Math is impartial; it’s not good or bad. The application of math, however, can take sides, which is why it has to be executed responsibly. We recognized this and, in designing Interset, began to apply data science and AI-led approaches to cybersecurity in an effort to do good—to solve difficult problems and keep organizations, their customers, and their partners safe from security threats. Societal well-being was a critical motivator for us, and so our internal mission statement begins with, “We catch bad guys with math.” Today, we maintain that same goal.

Principle 2: Respect for Autonomy

“AIS must be developed and used while respecting people’s autonomy, and with the goal of increasing people’s control over their lives and their surroundings.”

Each principle in the Montréal Declaration is divided into sub-principles. As we look at this principle, which recognizes the importance of human autonomy and control, the fifth sub-principle highlights the importance of trustworthy AI systems:

“AIS must not be developed to spread untrustworthy information, lies, or propaganda, and should be designed with a view to containing their dissemination.”

There are some extravagant claims that we commonly see in cybersecurity, particularly in the security analytics space. Some vendors claim that their AI-enabled analytics are 100% accurate, have no false positives, and, subsequently, that their AI eliminates the need for humans in the security operations center (SOC). At Interset, we have never made these claims because we do not believe they can or should be made. We have no intention of replacing humans in the SOC, and our analytics were not designed to do so.

Our technology gives security teams a probability-weighted, risk-based assessment—not a binary declaration of whether something is good or bad. In addition, we always provide a clear, natural-language explanation of why our models have flagged certain behaviors as unusual and why your security team should investigate them. This way, we promote a human-machine team that respects the SOC team’s autonomy. Our AI does the math and provides a human-readable explanation, but the final judgment call is always left in the hands of the human SOC analyst. While math itself is not biased, data can be biased, inaccurate, or incomplete. This means that the final decision on whether a risky behavior constitutes a real threat has to remain in the hands of the human. Our mission is not to reduce analysts’ control over their SOC operations, but rather to streamline it and enable analysts to make informed decisions based on a more holistic view of what’s going on in their enterprise.
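To make the idea concrete, here is a minimal sketch of what a probability-weighted assessment with a human-readable explanation could look like. This is purely illustrative: the function names, the baseline-versus-observation model, and the score thresholds are assumptions of mine, not Interset’s actual implementation.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Assessment:
    risk_score: float  # 0-100, probability-weighted; never a binary verdict
    explanation: str   # plain-language reason handed to the human analyst

def assess(user: str, baseline: list[float], observed: float) -> Assessment:
    """Score one behavior (e.g., MB transferred) against the user's own history."""
    mu, sigma = mean(baseline), stdev(baseline)
    z = (observed - mu) / sigma if sigma else 0.0
    # Map the deviation onto a 0-100 risk score; the multiplier is illustrative.
    score = min(100.0, max(0.0, z * 25.0))
    explanation = (
        f"{user} transferred {observed:.0f} MB, about {z:.1f} standard deviations "
        f"above their typical {mu:.0f} MB. This unusual volume may warrant review."
    )
    return Assessment(round(score, 1), explanation)

result = assess("user12X34Y", baseline=[40, 55, 60, 50, 45], observed=400)
```

The point of the design is the pairing: the score quantifies how anomalous the behavior is, while the explanation gives the analyst enough context to make the final call themselves.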

Principle 3: Protection of Privacy and Intimacy

“Privacy and intimacy must be protected from AIS intrusion and data acquisition and archiving systems (DAAS).”

Where security analytics systems are concerned, there is a natural and important tension between data privacy and insider threat use cases that may require identifying a potential attacker to a security team for investigation. What’s important to note is that external threats can often exhibit the characteristics of an internal threat: for example, an external attacker compromises an internal employee’s account and uses that account to steal confidential data. Because these behaviors are similar, the same analytical models that can be used to incriminate an individual may, importantly, also be used to exonerate an individual in the case of a security incident. For this, and many other reasons, it is important to respect the privacy and identity of individuals by default, revealing private information such as names only when required.

When properly designed, security analytics systems don’t need to know the identity of an individual by name; tokenized or pseudonymized values (e.g., “user12X34Y” instead of “Jane Smith”) can be just as effective. In addition, there are privacy techniques that introduce “noise” into individual profile data while only minimally impacting statistical performance. At Interset, we understand the value of privacy and have included anonymization, profile redaction, and removal features in the platform since its inception.
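A brief sketch of both techniques, under stated assumptions: the key handling, token format, and noise scale below are my own illustrative choices, not Interset product code. Keyed hashing gives each person a stable token without exposing their name, and Laplace-style noise perturbs individual profile values while barely shifting aggregates.

```python
import hashlib
import hmac
import math
import random

# Placeholder key; a real deployment would manage and rotate keys securely.
SECRET_KEY = b"rotate-me-regularly"

def pseudonymize(name: str, key: bytes = SECRET_KEY) -> str:
    """Map a real identity to a stable token, so analytics can track behavior
    per entity without ever storing or displaying the underlying name."""
    digest = hmac.new(key, name.encode("utf-8"), hashlib.sha256).hexdigest()
    return f"user{digest[:10]}"

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw Laplace-distributed noise via inverse-CDF sampling; added to a
    profile metric, it obscures the individual value at little statistical cost."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

token = pseudonymize("Jane Smith")
noisy_logins = 42.0 + laplace_noise(scale=1.0, rng=random.Random(0))
```

Because the token is deterministic, behavioral baselines still accumulate per user; because it is keyed, an attacker who sees “user3f…” cannot recover “Jane Smith” without the key.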

Techniques such as these align with two important sub-principles of the third principle of the Montréal Declaration:

  • “DAAS must guarantee data confidentiality and personal profile anonymity.”
  • “Every person must be able to exercise extensive control over their personal data, especially when it comes to its collection, use, and dissemination.”

They also enable important individual privacy and right-to-be-forgotten requirements of the GDPR, which you can learn more about in our blog, Interset and GDPR. (Also, Data Privacy Day just passed. If you missed it, be sure to check out our Data Privacy Day blog.)

In my next blog, we’ll look at the next three principles of the Declaration. Stay tuned!

Read the next chapter: Interset & Responsible AI – Part 2: Solidarity, Democratic Participation, and Equity