Interset & Responsible AI – Part 3: Diversity Inclusion, Prudence, and Responsibility

AI must be designed and executed with caution and the anticipation of unintended consequences.


Read the previous chapter in this series: Interset & Responsible AI – Part 2: Solidarity, Democratic Participation, and Equity. 

Welcome to the third part of our blog series, in which I am exploring how Interset's technology and organizational objectives align with the values of the Montréal Declaration for Responsible AI. So far, we've discussed important aspects of responsible AI innovation, such as transparency and the protection of individual privacy. Today, we'll look at the next three principles and how Interset's AI is designed to mitigate unintended negative consequences.

Principle 7: Diversity Inclusion

“The development and use of AIS must be compatible with maintaining social and cultural diversity and must not restrict the scope of lifestyle choices or personal experiences.”

As I mentioned before, math isn't good or bad. AI itself doesn't discriminate, but the data that feeds it can contain biases. For example, real-world biases have been shown to trickle into facial recognition software. This is why Interset relies on machine learning that is performed "online" and leverages unsupervised methods.

Online machine learning means that our models learn in situ within a customer's environment, so no external data can pollute them. Online learning reduces the potential for biased datasets and increases effectiveness, because every customer environment is different and we don't want to make assumptions. In addition, an unsupervised machine learning approach gives us further protection against bias because it allows the automatic discovery of patterns within datasets instead of relying on curated data, which typically involves human-created labels (and labels can easily encode biases). Of course, it's not perfect: AIS algorithms can still contain unintentional biases if you are not careful. But every bit helps!
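To make that idea concrete, here is a minimal sketch of what online, unsupervised baselining can look like. It uses Welford's algorithm over a single illustrative feature (megabytes transferred), and every name in it is mine for illustration; it is not a description of Interset's actual models.

```python
import math
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class RunningStats:
    """Welford's online algorithm: per-entity mean/variance, no stored history."""
    n: int = 0
    mean: float = 0.0
    m2: float = 0.0

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def zscore(self, x: float) -> float:
        if self.n < 2:
            return 0.0
        std = math.sqrt(self.m2 / (self.n - 1))
        return 0.0 if std == 0.0 else (x - self.mean) / std

# One baseline per entity, learned in situ: no external training data involved.
baselines: dict[str, RunningStats] = defaultdict(RunningStats)

def observe(user: str, mb_transferred: float, threshold: float = 3.0) -> bool:
    """Score the event against the user's own history, then learn from it."""
    stats = baselines[user]
    anomalous = abs(stats.zscore(mb_transferred)) > threshold
    stats.update(mb_transferred)  # online update: the model adapts as data arrives
    return anomalous
```

Because the baseline is built entirely from each entity's own observed behavior, there are no imported labels or external training sets where human bias could creep in.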

Additionally, multiculturalism and inclusiveness are priorities for us as a company, and we are incredibly proud of how inclusive our team is.

Principle 8: Prudence

“Every person involved in AI development must exercise caution by anticipating, as far as possible, the adverse consequences of AIS use and by taking the appropriate measures to avoid them.”

I've mentioned previously that part of the challenge we face in the world of AI is that autonomous technologies have been, and are being, invented without much consideration for potential consequences. While AI itself is not inherently bad, it has the potential to be misused with harmful intent.

There’s a lot going on in this principle, but as we look at our application of prudence, the fourth sub-principle stands out to us:

“The development of AIS must preempt the risks of user data misuse and protect the integrity and confidentiality of personal data.”

The concepts I discussed in part one of this series around the principles of respect for autonomy and privacy pertain here too. We have put in place measures such as anonymization to strike a balance between confidentiality and effective threat detection. In addition, our analytics preempt the misuse of data by ensuring that humans are always able to make the final judgment on behaviors flagged as high-risk. Even if you choose to automate responses under specific conditions, that choice itself must be a human decision, and making it deliberately creates an opportunity for mindful, ethical oversight and policy review. In this respect, our technology was designed with a mission of protecting against unintended consequences that work against an individual's interests or rights.
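As an illustration of that balance, here is a minimal sketch assuming a keyed-hash pseudonymization scheme and a simple review rule. The function names, the salt handling, and the threshold are all my assumptions for the example, not Interset's implementation.

```python
import hashlib
import hmac

SECRET_SALT = b"rotate-me"  # illustrative only; in practice, use a managed secrets store

def pseudonymize(identity: str) -> str:
    """Replace a real identity with a stable token so analytics can still
    correlate one entity's behavior over time without exposing who it is."""
    return hmac.new(SECRET_SALT, identity.encode(), hashlib.sha256).hexdigest()[:16]

def flag_for_review(event: dict, risk_score: float, threshold: float = 0.9) -> dict:
    """High-risk behavior is queued for a human analyst, never auto-punished;
    re-identification happens only inside a sanctioned investigation."""
    action = "queue_for_analyst" if risk_score >= threshold else "log_only"
    return {"subject": pseudonymize(event["user"]), "risk": risk_score, "action": action}
```

The design choice worth noting is that the analytics pipeline only ever sees the token, while the mapping back to a real person is a separate, human-gated step.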

Principle 9: Responsibility

“The development and use of AIS must not contribute to lessen the responsibility of human beings when decisions must be made.”

The second sub-principle states:

“In all areas where a decision that affects a person’s life, quality of life, or reputation must be made, where time and circumstance permit, the final decision must be taken by a human being and that decision should be free and informed.”

This principle relates back to previous concepts, including the one we've just discussed. It ultimately ties into the heart of an important piece of the AI puzzle: who makes the final decision, human or machine?

In cybersecurity, this is an important topic. As I've mentioned before, some vendors in our space claim to be able to replace humans in the SOC. We don't make this claim because we don't believe it can be done responsibly. Our math enables a human-machine team in which analytics does the time-consuming work of sifting through data and pinpointing unusual behaviors, while the human SOC operator investigates the flagged behaviors and responds appropriately.
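A minimal sketch of that division of labor might look like the following, where the scoring function stands in for the analytics and the returned short list is what lands in front of an analyst. Everything here is illustrative rather than a real product interface.

```python
import heapq
from typing import Callable

def build_analyst_queue(events: list[dict],
                        score: Callable[[dict], float],
                        top_n: int = 20) -> list[dict]:
    """The machine's half of the team: score every event and surface only
    the riskiest few for a human analyst to investigate and judge."""
    scored = [{**e, "risk": score(e)} for e in events]
    return heapq.nlargest(top_n, scored, key=lambda e: e["risk"])

# The human's half: an analyst works this short list; nothing is acted on
# automatically unless a person has explicitly chosen to automate that step.
```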

Security investigations can be high-stakes; you do not want to wrongly identify someone as guilty or innocent. In "insider threat" use cases, external actors can exhibit the behaviors of an insider when an account is compromised, which means that analytics, combined with an investigation, has to suss out whether the actor behind the account is truly an insider. Our AI gives SOC teams the best possible view of behavior inside their organization, and their final human judgment accounts for any inaccuracies in the data and factors in the necessary sociological context.

Additionally, the teaming concept comes into play with techniques like semi-supervised machine learning and human-in-the-loop analysis, where a SOC analyst reviews an analytical result and provides feedback so that the AI learns from that human input and performs better next time. In other words, effective AI involves a bidirectional dialogue between the math and the human. It turns out that effective AI is also responsible AI.
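As a toy illustration of that feedback loop (my own simplification, not Interset's method), consider letting analyst verdicts nudge an alert threshold: confirmed threats argue for more sensitivity, dismissed alerts for less.

```python
from dataclasses import dataclass

@dataclass
class FeedbackLoop:
    """Human-in-the-loop tuning: the alert threshold drifts down when analysts
    confirm real threats and drifts up when they dismiss alerts as noise."""
    threshold: float = 3.0
    step: float = 0.05
    floor: float = 1.5
    ceiling: float = 6.0

    def record_verdict(self, confirmed_threat: bool) -> None:
        if confirmed_threat:
            # A true positive: the system can afford to be more sensitive.
            self.threshold = max(self.floor, self.threshold - self.step)
        else:
            # A false positive costs analyst time, so tighten slightly.
            self.threshold = min(self.ceiling, self.threshold + self.step)

    def should_flag(self, anomaly_score: float) -> bool:
        return anomaly_score >= self.threshold
```

The point of the sketch is the direction of information flow: the human's judgment feeds back into the math, and the math's output feeds forward into the human's next decision.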

AI has incredible power to automate processes, but it can't operate entirely apart from human beings. In my next blog, I'll take a look at the final principle of the Montréal Declaration as we close out this series. Stay tuned.

Read the final chapter in this series: Interset & Responsible AI – Part 4: Sustainable Development and Final Thoughts.