Interset, Ethics, and Responsible Artificial Intelligence

Canada is leading the charge for responsible adoption of artificial intelligence that fosters trust, inclusivity, and sustainability.
If you’re familiar with Interset, you’ll know we’re very outspoken about the incredible power of artificial intelligence (AI). In the cybersecurity industry, there is a lot of discussion around AI and its ability to help resource-strapped security professionals find and stop threats faster.

But of course, as with any powerful and revolutionary technology, there are important and hard conversations that need to be had around AI. In fact, in most industries where AI has made an appearance, there are folks throwing up red flags about the physical and ethical risks associated with technology that can run with limited human intervention. Is it ethical to let autonomous algorithms disseminate only specific content based on your personal data on social media? What if autonomous weapons become altogether uncontrollable? And, of course, doesn’t AI make it easier for cybercriminals to carry out attacks, too?

We’ve already incorporated AI into so many aspects of society without fully understanding its potential impact. And we often don’t seem to think enough about the consequences of the innovative technologies we’ve yet to unleash on the world. In December, this was the exact topic of conversation at the G7 Multistakeholder Conference on Artificial Intelligence in Montréal—an event I was fortunate enough to be invited to attend. This one-day summit gathered 150 participants from the private and public sectors, selected by G7 partners, to explore the question: As we grow AI innovation, how do we foster societal trust and the responsible adoption of AI? To that end, I joined a number of experts in the fields of AI, law, and human ethics to discuss and make recommendations on a range of topics to the G7 countries.

The summit was stewarded by Canada, which has become a driving force in the global discussion around ethical AI. Canada was the first country to release a national AI strategy, and last year, l’Université de Montréal published the Montréal Declaration for Responsible AI, which set forth an ethical framework for AI and initiated a dialogue for achieving sustainable, inclusive, and equitable AI. Whether you’re an individual or an organization, the Declaration’s framework is an excellent foundation for ethical AI: it lays out in detail 10 principles representing fundamental human interests that AI innovation should respect. In January 2019, Interset signed the Montréal Declaration, committing our organization to its principles.

Coincidentally, Interset is smack-dab in the middle of this conversation in Canada—literally! Toronto and Montréal have gained reputations as leaders in AI innovation, and Ottawa—where Interset is headquartered—is quickly catching up. Ottawa has more engineers, scientists, and doctorate holders per capita than any other city in Canada, and it has become a hotbed for data-driven organizations, making it a perfect environment for cultivating AI expertise.

We’re really proud to be a part of this community because it’s true: AI has both advantages and disadvantages, and it’s our responsibility to be aware of and account for that reality. In our product development and business objectives, we have always sought to leverage AI technologies ethically and responsibly, and we see our efforts mirrored in the industry discussions around us. In my upcoming blogs, I’ll look at each of the 10 principles outlined in the Montréal Declaration for Responsible AI and discuss how Interset incorporates them into our product and company initiatives. Stay tuned!

View the next entry in this series, “Interset & Responsible AI – Part One: Well-being, Autonomy, and Privacy.”