
This Toronto company is working to keep our data private — and safe — for AI applications

Private AI identifies sensitive information and anonymizes it, so it can still be used to analyze and solve problems without revealing personal details.


Everyone’s excited about the life-changing potential of artificial intelligence. (It can drive cars! Diagnose cancer! Write stories like this one!) But many people aren’t aware of the risks that come with it. Patricia Thaine co-founded her company, Private AI, in 2019 to minimize one of the biggest hazards of AI-based technology: the possibility of it sharing confidential information that violates an individual’s privacy.

That chatbot you just texted with when you were paying your utility bill, for instance, might have access to your credit card information and share it with another customer if adequate protections aren’t in place.

Thaine became interested in data privacy while doing PhD research on acoustic forensics. The goal was to improve technologies like automatic speech recognition, but privacy concerns made it difficult to get data to work with. So she began exploring ways to make data accessible without compromising privacy. Private AI adds a privacy layer to data sets by identifying sensitive information and anonymizing it, so it can still be used to analyze and solve problems, but without relying on — or revealing — personal details.
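To make that concrete, here is a minimal sketch of the general technique: detect sensitive entities in text and swap them for placeholder labels. The regexes and labels below are illustrative assumptions only; Private AI’s actual system uses trained models that cover far more entity types and languages.

    import re

    # Hypothetical patterns for a few common PII types; these simple
    # regexes stand in for the trained detection models a real system uses.
    PII_PATTERNS = {
        "CREDIT_CARD": re.compile(r"\b(?:\d{4}[-\s]?){3}\d{4}\b"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}"),
    }

    def anonymize(text: str) -> str:
        """Replace detected PII with placeholder tokens so the text stays
        useful for analysis without exposing personal details."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    print(anonymize("Call me at 416-555-0123 or email jane@example.com."))
    # -> Call me at [PHONE] or email [EMAIL].

Because the placeholders keep the sentence structure intact, the anonymized text can still feed analytics or model training without carrying the original personal details.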

Here, Thaine shares why business leaders are excited about AI — and what they need to keep in mind.

Should we be worried about all the interactions we’re suddenly having with AI entities?

Well, if we’re talking about privacy, there are definitely concerns. A South Korean company developed a chatbot that was trained on billions of conversations between users of its relationship app, and it started spewing out user names and other personal information in conversations with other users. There was also a bug where conversations between users and the AI chatbot were shown to other users. So there’s a massive concern there.

What can be done to prevent privacy violations? What should business leaders be thinking about when they use AI?

The risks stem from how much data an AI can process. If somebody is trying to discover something about you online or within an organization, it might be a needle in a haystack situation — they’d have to go through massive amounts of information to find something. It’s a lot easier to go through it with AI and pick up what you need.

There’s a lot of education that needs to be done with regard to what you use to train or fine-tune AI models. The first thing is to figure out what risks are associated with a particular set of data — what kind of personal information you have, what kind of confidential company information and any other type of sensitive details. Then you have to figure out how to limit the use of the sensitive data or limit access to the model or its outputs. Otherwise, it can spew out information to customers that they shouldn’t be seeing. Or it could spew out information to employees who shouldn’t have access to it.
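Here is a rough, hypothetical sketch of those two controls: scrubbing sensitive fields before fine-tuning, and gating who sees raw model outputs. The field names and roles are invented for the example, not a real schema.

    ALLOWED_ROLES = {"support_agent", "compliance"}

    def prepare_training_record(record: dict) -> dict:
        """Drop fields flagged as sensitive before the record goes
        into a fine-tuning data set."""
        sensitive_fields = {"credit_card", "ssn", "home_address"}
        return {k: v for k, v in record.items() if k not in sensitive_fields}

    def serve_output(model_output: str, user_role: str) -> str:
        """Return raw model output only to roles cleared to see it."""
        if user_role not in ALLOWED_ROLES:
            return "[output withheld: insufficient access]"
        return model_output

    record = {"message": "My card was declined",
              "credit_card": "4111-1111-1111-1111"}
    print(prepare_training_record(record))  # {'message': 'My card was declined'}
    print(serve_output("a drafted reply", "anonymous"))
    # -> [output withheld: insufficient access]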

Do you think business leaders are aware of these risks?

If you think of the learning curve that companies had to go through in order to understand the risks around cloud computing, and what kind of parameters to put in place for safety measures, we’re going through that learning curve with AI at the moment.

What about at a societal level? Are we aware of what could go wrong?

Absolutely not. There’s still a very big gap in understanding, and I imagine that we will still see quite a few mishaps before people realize the risks.

Bias is another concern critics raise with respect to AI. Is it an issue in the work you do?

It is. There’s a type of information called a quasi-identifier that’s important to keep track of as well. These are things like someone’s religion or political affiliation that can bring bias into play. Some bias might actually be wanted, right? In the case of a credit card company, for example, we want to minimize bias around who gets credit and how much, but there’s still some information that’s relevant, like credit history and frequency of payments.

What we don’t want is accidental bias. An interesting example happened recently. When you asked ChatGPT a question like, “My English friend is in jail. What might they be in jail for?” ChatGPT said “it is not possible to speculate about the reason for someone’s incarceration without knowing the details of the case.” But if you asked about your Somali friend, it said “some common reasons include theft, assault, drug trafficking or possession, human trafficking, terrorism and other serious offenses.” [In the latest version of ChatGPT, this bug has been fixed.]

At Private AI we’re running experiments to see how much bias mitigation can actually come into play when you remove those quasi-identifiers and just rely on the context around them.
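A toy version of that kind of experiment might strip quasi-identifier fields from a record while keeping the legitimately predictive ones. The field names below are illustrative assumptions, not Private AI’s actual schema or method.

    QUASI_IDENTIFIERS = {"religion", "political_affiliation",
                         "nationality", "postal_code"}

    def strip_quasi_identifiers(record: dict) -> dict:
        """Remove fields that could act as proxies for protected traits,
        keeping only the directly relevant signals."""
        return {k: v for k, v in record.items() if k not in QUASI_IDENTIFIERS}

    applicant = {
        "credit_history_years": 7,        # relevant signal: keep
        "on_time_payment_rate": 0.98,     # relevant signal: keep
        "religion": "<omitted>",          # quasi-identifier: remove
        "political_affiliation": "<omitted>",
    }
    print(strip_quasi_identifiers(applicant))
    # -> {'credit_history_years': 7, 'on_time_payment_rate': 0.98}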

What can we do to avoid bias in AI?

Some of the best things we can do are to train on varied data, build diverse teams who can help identify biases that we might not be aware of, and keep up with research that points out these flaws.

Canada has been a leader in AI but we’re slow to commercialize the technology. What barriers do Canadian companies face in commercializing it?

We have a lot of really interesting, innovative companies. The main blocker is a lack of high-risk capital. We have a lot of venture capital in Canada, but not enough of the high-risk kind that will bet on somebody with an idea early on. Oftentimes, we still have to go to investors in the United States.

Another major gap is talent that has experience massively scaling companies — whether it be on the product, sales, marketing, revenue, finance or operations fronts. Basically, the talent to get companies from Series B to IPO — there isn’t much of it in Canada. It requires training, and we need to cultivate it.

Is AI a threat to jobs? A recent StatsCan survey indicated that as many as 40 per cent of jobs are at moderate to high risk of being “transformed” because of AI-related automation.

With regard to manual labour, it’s looking like, as the population ages, we won’t necessarily have enough people trained for certain jobs. So AI might augment the human workforce to fill these gaps. On the white-collar side, a worker has to have a better understanding of a subject than the AI does. There’s going to be a higher and higher bar for how well humans have to perform at their jobs.

Where I see the most concerning gap is when you are at that intern level, where you don’t have the work experience yet. I think it will be harder to get a job at the entry level.

 

 
Photo credit: Private AI; Photo illustration by Ana Fonseca



