Cohere, a Toronto-based AI startup that provides language models to power chatbots and search engines, recently raised US$270 million. It is the latest sign that the appetite for artificial intelligence continues unabated.
But the rampant adoption of tools with such incredible potential and disruptive power is also sounding alarms. In March, the Future of Life Institute published an open letter (which has since gathered more than 33,000 signatures) calling for a six-month pause on the training of AI systems more powerful than GPT-4. As Emilia Javorsky, a director at the institute, put it: “The speed at which it’s moving is outpacing our ability to make sense of it, know what risks it poses, and our ability to mitigate those risks.”
Although Parliament is considering Bill C-27, privacy legislation that would, if passed, also regulate the design, development and use of AI systems, government, by necessity, moves slowly and meticulously. AI technology, on the other hand, progresses at lightning speed.
That raises all sorts of questions, for which there are no easy answers. How can AI be created responsibly? Can it be regulated? Who needs to be involved to ensure that it is used for the benefit of society? We asked four experts to weigh in.
Nick Frosst, co-founder of Cohere
AI is changing a lot, but the conversation is changing faster. There’s a lot of talk about long-term existential risk, and I worry that it obscures some of the more immediate consequences the deployment of this technology will have on the job market and education. We’re really thinking about making sure we’re happy with the application of this technology today, as it is right now, not what happens if this technology takes over. A lot of these conversations are getting muddied, and that makes it difficult.
As builders of technology, we want to make sure that its impact on the world is something we’re happy about and that it’s used for good. So we spend a lot of time on data filtration and human feedback, making sure that we’re aligning the model with our own beliefs and views about how this tech should be used. We try to engage with a wide variety of people, and that includes other people in the space and the broader community.
Ultimately, it falls on the creators of the technology to make something they’re proud of. In the early 2010s, social media companies would claim, ‘We’re just making the tech; we can’t decide what’s good and what’s bad.’ That no longer flies. People expect technology companies to make decisions and to act as best they can.
Deval Pandya, vice-president and head of AI engineering at the Vector Institute
We are in the age of machine learning and AI, and it’s going to affect everything. My vision is that it will create massive positive change in addressing some of the biggest challenges we face, such as the climate crisis and health care. At the same time, I don’t want to downplay the fact that the risks of AI are very real.
We have enough resources and bright minds to work on both the near-term risks and the longer-term potential existential risks. We have the tools and the know-how to adopt most of machine learning safely and responsibly. But we do need sensible governance to create guardrails that keep social norms intact, so that, for example, people can’t meddle with the democratic process of elections. That means there are certain rules you will have to follow, certain criteria you will have to meet.
And what are those criteria? What is the equivalent of auditing for a machine-learning system? There must be thoughtful discussion. AI is affecting every industry and every aspect of society. It has far-reaching implications, involving not only technical aspects but also social, ethical, legal, economic and political considerations. So we need diverse perspectives; we need social scientists, political scientists, social workers, researchers, engineers, systems people and lawyers to come together to create something that works for society.
Golnoosh Farnadi, Canada CIFAR AI Chair; professor at McGill University; adjunct professor at the University of Montreal; and core faculty member at Mila (Quebec Artificial Intelligence Institute)
We have to change the narrative that thinking about ethical AI is harmful to business. We need trusted parties, verifiers and auditors to first determine what metrics and standards are needed, and then to create them. We have them in the food industry. We have them in the car industry. We have them in medicine. So we need to create these kinds of standards for AI systems, standards the public will trust and that will change the way companies deploy their systems.
The danger of creating regulations quickly is that they won’t be the right ones; they will be too restrictive or too vague. Given the dynamic nature of AI, we need dynamic regulations. In the meantime, standards alone can create a safer environment. We need to take the time to test them so we can gain a better understanding of AI systems and then create the regulations we need.
Mark Abbott, director of the Tech Stewardship program at MaRS, which helps individuals and organizations develop ways to shape technology for the benefit of all
In all this dialogue around generative AI, people are calling for a pause; they’re calling for regulation. That’s great, but fundamentally we need to catch up on our broad stewardship capacity. As a society, we have strong muscles when it comes to developing and scaling tech, but weak muscles when it comes to stewarding it responsibly. And that’s a big problem.
The idea of bringing together different voices to steward technology is a Canadian-born concept co-created by hundreds of leaders from industry, academia, governments, non-profits and professional associations. They’ve come together to look at what it’s going to take to ensure we’re developing technology that is more purposeful, responsible, inclusive and regenerative.
The most apt metaphor is the environmental movement. It’s as if we’re awakening to the nature of our relationship with technology. Just as with the environmental movement, it’s not one policy, it’s not one group, it’s not just engineers. That means each of us has a role, companies have a role, governments have a role. Everybody has to start practising more stewardship.
The trick is to understand the technology in terms of its impacts and the values that are at play. Then you can make better values-based decisions and actually put them into action in your day-to-day life. This is especially important for those who have a direct role in creating, scaling and regulating technology. As tech stewards, we want to ensure AI and other technologies are shaping the world we want to see, not creating one of the dystopian scenarios we see when we go to the movies.