NuEnergy.ai has secured a U.S. patent that covers key aspects of its Machine Trust Index, which helps communicate sophisticated technical AI assessments in terms that people who govern the use of AI can understand and act upon.
“This patent is not just a certificate,” CEO and co-founder Niraj Bhargava said in a news release. “It is a recognition of the necessity of research and investments to generate valued solutions to a growing problem that has yet to be fully addressed.”
NuEnergy.ai – which has staff in Ottawa, Waterloo, Toronto, Montreal and Vancouver – was launched in 2018 to provide AI-management software and consulting services that help clients set up “guardrails” that mitigate risk and protect trust in an organization. Those guardrails consist of governance plans and the software needed to measure essential AI “trust parameters” such as privacy, ethics, transparency and biases.
Given AI’s dependence on data sets, its growing use has sparked a global discussion about privacy, security, ethics, biases, cultural sensitivities and human rights. The goal, according to governance specialists, is to ensure that AI technologies are understandable, transparent and ethical.
“(NuEnergy.ai’s guardrails) act like a traffic light system: green means it is within acceptable organization boundaries, red means it is outside and should not be used, and yellow means it is acceptable with specific parameters and monitoring,” Bhargava told Tech News. “We work closely with our clients to establish these guardrails and continuously measure AI activities against them.”
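The traffic-light idea can be expressed as a simple threshold check. The sketch below is purely illustrative – the function name, thresholds and parameter scores are assumptions for the sake of example, not NuEnergy.ai's actual implementation:

```python
# Hypothetical sketch of a traffic-light guardrail check.
# Thresholds, function names and scores are illustrative assumptions,
# not NuEnergy.ai's implementation.

def guardrail_status(score, green_min=0.8, yellow_min=0.5):
    """Classify a trust-parameter score against organizational guardrails."""
    if score >= green_min:
        return "green"   # within acceptable organizational boundaries
    if score >= yellow_min:
        return "yellow"  # acceptable with specific parameters and monitoring
    return "red"         # outside boundaries; should not be used

# Example: evaluate a few hypothetical trust-parameter scores.
statuses = {param: guardrail_status(score)
            for param, score in {"privacy": 0.9,
                                 "transparency": 0.6,
                                 "bias": 0.3}.items()}
print(statuses)  # {'privacy': 'green', 'transparency': 'yellow', 'bias': 'red'}
```

In practice, the article notes, the thresholds themselves would be set jointly with each client and continuously re-measured as AI activity changes.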
The new patent – Methods and Systems for the Measurement of Relative Trustworthiness for Technology Enhanced with AI Learning Algorithms – is critical in determining a Machine Trust Index score, which forms an integral part of NuEnergy.ai's proprietary Machine Trust Platform software.
The platform is an enterprise-grade SaaS solution that monitors AI activity and surfaces insights through a governance dashboard. It assesses trust parameters such as privacy, ethics, transparency and bias, and helps mitigate risks such as AI drift. The company says it integrates global standards such as the federal government’s Algorithmic Impact Assessment (AIA) and Generative AI Guidelines, and can be tailored to incorporate additional governance standards and frameworks.
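The company does not disclose how individual trust parameters roll up into a single Machine Trust Index score. As a hypothetical illustration only – the weights, parameter names and 0–100 scale are assumptions, not the patented methodology – a simple aggregation might be a weighted average:

```python
# Hypothetical aggregation of trust-parameter scores into one index.
# Weights, parameter names and the 0-100 scale are illustrative
# assumptions, not NuEnergy.ai's proprietary Machine Trust Index.

def trust_index(scores, weights):
    """Weighted average of per-parameter scores (each on a 0-100 scale)."""
    total_weight = sum(weights[param] for param in scores)
    weighted_sum = sum(scores[param] * weights[param] for param in scores)
    return weighted_sum / total_weight

weights = {"privacy": 0.3, "ethics": 0.3, "transparency": 0.2, "bias": 0.2}
scores = {"privacy": 90, "ethics": 80, "transparency": 70, "bias": 60}
print(trust_index(scores, weights))  # 77.0
```

A real governance score would likely weight parameters according to each organization's risk profile and applicable standards, which is what the tailoring the company describes would determine.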
“Our clients can subscribe to the platform and assess their AI activities, identify necessary mitigations, and ensure compliance with established guardrails,” Bhargava said.
Bhargava and four co-inventors are credited on the patent: Fred Speckeen, Dr. Evan W. Steeg, Jorge Deligiannis and Dr. Gaston Gonnet.
“We take pride in contributing to Canada's responsible AI ecosystem and reinforcing our leadership in this crucial domain,” Bhargava said. “This patent underscores the unique value we bring to the table, and we are excited about the opportunities this patent opens up for us and our mission.”