David Van Bruwaene was pursuing his PhD in philosophy at Cornell when he developed a passion for linguistics and natural language processing (NLP), the subfield of AI concerned with allowing machines to understand human language. After leaving academia to join VISR, an AI startup focused on applying NLP to detect cyberbullying on social media, Van Bruwaene says that he experienced firsthand the challenge of ensuring AI developers and business decision makers remain on the same page throughout the AI development process.
"Talks of AI regulation have accelerated at an unprecedented pace, and AI safety is a concern to just about everyone," Van Bruwaene told TechCrunch in an email interview. "The problem is that AI projects can go on forever without a clear path to production, because no one person can say for sure the AI is safe and compliant 'enough' to go to production."
After meeting Fion Lee-Madan, formerly an enterprise systems solutions architect at companies like Sapient, Intuit and ATG, Van Bruwaene convinced her to help co-develop a platform -- Fairly AI -- to assist organizations in managing the risks around their AI systems.
Incubated at Accenture's FinTech Innovation Lab during the pandemic, Fairly AI went on to become a fully fledged company and was accepted into the Techstars accelerator in Toronto earlier this year.
"Fairly enables data scientists and policy experts to accelerate AI adoption while minimizing risks by applying policies and controls to in-house and third-party AI systems," Van Bruwaene, who serves as Fairly's CEO, said. "Fairly is on a mission to enable the safe, secure and compliant adoption of AI."
Fairly, which is competing in this year's Startup Battlefield 200 at TechCrunch's annual Disrupt conference in San Francisco, aims to build a "marketplace" for AI policies and controls from partners, including the Partnership on AI and not-for-profit Responsible AI Institute. As Van Bruwaene explains, Fairly takes frameworks and governance, risk and compliance processes from other industries, mainly the financial services industry, and adapts and extends them to AI.
Deployed on-premises or in a private cloud, Fairly's platform benchmarks a company's AI models and datasets against internal and external policies, as well as relevant standards and regulations. The platform can run manually or continuously, integrated with existing CI/CD pipelines via a single line of code.
The goal is to make it easier for laypeople to understand risk and compliance issues that might be associated with an AI system, Van Bruwaene says -- and to mitigate new risks as they emerge.
He points to Zillow Offers as one example of an algorithm gone amiss due to a governance shortfall. In late 2021, Zillow announced it was shutting down Zillow Offers, which offered homeowners cash for their properties in as little as two days, after the algorithm behind the service caused Zillow to buy thousands of houses without factoring in needed repairs or the skyrocketing costs of materials and labor.
Fairly AI's monitoring dashboard, which keeps track of the status of AI models and datasets. Image Credits: Fairly AI
The advent of generative AI tools such as ChatGPT has further complicated risk management. A McKinsey survey this year found that just 21% of those using AI at work said their employers have put policies in place for how employees should -- and shouldn't -- use AI. And only 32% said their company was working to mitigate the risk of inaccurate information from AI, while 38% said their organization was taking steps to deal with the related cybersecurity threats.
"Our target audience for Fairly told us that they don’t want to see all the graphs and charts common in existing machine learning monitoring and observability tools, because they don’t understand them," Van Bruwaene said. "Our technology allows us to translate compliance requirements into engineering requirements to produce 'traffic light' signals for decision makers."
Fairly's launch comes at an inflection point for the AI industry. Policymakers, particularly overseas, are contemplating ways to standardize AI regulation to prevent harms that might arise from the technology's deployment.
The EU is inching closer to enacting the AI Act, which aims to introduce a unified regulatory and legal framework for AI systems. Last year, the White House released the blueprint for an "AI Bill of Rights," a list of proposed principles to guide how to design and use AI tech. And China has imposed restrictions on the ways in which generative AI, like AI art tools, can be deployed on the public web.
Stanford University's 2023 AI Index shows that 37 bills related to AI were passed into law around the world in 2022. It's a lot for companies employing AI to stay on top of, Van Bruwaene rightly points out.
"Companies need to use a tool like Fairly to help set up policies and controls upfront, so that AI developers know the goal post. It's a business decision ultimately," he said. "By providing true explainability using both qualitative and quantitative controls, Fairly can help accelerate the compliance process."
Fairly is pre-revenue, but Van Bruwaene claims that the company has already completed "revenue-generating" pilots with co-design partners in four different countries. "We've expanded our beta program to a dozen customers, and we're in talks with some government agencies -- and have already started a pilot with at least one of them," he added.
The growth comes despite the expanding number of players in the AI risk and compliance management space. There's Credo AI, for example, which offers a framework to give companies a window into their AI governance. Holistic AI evaluates machine learning models for fairness and compliance, similar to Fairly. And Monitaur is developing an AI governance solution for the insurance sector.
Allied Market Research projects that the market for AI trust, risk and security management products will reach $7.4 billion by 2032, up from $1.7 billion in 2022. But Van Bruwaene isn't concerned about the competition -- or at least gives the impression that he isn't.
"Since we started during the pandemic, we were building as a remote team from day one," he said. "In Canada, we have scientific research and experimental development credits, so our R&D burn rates are almost 40% less than our U.S.-based competitors."
To support its customer acquisition efforts, Fairly recently closed a $1.7 million pre-seed round led by Flying Fish Partners with participation from Loyal VC, Backstage Capital, XFactor Ventures, NEXT Canada and Techstars Toronto. The bulk of the new funds will be put toward product development, building a go-to-market team and expanding Fairly's staff from 12 employees and contractors to 24 within the next year, Van Bruwaene said.
"Our platform is really trying to help organizations understand their operational risk, model risk and compliance risk, so that they can make an informed 'go/no-go' decision and avoid spending millions on an AI model only to get stuck in a perpetual risk and compliance review cycle," he added.