Making responsible and ethical AI a priority
How four companies are addressing issues of trust and accountability around AI
Too many of the more than 3,000 security software vendors engineer products that work at first, but soon leave customers stuck in 12- or 24-month contracts that have little value, Gosschalk says. “That’s not okay.”
The path to offering a warranty began with service-level agreements that let customers walk away from existing contracts if Arkose Labs' products stopped working as advertised. After years without a single customer walking, Gosschalk came up with the idea for a warranty.
This approach to business has helped Arkose Labs land an enviable roster of tech giants including Amazon, Microsoft, and OpenAI.
Leah is the fastest-growing new product in the company’s history, in part because it lets legal departments help clients work through issues or get contracts approved far faster than before.
While customers’ enthusiasm about Leah is great for business, Misra says the ethical thing to do is to temper expectations with a dose of reality. Rather than feed the excitement, he works to explain what generative AI can and cannot do. “Expectation management is fundamental,” he says.
ContractPodAi started out by selling contract lifecycle services to law firms and legal departments. It then developed Leah to help working lawyers handle all the specific tasks of their profession.
But when Meesho deployed AI to help match products with customers, the technology undermined, rather than promoted, the founders' goal of democratization. Instead of broadening the array of products consumers would see, the AI algorithms narrowed it. “People would say, ‘Why do I keep seeing the same kinds of products?’” says Barnwal, who is Meesho’s CTO. Too often, the recommendations made it harder for little-known merchants to establish themselves online.
For Barnwal and his team, this was a case of AI algorithms not helping the company achieve its ultimate goals. “This was going against our mission of democratizing internet commerce for everyone in India,” says Barnwal.
Meesho devised a new plan, designed to help make sure every new product posted on the site got a reasonable number of page views. This expanded the selection for consumers and helped merchants get a fair shake. Beyond giving products more exposure, Meesho’s systems also delivered feedback on price and quality issues so merchants could course-correct. New algorithms factored in customer complaints, return rates, and other signals to determine whether a seller had earned further support.
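The seller-support decision described above can be pictured as a simple weighted-scoring heuristic. This is a hypothetical sketch, not Meesho's actual system: the signal names, weights, and threshold are all illustrative assumptions.

```python
# Hypothetical sketch of a seller-support scoring heuristic like the one the
# article describes. All weights and thresholds are illustrative assumptions,
# not Meesho's actual implementation.
def earns_further_support(complaint_rate: float,
                          return_rate: float,
                          avg_rating: float,
                          support_threshold: float = 0.6) -> bool:
    """Combine quality signals into one score and compare it to a cutoff."""
    # Lower complaint and return rates, and higher ratings, raise the score.
    score = (0.4 * (1 - complaint_rate)
             + 0.4 * (1 - return_rate)
             + 0.2 * (avg_rating / 5.0))
    return score >= support_threshold

# Example: a seller with few complaints, few returns, and strong ratings
# would keep receiving platform support under this toy rule.
print(earns_further_support(complaint_rate=0.02,
                            return_rate=0.05,
                            avg_rating=4.5))  # → True
```

In a real marketplace these signals would be tracked over rolling windows and combined with far more features, but the core idea is the same: aggregate quality evidence into a score that gates continued promotion.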
As machine learning and other AI technologies took off in the late 2010s, Icertis organized itself for accountability in the ethical use of AI. For example, its chief counsel was put in charge of making sure the company used only training data it owned or had permission to use. “That’s the first tenet of responsible AI: Make sure you have the rights to use all of the data you use for training,” says Darda.
The chief information security officer was responsible for the other two key aspects of AI ethics, as Darda sees it: devising testing regimens to validate that the accuracy and relevance of AI systems’ output meet customer requirements, and making sure that output is delivered to the customer securely and reliably.
These clear lines of responsibility and accountability put the company in a good position to handle the AI ethics audits and other requirements its customers now face. It not only developed processes to quickly satisfy these audits, but also created ways to promote those processes in its marketing outreach.