Roundup
Clearing the path for AI adoption
Cornami
Securing AI data
Tech industry veteran Wally Rhines, the longtime CEO of Mentor Graphics and a former senior executive at Texas Instruments’ semiconductor and data systems groups, joined Cornami Intelligent Computing in 2020 with one overriding goal: to help make data more secure. The fabless semiconductor company is developing a chip to power fully homomorphic encryption, a data protection technique that Rhines says will be essential to the commercialization of AI models.
“As people build their own foundation models, they’ll be able to capture the expertise associated with their business,” Rhines says. As they share that data internally, with customers, or even with other machine learning models, they’ll have to ensure it is protected. “What you’d really like to do is build a machine learning model based on confidential data—medical data for example—and then be able to do encrypted queries to an encrypted model and get encrypted results. Then you can charge the person who queries your model and make data access a business.”
That’s where fully homomorphic encryption comes in. The technique allows computer systems to access and manipulate encrypted data without the need to decrypt it first. “You can perform any mathematical computation on the data,” Rhines says. “Because you never have to decrypt it, it means your data is never exposed to anyone else when it’s in the data center.”
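To make the idea concrete, here is a minimal sketch in Python using the classic Paillier cryptosystem. Paillier is only additively homomorphic; fully homomorphic schemes of the kind Cornami is building hardware for also support multiplication on ciphertexts and are far more computationally demanding. The tiny key below is for readability only.

```python
# Toy demonstration of computing on encrypted data; NOT Cornami's design.
# Paillier is only additively homomorphic, and these primes are far too
# small for real use (production keys run to 2,048 bits or more).
from math import gcd

p, q = 61, 53
n = p * q                    # public modulus
n2 = n * n
g = n + 1                    # standard generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1), kept private
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)    # private decryption constant

def encrypt(m: int, r: int) -> int:
    """Encrypt m with randomness r (r must be coprime to n)."""
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c1, c2 = encrypt(42, 17), encrypt(100, 23)
c_sum = (c1 * c2) % n2       # multiplying ciphertexts adds the plaintexts
assert decrypt(c_sum) == 142 # the server never saw 42 or 100
```

The point of the sketch is the last three lines: the party holding the ciphertexts computes a sum without ever holding a decryption key or seeing the underlying numbers.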
For all its power, fully homomorphic encryption, which the U.S. Department of Defense has called the “holy grail” of encryption, is so computationally intensive that it cannot be handled by traditional chip technology. Cornami has developed the software and hardware to do computations on encrypted data in real time.
“Everyone who wants to keep their data or their data model confidential will eventually move to fully homomorphic encryption,” Rhines says. “It’s the only truly secure way to protect that data, where it’s never exposed, it can never be stolen.”
Firework
Building models based on trusted data
Artificial intelligence has been part of the product roadmap at Firework, a video commerce startup, since the Silicon Valley–based company’s founding in 2017. But the company has been guided by two principles: making sure AI is adopted not to eliminate employees but to make them more effective, giving them the tools to do their jobs better; and maintaining trust with customers. “We always ask ourselves how we can use AI not to replace humans, but to amplify them,” says Jerry Luk, Firework’s co-founder and president.
Firework helps businesses create direct, meaningful connections with customers through interactive video experiences, and it has deployed AI in a number of its products and services. Firework’s offering includes a service that helps brands create videos based on content taken from product pages and a tool that assists sales associates as they interact with customers, giving them contextually relevant product information and improving the customer experience. Firework has also recently launched AVA, an AI-powered virtual sales assistant that can have “face-to-face” conversations with customers around the clock.
Inaccuracies or hallucinations generated by the large language models would immediately undermine customer trust, Luk says. To avoid such pitfalls, Firework has worked to ensure that all the information fed into those LLMs comes from trusted sources, whether it’s a product database, product descriptions from a client website, or prior conversations between sales associates and customers. “For an enterprise-grade solution, having accurate information is the most important priority,” Luk says.
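Firework has not published its pipeline, but the pattern Luk describes, confining the model to vetted sources, can be sketched roughly as follows; every name and data value here is hypothetical.

```python
# Hypothetical sketch of grounding an LLM in trusted sources; Firework's
# actual pipeline is not public, and all names below are invented.

TRUSTED_SOURCES = {
    "product_db": "SwimPro watch: waterproof to 50m, 18-hour battery life.",
    "site_copy": "Free returns within 30 days of purchase.",
    "sales_chats": "Associates recommend the silicone band for swimmers.",
}

def build_grounded_prompt(question: str) -> str:
    """Assemble a prompt that confines the model to vetted facts."""
    context = "\n".join(f"[{name}] {text}" for name, text in TRUSTED_SOURCES.items())
    return (
        "Answer the shopper's question using ONLY the sources below. "
        "If the answer is not in the sources, say you don't know.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_grounded_prompt("Can I swim with this watch?"))
```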
Firework had been working on strengthening the accuracy of its AI models long before the introduction of ChatGPT. “It’s not about moving fast, but about having the right priorities,” Luk says. “We have been working for almost two years to get to where we are.”
Whatfix
Training engineers in AI to avert a talent shortage
For nearly a decade, Whatfix has been offering a platform that empowers software users to unlock the true potential of SaaS applications across web, desktop, and mobile interfaces. The company’s platform allows customers to create in-application nudges, product tours, onboarding guidance tips, and interactive walkthroughs that aim to help users be more engaged and more productive with the applications they use in their jobs. “Technology cannot be standard,” says Vara Kumar, co-founder and head of R&D and pre-sales. “We believe it has to be specific to a particular individual who is using it. Through a layer on top of the software, we make that software very specific to an individual who is using it.”
Kumar and co-founder and CEO Khadim Batti understood that generative AI created a massive opportunity for Whatfix to supercharge its offerings. But they wondered whether Whatfix would have the talent to take advantage of the opportunity within its business operations. It’s a concern that reverberates across the industry: In our most recent CEO survey, 34% of respondents said a shortage of AI knowledge and skills within their organizations could get in the way of harnessing the technology’s full potential. A year after the launch of ChatGPT, Kumar says Whatfix’s experience suggests the AI talent crunch may not be so dire after all.
“I don’t believe deep learning is such a hard skill for engineers to pick up,” Kumar says. “A good software engineer can learn very quickly.”
To unlock the potential of AI within the company, Whatfix has launched an initiative pushing the majority of its engineers to gain a deeper understanding of how to use large language models. “With a couple of months of effort, people can really pick up this technology,” Kumar says.
The internal push has enabled Whatfix not only to quickly build generative AI capabilities into its products but also to develop new sets of features. A customized performance review application, for example, may now offer users autocomplete suggestions or prompts based on feedback that colleagues have already entered into a review. Similarly, an internal search for company information may offer ChatGPT-like answers, in which the AI summarizes the information it finds in a user-friendly way.
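Whatfix has not detailed these implementations, but the autocomplete feature can be illustrated with a minimal sketch; the data and function below are purely hypothetical.

```python
# Purely illustrative; Whatfix has not published this implementation.
# Suggest completions drawn from feedback colleagues have already
# entered into a performance review.

prior_feedback = [
    "Consistently delivers projects ahead of schedule",
    "Consistently keeps stakeholders informed of status",
    "Could improve documentation of design decisions",
]

def suggest(prefix: str, limit: int = 3) -> list[str]:
    """Return prior feedback lines that start with the typed prefix."""
    p = prefix.lower()
    return [f for f in prior_feedback if f.lower().startswith(p)][:limit]

print(suggest("consistently"))
# ['Consistently delivers projects ahead of schedule',
#  'Consistently keeps stakeholders informed of status']
```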
The success Whatfix has had in getting employees to learn and apply AI has made Kumar more confident that not only his company but also others across the industry will be able to unlock AI’s potential soon. “I’m excited about the overall productivity improvements and operational efficiency generative AI can bring,” he says. “I’m even more excited about how the future of human and machine interactions will change and how engineers will harness those new capabilities.”
LegalOn Technologies
Testing, training, and human review to maintain accuracy
In just about every analysis of sectors most likely to be disrupted by AI, legal services lands in the top ranks. At the same time, law is one of the fields most sensitive to the issues that still plague today’s AI systems, including data protection, confidentiality, and, most notably, accuracy. A model that makes things up could lead to malpractice allegations. Already, lawyers who have relied on AI that delivered erroneous research or analyses have been fined, fired, or sanctioned.
A keen awareness of these challenges has driven product development at LegalOn Technologies since the company was founded in 2017, says U.S. CEO Daniel Lewis. “In professional settings, it’s really important to build products that have trust,” Lewis says. “To create that trust, you need to do a variety of things on top of ChatGPT or any other generative AI technology.”
LegalOn offers AI contract review software that helps legal teams assess and strengthen contracts before signature, finding and fixing potential gaps and pitfalls. “We do a variety of testing and training to ensure that the results are of high quality and within defined guardrails, so that hallucinations are not taking place,” Lewis says.
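LegalOn’s guardrails are proprietary, but one generic check consistent with Lewis’s description, verifying that any clause the model quotes actually appears verbatim in the contract under review, can be sketched as follows; all names and text here are hypothetical.

```python
# Hypothetical guardrail sketch; LegalOn's real checks are not public.
# Reject a model suggestion if any clause it quotes does not appear
# verbatim in the contract under review.
import re

def quoted_clauses(model_output: str) -> list[str]:
    return re.findall(r'"([^"]+)"', model_output)

def passes_guardrail(model_output: str, contract_text: str) -> bool:
    return all(clause in contract_text for clause in quoted_clauses(model_output))

contract = "Either party may terminate this agreement with 30 days written notice."
good = 'The clause "Either party may terminate this agreement with 30 days written notice." has no cure period.'
bad = 'The clause "Vendor may raise fees at any time." is one-sided.'
print(passes_guardrail(good, contract))  # True
print(passes_guardrail(bad, contract))   # False
```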
Lewis says that while generative AI technologies are improving at a rapid pace, these kinds of guardrails against potential risks are going to be necessary for the foreseeable future. “The technology is going to get better and better,” Lewis says. “It’s hard to predict at what pace. At least for now, it’s very clear that significant work needs to be done on top of these types of models to upgrade them for professionals.”
Equally important, Lewis says, is making sure that AI tools are integrated into professional workflows. “Contracting, for example, doesn’t exist alone,” he says. “The processes before and after you review a contract are daily tasks for lawyers. AI can play a role, but it needs to be designed intuitively. It needs to be connected with all the other activities that lawyers conduct on a daily basis.”