Navigating Vendor Onboarding with an 'AI' Product

How vendor onboarding for "AI' products has evolved over the last two years


DiligenAI, our product built on large language models, has been in the market for almost two years. In that time, we have been through vendor onboarding at several dozen organizations, large and small. The landscape has evolved significantly, especially in how companies approach onboarding products that use generative AI. On average, we have seen onboarding times increase by one to two months because of the additional reviews triggered by the AI component of our offering.

Organizations now tend to fall into three distinct camps when evaluating generative AI products during the vendor onboarding process:

  • No Change: These companies evaluate AI-driven products the same way they evaluate any other software.
  • Nuanced Approach: These companies recognize that AI introduces new factors that require additional exploration. They ask thoughtful questions based on an understanding of the technology without over-complicating the process.
  • 'Kitchen Sink' Approach: These companies go to great lengths when evaluating AI products. Any mention of AI triggers an extensive review process involving exhaustive questionnaires and internal review boards.

The most productive conversations happen when companies adopt a balanced, informed approach. To encourage this, we proactively provide a clear policy outlining how we use large language models in our service.

What Should Companies Ask?

While many privacy and data security evaluation questions are no different from what a prospective client would ask any software provider, there are a few key areas where AI solutions warrant special attention. If your organization is evaluating a product that leverages generative AI, here are some critical questions you should be asking:

  1. Does the vendor use an external provider for AI models? Most vendors' generative AI features leverage one of the larger frontier model providers such as OpenAI, Google, or Anthropic. While some companies may run open-source language models locally, this is less common. Understanding the underlying provider will help companies assess data handling practices and potential security risks.
  2. If there is an external AI provider, do they retain any client data? Ask for specifics on what data is retained, why it is retained, and for how long.
  3. Does the vendor (or model service provider) use client input or output data for training? Some services may utilize customer data to improve their models unless explicitly configured otherwise. Be clear on what data practices are in place to protect your sensitive information.
  4. Does the vendor (or model service provider) use input or output data for use with other clients? Data sharing across clients can introduce confidentiality risks. Be sure to clarify whether your data is kept isolated or potentially exposed to other customers.
  5. How can content provided by the large language model be tested for accuracy? While generative AI can be highly effective, it is not immune to producing incorrect or misleading information. Understanding how the vendor validates outputs and ensures content quality will help you assess the reliability of the solution (a minimal sketch of one approach follows this list).
  6. Is content generated by AI clearly identified for users? Users should know how the information in a service was produced. Clearly indicating when content was generated by an AI system sets appropriate expectations and improves the overall experience (see the second sketch after this list).
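
To make question 5 concrete, here is a minimal sketch of a "golden set" spot check: a handful of questions with answers your own experts have verified, rerun periodically against the vendor's AI feature. The ask_vendor function is a hypothetical stand-in for whatever interface the product actually exposes, and the sample questions and answers are purely illustrative.

    from typing import Callable

    # Curated questions paired with answers your own experts have verified.
    GOLDEN_SET = [
        ("What year was the agreement signed?", "2021"),
        ("Who is the counterparty?", "Acme Corp"),
    ]

    def ask_vendor(question: str) -> str:
        # Stub standing in for a call to the vendor's AI feature;
        # in practice, wire this to the product's real interface.
        return "The agreement was signed in 2021 with Acme Corp."

    def accuracy(ask: Callable[[str], str]) -> float:
        # Fraction of golden questions whose verified answer appears
        # in the AI-generated response.
        hits = sum(
            1 for question, expected in GOLDEN_SET
            if expected.lower() in ask(question).lower()
        )
        return hits / len(GOLDEN_SET)

    if __name__ == "__main__":
        print(f"Spot-check accuracy: {accuracy(ask_vendor):.0%}")

The substring matching here is deliberately crude; the point is less the scoring method than having a small, repeatable baseline you can rerun whenever the vendor updates their models.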
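
For question 6, one simple pattern is to attach provenance metadata to every piece of content the service returns, so the front end can render an "AI-generated" badge. This is a sketch only; the ContentBlock structure and its field names are assumptions made for illustration, not a standard schema or any specific vendor's design.

    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class ContentBlock:
        text: str
        source: str            # e.g. "human", "ai_generated", "ai_assisted"
        model: Optional[str]   # which model produced it, if any
        generated_at: str      # ISO 8601 timestamp

    summary = ContentBlock(
        text="Key risks: indemnification cap, auto-renewal clause.",
        source="ai_generated",
        model="example-llm-v1",  # placeholder model name
        generated_at=datetime.now(timezone.utc).isoformat(),
    )

    # A UI can key off `source` to label AI-generated material for users.
    print(json.dumps(asdict(summary), indent=2))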

By asking these targeted questions, buyers can gain a clearer understanding of how AI technologies are integrated into a vendor's product and what safeguards are in place to protect their data.

As language models and generative AI are integrated into more aspects of software, we expect a more standardized vendor onboarding process to emerge.