How to Take AI Models from Experiments to Real-World Use

Last year, when the excitement around AI reached its peak, many companies across different industries poured resources into creating their own AI models. However, moving these experimental AI models into actual, everyday use has proven to be a tough challenge for many.


The path from experimenting with machine learning in a controlled setting to running AI systems in the real world is filled with hurdles, including siloed data, complex deployment processes, and governance issues.

Despite these challenges, recent data shows a positive shift. Reports indicate a 1,018% year-over-year rise in AI models being deployed to production, far outpacing the 134% increase in AI experiments. That gap signals the field is maturing, with more projects making it past the experimental stage.

To understand how companies are successfully scaling their AI models from experiments to production, Techopedia interviewed Naveen Zutshi, the Chief Information Officer (CIO) at Databricks.


About Naveen Zutshi


Naveen Zutshi is the CIO of Databricks, a company known for its work in AI and data. Before this, he was the CIO at Palo Alto Networks, where he oversaw analytics, AI, applications, infrastructure, and operations. He has also held key roles at Gap and Cisco, contributing to his vast experience in technology and operations.


Key Insights from Naveen Zutshi

Naveen Zutshi shared his thoughts on the challenges and strategies involved in moving AI models from the experimental phase to real-world production.


1. Readiness for Production Deployment

When asked how long an AI model should stay in the experimental phase before moving to production, Naveen emphasized that it depends on the intended use; for an AI product like ChatGPT, the time spent experimenting varies with what you're building. It's important to test the AI's performance against human results, run A/B tests, and refine the model based on feedback. Monitoring these factors builds confidence before the model is deployed in a real-world setting.
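
As a rough illustration of the kind of evaluation loop Naveen describes, the sketch below splits prompts between two model variants and compares reviewer ratings. The call_model() and human_rating() helpers are hypothetical stand-ins for your own model endpoints and review process.

```python
import random

# Hypothetical stand-ins -- replace with your own model endpoints and review workflow.
def call_model(variant: str, prompt: str) -> str:
    """Route the prompt to model variant 'A' or 'B' and return its answer."""
    return f"[{variant}] answer to: {prompt}"

def human_rating(response: str) -> int:
    """A reviewer (or scoring rubric) rates the response from 1 to 5."""
    return random.randint(1, 5)

prompts = ["Summarize this support ticket", "Draft a reply to the customer"]
scores = {"A": [], "B": []}

for prompt in prompts:
    variant = random.choice(["A", "B"])           # simple 50/50 A/B split
    response = call_model(variant, prompt)
    scores[variant].append(human_rating(response))

for variant, ratings in scores.items():
    if ratings:
        print(variant, sum(ratings) / len(ratings))  # average quality per variant
```

Tracking these comparisons over time is what builds the confidence to put a model in front of real users.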


2. Key Factors for Successful AI Scaling

To successfully scale AI models, Naveen highlighted several key factors:

  • Starting with a Strong Model: Begin with a robust model, whether it's a frontier or open-source model, and build on it.
  • Understanding User Experience: It’s crucial to consider how users will interact with the AI solution and how their behavior might change.
  • Implementing a Strong SDLC Process: A solid software development lifecycle (SDLC) process is essential for training, improving, and managing models and data.
  • Ensuring Data Governance: Data quality and governance are critical. This involves protecting data through access controls and preventing data leakage, which is especially important for businesses.
  • Maintaining Accuracy: Accuracy is vital, especially in business settings where AI outputs need to be reliable. Techniques like retrieval-augmented generation (RAG), which grounds responses in existing data, can improve accuracy (see the sketch after this list).
  • Considering Governance and Ethics: Broader governance aspects, such as AI policies, security, and bias mitigation, should also be factored in for responsible AI deployment.
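
To make the retrieval-augmented generation idea concrete, here is a minimal, library-free sketch: it ranks a few documents by word overlap with the question and builds a grounded prompt from the best matches. The documents and scoring are toy assumptions; a production system would use a vector index and whichever LLM you have deployed.

```python
# Toy document store -- in practice this would be your governed enterprise data.
documents = [
    "Refunds are processed within 5 business days.",
    "Enterprise plans include SSO and audit logs.",
    "Support is available 24/7 via chat.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the question and keep the top k."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

question = "How long do refunds take?"
context = "\n".join(retrieve(question, documents))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this grounded prompt is what gets sent to the model
```

Grounding the prompt in retrieved data keeps answers tied to facts the business already trusts.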


3. Technical Challenges in Transitioning AI Models

Naveen pointed out three major technical challenges in moving AI models from experiments to production:

  • Data Quality: Access to clean, well-managed data is crucial. Even if you have a lot of data, the key question is how clean it is (a minimal set of checks is sketched after this list).
  • Strong SDLC Process: Scaling models depends on a well-executed software development lifecycle, with each step from training through release carried out reliably.
  • Change Management: Educating and preparing the user base for the AI models is often overlooked. Users need to feel comfortable using the models and see their benefits.
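
As a minimal illustration of the data-quality point above, the sketch below computes a few basic cleanliness metrics (null rates and duplicate rows) with pandas on a hypothetical sample; a real pipeline would run such checks automatically and fail or alert when an agreed threshold is breached.

```python
import pandas as pd

# Hypothetical sample of the data feeding a model -- swap in your real source.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4, None],
    "signup_date": ["2024-01-02", "2024-01-05", "2024-01-05", None, "2024-02-10"],
})

checks = {
    "null_rate_customer_id": df["customer_id"].isna().mean(),
    "null_rate_signup_date": df["signup_date"].isna().mean(),
    "duplicate_rows": int(df.duplicated().sum()),
}

for name, value in checks.items():
    print(f"{name}: {value}")
```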

4. Strategies for Data Quality and Governance

To maintain data quality and ensure proper governance, Naveen recommended:

  • Developing a Robust Data Management Strategy: This includes setting up a centralized repository, implementing access controls (illustrated in the sketch after this list), and defining clear data metrics.
  • Collaborating with Stakeholders: Working closely with business stakeholders to identify data champions and stewards is crucial for data cleansing and governance.
  • Viewing Data Quality as an Ongoing Process: Data quality requires continuous attention and should be treated as an ongoing effort.
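
Purely as an illustration of the access-control idea (not how any particular platform implements it), the sketch below maps tables to the roles allowed to read them; in practice you would lean on the governance features of your data platform rather than application code.

```python
# Hypothetical table-to-role permission map -- a stand-in for real catalog-level grants.
PERMISSIONS = {
    "sales.transactions": {"analyst", "finance"},
    "hr.salaries": {"hr_admin"},
}

def can_read(role: str, table: str) -> bool:
    """Return True only if the role has been granted access to the table."""
    return role in PERMISSIONS.get(table, set())

print(can_read("analyst", "sales.transactions"))  # True
print(can_read("analyst", "hr.salaries"))         # False
```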

5. Future Technologies Shaping AI Scaling

Looking ahead, Naveen identified three key areas that will impact AI scaling in enterprises:

  • Advancements in Models: The shift from a few dominant models to a wider range of high-quality options, including open-source large language models (LLMs), will offer businesses more control, flexibility, and cost-effectiveness.
  • Enhanced Model Capabilities: AI is evolving from simple chatbots to more complex models capable of handling intricate tasks, such as autonomously completing multiple steps in processes like booking travel.
  • Data Innovation: New data types, like chain-of-thought data, are emerging, which will help models better understand human reasoning and solve problems more effectively.

6. Balancing Work and Personal Life in a High-Pressure Role

Balancing a demanding tech role with personal life is challenging. Naveen shared that he blends his work with his personal life, finding time for activities he loves, like hiking. Disconnecting, even briefly, allows him to be present with his family and recharge.

Overall, transitioning AI models from experiments to production requires careful planning, strong processes, and ongoing attention to data quality and governance. As AI technology continues to evolve, businesses must stay ahead by adopting emerging tools and strategies.