The data scientist’s perspective: Five practical insights to accelerating innovation with GenAI

Here at CDP, we’ve delivered a range of Generative AI (GenAI) projects that use Large Language Models (LLMs). Each has been a journey of discovery, and sometimes frustration. But ultimately each has reinforced the potential for GenAI to dramatically accelerate innovation.

In an attempt to provide a useful contribution that cuts through the noise, we’ve distilled our learnings into a four-part series on how businesses, data scientists and product owners can leverage GenAI for success, with a final perspective from a GenAI-powered ChatBot.

In this third article, we draw from our experiences implementing GenAI from a data scientist’s perspective. To get a high-level view of LLMs, check out Part 1. Or for insight from the inside out, check out Part 4 of our series: Five practical insights to accelerating innovation with GenAI.

1. Accuracy

You can rely on LLMs for human-like behaviours and creativity. However, you need to apply one or more of the following techniques to ensure robust accuracy in your work.

Quality Data Sets: The foundation of accuracy lies in the quality of the data sets. By combining an off-the-shelf LLM with carefully curated, high-quality proprietary data, you create a robust foundation for generating precise and reliable content.

Verification Measures with RAG Integration: Implementing a search engine integrated with a Retrieval Augmented Generation (RAG) system and a dedicated fact-checking service adds layers of verification. This ensures that the output aligns with actual source material, fortifying the trustworthiness of the generated content.
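To make the flow concrete, here is a minimal sketch of the retrieve, generate, then verify loop in Python. The retrieve, generate and fact_check callables are hypothetical stand-ins for your own search engine, LLM client and fact-checking service, not any specific library’s API.

```python
# Minimal RAG-with-verification loop. retrieve(), generate() and fact_check()
# are hypothetical stand-ins for your own search, LLM and fact-checking services.

def answer_with_verification(question: str, retrieve, generate, fact_check,
                             max_attempts: int = 2) -> dict:
    """Retrieve supporting passages, generate a grounded answer,
    then verify that answer against the retrieved sources."""
    passages = retrieve(question, top_k=5)          # search engine / vector store
    context = "\n\n".join(p["text"] for p in passages)

    prompt = (
        "Answer the question using ONLY the sources below. "
        "Cite the source id for each claim.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

    for _ in range(max_attempts):
        draft = generate(prompt)                    # the LLM call
        report = fact_check(draft, passages)        # dedicated fact-checking service
        if report["supported"]:
            return {"answer": draft, "sources": [p["id"] for p in passages]}
        # Feed the checker's objections back in and ask for a revision.
        prompt += f"\n\nPrevious draft was rejected because: {report['reasons']}. Revise it."

    return {"answer": None, "sources": [], "note": "Could not verify against sources."}
```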

Fine-tuning for Precision: Fine-tuning a pre-trained LLM on specific tasks and datasets can yield highly accurate results tailored to particular domains. This approach allows for a more controlled output and can be especially effective in specialized fields where precision is paramount.
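One common, resource-friendly way to do this is parameter-efficient fine-tuning with LoRA adapters. The sketch below uses Hugging Face transformers and peft; the base model name, target modules and hyperparameters are illustrative assumptions rather than recommendations.

```python
# Parameter-efficient fine-tuning with LoRA adapters (transformers + peft).
# Base model, target modules and hyperparameters are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "mistralai/Mistral-7B-v0.1"   # assumed base model; substitute your own
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# Attach small trainable adapter matrices to the attention projections;
# the base weights stay frozen, keeping compute and storage costs low.
lora = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()   # typically well under 1% of the base model

# From here, train with the standard transformers Trainer (or your own loop)
# on a curated dataset of domain-specific prompt/response pairs.
```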

Prompt Engineering for Flexibility: On the other hand, employing prompting techniques provides a more flexible way to guide the LLM’s output. By carefully crafting prompts or queries, you can influence the type and style of the generated content, allowing for adaptability across a range of contexts and requirements.

Zero-shot prompting guides an LLM to produce its output in a particular manner without providing worked examples. One approach (zero-shot chain-of-thought) is to prompt the model to decompose its answer into logical steps, which encourages it to apply that reasoning when arriving at its final output.
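For example, a zero-shot chain-of-thought prompt can be as simple as appending an instruction to work step by step and then state a clearly marked final answer. In this sketch, llm is any callable that sends a prompt string to your model of choice and returns its reply.

```python
# Zero-shot chain-of-thought prompting: no worked examples, just an
# instruction to decompose the answer into numbered steps.
# `llm` is any callable wrapping your model client.

def ask_step_by_step(question: str, llm) -> str:
    prompt = (
        f"Question: {question}\n\n"
        "Work through the problem step by step, numbering each step, "
        "then give the final answer on a new line beginning 'Answer:'."
    )
    response = llm(prompt)
    # The numbered steps make the model's reasoning easier to audit;
    # only the text after 'Answer:' is returned as the structured result.
    return response.split("Answer:")[-1].strip()
```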

Peer review: LLMs are good at checking another model’s output for accuracy. By having two independent, but similarly capable, LLMs review each other’s work, it is possible to filter out errors that can emerge when relying on a single model on its own.
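A minimal sketch of that cross-check: one model drafts, a second (separately hosted) model reviews, and the draft is only accepted or revised on the basis of that review. The author and reviewer callables are hypothetical wrappers around your two models.

```python
# Two-model peer review: one LLM drafts, a second independent LLM reviews.
# `author` and `reviewer` are callables wrapping two separately hosted models.

def peer_reviewed_answer(task: str, author, reviewer) -> dict:
    draft = author(task)

    review = reviewer(
        "You are reviewing another model's answer for factual and logical errors.\n"
        f"Task: {task}\n"
        f"Answer under review:\n{draft}\n\n"
        "Reply with 'APPROVED' if it is sound, otherwise list the problems."
    )

    if review.strip().upper().startswith("APPROVED"):
        return {"answer": draft, "review": review, "approved": True}

    # Otherwise ask the author to revise using the reviewer's feedback.
    revised = author(f"{task}\n\nA reviewer raised these issues:\n{review}\nPlease revise.")
    return {"answer": revised, "review": review, "approved": False}
```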

Lastly, keep in mind that the output from LLMs always has the potential to mislead. Design in appropriate guard-rails from the start, but also set expectations for risk with the product owner and business throughout the project.

2. Resources

LLMs require substantial resources to operate as well as to train. Plan this in from the start and recognise the dependencies you may be creating for the business in terms of performance and budgets.

Optimize Compute Power: Recognize that LLMs require substantial compute power for optimal performance. Restricting resources may lead to a reduction in the quality of generated content. Therefore, investing in sufficient computational capabilities is crucial to maintain high standards of output.

Quantization: Convert the floating-point weights and activations of an LLM to lower-precision integer or fixed-point values so it can execute more efficiently on hardware with limited resources. This can cost some accuracy, but a quantized model can be fine-tuned or retrained to recover most of it. By reducing memory bandwidth and compute requirements, quantization enables large language models to be deployed on resource-constrained edge devices with minimal accuracy loss compared to the original model.
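In practice, quantization can be as simple as loading a model with quantized weights. The sketch below uses Hugging Face transformers with bitsandbytes 4-bit loading; the model name is an illustrative assumption and the exact memory savings depend on the model and scheme you choose.

```python
# Loading an LLM with 4-bit quantized weights via transformers + bitsandbytes.
# The model name is illustrative; substitute your own.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL = "mistralai/Mistral-7B-v0.1"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4-bit precision
    bnb_4bit_compute_dtype=torch.float16,  # run compute in fp16
)

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL,
    quantization_config=bnb_config,
    device_map="auto",                     # spread layers across available devices
)
# A 7B-parameter model needing ~28 GB in fp32 fits in a few GB at 4-bit,
# at the cost of some accuracy that further fine-tuning can help recover.
```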

Augmentation Over Creation: Rather than building from scratch, consider augmenting existing models. Billions have already been invested in training and refining LLMs to get them this far. Techniques such as embeddings are well established for augmenting these models with additional data, making it far more practical to enhance existing resources than to create your own. This approach allows for cost-effective improvements while leveraging the extensive groundwork laid by previous investments.
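As a sketch of what embedding-based augmentation looks like: embed your proprietary documents once, retrieve the most relevant ones at query time, and place them in the prompt of an off-the-shelf LLM. The sentence-transformers encoder named below is an illustrative choice.

```python
# Augmenting an existing LLM with proprietary documents via embeddings:
# embed once, retrieve the most relevant snippets at query time, and put
# them in the prompt. Encoder name is an illustrative assumption.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # small, off-the-shelf encoder

documents = [
    "Our returns policy allows refunds within 30 days of purchase.",
    "Support is available 9am-5pm GMT on weekdays.",
    "Premium subscribers get a dedicated account manager.",
]
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def top_documents(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    query_vector = embedder.encode(query, normalize_embeddings=True)
    scores = doc_vectors @ query_vector
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

# These retrieved snippets are then placed in the LLM's prompt, giving an
# off-the-shelf model access to proprietary knowledge without retraining it.
print(top_documents("When can I get my money back?"))
```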

Focus on the objectives for the project, not the technologies. LLMs may not be the best solution for many of the steps in the tool chain. Other techniques may be better suited and make fewer demands on the resources you have available.  Segmenting the logic flow into distinct steps will provide opportunities to reduce and simplify.

3. Modularity of Architecture

Give yourself time to read up, try out the latest advances, iterate, improve and bring into your modular architecture.

The GenAI landscape changes quickly, so continual reading and experimentation are essential. Iterative development lets you evaluate new models, frameworks and techniques as they appear, and fold the ones that prove themselves into your modular architecture. Balance the effort of integrating each advance against the benefit it delivers, and keep reviewing your system as the state of the art moves on.

LangChain is a good example. Its defining strength lies in its emphasis on modularity. It offers a versatile framework for applications driven by LLMs such as GPT-3/4, Anthropic’s Claude or BLOOM. Its components are abstracted for seamless interaction with LLMs and can be employed independently of the LangChain framework. This modular approach extends to advanced use cases, where components can be combined to create sophisticated functions like generative question answering (GQA) or summarization. With features like memory persistence and callbacks, LangChain ensures continuity and control across runs, cementing its reputation as a pioneering framework for LLM applications.
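A minimal sketch of that composable style, using LangChain’s classic PromptTemplate / LLMChain interfaces. The library evolves quickly, so treat this as illustrative and check the current documentation before relying on these exact interfaces.

```python
# A minimal sketch of LangChain's composable style (classic interfaces).
# Each piece is a swappable module: the prompt, the model and the chain that
# wires them together can all be replaced independently.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate(
    input_variables=["text"],
    template="Summarise the following in three bullet points:\n\n{text}",
)
llm = OpenAI(temperature=0)            # could be swapped for another provider
summarise = LLMChain(llm=llm, prompt=prompt)

print(summarise.run(text="LangChain provides modular building blocks for LLM apps."))
```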

4. Independence 

Architectural Autonomy: Design with a focus on architectural independence. By creating a modular framework, you establish a system that is not overly reliant on a specific language model. This ensures adaptability to evolving technologies and allows for seamless integration of future advancements. 
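One practical way to achieve this is to code against a narrow interface and hide each provider behind its own adapter, as in the sketch below. The two adapter classes are hypothetical illustrations, not real client code.

```python
# Keeping the architecture model-agnostic: application logic codes against a
# narrow interface; each provider sits behind its own adapter.
# Both adapters below are hypothetical placeholders.
from typing import Protocol

class TextGenerator(Protocol):
    def generate(self, prompt: str) -> str: ...

class HostedApiModel:
    """Adapter for a hosted LLM API (details omitted / hypothetical)."""
    def generate(self, prompt: str) -> str:
        raise NotImplementedError("call your provider's SDK here")

class LocalOpenModel:
    """Adapter for a locally hosted open-weights model (hypothetical)."""
    def generate(self, prompt: str) -> str:
        raise NotImplementedError("call your local inference server here")

def summarise_report(report: str, model: TextGenerator) -> str:
    # The application only ever sees the TextGenerator interface,
    # so swapping providers is a one-line change at the call site.
    return model.generate(f"Summarise this report in five bullet points:\n\n{report}")
```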

Consider Prompting over Training: As described above, a well-engineered prompt can include reference material and worked examples within the context window (in-context learning). This can allow you to use off-the-shelf LLMs as they are, avoiding the tie-in that fine-tuning implies.
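A sketch of what that looks like in practice: the “training material” is simply reference text plus a couple of worked examples assembled into the prompt. The policy text and examples are invented for illustration, and llm is any callable wrapping your model.

```python
# In-context learning: reference material and a few worked examples are
# placed directly in the prompt, so an unmodified off-the-shelf model can
# follow them without any fine-tuning. `llm` wraps your model of choice.

REFERENCE = "Refund policy: refunds within 30 days; exchanges within 60 days."

EXAMPLES = [
    ("Can I get my money back after 45 days?",
     "No. Refunds are only available within 30 days of purchase."),
    ("Can I exchange an item after 45 days?",
     "Yes. Exchanges are available within 60 days of purchase."),
]

def build_prompt(question: str) -> str:
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in EXAMPLES)
    return (
        f"Use only this policy when answering:\n{REFERENCE}\n\n"
        f"{shots}\n\nQ: {question}\nA:"
    )

def answer(question: str, llm) -> str:
    return llm(build_prompt(question))
```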

Avoid Vendor Lock-In: Strive for independence from specific vendors or providers. This entails selecting technologies and components that are compatible with a range of models and platforms. Avoiding vendor lock-in promotes flexibility and prevents potential constraints associated with proprietary solutions.

5. Confidentiality & Provenance

It is important to understand the source of the information being used. Issues of confidentiality, copyright and provenance are important considerations and bring risks that the business needs to address.

Security: When working with multiple clients, and internal or external teams, it’s crucial to maintain strict confidentiality and prevent any potential conflicts of interest. Where information is being used to fine-tune LLMs, ensure that you have appropriate separation of models to avoid cross-contamination of information.

Provenance: Establish a system for verifying the provenance of data sources. This involves validating the authenticity and reliability of information before integration into the model. By ensuring that data originates from reputable and trustworthy sources, you enhance the overall integrity and credibility of the generated content.
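One lightweight way to operationalise this is to carry provenance metadata with every document, admit only vetted sources into the model’s context, and format that context so the model can cite sources. The field names and approved-source list below are illustrative.

```python
# Carrying provenance metadata with every document so generated answers can
# be traced back to vetted sources. Field names and sources are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class SourcedDocument:
    text: str
    source: str          # publisher or system of record
    url: str
    published: date
    verified: bool       # set once the source has passed your vetting process

APPROVED_SOURCES = {"internal-knowledge-base", "reuters.com", "gov.uk"}

def usable(doc: SourcedDocument) -> bool:
    """Only verified documents from the approved source list enter the model's context."""
    return doc.verified and doc.source in APPROVED_SOURCES

def context_with_citations(docs: list[SourcedDocument]) -> str:
    """Format approved documents so the LLM can cite them by source and date."""
    return "\n\n".join(
        f"[{d.source}, {d.published.isoformat()}] {d.text}"
        for d in docs if usable(d)
    )
```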

Source: Referencing the original sources is particularly important in knowledge management and news flows, where transparency of source is a key step in assessing the validity of the information being produced.

Interested in exploring how GenAI can accelerate your innovation?

Come and join us in Cambridge, UK, and Raleigh, NC, where we’ll be running a series of in-person workshops to help clients identify the opportunities (and threats) of GenAI and plan a path to accelerate their innovation.


Find the authors on LinkedIn:

Tim Murdoch

Business Development Lead – Digital

AJ Lahade

Data Scientist

Stephen Zabrecky

Digital Product Lead