The business perspective: Five practical insights for accelerating innovation with GenAI:

Here at CDP, we’ve delivered a range of Generative AI (GenAI) projects that use Large Language Models (LLMs). Each has been a journey of discovery, and sometimes frustration. But ultimately each has reinforced the potential for GenAI to dramatically accelerate innovation.

To cut through the noise, we’ve distilled our lessons into a four-part series on how businesses, data scientists and product owners can leverage GenAI for success, with a final perspective from a GenAI-powered ChatBot.

In this article, we start by drawing on our experiences implementing GenAI from the business perspective.

Below is a practical, concise discussion for those considering how to bring GenAI and LLMs into their business. For a detailed description of the technology, simply ask Bing, which uses GPT-4. Or if you prefer a more human description, use Wikipedia. And no, in case you were wondering, LLMs were not used to create these articles. 

1. GenAI is math, not magic

Building profitable business propositions using LLMs is possible with the right approach. But while many will present AI as magic, it’s important to focus on the math and the facts instead. Consider the demise of Babylon Health in a pre-LLM world: it went from unicorn to bust in months because it got lost in the magic.

LLMs use statistics to predict the next item in a sequence of words, pixels, sounds and so on. The statistics are buried deep within multiple layers of artificial neural networks which cost many millions of dollars to train, but they are numbers nonetheless. They do, however, apply randomness to be more ‘creative’ in their outputs. So, while they are incredibly capable, they are also somewhat unreliable without appropriate guard-rails in place.
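The ‘math, not magic’ point can be made concrete. The toy sketch below (not how any production model is implemented; the token names and scores are invented for illustration) shows how a model’s raw next-token scores become a probability distribution, and how a ‘temperature’ parameter injects the controlled randomness described above.

```python
import math
import random

def sample_next_token(scores, temperature=1.0, rng=random):
    """Turn raw next-token scores (logits) into probabilities via a
    temperature-scaled softmax, then sample one token from them."""
    scaled = [s / temperature for s in scores.values()]
    max_s = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - max_s) for s in scaled]
    total = sum(exps)
    probs = {tok: e / total for tok, e in zip(scores, exps)}
    # Higher temperature flattens the distribution ('more creative');
    # lower temperature sharpens it towards the top-scoring token.
    r = rng.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok, probs
    return tok, probs

# Invented scores for the prompt "The cat sat on the ..."
scores = {"mat": 4.0, "sofa": 3.0, "moon": 1.0}
token, probs = sample_next_token(scores, temperature=0.7)
```

Run it repeatedly and ‘mat’ dominates, but ‘moon’ occasionally appears: statistics plus deliberate randomness, nothing more mysterious than that.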

So, what can you expect from an LLM? A ‘human-like, fallible interface’ is a useful way to characterize an off-the-shelf model (as opposed to one that has been trained to do a specific task).  

LLMs interact in a human-like manner: they work with the whole conversation and are highly fluent in multiple languages and data formats. Almost as a side effect, the numbers buried in the networks (which offer the right sequence of words in response to a prompt) also encapsulate the information in the original training material. However, they don’t apply traditional logic to that information. There’s no ‘if this then that’ reasoning of a traditional expert system, so the text they produce can be highly fluent, often poetic, while the information it carries remains fallible. This tendency to make things up, or to ‘hallucinate’, occurs in around 20% of responses in the case of ChatGPT in its default creative mode.

The quality of the response is highly dependent on both the prompt and the training data. This means that two new skill sets are emerging in those working with LLMs: 1. data engineers, who can prepare high-quality structured data (both authentic and synthetic) for training purposes; and 2. prompt engineers, who can construct requests to LLMs that garner robust and insightful answers. Both skill sets combine technical competence with experiential know-how.

2. The applications for GenAI are vast

The applications of LLMs tend to focus on exploiting four core competencies: 

Summary – condense and distill large volumes of text into their most essential points, providing a concise and easily digestible overview. This is particularly useful in applications such as news aggregators or academic research where users need quick insights without having to sift through extensive material.  

Expansion – generate new content based on an initial seed or prompt, adhering to a specific style or format. This capability is beneficial in creative fields such as storytelling or content marketing, where the user needs to develop an initial idea or concept into a fully-fledged narrative or article. 

Inference – draw conclusions from the available information, often utilizing the knowledge and patterns learned during the training of machine learning models. This is crucial in applications that require decision-making or make recommendations, such as medical diagnosis software or financial advisory tools. 

Analysis – examine content to identify patterns, features, and insights, often through statistical or computational methods. This is invaluable in fields such as data science or market research, where understanding trends, sentiments, or anomalies can provide a competitive edge. 

Building on these core competencies, two further areas emerge:  

Translation – convert text from one language to another; it can also involve adapting the style of prose or transforming data into different formats. This makes translation a versatile tool in applications ranging from multilingual customer support systems to data visualization tools.

Knowledge capture – encode or store information in a structured and retrievable manner. This is essential in applications such as knowledge management systems or educational platforms, where the goal is to create a sustainable and easily accessible repository of information for future use.
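As a concrete illustration of how these competencies are exercised in practice, the sketch below pairs each one with a minimal prompt template. The wording of the templates is our own invention, not taken from any particular product; a prompt engineer would iterate on them considerably for a real task.

```python
# Minimal prompt templates, one per LLM competency described above.
# Illustrative only: real prompts need iteration and evaluation.
TEMPLATES = {
    "summary": "Summarise the following text in three bullet points:\n{text}",
    "expansion": "Expand this outline into a 300-word article in a friendly tone:\n{text}",
    "inference": "Given the facts below, list the three most likely causes:\n{text}",
    "analysis": "Identify the overall sentiment and any anomalies in this data:\n{text}",
    "translation": "Translate the following into French, keeping the formal register:\n{text}",
    "knowledge_capture": "Extract the key facts below as JSON with 'entity' and 'fact' fields:\n{text}",
}

def build_prompt(competency: str, text: str) -> str:
    """Fill the template for one competency with the user's content."""
    return TEMPLATES[competency].format(text=text)

prompt = build_prompt("summary", "Quarterly revenue rose 12%...")
```

The same source text can be sent through any of the six templates, which is precisely why a single off-the-shelf model supports such a wide range of applications.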

3. Big Tech is laying the foundations, but small tech is winning

Four-fifths of the top 100 big tech firms either own or invest in a frontier LLM. Even though the cost of a single training run is around $5m (and many hundreds of runs will be necessary over time), there is undoubtedly strategic value in having a stake in the best LLMs.

Indeed, Microsoft is working hard to bring GenAI to every office application on your desktop, albeit via a slow roll-out to early-adopter organizations to ensure it avoids another Clippy moment. Google has also entered the fray by integrating Bard into its own offerings.

But perhaps the most notable aspect of this rapid take-up is that many of these models are being made available for anyone to use, under both subscription and open-source models. Not only are the exploration costs well within the most modest of R&D budgets, but there is also a lively community of companies (and online experts) providing the tech stack and know-how to make it an easy process. What this means is that pretty much anyone can create something truly new.

Disruptive innovation has always been achieved by small teams moving fast and breaking things. This is certainly the case in the exploitation of GenAI. 

4. We are beyond ‘the peak of Mount Stupid’

With apologies to Dunning and Kruger, who never actually plotted a peak for this, it does feel as though we are now beyond the peak level of hype when it comes to GenAI. Things are a little quieter; the promises less fanciful; the urgency for change less pressing. The concern over mass job losses has receded. And we are getting used to waiting for Microsoft, Anthropic and others to put their services on general release. Indeed, even the performance of the latest LLMs has plateaued as the demand for resources has grown and the focus has turned from creativity to accuracy.

The market is also maturing, often in response to a fear of Big Tech’s actions. 

Legal arguments, and even industrial action, from authors, artists and composers over copyright material being used to train LLMs are yet to be settled. A similar concern has emerged regarding the exposure of confidential information unwittingly, and perhaps irresponsibly, included in the training data. (Expect to see new clauses in your NDAs soon.)

Schools, universities and the media are concerned about how to distinguish the real from the artificial. A ‘perplexity score’ (a measure of how surprising a language model finds a piece of text) can be used to identify the author as human or artificial: the lower the score, the more likely the text is artificial. To err really is to be human after all.
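For the technically curious, perplexity has a simple definition: it is the exponential of the average negative log-probability the model assigns to each token of the text. The sketch below computes it from a list of per-token probabilities; the probabilities are invented for illustration, since obtaining real ones requires access to a model’s outputs.

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(mean negative log-probability per token).
    Low perplexity means the model found the text predictable,
    which is one signal that the text may be machine-generated."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Invented per-token probabilities for two snippets of text
predictable = [0.9, 0.8, 0.85, 0.9]   # model saw each token coming
surprising = [0.2, 0.1, 0.05, 0.3]    # model was repeatedly surprised
```

Here `perplexity(predictable)` is far lower than `perplexity(surprising)`: machine-generated text tends to be exactly what a machine would predict, while human writing surprises the model more often.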

The reality of creating, testing and operating LLMs is starting to sink in. Costs and resources are high. We are not yet at bitcoin levels of energy consumption, but carbon load is a reasonable concern along with business profitability. Techniques are emerging to shrink resources while maintaining performance. 

We envisage some further rolling back of GenAI claims, and timelines pushed to the right, over the coming months – even if the call from Musk et al to hit pause on GenAI development has fallen on deaf ears.

Although we are entering a period of understanding and potential regulation, the hype is still present. Just like bitcoin, there will always be businesses and media hitching otherwise prosaic concepts to the GenAI bandwagon.

5. Now is the time to explore GenAI for innovation

We can already see that GenAI is going to be transformative across market sectors. A third or more of new software code is generated automatically. Consumers prefer talking with LLMs instead of waiting on a human call center. Legal arguments have been generated that have swayed the courts.  

The tools to experiment, investigate and create testable proofs of concept are plentiful and the costs of doing so are modest. Innovation can be incremental, or it can be truly disruptive. The reality for most businesses is that GenAI will drive a bit of both. New tools will roll out to improve specific tasks and add value to the business today. Radical new approaches that change the entire business and reset the competitive landscape take longer. Their path is rarely straight, and exploration and feedback is vital. But the key to both is to get involved.  

Despite what you might hear in the media, there is time to take a considered approach. The barrier is low and getting started now is the best way to reduce the risk of missing out on key commercial opportunities.

Interested in exploring how GenAI can accelerate your innovation?

In Part 2, we’ll share practical insights for the Product Owner in their quest to leverage GenAI. 

Team Expertise

Data Science & Digital Service

Find the authors on LinkedIn:

Tim Murdoch

Business Development Lead – Digital

AJ Lahade

Data Scientist

Stephen Zabrecky

Digital Product Lead