The product owner’s perspective: Five practical insights for accelerating innovation with GenAI

Here at CDP, we’ve delivered a range of Generative AI (GenAI) projects that use Large Language Models (LLMs). Each has been a journey of discovery, and sometimes frustration. But ultimately each has reinforced the potential for GenAI to dramatically accelerate innovation.

To cut through the noise, we’ve distilled our learnings into a four-part series on how businesses, data scientists and product owners can leverage GenAI for success, with a final perspective from a GenAI-powered ChatBot.

In this second article, we draw on our experience of implementing GenAI from a product owner’s perspective. For a high-level view of LLMs, check out Part 1; for a deeper dive into the technology from a data scientist’s perspective, check out Part 3.

Download our four-part series on GenAI

1. Start at the end and work backwards

As with all truly transformative innovation, start by understanding what you are offering your users and work back from there. Ignore the undoubted magic of the technology at this stage – you can rely on that coming later.

You will need to set your success criteria, and this is where to start. Delighting your user base and measuring how they will benefit will do more to drive adoption than any shiny AI tech that might be going on behind the scenes.

Choose your project carefully.

  • Choose an area that you already know well or for which you have a good way of measuring success. This will ensure you see beyond the magic of the black-box and can truly judge the performance and value that LLMs bring.
  • Choose an area where LLMs play to their strengths by taking advantage of at least one of the core competencies they have been shown to do well: summarisation, expansion, inference and analysis.

2. Don’t forget the basics

Make good use of Service Design techniques to define what success looks like. Map the User Journey and spend time defining the touchpoints and modelling the semantic information architecture.

And then strip it back. Cut away absolutely everything that isn’t vital to the successful outcome you plan for. Don’t let the designers loose until this is done. And treat any investigative work with the technology up to this point as exploratory – it should almost certainly be archived.

You’ll then have a clear set of priorities, requirements, information flows and use-cases that everyone understands, and everyone can support. The whole team will be clear about what they are aiming for. Keeping their eyes focussed on the prize makes the Product Owner’s primary catch-phrases more effective: “No, that is not in scope” and “This is lower priority”.

And if this is starting to sound like the start of any solid digital project – good, it should.

3. Experiment

Give your team as much time as possible to try things out. Build the time into the plan and break the experiments down into small and well-defined steps to learn and iterate.

Look to experiment with the following:

  • How the structure of prompts changes the output.
  • How the different LLMs compare when asked to respond to the same prompt.
  • How to extend the LLMs by training them on, or embedding, your own data.

Aim to build the experimental steps around the core competencies of Generative AI. And later, bring these together to form an overall solution using your favourite AI automation tool chain.
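The prompt and model experiments above can be run systematically rather than ad hoc. Here is a minimal sketch of a comparison harness; `call_llm`, the model names and the prompt templates are all placeholders for whichever client library and models you actually use, with the API call stubbed out so the harness itself runs anywhere.

```python
def call_llm(model: str, prompt: str) -> str:
    """Stub standing in for a real API call to a hosted LLM."""
    return f"[{model}] response to: {prompt[:40]}"

# Two illustrative prompt structures for the same task.
PROMPT_VARIANTS = {
    "bare": "Summarise this customer complaint: {text}",
    "structured": (
        "You are a support analyst. Summarise the complaint below "
        "in one sentence, then name the key issue.\n\nComplaint: {text}"
    ),
}

MODELS = ["model-a", "model-b"]  # hypothetical model names

def run_experiment(text: str) -> dict:
    """Run every (model, prompt variant) pair and collect the outputs."""
    results = {}
    for model in MODELS:
        for name, template in PROMPT_VARIANTS.items():
            results[(model, name)] = call_llm(model, template.format(text=text))
    return results

results = run_experiment("The delivery arrived two weeks late.")
```

Collecting the outputs side by side like this makes it easy to review how prompt structure and model choice change the responses.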

There will be surprises. There will be frustrations. And there will be changes in the way that you approach the use of the LLMs. Don’t be afraid to pivot on how you use the technology – or indeed whether you use it at all. But remember the basics, keep your eye on what success looks like and don’t let the team get carried away with ‘shiny object syndrome’.

4. Get lots of feedback

While using AI, remember to share your work with real humans as early as possible: people outside your team who can give you useful feedback. Set up demos within the team to share learnings and put on regular show-and-tell sessions with your target audience. And, as soon as possible, let them try it out – on their own, without you there. They will learn to see beyond the magic, and you will quickly find out what works and what doesn’t.

Your priorities will change – but the fundamental definition of success won’t (hopefully). And don’t forget the importance of plain old testing. The outputs from an LLM can vary widely with only the smallest changes in training data and prompts. Fortunately, LLMs can come to the rescue here – they are great at evaluating the output from other models through peer review. Use that capability to help you test. It is also worth building into the architecture of your solution: where you have the resources, double up the LLMs to interact and increase the quality of output in a production system.
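The peer-review idea can be sketched very simply: one model scores another model’s answer against the original question. The judge call and scoring prompt below are illustrative only – `call_judge` is a stub standing in for a real API call to whichever reviewing model you choose.

```python
# Illustrative scoring prompt for the reviewing ("judge") model.
JUDGE_TEMPLATE = (
    "Rate the following answer to the question on a scale of 1-5 "
    "for accuracy and relevance. Reply with the number only.\n\n"
    "Question: {question}\nAnswer: {answer}"
)

def call_judge(prompt: str) -> str:
    """Stub standing in for a call to the reviewing LLM."""
    return "4"

def review(question: str, answer: str) -> int:
    """Ask the judge model to score an answer; return the 1-5 score."""
    reply = call_judge(JUDGE_TEMPLATE.format(question=question, answer=answer))
    return int(reply.strip())

score = review("What is 2 + 2?", "4")
```

In practice you would run this over a batch of test prompts and track the scores over time, so that a change to prompts or training data that degrades quality shows up immediately.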

5. Don’t underestimate the time you need

Don’t underestimate the time it will take to gather, prepare and refine your data. When it comes to data, quality and variety are just as important as quantity. With demographic information, a good distribution of variety is vital to represent your users truly and ethically. And don’t forget to set aside at least 10% randomly selected from the training set so that you can properly test the results.
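Holding out that 10% is straightforward; the sketch below uses Python’s standard library, with a fixed seed so the split is reproducible. The record values are illustrative.

```python
import random

def split_train_test(records, test_fraction=0.10, seed=42):
    """Randomly hold out a fraction of the data for testing."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = records[:]      # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = max(1, int(len(shuffled) * test_fraction))
    return shuffled[n_test:], shuffled[:n_test]

data = [f"example-{i}" for i in range(100)]
train, test = split_train_test(data)
assert not set(train) & set(test)  # no leakage between the splits
```

For demographic data you may want a stratified split rather than a purely random one, so that the variety you worked to capture survives into the test set.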

To save time and increase the training and test data available, explore opportunities to synthesise data to add to your original data set. Also, don’t underestimate the time it will take to test and refine the prompts and LLM settings to achieve the repeatable outcomes you are looking for. Prompt engineering is an art as well as a skill, and takes time to learn.
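One simple way to synthesise data is template expansion, sketched below. The templates and slot values are purely illustrative; in practice you might also use an LLM itself to paraphrase existing examples.

```python
import itertools

# Illustrative templates and slot values for synthesising examples.
TEMPLATES = [
    "My {product} stopped working after {duration}.",
    "The {product} I ordered broke within {duration}.",
]
PRODUCTS = ["router", "headset", "keyboard"]
DURATIONS = ["two days", "a week"]

def synthesise():
    """Expand every template with every combination of slot values."""
    return [
        t.format(product=p, duration=d)
        for t, p, d in itertools.product(TEMPLATES, PRODUCTS, DURATIONS)
    ]

examples = synthesise()
```

Even a small set of templates multiplies quickly – two templates with three products and two durations already yields twelve examples – but review the output for realism before mixing it into your training data.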

Finally, know when to stop. It will always be possible to make it a bit better. Be clear about what is good enough and recognise when you get there. The impulse for the team to keep tweaking will never end – it’s simply too absorbing.


Interested in exploring how GenAI can accelerate your innovation?

Come and join us in Cambridge, UK, and Raleigh, NC, where we’ll be running a series of in-person workshops to help clients identify the opportunities (and threats) of GenAI and plan a path to accelerate their innovation.

Find the authors on LinkedIn:

Tim Murdoch

Business Development Lead – Digital

AJ Lahade

Data Scientist

Stephen Zabrecky

Digital Product Lead