
The product owner’s perspective: Five practical insights to accelerating innovation with GenAI

Here at CDP, we’ve delivered a range of Generative AI (GenAI) projects that use Large Language Models (LLMs). Each has been a journey of discovery, and sometimes frustration. But ultimately each has reinforced the potential for GenAI to dramatically accelerate innovation.

To cut through the noise, we’ve distilled our learnings into a four-part series on how businesses, data scientists and product owners can leverage GenAI for success, with a final perspective from a GenAI-powered chatbot.

In this second article, we draw on our experiences implementing GenAI from a product owner’s perspective. For a high-level view of LLMs, check out Part 1; for a deeper dive into the technology from a data scientist’s perspective, check out Part 3.

1. Start at the end and work backwards

As with all truly transformative innovation, start by understanding what you are offering your users and work back from there. Ignore the undoubted magic of the technology at this stage – you can rely on that coming later.

You will need to set your success criteria, and this is where to start. Delighting your user base and measuring how they will benefit will do more to drive adoption than any shiny AI tech that might be going on behind the scenes.


Choose your project carefully.

  • Choose an area that you already know well or for which you have a good way of measuring success. This will ensure you see beyond the magic of the black-box and can truly judge the performance and value that LLMs bring.
  • Choose an area where LLMs work to their strengths by taking advantage of at least one of the core competencies they have been shown to do well: summary, expansion, inference and analysis.

2. Don’t forget the basics

Make good use of Service Design techniques to define what success looks like. Map the User Journey and spend time defining the touchpoints and modelling the semantic information architecture.

And then strip it back. Cut away absolutely everything that isn’t vital to the successful outcome you plan for. Don’t let the designers loose until this is done. And treat any investigative work with the technology up to this point as exploratory – it should almost certainly be archived.

You’ll then have a clear set of priorities, requirements, information flows and use-cases that everyone understands, and everyone can support. The whole team will be clear about what they are aiming for. Keeping their eyes focused on the prize makes the Product Owner’s primary catchphrases more effective: “No, that is not in scope” and “This is lower priority”.

And if this is starting to sound like the start of any solid digital project – good, it should.

3. Experiment

Give your team as much time as possible to try things out. Build the time into the plan and break the experiments down into small and well-defined steps to learn and iterate.


Look to experiment with the following:

  • How the structure of prompts changes the output.
  • How the different LLMs compare when asked to respond to the same prompt.
  • How to extend the LLMs by adding training to embed your own data.
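A simple harness makes these comparisons systematic rather than ad hoc. The sketch below is illustrative only: `call_model`, the prompt templates and the model names are all placeholder assumptions (a real version would call whichever LLM client library you use), but it shows the shape of a prompt-variant-by-model experiment.

```python
# Illustrative sketch: run every prompt variant against every model and
# collect the outputs side by side for review. `call_model` is a stub
# standing in for a real LLM API call.

PROMPT_VARIANTS = {
    "terse": "Summarise in one sentence: {text}",
    "structured": "Summarise the text below as three bullet points.\n\nText: {text}",
}

MODELS = ["model-a", "model-b"]  # hypothetical model identifiers

def call_model(model: str, prompt: str) -> str:
    """Stub for a real LLM call, so the harness itself can run and be tested."""
    return f"[{model}] response to: {prompt[:40]}"

def run_experiment(text: str) -> dict:
    """One output per (prompt variant, model) pair."""
    results = {}
    for variant, template in PROMPT_VARIANTS.items():
        for model in MODELS:
            results[(variant, model)] = call_model(model, template.format(text=text))
    return results

outputs = run_experiment("Customer feedback from the Q3 survey...")
for (variant, model), answer in sorted(outputs.items()):
    print(variant, model, answer[:60])
```

Tabulating the results this way makes it much easier to judge how prompt structure and model choice each change the output.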

Aim to build the experimental steps around the core competencies of Generative AI. And later, bring these together to form an overall solution using your favourite AI automation tool chain.


There will be surprises. There will be frustrations. And there will be changes in the way that you approach the use of the LLMs. Don’t be afraid to pivot on how you use the technology; or indeed ‘if’ you use the technology. But remember the basics, keep your eye on what success looks like and don’t let the team get carried away with ‘shiny object syndrome’.

4. Get lots of feedback

While using AI, remember to share your work with real humans as early as possible: people outside your team who can give you useful feedback. Set up demos within the team to share learnings and put on regular show-and-tell sessions with your target audience. And, as soon as possible, let them try it out – on their own, without you there. They will learn to see beyond the magic, and you will quickly find out what works and what doesn’t.

Your priorities will change – but the fundamental definition of success won’t (hopefully). And don’t forget the importance of plain old testing. The outputs from LLMs can vary widely with only the smallest changes in training data and prompts. Fortunately, LLMs can come to the rescue here – they are great at evaluating the output from other models through peer review. Use that capability to help you test. This is also worth building into the architecture of your solution: where you have the resources, double up the LLMs to interact and increase the quality of output for a production system.
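This peer-review idea can be sketched as a small judging loop. Everything here is an illustrative assumption – `ask_llm` is a stub for a second-model call, and the rubric and threshold are hypothetical – but it shows the pattern of scoring one model’s answers with another and flagging low scores for human review.

```python
# Sketch of an LLM "peer review" test loop; `ask_llm` stands in for a
# real call to a second judge model.

JUDGE_RUBRIC = (
    "Score the ANSWER from 1-5 for factual accuracy and relevance to the "
    "QUESTION. Reply with the number only.\n\nQUESTION: {q}\nANSWER: {a}"
)

def ask_llm(prompt: str) -> str:
    """Stub for a judge-model call; returns a canned score so the sketch runs."""
    return "4"

def judge(question: str, answer: str) -> int:
    """Ask the judge model to score an answer against the rubric."""
    return int(ask_llm(JUDGE_RUBRIC.format(q=question, a=answer)).strip())

def passes(question: str, answer: str, threshold: int = 3) -> bool:
    """Flag answers below the acceptance threshold for human review."""
    return judge(question, answer) >= threshold

print(passes("What does the device do?", "It tracks medication adherence."))
```

In a production doubling-up arrangement, the same loop can gate outputs before they ever reach the user.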

5. Don’t underestimate the time you need


Don’t underestimate the time it will take to gather, prepare and refine your data. When it comes to data, quality and variety are just as important as quantity. With demographic information, a good distribution of variety is vital to represent your users truly and ethically. And don’t forget to set aside at least 10% randomly selected from the training set so that you can properly test the results.
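The 10% hold-out described above is straightforward with the standard library. This is a minimal, reproducible sketch; the fixed seed and the exact fraction are illustrative choices.

```python
import random

def train_test_split(records, test_fraction=0.1, seed=42):
    """Randomly set aside a fraction of records as a held-out test set."""
    shuffled = records[:]
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    rng.shuffle(shuffled)
    n_test = max(1, int(len(shuffled) * test_fraction))
    return shuffled[n_test:], shuffled[:n_test]

data = [f"example-{i}" for i in range(100)]
train, test = train_test_split(data)
print(len(train), len(test))  # 90 10
```

Keeping the test set out of all training and prompt-tuning work is what makes the final evaluation trustworthy.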


To save time and increase the training and test data available, explore opportunities to synthesise data to add to your original data set. Also, don’t underestimate the time it will take to test and refine the prompts and LLM settings to achieve the repeatable outcomes you are looking for. Prompt engineering is an art as well as a skill and takes time to learn.


Finally, know when to stop. It will always be possible to make it a bit better. Be clear about what is good enough and recognise when you get there. The impulse for the team to keep tweaking will never end – it’s simply too absorbing.

Interested in exploring how GenAI can accelerate your innovation?

Come and join us in Cambridge, UK, and Raleigh, NC, where we’ll be running a series of in-person workshops to help clients identify the opportunities (and threats) of GenAI and plan a path to accelerate their innovation.


Trends in respiratory therapies: why pMDIs hang in the balance of new technology

In May 2023, RDD Europe returned to a real-world conference after years of pandemic-enforced online-only presence. The location was spectacular – Antibes on the Cote d’Azur – with the sparkling Mediterranean Sea providing welcome relief from a dismal British spring.

The industry was well represented by device technology companies, CMOs, academics and pharma companies, and the presentations and workshops provided an engaging blend of research and practical advice.

Even though much of my time over the past ten years has been focused on parenteral device development, my career in combination products started in respiratory devices, working on a variety of dry powder inhaler (DPI) and pressurised metered dose inhaler (pMDI) devices, including the GSK Ellipta inhaler. This year at RDD, as I returned to my roots in this industry, three main themes struck me: preparing for the pMDI cliff edge; moving beyond traditional respiratory diseases; and implementing particle engineering for targeted treatment.

There were also two notable omissions: users and connectivity. More on those later.

Preparing for the cliff edge of pMDI propellants

The shift in pMDIs from using HFC propellants towards gases with a lower global warming potential (GWP) has gained momentum, with California imposing a ban on the sale and distribution of R227ea from the end of 2030, and R134a from the end of 2032, including for medical use. This means the end of the line for the sale of all current pMDI products in California, with other jurisdictions likely to follow suit as the world tries to move to a more sustainable solution.

The transition needs formulators, device designers, scientists, and other disciplines to collaborate to solve the challenges presented by the different physical properties of the new gases. Different thermodynamic and fluid dynamic properties can dramatically alter the plume geometry, droplet size and particle velocity, requiring careful redesign of the fluid pathways to compensate for the differences. These challenges were outlined in evidence presented by Recipharm (1), Proveris and Koura (2), and Healthy Airways LLC (3).

At Cambridge Design Partnership, we are receiving far fewer enquiries for pMDI products than for DPIs and soft-mist inhalers. Obviously, a sample of n=1 carries little statistical certainty, but it reflects a general sentiment among clients to focus future developments away from pMDI platforms.

Moving forward beyond traditional respiratory diseases

Asthma and COPD remain the biggest drivers in device and formulation development, in much the same way that diabetes treatment has driven pen injector development. Two drivers that our drug delivery team has seen pushing device design in the respiratory and inhalation market are the need for home treatment, rather than hospital-centered treatment, and platforms for biological drugs. The other significant driver is vaccines that are stable at higher temperatures and can be delivered without leaving behind copious volumes of blood-contaminated medical waste.

One challenge that comes with these new treatment regimens, beyond formulating drugs that will be stable in powder form, is getting the drug to the correct part of the body and making sure it remains present long enough to be effective. One paper from UCL and the University of Hong Kong (4) highlighted a promising approach to developing therapeutic antibodies against future SARS outbreaks. Some of these developments also require higher dose payloads, or API-only formulations; this presents a substantial challenge to device designers to make sure that the inhalation capabilities of different patient groups can achieve the required dose efficiency.

Aptar and Recipharm also shared their own device innovations, presenting novel spray and soft-mist technologies based on a syringe primary container. Targeting rapid treatment to the brain via the olfactory route is a much-neglected treatment option, in part due to the challenges of getting consistent behavior with users. At Cambridge Design Partnership, we’ve been working with a pioneering device company looking to exploit this pathway, and my colleague, Clare Beddoes, will be presenting information on this device development at PODD in October.

Enter: particle engineering for targeted treatment

In addition to the paper from UCL (4), particle engineering to target specific areas in the respiratory and nasal pathway was a topic that several posters and presentations addressed directly. Building on standard jet milling techniques, a paper from Aston University explained how isothermal dry particle coating (iDPC) can be used to create more potent formulations without increasing the volume of powder inhaled by the user (5). A third paper from Hovione and two Portuguese institutions focused on the characterization of different particle manufacturing techniques and how they affect deposition in nasal passages (6).

Closing the gap between the early stages of in vitro and in silico models, and the later stage in vivo performance, continues to receive a lot of attention. As the cost of computing power continues to fall, going into clinical or preclinical trials with greater confidence will accelerate time to market and reduce the cost burden on pharma companies looking to novel treatments.

Don’t forget user capability and connectivity

Two areas of development that received relatively little focus at the conference were human factors engineering (HFE) and connectivity – two concerns that are the subject of a great deal of effort in the parenteral sector. Recipharm presented a poster on the HFE advantages of their novel unit dose nasal spray when compared to a reference device (which bore a striking resemblance to an Aptar Unidose Liquid Nasal Spray). Research institution Solvias presented a paper showing how training users can lead to worse outcomes due to misperception of expertise using a device (7). This counterintuitive result demonstrated that patients with limited one-to-one training with a Handihaler showed more errors in use than patients who only had access to the device and IFU.

While these insights were welcome, our in-house team knows that patients continue to struggle to use inhalers reliably and consistently, leaving even the most effective drug products showing variable results.

These challenges for patient use are also being seen in the parenteral market, which is why we are working so closely with our clients to find better ways to train patients and leverage connectivity to improve adherence to medication regimens. These connectivity solutions are often in direct conflict with cost and sustainability targets and finding a route to square this circle is a challenge with which CDP’s designers and engineers are actively engaging.

See you in Tucson?

RDD 2023 was the first RDD conference I have attended. It was great to reconnect with former colleagues and make new connections across the industry. The conference was very well run, and the standard of papers and presentations ensured there was plenty of fascinating material for industry and academia to engage with. I’ve already blocked out my diary for RDD 2024 in Tucson and I look forward to seeing you there.

References

  1. Albuterol Sulfate Metered Dose Inhaler Feasibility Using an Environment Friendly Propellant HFA152a and Novel Valves (Lei Mao, Sheryl Johnson, Nischal Pant, James Murray, Donald Ellis, Benjamin Zechinati, Johnathan Carr and Victoria Cruttenden)

  2. Comparison of Spray Characteristics of P-134a and Low GWP P-152a pMDIs With and Without Ethanol (Lynn Jordan, Sheryl Johnson, Ramesh Chand, Grant Thurston, Deborah Jones, Vanessa Webster and Sally Stanford)

  3. Accelerated Development of MDIs with Low GWP Propellants in a QbD Era: Practical, Regulatory and Scientific Considerations (Healthy Airways LLC and First Flight Pharma LLC)

  4. Inhaled Antibody Therapies: Enabling Prophylactic Protection against SARS-CoV-2 Infection with a Dual Targeting Powder Formulation (Han Song Saw and Jenny Ka-Wing Lam)

  5. Use of Isothermal Dry Particle Coating (iDPC) for the Development of High Dose Dry Powder Inhalers (Jasdip S. Koner, David A. Wyatt, Amandip S. Gill, Shital Lungare, Rhys Jones and Afzal R. Mohammed)

  6. Benchmarking of Particle Engineering Strategies for Nasal Powder Delivery: Characterization of Nasal Deposition Using the Alberta Idealized Nasal Inlet (Patricia Henriques, Cláudia Costa, António Serôdio, Ana Fortuna and Slavomíra Doktorovová)

  7. Effect of Capsule-Based Dry Powder Inhaler User Training on In Vitro Performance (Oleksandra Troshyna and Yannick Baschung)

Connect with CDP

For more on how to navigate the evolving respiratory device landscape, from propellant transitions to targeted delivery, contact Cambridge Design Partnership.


Care tech: exploring the latest trends in dementia care

We are witnessing important advances in the treatment of the most common cause of dementia, Alzheimer’s disease, most noticeably by the emergence of disease-modifying therapeutics. And this trend is only set to continue, with new innovations and technologies promising to help slow the progression of this devastating disease.

However, patients who do not yet have access to these treatments or are in a more advanced stage of the disease will continue to require significant care support. The caregiving sector is already under significant pressure due to the increasing demand for long-term care within aging populations [1]. As the disease progresses, family members, including elderly spouses, are often the main caregiver – but they may be left poorly equipped to do this without the right support.

With the cost of dementia care running to £32,250 per person per annum [2], technology innovators are finding new ways to make resources go further and give dementia patients independence for longer – providing reassurance to the caregiver and peace of mind to family members.

The challenge lies in making these solutions accessible to caregivers and usable for patients. In this article, we take a deep dive into the technologies available to support dementia care and explore emerging trends that are transforming the landscape by using the right technology at the right time.

 

Alzheimer’s disease is a progressive and irreversible neurodegenerative condition that primarily affects the cognitive functions of the brain, particularly memory, thinking and behavior. It is the most common cause of dementia, a broader term for a set of symptoms that impact a person’s ability to live independently.

In the UK, it is estimated that more than 900,000 people live with dementia, and this is projected to double by 2040 [3]. Of the people diagnosed, up to a third live alone [4]. With the aging population outpacing the rate of training and recruiting caregivers, the already significant caregiver shortage is set to increase [5].

Meanwhile, family members are taking on caregiver responsibilities, often with unsustainable and distressing consequences. This is in part because every patient journey is different and the rate of their disease progression can vary widely. Some patients may require discreet support at the early stages of the disease, while others may require constant care. Knowing when and how to intervene to provide the care support needed is crucial.

The care sector is increasingly looking to technology to maximize the impact of the professional and informal caregiver workforce. There is an increasing recognition that caregivers require ongoing support to make their role more manageable, especially following the pandemic.

Assistive technologies rarely exist in isolation. In fact, it is often the combination of these technologies that yields the best results. Here are some of the technologies available to support independent living and managing disease progression.

Personal alarms and safety tracking

Alarms and tracking technologies allow people to call for help if they need it – wherever they are – as well as providing peace of mind for caregivers and family members when they are not there. They are simple to use and can help patients stay independent for longer.

 

Location. GPS trackers such as Mindme, Ubeequee, and Angelsense consist of battery powered or rechargeable wearables that connect to a 24/7 monitoring support center to alert family members and emergency services if a vulnerable adult is outside designated safe zones. Direct-to-consumer devices, such as Medpage, work similarly, but the information links directly to family members and may not have predefined safety zones or raise an alarm. Connectivity is based on broadband and subject to subscription charges.

Alarms and calls. Technologies such as Tunstall’s MyAmie, Oysta, and Saga’s SOS allow patients to raise an alarm for relatives, caregivers or emergency services with the use of a single button. These technologies often come in the form of a pendant worn around the house and are connected to a hub via a radio signal. The patient can also use the hub to raise an alarm. The pendant must be within reach of the hub for it to work. Other technologies, however, work similarly to the GPS tracker and can rely on broadband for wider network reach. These technologies often also incorporate fall detection and GPS.

Fall detection. Wearables such as Buddi, Telecare, and Careline are designed specifically for dementia care. These use inertia measurement units, gyroscopes, and pressure sensors to detect falls and automatically send messages to caregivers, family members, and first-aid responders. These devices are often accompanied by an alarm button for the user and GPS tracking. Many of these technologies can also be connected to a 24/7 monitoring support team.

Reminders and medication adherence. There are a variety of technologies in this category which allow caregivers to set reminders for patients to take medication, drink water, eat, or remember appointments or social events. Memory aid kits available include the MemRabel care alarm clock with a large screen, connected to a Pivotell Vibratime rechargeable wristwatch that vibrates for reminders. Reminders can be in photo, video or audio format.

The challenge many of these technologies face is that they depend on a caregiver to ensure the patient remembers to engage with and wear the device, charge it when necessary, and crucially, press the button if in distress. In the case of some technologies, they must also be within reach of a hub.

These technologies are good for the early stages of the disease, but as cognitive decline continues, patients will rely more on caregivers to support them, thus limiting their advantages.

In other words, the longevity of these technologies can become incompatible with the patient’s journey, and this is one of the key hurdles to consider when designing and adopting technology in dementia care.

Remote monitoring

This is a fast-growing area for dementia care. Remote monitoring technologies share information on the patient’s daily living patterns with caregivers and family members. The purpose is to provide peace of mind to family members and enable caregivers to make informed care decisions in the short and long term.

Common functions include:

  • Movement monitoring. Generally delivered by several passive infrared (PIR) sensors installed around the house, and pressure mats in beds and sofas, connected to a hub.
  • House occupancy. Sensors on external doors to monitor whether an individual has left the house.
  • Appliance usage. Monitored by connected sensors placed between the mains inlet and the device plug.
  • Fall detection. Cameras or mmWave radar sensors to detect when an individual has had a fall, without the need for a wearable.

Many of these functions can be delivered by single systems, e.g. Taking Care Home Alert, with the more sophisticated fall detection systems generally targeted at professional care provider users, e.g. Hikvision and Vayyar Care.

It is also common for families to create their own solutions, especially when they feel no existing single solution works for them. This includes the use of consumer tech, such as smartphones, video doorbells, smart home speakers, and cameras around the house. Video doorbells, for example, can be valuable in preventing scams, while smart home speakers can set reminders, automate house functions, or call a relative. However, the use of cameras around the house does pose privacy concerns which need to be considered.

Although the overall objective is to monitor daily independent living, the information often requires interpretation by the caregiver. This can often be facilitated through a dashboard, although the information can be disjointed, and assessment of patterns may not be clear-cut.

Innovator Matt Ash from Supersense Technologies, however, believes we can do more to obtain valuable insights and monitor disease progression efficiently and noninvasively.

 

“There is a real need for technologies that support caregivers in their role and provide them with the confidence to take a break, knowing their loved one is safe. Though there are some credible assistive technologies out there, the unique needs of families living with dementia are not well served. Projects like the Longitude Prize on Dementia are investing in radical thinking to generate solutions with families living with dementia.”

 

Talking about some of the latest advancements being tested, Ash continues:

 

“Everyone’s journey with dementia is different. Right now, we are working on leveraging recent consumer developments in sensor technology, machine learning, and user experience to create personalized assistive systems that can evolve with the needs of an individual with dementia and their caregivers. It’s an incredible opportunity to provide the community with supporting technologies that serve their needs.”

 

If we want to empower those with dementia to live independently, maximize the impact of caregivers, and provide peace of mind to family members, we must enable the right type of intervention at the right time. Someone with early Alzheimer’s disease may feel overwhelmed or suspicious of new technology, while a person in later stages may be too vulnerable to learn how to use it.

The future of dementia care will center around collecting the right data and extracting the right insights from it to enable better care choices. By allowing technology to provide information on the progression rate of the disease for a particular patient, we can start building a profile of care by recognizing changes in patterns to a baseline. Emerging technologies such as remote monitoring platforms can support this and guide the longevity of other technological interventions to ensure that they align with the individual patient’s journey. At the heart of these technologies, privacy must be a top priority, which may include the use of AI and other methods to allow for patterns to be recognized quickly and with minimal need of human intervention.
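The pattern-to-baseline idea above can be sketched with simple statistics: keep a baseline of some daily activity measure and flag days that deviate sharply from it. The sensor counts and the z-score threshold below are illustrative assumptions, not taken from any specific product.

```python
import statistics

def flag_deviation(baseline, today, z_threshold=2.0):
    """Flag a day's activity count if it deviates strongly from the baseline."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return today != mean
    # Flag when today's count is more than z_threshold standard deviations away
    return abs(today - mean) / stdev > z_threshold

# e.g. daily kitchen-sensor activations over the past two weeks (hypothetical)
baseline = [34, 31, 35, 33, 30, 32, 36, 33, 34, 31, 35, 32, 33, 34]
print(flag_deviation(baseline, 33))  # typical day: False
print(flag_deviation(baseline, 12))  # marked drop worth a caregiver check: True
```

A real system would refine this with per-patient baselines that evolve over time, but even a simple deviation flag turns raw sensor data into an actionable prompt for a caregiver.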

We are entering a new era of therapeutics for Alzheimer’s disease, but there is still much to do, particularly in care. Although the use of technology can ultimately support patients, caregivers and family members, it is often incompatible with the individual’s stage of the disease, or inaccessible to caregivers. But as new technologies emerge, data and AI can unlock new insights to support a personalized care plan that scopes each patient to their individual needs – allowing caregivers and families to provide the best care at the right time.


References
  1. Skills for Care. The size and structure of the adult social care sector and workforce in England. Technical report, Skills for Care Workforce Intelligence, 2023.
  2. Alzheimer’s Society. How much does dementia care cost? https://www.alzheimers.org.uk/blog/how-much-does-dementia-care-cost
  3. Raphael Wittenberg, Bo Hu, Luis Barraza-Araiza and Amritpal Rehill. Projections of older people with dementia and costs of dementia care in the United Kingdom, 2019–2040. Technical report, Care Policy and Evaluation Centre, London School of Economics and Political Science, 2019.
  4. Claudia Miranda-Castillo, Bob Woods and Martin Orrell. People with dementia living alone: what are their needs and what kind of support are they receiving? International Psychogeriatrics, 2010.
  5. Skills for Care. The size and structure of the adult social care sector and workforce in England. Technical report, Skills for Care Workforce Intelligence, 2023.

 

Connect with CDP

For more on how to accelerate patient-centred innovation in dementia care technology and device design, contact Cambridge Design Partnership. 


Neurodegenerative conditions: turning a corner to better treatment?

The pace of progress in tackling neurodegenerative diseases is accelerating. Can we unlock better treatment? Can we reach a cure?

Ageing populations face neurodegenerative conditions, such as Alzheimer’s Disease, Parkinson’s Disease, Motor Neurone Disease, Multiple Sclerosis, and others. These impact an estimated 60 million people worldwide, equivalent to the current UK population.

Whilst each condition has different mechanisms of neurodegeneration, they all have something in common: prognosis is bleak, treatment is limited, and there is no cure.

However, after decades of research, there has been a series of breakthroughs. Here, we focus on two areas of progress: how treatments have moved on and hope for the future.

The rise of RNA-based therapeutics 

The effective development of RNA-based vaccines during the COVID-19 outbreak catapulted RNA-based therapeutics into the spotlight. Whilst theoretical knowledge of RNA therapy has existed for over 30 years, the bulk of FDA approvals for treatments involving the nervous system has come in the last decade(1).

A major advantage of RNA-based therapy over conventional small molecule and protein-based approaches is its high specificity and precision, resulting in a more targeted approach to treating disease with specific gene mutations or overexpression.

However, to devise effective RNA-based therapeutics, the genetic hallmarks of the neurodegenerative disease of interest must be known.

Motor Neurone Disease (MND) is one such condition, where specific mutations in the SOD1 gene have been identified – in this case, in two per cent of diagnosed cases.

A recent breakthrough in phase three clinical trials targeted this gene using the drug Tofersen. Tofersen, developed by Biogen, directly interferes with the faulty overproduction of SOD1. After six months, patients had a reduction in SOD1 levels, and after 12 months the same patients reported better mobility and lung function(2,3). Although patients with SOD1 mutations only represent two per cent of those living with MND, these trials provide ‘proof of concept’ that similar gene therapy-based approaches may help other forms of the disease.

Another pioneering strategy, developed by Atalanta Therapeutics and Genentech, focuses on a technology called branched siRNA (small interfering RNA). This is a type of molecule that helps regulate gene expression by binding to a complementary messenger RNA, such as one transcribed from the gene of interest.

Branched siRNA uses novel RNA interference nucleotide technology to suppress the activity of genes that function abnormally, such as those carrying mutations. This slows the progression of the disease or stops it altogether.

It is hoped this approach can be applied across multiple neurodegenerative diseases, including Parkinson’s Disease, Huntington’s Disease and Alzheimer’s Disease.

Although testing is still in the pre-clinical stage, the branched siRNA platform aims to enable RNA interference to be deployed as a therapeutic approach throughout the brain and spinal cord. This overcomes the long-standing challenge of achieving adequate distribution within the central nervous system (CNS) to ensure the therapeutic agent reaches the nervous tissue(4,5).

Progress in non-RNA therapeutics 

Non-RNA therapeutics for neurodegenerative conditions also continue to progress. Examples include the monoclonal antibody Donanemab, developed by Eli Lilly. Phase three clinical trials showed it to slow clinical decline by 35% in patients with Alzheimer’s Disease, compared to a placebo(6).

Effective delivery remains a major challenge  

One of the main challenges in developing RNA therapeutics, and therapeutics for the brain in general, remains the efficiency of its delivery to the target tissue.

To treat neurodegenerative conditions, the therapeutic agent aims to reach the CNS. The presence of the blood-brain barrier (BBB), a cell-formed wall separating the bloodstream and the CNS, makes it difficult to deliver drugs. The BBB’s almost impermeable characteristics allow very few molecules to cross and make systemic drug delivery less efficacious.

There are two common approaches to overcome this: re-engineering the therapeutic agent to make it compatible with BBB permeability or bypassing the BBB altogether.

Re-engineering the therapeutic agent

This typically involves chemical modification of the drug (e.g., from water-soluble to lipid-soluble molecules) to enable passive diffusion through the BBB. Another approach is to design drug carriers that mimic the structure of endogenous molecules (e.g., monosaccharides, hormones) to activate carrier-mediated transport, or to use nanocarriers(7,8). Both approaches add complexity to manufacturing.

Another cross-BBB approach is Focused Ultrasound (FUS), where high-intensity sound waves temporarily disrupt the BBB to enable drug-loaded microbubbles to enter the CNS(9).

Bypassing the blood-brain barrier 

Bypassing the BBB can save time and effort in formulation, opening up a range of therapeutic agents not constrained by size or BBB compatibility. There are three common delivery types: intraparenchymal, intranasal, and cerebrospinal fluid (CSF) delivery. Of these, CSF delivery is often the favored approach, due to its lower clinical complexity(10).


Evaluating CSF delivery routes 

CSF delivery most commonly uses the intrathecal (IT) or intracerebroventricular (ICV) route.

IT involves an injection into either the lumbar region or the cisterna magna; the pulsatile flow of CSF then helps distribute the therapeutic agent through the brain and spinal cord.

ICV is more invasive. It involves two surgical interventions, one to place a catheter connecting the cerebral ventricles to the injection port at the top of the skull and one to remove the catheter.

To date, ICV has two approved drugs (Rituxan for CNS Lymphoma, and Brineura for Neuronal Ceroid Lipofuscinosis type 2), while IT lumbar injection has one (Spinraza for Spinal Muscular Atrophy), with plenty more in clinical and pre-clinical stages across a spectrum of neurodegenerative and neurological diseases(11). Irrespective of the approach, the trend is clear: less invasive, lower-dosage, targeted delivery is the way to go.

In the race to show safety and efficacy with either invasive or non-invasive approaches, all solutions will have to be patient-centered.

A new dawn for the treatment of neurodegenerative diseases  

The complexities of neurodegeneration have long frustrated scientists and clinicians alike, despite decades dedicated to studying its diseases, aetiologies, and treatments. However, we are making more rapid and more significant progress.

We have some way to go, but we mustn’t overlook the magnitude of these milestones. New therapeutics and delivery techniques are paving the way to more effective and efficient treatment.

By increasing our understanding of genetic hallmarks of the diseases, and using tools such as AI in drug discovery, we can unlock faster pathways to RNA-based treatments. Similarly, by finding innovative ways of demonstrating the safety and efficacy of delivery methods, such as modeling, we can edge closer to less invasive procedures and lower dosages to minimize potential side effects.

We need more research, more awareness, earlier diagnosis, and a better understanding of risk factors to enable prevention and earlier intervention.

But we are now getting closer to better treatment and one day finding a cure.

References 
  1. http://nectar.northampton.ac.uk/16015/1/Anthony_Karen_RNAB_2022_RNA_based_therapeutics_for_neurological_diseases.pdf
  2. https://www.sheffield.ac.uk/neuroscience-institute/news/promising-mnd-drug-helps-slow-disease-progression-and-benefits-patients-physically
  3. https://www.nejm.org/doi/full/10.1056/NEJMoa2204705
  4. https://www.gene.com/stories/pioneering-novel-therapeutics-in-neuroscience
  5. https://www.nature.com/articles/s41587-019-0205-0
  6. https://clinicaltrials.gov/ct2/show/NCT04437511?term=TRAILBLAZER-ALZ&cond=Alzheimer+Disease&draw=2&rank=3
  7. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8905930/
  8. https://ijponline.biomedcentral.com/articles/10.1186/s13052-018-0563-0#:~:text=Modification%20of%20the%20drug%20to,capable%20of%20crossing%20the%20BBB.
  9. https://clinicaltrials.gov/ct2/show/NCT03321487
  10. https://www.frontiersin.org/articles/10.3389/fnagi.2019.00373/full
  11. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9305158/

 

Connect with CDP

For more on how to advance RNA therapeutics and targeted CNS drug delivery for neurodegenerative diseases, contact Cambridge Design Partnership.

By Cambridge Design Partnership

Developing future drug delivery systems

WHITE PAPER


BY CLARE BEDDOES, AMY KING & JAMES HARMER

Here at CDP, we don’t think of the packaging as simply the container for the drug. We consider the whole delivery experience, from the device itself through to the accompanying materials, such as the instructions for use (IFU). Several factors must be considered when selecting the right device and packaging materials for injectable drugs, including the drug rheology and how, by whom, and where the drug is administered. This document takes a high-level view of how specific consumer trends may impact the experience of self-injection devices.

In this paper, we explore:

  • The futuring methodologies needed to stretch strategic thinking
  • How consumer needs – and packaging – are changing 
  • The consumerization of healthcare
  • Improving drug delivery systems that benefit the patient, the planet and the manufacturer

Download the white paper

 


Key trends in respiratory drug delivery

It was great to be back in person for the Drug Delivery to the Lungs conference in Edinburgh recently. Here, we share insights on three major themes from the event and a trend we think will reshape the future of respiratory drug delivery in the next 10-20 years.

Sustainable pMDIs

The shift in pressurized metered-dose inhalers (pMDIs) from HFC propellants towards less polluting gases has gained momentum, with California banning the sale and distribution of R227ea from the end of 2030 and R134a from the end of 2032, including for medical use. This effectively ends the sale of all current pMDI products in California.

The transition needs formulators, device designers, scientists, and other disciplines to collaborate to solve the challenges presented by the different physical properties of the new gases. The assessment of all types of inhalers from a sustainability perspective has advanced, too, with life cycle analysis (LCA) and carbon credits schemes being discussed – our sustainability team provides reviews and recommendations for a range of medical devices to help our clients improve their devices and provide evidence to back up their green credentials.

Usability for adherence

Time and again, studies show how challenging it is to measure asthma and COPD patients’ adherence to their medication. Adherence appears much lower than for other diseases: estimates range from 22-78%, compared to around 70% for diabetes.

Low adherence needs to be addressed by making devices easier to use and tailoring them to the patient’s needs. Reducing user steps is key to making a device easier to use, but patient feedback and tailoring to specific needs matter too – something connected inhalers could help with through digital reminders appropriate to the patient’s needs. This is just one of the ways that CDP Mosaic, our digital ecosystem catalyst, can be used. Independently verifying that connected or smart inhalers are responsible for increased adherence remains difficult, and is something the industry is investigating.

Modelling of drug delivery

Several talks at this year’s event covered modelling, with in-silico methods advancing in capability and popularity over the last 10 years. Topics covered included constructing a full airway model to assess drug deposition under different breathing profiles and using maths with physiological signals to detect disease and drug-induced changes. Posters demonstrated an even wider range of possible models, including our own.

Our modelling and simulation teams produce models for clients that highlight potential robustness issues with mechanical components and digital sensing techniques at early stages to determine suitable technologies for medical devices.

Learning from the past, looking to the future

Federico Lavorini, Professor and Consultant in Respiratory Medicine at the Department of Clinical and Experimental Medicine, Careggi University Hospital, Florence, Italy, gave an excellent summary of drug delivery over the last 100 years, including innovations where design has reduced user error.

Further talks considered what pharma could learn from other markets, especially as we move from ‘sick care’ to ‘health care’ – where technology identifies and treats conditions before they become symptomatic. Our Drug Delivery and Insight & Strategy teams work closely together to understand upcoming trends and draw on insights into consumer expectations from the consumer and digital markets for our clients.

Biologic treatments are coming to respiratory drug delivery and are likely to use Soft Mist Inhalers (SMIs) and Dry Powder Inhalers (DPIs), with current trends leaning heavily towards DPIs. This is likely to drive the development of new, higher-performance DPIs to deliver these high-cost treatments to the patient as efficiently as possible. We have dramatically increased the performance of DPI engines for our clients through our science-based approach to increasing fine particle fraction.

How we can help

Our team is experienced in all stages of drug delivery device development, across a wide range of scenarios and applications in the medical industry, with dedicated specialists working in these areas. Here at CDP, we have these specialists under one roof to partner with you in bringing your device to market, and we can also draw on the learnings of our colleagues in consumer markets to guide you on future consumer expectations.

By Cambridge Design Partnership

A Strategic Technology Roadmap for the UK In Vitro Diagnostics Industry

WHITE PAPER


A major new report for industry leaders, government, and health tech companies

The UK in vitro diagnostics (IVD) industry has the potential to help boost UK economic growth and make the UK a global leader in the industry while improving health in the UK and for people worldwide. A new strategy, applied over the next 10 years, can see the industry transformed.

The Roadmap, researched and written by Cambridge Design Partnership, in partnership with CPI, the Association of British HealthTech Industries (ABHI), and funded by Innovate UK, defines the key technologies and strategies that can place the UK at the forefront of this industry.

Download the Roadmap


By Cambridge Design Partnership

Clinical Manufacturing

We manufacture Class II and III medical devices – mechanical and electronic, durable and consumable – for our global clients. So, we understand the barriers they face getting their product from design to manufacture for clinical trials.

We’ve invested in the advanced clinical manufacturing facilities, domain expertise, and compliance to overcome our clients’ challenges, from fulfilling the required volumes to managing the complexity of the set-ups they need. We can produce up to 100,000 devices under a quality management system (QMS) certified to ISO 13485:2016 by Intertek Medical Notified Body, harnessing leading-edge facilities such as ISO Class 7 cleanrooms and our purpose-built 26,000 sq ft UK manufacturing center.

  • Class II and III medical devices
  • Capacity for up to 100,000 devices
  • 26,000 sq ft UK manufacturing center
  • QMS ISO 13485:2016 by Intertek Medical Notified Body

Our expertise extends across aseptic filling and sterilization, which we deliver through collaboration with our proven partners, as well as performing device assembly, labeling, and logistics in-house.

We regularly conduct design verification testing, including developing bespoke test methods, which we subsequently validate, managing the entire validation process for our clients. We create the documentation for design history and technical files, and support with the regulatory submission.

Our clients trust us to advance their Class II and III medical devices, from design to clinical manufacture. These devices create a fast track to user studies and clinical trials, then onwards to the clinic and regulatory submission. Our clinical manufacturing capability, shaped by years of practical experience and harnessed for the world’s leading healthcare companies, is proof of the purpose that unites us: to improve lives through innovation.

  • Assembly
  • Aseptic filling & sterilization
  • Labeling
  • Logistics
  • Design verification testing
  • Packaging
  • Technical and design history file assembly
By Cambridge Design Partnership

Designing more sustainable electronics

From phones to laptops, home devices to watches, electronic devices – particularly smart devices – have become part of people’s lives, enabling better communication and access to information and making day-to-day life easier.

But the increasing adoption of technology comes at an environmental cost. Electronic devices often have a significant carbon footprint because of the energy-intensive processes needed to produce printed circuit boards (PCBs) and integrated circuits.

Electronics production relies on mining and extracting dozens of different materials, including critical raw materials (economically important materials at high risk of supply shortage, such as lithium or titanium). Extracting these materials has a range of sustainability impacts, including the leakage of toxic chemicals such as cyanide into the environment, high levels of water use, and human rights abuses in the case of ‘conflict minerals’ such as gold and tantalum.

Waste electronics, or e-waste, form the fastest-growing waste stream in the world, with over 53 million tonnes produced in 2019. Most e-waste is disposed of incorrectly, ending up at waste dumps in developing countries. Hazardous chemicals, such as lead or mercury, that may be present in electronic components can leak into the environment, harming local ecosystems and damaging the health of people who live and work in the dumps.

Product sustainability efforts have focused on the circular economy, particularly recycling. But there are fundamental limits to the impact recycling can have on electronics: only 17% of e-waste is collected for recycling and, even when it is collected, recovering materials from e-waste is particularly challenging.

Electronics contain trace amounts of rare metals, which are complex and expensive to separate. Only the most abundant materials, such as copper and gold, can be economically retrieved during e-waste recycling, and even if all e-waste was recycled in this way, the material recovered still wouldn’t be enough to meet the growing demands of the industry.

One way to tackle the environmental challenges presented by electronics is to remove the need for them in the first place, for example by detecting a temperature change using a color-changing chemical rather than a sensor. But, in some instances, electronics are necessary, so how can designers reduce the impact of the products they create?

Our sustainability team assessed a range of technologies and design techniques to determine their potential for reducing the environmental impact of electronic products and how difficult they are to implement. This article outlines a few approaches we’ve used in recent projects at CDP.


Reducing complexity through connectivity

One of the best ways to reduce an electronic device’s environmental impact is by minimizing the electronics’ complexity, thereby reducing the number of integrated circuits needed as well as the surrounding passive components (resistors, capacitors and so on), connecting tracks, and PCB area.

An easy, effective way to do this is by pairing a product with a user’s existing device to provide the smart capability. Methods range from a simple QR code or NFC chip to a Bluetooth connection for transferring more complex data.

As well as reducing the electronics in the product, this allows for a degree of futureproofing, as software updates can be used to keep the product up to date. This idea isn’t new but is starting to be used more in applications from smart packaging to medical devices.

Important to note: Behind many of these software solutions are large data centers that need powering and should be considered in the product’s environmental impact.

Informed decision-making: Life Cycle Assessment (LCA)

Designers can optimize component choices and circuit designs during detailed design to reduce the overall impact of a product.

We recently used LCA to estimate the additional carbon footprint of adding an electronic module to a medical device. This step allowed our team to identify where to focus on reducing the impact of the design, such as replacing integrated circuits with a solution based on lower-impact passive components and optimizing the layout to minimize the total area of PCB required.

We identified several solutions that together had the potential to reduce the total carbon footprint of the product by up to 25% without compromising functionality. In many cases, this optimization also generates cost savings.
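The arithmetic behind this kind of screening comparison is straightforward: sum per-component embodied-carbon estimates for each design variant and compare the totals. The sketch below illustrates the idea only; every component name and figure in it is invented for the example, not taken from the project described above.

```python
# Toy screening-LCA comparison: sum per-component embodied-carbon estimates
# (kg CO2e) for two design variants. All names and numbers are invented
# purely to illustrate the calculation, not real project data.
baseline = {
    "microcontroller": 0.80,
    "bluetooth_ic": 0.60,   # integrated circuit later replaced
    "passives": 0.10,
    "pcb_area": 0.50,
}
optimized = {
    "microcontroller": 0.80,
    "passives": 0.25,       # discrete passive solution replacing the IC
    "pcb_area": 0.35,       # smaller, optimized layout
}

def footprint(bom: dict[str, float]) -> float:
    """Total embodied carbon of a bill of materials, kg CO2e."""
    return sum(bom.values())

saving = 1 - footprint(optimized) / footprint(baseline)
print(f"estimated reduction: {saving:.0%}")
```

In practice each per-component figure would come from an LCA database or supplier data, and the comparison would cover several impact categories, not just carbon.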

Optimizing electronics through additive manufacturing

Over the past two decades, additive manufacturing (such as 3D printing) has seen a surge in use in mechanical prototyping and manufacture, and its applications in the electronics sector are now starting to grow. In the context of PCBs, additive manufacturing refers to selectively adding conductive material to the areas required, as opposed to a more traditional approach which starts with a layer of copper and selectively etches away the areas where it isn’t needed.

These technologies can improve a product’s carbon footprint through reduced material usage and less energy-intensive manufacturing processes. A report published by the ECOtronics project found, “Changing from subtractive manufacturing (etching) to additive manufacturing (printing) has the potential to reduce environmental impacts by more than 50% across all impact categories.”

One additive manufacturing method is laser direct structuring (LDS), which allows you to construct circuits on the surface of device components. With this approach, you can remove the PCB entirely, dramatically cutting down on the material required.

These technologies present opportunities to fit electronics into new form factors, print onto a wide array of rigid or flexible substrates (the non-conductive part of the circuit board the metal circuit is added to) and increase the customizability of the design, all while reducing the product’s environmental impact.

As we’ve highlighted before, sustainability initiatives should always consider context, which is vital for electronics. In the absence of cost-effective recycling processes, designers must prioritize approaches that reduce the materials and energy required to produce electronics. As electronics continue to play a leading role in our lives, future designs should reduce our reliance on critical raw materials and consider how circular approaches to design can extend product lifetimes and prevent harm to people and the environment.


Connect with CDP

For more on how to reduce the environmental impact of your electronics through smarter design choices, contact Cambridge Design Partnership.

By Cambridge Design Partnership

How to boil your egg perfectly every time – according to simulation

Search ‘how to boil an egg’ on Google, and you get over three billion results, some telling you to put the egg in cold water after boiling to preserve the runny yolk. Intrigued, we decided to investigate the science behind this advice.

Rather than heading straight to our lab for experimentation, we used computer simulation to model the movement of heat through the egg and the surrounding fluid. Simulation lets us predict behavior in scenarios that would be impractical or expensive to reproduce in actual experiments.

Modeling the heat flow in a boiling egg is a surprisingly tricky problem. An egg consists of a solid shell holding the white and yolk, both initially liquid but solidifying as cooking continues. And, being natural products, eggs vary in their exact properties and sizes.

To simplify the problem, we found technical publications that describe the average dimensions and thermal properties of the shell, white, and yolk for a typical egg. We defined these properties at a temperature of 60°C, which is around the point the yolk starts to solidify. Using computer-aided design software, we created the geometry of the egg and defined a body of fluid to surround it. This fluid body represents the boiling water in a saucepan during the first cooking stage; afterward, it can be used to mimic cool-down in air or in a bowl of 10°C cold water. In all cases, the eggs start the process at room temperature.

We ran the simulation using Ansys Fluent, powerful software initially developed for problems such as the flow of air over aircraft or heat in a chemical plant, but equally applicable to domestic problems such as the humble boiled egg. To allow the simulation to run quickly on an ordinary computer, we took advantage of the fact that an egg is a body of revolution: it looks the same however it’s rotated around its axis. This let us model it as an axisymmetric body that the computer treats as two-dimensional, reducing the number of calculations and giving us the answer more quickly and cheaply than simulating the real-life, three-dimensional shape.
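To give a feel for the kind of calculation involved, here is a deliberately simplified sketch, not CDP's actual Ansys Fluent model: the egg is treated as a homogeneous sphere with an assumed water-like thermal diffusivity and its surface held at the water temperature, solved with an explicit finite-difference scheme. The radius, grid size, and material values are illustrative assumptions.

```python
import numpy as np

# Simplified 1D radial heat-conduction sketch of a boiling egg.
# Assumptions (illustrative only): homogeneous sphere, water-like
# diffusivity, surface pinned to the water temperature.
ALPHA = 1.4e-7             # thermal diffusivity, m^2/s (assumed)
R = 0.02                   # sphere radius, m (assumed egg size)
N = 50                     # radial grid points
DR = R / (N - 1)
DT = 0.2 * DR**2 / ALPHA   # time step, safely below the explicit stability limit

def simulate(t_end, t_init=20.0, t_boundary=100.0):
    """Radial temperature profile (deg C) after t_end seconds of boiling."""
    r = np.linspace(0.0, R, N)
    T = np.full(N, t_init)
    t = 0.0
    while t < t_end:
        Tn = T.copy()
        # spherical heat equation: dT/dt = alpha * (d2T/dr2 + (2/r) dT/dr)
        lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / DR**2 \
            + (2.0 / r[1:-1]) * (T[2:] - T[:-2]) / (2 * DR)
        Tn[1:-1] = T[1:-1] + ALPHA * DT * lap
        Tn[0] = Tn[1]          # symmetry condition at the centre
        Tn[-1] = t_boundary    # surface held at the water temperature
        T = Tn
        t += DT
    return T

T6 = simulate(6 * 60)  # six minutes in boiling water
print(f"centre: {T6[0]:.0f} C, just inside the shell: {T6[-2]:.0f} C")
```

Even this crude model reproduces the qualitative picture described below: after six minutes the material near the shell is close to the water temperature while the centre is still far cooler.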

As an example of the simulation results, Figure 1 shows the temperature distribution on a slice along the egg’s axis after cooking in boiling water for six minutes. The material towards the outside has heated up to close to the temperature of the water, but the central region corresponding to the yolk is still around 50°C – the mark of a runny egg.

Figure 1: Temperature distribution on a slice across the egg after six minutes of immersion in boiling water.

Figure 2 shows a side-by-side comparison of subsequently cooling the egg in air or in 10°C water for five minutes (our estimate of the time it takes to finish eating a first dippy egg and move on to the second). When cooled in air, the central region of the egg continues to heat up, reaching 70°C and removing any prospect of a runny egg, even though the outer region and shell have cooled. In contrast, after cooling in water, the central region stays unchanged at 50°C while the shell has cooled to close to 10°C. Leaving your perfect dippy egg in air risks ruining the runny yolk – but cooling it in water may save it.

Egg-article-in-line-image-fig-2-A
Egg-article-in-line-image-fig-2-B

Figure 2: Temperature distribution on a slice through the egg following cooking
and five minutes of cooling in (a) air and (b) water.

As well as modeling the overall temperature in the egg, we extracted the data for two specific points – at the center and the edge of the egg – and plotted them on a graph (Figure 3) to see how they differed. The data showed that the yolk’s temperature lags that at the shell. This is because the thermal diffusivity of the white and yolk are relatively low. Thermal diffusivity is a measure of how quickly heat can move through a material. So, it takes a while for the yolk to heat up, but once it does, it keeps cooking, absorbing heat from the rest of the egg material. It’s slow to respond to changes in the surrounding water (or air). The temperature just inside the shell responds much more quickly to changes, though, since the path the heat needs to travel from the surrounding fluid is considerably shorter, and the thermal diffusivity of the shell markedly higher.

Figure 3: Temperature profiles with time at the center point of the yolk (circles) and adjacent to the shell (crosses)

With the aid of some considered simplifications, we think this simulation analysis has proven the cookery expert right: cooling eggs down in cold water really does preserve the runny yolk. However, whenever you analyze a problem for the first time, it’s important to compare results against an experimental benchmark, so you can confirm the realism of the assumptions and simplifications in a computer simulation. We took three eggs and boiled each for six minutes in a lab beaker. One was opened straight away, and the other two after cooling in cold water or in air for five minutes. As predicted by our computer simulation, the yolks ranged from runny to fully cooked. And the best thing about this experiment? Everyone got an egg cooked precisely to their liking at the end.

 

Connect with CDP

Contact us to find out more about our capabilities and how we use science to understand and improve everyday products.