New frontiers in implantable neuromodulation therapies
By Cambridge Design Partnership

New frontiers in implantable neuromodulation therapies

Neuromodulation, where electrical signals in a patient’s nervous system are modified or stimulated to deliver a therapeutic effect, continues to be an exciting and evolving space within the healthcare sector.

There are many drivers contributing to its advancement. Notable factors include ongoing clinical neuroscience research fueling new possibilities in neuromodulation therapies, the invention of new technologies, and the development of new product formats to meet unmet needs.

Additionally, established implantable therapies – such as deep brain stimulation for Parkinson’s disease, spinal cord stimulation for chronic back and leg pain, and vagus nerve stimulation for epilepsy and depression – have gained increased acceptance and presence, with Medtronic, Boston Scientific, Abbott, Nevro and others leading the industry.

In all, these factors have made electrical neuromodulation one of the fastest-growing medical device sectors, with market size expected to rise from $6.09 billion in 2021 to $14 billion by 2030 [1].

The diversity of solutions is evident, with Figure 1 illustrating the current landscape of established and emerging implantable neuromodulation therapies.

Fig 1. Selection of established and emerging electrical neuromodulation technologies and their indications.

In this first article of a two-part series, we look at a few notable emerging therapies to illustrate how the implantable neuromodulation space is rapidly developing.

Bladder control: beyond sacral nerve stimulation

Addressing continence issues is a growing area in the healthcare sector, where neuromodulation is seeking to play a significant role in specific therapies.

Implant-based stimulation of the sacral nerve has relatively recently established itself as a way of addressing incontinence, through products such as Medtronic’s InterStim and Axonics’ product range. Alongside the sacral nerve, other nerves are being considered for implantable stimulation to address similar conditions and respond to specific unmet clinical and patient needs.

One alternative is tibial nerve stimulation, which has a history of effectiveness for certain cases in its non-implantable form: percutaneous tibial nerve stimulation (PTNS). The implant-based approach seeks to address an inconvenience of PTNS for patients and clinicians, i.e., the need for repeated stimulation sessions and user steps [2].

One example is the BlueWind Revi, which is part implantable (an electrode placed near the tibial nerve) and, to minimize the invasiveness of the procedure, part wearable (a through-body power source). The device stimulates the tibial nerve, which connects to the sacral nerve plexus containing the efferent and afferent nerve fibers responsible for bladder function. Here, the electrical impulses aim to modify the compromised activity of the detrusor muscle in patients with overactive bladder [3]. The company has recently reported results from its pivotal trial evaluating safety and efficacy, which remain under review by the FDA [4].

Similarly, Medtronic is seeking to develop an implantable tibial nerve stimulation system for incontinence, which is currently undergoing clinical trials [5].

Another nerve being targeted for incontinence is the pudendal nerve. Amber Therapeutics is currently developing an implantable closed-loop therapy, Amber-UI, for urge and mixed urinary incontinence. The therapy involves implanting electrodes that can sense, interpret, adapt and respond to individual patient signals, such as muscle contraction, in an attempt to restore normal bladder function. By accessing the pudendal nerve, it aims to treat both urge and stress incontinence episodes, something not possible with existing neuromodulation devices, thereby expanding the overall addressable market. First-in-human clinical studies are expected to conclude by the end of 2023.
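To make the closed-loop idea concrete, the sketch below shows a deliberately generic sense-interpret-respond cycle. The signal name, baseline and threshold values are invented purely for illustration; this is not Amber's algorithm, just the general control pattern a closed-loop stimulator follows.

```python
def closed_loop_step(signal_sample: float, baseline: float, threshold: float = 3.0) -> bool:
    """Interpret one sensed sample (e.g. a muscle-activity reading) and decide
    whether to respond: stimulate when the sample deviates from the patient's
    baseline by more than a threshold, i.e. an unwanted contraction is inferred.
    All numbers here are hypothetical."""
    return abs(signal_sample - baseline) > threshold

def run_loop(samples, baseline: float = 10.0):
    """Run the sense -> interpret -> respond decision over a stream of samples,
    returning True where stimulation would be triggered."""
    return [closed_loop_step(s, baseline) for s in samples]

# Only the third (out-of-range) sample would trigger a response.
decisions = run_loop([10.1, 9.8, 15.2, 10.0])
print(decisions)
```

Real devices add adaptation (updating the baseline per patient) and safety limits, but the loop structure is the same.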

Emerging Vagus Nerve Stimulation (VNS) therapies

Along with established therapies for epilepsy and depression, VNS is also being explored for conditions such as Rheumatoid Arthritis (RA) to displace injectable and oral medication.

SetPoint Medical is currently evaluating a novel VNS treatment that activates the ‘inflammatory reflex’ pathway (the neurophysiological mechanism by which the central nervous system regulates the immune system), which may reduce the excess inflammation that underlies RA. Its multivitamin-pill-sized MicroRegulator platform is currently an investigational device.

SetPoint Medical is progressing clinical trials not only for RA but also for Crohn’s disease, and is exploring, in animal models, the therapeutic effect of VNS therapy on multiple sclerosis.

Implantable VNS therapy is also being explored for other conditions such as sepsis, lung injury, stroke, traumatic brain injury (TBI), obesity, diabetes, pain management and cardiovascular conditions [7]. One example of a cardiac application is low-level stimulation of the vagus nerve to release the body’s own neurochemicals and improve heart function.

New pain indications

Neuromodulation is well established for addressing specific intractable pain of the trunk and/or limbs and pain from diabetic nerve damage – both conditions treated with spinal cord stimulation, delivered via electrodes implanted in the epidural space. In light of this success and the range of available products, pain specialists are continually looking to neuromodulation for solutions to pain of different origins and in different parts of the body.

This impetus was clearly illustrated in panel sessions and discussions with clinicians attending the American Society of Pain and Neuroscience 2023 conference in Miami. We heard testimonials of how specialists, using available stimulators, succeeded in treating a variety of new pain sources and anatomical locations in the wrist, joints, abdominal region and in one case, at the neck to relieve a patient’s sensation of being choked.

This dynamic led some clinicians to propose that the future of neuromodulation should also consider the treatment of pain associated with oncology treatments, given the extended survival now seen in cancer patients. Such exploration and success could pave the way for more established therapies – which would be welcome given the prevalence of chronic pain in the general population and the drive to deliver non-opioid alternatives.

Novel developments for spinal cord injuries

Along with surgical, drug and stem cell therapies, neuromodulation has also entered the frame for addressing spinal cord injuries.

ONWARD has seen success with the partially and fully implantable versions of its ARC Therapy™ product range, in which electrodes are implanted in the epidural space to stimulate the portion of the spinal cord below the injury that fails to communicate properly with the brain. By stimulating these lower nerves, the system aims to help restore and optimize their functioning in connection with the brain. ONWARD reported that in one study of its ARC-IM product, people with long-standing paralysis were able to stand and walk again with little or no assistance using this therapy.

ONWARD’s products have been granted Breakthrough Device Designation status for a range of indications, such as improving upper and lower limb function; bladder control and blood pressure regulation; and alleviation of spasticity in patients with such injuries [8].

Also in ONWARD’s pipeline is a plan to integrate an implanted Brain-Computer Interface (BCI) that senses the patient’s brain signals relating to intended leg and joint movement. These signals are sent wirelessly to the spinal cord stimulator, which can activate nerves that are poorly connected to the brain because of the injury. The aim is to create a “digital bridge” between the brain and those nerves to enable and improve the patient’s walking ability. Much research and iteration lie ahead; however, this ambition shows how innovative and transformational neuromodulation can be to people’s lives.

The road ahead for neuromodulation

The above examples only skim the surface of emerging therapies; neurostimulation, neuro-adaptive therapies and BCI technologies are attracting significant research and investment to create new therapies by leveraging the body’s physiological pathways.

We foresee continued progress in materials science, engineering, device design and biomedical research into neuro-physiological understanding of the human body to fuel the foundations for new, highly functional and patient-centered neuromodulation platforms.

We also foresee exciting developments in how targeting different nerves can potentially tackle similar medical conditions while the same nerve can be used to address various indications.

In our next article, we will explore the varied technology drivers and their considerations that are leading to the creation of new, innovative neuromodulation implants.


References 
  1. Strategic Market Research, https://www.strategicmarketresearch.com/market-report/neuromodulation-devices-market (accessed 12/07/2023)
  2. DOI: 10.1186/1471-2490-13-61
  3. DOI: 10.2147/RRU.S231954
  4. Clinical Study Results of the BlueWind System for Patients with Overactive Bladder Featured at the 2023 AUA Annual Meeting. https://www.prnewswire.com/news-releases/clinical-study-results-of-the-bluewind-system-for-patients-with-overactive-bladder-featured-at-the-2023-aua-annual-meeting-301811486.html
  5. Evaluation of Implantable Tibial Neuromodulation Pivotal Study. https://classic.clinicaltrials.gov/ct2/show/NCT05226286
  6. DOI: 10.1016/j.xjtc.2022.03.007
  7. DOI: 10.2147/JIR.S163248
  8. ONWARD website, https://www.onwd.com/ (accessed 12/07/2023)
How data and AI are changing bioprocessing
By Cambridge Design Partnership

How data and AI are changing bioprocessing – and why it’s needed

After numerous insightful talks and engaging conversations with industry leaders at this year’s BioProcess International, the key theme was clear: data, data and more data. 

Data has always been important, but now it is being collected to model current processes, understand how they work, and improve them. This is a trend that is only likely to accelerate as AI becomes part of everyday life – both in and outside of work.

Using data-based modeling to optimize well-established industrial processes 

There are many traditional processes that are used in the manufacture of antibodies, mRNA vaccines and cellular therapies. Companies are now collecting extensive data from these processes and using modeling to create their ‘digital twin’. 

The processes modeled range from relatively simple tasks such as optimization of freezing/thawing product intermediates, freeze-drying and automated buffer preparation, to more complex procedures such as bioreactor scale-up. Although these used to be manual ‘craft’ processes run by a combination of experience and pre-existing data, there is now a trend for them to be tested and optimized using in silico methods.

Using modeling to improve purification methods

Bioprocessing is used to create many therapeutic products, from molecules such as protein, DNA and RNA to much larger entities such as viruses and eukaryotic cells. Their production has many different steps that often require extensive purification before the next step can proceed. Common purification methods include clarification, chromatography, ultrafiltration/diafiltration and sterile filtration. 

These methods have typically been applied empirically, based on experience with similar products. Now, however, modeling has led to a much more detailed understanding of how these separation and purification methods work. It allows prediction of when column or membrane capacity will be reached and when “breakthrough” of contaminants is likely to occur. It has also led to the development of alternatives to standard resin-based column chromatography, such as the incorporation of new reactive chemical groups on membrane filters so that they can act like traditional resin-based columns.
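As an illustration of the kind of prediction involved, the sketch below evaluates a simple logistic breakthrough curve, a common empirical simplification of how effluent contaminant concentration rises as a column saturates. The parameter values are invented for illustration; real models are fitted to measured breakthrough data.

```python
import math

def breakthrough_fraction(t: float, t50: float, k: float) -> float:
    """Effluent-to-feed concentration ratio C/C0 at time t for a logistic
    breakthrough curve: near 0 while the column binds everything, rising
    toward 1 at saturation. t50 is the time of 50% breakthrough; k sets
    the steepness of the curve."""
    return 1.0 / (1.0 + math.exp(-k * (t - t50)))

def time_to_breakthrough(threshold: float, t50: float, k: float) -> float:
    """Invert the logistic to predict when C/C0 first reaches a chosen
    threshold (e.g. 0.01 = 1% breakthrough, a typical stop criterion)."""
    return t50 + math.log(threshold / (1.0 - threshold)) / k

# Hypothetical fitted parameters: 50% breakthrough at 120 min, k = 0.1 per min.
t_stop = time_to_breakthrough(0.01, t50=120.0, k=0.1)
print(f"Stop loading at ~{t_stop:.0f} min")
```

In practice the same inversion tells operators how much load margin remains before contaminants escape the column.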

Benefits of Process Analytical Technology (PAT)

PAT refers to on-line/at-line measurement of critical product quality and performance attributes so that real-time direct data collection can be used to control and optimize manufacturing processes.

PAT is being augmented by a much wider range of analytical techniques than before and now includes many different types of spectroscopy including variable path length, Fourier-transform infrared, Raman and Dynamic Light Scattering, as well as Nuclear Magnetic Resonance. The use of PAT for direct data collection that links to immediate process control is only likely to accelerate.

Inexorable rise of disposable closed cell processing systems

In addition to the data theme, it was clear to see that the number of automated closed cell handling and processing systems – from cell selection to expansion and harvesting – is rapidly increasing. Companies aim to offer end-to-end solutions to traditionally manual processes, either by offering modular components or a single complete system. 

The options for choosing automated disposable bioreactors/cell expansion systems are also increasing, with many players recently entering the market. It is clear why this option is advantageous; traditional stainless-steel bioreactors are complex, expensive, and laborious to clean and maintain.

Just how large these systems can grow is shown by ThermoFisher’s 5000L disposable Dynadrive bioreactor, which is offered as a fast-to-install option compared to stainless-steel alternatives. However, the environmental impact of the disposable route is a long-term concern and is expected to be a point of contentious discussion over the coming years.

Bioprocessing technology is developing (but not fast enough for demand) 

The technological developments described above are certainly needed as advances in eukaryotic culturing methods are allowing higher and higher cell densities to be realized, which makes purification more challenging. Furthermore, the pipeline for products that use these technologies is growing at a dizzying rate with over 1,500 cell and gene therapy and 700 mRNA trials listed on the US Clinical Trials site. New higher throughput processing techniques will need to be developed to accommodate this demand. 

The industry clearly recognizes this and companies were very open in sharing their results at BioProcess International – both good and bad! They are also keen to work with the process equipment manufacturers to optimize performance. Overall, improvements have been made, but there is a long way to go. 

Performance can be improved by a virtuous circle of data generation, data modeling and innovative design and engineering – something we at CDP are already doing to help our clients succeed.

If you would like to see how innovative engineering and automation could help increase bioprocessing throughput, please get in touch.


Insights into GenAI: the product owner’s perspective
By Cambridge Design Partnership

The product owner’s perspective: Five practical insights to accelerating innovation with GenAI

Here at CDP, we’ve delivered a range of Generative AI (GenAI) projects that use Large Language Models (LLMs). Each has been a journey of discovery, and sometimes frustration. But ultimately each has reinforced the potential for GenAI to dramatically accelerate innovation.

In an attempt to provide a useful contribution that cuts through the noise, we’ve distilled our learnings into a four-part series on how businesses, data scientists and product owners can leverage GenAI for success with a final perspective from a GenAI-powered ChatBot.

In this second article, we draw from our experiences implementing GenAI from a product owner’s perspective. For a high-level view of LLMs, check out Part 1; for a deeper dive into the technology from a data scientist’s perspective, check out Part 3.

1. Start at the end and work backwards

As with all truly transformative innovation, start by understanding what you are offering your users and work back from them. Ignore the undoubted magic of the technology at this stage – you can rely on that coming later.

You will need to set your success criteria, and this is where to start. Delighting your user base and measuring how they will benefit will do more to drive adoption than any shiny AI tech that might be going on behind the scenes.


Choose your project carefully.

  • Choose an area that you already know well or for which you have a good way of measuring success. This will ensure you see beyond the magic of the black-box and can truly judge the performance and value that LLMs bring.
  • Choose an area where LLMs play to their strengths by taking advantage of at least one of the core competencies they have been shown to do well: summarization, expansion, inference and analysis.

2. Don’t forget the basics

Make good use of Service Design techniques to define what success looks like. Map the User Journey and spend time defining the touchpoints and modelling the semantic information architecture.

And then strip it back. Cut away absolutely everything that isn’t vital to the successful outcome you plan for. Don’t let the designers loose until this is done. And treat any investigative work with the technology up to this point as exploratory – it should almost certainly be archived.

You’ll then have a clear set of priorities, requirements, information flows and use-cases that everyone understands, and everyone can support. The whole team will be clear about what they are aiming for. Keeping their eye focussed on the prize makes the Product Owner’s primary catch-phrases more effective: “No that is not in scope” and “This is lower priority”.

And if this is starting to sound like the start of any solid digital project – good, it should.

3. Experiment

Give your team as much time as possible to try things out. Build the time into the plan and break the experiments down into small and well-defined steps to learn and iterate.


Look to experiment with the following:

  • How the structure of prompts changes the output.
  • How the different LLMs compare when asked to respond to the same prompt.
  • How to extend the LLMs by training or embedding your own data.

Aim to build the experimental steps around the core competencies of Generative AI. And later, bring these together to form an overall solution using your favourite AI automation tool chain.
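One way to structure such experiments is a small harness that runs every model/prompt combination and scores the outputs against your success metric. The sketch below uses a hypothetical `call_llm` stub (with canned responses) in place of a real provider SDK, and a toy scoring function; both are assumptions you would replace with your own client and metric.

```python
from itertools import product

# Hypothetical stand-in for a real LLM API call; swap in your provider's client.
def call_llm(model: str, prompt: str) -> str:
    canned = {
        ("model-a", "Summarize: ..."): "Short summary.",
        ("model-b", "Summarize: ..."): "A slightly longer summary.",
    }
    return canned.get((model, prompt), "(no response)")

def run_experiments(models, prompts, score_fn):
    """Run every (model, prompt) combination and score each output, so small,
    well-defined experimental steps can be compared side by side."""
    results = []
    for model, prompt in product(models, prompts):
        output = call_llm(model, prompt)
        results.append({"model": model, "prompt": prompt,
                        "output": output, "score": score_fn(output)})
    # Best-scoring combination first.
    return sorted(results, key=lambda r: r["score"], reverse=True)

# Toy metric for illustration: shorter outputs score higher.
ranked = run_experiments(["model-a", "model-b"], ["Summarize: ..."],
                         score_fn=lambda out: -len(out))
print(ranked[0]["model"])
```

Because each run is logged with its model, prompt and score, iterating on prompt structure becomes a comparison of rows rather than a memory exercise.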


There will be surprises. There will be frustrations. And there will be changes in the way that you approach the use of the LLMs. Don’t be afraid to pivot on how you use the technology – or indeed whether you use it at all. But remember the basics, keep your eye on what success looks like and don’t let the team get carried away with ‘shiny object syndrome’.

4. Get lots of feedback

While using AI, remember to share your work with real humans as early as possible: People outside your team who can give you useful feedback. Set up demos within the team to share learnings and put on regular show-and-tell sessions with your target audience. And, as soon as possible, let them try it out – on their own, without you there. They will learn to see beyond the magic, and you will quickly find out what works and what doesn’t.

Your priorities will change – but the fundamental definition of success won’t (hopefully). And don’t forget the importance of plain old testing. The outputs from an LLM can vary widely with only the smallest changes in training data and prompts. Fortunately, LLMs can come to the rescue here – they are great at evaluating the output of other models through peer review. Use that capability to help you test. It is also worth building into the architecture of your solution: where you have the resources, double up the LLMs to interact and increase the quality of output for a production system.
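The doubled-up generator/reviewer pattern can be sketched as below. Both roles are stubbed with plain functions here (real implementations would each be an LLM API call, and the rubric would be a review prompt); the function names and the threshold are illustrative assumptions.

```python
# Hypothetical stub for the generating model; in practice, an LLM API call.
def generate(prompt: str) -> str:
    return "Draft answer to: " + prompt

# Hypothetical stub for the reviewing model: a second LLM 'peer-reviews' the
# first model's output against a rubric and returns a machine-usable verdict.
def review(prompt: str, answer: str) -> dict:
    rubric_ok = answer.startswith("Draft answer") and prompt in answer
    return {"score": 1.0 if rubric_ok else 0.0, "passed": rubric_ok}

def answer_with_review(prompt: str, threshold: float = 0.5):
    """Generator/reviewer pair: only release answers the reviewer accepts,
    otherwise return None so the caller can retry or escalate."""
    answer = generate(prompt)
    verdict = review(prompt, answer)
    return answer if verdict["score"] >= threshold else None

print(answer_with_review("What does success look like?"))
```

The same `review` step doubles as an automated test harness: run it over a fixed set of prompts and assert that the pass rate stays above your bar whenever prompts or training data change.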

5. Don’t underestimate the time you need


Don’t underestimate the time it will take to gather, prepare and refine your data. When it comes to data, quality and variety are just as important as quantity. With demographic information, a good distribution of variety is vital to represent your users truly and ethically. And don’t forget to set aside at least 10% randomly selected from the training set so that you can properly test the results.
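The random holdout described above is a few lines of code; the sketch below shows one minimal way to do it (a fixed seed keeps the split reproducible between runs, which matters when you are comparing prompt or model changes).

```python
import random

def train_test_split(records, test_fraction=0.1, seed=42):
    """Randomly hold out at least `test_fraction` of the data for testing,
    so results are always evaluated on examples never used for training."""
    rng = random.Random(seed)      # fixed seed -> reproducible split
    shuffled = records[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = max(1, int(len(shuffled) * test_fraction))
    return shuffled[n_test:], shuffled[:n_test]   # (train, test)

train, test = train_test_split(list(range(100)))
print(len(train), len(test))
```

Stratified splitting (preserving demographic proportions in both sets) is the natural next step when variety and ethical representation matter, as noted above.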


To save time and increase the training and test data available, explore opportunities to synthesise data to add to your original data set. Also, don’t underestimate the time it will take to test and refine the prompts and LLM settings to achieve the repeatable outcomes you are looking for. Prompt engineering is an art as well as a skill and takes time to learn.


Finally, know when to stop. It will always be possible to make it a bit better. Be clear about what is good enough and recognise when you get there. The impulse for the team to keep tweaking will never end – it’s simply too absorbing.

Interested in exploring how GenAI can accelerate your innovation?

Come and join us in Cambridge, UK, and Raleigh, NC, where we’ll be running a series of in-person workshops to help clients identify the opportunities (and threats) of GenAI and plan a path to accelerate their innovation.

Trends in Respiratory
By Cambridge Design Partnership

Trends in respiratory therapies: why pMDIs hang in the balance of new technology

In May 2023, RDD Europe returned to a real-world conference after years of pandemic-enforced online-only presence. The location was spectacular – Antibes on the Cote d’Azur – with the sparkling Mediterranean Sea providing welcome relief from a dismal British spring. 

The industry was well represented by device technology companies, CMOs, academics and pharma companies, and the presentations and workshops provided an engaging blend of research and practical advice.

Even though much of my time over the past ten years has been focused on parenteral device development, my career in combination products started in respiratory devices, working on a variety of dry powder inhaler (DPI) and pressurised metered dose inhaler (pMDI) devices, including the GSK Ellipta inhaler. This year at RDD, as I returned to my roots in this industry, three main themes struck me: preparing for the pMDI cliff edge; moving beyond traditional respiratory diseases; and implementing particle engineering for targeted treatment.

There were also two notable omissions: users and connectivity. More on those later.

Preparing for the cliff edge of pMDI propellants

The shift in pMDIs from using HFC propellants towards gases with a lower global warming potential (GWP) has gained momentum, with California imposing a ban on the sale and distribution of R227ea from the end of 2030, and R134a from the end of 2032, including for medical use. This means the end of the line for the sale of all current pMDI products in California, with other jurisdictions likely to follow suit as the world tries to move to a more sustainable solution.

The transition needs formulators, device designers, scientists, and other disciplines to collaborate to solve the challenges presented by the different physical properties of the new gases. Different thermodynamic and fluid dynamic properties can dramatically alter the plume geometry, droplet size and particle velocity, requiring careful redesign of the fluid pathways to compensate for the differences. These challenges were outlined in evidence presented by Recipharm (1), Proveris and Koura (2), and Healthy Airways LLC (3).

At Cambridge Design Partnership, we are receiving far fewer enquiries for pMDI products than DPIs and soft-mist inhalers. Obviously, an n=1 sample does not have a high degree of certainty, but it reflects a general sentiment among clients to focus future developments away from pMDI platforms.

Moving forward beyond traditional respiratory diseases

Asthma and COPD remain the biggest drivers in device and formulation development, much the same way that diabetes treatment has driven pen injector development. Two drivers that our drug delivery team have seen pushing device design in the respiratory and inhalation market are the need for home treatment rather than hospital-centered treatment, and platforms for biological drugs. The other significant driver is vaccines that are stable at higher temperatures and can be delivered without leaving behind copious volumes of blood-contaminated medical waste.

One challenge that comes with these new treatment regimens, beyond formulating drugs that will be stable in powder form, is getting the drug to the correct part of the body and making sure it remains present long enough to be effective. One paper from UCL and the University of Hong Kong (4) highlighted a promising approach to developing therapeutic antibodies against future SARS outbreaks. Some of these developments also require higher dose payloads, or API-only formulations; this presents a substantial challenge to device designers to make sure that the inhalation capabilities of different patient groups can achieve the required dose efficiency.

Aptar and Recipharm also shared their own device innovations, presenting novel spray and soft-mist technologies based on a syringe primary container. Targeting rapid treatment to the brain via the olfactory route is a much-neglected treatment option, in part due to the challenge of achieving consistent behavior across users. At Cambridge Design Partnership, we’ve been working with a pioneering device company looking to exploit this pathway, and my colleague, Clare Beddoes, will be presenting information on this device development at PODD in October.

Enter: particle engineering for targeted treatment

In addition to the paper from UCL (4), particle engineering to target specific areas in the respiratory and nasal pathway was a topic that several posters and presentations addressed directly. Building on standard jet milling techniques, a paper from Aston University explained how isothermal dry particle coating (iDPC) can be used to create more potent formulations without increasing the volume of powder inhaled by the user (5). A third paper from Hovione and two Portuguese institutions focused on the characterization of different particle manufacturing techniques and how they affect deposition in nasal passages (6).

Closing the gap between the early stages of in vitro and in silico models, and the later stage in vivo performance, continues to receive a lot of attention. As the cost of computing power continues to fall, going into clinical or preclinical trials with greater confidence will accelerate time to market and reduce the cost burden on pharma companies looking to novel treatments.

Don’t forget user capability and connectivity

Two areas of development that received relatively little focus at the conference were human factors engineering (HFE) and connectivity – two concerns that are the subject of a great deal of effort in the parenteral sector. Recipharm presented a poster on the HFE advantages of their novel unit dose nasal spray when compared to a reference device (which bore a striking resemblance to an Aptar Unidose Liquid Nasal Spray). Research institution Solvias presented a paper showing how training users can lead to worse outcomes due to misperception of expertise using a device (7). This counterintuitive result demonstrated that patients with limited one-to-one training with a Handihaler showed more errors in use than patients who only had access to the device and IFU. 

While these insights were welcome, our in-house team knows that patients continue to struggle to use inhalers reliably and consistently, leaving even the most effective drug products showing variable results.

These challenges for patient use are also being seen in the parenteral market, which is why we are working so closely with our clients to find better ways to train patients and leverage connectivity to improve adherence to medication regimens. These connectivity solutions are often in direct conflict with cost and sustainability targets, and finding a route to square this circle is a challenge with which CDP’s designers and engineers are actively engaging.

See you in Tucson?

RDD 2023 was the first RDD conference I have attended. It was great to reconnect with former colleagues and make new connections across the industry. The conference was very well run, and the standard of papers and presentations ensured there was plenty of fascinating material for industry and academia to engage with. I’ve already blocked out my diary for RDD 2024 in Tucson and I look forward to seeing you there.


References
  1. Albuterol Sulfate Metered Dose Inhaler Feasibility Using an Environment Friendly Propellant HFA152a and Novel Valves (Lei Mao, Sheryl Johnson, Nischal Pant, James Murray, Donald Ellis, Benjamin Zechinati, Johnathan Carr and Victoria Cruttenden)
  2. Comparison of Spray Characteristics of P-134a and Low GWP P-152a pMDIs With and Without Ethanol (Lynn Jordan, Sheryl Johnson, Ramesh Chand, Grant Thurston, Deborah Jones, Vanessa Webster and Sally Stanford)
  3. Accelerated Development of MDIs with Low GWP Propellants in a QbD Era: Practical, Regulatory and Scientific Considerations (Healthy Airways LLC and First Flight Pharma LLC)
  4. Inhaled Antibody Therapies: Enabling Prophylactic Protection against SARS-CoV-2 Infection with a Dual Targeting Powder Formulation (Han Song Saw and Jenny Ka-Wing Lam)
  5. Use of Isothermal Dry Particle Coating (iDPC) for the Development of High Dose Dry Powder Inhalers (Jasdip S. Koner, David A. Wyatt, Amandip S. Gill, Shital Lungare, Rhys Jones and Afzal R. Mohammed)
  6. Benchmarking of Particle Engineering Strategies for Nasal Powder Delivery: Characterization of Nasal Deposition Using the Alberta Idealized Nasal Inlet (Patricia Henriques, Cláudia Costa, António Serôdio, Ana Fortuna, and Slavomíra Doktorovová)
  7. Effect of Capsule-Based Dry Powder Inhaler User Training on In Vitro Performance (Oleksandra Troshyna and Yannick Baschung)


Care tech: exploring the latest trends in dementia care
By Cambridge Design Partnership

Care tech: exploring the latest trends in dementia care

We are witnessing important advances in the treatment of the most common cause of dementia, Alzheimer’s disease, most noticeably by the emergence of disease-modifying therapeutics. And this trend is only set to continue, with new innovations and technologies promising to help slow the progression of this devastating disease.

However, patients who do not yet have access to these treatments or are in a more advanced stage of the disease will continue to require significant care support. The caregiving sector is already under significant pressure due to the increasing demand for long-term care within aging populations [1]. As the disease progresses, family members, including elderly spouses, are often the main caregiver – but they may be left poorly equipped to do this without the right support.

With the cost of dementia care running to £32,250 per person per annum [2], technology innovators are finding new ways to make resources go further and give dementia patients independence for longer – providing reassurance to the caregiver and peace of mind to family members.

The challenge lies in making these solutions accessible to caregivers and usable for patients. In this article, we take a deep dive into the technologies available to support dementia care and explore emerging trends that are transforming the landscape by using the right technology at the right time.

Dementia care: the current landscape

Alzheimer’s disease is a progressive and irreversible neurodegenerative condition that primarily affects the cognitive functions of the brain, particularly memory, thinking and behavior. It is the most common cause of dementia, a broader term for a set of symptoms that impact a person’s ability to live independently.

In the UK, it is estimated that more than 900,000 people live with dementia, and this is projected to double by 2040 [3]. Of the people diagnosed, up to a third live alone [4]. With the aging population outpacing the rate of training and recruiting caregivers, the already significant caregiver shortage is set to increase [5].

Meanwhile, family members are taking on caregiver responsibilities, often with unsustainable and distressing consequences. This is in part because every patient journey is different and the rate of their disease progression can vary widely. Some patients may require discreet support at the early stages of the disease, while others may require constant care. Knowing when and how to intervene to provide the care support needed is crucial.

The care sector is increasingly looking to technology to maximize the impact of the professional and informal caregiver workforce. There is an increasing recognition that caregivers require ongoing support to make their role more manageable, especially following the pandemic.

An overview of innovations

Assistive technologies rarely exist in isolation. In fact, it is often the combination of these technologies that yields the best results. Here are some of the technologies available to support independent living and managing disease progression.

Personal alarms and safety tracking

Alarms and tracking technologies allow people to call for help if they need it – wherever they are – as well as providing peace of mind for caregivers and family members when they are not there. They are simple to use and can help patients stay independent for longer.

Location. GPS trackers such as Mindme, Ubeequee, and Angelsense consist of battery-powered or rechargeable wearables that connect to a 24/7 monitoring support center to alert family members and emergency services if a vulnerable adult is outside designated safe zones. Direct-to-consumer devices, such as Medpage, work similarly, but the information links directly to family members and may not have predefined safety zones or raise an alarm. Connectivity is based on broadband and subject to subscription charges.
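
To illustrate how a “designated safe zone” check might work in principle – this is a generic sketch, not a description of any of the products named above – a circular geofence can be tested with a great-circle distance calculation:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def outside_safe_zone(position, zone_centre, radius_m):
    """True if the tracked position falls outside the circular safe zone."""
    return haversine_m(*position, *zone_centre) > radius_m

# Illustrative coordinates: a home in central Cambridge with a 500 m safe zone
home = (52.2053, 0.1218)
print(outside_safe_zone((52.2055, 0.1220), home, 500))  # nearby -> False
print(outside_safe_zone((52.2500, 0.1218), home, 500))  # ~5 km away -> True
```

Real products typically layer several such zones (and often arbitrary polygons) on top of this basic check, with the alert routed to the monitoring center rather than raised on the device.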

Alarms and calls. Technologies such as Tunstall’s MyAmie, Oysta, and Saga’s SOS allow patients to raise an alarm for relatives, caregivers or emergency services with a single button. These technologies often come in the form of a pendant worn around the house and connected to a hub via a radio signal; the patient can also use the hub itself to raise an alarm, but the pendant must be within range of the hub to work. Other technologies work similarly to the GPS trackers and can rely on broadband for wider network reach. These often also incorporate fall detection and GPS.

Fall detection. Wearables such as Buddi, Telecare, and Careline are designed specifically for dementia care. These use inertial measurement units, gyroscopes, and pressure sensors to detect falls and automatically send messages to caregivers, family members, and first responders. These devices are often accompanied by an alarm button for the user and GPS tracking. Many of these technologies can also be connected to a 24/7 monitoring support team.

Reminders and medication adherence. There is a variety of technologies in this category that allow caregivers to set reminders for patients to take medication, drink water, eat, or remember appointments or social events. Memory aid kits include the MemRabel care alarm clock with a large screen, connected to a Pivotell Vibratime rechargeable wristwatch that vibrates for reminders. Reminders can be in photo, video or audio format.

The challenge many of these technologies face is that they depend on a caregiver to ensure the patient remembers to engage with and wear the device, charge it when necessary, and crucially, press the button if in distress. In the case of some technologies, they must also be within reach of a hub.

These technologies are good for the early stages of the disease, but as cognitive decline continues, patients will rely more on caregivers to support them, thus limiting their advantages.

In other words, the longevity of these technologies can become incompatible with the patient’s journey, and this is one of the key hurdles to consider when designing and adopting technology in dementia care.

Remote monitoring

This is a fast-growing area for dementia care. Remote monitoring technologies share information on the patient’s daily living patterns with caregivers and family members. The purpose is to provide peace of mind to family members and enable caregivers to make informed care decisions in the short and long term.

Common functions include:

  • Movement monitoring. Generally delivered by several passive infrared (PIR) sensors installed around the house, and pressure mats in beds and sofas, connected to a hub.
  • House occupancy. Sensors on external doors to monitor whether an individual has left the house.
  • Appliance usage. Monitored by connected sensors placed between the mains socket and the appliance plug.
  • Fall detection. Cameras or mmWave radar sensors to detect when an individual has had a fall, without the need for a wearable.

Many of these functions can be delivered by single systems, e.g. Taking Care Home Alert, with the more sophisticated fall detection systems generally targeted at professional care provider users, e.g. Hikvision and Vayyar Care.

It is also common for families to create their own solutions, especially when they feel no existing single solution works for them. This includes the use of consumer tech, such as smartphones, video doorbells, smart home speakers, and cameras around the house. Video doorbells, for example, can be valuable in preventing scams, while smart home speakers can set reminders, automate house functions, or call a relative. However, the use of cameras around the house does pose privacy concerns which need to be considered.

Although the overall objective is to monitor daily independent living, the information often requires interpretation by the caregiver. This can often be facilitated through a dashboard, although the information can be disjointed, and assessment of patterns may not be clear-cut.

Innovator Matt Ash from Supersense Technologies, however, believes we can do more to obtain valuable insights and monitor disease progression efficiently and noninvasively.

“There is a real need for technologies that support caregivers in their role and provide them with the confidence to take a break, knowing their loved one is safe. Though there are some credible assistive technologies out there, the unique needs of families living with dementia are not well served. Projects like the Longitude Prize on Dementia are investing in radical thinking to generate solutions with families living with dementia.”

Talking about some of the latest advancements being tested, Ash continues:

“Everyone’s journey with dementia is different. Right now, we are working on leveraging recent consumer developments in sensor technology, machine learning, and user experience to create personalized assistive systems that can evolve with the needs of an individual with dementia and their caregivers. It’s an incredible opportunity to provide the community with supporting technologies that serve their needs.”
Adopting the right intervention at the right time

If we want to empower those with dementia to live independently, maximize the impact of caregivers, and provide peace of mind to family members, we must enable the right type of intervention at the right time. Someone with early Alzheimer’s disease may feel overwhelmed or suspicious of new technology, while a person in later stages may be too vulnerable to learn how to use it.

The future of dementia care will center around collecting the right data and extracting the right insights from it to enable better care choices. By allowing technology to provide information on the progression rate of the disease for a particular patient, we can start building a profile of care by recognizing changes in patterns against a baseline. Emerging technologies such as remote monitoring platforms can support this and guide the longevity of other technological interventions to ensure that they align with the individual patient’s journey. Privacy must remain a top priority at the heart of these technologies, with AI and other methods helping patterns to be recognized quickly and with minimal need for human intervention.
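
As a minimal, purely illustrative sketch of recognizing changes in patterns against a baseline – the data and threshold below are invented, not drawn from any real system – daily activity counts from in-home sensors could be compared against recent history:

```python
from statistics import mean, stdev

def flag_deviation(history, today, z_threshold=2.0):
    """Flag today's activity count if it deviates from the baseline
    by more than z_threshold standard deviations."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

# Two weeks of daily kitchen PIR activations (invented figures)
baseline = [34, 36, 33, 35, 37, 34, 36, 35, 33, 36, 34, 35, 37, 36]
print(flag_deviation(baseline, 35))  # a typical day -> False
print(flag_deviation(baseline, 12))  # a sharp drop -> True, worth a check-in
```

A production system would of course track many signals at once and let the baseline itself evolve, but the principle – compare today against the individual’s own recent history, not a population norm – is the same.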

We are entering a new era of therapeutics for Alzheimer’s disease, but there is still much to do, particularly in care. Although the use of technology can ultimately support patients, caregivers and family members, it is often incompatible with the individual’s stage of the disease, or inaccessible to caregivers. But as new technologies emerge, data and AI can unlock new insights to support a personalized care plan tailored to each patient’s individual needs – allowing caregivers and families to provide the best care at the right time.


References
  1. Skills for Care, The size and structure of the adult social care sector and workforce in England. Technical report, Skills for Care Workforce Intelligence, 2023.
  2. Alzheimer’s Society, How much does dementia care cost? https://www.alzheimers.org.uk/blog/how-much-does-dementia-care-cost
  3. R. Wittenberg, B. Hu, L. Barraza-Araiza and A. Rehill. Projections of older people with dementia and costs of dementia care in the United Kingdom, 2019–2040. Technical report, Care Policy and Evaluation Centre, London School of Economics and Political Science, 2019.
  4. C. Miranda-Castillo, B. Woods and M. Orrell. People with dementia living alone: what are their needs and what kind of support are they receiving? International Psychogeriatrics, 2010.
  5. Skills for Care, The size and structure of the adult social care sector and workforce in England. Technical report, Skills for Care Workforce Intelligence, 2023.

Could oligonucleotide manufacturing advances redefine therapy?
By Cambridge Design Partnership

Could oligonucleotide manufacturing advances redefine therapy? 

Oligonucleotides have the potential to address some of the most devastating diseases that remain stubbornly resistant to treatment. These include neurodegenerative, vascular, respiratory, and oncological illnesses. As exciting as this branch of science is, the oligo industry is still in its commercial infancy. Large-scale oligonucleotide manufacturing is not straightforward, and various challenges need addressing.

To understand these, Alejandra and Carla, two consultant biomedical engineers at Cambridge Design Partnership (CDP), were invited to take part in the Innovation in Oligonucleotide Manufacturing Symposium hosted by CPI at their new facilities in Glasgow. After an intensive day of discussion between key stakeholders from industry, academia, government, and the regulatory sector, we present the main takeaways. For this to make sense, let’s start from the beginning.

How do oligonucleotides work?

Oligonucleotides are short DNA or RNA molecules, typically around 20 nucleotides (the basic building blocks of nucleic acids) in length. They can modulate gene expression, the process by which the information in a gene directs the assembly of a protein molecule. They do this by binding to pre-mRNA and mRNA, the carriers of genetic information before the mature mRNA is translated into proteins. Because mRNAs carry the code for all cellular proteins, oligonucleotides could be effective for targets and diseases not treatable by current drugs1.

What is their importance as therapeutic agents? 

Oligonucleotide therapeutics can prevent or modulate the expression of almost any gene as part of a highly targeted treatment. They aim to target the genetic basis of the disease rather than the symptoms. Compared to conventional therapies, oligonucleotides have higher specificity with reduced side effects, and they can reach molecules that are currently difficult to target, such as RNA. Several oligonucleotide therapeutics are already on the market, with Novartis Pharmaceuticals’ Vitravene, for treating cytomegalovirus retinitis in immunocompromised patients, the first to be approved by the FDA in 1998.

The list of diseases that oligonucleotides can target is ever-growing, with the market valued at USD 5.19 billion in 2020 and expected to rise to USD 26.09 billion by 20302.

How are oligonucleotides manufactured? 

Oligonucleotides are synthesized chemically, where nucleotides are added stepwise, resulting in a growing chain. Each nucleotide is subjected to a series of chemical reactions to create a stable component allowing the chain to grow. 

The two types of oligonucleotide manufacturing are solid-phase and liquid-phase synthesis. Solid-phase oligonucleotide synthesis is carried out on a solid, insoluble support, such as polystyrene beads, packed in columns that enable all reagents and solvents to pass through freely.

In liquid-phase synthesis, the oligonucleotides are grown on soluble polymeric support within a homogeneous media; the polymer-bound product is commonly recovered from the reaction mixture by precipitation, thus allowing the rapid elimination of excess reagent and soluble by-products.  

Solid-phase synthesis allows high-throughput synthesis and purification, while liquid-phase synthesis takes longer. However, liquid-phase synthesis has the advantage that it can be performed on a larger scale and is typically less expensive than solid-phase synthesis. Once the desired oligonucleotide has been synthesized, the material can be passed to the next processing steps, including purification, concentration and, commonly, lyophilization.
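
The stepwise nature of the chemistry also explains why purification is so demanding. As a rough back-of-envelope calculation (ours, not a figure from the symposium): if each coupling step succeeds with probability p, only about p^(n−1) of chains reach the full n-mer length, so truncated by-products accumulate quickly as sequences get longer:

```python
def full_length_yield(n_mer, coupling_efficiency):
    """Approximate fraction of chains reaching full length after
    (n_mer - 1) sequential couplings, each succeeding independently."""
    return coupling_efficiency ** (n_mer - 1)

# Even at 99% per-step efficiency, impurities grow with chain length
print(f"20-mer:  {full_length_yield(20, 0.99):.1%}")   # ~82.6% full-length
print(f"100-mer: {full_length_yield(100, 0.99):.1%}")  # ~37.0% full-length
```

Everything that is not full-length must then be separated out downstream, which is exactly where the extensive and expensive purification steps described below come in.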

What are the main challenges in the process? 

Oligonucleotide manufacturing is a complex process with many limitations, especially in scalability. The major problems researchers face are currently due to high expenses regarding the raw materials for oligonucleotide synthesis, a lack of funding for oligonucleotide therapies, and a shortage of skilled resources in the oligonucleotide synthesis field. These problems create substantial bottlenecks in the research required for therapeutic oligonucleotides and, ultimately, the clinical use of these therapies. 

Key takeaways on the manufacturing of oligonucleotides  

  • Moving towards liquid-phase oligonucleotide synthesis. Solid-phase oligonucleotide synthesis is a great tool for rapidly making lots of oligos in the lab. However, it has drawbacks when manufacturing hundreds of kilograms or even multi-ton quantities per year, which might be the case for emerging oligonucleotide products targeting more common diseases3.

    The major problems include:
    • As the oligo grows, the space for fresh nucleotides to diffuse and react gets tight, leading to incomplete couplings. This results in an altered sequence of monomers and incorrect genetic information in the final product, which must be removed by extensive and expensive processes.
    • It is hard to scale up the solid supports (the insoluble particles to which the oligonucleotide is bound during synthesis).
    • The synthesis and purification steps generate large amounts of organic and aqueous waste.

  • Liquid-phase synthesis stands as a promising approach to increase the yield of the overall process while allowing the production of large amounts of oligonucleotides in, potentially, a more sustainable manner4.

  • New alternatives to current purification methods are under investigation. Promising approaches to simplifying the purification steps show good results in the investigational phase5. Examples are membrane-sieving technology and biocatalytic processes used for phase separation. In the biocatalytic process, oligonucleotides are synthesized in a single operation, with fewer impurities and by-products, and in aqueous media. All these are promising features that target the current limitations of existing synthesis methods3.

  • New approaches come with new challenges. The development of novel and alternative technologies offers opportunities to address some of the limitations of solid-phase synthesis while also creating new challenges. For instance, using nanofiltration membranes to support the synthesis of oligonucleotides in liquid phase can present issues such as membrane stability and fouling. Another concern with the enzymatic approach is the availability of raw material of the right purity.
    If we consider the bigger picture, another novel approach in the pharmaceutical industry is the adoption of digital manufacturing technologies. However, this up-and-coming tool may come with its own challenges, including a lack of pharmaceutical manufacturing expertise and high upfront investment costs.

  • Raw material suppliers are already working to close the gap. Strategies to reduce the prices of chemicals and deliver sustainable solutions are already underway. For instance, Honeywell US, a major supplier of the raw materials required for oligonucleotide production, recycles solvents and assigns dedicated chemical drums to individual businesses to avoid cross-contamination.

Big wins for early pioneers 

At CDP, we see every challenge as an opportunity, and we are pleased that governments and large industries have already recognized these problems. Major efforts to accelerate research in the UK have been launched, funded not only by governmental innovation agencies but also by pharmaceutical companies. In addition, the 18 oligonucleotide therapies already approved by the US Food and Drug Administration (FDA) for clinical use are leading the way6.

There is a need for rapid adoption of next-generation processes that reduce risk, cut costs and save time while enabling on-demand therapies for every patient. However, regulatory standards in this industry are yet to be established. The risk around safety and efficacy remains a significant concern: How do we ensure we have the right sequence in each molecule? How do these molecules behave for a specific treatment? And what is the risk for the patient? These are just a few questions that still need to be addressed.

The event at CPI highlighted the importance of bringing experts together to shape the path and accelerate innovation. Understanding the challenges in the oligonucleotide space and planning around them will allow us to drive successful manufacturing at scale. The moment to build the future is now!


References 
  1. Kole R, Krainer AR, Altman S. Nat Rev Drug Discov. 2012 Jan 20;11(2):125-40. doi: 10.1038/nrd3625. 
  2. Allied Market Research, Oligonucleotide Synthesis Market report, Code A08356, July 2021  
  3. Sarah Lovelock, “Biocatalytic approaches to therapeutic oligonucleotide manufacture” in “Enzyme Engineering XXVI”, Andy Bommarius, Georgia Institute of Technology, USA; Vesna Mitchell, Codexis, USA; Doug Fuerst, GSK, USA Eds, ECI Symposium Series, (2022). https://dc.engconfintl.org/enzyme_xxvi/37. Abstract: https://dc.engconfintl.org/cgi/viewcontent.cgi?filename=0&article=1034&context=enzyme_xxvi&type=additional  
  4. J. Org. Chem. 2021, 86, 1, 49–61 Publication Date: November 30, 2020 https://doi.org/10.1021/acs.joc.0c02291 
  5. Dousis A, Ravichandran K, Hobert EM, Moore MJ, Rabideau AE. Nat Biotechnol. 2023 Apr;41(4):560-568. doi: 10.1038/s41587-022-01525-6. 
  6. Martin Egli, Muthiah Manoharan, Nucleic Acids Research, Volume 51, Issue 6, 11 April 2023, Pages 2529–2573.  
Are we there yet? An honest progress report on our environmental sustainability
By Cambridge Design Partnership

Are we there yet? An honest progress report on our environmental sustainability

“We’re all on a mission to achieve sustainability, working together to build a better business for people and planet”. While it may be true, statements like this don’t offer much insight into what we’re actually doing about our environmental impact. Instead, we’d like to provide an honest assessment of where we are now, what actions we’ve taken so far, and where we’re focusing our efforts in the future.

Sustainability communications are often full of cliché. In their excellent research report ‘Words that work’, creative communications consultancy Radley Yeldar analyzed the websites of 50 of the Forbes 100 most valuable brands and found the same words and phrases repeatedly cropping up. One of the most popular was the notion of a ‘sustainability journey’.

We can see how this happens – in fact, in the first draft of this article, we followed this same path. We want to talk about the progress we’ve made, which we’re proud of, but we know we’ve got a long way to go. We’ve got plans that we want to share – how do we communicate this process while it’s happening? There’s an obvious metaphor!

Part of the reason brands lapse into cliché, says Radley Yeldar, is fear of criticism if they’re brutally honest. So, we’ll try to take their advice, and be brave. Here goes.

An honest assessment

CDP is an employee-owned company. A little over a year ago, a group of employee-owners, supported by the management team, started an initiative to measure our performance against the B Impact Assessment, a widely used framework for all-round sustainability impact. Overall, we were very happy with how we measured up – many of the policies, actions and outcomes the assessment checks for were already established.

However, one of the reasons to go through this process was to identify any gaps in our performance. There was one area we decided to focus on, because frankly it was a little behind many other parts of the assessment – our work to improve our environmental sustainability.

What makes us want to improve?

Beyond a desire to have the most positive impact we can, there were three compelling reasons for us to take action:

  1. We’re delighted that more and more of the global brands we work with are committing to ambitious environmental sustainability targets – we want to help them achieve these goals and give them the confidence that we are just as committed to having a positive environmental impact.
  2. We work hard to reflect the needs and priorities of our employee-owners – our only shareholder and biggest asset. Surveys and engagement events have made clear that environmental sustainability is important to them.
  3. We have recently transitioned to a ‘large’ company under UK law, which entails new reporting requirements – the perfect time, therefore, to embed new measurement and reporting systems across the company.

Making a change through our client work

There are two ways that we can have an impact on the environment – through our business operations, and through the innovation, design, and development work we do on behalf of our clients.

Whilst we feel it’s important to minimize the environmental impact of our own operations, helping our clients to ‘improve lives through innovation’ (our purpose) allows us to contribute to environmental and social benefits at a scale well beyond that which we can achieve alone. As an example, a quick calculation showed that the annual production of a particular dry powder inhaler – a typical project we might deliver for a client – was responsible for more than 50 times our annual carbon footprint. Or, put another way, if we helped a client to reduce the carbon impact of that product by just 2%, we would save the equivalent of CDP’s annual carbon footprint.
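
The arithmetic behind that comparison is simple to verify, with all quantities normalized to CDP’s annual footprint (the 50x figure is the one quoted above; everything else follows from it):

```python
# Normalize everything to CDP's annual carbon footprint = 1 unit
cdp_footprint = 1.0
product_footprint = 50 * cdp_footprint  # inhaler production, >50x CDP's footprint
reduction = 0.02                        # a 2% design improvement

saving = reduction * product_footprint
print(saving >= cdp_footprint)  # True: the 2% saving matches CDP's entire footprint
```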

Recognizing this, we’ve invested in growing our capability in sustainability and cleantech, helping our clients reduce their environmental impact and develop new technologies, including:

  • Developing packaging design and sustainability guidelines for one of the world’s largest consumer packaged goods companies
  • Helping multiple blue chip clients transition from fossil-fuel-derived plastic packaging to alternatives such as paper. Highlights include a patented, first-of-its-kind, single-mold paper bottle for Pulpex
  • Performing a life cycle assessment to benchmark the environmental impacts of a connected autoinjector, and using this to drive design changes that minimize these impacts
  • Designing and developing a pop-up solar car park and electric vehicle charging hub for 3ti Energy Hubs, which won Best New Product at The Electric Vehicle Innovation & Excellence Awards (EVIEs)
  • Winning a hackathon run by Cambridge Institute for Sustainability Leadership (CISL) and British Antarctic Survey (BAS) to help BAS achieve net zero at their Rothera research station in Antarctica

Changes in our own operations

In the last year, we’ve also made significant progress on how our business operations impact the environment; much of this was enabled through moving to a new purpose-built facility, which involved over three years of rigorous planning and attention to detail:

Net zero HQ

Our new UK headquarters at Bourn Quarter is built to be net zero over its lifetime. It doesn’t rely on fossil fuels for heating and is designed to high standards of energy efficiency. On-site power generation includes 1,500 m² of rooftop solar panels across our Innovation Centre and Pilot Production Centre buildings – that’s an area larger than five tennis courts!

Supply network with shared values

Recognizing that much of our impact occurs through our suppliers, we’re starting to factor environmental impact into our supplier selection. In the last year, we’ve brought in Wilson Vale as our catering partner – their central operations are certified carbon neutral, and at Bourn Quarter they serve seasonal food and take steps to minimize food waste. They calculate how many people are typically on-site on certain days and incorporate any leftovers into the following day’s meals – for example, as an option in the salad bar.

Measuring what matters

Our science and engineering teams know that accurate data is crucial to optimizing any process. So, we’ve set up systems to monitor our energy use, carbon emissions, and waste – the areas of greatest impact from our operations. Electricity consumption data from our first few months in Bourn Quarter will allow us to optimize our heating and lighting usage. We’re also collecting data on our recycling, food and general waste streams, to generate insights that will support future improvements.

Awareness and engagement

We’ve worked hard to bring our entire organization with us, so that everyone feels ready and empowered to help identify and solve problems. This type of unified effort reflects the culture of our company, rather than a passion project for a small group of champions working in isolation.

We’re achieving this through regular all-company ‘town hall’ updates, interactive ‘lunch and learns’, and immersive Climate Fresk sessions – three- to four-hour workshops which explore the fundamental science behind climate change.

Are we there yet?

Whilst we’re proud of what we’ve achieved so far, it’s just the start of an ongoing process to manage and improve our environmental impact, and we’ve got a lot more work to do! As Peter Drucker famously put it, “you can’t improve what you don’t measure” – quantifying our carbon, waste and water impact is the foundation for both transparent reporting and further progress. We’re looking forward to using the measurement and analysis systems we’ve established to benchmark our performance and assess the effect of improvements we make. We plan to publish our first impact report later this year – and we’ll be aiming for openness, honesty, and a minimum of sustainability cliché!

If you have similar ambitions and would like to discuss this in more detail – particularly if you’re close to Cambridge (UK) or Raleigh, North Carolina (USA) – please get in contact.

Designing more sustainable electronics
By Cambridge Design Partnership

Designing more sustainable electronics

From phones to laptops, home devices to watches, electronic devices – particularly smart devices – have become part of people’s lives, enabling better communication and access to information and making their day-to-day easier.

But the increasing adoption of technology comes at an environmental cost. Electronic devices often have a significant carbon footprint because of the energy-intensive processes needed to produce printed circuit boards (PCBs) and integrated circuits.

Electronics production relies on mining and extracting dozens of different materials, including critical raw materials (economically important materials at high risk of supply shortage, such as lithium or titanium). Extracting these materials has a range of sustainability impacts, including the leakage of toxic chemicals such as cyanide into the environment, high levels of water use, and human rights abuses in the case of ‘conflict minerals’ such as gold and tantalum.

Waste electronics, or e-waste, is the fastest-growing waste stream in the world, with over 53 million tonnes produced in 2019. Most e-waste is disposed of incorrectly, ending up in waste dumps in developing countries. Hazardous chemicals, such as lead or mercury, that may be present in electronic components can leak into the environment, harming local ecosystems and damaging the health of people who live and work around the dumps.

Product sustainability has focused on the circular economy, particularly recycling. But there are fundamental limits to the impact recycling can have on electronics. Only 17% of e-waste is collected for recycling and, even if it’s collected, recovering materials from e-waste is particularly challenging.

Electronics contain trace amounts of rare metals, which are complex and expensive to separate. Only the most abundant materials, such as copper and gold, can be economically retrieved during e-waste recycling, and even if all e-waste was recycled in this way, the material recovered still wouldn’t be enough to meet the growing demands of the industry.

One way to tackle the environmental challenges presented by electronics is to remove the need for them in the first place, for example by detecting a temperature change using a color-changing chemical rather than a sensor. But, in some instances, electronics are necessary, so how can designers reduce the impact of the products they create?

Our sustainability team assessed a range of technologies and design techniques to determine their potential for reducing the environmental impact of electronic products and how difficult they are to implement. This article outlines a few approaches we’ve used in recent projects at CDP.

Reducing complexity through connectivity

One of the best ways to reduce an electronic device’s environmental impact is by minimizing the electronics’ complexity, thereby reducing the number of integrated circuits needed as well as the surrounding passive components (resistors, capacitors and so on), connecting tracks, and PCB area. 

An easy, effective way to do this is by pairing a product with a user’s existing device to provide the smart capability. Methods range from a simple QR code or NFC chip to a Bluetooth connection for transferring more complex data. 

As well as reducing the electronics in the product, this allows for a degree of futureproofing, as software updates can be used to keep the product up to date. This idea isn’t new but is starting to be used more in applications from smart packaging to medical devices. 

Important to note: Behind many of these software solutions are large data centers that need powering and should be considered in the product’s environmental impact.

Informed decision-making: Life Cycle Assessment (LCA)

Designers can optimize component choices and circuit designs during detailed design to reduce the overall impact of a product.

We recently used LCA to estimate the additional carbon footprint of adding an electronic module to a medical device. This step allowed our team to identify where to focus on reducing the impact of the design, such as replacing integrated circuits with a solution based on lower-impact passive components and optimizing the layout to minimize the total area of PCB required.

We identified several solutions that together had the potential to reduce the total carbon footprint of the product by up to 25% without compromising functionality. In many cases, this optimization also generates cost savings.
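The kind of LCA screening described above can be sketched as a comparison between two bills of materials. All component masses, emission factors, and the resulting saving below are illustrative placeholders, not figures from our assessment:

```python
# Sketch of LCA-style design screening: estimate the footprint of a
# circuit from its bill of materials, then compare design alternatives.
# Masses and emission factors are illustrative placeholders only.

def footprint_g(bom):
    """Total footprint in g CO2e from (mass in g, kg CO2e per kg) pairs."""
    return sum(mass * factor for mass, factor in bom.values())

# Baseline design: IC-heavy, larger board area.
baseline = {
    "PCB laminate": (12.0, 30.0),
    "ICs":          (2.0, 400.0),
    "Passives":     (2.0, 50.0),
}

# Alternative: one IC replaced by lower-impact passives, layout shrunk.
optimized = {
    "PCB laminate": (9.0, 30.0),
    "ICs":          (1.2, 400.0),
    "Passives":     (3.0, 50.0),
}

saving = 1 - footprint_g(optimized) / footprint_g(baseline)
print(f"estimated reduction: {100 * saving:.0f}%")
```

The value of even a rough tally like this is that it ranks where design effort pays off; a real assessment uses verified emission factors and covers the full life cycle.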

Optimizing electronics through additive manufacturing

Over the past two decades, additive manufacturing (such as 3D printing) has seen a surge in use in mechanical prototyping and manufacture, and its applications in the electronics sector are now starting to grow. In the context of PCBs, additive manufacturing refers to selectively adding conductive material to the areas required, as opposed to a more traditional approach which starts with a layer of copper and selectively etches away the areas where it isn’t needed.

These technologies can improve a product’s carbon footprint through reduced material usage and less energy-intensive manufacturing processes. A report published by the ECOtronics project found, “Changing from subtractive manufacturing (etching) to additive manufacturing (printing) has the potential to reduce environmental impacts by more than 50% across all impact categories.” 

One additive manufacturing method is laser direct structuring (LDS), which allows you to construct circuits on the surface of device components. With this approach, you can remove the PCB entirely, dramatically cutting down on the material required. 

These technologies present opportunities to fit electronics into new form factors, print onto a wide array of rigid or flexible substrates (the non-conductive part of the circuit board the metal circuit is added to) and increase the customizability of the design, all while reducing the product’s environmental impact.

As we’ve highlighted before, sustainability initiatives should always consider context, which is vital for electronics. In the absence of cost-effective recycling processes, designers must prioritize approaches that reduce the materials and energy required to produce electronics. As electronics continue to play a leading role in our lives, future designs should reduce our reliance on critical raw materials and consider how circular approaches to design can extend product lifetimes and prevent harm to people and the environment.

By Cambridge Design Partnership

How to boil your egg perfectly every time – according to simulation

Search ‘how to boil an egg’ on Google, and you get over three billion results, some telling you to put the egg in cold water after boiling to preserve the runny yolk. Intrigued, we decided to investigate the science behind this advice.

Rather than heading straight to our lab for experimentation, we used computer simulation to model the movement of heat through the egg and the surrounding fluid. Simulation lets us predict temperatures at times and locations that would be impractical or expensive to measure in physical experiments.

Modeling the heat flow in a boiling egg is a surprisingly tricky problem. An egg consists of a solid shell holding the white and yolk, which start in a liquid state but solidify as cooking continues. And because eggs are natural products, their exact properties and sizes vary.

To simplify the problem, we found technical publications that describe the average dimensions and thermal properties of the shell, white, and yolk for a typical egg. We defined these properties at a temperature of 60°C, which is around the point the yolk starts to solidify. Using computer-aided design (CAD) software, we created the geometry of the egg and defined a body of fluid to surround it. This fluid body represents the boiling water in a saucepan during the first cooking stage. Afterward, it can be reused to mimic cool-down in air or in a bowl of 10°C cold water. In all cases, the eggs start the process at room temperature.

We ran the simulation in Ansys Fluent. The software was originally developed for problems such as the flow of air over aircraft or heat transfer in a chemical plant, but it applies equally well to domestic puzzles such as the humble boiled egg. To let the simulation run quickly on an ordinary computer, we took advantage of the fact that an egg is a body of revolution – it looks the same however it’s rotated around its axis. This lets us model it as an axisymmetric body that the computer treats as two-dimensional, reducing the number of calculations and giving us the answer more quickly and cheaply than simulating the full three-dimensional shape.
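The axisymmetric Fluent model is well beyond a code snippet, but the underlying physics – transient heat conduction – can be illustrated with a much cruder one-dimensional finite-difference sketch along the conduction path from shell to yolk. The diffusivity, path length, and boundary conditions are rough placeholder values, and a flat slab underestimates how quickly the real curved geometry heats the center:

```python
# 1D explicit finite-difference sketch of heat conduction into an egg.
# A deliberate simplification of the axisymmetric model: a slab of
# egg-like material with the surface held at 100 C by boiling water.
# Diffusivity and path length are order-of-magnitude placeholders.

alpha = 1.4e-7      # thermal diffusivity of egg white, m^2/s (approx.)
radius = 0.015      # effective conduction path, surface to center, m
n = 50              # grid points
dx = radius / (n - 1)
dt = 0.4 * dx * dx / alpha   # time step, within the explicit stability limit

T = [20.0] * n      # initial temperature, C (room temperature)
T[0] = 100.0        # boiling-water boundary at the surface

t, t_end = 0.0, 6 * 60.0     # simulate six minutes of boiling
while t < t_end:
    Tn = T[:]
    for i in range(1, n - 1):
        Tn[i] = T[i] + alpha * dt / dx**2 * (T[i + 1] - 2 * T[i] + T[i - 1])
    Tn[-1] = Tn[-2]  # insulated (symmetry) condition at the center
    T, t = Tn, t + dt

print(f"surface: {T[0]:.0f} C, center: {T[-1]:.0f} C after {t_end/60:.0f} min")
```

Even this crude model reproduces the qualitative result: after six minutes the outer material sits near the water temperature while the center lags far behind it.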

As an example of the simulation results, Figure 1 shows the temperature distribution on a slice along the egg’s axis after cooking in boiling water for six minutes. The material towards the outside has heated up close to the temperature of the water, but the central region corresponding to the yolk is still around 50°C – cool enough for a runny yolk.

Figure 1: Temperature distribution on a slice across the egg after six minutes of immersion in boiling water.

Figure 2 shows a side-by-side comparison of subsequently cooling the egg in air or in 10°C water for five minutes (our estimate of the time it takes to finish eating a first dippy egg and move on to the second). When cooled in air, the central region of the egg continues to heat up, reaching 70°C and removing any prospect of a runny yolk, even though the outer region and shell have cooled. In contrast, after cooling in water, the central region stays at around 50°C while the shell has dropped close to 10°C. Leaving your perfect dippy egg in air risks ruining the runny yolk – but cooling it in water can save it.


Figure 2: Temperature distribution on a slice through the egg following cooking and five minutes of cooling in (a) air and (b) water.

As well as modeling the overall temperature in the egg, we extracted the data for two specific points – at the center and the edge of the egg – and plotted them on a graph (Figure 3). The data shows that the yolk’s temperature lags the temperature at the shell. This is because the thermal diffusivity of the white and yolk – a measure of how quickly heat can move through a material – is relatively low. The yolk takes a while to heat up, but once it does, it keeps cooking, absorbing heat from the rest of the egg, and it’s slow to respond to changes in the surrounding water (or air). The temperature just inside the shell responds much more quickly, since the path the heat must travel from the surrounding fluid is considerably shorter and the thermal diffusivity of the shell is markedly higher.

Figure 3: Temperature profiles with time at the center point of the yolk (circles) and adjacent to the shell (crosses).
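Thermal diffusivity is defined as α = k/(ρc_p): thermal conductivity divided by density and specific heat capacity. A quick comparison with representative property values (illustrative figures for a water-rich egg interior and a calcite-like shell, not the published data used in our model) shows why the yolk lags:

```python
# Thermal diffusivity alpha = k / (rho * c_p): how fast heat spreads.
# Property values are illustrative (water-rich egg interior vs a
# calcite-like shell), not the published data used in our model.

def diffusivity(k, rho, cp):
    """k in W/(m K), rho in kg/m^3, cp in J/(kg K) -> alpha in m^2/s."""
    return k / (rho * cp)

alpha_white = diffusivity(k=0.56, rho=1040, cp=3800)   # ~1.4e-7 m^2/s
alpha_shell = diffusivity(k=3.0,  rho=2700, cp=850)    # ~1.3e-6 m^2/s

# A characteristic response time scales as L^2 / alpha: the ~15 mm
# path to the yolk responds far more slowly than the ~0.4 mm shell.
t_yolk  = 0.015**2 / alpha_white   # seconds
t_shell = 0.0004**2 / alpha_shell  # seconds

print(f"yolk time scale:  {t_yolk / 60:.0f} min")
print(f"shell time scale: {t_shell:.2f} s")
```

The combination of lower diffusivity and a much longer conduction path means the yolk responds thousands of times more slowly than the shell surface.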

With the aid of some considered simplifications, we think this simulation analysis has proven the cookery advice right: cooling eggs down in cold water really does preserve the runny yolk. However, whenever you analyze a problem for the first time, it’s important to compare results against an experimental benchmark to confirm the realism of the assumptions and simplifications in the simulation. We took three eggs and boiled each for six minutes in a lab beaker. One was opened straight away, and the other two after cooling for five minutes in cold water or in air. As predicted by our computer simulation, the yolks ranged from runny to fully cooked. And the best thing about this experiment? Everyone got an egg cooked precisely to their liking at the end.

Our engineers and designers are enthusiastic about using science to understand and improve the processes and products we use every day. Get in touch with emma.lindsey@cambridge-design.com to learn more about our work in kitchen technology or simulation.

By Cambridge Design Partnership

Reducing the carbon footprint and plastic waste of LFTs: Evidence-based opportunities

Billions of lateral flow tests have been used worldwide during the COVID-19 pandemic – over two billion have been provided in the UK alone. Debate has raged on social media about why the tests need to use so much single-use plastic and how they could be made more ‘sustainable’. The test strip caseworks is a particular source of dismay – why so much plastic to house such a tiny test strip?

With the UK government ending the free distribution of lateral flow tests for the general public – citing a transition from emergency response to longer-term management of the pandemic – now is the ideal time to look more closely at the sustainability of these lateral flow tests, and to seek the data to demystify some of the emotional assumptions being made. 

Familiarity with lateral flow tests has certainly increased, as has confidence in their clinical performance. It’s expected that lateral flow devices will feature more in our daily lives post-pandemic – not just for COVID-19 and pregnancy testing but to diagnose diseases such as seasonal influenza and sexually transmitted infections – all from the comfort of home.

We’ve carried out a high-level assessment to quantify the approximate environmental impact of lateral flow tests and identify evidence-based suggestions for improving their environmental sustainability.

Why do COVID-19 lateral flow tests contain lots of single-use plastic in the first place? 

The emergence of COVID-19 was a global emergency, and vast quantities of lateral flow tests were needed urgently. Once developers had the right immunoassay chemistry to detect the virus (SARS-CoV-2), it had to be implemented in a low-cost, low-risk device with a mature supply chain – using proven, readily available materials that wouldn’t compromise analytical or clinical performance.

This meant using existing plastic casework designs to retain and protect the nitrocellulose test strip. Plastic is robust, low cost, lightweight, easy to transport, and easily printed for QR codes and LOT numbers. Critically, it’s a consistent material proven for the highest volume manufacturing and won’t interfere with the immunoassay chemistry. 

From a performance, cost, and manufacturing perspective, redesigning the product with new materials would have been high risk. Material changes would also have required significant R&D spend and new capital equipment, as well as the additional cost and effort of demonstrating equivalence and achieving regulatory approval – risking the ability to provide sufficient numbers of high-quality tests, at speed, during the pandemic.

Our results: The sustainability of lateral flow tests 

But how serious an environmental impact do these tests have? To find out, we broke down a test into its constituent components and weighed them to calculate the approximate environmental impact, using standard emissions factors to calculate the carbon footprint of a single test.

We focused on carbon footprint (the carbon dioxide and other greenhouse gases emitted during manufacture, transport, and disposal of the tests) and plastic waste (waste that would persist indefinitely if released into the environment) – the two issues that have attracted the most attention around lateral flow tests. A more comprehensive study should consider a broader range of environmental impacts, for example, the use of scarce resources and emission of other pollutants to avoid unintended consequences of any product changes. 

Our results reveal: 

  • The components needed to conduct the test account for around half of the carbon footprint and around two-thirds of the plastic waste. Packaging makes up most of the rest – as is often the case, a surprisingly high proportion of the total environmental impact. 
  • The test strip caseworks, which attracts the most comment online, is responsible for around 30% of the carbon footprint and 40% of the plastic waste. While it’s the most significant single contributor to the environmental impacts we evaluated, the large number of other small parts is also significant. Focusing on the caseworks therefore might not be the best strategy for improving the sustainability of the tests overall. 
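The method behind these shares can be illustrated with a simple tally: weigh each component, apply an emission factor, and compute each part’s percentage share. All masses and factors below are illustrative placeholders rather than our measured values:

```python
# Rough illustration of the teardown tally behind a hotspot analysis.
# All masses and emission factors are illustrative placeholders, not
# the measured values from our study.

# (name, mass in g, emission factor in kg CO2e per kg, is_plastic)
parts = [
    ("Caseworks",        7.0, 3.5, True),
    ("Extraction vial",  1.5, 3.5, True),
    ("Swab",             1.0, 3.0, True),
    ("Foil pouch",       2.0, 4.0, False),
    ("Instructions",     4.0, 1.0, False),
    ("Cardboard sleeve", 10.0, 0.8, False),
]

carbon = {n: m * f for n, m, f, _ in parts}      # g CO2e per component
plastic = {n: m for n, m, _, p in parts if p}    # g plastic per component

def share(d, key):
    """Percentage share of one entry within a dict of totals."""
    return 100 * d[key] / sum(d.values())

# Rank components by carbon contribution to find the hotspots.
for name in sorted(carbon, key=carbon.get, reverse=True):
    print(f"{name:17s} {share(carbon, name):5.1f}% of carbon footprint")
```

Ranking the shares this way, rather than reacting to the most visible part, is what revealed that the many small components matter alongside the caseworks.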

Lateral flow tests: a minor piece of UK healthcare’s environmental impact 

To put these numbers into context, we can compare the environmental impact of the two billion COVID-19 lateral flow tests distributed in the UK with the UK healthcare system’s overall environmental impact. We estimate the UK’s lateral flow tests have a carbon footprint equivalent to around 0.5% of the total NHS carbon footprint – not a trivial amount, but far from the largest contributor to the impact of the UK health system.

It’s also worth considering the positive environmental impact of a user-administered test on the health system. Conducting a test at home can eliminate the need for an individual to visit a test site, GP’s surgery, or hospital (assuming the clinical performance of the lateral flow test is adequate). Based on estimates from the Sustainable Healthcare Coalition, one lateral flow test has around 5% of the carbon footprint of a single GP appointment and produces a similarly low percentage of non-degradable (plastic) waste. 

And that’s before we consider travel. We estimate one lateral flow test has the same carbon footprint as driving 350 meters in an average UK car. So, if you’re driving yourself to a test site or GP surgery some distance away, at-home lateral flow tests compare even more favorably.
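As a back-of-envelope check of that comparison, multiplying the quoted distance by a typical published figure for an average UK car (an assumed value here, around 170 g CO2e per km) implies the per-test footprint:

```python
# Back-of-envelope check of the driving comparison. The car emission
# factor is an assumed typical figure for an average UK car, and the
# resulting per-test footprint is the implied value, not our measured one.

car_g_per_km = 170     # g CO2e per km, assumed average UK car
distance_km = 0.35     # the 350 meters quoted above

test_footprint_g = car_g_per_km * distance_km
print(f"implied footprint per test: {test_footprint_g:.0f} g CO2e")
```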

If a lateral flow test prevents an individual from transmitting COVID-19 to a vulnerable person, there’s a public health benefit – as well as an environmental benefit – to keeping people out of the hospital. We can all see the discarded waste from home tests, but the less visible impact from energy- and material-intensive medical interventions is often significantly higher. 

These approximate figures demonstrate why building an evidence base is vital during product development targeting sustainability objectives – because the results can be unexpected and non-intuitive.  

Quick ways to optimize today’s lateral flow tests  

Just because waste from lateral flow tests might not be the most urgent sustainability issue for UK healthcare, that doesn’t mean we can’t and shouldn’t do something about it.  

We used the ‘avoid/shift/improve’ model to find potential quick wins for lateral flow tests. Together, these could reduce the carbon footprint of each test by nearly a third and the plastic waste by almost a quarter – without changing the fundamentals of how the test works. 

They include: 

  • Eliminate waste bags. There’s a case for quickly isolating contaminated waste (even though COVID-19 also spreads from infected individuals through the air), but the bags account for around 5% of the carbon footprint of the test. It’s not clear how widely they’re used in a domestic setting, so there may be a risk-based justification for not including them in the test kit. 
  • Package all the test strips in a single foil pouch. Using a single re-sealable pouch to protect the tests from ambient humidity (rather than individually packing each test in a pouch with desiccant) is common in packs of lateral flow tests designed for use by healthcare professionals. However, once opened, the stability lifetime of the remaining tests is affected. 
  • Reduce the size of paper instructions. These are important for the effectiveness of the tests and are a regulatory requirement, but account for 5% of the carbon footprint of a test – could they be reduced in size? 
  • Eliminate the cardboard sleeve. This packaging isn’t essential to the safe and effective functioning of the test, and it seems likely that the functions it does provide could be achieved with less material.
  • Prefill the extraction tubes with buffer solution. This is already done in some test kits, although manufacturers need to be conscious of moisture loss and the effect on shelf life. However, the separate plastic vial used in the test kit we studied accounts for around 5% of the carbon footprint and plastic waste.
  • Increase the size of the pack from seven to ten tests. This would mean less package waste per individual test. Including ten tests in one pack instead of seven reduces the carbon footprint by around 5% (depending on how many other optimizations are done at the same time). Perhaps a pack of seven tests was originally designed to cover a week of daily testing – but is that how tests are being used in practice? 
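On how the individual savings combine to ‘nearly a third’: once one optimization is applied, later percentage savings act on a smaller remaining footprint, so the fractions multiply rather than simply add. The figures below are assumed values loosely based on the ~5% items above, not our detailed estimates:

```python
# Sketch of how individual savings combine. Each figure is an assumed
# fractional saving relative to the then-current design, loosely based
# on the ~5% items above; real interactions are more subtle.

savings = {
    "waste bag":       0.05,
    "single pouch":    0.07,
    "smaller leaflet": 0.05,
    "no sleeve":       0.06,
    "prefilled tube":  0.05,
    "pack of ten":     0.05,
}

remaining = 1.0
for change, s in savings.items():
    remaining *= 1 - s          # each saving applies to what's left

print(f"combined reduction: {100 * (1 - remaining):.0f}%")
```

This compounding is why the pack-size saving above is quoted as depending on which other optimizations are done at the same time.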

Redesign of the test strip caseworks 

Looking to the longer-term gets us into product redesign – creating a new generation of the product with sustainability in mind. Doing this can take significant investment since, for medical devices, it’s likely to require new regulatory approval, which is a lengthy and costly process.  

A popular idea circulating for lateral flow tests is to minimize the plastic test strip caseworks (without compromising its essential functions of providing a stable platform and protecting the nitrocellulose test strip). It might be possible to halve the caseworks mass and reduce the overall carbon footprint and plastic waste by 15–20%. This would require significant investment in R&D and production tooling, plus regulatory approval hoops to jump through – but it could be worthwhile if future demand for tests stays high.

Longer-term options 

If we consider that the world may require billions more lateral flow tests over the coming decade, a more comprehensive redesign becomes commercially viable. This could involve stripping the design back to the fundamental requirements for a lateral flow test – flowing a sample through the test strip in a way that is controlled and free from contamination. Current designs take advantage of established components to collect, buffer, and dose the sample – but, at this production volume, it may be worthwhile designing a system from the ground up that is optimized for cost, usability, performance, and sustainability. 

Sustainability as a brand differentiator  

It’s clear there’s scope to optimize lateral flow tests to reduce their environmental impact – and a systematic analysis reveals options beyond those that might jump out at users of the tests. But it’s essential to put the impact of lateral flow tests in the context of the wider healthcare system, to focus resources where they can deliver the most environmental benefit – and to recognize that, sometimes, the plastic waste people can see helps to avoid more serious but less visible consequences.

On the other hand, while visible plastic waste from lateral flow tests may not be the most pressing environmental issue facing the healthcare industry, it highlights the growing influence consumer opinion is likely to have as diagnosis and treatment shift from hospitals to homes. And as lateral flow tests become (in the UK, at least) a product people buy with their own money, choosing from a range of options, there may be a competitive advantage for businesses that take note and optimize their products for sustainability. 

Featuring analysis conducted by Katie Williams, Mechanical Engineer


References

  • Prime Minister sets out plan for living with COVID [Internet]. GOV.UK. 2022 [cited 1 April 2022]. Available from: https://www.gov.uk/government/news/prime-minister-sets-out-plan-for-living-with-covid
  • The Sustainable Healthcare Coalition. Care Pathways Calculator. [Internet]. Sustainable Healthcare Coalition. 2022 [cited 1 April 2022]. Available from: https://shcoalition.org/