by Jeremy Kooyman
To meet demand for the globally respected AAMI Human Factors for Medical Devices course, we worked with AAMI and Pure Insight to bring it to Europe for the second time in less than a year. Course leaders Bob North and Ed Israelski hosted the three-day course in London with delegates from global medtech companies, representing a range of disciplines including engineering, human factors/usability, project management, and regulatory planning. More information on the course content is available in a previous blog entry written by one of our Usability team members, Karl Hewson. Karl and other members of our Usability and Medical Device teams were in attendance to meet delegates and discuss the human factors and usability challenges associated with bringing devices to market.
Early in my mechanical engineering career I was fortunate enough to be challenged by several projects plagued with usability problems, and since then I’ve tried to remain cognisant of the challenges that user interfaces can present. Attending the Human Factors for Medical Devices course provided an opportunity to step outside the comfortable mechanical engineering realm of tolerance stacks and verification testing, and I’d like to share with you my biggest lesson.
One of my favourite aspects of mechanical engineering is the amount of data at your disposal. Thermal coefficients, spring rates, Young’s modulus: these are all pieces of information we can draw upon during the design phase to ensure that everything will go according to plan in the test lab. And in that respect, human factors and usability is no different. It’s just the mechanism by which you gather the data that changes.
Sure, there are reference books full of anthropometric information about hand size, force application ability by age bracket etc., and guidance for font size and viewing distance, but there’s a point where you just can’t rely on reference information. You need to run a test to confirm that you’re still heading in the right direction.
For mechanical engineering this is achieved through pilot testing, where you evaluate individual aspects of a design to make sure they’re going to function as intended. For human factors and usability, this would be a formative study, where you test usability aspects of either a partial or complete design with actual users to make sure that the assumptions you’ve made up until that point are valid. The outcome in both circumstances is renewed faith in your design moving forwards, and you can use this information to inform risk management activities, clarify primary operating functions, enable early-stage design changes, and explore the kind and extent of training necessary for users to operate a device. The difference lies in the nature of the data: pilot testing yields hard, unmovable quantitative data, while formative studies return softer, qualitative data and user profiles.
Later in the project, a similar split occurs when the design is validated: verification and validation testing and summative usability studies, with the same quantitative/qualitative divide, confirm that technical and usability requirements have been met.
Be it through environment, education, or experience, I’ve always viewed qualitative data with scepticism. It’s not a cold, hard number that can be measured repeatedly with a calibrated instrument, and the uncertainty surrounding it gives me pause. But it doesn’t bother the FDA. And that was arguably the single most important lesson I learned during the three-day course. You’re expected to weave your usability observations into a seamless argument in favour of the safety and efficacy of your medical device. The FDA isn’t after statistical power, p-values, or confidence intervals. They want to know that your device is safe and effective for your target users to use.
One of the course highlights is the Q&A sessions with Ron Kaye, leader of the ODE’s Human Factors Pre-Market Evaluation Team, where any attendee can grab the microphone and get some airtime with the FDA. During these sessions attendees often seek clarification and reassurance from Ron that their human factors plans are appropriate for their devices. Ron’s responses are peppered with phrases like “Well, it depends…” And from his perspective, it often does.
The best weapon against this ambiguity is the data from your formative and summative studies. You’re telling the FDA that “no, it doesn’t depend”, backed by the data and observations at your disposal.
You’re making an argument in your favour, and the more data you have, the better you’re able to explain critical task failures and obvious use difficulties, or, more importantly, demonstrate their absence.
So test early and test often. Formative studies, with a target of 8 users per user group, are relatively inexpensive compared with failing a late-stage summative study involving roughly double the number of participants.
We’ve also learned that updates to AAMI/IEC 62366 and the final version of the FDA’s Human Factors Guidance are expected in the near future, potentially as soon as May, with both documents currently out for final review.
Cambridge Design Partnership is sponsoring the Human Factors for Medical Devices course again in October 2015, along with our partners Pure Insight, who have helped plan and organise the past two European iterations of this course.
If you’d like to talk to us about human factors for medical devices, please contact our human factors/usability specialists or our medical device team: email@example.com, or call 01223 264428.
The next European AAMI Human Factors for Medical Devices course takes place 6–8 October 2015 in Dublin.