Dr Chantelle Kiernan, scientific advisor, Life Sciences IDA Ireland, explains what medtech companies will need to flourish under new EU AI regulations.
The use of artificial intelligence (AI) is growing fast across all industries, but healthcare has seen a particular boom, with medical technology, devices, apps and informatics harnessing its benefits to improve patient care. AI can, for instance, interrogate large volumes of data to find patterns, while also helping to monitor patient care through remote devices and apps. Until recently there has been little in the way of regulation. That is now changing: two new EU regulations, the Clinical Trials Regulation and the Medical Device Regulation, have come into force, and the European Commission has proposed a dedicated regulation on AI, the Artificial Intelligence Act.
These regulations aim to tackle the possible risks AI poses to people, from a low-level transparency requirement when a user is talking to a chatbot, up to high-risk systems that could threaten the safety of individuals or society.
How the regulations affect medical device and app companies
The COVID-19 pandemic has accelerated the application of AI. For example, researchers at Northwestern University in the US developed an AI platform, DeepCOVID-XR, that detects COVID-19 by analysing X-ray images of the lungs. The machine-learning algorithm outperformed a team of specialised thoracic radiologists, spotting COVID-19 in X-rays about ten times faster.
As healthcare concerns matters of life and death, it is understandable that standards should be higher. Innovation in healthcare is fast-moving and constantly pushes at the boundaries of the possible. This can bring significant benefits for people with chronic or undiagnosed illness, but it has to be applied correctly.
The changes will have a global reach: AI developers around the world will need to comply with the new regulations if they wish to access the EU market, and face significant ramifications and fines if they do not meet the new obligations.
The proposed AI regulation aims to keep human oversight and ultimate control over AI systems by sorting applications into four risk categories (a simple illustrative sketch follows the list below):
- Minimal risk: free use of applications such as AI-enabled video games or spam filters.
- Limited risk: AI systems with specific transparency obligations, for example providers making clear that users are talking to a chatbot.
- High risk: AI used in ways that could pose a risk to health and safety, such as in critical infrastructure like transport, or in safety components of products, for example AI in robot-assisted surgery.
- Unacceptable risk: AI systems considered a clear threat to people's safety, livelihoods and rights, which will be banned.
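To make the four tiers concrete, here is a minimal, purely illustrative Python sketch of how a compliance team might label its own product portfolio against these categories. The tier names mirror the list above; the products are hypothetical examples, not classifications drawn from the regulation itself.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers of the proposed EU AI regulation."""
    MINIMAL = "minimal"            # e.g. AI-enabled video games, spam filters
    LIMITED = "limited"            # transparency obligations, e.g. chatbots
    HIGH = "high"                  # e.g. safety components in robot-assisted surgery
    UNACCEPTABLE = "unacceptable"  # clear threat to safety or rights: banned


# Hypothetical medtech product portfolio mapped to the tiers above.
portfolio = {
    "symptom-checker chatbot": RiskTier.LIMITED,
    "surgical-robot vision module": RiskTier.HIGH,
    "appointment-email spam filter": RiskTier.MINIMAL,
}

for product, tier in portfolio.items():
    print(f"{product}: {tier.value} risk")
```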
Medtech companies will need to show they have complied with a number of new obligations, such as carrying out adequate risk assessments, validating the quality of the data used to train their AI systems, ensuring the robustness and cybersecurity of those systems, and logging system activity so that results are traceable.
The regulations also require providers to keep monitoring the performance of their AI systems, to report any serious incidents or breaches, and to correct or recall any systems that do not conform.
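As an illustration of what the traceability obligation might look like in practice, the sketch below wraps a model so that every prediction is logged together with the model version and the input that produced it. This is a minimal, hypothetical example; the regulation does not prescribe any particular implementation, and all names here are invented.

```python
import json
import logging
import time
import uuid

# Write audit entries to a dedicated log file (hypothetical setup).
logging.basicConfig(filename="ai_audit.log", level=logging.INFO)


class AuditedModel:
    """Wraps any object with a predict() method and logs a traceable record per call."""

    def __init__(self, model, model_version: str):
        self.model = model
        self.model_version = model_version

    def predict(self, features: dict):
        record_id = str(uuid.uuid4())  # unique ID so a result can be traced later
        prediction = self.model.predict(features)
        # Log input, output, timestamp and model version for traceability.
        # (features and prediction are assumed to be JSON-serialisable here.)
        logging.info(json.dumps({
            "record_id": record_id,
            "timestamp": time.time(),
            "model_version": self.model_version,
            "features": features,
            "prediction": prediction,
        }))
        return prediction
```

A post-market monitoring process could then periodically compare the logged predictions against confirmed outcomes and trigger a report, correction or recall if performance degrades.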
Support companies will need
For innovation to flourish and deliver the next generation of healthtech that can help change lives, it is vital that the life sciences community works together to create a strong ethics culture. A strong research infrastructure with top-level support will be critical for companies that must abide by the new regulations.
In Ireland, work is already underway within the global cluster of multinational companies that have invested in the area and benefit from being part of the successful Irish ecosystem.
While not over-regulated, Ireland has a strong legal and regulatory landscape and Irish regulators across different sectors have a good reputation for excellence. With an exceptional level of collaboration between industry, academia, state agencies and regulatory authorities, Ireland ranks among the top 10 most innovative countries in Europe and top 15 globally.
Ireland also stands out as having the highest ratio of AI talent in the European Union, the result of its strong historical partnership with international technology leaders. Mapping of AI intensity places Ireland alongside five other countries (Finland, Cyprus, Luxembourg, Sweden and the Netherlands) leading the EU in attracting and developing AI talent, with Ireland's ratio far above the others in this group.
Such a high ratio could be linked to the many leading multinational technology companies with a base in Ireland. Strong networks, including IDA Ireland, which supports foreign direct investment, help drive this dynamic R&D sector. Ireland also has the highest number per capita of EurAI (European Association for Artificial Intelligence) fellows, supported by a large ecosystem of technology and scientific research centres, including Insight, one of the largest data analytics centres in Europe.
The country’s first National Research Ethics Committees, covering Clinical Trials of Investigational Medicinal Products (NREC-CT) and Clinical Investigations of Medical Devices (NREC-MD), have been established with responsibility for reviewing the ethics underpinning research proposals in these important areas of health research.
These new committees will review applications submitted through the National Office for Research Ethics Committees, streamlining the ethics review process in the areas under their remit. This will help maintain a strong infrastructure for health research, increasing the capability of organisations in the country to continue world-class research that leads to new medicines and further bolsters the national ecosystem.
The new EU legislation is vital if AI is to continue its contribution to healthcare and if the public is to trust it. However, the companies involved must grasp the nettle, and it is crucial that they are given the support they need to abide successfully by the new regulations.