
Health protection is non-negotiable in the AI Act negotiations



A health-centric approach to the Artificial Intelligence (AI) Act is essential for the protection of the health and fundamental rights of European citizens, write Hannah van Kolfschooten and Janneke van Oirschot.

Hannah van Kolfschooten, LL.M. is a PhD Researcher at the Law Centre for Health and Life, University of Amsterdam, working on health AI and patients' rights.

Janneke van Oirschot, M.Sc. is a Research Officer working on AI and medicines at the independent non-profit organisation Health Action International (HAI).

The European Commission's proposal for an Artificial Intelligence (AI) Act has been the subject of heated debate since its publication in April 2021. Civil society organisations believe the proposal falls short on fundamental rights protection, industry is worried it will stifle innovation, and governments fear consequences for national security. We critique the AI Act for neglecting the risks health AI poses to patients' health and fundamental rights.

The 3,000 amendments to the Act tabled by political groups in the European Parliament say a lot about how controversial the regulation of AI really is. This summer, the Parliament's co-rapporteurs begin the negotiation process with compromise amendments. Our message to the MEPs who will need to vote on the amendments is the following: make health non-negotiable. A health-centric approach to the AI Act is essential for the protection of the health and fundamental rights of European citizens, especially the rights to access to healthcare, non-discrimination and privacy.

AI is the simulation of human intelligence by machines. AI systems are software-based technologies that use certain data-driven approaches to solve specific problems. What all AI systems have in common is that they recognise patterns in vast amounts of data.

AI in the health sector is not like AI in any other sector and deserves special consideration because (1) people's health is at stake, (2) people are in a vulnerable position when in need of healthcare, (3) the collection of health data has dramatically increased in recent times and (4) health data is historically plagued by bias. Because of these characteristics, health AI faces unique risks that need to be specifically addressed in the AI Act.

Take disease outbreak surveillance as an example. Many people with flu-like symptoms use Google for self-diagnosis. AI can use this data to monitor and predict the spread of infectious diseases. This can be useful for public health officials making decisions about infectious disease control and distributing care resources.

But how accurate are these AI systems when the model is based on subjective user data? Limited regulation of the quality of health AI will lead to mistrust in public health and healthcare, breeding hesitancy in access to healthcare. What is more, the increased use and sharing of health data threatens privacy and data protection rights.
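To make the accuracy concern concrete for the technically inclined reader, consider the toy sketch below. It is purely illustrative (all numbers are invented, and this is not any real surveillance system): it fits a simple least-squares line from weekly search volumes to flu cases. If search volumes spike for reasons unrelated to illness, such as news coverage or panic, the forecast inherits that noise.

    # A minimal sketch (invented numbers, not a real surveillance system):
    # predict weekly flu cases from flu-related search volumes with a
    # simple least-squares fit.

    searches = [120, 150, 310, 480, 620]  # weekly flu-related query counts (hypothetical)
    cases = [14, 18, 35, 52, 70]          # confirmed flu cases in the same weeks (hypothetical)

    n = len(searches)
    mean_x = sum(searches) / n
    mean_y = sum(cases) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(searches, cases)) / \
            sum((x - mean_x) ** 2 for x in searches)
    intercept = mean_y - slope * mean_x

    # Forecast next week's cases from next week's search volume. If people
    # search out of worry rather than illness, the model cannot tell the
    # difference -- the "subjective user data" problem described above.
    next_week_searches = 700
    print(f"predicted cases: {intercept + slope * next_week_searches:.0f}")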

Another example is the use of AI for medical diagnostics. AI can be used to identify skin cancer in images of skin lesions, after being trained on the basis of thousands of images of "healthy" and cancerous skin lesions. But what happens when the image datasets are non-representative, incomplete or of low quality?

Biases in the training data can lead to discrimination and individual injury, or even death. Racial bias in particular may lead to incorrect diagnoses and deepen existing socio-economic inequality, something that is not taken into account in current regulation of medical technology. Moreover, a lack of transparency and explainability threatens patients' rights to information and informed consent to medical treatment.
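How a skewed training set turns into unequal diagnoses can likewise be shown with a toy simulation. The sketch below uses entirely synthetic data (the "lesion score", the groups A and B, and the 95/5 split are invented for illustration): a simple threshold classifier is tuned on a dataset dominated by one group, then evaluated on both.

    # A minimal sketch (synthetic data, no real images or patients): a
    # threshold "classifier" tuned on a 95%-group-A dataset performs
    # measurably worse on the under-represented group B.

    import random
    random.seed(0)

    def sample(group, n):
        # Hypothetical "lesion score"; assume the boundary between healthy
        # and cancerous sits higher for group B than for group A.
        shift = 0.0 if group == "A" else 0.8
        return [(random.gauss(1.0 + shift, 0.5), 1) for _ in range(n)] + \
               [(random.gauss(0.0 + shift, 0.5), 0) for _ in range(n)]

    # Non-representative training set: 95% group A, 5% group B.
    train = sample("A", 950) + sample("B", 50)

    # "Training": pick the threshold that minimises errors on the skewed set.
    threshold = min((s for s, _ in train),
                    key=lambda t: sum((s > t) != bool(label) for s, label in train))

    def error_rate(data):
        return sum((s > threshold) != bool(label) for s, label in data) / len(data)

    print("error on group A:", error_rate(sample("A", 1000)))
    print("error on group B:", error_rate(sample("B", 1000)))  # noticeably higher

The classifier looks accurate on the majority group while failing far more often on the under-represented one, which is precisely how a non-representative dataset translates into missed or wrong diagnoses.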

These are just a couple of illustrations of the risks of AI use for health, one of the most popular sectors for AI deployment in the European Union. Yet the AI Act does not specifically address health AI and does not provide solutions for its key risks. It cannot be stressed enough that health must be prioritised when MEPs negotiate their amendments over the coming months, and some of the tabled amendments deserve particular support.

First, given the extensive risks involved, critical AI uses in health and healthcare should be marked as high-risk, which will guarantee more stringent regulatory requirements.

Second, high-risk AI should undergo a fundamental rights impact assessment that takes risks to human health into account. The technical documentation of health AI should also include an assessment of its risks for health, safety and fundamental rights.

Finally, AI systems that disadvantage groups based on health status should be prohibited entirely.

Equally, we call on MEPs to strongly oppose amendments that remove health AI from the current list of 'high-risk AI uses' or add additional requirements for AI systems to be marked as high-risk.

It’s excessive time to tackle a health-centric method to the AI Act. It’s price reiterating: well being safety is non-negotiable within the AI Act negotiations. 


