Greg Martin

The Social Contract

Updated: Jun 22, 2022



My definition of AI is that it's any computer system that replicates aspects of the human mind. Humans are able to follow instructions, recognise objects and phenomena, make sequential decisions aimed at a specific predetermined objective and, finally, set our own goals and objectives as conscious beings with free will (or at least the illusion of free will).


The first three of those "aspects of mind" are easily replicated by computers today. The last one, a computer that is "self-aware", doesn't exist yet but is certainly being worked on. The generic term for such computers is Artificial General Intelligence (AGI), and they have been the subject of stark warnings, from the likes of Stephen Hawking and Elon Musk, of an impending existential crisis. I'm reminded of the adage, "don't be afraid of the first AI that can pass the Turing test, be afraid of the first AI that pretends not to."

The promise of AI in healthcare is twofold. Firstly, AIs will relieve medical personnel of some of the cognitive load that currently translates into clinical error. And secondly, AIs could be used to address the substantial shortfall in medical human resources in low- and middle-income countries.


Supervised learning

Supervised learning is a form of machine learning that is already well established as an aid to doctors in a variety of clinical settings. Using an Artificial Neural Network (which, incidentally, is exactly what it sounds like: a configuration of interconnected nodes), AIs are able to recognise objects and phenomena in images. Most clinical disciplines include some form of decision making as a function of object recognition; radiology, ophthalmology, pathology, microbiology, dermatology, cardiology and neurology are the obvious examples. AIs are quickly demonstrating a higher level of diagnostic prowess than clinicians... and where they lag behind their human counterparts, they are quickly catching up and will no doubt soon take the lead.

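By way of illustration, here is a minimal sketch of supervised learning with a small artificial neural network. It uses scikit-learn's built-in handwritten-digit images as a stand-in for clinical images; the dataset and network size are chosen purely for illustration and are not drawn from any clinical system.

# A minimal sketch of supervised learning with an artificial neural network.
# The built-in digits dataset stands in for clinical images such as X-rays.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Labelled examples: 8x8 pixel images of handwritten digits plus their labels.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# A small feed-forward neural network (one hidden layer of interconnected nodes).
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)          # learn from the labelled examples

predictions = model.predict(X_test)  # recognise objects in unseen images
print("Accuracy:", accuracy_score(y_test, predictions))

The principle is the same whether the labelled examples are handwritten digits or chest X-rays: the network learns from examples that have already been labelled by humans, then applies what it has learned to new images.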

Reinforcement learning

The potential for AIs to make sequential decisions to optimise human health, or to respond in real time to changing clinical parameters, is rather exciting. This is best illustrated by self-driving vehicles. In order to train an AI to drive a car, i.e. navigate a complex set of circumstances, it has to learn a certain kind of "behaviour" through a series of rewards and punishments for its successes and failures. Through countless iterations, and with the objective of maximising the "reward function", the AI learns to make a series of decisions to accomplish a predetermined goal. One can imagine an AI monitoring the vital signs of a patient in Intensive Care and taking real-time decisions to adjust oxygen, fluids and medicines to optimise the patient's health.

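To make the idea concrete, here is a toy sketch of tabular Q-learning, one of the simplest reinforcement learning algorithms. The "patient", its oxygen dynamics, the states and the reward are all invented for illustration; a real ICU system would be vastly more complex and would need rigorous clinical validation.

# A toy sketch of reinforcement learning (tabular Q-learning).
# The "environment" is a deliberately simplified, hypothetical patient whose
# oxygen saturation drifts downward unless the agent adjusts the oxygen flow.
import random
import numpy as np

N_STATES = 11         # discretised saturation levels, 0 (worst) .. 10 (best)
ACTIONS = [-1, 0, 1]  # decrease, hold, or increase oxygen flow
TARGET = 8            # the saturation band we want the agent to maintain

def step(state, action):
    """Apply an action; return the next state and a reward (toy dynamics)."""
    drift = -1 if random.random() < 0.5 else 0       # patient tends to deteriorate
    next_state = min(max(state + action + drift, 0), N_STATES - 1)
    reward = -abs(next_state - TARGET)               # closer to target = higher reward
    return next_state, reward

Q = np.zeros((N_STATES, len(ACTIONS)))               # value of each action in each state
alpha, gamma, epsilon = 0.1, 0.9, 0.1                # learning rate, discount, exploration

for episode in range(2000):
    state = random.randrange(N_STATES)
    for _ in range(50):                              # 50 decisions per episode
        # Explore occasionally, otherwise exploit the best known action.
        if random.random() < epsilon:
            a = random.randrange(len(ACTIONS))
        else:
            a = int(np.argmax(Q[state]))
        next_state, reward = step(state, ACTIONS[a])
        # Q-learning update: nudge the estimate towards reward + discounted future value.
        Q[state, a] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, a])
        state = next_state

print("Learned policy (best action per saturation level):",
      [ACTIONS[int(np.argmax(Q[s]))] for s in range(N_STATES)])

The point is simply that the algorithm is rewarded for keeping the (toy) patient near the target and, over many iterations, learns a policy of sequential decisions that does so.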

The social contract

Companies developing AIs in the healthcare space are offering their services at low (or no) cost to the state or the end user. Why the enthusiasm? Well, for AIs, data is oxygen.

Google search is much better at providing high quality search results because we all use it. Each time we click on a search result, we're feeding the AI an additional datapoint that is used to improve and refine the search algorithm. The more we use Google, the better it becomes, the more we use it... and so on. The same will apply in the healthcare space.

Healthcare data is sticky. It's not like climate data or trading data (where AIs have had tremendous success); using healthcare data requires companies to jump through multiple hoops. Training AIs is therefore difficult. As certain companies win contracts to provide services, they'll get something else... access to data. Those AIs that win contracts will very quickly become substantially better than their competition. Orders of magnitude better. The competition will, of necessity, drop out of the race. Very quickly, the winners will have a pseudo-monopoly, and the companies that were providing a free or cheap service to humanity will become price setters. The promise of AI coming to the rescue and solving the medical human resource problem in poor countries could evaporate in front of our eyes.

Is there a lesson to learn from how humanity has entered into a social contract with big pharma? Twenty years ago today we saw the Doha Declaration, which included agreement that the Trade-Related Aspects of Intellectual Property (TRIPS) agreement of the World Trade Organisation be interpreted to include public health objectives where needed. Intellectual property, and the need to recoup development costs, are not to supersede states' obligations to respond to public health emergencies. The application of the Doha interpretation has been remarkable. Millions of people have had access to ARVs to treat HIV, for example. In recent times we've seen the use of the Doha provisions to extend access to treatment for COVID-19 to poorer countries.


Do we need a similar social contract with "big AI"? Perhaps. We can't use the TRIPS agreement and its provisions because the problem here isn't one of intellectual property (for which we create a time-limited monopoly) but rather one of market dominance. On the one hand, the promise of exceptional profits is driving investment and innovation (which we want). On the other hand, having healthcare at the behest of the private sector in perpetuity is far from ideal. Creative thinking is needed on this one.

