BIOMEDevice Expo
Medical narration is my expertise, which encompasses Medical Education, Medical Legal, BioTech and Pharma, and Medical Device content. So I jumped at the chance to attend the BIOMEDevice Expo in Boston and learn about the new gadgets and gizmos changing the face of healthcare delivery. The dizzying array of new technologies blew my mind:
- Surgical robots are performing micro-surgeries.
- Next-generation surgical implants feature novel designs and manufacturing methods.
- 3D bioprinting, a popular tissue engineering technology, uses low-endotoxin gelatins as bioink (think gummies!) to build constructs with cellular characteristics and functionality for various biomedical applications.
- Dermadry devices treat excessive sweating at home.
- Hi-tech motors operate prosthetic limbs.
- Nanotech plastics are beyond cool-looking!
- Coating technology allows guidewires, catheters, stents, mesh, and more to glide easily in the body.
- Advanced fabrics, technologies, and machine learning create medical devices that fit people’s lifestyles, literally and figuratively, like this woman’s bra.
The MedEd Component of the Expo Was Terrific Too
Erika Cheung, the Theranos whistleblower, gave the keynote address on the importance of ethics in product development and business practices, whether dealing with employees, investors, board members, or the public. Later, a panel of five physicians and thought leaders (primarily women) spoke about artificial intelligence and machine learning in the Medtech industry. Their comments confirmed my suspicions about a systemic problem we’re facing.
Who’s Teaching the Computers and What Are Their Biases?
There have been critical advances in artificial intelligence in healthcare over the last few years. AI software can now collect data from lab reports and wearables, such as heart rate and glucose levels, suggest treatments, and even diagnose medical problems early. Doctors can look at all the data and recommend medications and treatments, sometimes catching medical issues before they become serious concerns. AI optimizes operations at healthcare systems and clinics by allowing patients to make appointments, reorder prescriptions, and even talk to their doctors directly. Transcribing devices have helped doctors keep notes from in-person meetings and surgeries, ultimately allowing them to spend more time with their patients and less time on administrative tasks. Advancements in these technologies will continue, with promising outcomes, but on closer inspection, they also bring worrisome pitfalls.
Physician, Heal Thyself!
Studies suggest the algorithms in AI software reflect the biases around race, sex, gender identity, disability, and socioeconomic class that already exist in the health system, and might even amplify them. It’s not a surprise that the groups these algorithms exclude are the ones that have faced inequality in medicine for centuries. People collect the data, and that data gets programmed into a computer, so any flaws in data collection end up as part of the algorithm. Those researching the healthcare data have inherent biases of their own, so it’s essential to address and repair them for algorithms to represent the population better.
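To make that mechanism concrete, here is a minimal, hypothetical sketch in Python (my own illustration, not anything presented at the Expo). It assumes a made-up scenario with two patient groups whose outcomes follow different patterns; because one group makes up only a sliver of the training data, the model learns the majority group’s pattern and performs worse on the minority group:

```python
# A minimal sketch of how under-representation in training data becomes part
# of the model: group B's outcome depends on its features differently than
# group A's, but because group B is only a small slice of the training set,
# the model effectively learns group A's pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, weights):
    """Simulate patients whose true risk follows group-specific weights."""
    X = rng.normal(size=(n, 2))                       # two synthetic biomarkers
    logits = X @ weights
    y = (logits + rng.normal(scale=0.5, size=n)) > 0  # noisy true outcome
    return X, y.astype(int)

# Group A dominates the training data; group B is badly under-sampled.
Xa, ya = make_group(5000, np.array([1.0, 0.2]))
Xb, yb = make_group(250,  np.array([0.2, 1.0]))       # different true pattern

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group: the model fits the majority
# group's pattern and is visibly less accurate for the minority group.
for name, w in [("A", np.array([1.0, 0.2])), ("B", np.array([0.2, 1.0]))]:
    Xt, yt = make_group(2000, w)
    print(f"group {name} accuracy: {model.score(Xt, yt):.2f}")
```

Nothing in this toy code is biased on purpose; the skew comes entirely from who is, and isn’t, represented in the data, which is exactly the flaw that gets baked into real algorithms.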
The History of Medicine
When we look at medicine historically, there has always been a gender bias in research and treatment. Medical books from the past used the white male body for their drawings and information. A few factors explain why, and we can forgive some early doctors’ thinking from a time when the science of medicine was in its infancy.
Women Have Been Second-Class Citizens for Centuries
The fact is that early medical research excluded women in both practice and analysis. Since doctors were predominantly male, there was timidity about examining the female body, which was considered improper. Illnesses and diseases affecting women were often attributed to hysteria or the “wandering womb,” as Plato reasoned. Doctors thought the womb was the only difference between the female and male bodies and assumed that women experienced all illnesses the same way men did. Studies simply weren’t performed on women. Medical professionals wanted to protect the female reproductive organs, excluding women from clinical trials for fear that the chemicals would affect their reproductive future. As a result, women were left out of trials for heart disease and even some breast cancer research, leading to misunderstanding of conditions that mainly affect women, like endometriosis. Shockingly, this practice continued until as late as 2016, when the National Institutes of Health (NIH) mandated that any approved research needed to include women.
People of Color Aren’t Treated the Same – in Research or Practice
Critical scientific studies have long excluded people of color, so the data does not represent them. Yet algorithms look at that data to determine the type of healthcare patients can receive. In the case of Black Americans, this is problematic since statistics about their population are missing. For example, African Americans are four times more likely than White Americans to have kidney failure, but existing AI algorithms for transplant list placement put Black patients below their White counterparts. An algorithm built on a lack of diverse data can therefore prevent Black patients from receiving an organ transplant, even when every other aspect of their health is the same as that of White patients on the same transplant list.
Health insurance companies are likely to reject extra-care resources for Black patients because the risk-prediction algorithms they rely on use personal information like race and income to determine health costs and premiums. However, Black Americans have less access to health services and resources and have one of the highest poverty rates in America. If an algorithm uses previous healthcare spending to predict future risk, it does not account for that lack of access to health services. Research shows that Black Americans have higher death rates from cancer and pregnancy and a higher risk of diabetes. So the long and short of it is that the insurance algorithms skew the costs of treatment, concluding that Black patients need less care without considering that they cannot access quality care because of socioeconomic issues.
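The spending-as-a-proxy problem is easy to see with a toy example. Below is a hypothetical sketch (my own simplification, not the insurers’ actual model) in which patients with identical underlying need generate different spending because some have less access to care, so a risk score built on past spending ranks them differently:

```python
# A simplified illustration of why "past spending" is a biased proxy for
# "health need": patients with the same underlying need spend less when they
# have less access to care, so a spending-based risk score under-ranks them.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

need = rng.gamma(shape=2.0, scale=1.0, size=n)   # true underlying health need
access = rng.choice([1.0, 0.6], size=n)          # the 0.6 group has less access

# Observed spending reflects need *times* access, plus noise.
spending = need * access + rng.normal(scale=0.1, size=n)

# A score trained to predict spending effectively ranks patients by spending.
risk_score = spending

# Compare average scores for patients with the SAME true need but different access.
similar_need = (need > 1.8) & (need < 2.2)
for a in (1.0, 0.6):
    mask = similar_need & (access == a)
    print(f"access={a}: mean need {need[mask].mean():.2f}, "
          f"mean risk score {risk_score[mask].mean():.2f}")
```

Even in this toy version, the reduced-access group gets a markedly lower risk score at the same level of true need, which is the pattern the spending-based prediction described above produces in practice.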
Similarly, Native Americans have higher rates of underlying conditions such as obesity and diabetes, making them more likely to die of Covid-19. Because of the pandemic, the life expectancy of Native Americans has dropped significantly, to roughly what White Americans could expect in 1944. One significant component of the health crisis in these communities is poverty and a lack of funding from the government. Two million Native Americans receive care from the federally funded Indian Health Service, yet the program’s recipients receive much less funding than those on Medicare and Medicaid. More than likely, these and many other factors have warped the medical care data for this population, misrepresenting the community in existing algorithms.
Medical Bias Extends Across the Board
Medical bias doesn’t stop at gender and racial lines but crosses into gender identity, ableism, and class divides. So, what steps can we take to correct it? First, the data collected must be representative of all members of society. Clinical and research trials must be more diverse so that the data implemented in AI devices’ algorithms is more accurate. In this way, doctors can make more educated and reliable decisions about their patients’ treatments, diagnosis predictions, and overall care.
Second, doctors and researchers must address their own biases as they develop algorithms and code. Pinpointing and addressing existing biases will improve future algorithms and technologies. We must share research and principles responsibly to resolve these bias issues while protecting patients’ privacy.
Legislation is also needed to protect patients against medical bias. Likewise, researchers and medical professionals should pursue class-action lawsuits to prove data bias, pushing private companies to change their practices and prevent bias. We need the support of congressional leaders to put protections in place that ensure diverse populations are included in data research.
Health Innovations Start at Square One
Dr. Trishan Panch, an instructor at the Harvard T.H. Chan School of Public Health and winner of Harvard’s Public Health Innovation Award, says, “an algorithm is merely a series of steps—a recipe and an exercise plan are as much of an algorithm as a complex model. At the core of any health system challenges, including algorithmic bias, lies a question of values: what health care outcomes are societally important and why? How much money should go towards health care, and who should benefit from improved outcomes? It’s as much an issue of society as it is about algorithms.” AI technology will be able to revolutionize healthcare and diagnostics in the future. Still, for society to reap those benefits, the technology must represent the diversity that makes up our entire population.