Recent breakthroughs in technology are opening up new opportunities to address long-standing health challenges. Neuralink, a brain-chip startup, could help restore vision for the visually impaired.1 Synchron, a brain-computer interface (BCI) company, reported that a patient with amyotrophic lateral sclerosis (ALS) used its technology to turn lights on and off, make calls, and watch television without using his hands or voice.2
Emerging health technologies apply recent technical advances to human health. These technologies include BCIs, health wearables, and internet-connected healthcare devices, and they may also leverage artificial intelligence (AI) to improve patient outcomes. Some, such as BCIs, remain largely experimental, while others, such as fitness tracking watches, are established and commonplace.
These technologies have the potential to revolutionize healthcare and address long-standing accessibility concerns, but these advancements are not without risk. To maximize the value and minimize the harms of emerging health technologies, it is critical to address ethical, privacy, and societal concerns so that these technologies help rather than hurt humanity.
Benefits of Emerging Health Technologies
Emerging health technologies are especially promising in the area of accessibility. While not currently widespread, certain BCI implementations could enable individuals with limited mobility to live more independently. For example, Synchron’s BCI can integrate with Amazon Alexa, allowing users to control smart speakers using their thoughts. This technology can enable those with limited speech or hand mobility to benefit from the smart speaker’s capabilities.3 Emerging health technologies may also allow individuals with speech impairments to communicate with others through text-to-speech technology. All these innovations have the potential to revolutionize accessibility.
Emerging health technologies may also augment physician capabilities. AI is already being used in diagnostics, enabling doctors to have their diagnoses double-checked, which can lead to more accurate diagnoses.4 AI can also detect adverse drug interactions, helping to improve patient safety.5
Additionally, the tracking capabilities of emerging health technologies may encourage people to make healthier choices. Those who wear fitness trackers exercise an extra 50 minutes per week and take an extra 1,200 steps daily.6 Sleep tracking information automatically collected by wearables can be shared with healthcare providers to help diagnose potential health problems. Wearable devices may even be able to predict when the user is getting sick with a cold or the flu.7 Earlier detection of illness could allow proactive treatment, helping people get less sick in the first place.
Challenges of Emerging Health Technologies
Technology, healthcare, and ethics are not necessarily at odds, but many individuals and enterprises at the forefront of emerging healthcare technologies are not primarily healthcare experts. As a result, healthcare ethics concerns may not be reflected in emerging technology development. One notable example concerns animal welfare. In 2019 and 2020, several monkeys died after receiving Neuralink implants.8 Neuralink employees raised concerns that the company was eschewing traditional testing practices, i.e., testing one factor at a time in an animal study before moving on to additional testing. Neuralink CEO Elon Musk’s push to move faster meant that tests were run without first addressing previously identified issues, which could result in experiments that cause the deaths of more animals.9 Even more concerning, an ex-Neuralink employee called Musk’s claim that the monkeys that died were already close to death and did not die as a result of the implant “ridiculous,” if not a “straight fabrication.”10 Traditional healthcare testing is conducted the way it is for reasons of accuracy and ethics, and some enterprises are forgoing healthcare ethics and honesty in the interest of being first to market. While rushing to market may be acceptable in some industries, the consequences are dangerous when it occurs with health-related devices.
Direct-to-consumer healthcare technologies, including fitness trackers and smart scales, pose significant privacy risk. Most healthcare-related privacy laws, such as the US Health Insurance Portability and Accountability Act (HIPAA), do not apply to these types of devices. As a result, manufacturers of these products may engage in excessive data collection and improper use of that data. The Mozilla Foundation, a global non-profit, reviewed a fitness watch geared toward children and vulnerable adults and found that it had no privacy policy at all, for either the watch or the associated app.11 The watch can collect heart rate, blood pressure, and fall information,12 all of which can reveal personal information about an individual. However, the watch and the data it collects are not subject to HIPAA, whose scope does not include wearable devices. Given the granular and highly sensitive nature of the data collected by health-related wearables, any compromise of this data can have significant consequences for users.
Additionally, the harms resulting from data leakage and breaches of these devices can be significant because the data they collect is health related. Even more concerning is the possibility of malicious actors hacking devices that provide life-saving services. Though these connected devices can save lives and simplify healthcare management, their vulnerabilities can be exploited, and the consequences can be deadly.13
AI-powered emerging health technologies rely heavily on data, but low-quality data can translate to negative health outcomes. For example, women are 50% more likely than men to be misdiagnosed after a heart attack, largely because women often exhibit different symptoms than men do.14 Health-related data, including the data used to train AI models for healthcare, is often not disaggregated, i.e., separated by demographics so that differences between groups can be evaluated. If AI models are trained on data that is not disaggregated, the result can be disproportionate harm to some groups. For example, an AI-powered hospital questionnaire screening for heart attacks may overlook women if its training data was not disaggregated, leading to more serious health complications for women.
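The disaggregation check described above can be made concrete. The following is a minimal, hypothetical sketch (the function name, record format, and sample data are all illustrative, not from any real system) of how an evaluator could compare a screening model’s miss rate across demographic groups rather than reporting a single aggregate number:

```python
from collections import defaultdict

def false_negative_rate_by_group(records):
    """Compute a screening model's false-negative rate per demographic group.

    Each record is a dict with keys: 'group', 'actual' (True if the condition
    was present), and 'predicted' (True if the model flagged it).
    """
    misses = defaultdict(int)     # condition present but not flagged
    positives = defaultdict(int)  # condition present
    for r in records:
        if r["actual"]:
            positives[r["group"]] += 1
            if not r["predicted"]:
                misses[r["group"]] += 1
    return {g: misses[g] / positives[g] for g in positives}

# Illustrative data only: the model misses one of two true cases among women.
records = [
    {"group": "women", "actual": True, "predicted": False},
    {"group": "women", "actual": True, "predicted": True},
    {"group": "men", "actual": True, "predicted": True},
    {"group": "men", "actual": True, "predicted": True},
]
print(false_negative_rate_by_group(records))  # {'women': 0.5, 'men': 0.0}
```

An aggregate miss rate here would be 25%, masking the fact that the model misses half of the true cases among women; disaggregating surfaces the disparity.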
Given these flaws, people must understand that these advances will be most effective when paired with human oversight. Emerging health technologies are not a panacea for the healthcare industry’s problems. Fifty-one percent of surveyed people who believe racial or ethnic bias is a problem in health and medicine believe that the use of AI will improve these problems,15 even though AI used for healthcare would be trained on existing data that may contain racial or ethnic bias. AI products can only be as good as the data on which they are trained, and human involvement in addressing bias and disparities is critical to preventing harm.
A growing reliance on healthcare technology may also perpetuate systemic inequality. The digital divide describes the gap in access to technology across demographics and regions.16 While emerging healthcare technology promises to make people healthier, many individuals may not have access to it. Certain populations could end up with worse health outcomes and a lower quality of life simply because they cannot access and benefit from these tools.
Responsibly Moving Forward
Despite the myriad challenges associated with emerging healthcare technologies, enterprises that develop and deploy them responsibly can enjoy their benefits while minimizing harm. The first step for these enterprises is to establish organizational ethical standards if none are already in place. These standards are critical to guiding how emerging healthcare technologies are used and evaluated. While compliance is important, it is not synonymous with ethics. Compliance efforts often lag behind technologies’ capabilities and should be treated as the bare minimum. This is especially important for healthcare-adjacent devices, such as fitness trackers, that are not covered by healthcare-specific laws and regulations.
Enterprises—especially those that are not established healthcare organizations—should ensure the development and deployment of emerging health-related technologies involves stakeholders who are familiar with healthcare ethics. This includes healthcare experts, research and development professionals, ethicists, philosophy experts, privacy experts, and security experts. These experts can ensure that experiments, technological parameters, and use cases align with industry best practices and standards.
Enterprises building emerging healthcare technologies, especially those leveraging AI, must operate with strong data governance practices in place. This involves ensuring data quality and tracking data lineage. High-quality data can result in better patient outcomes, while data lineage can help ensure that privacy objectives and patient confidentiality are prioritized in technology development.
Given the serious harms that can result from privacy and security incidents, it is critical to practice security by design and privacy by design when developing emerging health-related technology. This can help mitigate the consequences in the event of an incident.
Privacy by design includes prioritizing user experience (UX) and user interface (UI) considerations when developing these technologies. They should ultimately be designed for the end user. Part of this involves transparency: It should be clear how these technologies will be used, what data is collected, and how that data will be used. Settings should be privacy-preserving by default, but the process of reviewing and modifying privacy settings should be easy.
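As a small illustration of the privacy-preserving-defaults principle described above, the following sketch models the settings of a hypothetical health wearable’s companion app (all field names are invented for illustration). Every form of data sharing starts disabled, so sharing requires an explicit, reviewable opt-in, and a transparency helper summarizes the current configuration in plain language:

```python
from dataclasses import dataclass, fields

@dataclass
class PrivacySettings:
    """Settings for a hypothetical health wearable's companion app."""
    # Privacy-preserving defaults: every form of sharing is opt-in.
    share_data_with_partners: bool = False
    personalized_ads: bool = False
    cloud_backup_of_health_data: bool = False
    data_retention_days: int = 30  # retain data only briefly by default

def describe(settings: PrivacySettings) -> str:
    """Transparency helper: a plain summary of what is (and is not) enabled."""
    return "\n".join(
        f"{f.name} = {getattr(settings, f.name)}" for f in fields(settings)
    )

# A new user starts in the most private configuration and opts in explicitly.
settings = PrivacySettings()
settings.cloud_backup_of_health_data = True  # an explicit, reviewable choice
print(describe(settings))
```

The design choice worth noting is that privacy here is a property of the defaults, not of user diligence: a user who never opens the settings screen shares nothing.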
While security by design and privacy by design are important, it is imperative that the professionals building these technologies also value and understand ethics; ethics should ultimately drive their work. Unfortunately, the industry does not appear to adequately prioritize these traits: only 15% of survey respondents indicate that honesty is an important soft-skill trait for security professionals.17
Conclusion
Emerging healthcare technologies have the potential to democratize healthcare and empower individuals to take control of their own health. However, this potential can only be realized if the technology is designed and deployed ethically.
This is an inflection point for emerging technology in healthcare: If it is used responsibly, it could revolutionize healthcare and lead to a healthier world. But if rushed to deployment and developed without concern for ethics, it could result in the ubiquity of a surveillance technology that has access to health information, widening social disparities and leading to worse health outcomes for certain populations.
Endnotes
1 Reuters, “Musk's Neuralink gets FDA's breakthrough device tag for 'Blindsight' implant,” 18 September 2024
2 Whittaker, M.; “‘Alexa, hook up my brain’: Synchron hits major ‘medical milestone,’” Forbes, 1 October 2024
3 Mullin, E.; “This Brain Implant Lets People Control Amazon Alexa With Their Minds,” Wired, 16 September 2024
4 American Hospital Association, “How AI Is Improving Diagnostics, Decision-Making and Care”
5 American Hospital Association, “How AI Is Improving Diagnostics, Decision-Making and Care”
6 Corliss, J.; “Do fitness trackers really help people move more?,” Harvard Health Publishing, 1 June 2022
7 Ko, E.; E. Glazier; “Studies indicate fitness trackers can predict illness,” UCLA Health, 30 October 2020
8 Landymore, F.; “Terrible Things Happened to Monkeys After Getting Neuralink Implants, According to Veterinary Records,” NeoScope, 22 September 2023
9 Levy, R.; “Exclusive: Musk’s Neuralink faces federal probe, employee backlash over animal tests,” Reuters, 6 December 2022
10 Mehrotra, D.; “The Gruesome Story of How Neuralink’s Monkeys Actually Died,” Wired, 20 September 2023
11 Mozilla Foundation, “Angel Watch,” 1 November 2023
12 Angel Watch
13 Hay Newman, L.; “These Hackers Made an App That Kills to Prove a Point,” Wired, 16 July 2019
14 Society of Women Engineers, “Media: Invisible Women: Exposing Data Bias in a World Designed for Men,” 5 May 2020
15 Tyson, A.; Pasquini, G.; et al.; “60% of Americans Would Be Uncomfortable With Provider Relying on AI in Their Own Health Care,” Pew Research Center, 22 February 2023
16 Terrell Hanna, K.; “digital divide,” TechTarget
17 ISACA®, “State of Cybersecurity 2024,” 1 October 2024
Safia Kazi, AIGP, CIPT
Is a privacy professional practices principal at ISACA. In this role, she focuses on the development of ISACA’s privacy-related resources, including books, white papers, and review manuals. Kazi has worked at ISACA for 10 years, previously working on the ISACA Journal and developing the award-winning ISACA Podcast.
Collin Beder, CET, CSX-P, Security+
Is an emerging technology practices principal at ISACA. In this role, he focuses on the development of ISACA’s emerging technology-related resources, including books, white papers, and review manuals, as well as performance-based exam development. Beder has worked at ISACA for 4 years, authored the Artificial Intelligence: A Primer on Machine Learning, Deep Learning and Neural Networks book, and developed hands-on performance-based labs and exams.