technology Archives - Lown Institute
https://lowninstitute.org/tag/technology/

Could AI really replace human doctors?
https://lowninstitute.org/could-ai-really-replace-human-doctors/ | May 22, 2023

A recent study suggests that artificial intelligence chatbots are able to respond effectively to patient questions and may even perform better in certain ways than human physicians. What does this say about the flaws of the current healthcare system, and should doctors be concerned?

An article published last month in JAMA Internal Medicine sparked debate when it found that AI chatbot responses to patient questions were rated higher than physician responses for both quality and empathy. The difference in perceived empathy was particularly stark, with chatbot responses rated "empathetic" or "very empathetic" nearly 10 times as often as those from human doctors. Does this indicate that AI would be better at doctoring than humans?

Empathy is key to healing…but is devalued in our healthcare system

No matter how far technology advances, AI will never be able to fully imitate human connection. There is something unique about the trusting relationship between patient and provider, about person-to-person contact, that is innate to healing. A popular refrain holds that the first evidence of civilization was a fractured femur that had healed, proof that at some point at least one human had cared for another until they recovered. Society is built around empathy and compassion for our fellow human beings.

“The art of medicine is a process for nurturing a special human relationship that champions a partnership for healing.”

– Dr. Bernard Lown

Most healthcare workers enter the field to care for those in need. But the system we have now makes it difficult to practice medicine in a way that fosters connection. As Jennifer Lycette, a rural community hematologist/oncologist from Oregon, notes in her STAT opinion piece, the pressure placed on physicians to get through as many patients as possible, as fast as possible, is not conducive to compassionate care. The pressure to be as “efficient” as possible has resulted in less time with patients and more time documenting. The burnout in some hospitals has gotten bad enough to push medical residents to unionize.

Time pressure pushes physicians to go-go-go. The lack of quality time with patients has documented negative impacts on physician well-being, empathy, and patient outcomes; could it be that AI performed better than doctors because of a systemic flaw and not an individual one? 

AI could support, not replace, human healthcare

It's worth noting that the JAMA IM study is not completely comparable to real-life circumstances. Researchers could not ethically feed real electronic medical records into AI without violating HIPAA, so patient questions were drawn from a Reddit forum. This does not diminish the validity of the questions, but it could influence how human physicians answered them. Online culture, particularly on Reddit, does not prioritize empathy, and the physicians responding may have followed online communication norms rather than professional ones. Human respondents were also not familiar with the patients' full medical histories and might have done better had they been seeing them in real life.

This study suggests that AI at least has the potential to support quality, empathetic care. Already, AI is being used to streamline administrative tasks, answer patient questions, and power machine learning tools; it's likely that in the near future there will be more AI scribes and virtual nursing assistants. As the technology continues developing, AI will be used to supplement care, but it can't replace doctors. The art of healing is a human one.

Imagining the future of quality in medicine: Dr. Vikas Saini honored with Avedis Donabedian International Award
https://lowninstitute.org/imagining-the-future-of-quality-in-medicine-dr-vikas-saini-honored-with-avedis-donabedian-international-award/ | May 5, 2023


Systems awareness and systems design are important for health professionals, but are not enough. … Ultimately, the secret of quality is love.

– Dr. Avedis Donabedian

Last week, Lown president Dr. Vikas Saini was presented with the Donabedian International Award. 

Dr. Avedis Donabedian was a revolutionary physician who is credited with creating the field of healthcare quality and outcomes research. His model for measuring quality, the Donabedian model, has shaped the way healthcare quality is conceptualized. Like our founder Dr. Bernard Lown, Dr. Donabedian was a physician activist who advocated for compassion in healthcare. 

In his acceptance remarks, Dr. Saini shared his vision for a future of medicine that uses new technologies for socially responsible goals, while still keeping empathy and the human connection in medicine at the forefront.  View the video or read excerpts from the transcript of his speech below.

On May 3, 2023, Dr. Vikas Saini was honored with the Avedis Donabedian International Award.

Dr. Saini on the art of healing: “Across the millennia a shaman accompanied us whenever we had an illness, whether serious or minor, reminding us of our frailty and transience in this world. Healers have always been honored–for healing if successful, but mostly for being present as a trusted companion on an unwelcome journey.”


Dr. Saini on AI: “Intelligent machines could unburden us of the tedious calculations of clinical effectiveness and cost utilities. More than that, they could democratize expertise and radically reduce the division of labor between knowledge workers and manual ones.  Most importantly, they offer the promise of democratizing healthcare policy itself by helping non-specialists understand complex issues, set priorities and make trade-offs. But the barriers are enormous. AI models trained on backward-looking datasets will reproduce biases and reinforce obsolete paradigms. The massive capital required increases the risk of monopolies of the few.  Critically, machines have no values; they do not care about people. It is therefore urgent that all of us engage in a debate on the role of AI in reshaping health care.”


Dr. Saini on the future: “There is a yearning worldwide…because people want to escape the cul-de-sac of a sterile modernity and return to a geography of connection and of solidarity–solidarity with each other and with the natural world. If we fail, we may become the tools of our tools and turn machine intelligence into the enemy of human freedom. If we succeed, we may create the space for all health workers to focus their energies on Right Care for their patients.  Freed from the burden of repetition, we could enjoy a future of radical fulfillment and a democracy of knowledge that enables a democracy of health. If we can imagine such a future we can create it–a world that allows us to return to our roles as shamans in a digital village, free to focus on the things that matter most: warmth, empathy, and profound human presence that can overcome the angst of the clinical moment.”

Digital redlining: How telehealth can exacerbate inequalities
https://lowninstitute.org/digital-redlining-how-telehealth-can-exacerbate-inequalities/ | October 15, 2021

During Covid-19 surges, health systems replaced most in-person visits with video visits or phone calls to stop the spread. The rapid switch to telehealth has many potential advantages for expanding access to care; for example, patients no longer have to find transportation or childcare to see their doctor. However, if not done with an intentional eye toward equity, telehealth can leave many behind.

At Boston University Center for Antiracist Research’s symposium earlier this month, Dr. James Feigenbaum, Assistant Professor of Economics at BU, Dr. Lance Laird, Assistant Professor of Family Medicine at the BU School of Medicine, and Dr. Jayakanth Srinivasan, Research Associate Professor of Information Systems at the BU School of Business shared their insights into how “digital redlining” keeps some of the most vulnerable patients from accessing telehealth.

The practice of "redlining," government-led discrimination in home loan offers based on neighborhood racial makeup, was made illegal in the 1960s. But more than 50 years later, internet providers still discriminate against customers based on where they live, a practice referred to as "digital redlining." Comparisons of broadband internet access and poverty rates by census tract show that network providers systematically exclude low-income neighborhoods from broadband service, leaving them with much slower internet access.
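To make that kind of census-tract comparison concrete, here is a minimal sketch in Python (pandas). The column names and figures are entirely hypothetical; real analyses typically join broadband availability data (for example, FCC deployment data) with Census poverty estimates by tract.

```python
import pandas as pd

# Hypothetical tract-level data for illustration only.
tracts = pd.DataFrame({
    "tract_id":          ["A", "B", "C", "D", "E", "F"],
    "poverty_rate":      [0.05, 0.09, 0.14, 0.22, 0.31, 0.40],  # share of residents in poverty
    "max_download_mbps": [940, 500, 200, 100, 25, 10],          # best advertised speed offered
})

# Group tracts by poverty level and compare the broadband service offered.
tracts["poverty_band"] = pd.cut(
    tracts["poverty_rate"],
    bins=[0, 0.1, 0.2, 1.0],
    labels=["low poverty", "moderate poverty", "high poverty"],
)
summary = tracts.groupby("poverty_band", observed=True)["max_download_mbps"].median()
print(summary)  # in digital redlining, offered speeds fall as poverty rises
```

In the published analyses the pattern looks like the made-up numbers above: the higher a tract's poverty rate, the slower the service providers offer.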

The switch to telehealth makes these digital divides much more apparent, experts noted. When Boston Medical Center asked their patients if they had access to a reliable internet connection, more than 20% said no. Some may have internet access, but not have the technological skills or knowledge to navigate programs easily. Others may have access and knowledge, but they have competing priorities — for example, their children needed to use the internet to attend virtual classes.

“We’re asking families to ration broadband, to choose between closing the education gap or the health gap.”

– Dr. Jayakanth Srinivasan, Research Associate Professor of Information Systems at the BU School of Business

Some families had to choose which meeting or class to attend, or use their phone data to access both, which can be expensive. In some cases, families who had previously missed an internet bill were not given access even when the school system subsidized it. "For some patients, seeking health care is taking away their wages," said Laird.

Another challenge is fragmentation of care through telehealth. A clinician and patient may have a long-standing relationship, but when it came to virtual visits during Covid-19, patients were not always given a choice of provider. They had to see whichever doctor was available, not their doctor. This breaks apart the continuity and trust that is so important to high-quality care.

The issue of privacy in tech is another challenge. It's one thing to see your doctor in the clinic; it's another to let them into your house through video calls. Especially if you don't get to choose which doctor you see, this can create further tension and reduce trust.

What can we do to solve the problem of digital redlining? First of all, we need to recognize that internet access is a necessity, not a luxury. "The federal government is accountable," said Srinivasan. "The US government should be able to fund broadband access for everyone." Federal and local officials must ban digital redlining and demand the same speed and quality of internet access for all.

Health systems should also be sure to ask their patients about their internet access and knowledge, to see where there are gaps in access. Health systems should work toward making patients’ preferred clinicians available for virtual visits, and think about how to create more private virtual spaces to build trust.

The rapid rollout of telehealth led to disparities in access, some more avoidable than others. As the pandemic recedes in certain regions, now is the time to make sure that these inequalities do not become embedded in the health system. "We have to correct these inequalities before it calcifies," said Srinivasan. "Don't let the concrete set." Telehealth presents a massive opportunity to expand health care access — but it will only work if we can expand digital health equitably.

The risks of rushing to incorporate AI into health care
https://lowninstitute.org/the-risks-of-rushing-to-incorporate-ai-into-health-care/ | January 10, 2020

As AI becomes more popular in health care, clinicians and patients should take the opportunity to learn about the potential risks of these products.

When people think of artificial intelligence (AI), they usually imagine something futuristic, such as robots taking over the world. But AI in health care in many ways is already here: wearable sensors and trackers, virtual diagnostic systems, and other data analysis technologies have shown potential to help doctors diagnose diseases, read scans, and make decisions about treatments.

This exciting potential has made health care AI a fast-growing enterprise. In 2017, Americans invested nearly $100 million in health care AI companies, about three times as much as was invested in 2011. This excitement and financial boom have led some health care leaders to hail AI as a "revolution" in health care. Others, however, are more skeptical.

A recent piece by Liz Szabo in Scientific American explores both the potential benefits and serious risks of rushing to incorporate AI into patient care. A primary concern is the lack of oversight of AI in health care. While prescription drugs must be approved by the Food and Drug Administration (FDA) before they are marketed and sold, most health care AI does not require FDA approval as long as it is “similar” enough to a product already on the market. (Sound familiar? It’s the same problematic process the FDA uses to approve new medical devices.)

Silicon Valley’s preference for speed over perfection is well known, and without sufficient oversight from the FDA, patients and health systems will be exposed to risks, such as: 

  • Inaccuracies from hospitals using AI software that has never been tested in the hospital setting. 
  • Exacerbated racial and ethnic disparities due to AI reproducing disparities that currently exist in health care data. 
  • Anxiety and unnecessary treatment provoked by erroneous tests and false positives.
  • Incomplete records of adverse events from AI, as companies are required to monitor the safety of their own products.
  • Wasting money on technology that does not meaningfully improve health.

As AI becomes more popular in health care, clinicians should take the opportunity to learn about how these products work and their potential risks–and push back on their unregulated use–for the sake of their patients. 

“While it is the job of entrepreneurs to think big and take risks, it is the job of doctors to protect their patients,” said Dr. Vikas Saini, quoted in Scientific American

Facebook's "preventive health" tool has potential for overdiagnosis
https://lowninstitute.org/facebooks-preventive-health-tool-has-potential-for-overdiagnosis/ | October 31, 2019

Privacy isn't the only thing we should be worried about with Facebook's new preventive health tool.

When Facebook announced that they were launching a new “preventive health” tool, some users were understandably concerned about data privacy. There are a lot of reasons to distrust Facebook, from misinformation in political ads, to unauthorized sharing of personal user information.

But privacy isn't the only thing we should be worried about with this new tool—there is also a good chance that it will lead to overdiagnosis and overtreatment.

Facebook’s tool gives users health recommendations based on their gender and age. Some of these recommendations are widely accepted and have been shown to save lives, such as annual blood pressure tests and screening for cervical cancer every five years.

However, some of their recommendations are not so cut-and-dried. For example, the Facebook tool advises women age 45 to 55 to get a mammogram every year. While the American Cancer Society recommends this, other groups, such as the US Preventive Services Task Force, recommend biennial screening (every two years) for women age 50-74, acknowledging that screening for women younger than 50 has a much greater risk of harm than benefit. If all women follow Facebook's advice, there will be many more false positives and biopsies, with no overall survival benefit.
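To see why more frequent screening translates into more false positives, here is a rough back-of-the-envelope sketch in Python. The per-mammogram false-positive rate is an assumption chosen for illustration (figures on the order of 10% per screen are often cited), not a number from the Facebook tool or any guideline, and the rounds are treated as independent for simplicity.

```python
# Probability of at least one false positive over repeated screening rounds,
# assuming independent rounds and a fixed per-screen false-positive rate.
FALSE_POSITIVE_RATE = 0.10  # assumed ~10% per screening mammogram (illustrative)

def prob_any_false_positive(n_screens: int, fp_rate: float = FALSE_POSITIVE_RATE) -> float:
    """Chance of at least one false positive across n_screens independent screens."""
    return 1 - (1 - fp_rate) ** n_screens

# A woman screened from age 45 through 55:
annual = prob_any_false_positive(n_screens=11)   # every year, ages 45-55
biennial = prob_any_false_positive(n_screens=6)  # every other year, ages 45-55

print(f"Annual screening:   {annual:.0%} chance of at least one false positive")
print(f"Biennial screening: {biennial:.0%} chance of at least one false positive")
```

Even under these simplified assumptions, annual screening gives roughly a two-in-three chance of at least one false alarm over the decade, compared with slightly under half for biennial screening.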

Facebook’s advice on cholesterol screening is slightly misleading as well. They recommend, as does the American Heart Association, that people over 20 get cholesterol tests every 4-6 years. However, a study in the Annals of Internal Medicine found that women younger than 50 and men younger than 40 years without other risk factors have very low risk of cardiovascular disease, and thus may not benefit from regular cholesterol screening. The USPSTF similarly finds little evidence that men younger than 35 and women younger than 45 at low risk of CVD would benefit from cholesterol tests, so they do not offer a recommendation.

To create this tool, Facebook collaborated with the American Cancer Society, the American College of Cardiology, the American Heart Association, and the U.S. Centers for Disease Control and Prevention—but not the USPSTF, Choosing Wisely, or other professional groups that are concerned with overdiagnosis and overtreatment.

Users should be wary of the new Facebook tool, not only because of the potential for health data misuse, but also because of the potential for physical and financial harm from overscreening.

Are 3D mammograms better than none?
https://lowninstitute.org/are-3d-mammograms-better-than-none/ | October 24, 2019

The hype around 3D mammograms misleads patients about the benefits of cancer screening.

The U.S. Food and Drug Administration (FDA) approved digital breast tomosynthesis in 2011. This new technology is often known as a “3D mammogram,” because it creates a 3-dimensional image of breast tissue by taking x-rays through multiple tissue planes.

Studies have found that 3D mammograms detect more cancers than 2D mammograms and result in fewer false positives, which sounds great. However, there are some significant downsides to 3D mammograms. There have been no studies of whether using 3D mammograms actually improves morbidity, mortality, or quality of life. These scans may detect more cancers, but there isn't evidence that the cancers being detected would have harmed patients, so 3D mammograms may also lead to more overdiagnosis and overtreatment. Even worse, older 3D mammography systems expose women to more radiation than conventional 2D mammography. For all of these reasons, the US Preventive Services Task Force and the American Cancer Society state that there is not enough evidence to recommend 3D mammograms.

But the lack of evidence hasn’t stopped 3D mammograms from becoming incredibly popular in recent years. In one large study of privately insured women who received mammograms, use of 3D mammograms more than tripled in two years, from 12.9% of screening examinations in early 2015 to 43.2% in late 2017.

The rise of 3D likely has more to do with intensive marketing than with evidence of benefit, writes Liz Szabo in a recent Kaiser Health News investigation. According to KHN's analysis of Open Payments data, over the past six years, manufacturers of 3D mammogram equipment have paid doctors and teaching hospitals more than $9.2 million for research, speaking fees, consulting, and meals related to 3D mammograms.

Many clinicians who are spokespeople for 3D mammograms receive hundreds of thousands of dollars from companies that make this technology, yet they insist that this funding does not affect their opinion and that they "can't be bought." Hologic, one of the largest manufacturers, has also funded specialty societies and patient advocate groups, and lobbied policymakers to have insurers cover the cost of 3D mammograms.

The rapid marketing push for 3D mammograms, followed by a corresponding increase in popularity of this technology, is disturbing given the lack of evidence that it saves any lives compared to 2D mammograms. Just as disturbing is the narrative that this marketing continues to push, that early detection of breast cancer is the best way to improve survival. As we’ve previously written, there has been little to no progress in reducing rates of mammography in low-risk women under 50, for which mammograms offer greater harms than benefits. Part of the problem is that most clinicians do not discuss the potential harms of overscreening for breast cancer, and most patients overestimate the benefits of screening as well.

Source: Keating NL, Pace LE. Breast Cancer Screening in 2018: Time for Shared Decision Making. JAMA. 2018;319(17):1814–1815. doi:10.1001/jama.2018.3388

The hype around 3D mammograms, condoned by the FDA and built by 3D equipment manufacturers, only makes the problem worse. When medical device manufacturers pay clinicians and advocacy groups to be cheerleaders for 3D, more patients hear the false message that new technology will find more cancers, and finding more cancers means saving more lives—without hearing anything about the potential harms of overdiagnosis.

Why AI? Questioning the role of artificial intelligence in health care
https://lowninstitute.org/why-ai-questioning-the-role-of-artificial-intelligence-in-health-care/ | May 24, 2019

Health care experts explain the potential downsides to the artificial intelligence "revolution."

Artificial intelligence (AI) in health care is a fast-growing business. In 2017, Americans invested nearly $100 million in health care AI companies, about three times as much as the amount invested in 2011. While we don’t expect robots to replace doctors anytime soon, new methods of data analysis and pattern recognition have the potential to help doctors diagnose diseases, read scans, and make decisions about treatments. Many health care leaders have called the AI boom the start of a “revolution” in health care; others, however, are more skeptical. 

The data problem

One issue is that these algorithms depend on having lots of real-world data, which they use to recognize patterns and make predictions. Although health systems have been using electronic health records (EHRs) for decades, these data usually cannot be shared and aggregated across EHR systems, because the records are not interoperable. Clinical trial data are rarely collected and submitted in a standardized format, making these data difficult to share and move. Collecting data is so difficult that it took Google a year to extract about 1,700 patient cases from medical records to test their algorithm for lung cancer screening. 

Can AI change behavior?

But even with the best data, how much will AI do for doctors and patients? In a recent JAMA Viewpoint, Dr. Ezekiel Emanuel, oncologist and bioethicist at the University of Pennsylvania, and Dr. Robert Wachter, professor of medicine at the University of California — San Francisco, explain why they don’t think AI developments on their own are going to significantly change the practice of medicine or health outcomes.

"A narrow focus on data and analytics will distract the health system from what is needed to achieve health care transformation: meaningful behavior change," write Emanuel and Wachter. Just because research shows that a certain practice is best for patients does not mean that clinicians will do it. As we know from the continuing prevalence of waste, overuse, and underuse, changing clinician behavior is tough. Procedures and tests not recommended by specialty groups, such as episiotomies, are still performed at high rates in some hospitals. It is unlikely that AI decision-making tools will get doctors to change their behavior when years of research and educational campaigns have been unable to do so on a large scale.

AI also does not do much to change patient behavior, which is much more likely to affect health than what happens at the doctor's office. As other clinicians have pointed out, knowing one's risk for disease has not been shown to change patient behavior. Why would a patient decide to quit smoking after learning they have a six-fold increased risk of lung cancer through an AI prediction, if they haven't already been convinced to quit by the 20-100x greater risk of lung cancer from smoking?

Rather than invest in tools to identify patterns and predict disease, AI efforts should focus on “thoughtfully combining the data with behavioral economics and other approaches to support positive behavioral changes,” Emanuel and Wachter write.

AI and health disparities

Another potential downside to AI lies in its power — the ability to recognize patterns and predict outcomes. Racial and ethnic health disparities are embedded in our health system, and in the data we collect, meaning that AI would pick up these patterns and likely reproduce them.

“Because A.I. is trained on real-world data, it risks incorporating, entrenching and perpetuating the economic and social biases that contribute to health disparities in the first place,” writes Dr. Dhruv Khullar in The New York Times. We’ve already seen this happen in AI programs that assess risk in criminal justice sentencing.
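As a concrete illustration of how this happens, here is a minimal, hypothetical simulation in Python. It mimics a well-documented failure mode: when an algorithm uses past healthcare spending as a proxy for healthcare need, a group that historically had less access to care (and therefore lower spending at the same level of illness) gets systematically under-prioritized. Every number and variable name below is invented for illustration, not drawn from any real model or dataset.

```python
import random

random.seed(0)

def simulate_patient(group: str) -> dict:
    """Create one synthetic patient with an underlying illness burden."""
    illness = random.uniform(0, 1)           # true need for care: 0 = healthy, 1 = very sick
    # Historical access gap: group B generates less spending at the same illness level.
    access_factor = 1.0 if group == "A" else 0.7
    cost = illness * access_factor           # what an algorithm trained on claims data "sees"
    return {"group": group, "illness": illness, "cost": cost}

patients = [simulate_patient("A") for _ in range(5000)] + \
           [simulate_patient("B") for _ in range(5000)]

# "Algorithm": flag the 20% of patients with the highest predicted cost for extra care.
threshold = sorted(p["cost"] for p in patients)[int(0.8 * len(patients))]
flagged = [p for p in patients if p["cost"] >= threshold]

for g in ("A", "B"):
    flagged_g = [p for p in flagged if p["group"] == g]
    avg_illness = sum(p["illness"] for p in flagged_g) / len(flagged_g)
    print(f"Group {g}: {len(flagged_g)} flagged, average illness among flagged = {avg_illness:.2f}")
# Group B patients are flagged far less often, and only when they are much sicker:
# the algorithm has turned a historical access gap into "objective" predictions.
```

Nothing in the code mentions race or group membership when ranking patients, yet the output reproduces the disparity baked into the training signal, which is exactly the dynamic Khullar describes.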

“In medicine, you’re taught to stereotype,” said Dr. Damon Tweedy, Associate Professor at Duke Medical School and author of Black Man in a White Coat, at the Atlantic Pulse conference. Doctors are often taught to assume things about patients based on their age, race, and gender. Then these assumptions are written into their medical record and perpetuated in future visits with other doctors.

Doctors like Tweedy are trying to stop this pattern by recognizing their biases in the moment, and by training other doctors to do the same. However, as Khullar points out, AI could make these biases “automated and invisible.” We may “begin to accept the wisdom of machines over the wisdom of our own clinical and moral intuition,” he warns.

The runaway train

Although AI has a long way to go before being a mainstream part of medical practice, some algorithms are already being implemented in direct-to-consumer apps, with little to no regulation. For health apps that seek to diagnose patients, the FDA has declined to regulate those deemed "low risk," writes Michael Millenson, adjunct associate professor of medicine at Northwestern University, in The Conversation.

This is important because about one third of adults seek a diagnosis online, yet the effectiveness of consumer-facing diagnostic apps is highly inconsistent. According to a 2018 review of the evidence on DTC diagnostic digital tools, the evidence base is "sparse in scope, uneven in the information provided and inconclusive with respect to safety and effectiveness, with no studies of clinical risks and benefits involving real-world consumer use." The evidence that is available shows wide variation in apps' functionality, accuracy, safety, and effectiveness. Not exactly a glowing review.

As to why these apps are not being regulated, experts are shaking their heads. “I would love someone to explain to me how, exactly, low-risk is calculated,” said Millenson in an email correspondence. “I guess if you say, ‘This is not medical advice,’ you’re home free?”

Genomic sequencing in primary care: Promises unfulfilled
https://lowninstitute.org/genomic-sequencing-in-primary-care-promises-unfulfilled/ | January 31, 2019

The Chicago Tribune recently reported that the NorthShore University Health System will be conducting a pilot project of genome sequencing in primary care, making it the latest health system (along with Geisinger and Sanford Health) to offer broad genetic testing as a part of routine primary care. The Tribune touts NorthShore’s initiative as potentially “the future of primary care — a way to keep patients healthier and hold down costs by catching treatable diseases earlier.” Another high-ranking official from a neighboring medical center was quoted in the same article, saying “There’s a lot to be gained for [genome sequencing], and I think there’s very little to be lost.”

Will routine genomic sequencing be the innovation that revolutionizes medicine? Or is it more hype than substance? A timely perspective piece in the Journal of Clinical Investigation from Dr. Michael Joyner at the Mayo Clinic and Dr. Nigel Paneth at Michigan State University explains why genomic sequencing is unlikely to be the “future of medicine” that NorthShore and other health systems promise.

The idea of fully understanding our bodies and preventing future disease by looking at every gene in our DNA is tempting. But Joyner and Paneth pose an important question — does genomic sequencing predict more about our health than family history or social and physical characteristics? They write that despite the promise of the accurate predictive power of DNA, 

“Extensive analyses of thousands of potential gene-health outcomes often fail to match, let alone exceed, the predictive power of a few simply acquired and readily measured characteristics such as family history, neighborhood, socioeconomic circumstances, or even measurements made with nothing more than a tape measure and a bathroom scale.”

This is because few health conditions are connected to just one gene variant, making it difficult to assess disease risk using genetic information alone. Common health conditions such as high blood pressure, diabetes, cardiovascular disease, and many cancers, are each "linked to many hundreds of gene variants that individually and even collectively explain only a small fraction of the variance in disease frequency," the authors write.

In one example from the NorthShore health system, a 29-year-old patient with a family history of breast cancer finds out through genomic sequencing that she has an elevated risk of breast cancer. Her doctor then schedules her for regular mammograms and MRIs. The patient calls the genetic test "empowering" and "amazing." However, it's unclear whether this outcome is better for the patient, since it has not been shown that regular screening mammograms, even for women at high genetic risk of breast cancer, have greater benefit than harm before age 35. It's possible the doctor would not have recommended starting screening so early, but because of the results of the genetic test, they felt pushed to "do something." In a randomized controlled trial comparing genomic sequencing to family history alone, researchers found that routine genomic sequencing in primary care "may prompt additional clinical actions of unclear value."

Another myth that Joyner and Paneth discuss is the idea that genetic testing will change patients' behavior. However, the idea that knowing one's risk of future disease will motivate someone to change their lifestyle isn't actually borne out in studies of people who get tested. As the authors point out, why would a patient decide to quit smoking after learning they have a six-fold increased risk of lung cancer, if they haven't already been convinced to quit by the 20-100x greater risk of lung cancer from smoking?

Joyner and Paneth are troubled by the increasing investments in genomic testing when, after twenty years, the Human Genome Project has had no positive effect on population health or life expectancy. Should we be funneling money into routine genomic sequencing and other "futuristic" health technology for primary care patients, when so many Americans don't have access to primary care at all? As the authors write, we have to make sure that our "obsession" with DNA does not preclude research on the many pressing health problems that we need to solve. Otherwise, we will end up with more unfulfilled promises.

Another futuristic health care startup ignores overuse
https://lowninstitute.org/another-futuristic-health-care-startup-ignores-overuse/ | November 28, 2018

Silicon Valley has the fix for primary care and - surprise! - it's more technology.

What’s the biggest problem in health care today? The shortage of primary care clinicians? Rampant overuse and waste? The lack of investment in public health and social determinants of health? 

No, the real problem in health care is that there isn't enough technology involved, say the founders of Forward, a health care startup that just opened its newest clinics on the East Coast. In a recent video on Cheddar (a business, tech, and culture news site), Forward CEO Adrian Aoun explains why their model represents the "future of healthcare." In the video, Aoun shows off the "tons and tons of technology" at Forward's newest clinic, including heart scanners, body scanners, skin scanners, and DNA sequencing – all of which "give us a better insight to what's going on with your body," he says. The video sparked an interesting Twitter conversation between investors excited about the expansion of Forward and doctors who were more wary.

The hope for high-tech scans and DNA tests for primary care is that we will be able to get to know our bodies better and prevent illnesses before they start. However, as we wrote in a previous blog about Lab 100, these tests are as likely to expose people to overtreatment as they are to prevent illness. Here’s why:

Our tests are still not that accurate

If we all get regular heart scans, skin scans, and tests of every other body part, there's no way we would miss any abnormalities, right? Unfortunately, for low-risk people without symptoms, tests like electrocardiograms, mammograms, cardiac stress tests, and many others are more likely to cause harm than good. If all of our tests were 100% accurate, they would be great for determining whether or not people have certain conditions, but they are not. Even with 90% accuracy, these tests generate large numbers of false positives and expose many people to overtreatment.

For example, screening low-risk adults for atrial fibrillation would require 10,000 people screened to prevent one stroke, but roughly 800 of those people would get a false positive result (if Afib is present in about 2% of patients, roughly 9,800 of the 10,000 screened do not have Afib, so a false-positive rate of about 8% among them produces about 800 false alarms). And there is no clinical evidence that shows that treating asymptomatic Afib with anticoagulants improves outcomes for these patients.
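Here is that arithmetic spelled out as a short Python sketch. The prevalence, false-positive rate, and sensitivity are the illustrative figures used above (sensitivity is assumed perfect for simplicity), not parameters of any particular screening program.

```python
# Back-of-the-envelope arithmetic for population screening of a low-prevalence
# condition, using the illustrative Afib numbers from the text.
screened = 10_000
prevalence = 0.02           # assumed share of screened adults who truly have Afib
false_positive_rate = 0.08  # assumed share of healthy people wrongly flagged
sensitivity = 1.0           # optimistic: assume every true case is caught

true_cases = screened * prevalence                # 200 people with Afib
healthy = screened - true_cases                   # 9,800 people without it
false_positives = healthy * false_positive_rate   # ~800 false alarms
true_positives = true_cases * sensitivity

ppv = true_positives / (true_positives + false_positives)
print(f"True positives:  {true_positives:.0f}")
print(f"False positives: {false_positives:.0f}")
print(f"Chance a positive result is real: {ppv:.0%}")  # only ~20%, even with a perfect detector
```

Under these generous assumptions, about four out of five positive screens are false alarms, which is why raw test accuracy is a poor argument for screening everyone.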

Both doctors and patients overestimate the accuracy of tests and screenings, which often leads to false positives, overmedication, additional testing, and other unnecessary treatments, not to mention stress and additional financial costs. (For more on how many of the tests doctors do "just to be safe" end up creating more problems for patients, read Dr. Dan Morgan's brilliant Washington Post op-ed about the harm of unnecessary tests.)

 

DNA sequencing has no proven benefit

The idea of fully understanding our bodies by looking at every gene in our DNA is very tempting. Genetic testing can tell us our risk factors for certain diseases and atypical responses to certain medications, which sounds great. But in practice, genetic testing in primary care has not been shown to benefit patients. In a randomized controlled trial published in the Annals of Internal Medicine last year, whole genome sequencing, compared with family history alone, did not produce any new findings or changes in medication management. And the physicians participating in this trial received several hours of training in genomic medicine, so they were more knowledgeable than most doctors.

Why wouldn't genomic sequencing help patients? Because knowing your risk of a disease is unlikely to change your current treatment or behavior. For example, if a patient shows an elevated risk of heart disease in their DNA, a doctor would likely tell them to eat a healthy diet and get regular exercise, advice that could be given to every patient. Knowing your risk for certain diseases might even increase stress, especially for diseases like Alzheimer's for which there is no effective treatment yet. And though some say that knowing one's risk of future disease is a powerful motivator for changing one's lifestyle, this isn't actually borne out in studies of people who get tested.

Listening is critical to diagnosis

In the Cheddar video, Aoun belittles the conventional history-taking process, saying, “Today you go to a doctor, you sit there and tell them stuff and they almost ‘divine’ the answer…like a modern fortune teller.” What Aoun misses is that taking a detailed history and letting the patient explain their concerns is one of the most important things doctors can do to understand what’s going on with the patient. 

As Dr. Stephen Martin, associate professor of Family Medicine and Community Health at the University of Massachusetts Medical School, explained in a previous interview about diagnosis:

“Listening to patients is where we learn a great deal about the arc of the symptoms they’re having, the context and severity of the symptoms, what patients are concerned about and why. You can notice and examine symptoms like rashes, ask detailed questions about where patients traveled recently, ask them about other potential stressors in their lives. A CT scan doesn’t have the answers to these questions.”

Also, a patient’s health depends on much more than their physical state, according to Dr. Joachim Sturmberg, associate professor of General Practice at Newcastle University in Australia. Physicians often have to understand a patient’s mental, emotional, and social experiences to put their physical symptoms in context.

What Forward gets right

Aoun isn't wrong that health care needs a drastic overhaul. In a recent profile in Business Insider, Aoun argues that it's not just tech but preventive care that's missing from the health care system. It's true that our health care system is much more focused on treating ailments than preventing what causes them. However, the prevention part goes beyond "catching diseases early"; it's about giving people access to the basic building blocks of health — fresh food, stable housing and income, and freedom from toxic stress. Our country's poor investment in social determinants of health cannot be solved with more scanners.


That's not to say that the Forward model is all bad. People are increasingly craving a simpler way to engage with the health care system and more time with their clinicians. The membership model of Forward and other "concierge" clinics provides easy access to clinicians and plenty of time to talk with them, which is very valuable. However, this can also be accomplished more affordably, as advocates of Direct Primary Care have shown. The leaders of Forward likely assume that they can charge more for giving patients more shiny technology, but it's doubtful this extra "benefit" is even a benefit at all.

If the leaders of Forward want to change the future of medicine for the better, they could establish neighborhood primary care clinics, invest in social determinants of health, or make a better electronic health record (please!). More of the same tech in primary care is not the answer.

One step into the future, and two steps back
https://lowninstitute.org/one-step-into-the-future-and-two-steps-back/ | August 7, 2018

Hospitals are developing the "check-up of the future" - but is it good for patients?

By Judith Garber, MPP

About a week ago, I hurt my back during a team sports practice (ultimate frisbee, if you must know). I’ve had several sports injuries but never back problems, so this new painful injury filled me with anxiety. Was it just a pulled muscle? Did I slip a disc? Was this the end of my frisbee career? Fortunately for me, my worrying was misplaced; my back felt better with just a few days of rest.

When I get a familiar injury or the common cold, it’s never fun, but at least I feel as though I know what’s going on with my body and how to take care of it. When it’s a completely new ailment, however, I feel lost. I want more than anything to know what’s wrong, how bad it is, and how I can make it better. 

I'm not the only one who thinks like this. The desire to "know our bodies," to understand everything about our health and what could go wrong, is common. And, as new health care initiatives show, this desire is also lucrative. Genetic tests like 23andMe, at-home tests for food sensitivity and vitamin deficiency (as seen on "Shark Tank"!), and other direct-to-consumer testing products are hugely popular.

And now, hospitals are jumping on the bandwagon. This fall, Mount Sinai hospital in New York City is launching a program called “Lab 100,” a “hybrid clinic and research lab” that gives patients “the most comprehensive health assessment currently available” while making this longitudinal health data available for research purposes.

CNBC reporter Christina Farr visited the lab and logged her experience, describing high-tech tests that measured everything from body composition and balance to cognition. From Farr’s visit, the “clinic visit of the future” sounds as much like a carnival as a medical visit – “Test your strength!” “Virtual reality headset!” “How fast can you put the pegs in the holes?”

Having years of sophisticated health data seems like a valuable research tool in the long run. But in the short run, will this program be helpful for patients? Or could there be potential for these tests to trigger additional tests and procedures that aren’t needed?

Dr. Alan Roth, chairman of family medicine at Jamaica Hospital, thinks the latter is more likely. “Lab 100 looks like sophisticated, useless information that will do nothing to improve patient care,” he said, “We should be focusing on making primary care accessible to all, not providing rich people with unnecessary testing and treatment.”

Right Care Alliance member Lila O’Connell agrees. “How about this model for the annual physical? Arrive at the doctor’s office, both doctor and patient put on walking shoes and take a 30-minute walk, and discuss the patient’s lifestyle and health concerns with input from the doctor.” O’Connell points out that although the annual physical has become ritualized in American health care, there is little evidence to show that these check-ups actually improve health. 

Offering expensive health care programs for the wealthy is nothing new — see “executive health programs” and “concierge medicine,” for example. However, researchers warn that competition to bring wealthy patients to hospitals and clinics can result in them receiving unnecessary care that can lead to harmful overtreatment, such as cardiac stress tests, skin and prostate cancer screening, and screening for carotid artery stenosis.

The leaders of Lab 100 acknowledge the possibility of overtreatment from testing. Farr writes that when faced with incidental findings from tests, doctors at Lab 100 will "closely integrate with the traditional medical system so that they can deliver the right follow-up care." That raises the question: what is the "right follow-up care" for incidental findings? Once you find something that looks suspicious, it's difficult not to conduct further tests and procedures, even for doctors who understand the potential harms of overtreatment. And it's likely that patients who seek treatment at Lab 100 are people who want to know as much as they can about their health – which means more follow-up.

Lab 100 does offer an undeniably valuable service — time with the doctor. Each visit takes 90 minutes, much of which is spent consulting with the physician about ways patients can improve their lifestyle and well-being. The doctor is also unencumbered by the electronic medical record, because the patient data is automatically recorded. However, patients shouldn’t have to spend an inordinate amount of money on clinics like Lab 100 or concierge medicine to be able to have a strong and trusting relationship with their physician.

Lab 100 plays into a worrying cultural trend that we all fall prey to, by promising to end our anxiety about not knowing. Farr reports that she left the clinic visit feeling “reassured” about her good health. Isn’t that what we all want? But our discomfort with uncertainty is leading to more harm than we realize. As difficult as it is, we have to accept that we can’t know everything about our bodies, no matter how many high-tech check-ups we get.
