The Algorithmic Triage: When AI Diagnoses Humanity's Most Vulnerable
The Emerging Landscape of AI in Homeless Healthcare
In the shadow of California’s worsening homelessness crisis, a Los Angeles-based health technology company called Akido Labs is deploying artificial intelligence to diagnose and treat homeless patients through its Scope AI program. This initiative represents a bold attempt to address the stark healthcare disparities facing California’s homeless population, where nearly a quarter report being unable to access needed medical care and only 39% have a primary care provider. The technology allows non-medically trained outreach workers to conduct initial patient interviews using AI-generated questions, with the system then suggesting diagnoses, medical tests, and medications that are ultimately reviewed and approved by remote human physicians.
The program operates under Medi-Cal’s CalAIM expansion into social services and has already seen over 5,000 patients in Los Angeles County homeless encampments since 2023. According to Akido, Scope AI achieves 99% accuracy within its top three diagnostic suggestions. The company claims the approach raises each physician’s caseload capacity from 200 to 350 patients by reducing time spent on routine questions and paperwork. Partnerships with organizations like Five Keys, Reimagine Freedom, and the Young Women’s Freedom Center aim to expand this technology to serve women and girls who have been incarcerated, along with other marginalized groups.
The Human Context Behind the Technology
The healthcare crisis facing homeless Californians cannot be overstated. Research from the UCSF Benioff Homelessness and Housing Initiative reveals nearly half of homeless individuals report poor or fair health—a rate four times higher than the general U.S. population. These individuals die earlier and suffer from conditions exacerbated by their living situations. As Steve Good of Five Keys notes, many homeless individuals “haven’t seen doctors for years” or “haven’t seen a dentist ever.”
The fundamental problem Scope AI attempts to address is the critical shortage of medical professionals willing to serve homeless populations in encampments and shelters. This shortage creates an access gap that leaves vulnerable people without basic medical care. The promise of AI-driven diagnostics is that it can scale limited medical resources to reach more patients more quickly, potentially saving lives through earlier intervention and treatment.
The Ethical Abyss: Efficiency Versus Humanity
While the intention to expand healthcare access is commendable, the deployment of AI diagnostics for homeless populations raises profound ethical questions that strike at the very heart of our values as a society committed to human dignity and compassionate care.
The most concerning aspect of this approach is what it signifies about our collective response to human suffering. Are we solving homelessness or merely making its consequences more manageable through technological triage? There’s something deeply troubling about addressing a social failure—our inability to provide housing and adequate healthcare—with algorithms rather than comprehensive solutions. This technological intervention risks normalizing homelessness as a permanent feature of our society that requires automated management rather than elimination.
The Limitations of Algorithmic Compassion
Brett Feldman, director of USC Street Medicine, raises crucial concerns about whether AI can adequately account for the complex realities of homeless patients’ lives. His example of treating scabies without shower access illustrates how medical decisions for homeless patients often require understanding environmental constraints that algorithms might miss. The prescription that works for a housed patient might be completely inappropriate for someone living on the streets.
This points to a fundamental limitation of AI in healthcare: algorithms operate on patterns and data, but medical care for vulnerable populations requires contextual understanding, compassion, and sometimes creative problem-solving that transcends standardized protocols. The human connection between provider and patient—the trust built through personal interaction—is therapeutic in itself, particularly for populations that have experienced trauma and marginalization.
The Equity Implications of Technological Solutions
Stella Tran’s warning about AI benefits flowing primarily to wealthy communities if social service providers are excluded from development is particularly relevant. There’s a real danger that AI healthcare could become another area where technological progress widens rather than narrows inequality. Without deliberate inclusion of vulnerable populations in both development and implementation, we risk creating a two-tier healthcare system: human doctors for the wealthy, algorithms for the poor.
The potential for algorithmic bias is especially concerning given a 2024 study finding that AI was significantly more likely to misdiagnose breast cancer in Black women than in white women. If AI systems trained primarily on data from housed populations are deployed to diagnose homeless patients—who often have distinct health profiles shaped by their living conditions—the risk of misdiagnosis could be substantial.
Privacy and Consent in Vulnerable Contexts
Data privacy concerns take on added significance when dealing with homeless populations, who may have limited understanding of how their health information is being used or stored. The power imbalance between outreach workers offering potentially life-saving care and individuals desperate for medical attention creates ethical challenges around truly informed consent. We must ask whether someone facing immediate health crises can meaningfully consent to experimental AI-driven care.
The Human Cost of Technological Missteps
As Feldman notes, medical errors have outsized consequences for homeless patients who lack easy follow-up access. If an AI-generated prescription causes adverse effects or doesn’t account for a patient’s living situation, the results could be catastrophic. Unlike housed patients who can call their doctor or visit an emergency room, homeless individuals may have no recourse when things go wrong.
This vulnerability creates an ethical imperative for extraordinary caution when experimenting with AI on homeless populations. The desperation for healthcare access does not justify lowering safety standards or rushing unproven technologies into practice.
Toward Ethical Implementation: Guardrails and Governance
As researcher Angel Hsing-Chi Hwang cautions, “we don’t have perfect solutions to a lot of these challenges yet”—and that should give us pause. Before scaling AI diagnostics for homeless populations, we need robust frameworks ensuring:
- Transparency: Patients must understand when AI is involved in their care and how it works
- Human oversight: Every AI recommendation must be reviewed by physicians familiar with homeless healthcare
- Bias testing: Algorithms must be rigorously tested across diverse homeless populations
- Safety protocols: Clear procedures for addressing AI errors or adverse outcomes
- Community involvement: Homeless individuals and their advocates should help design and evaluate these systems
The Larger Question: Technology or Transformation?
Ultimately, the most important question this technology raises is whether we’re using innovation to address symptoms rather than causes. AI diagnostics might make homelessness slightly less deadly, but they do nothing to address the underlying housing crisis. There’s a danger that technological solutions create moral comfort—the sense that we’re “doing something”—while avoiding the harder work of structural change.
As a society committed to human dignity, we must ensure that technological efficiency doesn’t become a substitute for genuine compassion and comprehensive solutions. AI might have a role in expanding healthcare access, but it should complement rather than replace the human connection and systemic changes that vulnerable populations truly need.
The deployment of AI in homeless healthcare represents a crossroads: will we use technology to bridge gaps in our social safety net, or will we allow it to become a permanent patch on a broken system? The answer will say much about our commitment to ensuring that every human being receives care that acknowledges their inherent dignity, not just their symptoms.