Advances in artificial intelligence now enable the creation of digital replicas of people who have passed away, allowing users to converse with a virtual version of a loved one. While this technology can offer comfort, it also opens the door to unhealthy forms of mourning and raises profound questions about how we understand death.
Fragile boundaries
Posthumous digital avatars are assembled from the online footprints left by an individual—social‑media updates, photos, audio clips, and video footage. By applying AI and machine‑learning techniques to this material, these systems can mimic the subject’s voice and mannerisms and even recall personal memories. The resulting encounter can blur the line between genuine interaction and simulated illusion, leaving users uncertain whether they are engaging with the real person or a convincing copy. As virtual existence becomes commonplace, the emergence of these after‑life avatars introduces a host of ethical dilemmas.
Autonomy, data protection and privacy
Key concerns revolve around respecting personal autonomy and safeguarding private data, especially as these avatars reshape social ties and blur the natural boundary between life and death. The creation of a digital likeness after death affects three groups of people:
- Individuals whose online traces are publicly accessible—often through social platforms—whether they anticipated their death or not. Their data may be harvested voluntarily or without their consent for use by relatives, employers, or third‑party services.
- Family members, friends, or organizations that inherit these digital remnants. They must decide whether to author a virtual memorial or reject the notion of a “digital resurrection,” questioning who truly controls the posthumous representation.
- Consumers of avatar‑creation services. People may turn to these tools out of curiosity or as a coping mechanism for intense grief, but they risk developing a dependence on a market that can change or disappear without warning.
All three parties face the possibility of losing authority over how a deceased individual’s identity is perpetuated across countless virtual environments.
Public and pathological grief
When a digital avatar is shared widely, mourning shifts from a private experience to a public spectacle. This duality reshapes how society relates to loss and redefines the role of the departed in everyday life.
The case of Nayeon, a young girl who died in 2016 and was later featured in the documentary Meeting You, illustrates the discomfort that can arise from such exposure. Her grieving mother consented to a virtual recreation of her daughter, allowing a broadcast reunion in virtual reality that many viewers found unsettling—an example of grief’s hyper‑visibility in the digital age.
Although a therapist‑supervised avatar session can serve as a therapeutic tool, only a small fraction of current uses occur under professional guidance. Consequently, many users explore these simulations alone, leaving them vulnerable to dependency or to avoidance of the natural grieving process. This can lead to pathological grief—an extended, intense state of mourning that persists beyond a year, impeding daily functioning and the search for meaning.
Overexposure and silent voyeurism
Digital platforms that archive a deceased person’s content—such as social‑media pages—often attract passive observers who browse without interacting, creating a form of silent voyeurism. While not always malicious, this habit contributes to a culture of continual surveillance, where personal data become consumable curiosities for the broader public.
The ethical challenge lies in the potential misuse of this data, ranging from simple curiosity to more sinister exploitation. Persistent visibility of a person’s digital remnants, especially through interactive avatars, sustains the illusion of an ongoing presence after death. This reshapes our perception of mortality, suggesting a form of artificial eternity that may not align with societal values.
Recognizing the risks and moral questions surrounding artificial immortality is essential for shaping responsible usage guidelines. Ongoing reflection is needed to protect human dignity—both for the living and the digitally represented dead—and to guard against abuse as technology continues to evolve.
This material is reproduced from The Conversation under a Creative Commons licence. You can view the original article for more details.