██████╗ █████╗ ████████╗ ██████╗ ██████╗ ██╗ ██╗ █████╗ ██╗ ███████╗
██╔════╝██╔══██╗╚══██╔══╝ ██╔══██╗██╔═══██╗╚██╗ ██╔╝██╔══██╗██║ ██╔════╝
██║ ███████║ ██║ ██████╔╝██║ ██║ ╚████╔╝ ███████║██║ █████╗
██║ ██╔══██║ ██║ ██╔══██╗██║ ██║ ╚██╔╝ ██╔══██║██║ ██╔══╝
╚██████╗██║ ██║ ██║ ██║ ██║╚██████╔╝ ██║ ██║ ██║███████╗███████╗
╚═════╝╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚═════╝ ╚═╝ ╚═╝ ╚═╝╚══════╝╚══════╝
A playful simulation exploring the question: Can AI care for others as well as humans can?
Background
This project is a digital reimagining of Cat Royale, an art installation by Blast Theory where an AI-controlled robotic arm cared for real cats in a specially designed habitat. The original work explored questions about automation, care, and what it means for machines to look after living beings.
The result is a playful, low-stakes simulation that explores the same questions without using real animals: autonomous virtual cats live in digital habitats maintained either by AI algorithms or by human players.
Purpose
When automated systems are responsible for caring for others, how does their care compare to a human’s?
This 3-minute interactive experience lets you observe and participate in this comparison:
- Watch an AI maintain a cat habitat based on programmed priorities
- Try maintaining an identical habitat yourself
- See which approach keeps the cats happier
Design Rationale
The design of this project draws on three strands of UX research on AI systems, uncertainty, and care.
AI as Black Box
Drawing on Dove and Fayard’s work on machine learning and metaphor, the AI caretaker is designed as a black box: you see what it does (status messages, actions taken) but not why or how it decides. While the implementation uses rule-based logic for practical reasons, it is meant to represent the opacity of the real AI systems we increasingly encounter. Players can observe patterns in the AI’s behavior, but the underlying decision-making remains somewhat mysterious, just like many automated systems in our lives.
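For concreteness, here is a minimal sketch of how that black-box framing could work: the player-facing habitat only ever reads the caretaker’s status log, while the rules that produce each action sit behind a private method. The names (AICaretaker, status_log, the habitat fields) are illustrative assumptions, not the project’s actual code.

```python
import random

class AICaretaker:
    """Rule-based caretaker whose reasoning is deliberately hidden."""

    def __init__(self):
        self.status_log = []  # the only part the player gets to see

    def tick(self, habitat):
        action = self._decide(habitat)  # hidden decision logic
        if action:
            self.status_log.append(f"AI: {action}")
        return action

    def _decide(self, habitat):
        # Plain if/else rules in practice, but never shown to the player,
        # who can only infer patterns from status_log.
        if habitat["hunger"] < 20:
            return "dispensing food"
        if habitat["litter"] > 80:
            return "cleaning litter tray"
        if random.random() < 0.1:  # occasional unprompted play
            return "tossing a toy"
        return None
```

The design choice here is that the opacity lives in the interface, not the algorithm: the logic can be trivially simple and the black-box feel is unchanged.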
Handling Uncertainty
Bowler et al.’s research on uncertainty in digital systems shows how rigid scheduling can fail when dealing with unpredictable needs. The cats in this simulation have fluctuating, uncertain needs, each with unique personality traits (one gets hungry faster, another loves toys more). The AI follows fixed thresholds (feed when hunger falls below 20%, clean when the litter tray passes 80% full), while human players can respond more flexibly and perhaps produce a better outcome. The comparison puts two approaches to managing uncertainty side by side.
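As a sketch of that contrast, each cat could carry its own decay rates while the AI applies the same two thresholds to everyone; the Cat class, field names, and rates below are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Cat:
    name: str
    hunger: float = 100.0      # 100 = full, 0 = starving
    hunger_rate: float = 1.0   # personality: how fast hunger drops per tick
    toy_affinity: float = 0.5  # personality: how much play boosts happiness

def ai_tick(cats, litter_level):
    """Advance one step, then apply the AI's fixed thresholds."""
    actions = []
    for cat in cats:
        cat.hunger = max(0.0, cat.hunger - cat.hunger_rate)
        if cat.hunger < 20:            # fixed rule: feed below 20%
            cat.hunger = 100.0
            actions.append(f"fed {cat.name}")
    litter_level += 2                  # litter fills a little each tick
    if litter_level > 80:              # fixed rule: clean above 80%
        litter_level = 0
        actions.append("cleaned litter tray")
    return litter_level, actions

cats = [Cat("Pixel", hunger_rate=1.8),   # gets hungry faster
        Cat("Mochi", toy_affinity=0.9)]  # loves toys more
```

Because Pixel’s hunger falls nearly twice as fast, a one-size-fits-all threshold serves the two cats unevenly; a human player is free to notice this and feed Pixel early.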
Experience and Reflection
Following Wright and McCarthy’s experience-centered design approach, the project focuses on felt experience rather than just task completion. Players experience the responsibility of caregiving firsthand, then compare their results with the AI’s performance. The final scores (happiness levels, actions taken, time in critical states) create a moment for reflection. Maybe the AI took fewer actions but maintained steadier happiness, or maybe human players were more responsive but less consistent. The point isn’t to declare a winner, but to make you think about what “good care” actually means.
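To illustrate, the end-of-round comparison might be computed along these lines; the summarise function and the sample numbers are invented purely to show the shape of the summary, not real results.

```python
def summarise(happiness_samples, actions_taken, critical_ticks, total_ticks):
    """Condense one caretaking round into the three reflection metrics."""
    return {
        "avg_happiness": sum(happiness_samples) / len(happiness_samples),
        "actions_taken": actions_taken,
        "pct_time_critical": 100 * critical_ticks / total_ticks,
    }

# Hypothetical rounds: steady AI vs. responsive-but-uneven human.
ai_round    = summarise([72, 75, 74, 73], actions_taken=6,  critical_ticks=2, total_ticks=180)
human_round = summarise([60, 90, 55, 88], actions_taken=14, critical_ticks=9, total_ticks=180)
```

Laying the two summaries side by side at the end of the round is what turns the score screen into a prompt for reflection rather than a verdict.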
Technical Details
The source code can be accessed here.
References
Dove, G. and Fayard, A.L., 2020, April. Monsters, metaphors, and machine learning. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-17).
Bowler, R.D., Bach, B. and Pschetz, L., 2022, April. Exploring uncertainty in digital scheduling, and the wider implications of unrepresented temporalities in HCI. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (pp. 1-12).
Wright, P. and McCarthy, J., 2010. Experience-centered design: designers, users, and communities in dialogue (Vol. 9). Morgan & Claypool Publishers.