TinyTroupe’s Hyperreal Economies: A Look at Simulated Agency

[Image: a DALL-E generated illustration of markets, people, and hyperreality.]

Somewhere in the early 2000s, I heard an economics professor explain how virtual worlds could serve as a laboratory for his discipline. Imagine two game environments with financial markets, identical in every respect except that one allows insider trading and the other does not. Such a setup could help us understand the impact of financial regulation. These days, we don’t even need avatars for such a laboratory: we can create artificial agents and study their behavior and the consequences of their actions.

TinyTroupe is an experimental Python library developed by researchers at Microsoft that allows the simulation of people with specific personalities, interests, and goals. As the project’s documentation puts it: “These artificial agents—called TinyPersons—can listen to us and one another, reply, and go about their lives in simulated TinyWorld environments. This is achieved by leveraging the power of Large Language Models (LLMs), notably GPT-4, to generate realistic simulated behavior. This allows us to investigate a wide range of convincing interactions and consumer types, with highly customizable personas, under conditions of our choosing. The focus is thus on understanding human behavior, rather than directly supporting it (as AI assistants do). This results in, among other things, specialized mechanisms that make sense only in a simulated setting. Furthermore, unlike other game-like LLM-based simulation approaches, TinyTroupe aims to illuminate productivity and business scenarios, thereby contributing to more successful projects and products.”
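To see what this looks like in practice, here is a minimal sketch of the economics laboratory from the opening paragraph, built from the usage patterns in TinyTroupe’s README. The personas and the broadcast prompt are my own invention, the library is experimental, and method names may differ between versions; an OpenAI or Azure OpenAI key also has to be configured as the project requires.

```python
# A minimal sketch following the usage patterns in TinyTroupe's README.
# Assumptions: the library is installed from the GitHub repo and an
# OpenAI/Azure OpenAI key is configured as TinyTroupe expects; the
# personas below are invented for illustration.
from tinytroupe.agent import TinyPerson
from tinytroupe.environment import TinyWorld

# Two simulated market participants with minimal personas.
trader = TinyPerson("Ana")
trader.define("age", 34)
trader.define("occupation", "Equity trader at a mid-sized fund")

regulator = TinyPerson("Ben")
regulator.define("age", 51)
regulator.define("occupation", "Financial markets regulator")

# A shared environment in which the agents can perceive each other.
market = TinyWorld("Trading Floor", [trader, regulator])
market.make_everyone_accessible()

# Seed the scenario and let the simulation unfold for a few steps.
market.broadcast("A rule banning insider trading takes effect today. "
                 "Discuss how it will change behavior on this floor.")
market.run(3)
```

Running the two variants of the thought experiment would then amount to little more than changing the broadcast text and comparing the resulting conversations.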

Microsoft sees many applications for this in fields such as advertising, software testing, the creation of synthetic data, product and project management, and brainstorming.

Philosophy

What interests me are the ethical and ontological aspects of these developments. A source of inspiration is the paper On the Morality of Artificial Agents by Luciano Floridi and J.W. Sanders. “We conclude that there is substantial and important scope, particularly in Computer Ethics, for the concept of moral agent not necessarily exhibiting free will, mental states or responsibility”, as we read in the abstract. The authors characterize agency in terms of interactivity, autonomy, and adaptability. The question remains how autonomous artificial agents really are in the current state of the technology. Then again, we may also have doubts about the ‘real’ autonomy of humans in their decision-making. As I understand it, the paper argues that artificial agents, studied at a specific level of abstraction, can be moral agents without being morally responsible in the same way humans are: they can be ‘accountable’ for ethically good and bad outcomes without thereby being ‘responsible’ for them. This perspective allows for a less anthropocentric approach to moral considerations in the digital age. It also avoids discussions about ‘internal states’ of artificial agents, such as emotions or consciousness.

Ultimately, humans remain morally responsible for the development, deployment, and consequences of these systems, but complex and creative systems such as AI agents lead us to consider a ‘distributed morality’. Such systems make our ontology, understood as the concepts and categories we apply to what ‘is’, more complex.

Another author highly relevant to analyzing the rise of artificial agents, simulations, and digital twins is Jean Baudrillard.

In Simulacra and Simulation (1981), this French philosopher examines the relationship between reality, symbols, and society. Baudrillard describes how modern society has passed through various stages of representation, eventually reaching a state he calls “hyperreality,” where simulations and models become more real or influential than reality itself. At the very least, one could say that the line between digital reality and what people typically call “reality” has become increasingly blurred.

In his book Reality+ (2022), the Australian philosopher David Chalmers explains how virtual objects are real. “When a virtual hockey stick hits a virtual ball, there is an interaction between the data structure of the stick and that of the ball. The virtual objects exist independently of what we think.” Relationships between humans in virtual environments are also real, as younger generations have known for a long time. From this perspective, it’s not so much that the line between the “real” and the “simulated” is blurred, but rather that we need a different understanding of what “real” means.
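Chalmers’s example can be made almost literal in code. The toy sketch below (plain Python, nothing to do with TinyTroupe, with all names invented for illustration) shows two data structures whose collision changes their state; that change occurs whether or not anyone is watching, which is the sense in which the virtual interaction is real.

```python
# A toy illustration of Chalmers's point: virtual objects are data
# structures, and their "collision" is a real interaction between them.
# All names here are invented for illustration.
from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    x: float         # position along a single axis
    velocity: float  # units per simulation step

def collide(a: VirtualObject, b: VirtualObject) -> None:
    """Crudely transfer momentum from a to b when they touch."""
    if abs(a.x - b.x) < 0.1:      # the objects occupy (nearly) the same spot
        b.velocity += a.velocity  # the stick's state alters the ball's state
        a.velocity = 0.0

stick = VirtualObject("hockey stick", x=0.00, velocity=2.0)
ball = VirtualObject("ball", x=0.05, velocity=0.0)

collide(stick, ball)
print(ball)  # VirtualObject(name='ball', x=0.05, velocity=2.0)
```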
