Introduction: The Ghost in the Object
We have all encountered it: the smart thermostat that learns your schedule but still leaves you cold; the autonomous vacuum that navigates around a toy but gets stuck on a rug; the fitness tracker that nudges you to move when you are already walking. These objects are not merely passive tools; they act, decide, and influence. But where does their agency originate? Is it in the code, the sensors, the user, or the context? This is the "material ghost"—the elusive locus of intent and action in post-digital objects. For practitioners designing such systems, understanding agency is not academic; it shapes how we allocate responsibility, anticipate failures, and create meaningful interactions. This guide adopts a critical, practice-oriented lens, drawing on frameworks from science and technology studies (STS) and interaction design, but grounding them in concrete scenarios. We will avoid the trap of treating objects as either fully autonomous or fully determined, instead exploring the messy middle where agency emerges from networks of humans, materials, and code.
Why Agency Matters Beyond Theory
In a typical smart building project, a team might install occupancy sensors to control lighting and HVAC. The sensors detect presence, and the system adjusts accordingly. But what happens when a sensor fails, or a user covers it for privacy? The object's agency—its capacity to act—shifts between the hardware, the algorithm, the facility manager, and the occupant. If the system overrides manual controls, whose intent prevails? These are not just design questions; they have ethical and operational implications. Understanding agency helps teams design more resilient systems that respect user autonomy while leveraging automation.
The Post-Digital Condition
Post-digital does not mean "after digital"; it describes a condition in which the digital and the physical are so entangled that separating them is futile. A smart speaker is not merely a plastic enclosure with a microphone; it is a node in a cloud service, a data collection point, a voice interface, and a household object all at once. Its agency is distributed across these layers. This guide maps that distribution, offering a vocabulary and methodology for tracing agency in your own work.
Deconstructing Agency: A Framework for Analysis
Agency in post-digital objects is often misunderstood as a property of the object itself. In reality, it is relational and emergent. To map it, we need a framework that accounts for material, symbolic, and social dimensions. Drawing on Actor-Network Theory (ANT) and distributed cognition, we propose a tripartite model: agency as capacity, agency as delegation, and agency as performance. Capacity refers to what an object can do (its technical affordances); delegation is how human intent is inscribed into the object (through design, code, or policies); performance is how agency manifests in use (through interaction and adaptation). This framework avoids both technological determinism (objects have fixed agency) and social constructivism (humans have all agency). Instead, it sees agency as a negotiation between the object's materiality, its programmed logic, and the context of use.
Agency as Capacity: The Object's Potential
Consider a robotic arm in a factory. Its capacity includes its range of motion, payload, speed, and sensors. This capacity is not agency itself but a precondition. The arm can lift heavy loads, but it cannot decide what to lift. Capacity is shaped by design choices: material strength, sensor precision, and processing power. For example, a sensor with limited field of view reduces the object's capacity to detect obstacles, thus constraining its agency. Practitioners must audit these capacities to understand the boundaries of object action.
Agency as Delegation: Inscribing Intent
Delegation occurs when designers encode goals into objects. A traffic light delegates the decision to stop or go to a timer and sensors. But delegation is never complete; it always involves translation and loss. The designer's intent to reduce congestion may be realized, but unintended consequences (like drivers speeding to beat a yellow light) emerge. Delegation also raises questions of responsibility: when an autonomous vehicle crashes, is the agency in the code, the sensor data, or the human supervisor? Mapping delegation helps identify where accountability lies.
Agency as Performance: Emergent Action
Performance is how agency unfolds in practice. A smart lock may have the capacity to unlock via a phone app and the delegated intent to grant access to authorized users. But in performance, a user may share their password, or the lock may fail due to a dead battery. Agency is performed differently each time. This is where the "material ghost" becomes visible: the object's action is never fully determined by design or context; it is an ongoing achievement. For teams, this means testing not just functionality but also edge cases where agency shifts unexpectedly.
Three Models of Post-Digital Agency: A Comparison
To navigate the complexity of agency, three dominant models have emerged in academic and design discourse: the object-centered model, the relational model, and the distributed model. Each offers a different lens for mapping agency, with distinct implications for practice. Below, we compare them across key dimensions: locus of agency, unit of analysis, and design implications. Understanding these models helps teams choose an appropriate analytic lens for their specific project, whether it is a consumer device, an industrial system, or an interactive installation.
| Model | Locus of Agency | Unit of Analysis | Design Implications | Example |
|---|---|---|---|---|
| Object-Centered | Within the object | Individual device | Focus on internal capabilities; risk of over-attributing autonomy | Smart thermostat that 'learns' preferences |
| Relational | Between object and user | Interaction dyad | Emphasizes feedback loops and mutual shaping | Voice assistant that adapts to user speech patterns |
| Distributed | Across network of humans, objects, and environment | System of relations | Requires systemic thinking; accounts for unintended consequences | Autonomous vehicle in mixed traffic |
When to Use Each Model
The object-centered model is useful for debugging technical failures, as it isolates the device's capabilities. However, it risks over-attributing agency to the object, leading to anthropomorphism. The relational model is better for user experience design, as it captures how user and object co-adapt. But it may ignore broader systemic influences like infrastructure or policy. The distributed model is the most comprehensive, making it suitable for complex sociotechnical systems, but it can be unwieldy for small-scale projects. Practitioners should choose based on the scope of the system and the questions they need to answer. For most post-digital objects, a hybrid approach that moves between models is most effective.
Common Pitfalls in Applying Models
A common mistake is to assume one model fits all phases of a project. Early in design, the object-centered model helps prototype core capabilities. During user testing, the relational model reveals interaction patterns. For deployment, the distributed model anticipates failures arising from network effects. Another pitfall is conflating agency with intelligence. A simple RFID tag has agency in the distributed sense (it triggers actions when scanned) but lacks intelligence. Mapping agency requires distinguishing between causal influence and intentional action.
Step-by-Step Methodology for Auditing Agency
Auditing agency in a post-digital object involves tracing the flows of influence across material, digital, and social layers. This methodology, developed from design practice and STS, comprises five steps: inventory, mapping, delegation analysis, performance observation, and responsibility assignment. Each step yields a partial map of agency, which together provide a comprehensive view. The process is iterative and should be revisited as the object evolves. Below, we detail each step with concrete actions and examples.
Step 1: Inventory of Capacities
List all technical components: sensors, actuators, processors, communication modules, power sources. For each, note its capabilities and limitations. For example, a motion sensor may have a range of 10 meters but cannot detect stationary occupants. This inventory sets the baseline for what the object can do. It is important to include external dependencies like cloud services or APIs, as these extend the object's capacities beyond its physical boundaries.
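To keep the inventory consistent across components, it can help to record capacities in a structured form. The sketch below is a minimal illustration in Python; the component names, fields, and values are hypothetical stand-ins for whatever your own audit covers.

```python
from dataclasses import dataclass, field

@dataclass
class Capacity:
    """One technical component and what it can (and cannot) do."""
    component: str                    # e.g. "PIR motion sensor" (hypothetical)
    capabilities: list[str]           # what the component makes possible
    limitations: list[str]            # known constraints on those capabilities
    external_dependencies: list[str] = field(default_factory=list)  # cloud services, APIs

# Hypothetical inventory for a smart-lighting zone controller
inventory = [
    Capacity(
        component="PIR motion sensor",
        capabilities=["detect motion within ~10 m"],
        limitations=["cannot detect stationary occupants"],
    ),
    Capacity(
        component="Zone controller firmware",
        capabilities=["switch lights per zone", "apply schedules"],
        limitations=["no local fallback if cloud scheduling is unreachable"],
        external_dependencies=["vendor cloud scheduling API"],
    ),
]

for cap in inventory:
    print(f"{cap.component}: depends on {cap.external_dependencies or 'nothing external'}")
```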
Step 2: Mapping Delegations
Identify all instances where human intent is inscribed into the object. This includes design decisions (e.g., threshold values), user configurations (e.g., schedules), and policies (e.g., privacy settings). Document who made these delegations and their rationale. For instance, a smart lock's delegation of 'unlock when the phone is near' carries the designer's assumption that phone proximity indicates authorized user presence, a delegation that may fail if the phone is stolen.
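A simple way to make delegations explicit is to record each one with its assumption and a known failure mode, alongside who made it. The snippet below extends the hypothetical audit sketch above; the fields and the smart-lock entries are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class Delegation:
    """A piece of human intent inscribed into the object."""
    rule: str              # what the object is instructed to do
    delegated_by: str      # who inscribed the intent (designer, user, policy)
    assumption: str        # what must hold for the rule to realise the intent
    known_failure: str     # a documented way the assumption can break

delegations = [
    Delegation(
        rule="unlock when a paired phone is within range",
        delegated_by="product designer",
        assumption="phone proximity implies an authorised person is present",
        known_failure="a stolen phone unlocks the door",
    ),
    Delegation(
        rule="auto-lock after 30 seconds",
        delegated_by="facility policy",
        assumption="30 seconds is enough time to enter",
        known_failure="door locks while someone carries equipment through",
    ),
]

for d in delegations:
    print(f"{d.rule!r} (by {d.delegated_by}) assumes: {d.assumption}")
```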
Step 3: Observing Performance
Observe the object in real use, ideally in diverse contexts. Record how agency is performed differently across users, times, and environments. Look for emergent behaviors not anticipated by designers. For example, a smart speaker may misinterpret a TV ad as a command, performing agency in a way that neither designer nor user intended. Video ethnography or logging can capture these performances.
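Logging can complement ethnographic observation. A minimal sketch, assuming you can tap into the object's event stream, is to record each action together with its trigger and whether any human intent plausibly matches it; the event names below are invented for illustration.

```python
import json
from datetime import datetime, timezone

def log_performance(event: str, trigger: str, matched_intent: str | None,
                    path: str = "performance.log") -> None:
    """Append one performed action, its trigger, and the intent (if any) it served."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,                   # what the object did
        "trigger": trigger,               # what caused it (sensor reading, command, schedule)
        "matched_intent": matched_intent, # None flags candidate emergent behaviour
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical examples: one intended action, one emergent misfire
log_performance("volume_up", trigger="voice command from registered user",
                matched_intent="user request")
log_performance("start_playback", trigger="phrase recognised during TV advert",
                matched_intent=None)
```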
Step 4: Analyzing Distribution
Map the network of actors (human and non-human) that participate in the object's action. This includes other devices, infrastructure, standards, and even cultural norms. For an autonomous vehicle, the network includes road markings, GPS satellites, traffic laws, and pedestrians' expectations. Use ANT techniques to trace how agency shifts as the network changes. This step reveals that agency is never fully contained in the object.
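One way to make the distribution tangible is to represent the network as a directed graph and ask what an actor's action depends on. The sketch below uses plain dictionaries rather than a graph library; the autonomous-vehicle actors are illustrative.

```python
# Each actor maps to the actors it directly depends on in order to act.
influences = {
    "vehicle braking decision": ["perception module", "traffic law ruleset"],
    "perception module": ["lidar", "camera", "road markings"],
    "camera": ["ambient lighting"],
    "lidar": [],
    "road markings": ["municipal maintenance"],
    "traffic law ruleset": ["legislation updates"],
}

def trace_dependencies(actor: str, graph: dict[str, list[str]], depth: int = 0) -> None:
    """Print every actor, human or not, that participates in this actor's action."""
    print("  " * depth + actor)
    for upstream in graph.get(actor, []):
        trace_dependencies(upstream, graph, depth + 1)

trace_dependencies("vehicle braking decision", influences)
# The indented tree shows how far "the vehicle's" agency actually extends:
# into paint on the road, municipal maintenance schedules, and legislation.
```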
Step 5: Assigning Responsibility
Based on the audit, determine where responsibility lies for outcomes. This is both a design and an ethical exercise. If a smart building wastes energy because a sensor misreads occupancy, is the responsibility with the sensor manufacturer, the algorithm developer, or the building manager? The audit provides evidence for informed allocation. Document these assignments to guide future updates and to create accountability structures.
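Responsibility assignments are easier to revisit if they are written down next to the audited failure. A minimal, hypothetical register might look like the following; the parties, outcome, and file name are placeholders.

```python
import csv

assignments = [
    {
        "outcome": "zone overcooled because solar gain was read as occupancy",
        "contributing_actors": "occupancy sensor; HVAC control algorithm",
        "responsible_party": "HVAC integrator (sensor placement and fusion logic)",
        "follow_up": "add light sensor input; review at next audit cycle",
    },
]

with open("responsibility_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=assignments[0].keys())
    writer.writeheader()
    writer.writerows(assignments)
```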
Real-World Scenarios: Agency in Action
To ground the framework, we examine three composite scenarios drawn from typical post-digital deployments. Each illustrates how agency is distributed and how the material ghost manifests. These scenarios are anonymized but reflect challenges commonly reported by practitioners. They show that agency is never static; it shifts as components fail, users adapt, and contexts change. Understanding these dynamics is essential for designing resilient systems.
Scenario 1: The Smart Office That Learned Too Much
A company installs an occupancy-based HVAC system to reduce energy costs. Sensors detect heat and motion, adjusting temperature per zone. Initially, the system saves 20% on energy. However, employees in a glass-walled conference room find the temperature uncomfortable because the sensors register solar gain as occupancy, overcooling the room. The agency to define comfort shifts from the HVAC algorithm to the facility manager, who overrides the system manually. The material ghost here is the sensor's inability to distinguish between heat from sun and heat from people, a design delegation that failed in context. The team eventually adds a separate light sensor to correct the delegation, but not before weeks of discomfort.
Scenario 2: The Autonomous Floor Cleaner That Refused to Clean
A hospital deploys autonomous cleaning robots to sanitize floors. The robots use lidar to navigate and are programmed to avoid obstacles. In one ward, the robots repeatedly avoid a specific area near a nurse's station because the lidar detects a reflective surface (a stainless steel cart) as an obstacle. The robot's agency to clean is blocked by its own capacity constraints. The nursing staff, frustrated, begin moving the cart before the robot arrives, thereby assuming agency for navigation. The distributed agency network now includes human workarounds. The design team updates the robot's obstacle classification, but the scenario reveals how material properties (reflectivity) become agents in the system.
Scenario 3: The Interactive Art Installation That Became a Social Mirror
An interactive art piece uses cameras and projectors to create a responsive environment: visitors' shadows become animated. The installation's agency is to transform visitor movement into visual effects. However, visitors quickly learn that by moving slowly, they can make the shadows 'dance' in specific ways. The agency is co-performed: the system's capacity to detect motion and the visitors' creative interpretation. The material ghost is the algorithm's openness to interpretation. The artist intended a specific aesthetic, but the visitors' agency reshapes the experience. This scenario highlights that in post-digital objects, agency can be a resource for creativity, not just a source of control.
Common Questions and Misconceptions
Practitioners often raise similar questions when first engaging with the concept of agency in post-digital objects. Below, we address the most frequent ones, clarifying common misconceptions and providing practical guidance. These answers are based on collective experience from design and engineering teams, not on isolated case studies.
Does an object have agency if it doesn't have AI?
Yes. Agency does not require intelligence or learning. A simple mechanical thermostat has agency: it triggers heating when temperature drops below a set point. Its agency is delegated to it by the designer and user. The key is that agency is about the capacity to act and influence outcomes, not about consciousness. Even a passive object like a speed bump has agency: it slows cars. In post-digital objects, agency is often amplified by connectivity, but the underlying mechanism is the same.
Is agency the same as autonomy?
No. Autonomy is a degree of independence from human control, while agency is the capacity to act. An object can have high agency (many capacities, strong delegations) but low autonomy (tightly controlled by a central system). For example, a factory robot may have high agency (complex movements) but low autonomy (every action is scripted). Conversely, a smart speaker may have high autonomy (making decisions based on context) but limited agency (it can only play music or answer questions). Understanding this distinction helps in designing systems that balance control and flexibility.
How do we design for ethical agency?
Designing for ethical agency means ensuring that the object's actions align with human values and that responsibility is transparent. This involves auditing delegations for biases, providing users with meaningful control, and avoiding black-box decisions. For instance, a hiring algorithm's decisions to reject candidates should be auditable by humans. Ethical design also requires considering failure modes: what happens when the object's agency leads to harm? Teams should conduct pre-mortems and include fail-safe mechanisms. There is no universal checklist, but principles like transparency, accountability, and reversibility are starting points.
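As one concrete expression of auditability, a system can be required to record every consequential decision together with the inputs and rule that produced it, so a human can later reconstruct and contest it. The sketch below is illustrative only; the field names and the screening rule are invented.

```python
import json
from datetime import datetime, timezone

def record_decision(decision: str, inputs: dict, rule: str, reversible: bool,
                    path: str = "decision_audit.log") -> None:
    """Append an auditable trace of one automated decision for later human review."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,
        "inputs": inputs,         # what the system actually saw
        "rule": rule,             # which delegation produced the outcome
        "reversible": reversible, # can a human overturn it, and has that path been tested?
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical screening decision, logged so a reviewer can contest it
record_decision(
    decision="candidate routed to manual review",
    inputs={"years_experience": 2, "required_minimum": 3},
    rule="screen_out_below_minimum_experience_v2",
    reversible=True,
)
```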
Practical Tools and Techniques for Mapping Agency
Several tools and techniques can help teams map agency in their projects. These range from low-tech methods like stakeholder mapping to computational tools like dependency graphs. The choice depends on the project's complexity and the team's familiarity with systemic thinking. Below, we describe three practical approaches that have been used effectively in design workshops and engineering retrospectives.
Technique 1: Actor-Network Mapping
Inspired by ANT, this technique involves listing all actors (human, technological, natural) that participate in the object's operation and drawing connections between them. Actors can be people, devices, software, standards, or even concepts like 'privacy'. Connections represent influences: a sensor influences an algorithm, which influences an actuator, which influences a user. The resulting map reveals how agency flows and where bottlenecks or unexpected loops occur. This is best done in a workshop with cross-functional teams, using sticky notes on a whiteboard. The map becomes a living document updated as the system evolves.
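When the sticky-note map grows, it can help to transcribe it into a small graph and look for actors that many flows pass through. The sketch below sticks to the standard library and simply counts connections; the actor names loosely follow the smart-lock example and are hypothetical.

```python
from collections import Counter

# (source, relation, target) edges transcribed from a workshop map
edges = [
    ("door sensor", "influences", "lock firmware"),
    ("lock firmware", "influences", "cloud access service"),
    ("cloud access service", "influences", "resident's phone app"),
    ("building manager policy", "influences", "cloud access service"),
    ("'privacy' (concept)", "influences", "building manager policy"),
    ("resident's phone app", "influences", "resident"),
]

degree = Counter()
for source, _, target in edges:
    degree[source] += 1
    degree[target] += 1

# Actors touched by the most flows are candidate bottlenecks or single points of failure
for actor, count in degree.most_common(3):
    print(f"{actor}: {count} connections")
```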
Technique 2: Delegation Matrices
A delegation matrix lists each design decision as a row, with columns for its underlying assumptions, potential failure modes, and affected stakeholders. For each delegation (e.g., 'use motion sensor for occupancy'), note what is assumed (e.g., 'occupants move'), what can go wrong (e.g., 'occupant sits still'), and who is affected. This matrix helps identify where delegations are fragile and where they might conflict with user values. It is particularly useful during the design phase to preempt failures. Teams can also use it to document trade-offs, such as choosing between accuracy and privacy.
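The matrix itself can be kept as plain structured data and rendered into the team's documentation. A minimal sketch, assuming a markdown-based wiki, follows; the rows reuse the motion-sensor example and are illustrative.

```python
rows = [
    {
        "delegation": "use motion sensor for occupancy",
        "assumes": "occupants move often enough to be detected",
        "can_fail_when": "occupant sits still; lights or HVAC shut off on them",
        "affected": "desk workers in low-traffic zones",
    },
    {
        "delegation": "infer occupancy from badge swipes",
        "assumes": "everyone badges in and out individually",
        "can_fail_when": "tailgating or propped doors",
        "affected": "security team; energy reporting",
    },
]

headers = list(rows[0].keys())
lines = ["| " + " | ".join(headers) + " |",
         "|" + "|".join(["---"] * len(headers)) + "|"]
for row in rows:
    lines.append("| " + " | ".join(row[h] for h in headers) + " |")

print("\n".join(lines))  # paste into the project wiki as a living delegation matrix
```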
Technique 3: Performance Logging and Analysis
Once a system is deployed, logging actual performance data is crucial. This includes sensor readings, user interactions, and system decisions. Analyzing this data can reveal patterns where agency shifts unexpectedly. For example, a smart lighting system might show that lights are on during non-working hours because the motion sensor detects cleaning staff. The log exposes the mismatch between intended and actual agency. Techniques like sequence analysis or anomaly detection can be applied. The key is to treat logs not just as performance metrics but as traces of agency in action.
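A first pass over the logs can be as simple as flagging actions that fall outside the hours the delegation assumed. The snippet below reads the JSON-lines format sketched earlier under Step 3 and is illustrative; the working hours and field names are assumptions.

```python
import json
from datetime import datetime

WORK_START, WORK_END = 8, 19  # assumed working hours for this site

def off_hours_events(log_path: str = "performance.log") -> list[dict]:
    """Return logged actions performed outside the assumed working hours."""
    flagged = []
    with open(log_path) as f:
        for line in f:
            record = json.loads(line)
            hour = datetime.fromisoformat(record["timestamp"]).hour
            if not WORK_START <= hour < WORK_END:
                flagged.append(record)
    return flagged

# Flagged events are not necessarily errors: lights triggered by cleaning staff
# are agency performed for someone, just not the someone the delegation assumed.
if __name__ == "__main__":
    for event in off_hours_events():
        print(event["timestamp"], event["event"], "triggered by", event["trigger"])
```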
Conclusion: Embracing the Ghost
The material ghost is not a flaw to be exorcised but a fundamental feature of post-digital objects. Agency will always be distributed, emergent, and partly unpredictable. Our goal as practitioners is not to eliminate this ambiguity but to map it, understand it, and design with it. By using the frameworks, models, and techniques outlined in this guide, teams can move beyond simplistic notions of smart objects and instead engage with the rich, messy reality of distributed agency. This shift in perspective leads to more resilient designs, clearer accountability, and ultimately objects that respect the complexity of the worlds they inhabit. The ghost is in the machine, but with careful mapping, we can learn to work with it rather than against it.
Key Takeaways
- Agency is not a property of objects but emerges from networks of humans, materials, and code.
- Three models—object-centered, relational, distributed—offer different lenses; use them in combination.
- Auditing agency involves inventorying capacities, mapping delegations, observing performance, analyzing distribution, and assigning responsibility.
- Real-world scenarios show that agency shifts with context; design for adaptability.
- Ethical design requires transparency and accountability in delegations.
- Practical tools like actor-network mapping, delegation matrices, and performance logging make agency tangible.
We encourage readers to apply these ideas to their own projects and share their findings. The map of agency is never complete, but each attempt brings us closer to understanding the ghosts we create.