Silvana Fumega, PhD
In recent weeks I took part in two conversations that have stayed with me. One focused on social and sensor data, including wearables and other forms of continuous behavioral monitoring. The other examined the mental models shaping public data reforms. Although they came from different domains, both led me to the same question: what happens once data produces signals that institutions can act upon?
For much of the past decade, reform efforts centered on access and publication. If governments collected and released data, better governance was expected to follow. Through the Global Data Barometer, which assesses data governance across diverse institutional contexts, we repeatedly see that this is no longer where reform stalls. Many countries now have open data policies, data protection laws, and, increasingly, AI strategies in place. Yet the gap between formal adoption and meaningful institutional use persists. Mandates are fragmented, incentives reward compliance more than follow-up, and responsibility for action is often unclear.
Discussions around the OECD’s OURdata results, as Stefaan Verhulst has observed, point in a similar direction. Data supply continues to grow and expectations around AI accelerate, while progress on accessibility, governance arrangements, incentives, and stewardship slows or plateaus. Infrastructure expands; institutional capacity to govern its effects does not advance at the same pace.
Work on feminicide data illustrates how this plays out in practice. Information often exists across police records, courts, and civil society monitoring, yet classification gaps and fragmented authority allow cases to disappear administratively. Social and sensor data introduce a different pressure. Continuous streams of behavioral or health-related signals increasingly inform prioritization, allocation, and intervention decisions, especially when integrated into predictive tools. Here, the challenge is no longer visibility alone, but how responsibility is defined once signals begin to shape action.
Data governance is not new. What has changed is that continuous data streams can now feed directly into predictive and automated systems, narrowing the time between detecting a pattern and acting on it.
The central question is therefore not whether systems are innovative or efficient, but whether responsibility keeps pace with automation. As detection and action compress, accountability cannot remain an afterthought. The next phase of data and AI governance may depend less on technical sophistication than on how clearly institutions define who must act, under what authority, and with what safeguards. In fragile and polarized contexts, that authority is neither neutral nor uncontested. When detection accelerates but legitimacy remains weak, automation risks amplifying governance gaps rather than resolving them.
If this is the direction of travel, evaluation systems must evolve as well. Benchmarking efforts, including those I have worked on, have largely focused on infrastructure, policies, and data availability. That focus made sense in an earlier phase of reform. But as automated systems begin to shape administrative action, evaluation will need to assess not only what data exists, but how responsibility is assigned, coordinated, and sustained over time. If the last decade was about building data infrastructure, the coming years may need to focus more explicitly on whether institutions are equipped to govern what that infrastructure now makes possible.