
All data is not created equal: The case for government-wide disclosure modernization

Dean Ritz, board member at the Data Foundation, breaks down a policy paper he authored that provides an overview of why machine‐readable data matters, with theoretical and practical examples.

This is a guest post by Data Foundation board member Dean Ritz.

It may be hard to believe that long ago, card games like poker were considered games of chance rather than games of skill. You either won or lost as Lady Luck willed. It was Girolamo Cardano who almost 500 years ago had the insight that mathematics could be applied to increase one’s winnings (or reduce one’s losses). By seeing card games as embodying probabilities, he turned some games of chance into games of skill with data.

In many areas of human activity, data improves the performance and efficiency of human experts and, in domains such as reading MRI scans, outperforms human skill. The skill of legislating and governing is no different: Data empowers policymakers to make evidence-based decisions. At the scale of national policymaking, data becomes a long lever that can move the world toward policies with better or worse results. Surely, if we want to measure and achieve the desired ends of good governance, we will want to see those ends reflected in the data.

In the past decade, Congress has passed bipartisan legislation to drive more policy with evidence captured in machine-readable form. These laws include the DATA Act of 2014, the GREAT Act of 2019, and the Evidence Act of 2018. The term “machine readable” is defined by the OPEN Government Data Act of 2018 as “data in a format that can be easily processed by a computer without human intervention while ensuring no semantic meaning is lost.”
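As a hypothetical illustration of what that definition demands (none of the field names below come from the legislation or the paper), compare the same figure with and without its meaning attached:

```python
# Hypothetical illustration: the same figure with and without its semantics.
# A bare spreadsheet cell leaves its meaning to human interpretation.
spreadsheet_cell = "1,234"  # 1,234 of what? Dollars? Thousands? Which fiscal year?

# A machine-readable record carries that meaning explicitly, so a computer
# can process it without human intervention and without losing semantics.
machine_readable_record = {
    "concept": "GrantObligations",         # what is being measured (hypothetical name)
    "value": 1234000,                      # unambiguous numeric value
    "unit": "USD",                         # unit of measure
    "period": "FY2023",                    # reporting period
    "reporting_entity": "Example Agency",  # who is reporting (hypothetical)
}
```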

The goal that “no semantic meaning is lost” sets a high standard. Meaning is easily lost when it has to be squeezed into the limited resolution of spreadsheets and documents that are then interpreted and reinterpreted by human brains and computer software. How is that standard to be achieved?

The D.C.-based Data Foundation recently published a policy paper titled Understanding Machine-Readability in Modern Data Policy, which I authored. This paper provides an overview of why machine-readable data matters, with theoretical and practical examples from information theory and contemporary practice in business and government.


The paper proposes six classifications for the organization of data (e.g., symbol, label, taxonomy). Each classification has its capabilities and its limitations. Policymakers, when expressing their expectations for machine-readable data, could match those expectations with the capabilities of each classification, since all data is not created equal.
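The article names only three of the six classifications (symbol, label, taxonomy). The sketch below is a loose, hypothetical reading of how those three might differ in practice, not the paper’s formal definitions:

```python
# Loose, hypothetical illustration of three of the named classifications,
# each adding capability over the one before it.

# Symbol: a bare value with no agreed-upon meaning attached to it.
symbol = "47.5"

# Label: the value paired with a human-chosen name, so software can find it,
# but the name itself is not standardized across reporters.
label = {"operating_margin": "47.5"}

# Taxonomy: the value tied to a concept defined in a shared, published
# vocabulary (the taxonomy URL is hypothetical), so every consumer resolves
# it to the same meaning.
taxonomy = {
    "concept": "OperatingMargin",
    "taxonomy": "https://example.org/finance-taxonomy/2024",  # hypothetical
    "value": 47.5,
    "unit": "percent",
}
```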

The paper offers three policy recommendations:

First, policymakers should require that measurements be machine readable when possible. Machine-readable standards enable technical innovations for automated reporting and data validation, reducing the compliance burden both for those filing reports and for those performing analysis and exercising oversight (see the validation sketch that follows these recommendations).

Second, policymakers should clearly communicate, in legislative and regulatory actions, their intent regarding the role, purpose, and applicability of data standards, and the scope of detail to be rendered as data. Legislators should explicitly state their expectations and purposes for data standards through bill text and committee reports. Similarly, regulators can better support this recommendation by clearly establishing expectations for standard effectiveness through proposed regulatory actions and guidance documents.

Third, policymakers should encourage the adoption and use of open, consensus-based standards to foster cooperation, efficiency, and innovation when drafting new data policies. Technology choices should minimize the technical and intellectual property obstacles to sharing and aggregating data. Governments should follow private enterprise in realizing the benefits of this technical and social cooperation through open source software and data-encoding standards.
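To make the first recommendation concrete, here is a minimal sketch of automated validation, assuming a reporting standard published as a JSON Schema; the field names and values are hypothetical, not drawn from any actual federal data standard:

```python
# Minimal sketch: validating a submission against a machine-readable standard,
# assuming the standard is published as a JSON Schema (all names hypothetical).
# Requires the third-party package: pip install jsonschema
from jsonschema import ValidationError, validate

REPORT_SCHEMA = {
    "type": "object",
    "required": ["award_id", "amount", "period"],
    "properties": {
        "award_id": {"type": "string"},
        "amount": {"type": "number", "minimum": 0},
        "period": {"type": "string", "pattern": r"^\d{4}-Q[1-4]$"},
    },
}

submission = {"award_id": "HYPOTHETICAL-001", "amount": 250000, "period": "2024-Q2"}

try:
    # The filer and the oversight body can run the identical check, catching
    # errors automatically before the data ever reaches a human analyst.
    validate(instance=submission, schema=REPORT_SCHEMA)
    print("Submission conforms to the standard.")
except ValidationError as err:
    print(f"Submission rejected: {err.message}")
```

Because the schema itself is data, the same published artifact could, in principle, drive automated reporting tools and downstream analysis without human reinterpretation.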

The move to machine-readable data is but one part of the disclosure modernization movement. Disclosure modernization supports the proper functioning of compliance and financial systems and, most importantly, of governments themselves. The benefits and practical implications are vast: standardization builds confidence in compliance and financial systems that can quickly and reliably detect fraud, errors, and other concerns. Improving public policies related to disclosure modernization can support efforts to enhance transparency and accountability. That is especially appropriate amid the COVID-19 pandemic, a moment that demands both. Disclosure modernization is a data-driven path to improving public trust in institutions.
