Learn more about how we ensure that Plural provides you with timely, comprehensive, and accurate policy data
Plural’s tools are built on a constantly updated foundation of data about legislative and policy activity. We pull data from dozens of sources, including the official legislative websites and databases in each jurisdiction.
Frequency of updates
We run scrapers as often as our data sources allow (some publish only at certain times, and a few effectively enforce resource constraints). As a result, we pull data at least daily in every jurisdiction, and hourly in some.
Our scrapers run on a platform powered by Amazon’s Elastic Container Service, which lets us scale flexibly to demand and deliver the most up-to-date data.
We continuously pull this data into Plural and apply proprietary processing to help our users find bills quickly, recommend data to look at based on our analysis, and notify you about changes to data you are tracking. This process runs via Apache Airflow and stores data in both a PostgreSQL database and an Elasticsearch cluster.
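To make the dual-store design concrete, here is a minimal sketch (not Plural's actual code) of the kind of transform step such a pipeline performs: splitting one raw scraped bill record into a normalized row for the relational database and a document for the full-text search index. All field names here are hypothetical.

```python
# Hypothetical illustration of an ingest transform: one scraped record
# becomes (a) a relational row and (b) a search document.

def normalize_bill(raw: dict) -> tuple[dict, dict]:
    """Split one scraped bill record into a Postgres-style row and an
    Elasticsearch-style document. Field names are invented for this sketch."""
    row = {
        "bill_id": raw["identifier"],        # e.g. "HB 1234"
        "jurisdiction": raw["jurisdiction"],  # e.g. a state code
        "session": raw["session"],
        "updated_at": raw["updated_at"],
    }
    search_doc = {
        "id": f'{raw["jurisdiction"]}-{raw["session"]}-{raw["identifier"]}',
        "title": raw["title"],
        "full_text": raw.get("text", ""),    # rich bill text, when available
    }
    return row, search_doc

raw = {
    "identifier": "HB 1234",
    "jurisdiction": "ks",
    "session": "2024",
    "title": "An act concerning transparency",
    "updated_at": "2024-03-01T12:00:00Z",
}
row, doc = normalize_bill(raw)
print(doc["id"])  # ks-2024-HB 1234
```

In a real Airflow deployment, a step like this would be one task in a DAG, scheduled per jurisdiction; the relational row supports tracking and notifications, while the search document powers fast bill lookup.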
How data gets into Plural
Plural is a little different: we value and contribute to the ecosystem of open civic data. In 2020, we added the Open States project to our team. Our data system starts with a suite of scrapers – software tailored to obtain data from a particular source – that benefits from years of refinement and close study of our source jurisdictions.
This data includes:
- Bills and resolutions, including the rich text of the bill showing markup
- Elected legislators
- Committees
- Public hearings
Jurisdictions include:
- U.S. Congress
- Every U.S. state legislature
- Washington, D.C.
- Puerto Rico
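The scrapers described above follow a common pattern: fetch a page from a jurisdiction's legislative site and extract structured records from it. Below is a minimal sketch of that pattern (not one of Plural's real scrapers); the HTML structure and CSS class are hypothetical, and we parse a hard-coded sample rather than fetching from a live site.

```python
# Hypothetical sketch of a bill-list scraper using only the standard library.
from html.parser import HTMLParser

class BillListParser(HTMLParser):
    """Collect (bill_id, url) pairs from anchors marked class="bill"."""
    def __init__(self):
        super().__init__()
        self.bills = []
        self._href = None  # href of the anchor we are currently inside

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and a.get("class") == "bill":
            self._href = a.get("href")

    def handle_data(self, data):
        if self._href is not None:
            self.bills.append((data.strip(), self._href))
            self._href = None

# In a real scraper this HTML would be fetched from the legislature's
# website (e.g. via urllib.request); here we use an inline sample.
sample = '<ul><li><a class="bill" href="/bills/hb1">HB 1</a></li></ul>'
parser = BillListParser()
parser.feed(sample)
print(parser.bills)  # [('HB 1', '/bills/hb1')]
```

Because each jurisdiction publishes in its own format, each source gets its own tailored parser like this one; the shared pipeline then normalizes the output into a common schema.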
All of our civic data – as well as our customers’ data – is stored using best-in-class cloud-managed services to ensure uptime, reliability, and security.