Two years ago, the Office of Innovation added a small feedback form to the bottom of a New Jersey government website.
It was a narrow banner that ran across the bottom of the page asking a simple question – “Did you find what you were looking for on this page?” – along with two buttons: “Yes” or “No.” Upon responding, an open-ended text box would appear, asking you to briefly elaborate.
Fast forward: The Feedback Widget has been replicated across 13 agency websites, collecting about 450,000 yes/no responses and nearly 93,000 written comments from New Jerseyans.
The feedback gathered through the tool has led to concrete improvements in how people apply for benefits like unemployment insurance, find timely information on tax relief, and more. And AI has the potential to make this tool even more useful and impactful across the state government.
Here’s what we’ve learned.

Making identity verification easier
As we started gathering feedback through the widget, it became easier for agencies to identify problems that online users were experiencing.
For example, residents need to verify their identity to gain access to key benefits like unemployment assistance. But when the Department of Labor and Workforce Development launched a new online identity verification system, the agency saw a rise in feedback-widget responses that said things like this:
- “I have verified my identity 3 times already, and I have not received my check. Every time I log in, it states ‘verify your identity.’”
- “I have verified my identity… now I want to see where that status is on the NJ DOL site.”
- “Can I certify while waiting for identity verification to go through?”
Upon seeing this uptick in comments and confusion, we worked with the agency and its vendor to modify the site. Specifically, we made it so that the user would no longer have to leave the state website – and then return – in order to verify their identity.
In other words, we smoothed out a clunky process because of an issue that people flagged through the widget. And, the results were clear:
- The proportion of people clicking Yes on that site’s widget went from about 25% to about 64% within a week.
- Call center volume related to identity verification dropped by about 70%.
- The agency is saving the equivalent of about $65,000 a month because of the change.
- The number of claimants able to quickly verify their identity jumped by about 20%.
Making the state’s digital front door more friendly
About six months ago, we added the widget to the busy NJ.gov site. We’ve been working with the New Jersey Office of Information Technology to take widget feedback (along with user interviews) and start making key improvements.
For example, one common theme from the comments was that people simply couldn’t find what they were looking for, so we added a search function to the top of NJ.gov. The immediate result was that comments related to that issue dropped by 30% within a couple of weeks.
Also, comments related to the ANCHOR property tax relief program were climbing as an application deadline approached. This led the team to update the “feature cards” on the homepage with a prominent, direct link to that program. Clickthroughs on the ANCHOR feature card spiked into the thousands within two weeks, while comments seeking information about ANCHOR were cut in half.
This reinforced the importance of providing timely, helpful content on a high-traffic site, which is crucial to a good user experience.
Given success stories like these, more agencies are working with the Office of Innovation to add the Feedback Widget to their own pages.
AI is making feedback more digestible, actionable
The main question agencies have when adding the widget is how long it will take to manage and analyze comments. Thankfully, AI is helping turn that challenge into an opportunity.
We’ve developed a prototype that gathers all of the agency’s recent comments from the tool, uses GenAI to identify and categorize themes across those comments, and then generates a brief report of those top themes along with example quotes for each. (The ability to pull representative quotes is helpful in quickly grasping the scope of each theme or problem area.)
This summarized feedback saves time for the agency by revealing and helping prioritize issues that are common for many of their users. It also helps spark immediate ideas for potential fixes.
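To make that flow concrete, here’s a minimal sketch, assuming the LLM has already assigned each comment a theme: plain Python tallies the themes and assembles a brief report with a representative quote for each. The function names and sample data are illustrative, not the prototype’s actual code.

```python
from collections import Counter, defaultdict

def build_report(categorized: list[tuple[str, str]]) -> str:
    """categorized holds (theme, comment) pairs, where each theme was assigned by an LLM.
    Counts and percentages are computed in plain Python rather than by the model."""
    by_theme: dict[str, list[str]] = defaultdict(list)
    for theme, comment in categorized:
        by_theme[theme].append(comment)

    counts = Counter({theme: len(comments) for theme, comments in by_theme.items()})
    total = sum(counts.values())

    lines = ["Top feedback themes this period:"]
    for theme, n in counts.most_common(5):
        quote = by_theme[theme][0]  # a representative quote helps convey the theme's scope
        lines.append(f'- {theme}: {n} comments ({100 * n / total:.0f}%) e.g. "{quote}"')
    return "\n".join(lines)

# Example with made-up data:
print(build_report([
    ("identity verification", "I verified my identity 3 times and still see 'verify your identity.'"),
    ("identity verification", "Where can I check my verification status?"),
    ("site navigation", "I could not find the unemployment application link."),
]))
```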
We’re currently testing this approach with agency partners, and working to make it as simple and useful as possible.
Looking forward, one way we plan to further improve the summaries is by using AI to suggest actions that might actually resolve an issue, such as clarifying navigation or making small operational tweaks.
Our AI Prototype: Challenges and solutions
- Challenge: It’s easy to use a single GenAI prompt to generate a summary report, but agencies want data to inform their decision-making: Which themes are most common? What percentage of comments does each theme represent? If you ask a Large Language Model (LLM) to summarize a batch of comments directly, it doesn’t actually do the math; it makes up plausible-sounding guesses.
- Solution: Our prototype uses LLMs to find themes and categorize comments into those themes (and Python does the math). The result is statistics that agencies can rely on, while still taking advantage of the flexibility and summarization abilities of an LLM.
- Challenge: When you use LLMs to discover themes, over time you can end up with a huge list of similar but inconsistent themes, making analysis difficult.
- Solution: Our prototype limits itself to categorizing comments into a set of themes that we discover at the start using a powerful model, then stores those themes as the basis for future weeks. However, the prototype also allows for the discovery of new themes over time.
- Challenge: Without providing context of an agency’s work, the LLM often misconstrues what comments mean and how they relate to what the agency does.
- Solution: We give the LLM enough context to ensure it discovers high-quality themes that are relevant to agency staff. Specifically, we use deep research to generate 2-3 paragraph summaries of an agency and its website, which we then feed to the LLM along with the theme-discovery prompt. The result is themes that actually reflect the agency’s work (see the sketch after this list).
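To show how these pieces can fit together, here’s a minimal sketch under the assumptions above: a stored agency summary and theme list are folded into the categorization prompt, and the model can flag comments that fit no existing theme so candidate new themes surface for human review. The file names, prompt wording, and `call_llm` helper are placeholders, not the prototype’s actual code.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send the prompt to whichever chat-completion API you use
    and return the model's reply as text."""
    raise NotImplementedError

def load_context(path: str = "agency_context.txt") -> str:
    # A 2-3 paragraph summary of the agency and its website, generated ahead of time.
    with open(path) as f:
        return f.read()

def load_themes(path: str = "themes.json") -> list[str]:
    # Themes discovered in an initial run with a powerful model, stored for reuse
    # so later weeks categorize against a consistent list.
    with open(path) as f:
        return json.load(f)

def categorize(comment: str, context: str, themes: list[str]) -> str:
    prompt = (
        "You are reviewing feedback left on a state agency's website.\n\n"
        f"About the agency:\n{context}\n\n"
        f"Existing themes: {', '.join(themes)}\n\n"
        "Assign the comment below to exactly one existing theme and reply with the "
        "theme name only. If none fits, reply with NEW: followed by a short proposed theme.\n\n"
        f"Comment: {comment}"
    )
    return call_llm(prompt).strip()

def run_week(comments: list[str]) -> tuple[dict[str, list[str]], list[str]]:
    """Categorize one week's comments; set aside proposed new themes for review."""
    context, themes = load_context(), load_themes()
    assigned: dict[str, list[str]] = {theme: [] for theme in themes}
    proposed: list[str] = []
    for comment in comments:
        label = categorize(comment, context, themes)
        if label.startswith("NEW:"):
            proposed.append(label.removeprefix("NEW:").strip())
        elif label in assigned:
            assigned[label].append(comment)
    return assigned, proposed
```

In a setup like this, any counts or percentages shown to agencies come from the `assigned` dictionary in plain Python, not from the model, and proposed new themes only join the stored list after a human reviews them.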
We welcome more engagement
All of us working with the Feedback Widget are grateful to eagle-eyed New Jerseyans who send detailed and specific feedback on improving functionality – from broken links to information gaps and everything in between. The Office of Innovation and our agency partners appreciate every individual insight, even as we also leverage AI to identify larger issues and themes.
Our goal is to ensure that New Jersey continues to do a better and better job of delivering the right information and services to people in the best possible way.
You might notice that upon using the widget, you’re given the option to voluntarily provide an email address and join a cohort of people who give feedback on projects we’re working on. About 22,000 people have already signed up to help with user research via that route, and we welcome more.
Also, we’ve open-sourced the code for the Feedback Widget for agency professionals who want to consider using it. Please reach out to us if that’s you. We would also welcome thoughts from civic tech professionals about your work with feedback tools, including those that use AI.