Project Owl: Google hopes to improve search by better surfacing authoritative content and by enlisting feedback about suggested searches and Featured Snippets answers.

Danny Sullivan:

The takeaway from this? As I said, it’s going to be very much wait and see. One reason things might improve over time is that new data from those search quality raters is still coming in. When that gets processed, Google’s algorithms might get better.

Those human raters don’t directly impact Google’s search results, a common misconception that came up recently when Google was accused of using them to censor the Infowars site (it didn’t; they couldn’t). One metaphor I’m using to help explain their role — and limitations — is as if they are diners at a restaurant, asked to fill out review cards.

Those diners can say if they liked a particular dish or not. With enough feedback, the restaurant might decide to change its recipes to make food less salty or to serve some items at different temperatures. The diners themselves can’t go back into the kitchen and make changes.

This is how it works with quality raters. They review Google’s search results to say how well those results seem to be fulfilling expectations. That feedback is used so that Google itself can tailor its “recipes” — its search algorithms — to improve results overall. The raters themselves have no ability to directly impact what’s on the menu, so to speak, or how the results are prepared.

The First Amendment.