Amid the rise of fake news and content seen as disturbing or offensive, Google is fighting back. The search giant recently announced three actions it is taking to combat the fake news phenomenon.
Project Owl is the internal name for Google’s efforts, though the company has said there is no particular reason for the name. What does Project Owl encompass?
- A feedback form for Featured Snippet answers.
- A feedback form for autocomplete suggestions, letting users flag predictions that ought to be removed.
- A greater emphasis on authoritative content to improve search quality.
Let’s dive into each aspect of Project Owl to see what Google is focusing on moving forward.
Featured Snippet Answer Improvements
Featured snippets appear at the very top of the search results page. They're the first thing users see, and they have tremendous influence on the search experience.
Based on the query and user activity, Google displays one or two sentences in a featured box to answer the query or provide relevant information. With the rise of voice search and digital assistants like Google Assistant, featured snippets are becoming more prominent.
Through voice search, a featured snippet becomes the answer users receive in response to a particular question.
To improve the way they answer queries, Project Owl is giving users the opportunity to provide feedback.
Let Google know what you think. There are more options now, plus the ability to add comments in your own words.
If it's offensive, fake, or problematic content, you can make a request for a legal removal. Google is doubling down on providing the best information possible. Having confirmed that 15% of queries are new every single day, the company knows not every query will return the best results for every user.
Google will use the data gathered to make changes to the algorithm and prevent problematic snippets from showing. Don't expect instant removals, though: Google has said that individual Featured Snippets are very unlikely to see a quick removal.
Improvements on Autocomplete Feature
As you type a query on your device, Google offers suggestions to complete it and help you get to your answer quicker. It's a way to save a little time.
The suggestions are drawn from the most popular searches, but depending on the direction of your search and the words you use, you may see some offensive material.
The Guardian covered this issue last December with a few example autocomplete suggestions that led to a not-so-heartening discovery. Suggestions like "are women evil" and "did the holocaust happen" were displayed, sometimes leading searchers far off track from what they originally intended.
Danny Sullivan of Search Engine Land shows the new "Report inappropriate predictions" link in his article covering Project Owl, using suggestions for the partial query "who painted":
As you can see, predictions can be reported as hateful; sexually explicit; violent or including dangerous and harmful activity; or "other" to capture remaining issues. Depending on the feedback, there is a chance that Google will remove a particular prediction quickly if enough reports are filed.
The overall goal is to stop these problematic suggestions and search query predictions from appearing in the future.
Building Authoritative Content
To improve the suggestions and snippets that users see, Google is continuing its push for authoritative content. It's the company's way of combating the fake and problematic content that users encounter in search results.
Earlier this year, Google began flagging content that may be seen as offensive or upsetting. According to Google's guide, upsetting or offensive content typically includes:
- Content that promotes hate or violence against a group of people based on criteria including (but not limited to) race or ethnicity, religion, gender, nationality or citizenship, disability, age, sexual orientation, or veteran status.
- Content with racial slurs or extremely offensive terminology.
- Graphic violence, including animal cruelty or child abuse.
- Explicit how-to information about harmful activities (e.g., how-tos on human trafficking or violent assault).
- Other types of content which users in your locale would find extremely upsetting or offensive.
The Takeaways from Project Owl
The changes introduced through Project Owl will take some time to impact search results. Google will need to collect data from many users to gauge the quality of results, go through the ratings, and determine what's authoritative.
Users will feel they can reach Google more easily and let the company know when it's getting something wrong. Don't try to spam or game the system to knock out a featured snippet in the hope that your content can take its place; Google is prepared for that.
Google acknowledges that its search results will never be perfect, but it's on its way to presenting people with the best answers from legitimate sources.