Detecting fake information on the web generated by automated news and media bias, which affects journalistic decision making.
Project Brief
UX Designer
June 2020
My Contribution
User Experience
Interface Design
Tanvi Karwa
Image of a desktop website of Mayen

Identify fake news and bias in automated digital content.

Recognize and classify bias types specific to journalism
Increase trust in news from digital media for various demographics
Make online news reports more transparent
Automated news algorithms and what people do to prevent fake information
Through automated journalism, editorial workflows are being automated and bots generate news articles from crunched data. More than 50% of the global population uses social media, and it is often the first place people look for daily news rather than specific news outlet websites. Social media is also the easiest destination for ads and marketing, making it a prominent place for fake news.

I ran initial user surveys to understand what people think of fake information and how they tackle it. The surveys, combined with statistics available online, showed:
  • Only 5% of people trust the news on social media.
  • LinkedIn is the most trusted social media platform and Facebook is the least.
  • More people search directly on Amazon than on Google or Bing when buying a product.
Trust and usage of online content depends on age, gender, regional culture, internet speed and medium of consumption.
67% of Gen Z users depend on social media for news, compared to 47% of matures (above age 55). There are contrasting differences in behavior between generations, and I classified my personas based on user behaviors.
Data Visualization image
Ways in which people consume digital content
  • Focused consumption is consuming one form of content on one device.
  • Dual consumption is when users consume content from more than one device.
  • Information snacking is using wasted time to consume content.
  • Time-shifted consumption is postponing consumption of content to a later time.
  • Content binging is when users consume multiple parts in a single session.
New mediums of content search
The medium of content is shifting. Younger generations are using visual platforms more, and overall 62% of internet users prefer consuming video content every day. In the last year, there has been a significant increase in the use of voice and image recognition for search.
Data Visualization image
The Solution
A browser-based plugin and visual dashboard to detect and classify potential fake information using logistic regression
Any text-based information is broken down into paragraphs, sentences, and words to create tokens, which are analyzed against similar keywords from trusted sources to produce an immediate report identifying bias or potentially fake information.
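The tokenize-and-compare step can be sketched in a few lines. This is a minimal illustration, not the project's actual model: the toy texts, labels, and thresholds below are hypothetical, and a real system would train on a large corpus of verified and debunked articles.

```python
# Minimal sketch of token-based fake-information scoring with logistic
# regression. All example texts and labels here are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled snippets: 0 = trusted, 1 = potentially fake.
texts = [
    "Official report confirms quarterly figures with cited sources",
    "Study published in peer-reviewed journal details methodology",
    "SHOCKING secret they don't want you to know, share now",
    "Miracle cure doctors hate, click before it's deleted",
]
labels = [0, 0, 1, 1]

# TF-IDF turns each snippet into token weights; logistic regression
# learns which tokens correlate with the "fake" class.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new snippet: probability it resembles the "fake" class.
prob_fake = model.predict_proba(["SHOCKING miracle they don't want you to see"])[0][1]
print(round(prob_fake, 2))
```

In the real plugin this score would feed the immediate report shown to the reader rather than being printed.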
Plugin image
Automated articles generated by bots often lack author attributions, back-links, and sources. When these are missing, the tool retraces similar articles to confirm the veracity of the facts.
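The missing-provenance check above can be sketched as a simple HTML heuristic. This is an assumed simplification, not the plugin's real logic: the `ProvenanceParser` class and `low_provenance` function are hypothetical names, and a production check would inspect far more signals than a byline and outbound links.

```python
# Hypothetical heuristic: flag an article's HTML as "low provenance"
# when it has no author byline and no outbound source links.
from html.parser import HTMLParser

class ProvenanceParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.has_author = False
        self.link_count = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # <meta name="author" ...> suggests an attributed article.
        if tag == "meta" and attrs.get("name") == "author":
            self.has_author = True
        # Outbound links suggest cited sources.
        if tag == "a" and attrs.get("href", "").startswith("http"):
            self.link_count += 1

def low_provenance(html: str) -> bool:
    parser = ProvenanceParser()
    parser.feed(html)
    return not parser.has_author and parser.link_count == 0

unattributed = low_provenance('<html><body><p>Bot text.</p></body></html>')
attributed = low_provenance(
    '<html><head><meta name="author" content="A. Reporter"></head>'
    '<body><a href="https://source.example">source</a></body></html>'
)
print(unattributed, attributed)  # True False
```

Articles flagged this way would then be retraced against similar articles from trusted outlets.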

The full report, presented in a dashboard, includes a bias meter describing instances of different biases that may stem from political inclination, terminology problems (such as the phrasing of headlines), negativity bias, etc.
Dashboard image
The Process
Classifying and organizing bias information
Journalism bias was a new concept for me. With Tanvi's help, I learned the different concepts, categorized them, and found how each can be recognized in online news articles and research reports. This was followed by two iterations of wireframes for a visual plugin and dashboard that prioritize and convey influential data easily.

I studied analytical reports by Statista and DataReportal to understand digital trends, the effects of bias, and how people are affected by fake news.
User Research image
Sketches
Convey the transparent process
I created a landing page for the process and marketing of the plugin. The website aims to be more than a place to learn about features and install the plugin: for a brand that promises truthful information, transparency of our process is a major concern.
Website images
Takeaways and next steps
With the rise of automation, there will be a clear increase in fake information. I have always been interested in ethics in technology, and this was a very insightful process of learning how human perceptions can be transformed into fake information through machine bias or limited algorithms. I enjoyed how automation can solve problems created by automation, and I believe there is a greater need for similar implementations to detect erroneous information.

While this is impactful for detecting fake content in browsers, a majority of consumers use apps and websites on smartphones, and I believe the next step is to create solutions for mobile.