October 8, 2018
Featured Articles
Associate Professors Yu-Ru Lin and Rebecca Hwa Receive Research Grant from DARPA
Left: Dr. Rebecca Hwa, Right: Dr. Yu-Ru Lin

Dr. Yu-Ru Lin and Dr. Rebecca Hwa, Associate Professors at the University of Pittsburgh School of Computing and Information, in collaboration with Dr. Wen-Ting Chung from Pitt’s School of Education, were awarded a research grant of more than $910,000 from DARPA’s Understanding Group Biases (UGB) program for their project, “TRIBAL: A Tripartite Model for Group Bias Analytics.”

Lin, Hwa and Chung’s team was selected to carry out DARPA’s goal “to develop systems that can identify and characterize these biases at new speeds and scales in order to provide deeper insight into the diversity and complexity of human cultural models, as well as lead to better understanding of when, why, and how groups often interpret the same world differently.”

Their project aims to develop a reproducible approach to revealing the biases of different groups and cultures by analyzing social media data with cutting-edge natural language processing and machine learning methods.

“Working with social media text is interesting and challenging because the language people use within social media is very raw in comparison to the polished writing we find in op-ed pieces in newspapers (or even in their comment sections). I am eager to see what our findings will reveal about the linguistic choices different groups of people make while expressing their attitudes and values in different contexts,” said Hwa.

The proposed framework is driven by social theories on how groups’ cultural mindsets are shaped across three theoretically grounded facets: values, emotions, and contexts.

According to Lin, “It’s in our nature to share similar biases with the people around us. Knowing that group biases exist is one thing, but fairly revealing them poses a bigger challenge. Social media data offers a chance to observe communication behaviors, but we are still at a very early stage of leveraging computational capabilities to understand group biases. I’m excited to see to what extent machines can help us improve that understanding a little more.”

The project will enable the exploration of questions such as: Do social groups express a dominant set of moral values (e.g., fairness) and emotional responses (e.g., fear) toward certain social contexts (e.g., a current news event)? How does a group’s set of beliefs relate to the beliefs of the individual members? How can the beliefs of ideologically opposing groups be explained in terms of differing values, emotions, and contexts?

In speaking on the importance of this project, Chung said, “This project is particularly important at this time. Our societies have gradually become more polarized. We tend to simply categorize people, and assume that stereotypes are self-evident in explaining group differences. Our project aims to look into biased views through people’s daily communicative texts. We hope it will offer people a more honest reflection of how, not only other groups, but also their own groups may have been dominated by certain habitual ways of thinking, feeling, and acting that prevent us all from seeing alternative perspectives, potentials, or problems.”