TRIBAL: A Tripartite Model for Group Bias Analytics
– Yu-Ru Lin and Rebecca Hwa

Dr. Yu-Ru Lin and Dr. Rebecca Hwa, both Associate Professors in the School of Computing and Information, in collaboration with Dr. Wen-Ting Chung from the School of Education, have recently been awarded a research grant from the DARPA Understanding Group Biases (UGB) program for a project titled “TRIBAL: A Tripartite Model for Group Bias Analytics.” The base-year funding amount is $149,978, with a total award value of $912,072.

DARPA’s ambitious goal is “to develop systems that can identify and characterize these biases at new speeds and scales in order to provide deeper insight into the diversity and complexity of human cultural models, as well as lead to better understanding of when, why, and how groups often interpret the same world differently.” Lin, Hwa, and Chung’s team was selected to carry out this goal. Their project aims to develop and advance a reproducible approach to revealing the biases of different groups or cultures by analyzing social media data with cutting-edge natural language processing and machine learning methods. The proposed framework is driven by social theories on how groups’ cultural mindsets are shaped across three theoretically grounded facets: values, emotions, and contexts. The project will enable the exploration of questions such as: Do social groups express a dominant set of moral values (e.g., fairness) and emotional responses (e.g., fear) toward certain social contexts (e.g., a current news event)? How does a group’s set of beliefs relate to the beliefs of its individual members? How can the beliefs of ideologically opposing groups be explained in terms of differing values, emotions, and contexts?
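
The article does not describe the internals of the TRIBAL system, but the tripartite structure itself can be illustrated with a minimal sketch. Assuming posts have already been labeled along the three facets (by some upstream NLP classifier, which is not specified here), a group-level “profile” is simply an aggregation of those labels per context. All class, field, and function names below are hypothetical and chosen only for illustration.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical facet annotation; the actual TRIBAL taxonomy is not
# described in this article.
@dataclass
class FacetAnnotation:
    group: str      # e.g., an ideological community
    context: str    # e.g., a news event the post responds to
    value: str      # e.g., a moral value such as "fairness"
    emotion: str    # e.g., "fear", "anger", "joy"

def group_profile(annotations, group):
    """Aggregate the (value, emotion) pairs a group expresses toward each context."""
    profile = {}
    for a in annotations:
        if a.group != group:
            continue
        profile.setdefault(a.context, Counter())[(a.value, a.emotion)] += 1
    return profile

# Toy data standing in for model-annotated social media posts.
posts = [
    FacetAnnotation("group_A", "policy_news", "fairness", "anger"),
    FacetAnnotation("group_A", "policy_news", "fairness", "fear"),
    FacetAnnotation("group_B", "policy_news", "loyalty", "joy"),
]
print(group_profile(posts, "group_A"))
```

Comparing such profiles across groups is one simple way to make the questions above concrete, e.g., whether two groups attach different dominant values and emotions to the same news event.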

The Understanding Group Biases (UGB) program seeks to develop and prove out capabilities that can radically enhance the scale, speed, and scope of automated, ethnographic-like methods for capturing group biases and cultural models from increasingly available large digital datasets. DARPA hypothesizes that there may be new opportunities for overcoming current methodological trade-offs between capturing qualitative “thick” data and quantitative “big” data, in part by turning a common bug of machine learning (its tendency to pick up non-obvious and implicit biases from the datasets on which it is trained) into a useful feature. Hence, UGB aims to develop systems that identify and characterize these biases at new speeds and scales, providing deeper insight into the diversity and complexity of human cultural models and a better understanding of when, why, and how groups often interpret the same world differently.
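
One widely known illustration of this “bug as feature” idea (a general example, not a description of the UGB program or the TRIBAL system) is probing word embeddings for implicit associations, in the spirit of the Word Embedding Association Test: if a model trained on a group’s text places certain target words consistently closer to one set of attribute words than another, that difference can be read as a signal of the associations latent in that group’s language. The sketch below uses random toy vectors purely to show the measurement; real analyses would use embeddings trained on the group’s own text.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, attr_a, attr_b):
    """Mean similarity to attribute set A minus mean similarity to attribute set B."""
    return (np.mean([cosine(word_vec, a) for a in attr_a])
            - np.mean([cosine(word_vec, b) for b in attr_b]))

# Toy 3-d "embeddings"; in practice these would come from a model trained
# on a group's text, so the measured associations would reflect regularities
# (and biases) of that group's language use.
rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=3) for w in
         ["immigration", "economy", "safe", "good", "threat", "bad"]}

pleasant = [vocab["safe"], vocab["good"]]
unpleasant = [vocab["threat"], vocab["bad"]]
for target in ["immigration", "economy"]:
    print(target, round(association(vocab[target], pleasant, unpleasant), 3))
```

A positive score means the target word sits closer to the “pleasant” attribute words than to the “unpleasant” ones; comparing such scores across embeddings trained on different communities is one concrete way a learned bias becomes an analytic signal rather than a defect.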