
Research

A summary of selected research projects.

Determinants of Online Social Movement Petitioning Success; Efficacy of Anger

What makes a social movement successful online, and what motivates individuals to act?

Using a dataset of Change.org petitions, I examine the features of text that generate the most attention and support for a collective action frame. Using a variety of methods, including topic modeling, clustering, and natural language processing, I identify the aspects of text that mobilize individuals. In this paper I focus on context-based sentiment analysis using LSTMs built on DeepMoji; in predicting anger from text, I improve upon prior social science methods of identifying angry sentiment by over 40%. I find that anger can be efficacious, but only to a certain extent. Drawing upon theories of emotion and mobilization from psychology, sociology, and organizational theory, I theorize that anger can motivate individuals but can also backfire when too much anger leads to a perception of ‘irrationality’. I supplement my quantitative analysis with experiments and A/B tests to further identify mechanisms of support: the interaction between anger and sympathy in motivating action or inducing perceptions of irrationality.
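
A minimal sketch of the kind of LSTM-based anger classifier described above, written in PyTorch. This is illustrative only, not the paper's actual DeepMoji-based pipeline; names such as PetitionAngerLSTM and the vocabulary size are hypothetical.

```python
# Minimal sketch of an LSTM-based anger classifier for petition text,
# assuming pre-tokenized integer sequences. Illustrative only; not the
# paper's actual DeepMoji-based pipeline, and all names are hypothetical.
import torch
import torch.nn as nn

class PetitionAngerLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, 1)  # anger vs. not anger

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)                  # (batch, seq, embed)
        _, (hidden, _) = self.lstm(embedded)                  # final hidden states
        pooled = torch.cat([hidden[-2], hidden[-1]], dim=1)   # both directions
        return self.classifier(pooled).squeeze(-1)            # one logit per text

# Toy usage: score two (already tokenized) petition snippets.
model = PetitionAngerLSTM(vocab_size=10_000)
token_ids = torch.randint(1, 10_000, (2, 50))
anger_prob = torch.sigmoid(model(token_ids))
```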

Crowdsourced Category Labels; Bias and Status in Consumer Classification

What are the repercussions for consumers of crowdsourcing genre and category labels in product markets, and how do individuals categorize products once given the authority?

In the past, category and genre labels were applied authoritatively by firms, publishers, and institutions. Today the proliferation of online platforms (e.g., Goodreads, Steam, IMDB) allows consumers to categorize offerings in markets. However, consumers can have heterogeneous conceptions of market categories and varying perceptions of products. Here I examine a source of divergent categorization among consumers by studying how subjective product evaluation affects whether and how products are categorized. I study how users classify books on Goodreads and show that poorly rated products are less likely to be categorized at all. In another study I examine how users categorize art (both real art and StyleGAN-generated art) into two artificial categories; the findings show that individuals sort art they evaluate positively into their preferred subjective category. This illustrates both user biases in categorization and selection effects in the choice to categorize in markets.
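
As an illustration of the selection effect described above, the following sketch fits a logistic regression of whether a book receives any category label on its average rating. The data are simulated and the column names are assumptions; this is not the study's actual code or dataset.

```python
# Illustrative sketch (simulated data, assumed column names; not the study's
# code): a logistic regression testing whether higher-rated books are more
# likely to receive any crowdsourced category label at all.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
avg_rating = rng.uniform(1, 5, n)
# Simulate the hypothesized selection effect: higher-rated books are more
# likely to be shelved into some genre by users.
p_categorized = 1 / (1 + np.exp(-(avg_rating - 3.0)))
books = pd.DataFrame({
    "avg_rating": avg_rating,
    "is_categorized": rng.binomial(1, p_categorized),
})

model = smf.logit("is_categorized ~ avg_rating", data=books).fit()
print(model.summary())  # a positive avg_rating coefficient indicates selection
```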

Price Impact on Categorization

How do products listed on online platforms change in perception over time?

Building on my research agenda, I study whether, when, and to what extent changing one aspect of a product (e.g., price) can attract different consumer audiences and change its market portrayal. Using a dataset of Steam video games (a platform similar to many others in that it allows users/consumers to categorize products), I examine how changing a product's price changes its category (here, Steam tags). I find that price changes can change product tags on a crowdsourced platform; however, these changes are rarely permanent, suggesting that price-sensitive consumers do not have an enduring impact on a product's market perception. The figure to the left illustrates one analysis, showing that the absolute number of product tags does not experience a lasting change one month after a price change.
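
The before/after comparison described above could be sketched as follows; the column names (app_id, n_tags, change_month) and the toy data are assumptions for illustration, not the actual dataset or analysis code.

```python
# Illustrative sketch: compare a game's number of Steam tags in the months
# before and after a price change, to check whether any shift in tags persists.
# Column names and values are assumed toy data.
import pandas as pd

tags = pd.DataFrame({              # monthly snapshots of crowdsourced tags
    "app_id": [10, 10, 10, 10],
    "month":  pd.to_datetime(["2020-01-01", "2020-02-01",
                              "2020-03-01", "2020-04-01"]),
    "n_tags": [14, 14, 17, 15],
})
events = pd.DataFrame({            # one price-change event per game
    "app_id": [10],
    "change_month": pd.to_datetime(["2020-02-01"]),
})

panel = tags.merge(events, on="app_id")
panel["rel_month"] = (
    (panel["month"].dt.year - panel["change_month"].dt.year) * 12
    + (panel["month"].dt.month - panel["change_month"].dt.month)
)
# Average tag count by month relative to the price change:
print(panel.groupby("rel_month")["n_tags"].mean())
```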

Search on Complex Landscapes

How can complexity in organizations be modeled? How do individuals navigate complex decision-making landscapes?

With Professor Jon Atwell, I model how individuals search on complex landscapes. We review prior management literature, which has primarily applied Kauffman's NK landscapes, and, drawing on insights from other literatures on complexity, we identify flaws and limitations in that model. We develop an alternative method of constructing complex binary decision-making landscapes that preserves local variance while preventing the landscape from having little global variation. We then recruited participants to search on a complex landscape, studying their search behavior over time and in reaction to their own poor performance.
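
For reference, here is a minimal sketch of the standard Kauffman NK landscape and a greedy local search on it. This is the baseline construction the project critiques, not our alternative model; the parameter values are arbitrary.

```python
# Minimal sketch of a standard Kauffman NK landscape with greedy local search.
# Baseline illustration only; not the project's alternative construction.
import numpy as np

rng = np.random.default_rng(42)
N, K = 10, 3                        # N binary decisions, each coupled to K others

# Each decision's fitness contribution depends on its own state plus K
# neighbors, so each has a lookup table over 2**(K+1) local configurations.
neighbors = np.array([rng.choice([j for j in range(N) if j != i], K,
                                 replace=False) for i in range(N)])
tables = rng.random((N, 2 ** (K + 1)))

def fitness(bits):
    total = 0.0
    for i in range(N):
        local = np.concatenate(([bits[i]], bits[neighbors[i]]))
        idx = int("".join(map(str, local)), 2)   # index into decision i's table
        total += tables[i, idx]
    return total / N

def greedy_search(steps=200):
    """Flip one random bit at a time, keeping the flip only if fitness improves."""
    current = rng.integers(0, 2, N)
    best = fitness(current)
    for _ in range(steps):
        candidate = current.copy()
        candidate[rng.integers(N)] ^= 1
        f = fitness(candidate)
        if f > best:
            current, best = candidate, f
    return current, best

print(greedy_search())
```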