As UF researchers’ work shows, generative AI can be profoundly influential. These AI-generated images represent yet another side of its capabilities.
In an era when artificial intelligence and social media are driving news feeds, workplace chats and buying decisions, a group of University of Florida business researchers is asking a provocative question: We know what technology does for us, but what is it doing to us?
Liangfei Qiu studies social technology — how digital tools shape human behavior, and vice versa. AI, his research shows, can have a double-edged effect. Sometimes, it boosts participation among users in online forums. In other cases, over-reliance on AI reduces the pool of creative, problem-solving ideas, he says.
Recent research by Qiu and his colleagues focused on users’ behavior on Stack Overflow, where some 30 million platform developers and computer programmers gather to share knowledge and ideas. The researchers analyzed data from more than 3.2 million questions and 1.2 million answers on the forum.
Those who used AI produced answers that were easier to read and about 23% shorter. But the researchers also found a tipping point: Users were less likely to contribute voluntarily when AI-generated answers were prevalent. Human responses to help-seeking posts dropped significantly as AI-generated content took over.
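The kind of comparison the study describes can be sketched in a few lines of Python. The sample answers, the readability proxy (average words per sentence) and the resulting percentage below are invented for illustration; they are not the study's actual data, metrics or methodology:

```python
# Illustrative sketch: compare mean answer length and a crude readability
# proxy between AI-assisted and human-written answers. All texts and the
# proxy metric here are assumptions for demonstration only.

def avg_words_per_sentence(text: str) -> float:
    """Crude readability proxy: shorter sentences tend to read more easily."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    return sum(len(s.split()) for s in sentences) / len(sentences)

def mean_length(answers: list[str]) -> float:
    """Mean answer length in words."""
    return sum(len(a.split()) for a in answers) / len(answers)

ai_answers = ["Use a virtual environment. Install packages with pip. Pin versions."]
human_answers = [
    "What I usually do, and this has worked well for me across several "
    "projects over the years, is create a virtual environment and then "
    "install everything with pip while pinning the versions I depend on."
]

reduction = 1 - mean_length(ai_answers) / mean_length(human_answers)
print(f"AI-assisted answers are {reduction:.0%} shorter in this toy sample")
```

At scale, the same two measurements over millions of answers would yield aggregate differences like the ones the researchers report.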
That phenomenon isn’t unique to software development forums, he notes. The same behavior patterns should hold true for more mainstream question-and-answer sites like travel advice on Quora or Reddit’s appliance repair forums.
The study is believed to be the first of its kind to explore how generative AI affects voluntary knowledge sharing.

“Humans, by nature, can be somewhat lazy. If you don’t already have AI answers, people will think about solutions to a question from many different perspectives. In the long run, if the diversity of ideas and perspectives is reduced, it hurts the platform,” says Qiu, a professor of information systems and operations management in the Warrington College of Business.
Stack Overflow chose to ban the use of generative AI tools like ChatGPT, but Qiu and his colleagues instead suggest a more nuanced approach: permitting AI usage while promoting responsible, moderated use. That would allow users of Stack Overflow and other online forums to benefit from efficiency and learning opportunities without compromising content quality.
“Our results indicate that when users learn how to use these tools thoughtfully, they can produce answers that are not only concise but also easier to understand, which benefits the whole [online] community,” the researchers wrote.
More broadly, Qiu’s work has shown how technology can be leveraged in diverse ways, helping companies develop products that are the next big thing. When a mid-sized kitchenware company wanted a better way to capitalize on next season’s emerging trends, it turned to Qiu for data-driven advice. He went to work harvesting massive amounts of product reviews from Amazon and social media sites like Instagram.
“We know that social media is more ‘noisy’ and much more unstructured, but it has early prediction power because before people post online reviews, they talk about products on social media,” he says.
Qiu’s methods helped houseware manufacturer Epoca International’s development team lean into the best shapes, patterns and optimal shade of pink for its Paris Hilton-branded products, says Chief Executive Officer Brian Melzer.
“Social media has fundamentally reshaped how we design products. Instead of guessing what people want, we can now see emerging behavior in real time — what they’re using, modifying, celebrating, or complaining about,” he says. “That signal lets us spot microtrends months before they hit retail. It shortened our innovation cycle, sharpened our intuition and helped us design with the consumer’s lived experiences, not our assumptions.”
Qiu says that by analyzing many subjective opinions, an objective consensus can emerge, making new product design less of a guessing game.
“The product designers can look at online reviews and social media posts but not in a data-driven way. When we dig deeper into large amounts of social media data, we can know what features make a product viral or trendy,” he says.
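The core idea of mining social media for early trend signals can be illustrated with a small sketch: count mentions of candidate product features in time-stamped posts and flag the features whose mention rate is accelerating. The feature list, posts and growth threshold below are invented examples, not the team's actual models:

```python
# Illustrative sketch: spot emerging product-feature trends by comparing
# mention counts across time windows. All data and thresholds are
# assumptions for demonstration only.
from collections import Counter

def feature_mentions(posts, features):
    """Count how often each candidate feature phrase appears in the posts."""
    counts = Counter()
    for text in posts:
        lowered = text.lower()
        for f in features:
            if f in lowered:
                counts[f] += 1
    return counts

def emerging(recent, earlier, min_growth=2.0):
    """Flag features mentioned at least min_growth times more often recently."""
    return [f for f in recent if recent[f] >= min_growth * earlier.get(f, 0.5)]

features = ["matte pink", "stackable", "nonstick"]
earlier = feature_mentions(["love my nonstick pan"], features)
recent = feature_mentions(
    ["this matte pink set is everything", "matte pink kitchenware trend!"], features
)
print(emerging(recent, earlier))  # ['matte pink']
```

Real pipelines would add language models to handle the noise and lack of structure Qiu mentions, but the window-over-window comparison is the same early-signal logic.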
Another study Qiu conducted in 2023 looked at how generative AI answers affect people’s participation in online knowledge-sharing platforms like Quora, and illustrated how AI and human answers vary in tone and content.

When a person asked ‘What’s the best parenting advice?’, an AI bot said, in part, “There is no one-size-fits-all answer to this question, as every child and every parent is different. However, some general tips that might be helpful include: Show love and affection to your child every day. Set clear boundaries and consequences for behavior. Encourage your child to express their feelings and listen actively to what they have to say. …”
People answered the question quite differently: “My husband likes this saying that the best thing that a man can do for his children is to love their mother. … I believe once you have a happy and stable marriage, it creates an invisible castle around the kids.” Another person wrote, “My wife and I figured this out on our own by doing it and seeing the results years later. Read to your kids every night. Feed them a good dinner without sugar to fill them with nutrition. …”
Those findings and others like them reinforce the value of studying social technology, says Hsing Kenneth Cheng, chair of the information systems and operations management department, who has co-authored multiple social technology papers with Qiu and Professor Jingchuan Pu. In recent years, their work has shown how online knowledge-sharing behaviors can predict employee turnover and assess workers’ engagement with peers and supervisors.
Online environments change power dynamics, motivation and relationships, Cheng notes.
“These studies reveal that AI-mediated systems are not neutral. They shape who participates, how much effort is invested, and even career outcomes,” Cheng says.
Down the hall from Qiu, Jingchuan Pu explores how digital platforms shape human behavior and knowledge sharing. It’s far from an abstract concept: Online behaviors at work can predict tangible outcomes like employee turnover and promotions.
One recent paper by Pu, Qiu, Cheng and their collaborators was among the first to use corporate online community data to predict turnover. Workers at a large Chinese company who both provided and sought information in its online community were about 18% less likely to quit their jobs, the researchers found. Companies could use the findings to identify employees at risk of quitting and target them with retention programs, Pu says.
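A minimal sketch of the paper's premise, not its actual model: compare quit rates between employees who both sought and provided information in the community and everyone else. The records below are invented for illustration:

```python
# Illustrative sketch: relate community participation to turnover by
# comparing quit rates across two groups. All records are invented;
# the researchers' actual predictive model is not reproduced here.

def quit_rate(employees):
    """Fraction of a group that quit."""
    return sum(e["quit"] for e in employees) / len(employees)

def split_by_participation(employees):
    """Employees who both asked and answered questions vs. everyone else."""
    active = [e for e in employees if e["asked"] and e["answered"]]
    rest = [e for e in employees if not (e["asked"] and e["answered"])]
    return active, rest

staff = [
    {"asked": True,  "answered": True,  "quit": False},
    {"asked": True,  "answered": True,  "quit": False},
    {"asked": True,  "answered": False, "quit": True},
    {"asked": False, "answered": False, "quit": True},
    {"asked": False, "answered": False, "quit": False},
]
active, rest = split_by_participation(staff)
print(quit_rate(active), quit_rate(rest))  # active contributors quit less often
```

A production system would feed participation features like these into a statistical model and use its predictions to target retention programs, as Pu describes.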

In other studies, Pu and his collaborators have addressed how corporate knowledge-sharing communities affect employees’ behavior. Call it the “boss effect”: Workers exert greater effort when answering questions from higher-ranking colleagues, and those who do are more likely to get promoted. In a separate study, they found that revealing users’ identities in online communities influences participation. Identified users write longer, more thoughtful content but also post less frequently, possibly due to concerns about their reputation or judgment.
Image motivation often drives behavior, the researchers found. When workers are identifiable in company forums, they either post “high effort” results or default to posting anonymously, if possible.
“Digital platforms don’t just reflect behavior, they shape it,” Pu says. “Even small changes in visibility — like showing someone’s location — can reshape how people interact.”
Some of Pu’s ongoing work involves studying how AI influences the way people generate their own social media images. In an upcoming paper, Pu found that text-to-image AI tools create idea fixation, leading users to mimic the AI-generated styles in their own photos. He calls it the “IKEA effect”: After shoppers see a staged living room in the Swedish retailer’s store, the urge to replicate it wholesale at home is strong.

“We indeed found out that as people start to use AI to help create photos, the photos they took later were very similar to the AI ones,” Pu says.
In another use of intelligent systems, Qiu and his collaborators designed the framework of an AI system that detects criminal activity in real time while also emphasizing privacy preservation and ethical deployment for public safety agencies. He’s also shown how “responsible AI” could help law enforcement with earlier crime detection that doesn’t compromise civil liberties.
Using Internet of Things technologies such as wireless cameras, RFID readers and other devices, the researchers set out to automatically identify locations where smuggling crimes were likely to take place. To do that, they processed data from devices in public areas, including roads and toll booths. The system monitors four general types of activity in real time to flag potential smuggling behavior, such as drivers taking longer routes or making multiple trips through an area to look for police. Those features are used to train predictive models that flag suspicious vehicles, while a data synthesizer protects personal data during model training. To avoid discrimination, the system focuses on common criminal behaviors rather than personal identifiers like race and gender.
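The behavioral-feature idea can be sketched simply: derive route-based features from trip logs (no personal identifiers) and flag vehicles whose pattern looks like route-scouting. The field names and thresholds below are assumptions for illustration, not the system's real design:

```python
# Illustrative sketch: behavioral features from vehicle trip logs, with a
# simple rule standing in for the trained predictive models the article
# describes. Thresholds and data are invented assumptions.

def trip_features(trips):
    """trips: list of (actual_route_km, shortest_route_km) per vehicle pass."""
    detour_ratios = [actual / shortest for actual, shortest in trips]
    return {
        "passes": len(trips),          # repeated trips through the area
        "max_detour": max(detour_ratios),  # how far off the shortest route
    }

def is_suspicious(features, max_passes=3, detour_threshold=1.5):
    """Flag repeated passes or unusually long detours through the area."""
    return features["passes"] > max_passes or features["max_detour"] > detour_threshold

vehicle_log = [(12.0, 7.0), (11.5, 7.0)]  # two passes, both long detours
print(is_suspicious(trip_features(vehicle_log)))  # True
```

Note that the features describe driving behavior only, mirroring the system's stated avoidance of personal identifiers like race and gender.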
That 2025 study showed the IoT-enabled system could achieve high-performance crime detection and has the potential to increase border surveillance efficiency, the researchers noted.
Studying social technology is important because many companies rushed headlong into corporate knowledge-sharing platforms in the mid-2010s without considering how those platforms could affect employees’ behavior. More recently, the proliferation of AI-generated review summaries on e-commerce websites can have the unintended effect of homogenizing future reviews. Pu’s research aims to delve into those issues.
“Companies are implementing AI without fully understanding its behavioral consequences,” he says.
The featured image at the top was created by a generative AI program and service that creates high-quality images from text descriptions, known as “prompts.”
The prompt for this image was: Create an artwork graphic that has no background and no words for the following: Big Question UF researchers ask: We know what technology does for us but what is it doing to us? Focus on how AI and social tech reshape human behavior, participation, creativity and workplace dynamics.
Sources:
Hsing Kenneth Cheng
John B. Higdon Eminent Scholar Chair, Information Systems and Operations Management Department
kenny.cheng@warrington.ufl.edu
Jingchuan Pu
Associate Professor of Information Systems and Operations Management
Jingchuan.Pu@warrington.ufl.edu
Liangfei Qiu
PricewaterhouseCoopers Professor of Information Systems and Operations Management
liangfei.qiu@warrington.ufl.edu

