Study examines ‘joiners’ who help make startups successful

June 14, 2015

A growing interest in the startup culture has focused attention on company founders who often take great risks to launch new ventures. But what about the people who join these founders to help them develop and commercialize innovative new products and services?

A research highlight published this week in the journal Science reports on research analyzing these ‘joiners,’ and finds that while they resemble founders in their willingness to take risks and their desire for the freedom of a startup, there are important differences. For instance, joiners are less interested in management and more interested in functional roles such as research and development, making them more like the people who go to work for established companies.

‘Sometimes you can have a single founder who handles the full range of activities for a startup, but especially in technology you need additional people to research and develop the products,’ said Henry Sauermann, an associate professor in the Scheller College of Business at the Georgia Institute of Technology. ‘There are many people who are interested in working for startups but who don’t want to be founders.’

Sauermann and co-author Michael Roach, an assistant professor in the Dyson School of Applied Economics and Management at Cornell University, found the differences while examining the entrepreneurial interests of 4,200 Ph.D. candidates who were within two years of obtaining degrees in STEM fields. Nearly half (46 percent) of these scientists and engineers reported an interest in joining a startup as an employee, while slightly more than one in ten (11 percent) said they expected to found their own companies.

The researchers surveyed these Ph.D. candidates about personal characteristics such as acceptance of risk, desire for autonomy, interest in commercializing new technology and willingness to take on managerial tasks. They also asked about interests in entrepreneurship, in roles as both startup founders and the joiners who support them. The study, which includes a more comprehensive companion article to be published in the journal Management Science, may be the first to consider founders and joiners as separate groups.

‘A key insight from our research is that many of the characteristics that we often think of as unique to founders, such as a tolerance for risk and the desire to bring new ideas to life, also generalize to the broader entrepreneurial workforce, including people who want to work in startups but don’t want to be founders themselves,’ Roach said.

Understanding how the personal preferences of newly minted Ph.D. scientists and engineers fit into their entrepreneurial interests may be important to helping them find the best application for their hard-won knowledge and skills. Increasingly, startups provide an attractive career path for Ph.D. graduates who may not find academic research attractive or may experience difficulty in finding positions in academia — but who still want to be involved in research and commercialization activities, Roach said. More emphasis may be needed on preparing STEM doctorates for these entrepreneurial employee career paths.

‘Most university programs designed to foster entrepreneurship — such as courses, workshops and incubators — focus on training people to be a founder,’ Roach noted. ‘But founders make up a small share of the entrepreneurial workforce, and we do very little to train the larger share of people who will work in startups as employees rather than founders. For example, many programs focus on how to write business plans and secure funding, while less attention is paid to how to work effectively in a small startup team.’

The high degree of interest in entrepreneurship among science and engineering Ph.D. candidates surprised Sauermann, who expected that the soon-to-be-graduates might prefer a safer career path in established companies.

‘A surprising number of people from this group found entrepreneurship attractive,’ he said. ‘This may mean we don’t have as much of a problem getting people interested in startups as is widely believed. It may be more a question of how the transition from the Ph.D. training to the startup world happens.’

The paper is based on a 2010 study of Ph.D. candidates about to graduate from 39 tier-one U.S. colleges and universities. In a follow-up study, Roach and Sauermann surveyed the same group to examine the career transitions they made into industry, startups and academia. Results are being analyzed, and the two researchers hope to follow this group to see how their careers develop.

The data may also help provide information on how context affects careers. For instance, exposure to an entrepreneurial environment appears likely to increase an individual’s willingness to work in a startup, but doesn’t seem to boost their interest in being a founder.

‘An interest in being a founder is more closely associated with individual traits and preferences that predispose them to entrepreneurship,’ Roach said. ‘At the same time, individuals who lack these traits are unlikely to become interested in being a founder even when exposed to entrepreneurial influences. One implication of this is that programs that hope to stimulate entrepreneurship may do more to increase the pool of entrepreneurial workers than to make people into founders.’

The study should be encouraging for those promoting the entrepreneurial career path, Sauermann said. ‘Not everybody has to start their own company,’ he added. ‘You can also make a difference for the world by joining a founder.’

Companies are making cybersecurity a greater priority

June 11, 2015

Companies are spending increasing amounts on cybersecurity tools but aren’t convinced their data is truly secure, and many chief information security officers believe that attackers are gaining on their defenses, according to a new RAND Corporation study.

Charting the future of cybersecurity is difficult because so much is shrouded in secrecy: no one is entirely certain of all the methods malicious hackers use to infiltrate systems, and businesses do not want to disclose their safety measures, according to the report.

While worldwide spending on cybersecurity is close to $70 billion a year and growing at 10 percent to 15 percent annually, many chief information security officers believe that hackers may gain the upper hand two to five years from now, requiring a continual cycle of development and implementation of stronger and more innovative defensive measures.

“Despite the pessimism in the field, we found that companies are paying a lot more attention to cybersecurity than they were even five years ago,” said Martin Libicki, co-lead author of the study and senior management scientist at RAND, a nonprofit research organization. “Companies that didn’t even have a chief information security officer five years ago have one now, and CEOs are more likely to listen to them. Core software is improving and new cybersecurity products continue to appear, which is likely to make a hacker’s job more difficult and more expensive.”

The RAND study draws on interviews with 18 chief information security officers and details the burgeoning world of cybersecurity products. It also reviews the relationship between software quality and the processes used to discover software vulnerabilities. Insights from these elements were used to develop a model that can shed light on the relationship between organizational choices and the cost of confronting cyberattacks.

“Companies know what they spend on cybersecurity, but quantifying what they save by preventing malicious attacks is much harder to tally,” said Lillian Ablon, co-lead author of the report and a researcher at RAND. “In addition, malicious hackers can be extremely sophisticated, so costly measures to improve security beget countermeasures from hackers.

“Cybersecurity is a continual cycle of trying to eliminate weaknesses and out-think an attacker. Currently, the best that defenders can do is to make it expensive for the attackers in terms of money, time, resources and research.”

Libicki and Ablon say several of the study’s findings surprised them. They found that it was the effect of a cyberattack on reputation — rather than direct costs — that worried most chief information security officers. It matters less what actual data is affected than the fact that any data is put at risk.

However, the process of estimating those losses is not particularly comprehensive, and the ability to understand and articulate an organization’s risk from network penetrations in a standard and consistent manner does not exist — and may not exist for the foreseeable future.

RAND created a framework that portrays the struggle of organizations to minimize the cost arising from insecurity in cyberspace over a 10-year period. Those costs include the losses from cyberattack, the direct costs of training users, and the direct cost of buying and using cyber safety tools.

Additional costs also must be factored in, including the indirect costs associated with restrictions on employees using their personal devices on company networks and the indirect costs of air-gapping — ensuring a computer network is physically isolated from unsecure networks. This is particularly true for sensitive sub-networks.
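The cost categories above lend themselves to a simple back-of-the-envelope total. The sketch below is purely illustrative — it is not RAND’s model, and every dollar figure is hypothetical — but it shows how the direct and indirect components the study describes would add up for one organization.

```python
# Illustrative sketch only: a toy tally of the cost categories described
# in the RAND report, NOT the report's actual model. All figures invented.

def total_security_cost(losses_from_attacks, training, tools,
                        byod_restrictions, air_gapping):
    """Sum direct and indirect annual costs of coping with cyber insecurity."""
    direct = losses_from_attacks + training + tools
    indirect = byod_restrictions + air_gapping
    return direct + indirect

# Hypothetical annual figures (in $ millions) for one organization.
cost = total_security_cost(
    losses_from_attacks=12.0,  # losses from successful cyberattacks
    training=2.5,              # direct cost of training users
    tools=6.0,                 # buying and using cyber safety tools
    byod_restrictions=1.5,     # indirect cost of personal-device limits
    air_gapping=3.0,           # indirect cost of isolating sub-networks
)
print(cost)  # 25.0
```

A real model, like the one RAND built, would also capture how these components interact over a 10-year horizon — spending more on tools, for example, should reduce expected attack losses.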

The RAND study includes recommendations for both organizations and policymakers. Organizations need to determine what needs to be protected and how badly, including what machines are on a company’s network, what applications are running and what privileges have been established. Employees’ desire to bring their own devices and connect them to the company network also can increase vulnerabilities.

Libicki said most of the chief information security officers who were interviewed were not interested in government efforts to improve cybersecurity. However, the RAND researchers believe government could play a useful role. For example, a government guide outlining how systems fail — similar to guides for aviation and medical fields — could help build a body of knowledge to help educate companies with the goal of developing higher levels of cybersecurity.

When bosses ‘serve’ their employees, everything improves

May 7, 2015

When managers create a culture in which employees know the boss puts employees’ needs over his or her own, the result is measurable improvements in customer satisfaction, higher job performance by employees, and lower turnover, according to research by Robert Liden, Sandy Wayne, Chenwei Liao, and Jeremy Meuser just published in the Academy of Management Journal.

Employees feel the most valued, and in return give back to the company and its customers, when their bosses create a culture of trust, caring, cooperation, fairness and empathy. According to Sandy Wayne, one of the authors of the research, “The best business leadership style is far from, ‘Do this. Don’t do that.’ A servant leader looks and sounds a lot more like, ‘Is there anything I can do to help you?’ Or, ‘Let me help you….’ Or, ‘What do you need to…?’ This approach helps employees reach their full potential.”

The corresponding admiration employees have for bosses who care about them manifests itself in teamwork, loyalty and dedication to the business and its customers. The leadership style trickles down. Wayne said, “It’s contagious. The employees see their leaders as role models and often mimic those qualities, creating a culture of servant leadership. This serving culture drives the effectiveness of the business as a whole.”

The study was conducted at the Jason’s Deli national restaurant chain, and the sample included:

  • 961 employees
  • 71 Jason’s Deli restaurants
  • 10 metropolitan areas.

The findings were based on data from surveys completed by managers, employees, and customers, and data from corporate records. “The University of Illinois at Chicago research project on Servant Leadership has provided a remarkable insight into the myriad of opportunities to enhance our greatest asset – our culture,” Joe V. Tortorice, chairman and founder of Jason’s Deli, said. “The professional interpretation of the data has educated and inspired our executive team.”

Professor Wayne says stores with servant leaders experienced the following positive outcomes:

  • 6 percent higher job performance
  • 8 percent more customer service behaviors
  • employees 50 percent less likely to leave the company

(See infographic below)

The study suggests this is an increasingly relevant form of leadership, one that lends support to the premise that if businesses lead by caring for their people, the profits will take care of themselves.

How limiting CEO pay can be more effective, less costly

April 15, 2015

CEOs make a lot of money from incentive pay tied to stock performance. Although such schemes help align executives’ interests with those of shareholders, they are not necessarily better than schemes that rely on trust between the board and executives.

“Ironically, the necessary trust is easier to establish when the alternative of using stock-based pay is less powerful. Our research found that government-imposed limits on contingent compensation make stock-based pay a worse alternative, facilitating superior trust-based incentives,” says Ben Hermalin, an economist in the Haas Economic Analysis and Policy Group, UC Berkeley’s Haas School of Business.


The paper, “When Less is More: The Benefits of Limits on Executive Pay,” forthcoming in the Review of Financial Studies, is co-authored by Prof. Hermalin and Peter Cebon, senior research fellow, Melbourne Business School, University of Melbourne.


From the paper’s abstract: “We derive conditions under which limits on executive compensation can enhance efficiency and benefit shareholders (but not executives). Having their hands tied in the future allows a board of directors to credibly enter into relational contracts with executives that are more efficient than performance-contingent contracts. This has implications for the ideal composition of the board. The analysis also offers insights into the political economy of executive-compensation reform.”

The full study is published online.

Legalizing marijuana and the new science of weed

March 23, 2015

More than a year into Colorado’s experiment legalizing marijuana, labs testing the plants are able for the first time to take stock of the drug’s potency and contaminants — and openly paint a picture of what’s in today’s weed. At the 249th National Meeting & Exposition of the American Chemical Society (ACS), one such lab will present trends — and some surprises — that its preliminary testing has revealed about the marijuana now on the market.

ACS, the world’s largest scientific society, is holding the meeting here through Thursday. It features nearly 11,000 presentations on a wide range of science topics. A brand-new video on the research is also available online.

Three major patterns have emerged over the past few months since Andy LaFrate, Ph.D., and his lab began testing marijuana samples. Those patterns concern potency, amounts of a substance called CBD and contaminants in the products.

“As far as potency goes, it’s been surprising how strong a lot of the marijuana is,” LaFrate says. “We’ve seen potency values close to 30 percent THC, which is huge.” LaFrate is the president and director of research of Charas Scientific, one of eight labs certified by Colorado to do potency testing.

THC is an abbreviation for tetrahydrocannabinol, the psychoactive compound in the plant. He explains that three decades ago, THC levels were well below 10 percent. THC content has tripled in some strains because producers have been cross-breeding them over the years to meet user demands for higher potency, he says.

But an unexpected consequence of this breeding has occurred, says LaFrate. Many of the samples his lab has tested have little to no cannabidiol, or CBD. CBD is a lesser known compound in marijuana that is of increasing interest to medical marijuana proponents. Researchers are investigating CBD as a treatment for schizophrenia, Huntington’s disease and Alzheimer’s disease. It is also being considered for anxiety and depression. But unlike THC, CBD doesn’t get people high — that’s a key trait for many people who are wary of buzz-inducing drugs and for potential medical treatments for children. As for recreational users, the lack of CBD in marijuana means that many of the hundreds of strains they select from could in actuality be very similar chemically, according to LaFrate.

“There’s a lot of homogeneity whether you’re talking medical or retail level,” he says. “One plant might have green leaves and another purple, and the absolute amount of cannabinoids might change, which relates to strength. But the ratio of THC to CBD to other cannabinoids isn’t changing a whole lot.” That means there might be little difference in how the varieties make you feel, even though some people claim one kind will make you mellow and another will make you alert, LaFrate explains.

As for contamination testing, although Colorado doesn’t yet require it, some producers have voluntarily submitted samples to see what’s in their products. LaFrate says the results have been surprising. His lab looks for both biological and chemical contaminants, such as pathogenic microbes and solvents.

“It’s pretty startling just how dirty a lot of this stuff is,” he says. “You’ll see a marijuana bud that looks beautiful. And then we run it through a biological assay, and we see that it’s covered in fungi.”

The lab also finds varying levels of chemical contaminants such as butane, which is used to create marijuana extracts. Contamination isn’t necessarily a cause for alarm, but it does signal a need to figure out what levels are safe.

“It’s a natural product,” LaFrate says. “There’s going to be microbial growth on it no matter what you do. So the questions become: What’s a safe threshold? And which contaminants do we need to be concerned about?”

In other words, legalizing marijuana has raised a lot of issues that still have to be hammered out. LaFrate, who has been involved with the policy side of Colorado’s new marijuana market, as well as the laboratory side, says he expects regulations will continue to evolve as scientists, lawmakers and others learn more about the plant and its products.

How to avoid making a bad hire

February 18, 2015

Bad hiring decisions cost employers millions of dollars, damage workplace morale, reduce productivity and account for more than half of employee turnover nationwide. It doesn’t have to be that way, according to a new study that reveals how a few minor changes in the wording of a job advertisement can increase the size and quality of an applicant pool.

Think of a typical job ad. It focuses on what the employer wants from the applicant: academic degrees, specific skills and a strong work ethic, for example. But David Jones, associate professor of business, has found that ads focusing on what employers can provide job seekers — like work autonomy, career advancement and inclusion in major decisions — result in better employee-company matches. And these ads produce larger numbers of more qualified applicants.

For the study, which will appear in a forthcoming issue of the Journal of Business and Psychology, Jones and his co-authors, Joseph Schmidt from the University of Saskatchewan and Derek Chapman from the University of Calgary, manipulated real job ads at a large engineering-consulting firm in Canada. It’s perhaps the first study, Jones says, that uses data of this kind collected in a field setting with active job seekers who were applying for actual professional positions. And the results were clear. Ads that focused on “needs-supplies” (N-S) fit (what the organization can supply to meet an applicant’s needs) received almost three times as many highly rated applicants as ads with “demands-abilities” (D-A) fit wording (what abilities and skills the organization demands of candidates).

‘Superstar’ applicants attracted to employee-centered ads

The study, “Does Emphasizing Different Types of Person-Environment Fit in Online Job Ads Influence Application Behavior and Applicant Quality? Evidence from a Field Experiment,” was based on responses from 991 applicants who responded to 56 ads for engineering and project management-based positions emphasizing N-S or D-A fit. Jones, who has a doctorate in industrial and organizational psychology, and his colleagues wrote the N-S fit statements based on the psychological needs of humans described in self-determination theory and included the motivating work factors outlined in the long-established job characteristics model.

Part of the reason so many employers still run D-A-heavy ads, Jones suspects, is because the people writing them often have little training in this area, have very specific skill gaps they need to fill quickly, or rely on headhunters who might focus on their clients’ needs more than the applicants’ needs. “A hiring manager in a specific unit or a supervisor of the second shift in manufacturing with little training in this stuff may be crafting the ad,” Jones says. “So it’s not surprising that it’s filled with D-A statements because they want someone with a specific skill set that they don’t have to spend a lot of time training and who can start day one.”

The solution, however, isn’t to simply slap in some N-S statements to make the job sound more worker-friendly. “It’s key not to add these types of statements if they aren’t true,” says Jones. “If you create what is called a psychological contract where the applicant has an expectation of what is going to happen as an employee and then it doesn’t, the people you hire are less likely to go above and beyond and are more likely to quit much sooner than they otherwise would.”

The researchers also used survey responses from 91 of the 991 applicants to explore the importance of N-S wording in job seekers’ decisions to apply. Jones theorizes that the reason N-S fit ads solicit larger pools is because they attract both “the superstars who have the luxury of applying to a small number of positions” as well as applicants with average resumes who apply for as many positions as possible during a job search. With a D-A approach, Jones says, the applicant pool is composed primarily of the latter: “average applicants who use the shotgun approach.”

Public startups boom under JOBS Act, study shows

January 29, 2015

The JOBS Act is doing its job and getting more startups to go public, according to a new study from the University at Buffalo School of Management.

Forthcoming in the Journal of Financial Economics, the study found that the Jumpstart Our Business Startups (JOBS) Act increased initial public offerings (IPOs) by 21 per year since it was passed in April 2012 — a 25 percent increase over 2001-2011 levels. Three-quarters of the new offerings came from the biotechnology/pharmaceutical industry.

Study co-author Michael Dambra, PhD, assistant professor of accounting and law in the UB School of Management, says businesses that are heavily invested in product research and development — like those in the biotechnology/pharmaceutical industry — have the most to gain from the JOBS Act.

“Because the JOBS Act allows firms to communicate with prospective investors before disclosing required firm-specific information publicly, companies can defer releasing costly research information to competitors,” says Dambra. “For example, without the JOBS Act, a pharmaceutical company would have to immediately reveal details about drugs they’re working on when going public, allowing rivals to counter with generic options.”

An IPO is the first sale of stock by a private company to the public. The JOBS Act streamlines the process for emerging growth companies — those with less than $1 billion in annual revenue — by exempting them from certain accounting and disclosure requirements and allowing them to communicate with prospective investors prior to a public filing of their registration statement.

The study analyzed three samples of data to determine how the JOBS Act has affected IPO activity: an international sample of IPOs from January 2001 to March 2014, a hand-collected sample of U.S. IPOs in the two years before and after the JOBS Act, and a domestic sample of IPOs from January 2001 to March 2014.

Dambra cautions, however, that the recent bull market may have also had an impact on the boom in IPOs and suggests that additional research is warranted.

“While we control for market conditions in our tests, we cannot be certain that our results would hold in a bear market,” says Dambra.

Limiting internet congestion a key factor in net neutrality debate

December 10, 2014

Too many vehicles on the highway inevitably slow down traffic. On the Internet information highway, consumers value high-speed Internet service, but there is little reason to think broadband traffic congestion will improve if the Federal Communications Commission abandons net neutrality, according to economic research.

In their paper, “The Economics of Network Neutrality,” Ben Hermalin, Haas Economics Analysis and Policy Group, and Nicholas Economides, Berkeley-Haas visiting professor from NYU’s Stern School of Business, find that if Internet service providers (ISPs) initiate price discrimination in their pricing, a “recongestion effect” will occur. In other words, online delivery channels that are less congested at the onset of new pricing tiers will eventually become recongested as consumer behavior adjusts.

As the net neutrality debate continues, the study published in the RAND Journal of Economics (Vol. 43, No. 4, Winter 2012) provides a reminder of the potential fallout that multiple pricing might have on online traffic.

Hermalin and Economides use models to explore the economics of the current pricing regime known as “net neutrality,” in which residential ISPs, such as AT&T and Comcast, treat all content providers equally and don’t directly charge them for the content they deliver to end users.

The models measure linear pricing versus price discrimination and compare the rate of congestion through the information pipeline between broadband providers and households under these different pricing strategies.

Hermalin says that many existing economic models examining price discrimination haven’t taken the fixed capacity component seriously. Once the fixed capacity component is understood, “relaxing net neutrality becomes a bad thing,” he says. “Except for the ISPs.”

Linear pricing sets a fixed price for a product or service. Price discrimination is a pricing strategy that offers the same or similar product at different price points in order to maximize consumer demand or preference. For example, a type of breakfast cereal may come in two sizes: a small box for individuals and a large box for families. Even though the larger box of cereal may contain twice as much cereal, the price is not double the cost of the small box.
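The cereal example can be made concrete with quick arithmetic. The prices and box sizes below are invented for illustration; the point is that under this kind of price discrimination the per-unit price falls with quantity, whereas under linear pricing every unit costs the same.

```python
# Hypothetical numbers illustrating the cereal example above.
small_oz, small_price = 10, 3.00   # small box for individuals
large_oz, large_price = 20, 5.00   # large box holds twice as much cereal...

# ...but does not cost twice as much: the per-ounce price differs.
per_oz_small = small_price / small_oz   # $0.30 per ounce
per_oz_large = large_price / large_oz   # $0.25 per ounce

# Under linear pricing, the large box would cost exactly double the small.
linear_large_price = 2 * small_price    # $6.00 vs. the actual $5.00

print(per_oz_small, per_oz_large, linear_large_price)
```

The same logic underlies tiered broadband pricing: different "sizes" of service are offered at prices that are not proportional to the capacity delivered.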

President Obama supports net neutrality but some ISPs continue to lobby the FCC to authorize “paid prioritization” or the creation of Internet “fast lanes” for those willing to pay more.

To better understand broadband congestion, consider Prof. Hermalin’s hypothetical example of traffic on a real highway. If two of three lanes were reserved just for Mercedes-Benz vehicles, drivers of Mercedes cars would enjoy a faster commute, while everyone else in the single remaining lane would be forced to slow down due to the added congestion. Predictably, Hermalin explains, more people would start buying Mercedes in order to take advantage of two lanes rather than one. The result? The two lanes that were previously less congested would recongest.

“Ultimately there is no real benefit because there is a fixed capacity on the highway,” says Hermalin. “Likewise, the ISPs have a fixed amount of bandwidth to spread around unless they invest in more.”

In the net neutrality debate, ISPs claim that in order to invest in more bandwidth, they need to charge content providers (Netflix, Amazon, etc.) either for streaming certain content or for facilitating content at premium speed. For years, the FCC has debated whether to alter the current system of a neutral network.

The findings also suggest that while consumers may be willing to pay more for faster service, if net neutrality rules were relaxed, the larger economic fallout would eventually be that people spend less in reaction to rising prices.

The FCC’s authority to regulate Internet traffic is currently under appeal as broadband providers challenge whether providing Internet service is a utility subject to FCC regulation.
University of California – Berkeley Haas School of Business

Using social media for behavioral studies is cheap, fast, but fraught with biases

December 1, 2014

The rise of social media has seemed like a bonanza for behavioral scientists, who have eagerly tapped the social nets to quickly and cheaply gather huge amounts of data about what people are thinking and doing. But computer scientists at Carnegie Mellon University and McGill University warn that those massive datasets may be misleading.

In a perspective article published in the Nov. 28 issue of the journal Science, Carnegie Mellon’s Juergen Pfeffer and McGill’s Derek Ruths contend that scientists need to find ways of correcting for the biases inherent in the information gathered from Twitter and other social media, or to at least acknowledge the shortcomings of that data.

And it’s not an insignificant problem; Pfeffer, an assistant research professor in CMU’s Institute for Software Research, and Ruths, an assistant professor of computer science at McGill, note that thousands of research papers each year are now based on data gleaned from social media, a source of data that barely existed even five years ago.

“Not everything that can be labeled as ‘Big Data’ is automatically great,” Pfeffer said. He noted that many researchers think — or hope — that if they gather a large enough dataset they can overcome any biases or distortion that might lurk there. “But the old adage of behavioral research still applies: Know Your Data,” he maintained.

Still, social media is a source of data that is hard to resist. “People want to say something about what’s happening in the world and social media is a quick way to tap into that,” Pfeffer said. Following the Boston Marathon bombing in 2013, for instance, Pfeffer collected 25 million related tweets in just two weeks. “You get the behavior of millions of people — for free.”

The type of questions that researchers can now tackle can be compelling. Want to know how people perceive e-cigarettes? How people communicate their anxieties about diabetes? Whether the Arab Spring protests could have been predicted? Social media is a ready source for information about those questions and more.

But despite researchers’ attempts to generalize their study results to a broad population, social media sites often have substantial population biases; generating the random samples that give surveys their power to accurately reflect attitudes and behavior is problematic. Instagram, for instance, has special appeal to adults between the ages of 18 and 29, African-Americans, Latinos, women and urban dwellers, while Pinterest is dominated by women between the ages of 25 and 34 with average household incomes of $100,000. Yet Ruths and Pfeffer said researchers seldom acknowledge, much less correct, these built-in sampling biases.

Other questions about data sampling may never be resolved because social media sites use proprietary algorithms to create or filter their data streams and those algorithms are subject to change without warning. Most researchers are left in the dark, though others with special relationships to the sites may get a look at the site’s inner workings. The rise of these “embedded researchers,” Ruths and Pfeffer said, in turn is creating a divided social media research community.

As anyone who has used social media can attest, not all “people” on these sites are even people. Some are professional writers or public relations representatives who post on behalf of celebrities or corporations; others are simply phantom accounts. Some “followers” can be bought. The social media sites try to hunt down and eliminate such bogus accounts — half of all Twitter accounts created in 2013 have already been deleted — but a lone researcher may have difficulty detecting those accounts within a dataset.

“Most people doing real social science are aware of these issues,” said Pfeffer, who noted that some solutions may come from applying techniques already developed in fields such as epidemiology, statistics and machine learning. In other cases, scientists will need to develop new techniques for managing analytic bias.
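One such borrowed technique from survey statistics is post-stratification weighting, which reweights a demographically skewed sample so that each group counts in proportion to its share of the target population. A minimal sketch of the idea follows; the demographic groups, shares and approval rates are invented purely for illustration, not drawn from the study.

```python
# Post-stratification weighting: give each demographic group a weight equal to
# its population share divided by its sample share, so over-represented groups
# count less and under-represented groups count more.
# All numbers below are hypothetical, for illustration only.

sample_counts = {"18-29": 700, "30-49": 250, "50+": 50}          # skewed platform sample
population_share = {"18-29": 0.21, "30-49": 0.34, "50+": 0.45}   # e.g. census shares

n = sum(sample_counts.values())
weights = {g: population_share[g] / (sample_counts[g] / n) for g in sample_counts}

# Hypothetical per-group approval rates observed in the sample:
approval = {"18-29": 0.80, "30-49": 0.55, "50+": 0.30}

# Naive estimate: dominated by the over-represented young users.
raw_estimate = sum(sample_counts[g] * approval[g] for g in sample_counts) / n

# Weighted estimate: each group's contribution scaled to its population share.
weighted_estimate = sum(
    sample_counts[g] * weights[g] * approval[g] for g in sample_counts
) / n

print(f"raw: {raw_estimate:.3f}, weighted: {weighted_estimate:.3f}")
```

Here the naive estimate overstates approval because the youngest group is heavily over-represented; the weighted estimate shifts toward the older groups' views. The correction only works for biases you can measure — which is exactly why the unacknowledged, unmeasured biases Ruths and Pfeffer describe are the harder problem.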

The Institute for Software Research is part of Carnegie Mellon’s School of Computer Science, now celebrating its 25th year.


About Carnegie Mellon University: Carnegie Mellon is a private, internationally ranked research university with programs in areas ranging from science, technology and business, to public policy, the humanities and the arts. More than 12,000 students in the university’s seven schools and colleges benefit from a small student-to-faculty ratio and an education characterized by its focus on creating and implementing solutions for real problems, interdisciplinary collaboration and innovation. A global university, Carnegie Mellon has campuses in Pittsburgh, Pa., California’s Silicon Valley and Qatar, and programs in Africa, Asia, Australia, Europe and Mexico.
Carnegie Mellon University

Feeling bad at work can be a good thing (and vice versa)

August 21, 2014

Research by the University of Liverpool suggests that, contrary to popular opinion, it can be good to feel bad at work, whilst feeling good in the workplace can also lead to negative outcomes.

In a Special Issue published in Human Relations, Dr Dirk Lindebaum from the University’s Management School, together with his co-author Professor Peter Jordan, developed a new line of study, and commissioned research to further explore the role of emotions in the workplace.

They found that the commonly held assumption that positivity in the workplace produces positive outcomes, while negative emotions lead to negative outcomes, may be in need of reconsideration. This is partly because the assumption fails to take into account the differences in work contexts that affect outcomes.

For instance, anger does not always lead to negative outcomes: it can be a force for good when it is motivated by perceived violations of moral standards and channeled into acting upon injustices. An employee, for example, could express anger constructively after a manager has treated a fellow worker unfairly.

In such cases, anger can be useful to prevent these acts of injustice from repeating themselves in the future. Likewise, being too positive in the workplace, rather than resulting in greater well-being and greater productivity, can lead to complacency and superficiality.

One article within the Special Issue also finds that, within team situations, negativity can have a good effect, leading to less consensus and therefore greater discussion amongst workers, which enhances team effectiveness.

An interesting contradiction is identified in another study in the Special Issue. Here, people derive satisfaction from doing ‘good’ in the context of helplines by providing support to people in times of emotional distress. However, they are negatively affected by their line of work because others shun them in social situations, for fear of catching the emotional taint they attribute to the helpline workers’ profession.

Management expert, Dr Lindebaum said: “The findings of the studies published in this Special Issue challenge the widely held assumption that in the workplace positive emotions generate or engender a positive outcome, and vice versa.

“This Special Issue adds to our knowledge and understanding of how positive and negative emotional dynamics affect the working environment, and has practical application and relevance in the workplace.”


‘When it can be good to feel bad and bad to feel good: Exploring asymmetries in workplace and emotional outcomes’ was edited by Dr Dirk Lindebaum and Professor Peter Jordan of Griffith University in Australia, and is published in Human Relations, September 2014 (Vol. 67, Issue 9).
University of Liverpool
