The Cost of Entertainment on Social Media: Meta and the Limits of Corporate Social Responsibility

In a controversial move, Meta announced the removal of fact-checking services from its social media platforms, Facebook and Instagram, on January 7, 2025 (Kaplan 2025). In a recent interview, Meta CEO Mark Zuckerberg claimed, “We tried in good faith to address those concerns without becoming arbiters of truth. But the fact-checkers have just been too politically biased and have destroyed more trust than they’ve created, especially in the US” (Bond and Jingnan 2025). The decision, limited to the United States thus far, was announced mere weeks before Trump returned to office. The timing and rhetoric surrounding this choice underscore the fundamental tension between profit-driven motives and the ethical responsibilities of major tech corporations. Meta’s retreat from fact-checking reveals the limitations of corporate social responsibility (CSR) when it conflicts with business interests. It also raises the question of what exactly a social media company is responsible for (preventing misinformation or protecting free speech) and whether CSR can ever be separated from the pursuit of profit.

The Context: CSR and Misinformation on Social Media

Corporate social responsibility (CSR) is a relatively recent development that calls on businesses to assess their impact on people, the planet, and society at large (Stobierski 2021). Under this framework, a company’s practices must serve not just its own profits but the world in which it operates. As CSR has grown in relevance, companies can now identify themselves as socially responsible through designations such as B Corp certification, social purpose corporations (SPCs), and low-profit limited liability companies (L3Cs) (Stobierski 2021). “Disease, social injustice, economic collapse, and war are weighing heavily on shoppers’ minds,” creating the modern expectation that a business provide more than just products and services (Bowling 2022). CSR is also a powerful marketing tactic with a significant impact on how a company is perceived: shoppers are four to six times more likely to purchase from a purpose-driven company, and 80% will recommend such a brand to friends and family (Bowling 2022). When it comes to social media, companies like Meta have been called on to curb consumer fraud, hate speech, misinformation, and other online ills (Katz 2025).

Widespread public concern about misinformation first arose after Russia used Facebook and other sites to manipulate American voters in the 2016 presidential election (Bond and Jingnan 2025). In a 2019 speech at Georgetown, Zuckerberg said, “You know, no one tells us that they want to see misinformation, right? That’s why we work with independent fact-checkers to stop hoaxes that are going viral from spreading” (Bond and Jingnan 2025). Meta contracted third-party groups (including AFP USA, Check Your Fact, FactCheck.org, Lead Stories, and others) to moderate content in a three-step process: scanning to detect potentially harmful content, assessing whether it violates the law or the platform’s terms of service, and intervening by removing posts, adding warning labels, or reducing their visibility in users’ feeds (Katz 2025).
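To make that three-step flow concrete, the sketch below models it as a simple pipeline. This is an illustration only: the function names, flag signals, and verdict categories are hypothetical stand-ins, not Meta’s actual systems.

```python
# Illustrative sketch of the scan -> assess -> intervene pipeline described
# above. All names, signals, and rules here are hypothetical, not Meta's code.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    text: str
    flagged: bool = False        # set by the scanning step
    label: Optional[str] = None  # set by the intervention step

def scan(post: Post) -> Post:
    """Step 1: flag content matching known hoax or abuse signals."""
    signals = ("miracle cure", "rigged election")  # placeholder signals
    post.flagged = any(s in post.text.lower() for s in signals)
    return post

def assess(post: Post) -> str:
    """Step 2: decide whether flagged content breaks the law or the terms of service."""
    if not post.flagged:
        return "allow"
    return "violates_terms"  # a real system would consult policy and human reviewers

def intervene(post: Post, verdict: str) -> Post:
    """Step 3: remove, label, or demote the post based on the assessment."""
    if verdict == "violates_terms":
        post.label = "warning: disputed by independent fact-checkers"
    return post

post = scan(Post("This miracle cure works!"))
post = intervene(post, assess(post))
print(post.label)  # -> warning: disputed by independent fact-checkers
```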

Demands for fact-checking and content moderation continued to grow through the pandemic era, culminating in Trump’s ban from several social media platforms following the January 6th insurrection (Bond and Jingnan 2025). The ban, however, brought opposing complaints to a head: many people, particularly conservatives, accused Meta of stifling free speech and cast Trump’s ban as proof of censorship (Bond and Jingnan 2025). It is worth noting that while conservative claims have been debunked more often than liberal ones, there is no data suggesting that they were unfairly targeted (Bond and Jingnan 2025). Regardless, the critiques gained powerful traction. Additionally, as reporter Steven Lee Myers noted, “Policing the truth on social media is a Sisyphean challenge. The volume of content – billions of posts in hundreds of languages — makes it impossible for the platforms to identify all the errors or lies that people post, let alone remove them” (Proulx 2025). All of these factors informed Meta’s decision to remove the independent fact-checking program from its US platforms.

The Decision: Meta’s Fact-Checker Removal

In a public announcement, Meta’s Chief Global Affairs Officer Joel Kaplan criticized third-party fact-checking, saying, “As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable” (Kaplan 2025). Arguing that expert fact-checkers can carry biases of their own, Kaplan announced Meta’s transition to a Community Notes model like the one used by Elon Musk’s X (formerly Twitter) (Kaplan 2025). Under this model, users write notes on misleading content, and the most highly rated notes are attached to the post itself. However, the model relies on consensus rather than facts, still exhibits partisan bias, and has not significantly reduced engagement with misinformation on X (Katz 2025). Proposed notes are not visibly attached to posts (users must open the Community Notes window to see them), and it is difficult for raters to agree on more contentious subjects (Bond and Jingnan 2025).
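The consensus problem can be made concrete with a small sketch. X’s published approach requires agreement from raters who usually disagree; assuming a simplified bridging-style rule of that kind (the clusters, thresholds, and data below are invented for illustration), a note surfaces only when both viewpoint groups rate it helpful, so notes on polarized topics tend never to attach, regardless of their accuracy.

```python
# Minimal sketch of a bridging-style Community Notes rule: a note attaches only
# when raters from BOTH viewpoint clusters rate it helpful. The cluster labels,
# threshold, and data are hypothetical; X's real system is more sophisticated.

def note_attaches(ratings: list[tuple[str, bool]], threshold: float = 0.6) -> bool:
    """ratings: (rater_cluster, found_helpful) pairs for one proposed note."""
    for cluster in ("left", "right"):
        votes = [helpful for c, helpful in ratings if c == cluster]
        if not votes or sum(votes) / len(votes) < threshold:
            return False  # no cross-viewpoint consensus -> note stays hidden
    return True

# A factual correction on a neutral topic draws agreement from both sides:
neutral = [("left", True), ("left", True), ("right", True), ("right", True)]
# A note on a polarized topic splits along cluster lines and never attaches:
polarized = [("left", True), ("left", True), ("right", False), ("right", False)]

print(note_attaches(neutral))    # True
print(note_attaches(polarized))  # False
```

Notice that factual accuracy never enters the rule; only cross-group agreement does, which is precisely the “consensus rather than facts” objection raised above.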

In addition to transitioning to Community Notes, Meta announced that it would lift restrictions on topics such as immigration and gender identity, stating that “it’s not right that things can be said on TV or the floor of Congress, but not on our platforms” (Kaplan 2025). Meta will still keep its automated systems focused on tackling illegal and high-severity violations, like pornography (Kaplan 2025). For less severe violations, action will be taken only after community reports (Kaplan 2025). Kaplan announced all of these measures in the name of “free expression,” where “all the good, bad and ugly is on display” (Kaplan 2025).

The Consequences of Meta’s Decision

By eliminating fact-checkers, Meta risks amplifying the very problem it once sought to address: the unchecked spread of misinformation. Community Notes take time to appear and are susceptible to manipulation; studies have found an increase in hateful content on X (Proulx 2025). Users have also spent more time liking and reposting content from dictatorships and terrorist groups such as ISIS (Proulx 2025). In this way, truth becomes a matter of toxic, inconclusive debate rather than fact, which can undermine public trust in digital information and deepen polarization.

At the same time, while fact-checking has a demonstrated effect on users, that effect lasts only a few weeks and rarely produces a significant change in a user’s stance on politicians or other issues (Bond and Jingnan 2025). A number of innocent users were also subjected to bans, deletions, and scrutiny under the original fact-checking program, meaning the previous approach had inefficiencies and failures of its own (Kaplan 2025). This can be viewed as a form of unfair censorship. Furthermore, artificial intelligence (AI) has made content moderation even more challenging: it is difficult to distinguish human from machine-generated content, and AI bots can produce content at an incredibly fast rate (Katz 2025). All of this raises the question of whether a social media company can choose to be responsible for free speech over content moderation, especially as the latter becomes increasingly difficult.

The trade-off between these two responsibilities means that more harmful content will appear in users’ feeds, but fewer voices will be relegated to the sidelines. Since both goals are morally worthy, Meta’s choice to prioritize one over the other could still qualify as socially responsible, just in a different way. However, CSR implies a certain level of positive action. Meta’s transition actually shifts the burden of content moderation away from the platform and onto the public, despite the company’s significant role in shaping online discourse. Thus, until Meta campaigns more actively for free speech measures (e.g., fighting censorship regulations in countries like China), its mission to protect self-expression rings hollow.

The Limits of Corporate Social Responsibility

Indeed, Meta’s decision highlights the inherent weaknesses of voluntary corporate social responsibility. While companies often engage in CSR initiatives to project an image of ethical responsibility, these efforts are ultimately subject to corporate discretion. Unlike legally mandated regulations, voluntary CSR commitments can be abandoned when they no longer align with business objectives. This is why there are no plans to remove fact-checkers in the UK or the rest of Europe, where governments have mandated that tech firms take more responsibility for their content or face consequences (McMahan 2025). Ava Lee of Global Witness argued that “Zuckerberg’s announcement is a blatant attempt to cozy up to the incoming Trump administration,” whose regulatory decisions, including an FTC lawsuit, threaten Meta’s profit margin (McMahan 2025). Like many other CEOs, Zuckerberg met with President Trump at his Mar-a-Lago estate and donated $1 million to his inauguration fund (McMahan 2025). Dana White, a close Trump ally, also recently joined Meta’s board of directors in a show of alliance (McMahan 2025).

Meta’s choice to remove fact-checkers illustrates how corporate social responsibility remains secondary to financial imperatives. This phenomenon is not unique to Meta: many large corporations, including X and YouTube, adopt socially responsible policies only to scale them back when they become financially inconvenient. Yet growing demand for CSR will make abandoning it less straightforward. A Forbes survey found that 76% of consumers would not support a business that opposes their views (Bowling 2022). More importantly, advertisers, who account for most of the revenue at social media firms, do not want their brands circulating alongside potentially damaging content (Proulx 2025); connecting with consumers on social media now poses a brand safety risk (Katz 2025). Some users and advertisers have already left major platforms: in October 2024, an estimated 300,000 to 2.6 million users were leaving X daily (Binder 2024). CSR’s influence is growing to the point where neglecting it may jeopardize a business’s ultimate goal of maximizing profits.

Conclusion: The Need for Regulation

Meta’s removal of fact-checking services exemplifies the fragile nature of corporate social responsibility when it clashes with business priorities. While CSR initiatives can foster positive change, they remain vulnerable to corporate discretion, highlighting the need for external accountability. If the goal is to combat misinformation effectively, society cannot rely solely on voluntary corporate measures; regulatory action is essential to ensure that platforms like Meta fulfill their responsibility in shaping a well-informed public sphere.

References

Binder, Matt. "X’s Declining User Base: Elon Musk's Platform Projected to Lose Millions of Users in 2025." Mashable, https://mashable.com/article/elon-musk-x-declining-user-base-2025.

Bond, Shannon, and Jingnan Huo. "Meta Backs Away from Fact-Checking in the U.S." NPR, 12 Jan. 2025, https://www.npr.org/2025/01/12/nx-s1-5252739/meta-backs-away-from-fact-checking-in-the-u-s.

Bowling, Sarah. "How Corporate Responsibility Is Influencing Consumer Buying Decisions." Forbes, 2 May 2022, https://www.forbes.com/councils/theyec/2022/05/02/how-corporate-responsibility-is-influencing-consumer-buying-decisions/.

Kaplan, Joel. "More Speech and Fewer Mistakes." Meta Newsroom, 7 Jan. 2025, https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes/.

Katz, Jordan. "Ask the Expert: What Meta’s New Fact-Checking Policies Mean for Misinformation and Hate Speech." MSU Today, Michigan State University, https://msutoday.msu.edu/news/2025/ask-the-expert-what-meta-new-fact-checking-policies-mean-for-misinformation-and-hate-speech.

McMahan, Jason, et al. "Facebook and Instagram Get Rid of Fact-Checkers." BBC News, https://www.bbc.com/news/articles/cly74mpy8klo.

Proulx, Natalie. "Should Social Media Companies Be Responsible for Fact-Checking Their Sites?" The New York Times, 14 Jan. 2025, https://www.nytimes.com/2025/01/14/learning/should-social-media-companies-be-responsible-for-fact-checking-their-sites.html.

Stobierski, Tim. "What Is Corporate Social Responsibility? 4 Types." Harvard Business School Online Blog, https://online.hbs.edu/blog/post/types-of-corporate-social-responsibility.
