Social media platforms like Facebook and Twitter have evolved from simple communication tools into powerful political instruments. They don't just inform us; they influence us, sometimes in ways we're unaware of. These platforms can shape public opinion and sway political outcomes, not only by pushing certain information at us but also by silencing or withholding crucial details we need to know. What we see and hear on social media is carefully curated, with algorithms prioritising what they think will capture our attention, often at the expense of broader perspectives or important truths. This creates a one-sided view of reality that can have profound consequences for how we think, vote, and engage in political discussions. In the hands of political campaigns or powerful influencers, these platforms can even manipulate our beliefs, shaping our understanding of the world in ways that benefit particular interests while leaving us unaware of the full picture.

Fake news – is it real?

Sadly, it is. One of the most pressing concerns is the spread of fake news and disinformation. It’s no secret that misleading content and false claims can spread quickly across social media platforms, sometimes with severe consequences. In recent years, political campaigns, foreign interference, and even conspiracy theorists have used social media to spread false information, often with the aim of influencing voters and disrupting democratic processes.

During the 2016 U.S. Presidential Election, for example, social media platforms became the focal point for the spread of fake news, with many accusing platforms like Facebook and Twitter of failing to properly moderate content. The spread of these misleading stories undermined trust in the media, confused the public, and ultimately impacted elections across the globe. As a result, Facebook and Twitter faced immense pressure to take action.

However, it’s important to note that misinformation was not coming from just one side of the political spectrum. Fake news and misleading claims were being spread by various parties, candidates, and organisations. But despite this, the response from social media platforms seemed to disproportionately target one political side. Fact-checking and content moderation practices appeared to single out one party more than others, leading to accusations of bias.

This selective focus could be seen as a form of misinformation in itself—if the public is led to believe that only one side of the political spectrum is being “economical with the truth,” this could influence the perception of fairness. By disproportionately fact-checking and scrutinising one party, social media platforms risk giving the impression that one side is the primary offender, creating an unbalanced narrative. In turn, this could have consequences, as it may influence how people view the political landscape, shaping their trust in certain sources of information.

Facebook’s approach to fake news

Facebook, the social media giant with over 2.8 billion active users, is built on the idea of connecting people from all corners of the globe. Mark Zuckerberg, Facebook’s founder, has often said that the platform’s primary goal is to create a space where people can build communities and stay connected, regardless of geographical or cultural barriers. This sense of global community is central to Facebook’s mission statement: “To give people the power to build community and bring the world closer together.”

This mission reflects Facebook’s desire to foster communication and connections, even if those connections involve political discussions or contentious debates. Facebook sees itself as a platform for freedom of expression—people should be able to post what they want, share opinions, and discuss issues freely.

When it comes to political ads, this open approach means that Facebook is willing to run advertisements from political figures, organisations, or activists. Facebook has set transparency guidelines so that voters can see who is behind a political ad, but it still allows those ads to appear even if they contain misleading content.

This approach has sparked significant controversy. Critics argue that Facebook’s willingness to accept political ads, regardless of their accuracy, undermines the integrity of democracy. By prioritising the ability to connect people, Facebook opens itself up to criticism that it is enabling the spread of disinformation for the sake of profit. However, Facebook justifies this approach by claiming that it’s simply a platform for discussion, not an arbiter of truth.

In the case of political ads, this has sometimes meant that misinformation from all sides—whether right-wing, left-wing, or elsewhere—was allowed to circulate. However, the fact-checking efforts on Facebook have often been criticised for targeting particular political figures or parties more than others, leading to claims that the platform is selectively moderating content. If this is the case, it creates a misleading narrative, suggesting that only one side is “playing fast and loose” with the facts. This could influence the public to believe that certain political views are inherently more truthful than others, potentially skewing perceptions and undermining trust in the platform.

Facebook v Twitter mission statements and their impact on the spread of fake news


Facebook’s mission statement is: “to give people the power to build community and bring the world closer together.”

Twitter’s mission statement is: “to give everyone the power to create and share ideas and information instantly without barriers.”

Facebook’s mission statement focuses on connecting people. The nature of the information they share is irrelevant.

By contrast, Twitter’s mission statement focuses on instant free speech without barriers. It invites users to say the first thing that comes to mind without being censored or having to worry about political correctness.

Indeed, these are two completely different mission statements.

Twitter’s approach: free speech without barriers

In stark contrast to Facebook’s policy, Twitter has taken a much more stringent approach to political ads. In 2019, Twitter made the groundbreaking decision to ban all political ads on its platform. The reasoning behind this decision was clear: Twitter wanted to limit the influence of money in politics and reduce the spread of misleading or divisive content. The platform argued that political ads could enable powerful organisations or individuals to buy influence and distort political messages, especially given the unregulated nature of social media.

Twitter’s decision to ban political ads was also deeply rooted in its mission statement: “To give everyone the power to create and share ideas and information instantly, without barriers.” Unlike Facebook, Twitter’s mission centres around unfiltered, real-time communication. In the spirit of free expression, Twitter is committed to ensuring that everyone’s voice is heard, without favouring those who can afford to pay for visibility through ads.

By banning political ads, Twitter positions itself as a defender of free speech. The company believes that political messaging should not be influenced by money and that the conversations happening on the platform should be genuine, not manipulated by targeted ads. Twitter wants to level the playing field, ensuring that the power to influence is not reserved for those with the deepest pockets.

However, Twitter’s stance has not been without controversy. Critics argue that by banning political ads, Twitter is limiting the ability of candidates and organisations to communicate directly with voters. In an era where political campaigns are increasingly digital, many feel that Twitter’s decision hinders the democratic process by restricting one of the most important forms of political expression.

Moreover, Twitter’s aggressive stance on moderating content—particularly in terms of fact-checking and restricting potentially harmful political content—has also drawn attention. Some argue that this creates an imbalance, particularly if certain political figures or parties are disproportionately targeted with warnings, fact-checks, or account suspensions, while others are allowed to continue without similar scrutiny. This selective enforcement can lead to accusations of bias, which, like Facebook’s moderation approach, may shape public opinion about which side is more truthful or trustworthy.

Furthermore, the paid promotion of political ads arguably suppresses free speech itself, because the practice amplifies whichever ideology can pay the most, rather than ideas that earn attention on merit or genuine public interest.

Political ads and fake news

At the heart of the debate over political ads on Facebook and Twitter lies the companies’ mission statements. These statements aren’t just a few words written on their “About Us” pages—they are the foundation of everything these companies do. Mission statements reflect the values that guide their decisions, from policy changes to long-term strategies.

For Facebook, the mission is all about connection. The company believes that by giving people a platform to share their ideas, regardless of the content, it is fostering a sense of global unity. Political ads, despite their potential for harm, are seen as another form of communication that should be allowed. This openness to all types of content, including political ads, aligns with Facebook’s core value of connecting people around the world.

Twitter’s mission, by contrast, is focused on empowering individuals to share ideas and information without barriers. This commitment to unrestricted, instant communication means that Twitter views political ads as a potential threat to free speech. By banning them, Twitter is staying true to its belief in an open, fair platform where all voices can be heard on equal footing. The company does not want money to determine which voices are heard the loudest.

Both approaches are deeply tied to the companies’ mission statements. While Facebook sees itself as a platform for building communities through any content, Twitter takes a more restrictive approach, focusing on ensuring that speech remains free from financial influence.

Final thoughts

When we look at Facebook and Twitter, we see two platforms that are taking very different stances on political ads. These differences can be traced back to the companies’ mission statements, which provide the framework for how they approach challenges like fake news, political influence, and the role of social media in society.

Facebook’s emphasis on connection means that it’s willing to accept political ads—even if they spread misinformation. In contrast, Twitter’s commitment to free speech without barriers means that it’s willing to forgo the financial benefits of political ads to maintain a platform that prioritises equality of voice. Both companies are trying to solve the same problem—disinformation—but their solutions reflect their deeply held values and the priorities set out in their mission statements.

However, the debate also raises important questions about fairness and bias in content moderation. If social media platforms disproportionately target one political side in their fact-checking and moderation efforts, they may unintentionally create a misleading narrative. In doing so, these platforms risk shaping public perceptions and influencing political discourse in ways that may not reflect the true nature of the problem. It’s clear that mission statements matter more than ever, as they guide these companies’ decisions and shape the future of digital communication. As the world continues to grapple with misinformation, these platforms must carefully consider how their policies influence public trust—and the political landscape as a whole.
