The Oxford dictionary defines disinformation as “false information which is intended to mislead.” That simple definition seems to understate the problem, given that everything from Brexit to the election of Donald Trump has been blamed (rightly or wrongly) on disinformation.
The root cause of disinformation is the same as the one plaguing open source software: the open nature of the ecosystem means that anyone can upload anything they want to a “platform.” For open source software, that means bad actors can upload malware disguised as valid code to an open source repository (such as PyPI, RubyGems, CPAN or npm). In the information world, it translates into bad actors creating incorrect or misleading posts on Twitter, Facebook, YouTube and the like.
There are even more parallels. For example:
- Disinformation is equivalent to malware in the open source software space: something designed to “pollute the environment” of any software developer who downloads and installs it, in the same way that disinformation can pollute the mind of a reader, sowing doubt or reinforcing biases.
- Misinformation is incorrect information. This is usually posted through ignorance rather than any intention to mislead. In the open source world, misinformation is equivalent to software that inadvertently ships with undocumented bugs or security issues.
- Malinformation is more difficult to pin down, ranging from “correct information used in a misleading context” all the way to “whatever a powerful authority doesn’t like.” In the open source software world, this is much easier to understand since it includes anything that contravenes corporate governance, such as the use of an end-of-life (EOL) programming language or the inclusion of a GPL-licensed package in a commercial product’s codebase.
Given these parallels, it shouldn’t come as a surprise that the histories of disinformation and open source software have largely followed the same trajectory:
- Open source was initially ignored by enterprises, then vilified by many of the largest software vendors, and finally adopted by everyone – including the US government.
- Disinformation in social media was largely ignored by powerful authorities, despite the fact that some analysts were sounding the alarm early on. However, since 2015/2016 (i.e., post-Brexit and post-Trump election), authorities have moved squarely into the vilify stage.
The result is a crisis of trust in the information we read and the open source code we use, which has manifested in a post-truth world exemplified by institutional decay (in governments, mass media, scientific research, etc), and a crisis in the security of our software supply chain.
There are currently a number of initiatives aimed at addressing the software supply chain issue in the open source world. What (if anything) can the information world learn to help address the current post-truth crisis, which only seems to be getting worse?
Open Source Learnings for the Disinformation Problem
Both disinformation in social media and exploits in open source code have been around for decades, but it’s only recently that bad actors have weaponized them to such an extent that, for example, countries now routinely fear election interference while enterprises, hospitals and critical infrastructure sites are under increasing assault from ransomware.
In the open source world, this escalation is due to the fact that bad actors have discovered the economies of scale that can be realized by compromising a popular application with malware, which then gets propagated downstream to hundreds or even thousands of customers, providing hackers with a potential entry point into each of them. SolarWinds is the poster child for this kind of attack.
In the information world, this escalation is a result of improvements in analytics (more data; better algorithms) that can identify how best to take advantage of the fact that provocative social media posts generally get more attention/promotion by a platform’s algorithms. Cambridge Analytica was the poster child for this kind of disinformation.
But it doesn’t have to be this way. There are solutions that can at least curb the proliferation of these exploits.
Checks & Balances
Open source repositories are a good analogy for the social media platforms on which disinformation is spread. Checks and balances exist in the open source world such that when malicious or insecure code is uploaded, it is typically found and removed from the repository within just a few days, despite the fact that many repositories host hundreds of thousands of packages. That kind of quick turnaround is one of the benefits of mass adoption paired with crowdsourced feedback.
The closest parallel with disinformation is Wikipedia, which also crowdsources information that is subject to checks and balances. The problem comes when politics dominate, capturing the editorship and suppressing conflicting voices, rather than providing multiple views and letting the reader decide for themselves.
Of course, open source movements are no stranger to politics, but bugs and security issues (listed on the US National Vulnerability Database, for example) and even malware (see ActiveState’s Malware Archive) are published publicly so anyone can investigate and determine for themselves the accuracy of the information, rather than having it suppressed or censored by content/fact checkers, some of whom are the platform’s employees and others of whom are third-party entities that may have their own agendas.
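This “anyone can investigate for themselves” property is easy to put into practice. The sketch below, a minimal illustration rather than a real tool, cross-checks a project’s dependencies against a public advisory feed; the feed format and package names here are hypothetical stand-ins for data you might export from the NVD or a similar public source.

```python
# Sketch: auditing dependencies against publicly available advisory data.
# The advisory entries below are illustrative, not real CVEs.
import json

ADVISORIES = json.loads("""
[
  {"package": "examplepkg", "affected_versions": ["1.0.0", "1.0.1"], "id": "CVE-XXXX-0001"},
  {"package": "otherpkg",   "affected_versions": ["2.3.0"],          "id": "CVE-XXXX-0002"}
]
""")

def audit(dependencies):
    """Return advisory matches for any (name, version) pair in dependencies."""
    findings = []
    for name, version in dependencies:
        for adv in ADVISORIES:
            if adv["package"] == name and version in adv["affected_versions"]:
                findings.append((name, version, adv["id"]))
    return findings

deps = [("examplepkg", "1.0.1"), ("otherpkg", "2.4.0")]
print(audit(deps))  # only examplepkg 1.0.1 matches an advisory
```

Because the underlying data is public, anyone can re-run such a check and verify the result independently, which is exactly the kind of transparency fact-checking processes typically lack.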
After all, the counter to bad information should be better information: eliminate hate speech and direct calls for violence, and allow the rest of the arguments to balance themselves. But just like on Wikipedia, it seems we no longer trust that the best argument will win.
Those who care about restoring the balance can take a lesson from the open source world, where volunteers coalesce to establish norms, guidelines and standards that can help steer the open source ecosystem in the right direction. It requires lots of time and effort, but this kind of organized, collective power will trump content checkers and moderators every time.
Legislation
In response to the software supply chain crisis, President Biden announced Executive Order 14028 back in 2021, which provided recommendations aimed at improving the nation’s cybersecurity. More significantly, it imposed requirements and deadlines for government software vendors to secure their software supply chain, or lose access to the US government market.
The Order was also a catalyst that kicked off legislation worldwide proposing cybersecurity rules, regulations and certification for “all products with digital elements.” Failure to comply can result in fines and/or being denied access to a country’s market. It also led to the US government’s own follow-up, the National Cybersecurity Strategy 2023, which proposes to:
- Limit software vendors’ ability to contractually disclaim liability for poor software security.
- Hold software vendors liable for insecure software development and maintenance practices.
While these measures sound like a sea change in software development, they’ve proven to be little more than paper tigers, since governments in general, and the US government in particular, are reluctant to impose restrictions on the software industry for fear of constraining innovation.
In response to the disinformation crisis, governments have also proposed legislation, such as the UK’s Online Safety Bill and, more notably, the EU’s Digital Services Act (DSA), which came into effect this year. The DSA has the broad goal of fostering online safety but, similar to the software legislation, it also imposes penalties on social media companies that fail to comply with rules around removing content the EU designates as objectionable. Failure to comply in a timely manner may result in the platform being banned from the EU and/or fined up to 6% of global revenues for platforms with more than 45M users.
While the EU has issued threats based on the DSA, no penalties have been levied to date. Whether this is due to a reluctance (once again) to constrain innovation, or a fear of being seen as imposing authoritarian censorship, remains to be seen.
At the end of the day, though, neither software nor social media legislation is likely to be successful on a region-by-region basis due to the fluid nature of digital media, which recognizes no geographic boundaries. For example, when Sweden imposed a Robin Hood tax on stock trades within its borders back in the 1980s, speculators simply executed their trades on other countries’ exchanges. After seven years, the Swedish government was forced to repeal the legislation rather than lose out entirely on the revenue generated by stock traders. Similarly, no region is willing to risk banning software or social media since they’re key enablers of business for their national companies. Inevitably, those companies would be forced to re-incorporate in a jurisdiction that doesn’t impose restrictions.
Global imposition of legislation like DSA is not likely to play well in jurisdictions with strong or even constitutional support for free speech. Similarly, software security legislation is unlikely to be welcomed in regions that see their software industry as driving economic and technological advantage.
Best Practices
Enterprises are fond of adopting best practices since they can simply impose them on their employees. Those same best practices have been slow to reach downstream audiences, though, whether the developers of open source software or the users of social media, since they’re rightly seen as a barrier to adoption. However, some practices are starting to become more generally accepted, including:
- Multi-factor authentication (MFA) coupled with verification can help curb account hijacking and impersonators, limiting the anonymity that can be a powerful driver of bad behavior.
- Identifying and eliminating traffic that originates from bot and zombie accounts that can be used to publish and/or promote disinformation.
- Content labeling that can add clarifications to more contentious posts, exposing bias or misleading context.
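The second practice above, flagging bot and zombie accounts, often starts with simple behavioral heuristics before any machine learning is involved. The sketch below is a hypothetical illustration of that idea; the signals, weights and thresholds are invented for clarity, not production values used by any real platform.

```python
# A minimal, hypothetical sketch of heuristic bot-account scoring.
# All thresholds and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_hour: float
    account_age_days: int
    followers: int
    following: int
    verified: bool

def bot_score(a: Account) -> float:
    """Return a 0..1 suspicion score from simple behavioral signals."""
    score = 0.0
    if a.posts_per_hour > 20:                    # inhuman posting rate
        score += 0.4
    if a.account_age_days < 7:                   # newly created account
        score += 0.2
    if a.following > 10 * max(a.followers, 1):   # follow-spam pattern
        score += 0.3
    if not a.verified:                           # no identity verification
        score += 0.1
    return min(score, 1.0)

suspect = Account(posts_per_hour=50, account_age_days=2,
                  followers=3, following=900, verified=False)
print(bot_score(suspect))  # → 1.0 (every signal fires)
```

Accounts scoring above some cutoff could then be rate-limited or queued for human review, which is how heuristics like these typically feed into a larger moderation pipeline.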
But even with best practices in place across most open source communities, malicious code still gets uploaded to central repositories and downloaded by hundreds or even thousands of developers before the community can deal with it. Similarly, no matter how good a social media platform is at vetting users or moderating content, too many users will still be exposed.
At the end of the day, whether it’s open source code or social media posts, the reality is “user beware,” which raises the need for:
- Tools – with effective labeling, social media users would be able to filter out content they find offensive, disturbing or unfavorable. Automated labeling via AI is one avenue worth exploring, since content labeling (via sentiment analysis, photo recognition, etc) is a core strength of the technology. For open source users, filters placed on code import routines can be extremely effective at limiting a company’s exposure to malware.
- Education – while awareness of software supply chain security is growing, companies still spend far too much of their time and resources on fixing vulnerabilities, rather than focusing on emerging threats. Similarly, media literacy remains an afterthought in most educational systems despite its increasing importance. Countries should look to programs like those run by Estonia for guidance.
Conclusions – Solving Disinformation with Open Source Principles
The disinformation problem is not an easy one to solve, but the current thinking that government-mandated censorship will just make it go away is misguided. Like software supply chain security, the problem is far more complex than a blunt instrument like legislation will ever be able to address.
Instead, a multi-pronged approach involving technology, education and regulatory guidance is likely to be far more effective – and beneficial – for both society and business in the long run.
Will there ever be a time when governments embrace social media the way enterprises have embraced open source software? Most government entities (and many politicians) already make use of social media in at least a cursory (read: self-promotional) way, but have yet to make it a significant channel for communication with their constituents, preferring to rely on traditional broadcast media that imposes distance from a populace they may feel uncomfortable dealing with directly.
In other words, like the current state of the open source supply chain, too many pitfalls still exist. However, open source initiatives like SLSA are well on their way to closing the weakest links in the software supply chain. There’s no reason social media can’t benefit from many of the same lessons open source has learned.
Next Steps
For more lessons from the software supply chain security crisis that may help address the disinformation problem, download our Journey to Software Supply Chain Security eBook.