Some typical ML use cases include:
- Recommendation Engines – ML routines can make recommendations for everything from movies to products to articles and more based on a user’s past interactions.
- Natural Language Processing – covers a wide range of uses, from speech recognition to sentiment analysis to virtual assistants – anything that involves spoken or written text.
- Image Recognition – also covers a wide range of uses, from computer vision to facial verification to deepfake identification – anything that involves images or visual data.
- Simulations – including competitive sports, traffic patterns, and weather forecasts, but also code generation with Large Language Models (LLMs) like ChatGPT – any field with enough historical data to enable the generation of new data.
At base, an ML model is simply an algorithm that has been trained on a dataset. The end goal is to have an algorithm that is able to make accurate predictions based on new data. All ML models will give you output for whatever input you give them, but not every model will tell you how it reached its conclusion, or why it gave you the output that it did. Thus, not all algorithms are the same, and they can be differentiated and classified based on their level of transparency and interpretability.
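To make that concrete, here is a minimal scikit-learn sketch (the dataset and model choices are purely illustrative): an algorithm is fit to training data, then asked to predict on data it has never seen.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# "Training" fits the algorithm to a dataset...
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = KNeighborsClassifier().fit(X_train, y_train)

# ...and the goal is accurate predictions on data the model has never seen.
print(model.predict(X_test[:5]))
print(f"Accuracy on unseen data: {model.score(X_test, y_test):.2f}")
```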
With the rising importance of this field, as well as the increased innovation and reliance on ML to make our decisions for us (whether that's as simple as TikTok deciding which video we should watch, or as frustrating as a loan application that gets declined), it is worthwhile to take a look at the algorithms that are effectively running our lives these days. This post will explore the concepts of white box and black box machine learning models, and examine the topic of algorithm transparency.
What Is a White Box Machine Learning Model and How Do You Build One?
A white box machine learning model (White Box) is one that allows humans to easily interpret how it was able to produce its output and draw its conclusions, thereby giving us insight into the algorithm’s inner workings. White boxes are transparent in terms of:
- How they behave
- How they process data
- Which variables they give weight to
- How they generate their predictions
Examples of such models include linear models, decision trees, and regression trees.
A linear model uses a linear function to model the relationship between input and output variables, so its logic is transparent enough for humans to parse. For any prediction, you can see which input features were taken into consideration, which were ignored, and how each factor was weighted; a decision tree similarly exposes the exact sequence of rules that produced its output.
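As a minimal illustration, assuming scikit-learn and its built-in iris dataset (the model settings are arbitrary), both a linear model's weights and a decision tree's rules can be printed and read directly:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# A linear model's learned weights show how each input feature is
# weighted (here, the weights for the first iris class).
linear = LogisticRegression(max_iter=1000).fit(X, y)
for name, weight in zip(data.feature_names, linear.coef_[0]):
    print(f"{name}: {weight:+.3f}")

# A decision tree's learned rules can be printed and read directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=data.feature_names))
```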
Such models are useful for companies or projects that require high accountability and trust, or the ability to reproduce the steps that led to the outcome. In this way, white boxes are vital to domains such as:
- Risk assessment where the stakes are high. This can include medical and financial applications where mistakes can cost lives or livelihoods.
- Robotics and autonomous vehicles where safety is a concern. This can include self-driving cars and policing robots where mistakes can be deadly.
- Scientific research where you need clearly reproducible results.
White box models are also useful when you want to understand what went wrong with a process and how to improve it in the future. For example, if a business understands how a model arrived at its prediction, it can more easily improve outcomes and guard against failures.
Building a white box model involves:
- Choosing a model that has higher transparency and interpretability (such as linear models, decision/regression trees, or fixed-rule models), but is still suitable for the problem at hand.
- Choosing input features that are suitable for the problem but still understandable.
- Interpreting, regularizing, and validating the model, then communicating the results with a transparent overview of the steps that produced them (a workflow sketched below).
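A minimal sketch of that workflow, assuming scikit-learn and its built-in diabetes dataset (the regularization strength is an illustrative choice): Lasso regularization keeps the model sparse enough to explain, cross-validation checks it, and the surviving coefficients are the transparent overview you report.

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

data = load_diabetes()
X, y = data.data, data.target

# 1. Choose a transparent model: Lasso regularization drives
#    uninformative feature weights toward exactly zero.
model = Lasso(alpha=0.5)

# 2. Validate it before trusting it.
print(f"Cross-validated R^2: {cross_val_score(model, X, y, cv=5).mean():.3f}")

# 3. Communicate: report the handful of features the model actually uses.
model.fit(X, y)
for name, coef in zip(data.feature_names, model.coef_):
    if coef != 0:
        print(f"{name}: {coef:+.1f}")
```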
White Box Pros:
- Have started to rise in popularity due to increasing mistrust of complicated and opaque AI systems
- Can produce reliable and useful predictions because they tend to capture simpler, more linear relationships
White Box Cons:
- Commonly don’t produce groundbreaking results or innovative new ideas
- Not suitable for modeling more complex relationships, which can also lead to lower accuracy.
What Is a Black Box Machine Learning Model and How Do You Build One?
Black box machine learning models (Black Boxes), on the other hand, rank higher on innovation and accuracy, but lower on transparency and interpretability. Black Boxes produce output based on your input data set, but do not – and cannot – clarify how they came to those conclusions. So, while a user can observe the input variables and the output, everything in between – the calculations and the process – is not available. Even if it were, humans would not be able to understand it.
Black Boxes tend to model extremely complex scenarios with deep, non-linear interactions within the data. Some examples include:
- Deep-learning models
- Boosting models
- Random forest models
It is typical for such models to apply high-dimensional, non-linear transformations to the input variables, a process already too complex for humans to follow. Unlike their White Box counterparts, Black Boxes do not provide a breakdown of weighted scores, nor do they explain in any simple way how the different features relate to each other.
Black Boxes typically have many layers, and employ algorithms and models that are non-linear in nature. They also make use of ML techniques like embeddings and hidden representations. Essentially, any time data scientists incorporate techniques that render features incomprehensible to humans (such as using deep learning to computationally generate features), the result is a Black Box model.
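For illustration, here is a small neural network built with scikit-learn (the layer sizes and dataset are arbitrary choices): it produces usable predictions, but inspecting it yields only stacks of weight matrices, not human-readable rules.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# A small neural network: layered, non-linear transformations of the input.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0),
)
model.fit(X, y)
print(model.predict(X[:5]))  # usable predictions...

# ...but the "explanation" is just stacks of learned weight matrices.
for weights in model.named_steps["mlpclassifier"].coefs_:
    print(weights.shape)  # e.g. (30, 64), (64, 32), (32, 1)
```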
Despite their opacity, even to the developers who created them, Black Boxes have been the traditional standard for ML algorithms. They have produced some of the most accurate and groundbreaking results to date, thanks to their ability to model complex behavior that is truer to real-life scenarios. For example, Black Boxes are behind the way ChatGPT and other GPT-driven (Generative Pre-trained Transformer) applications work, as well as text-to-image generators like Stable Diffusion.
Black Box Pros:
- High predictive accuracy
- Indispensable for speech recognition, image recognition, natural language processing, recommendation systems, and fraud detection
- Effective at sorting out large amounts of data since they can model complex relationships among large data sets
Black Box Cons:
- Low algorithm interpretability
- Difficult to explain to internal or external stakeholders why or how a decision was made
- Results may not be perfectly reproducible
Algorithm Transparency in Machine Learning
In the field of ML, the concept of algorithmic transparency is becoming increasingly important. While such transparency used to be a requirement only for sensitive industries like finance and health, even these industries are starting to embrace Black Boxes in order to gain greater effectiveness at the expense of intelligibility and accountability. But society needs to strike a balance, especially when we are becoming increasingly reliant on machines to help us with important life and societal decisions.
If we have no ability to comprehend how these powerful decision-making machines work, but continue to give them more agency over our lives, it will undoubtedly lead to both practical and legal consequences.
This has led to a number of initiatives, including:
- More businesses are starting with White Boxes to see whether they meet their needs, turning to Black Boxes only when the gain in accuracy justifies the loss of transparency.
- More research is being conducted with the goal of making Black Boxes more transparent. For example:
- LIME (Local Interpretable Model-agnostic Explanations) is a technique that tries to make Black Boxes more understandable by learning the behavior of the underlying model (changing the input and observing how the predictions change) and by locally approximating the non-interpretable model with an interpretable one, such as a linear model with minimal non-zero coefficients (a sketch follows below).
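A minimal sketch of LIME in use, assuming the third-party `lime` package (pip install lime) and a random forest standing in for the black box:

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target
black_box = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs one instance, watches how the black box's predictions
# change, and fits a small local linear model to approximate its behavior.
explanation = explainer.explain_instance(
    X[0], black_box.predict_proba, num_features=5
)
print(explanation.as_list())  # the top local feature weights
```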
Conclusions – Are Black Box Models Better than White Box Models?
There are tradeoffs and balances in everything, and ML models are no exception. The field is moving very fast, and there is now more social awareness of its impact, as well as of the potential consequences of not understanding the inner workings of the complex algorithms that play an increasing role in our lives. People are increasingly skeptical about how their personal data (which can enable discrimination) is fed into ML systems, prompting policy makers to require greater accountability for ML models. At the same time, ML tools like ChatGPT are becoming ever more embedded in our everyday lives.
White box models and black box models strike very different balances between interpretability and accuracy. While Black Boxes are currently in the ascendant, they may not be appropriate for every use case. Understanding the difference is crucial when deciding which approach to use for your organization, or whether to combine both approaches to achieve transparency and accuracy at the same time.
There is also a lot of exciting work being done to strike this balance by increasing Black Box transparency through software that uses reverse engineering and approximation techniques. These third-party tools are promising solutions for quality control and will likely improve over time.
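One such approximation technique is a global surrogate: training an interpretable model to mimic a black box's predictions. A minimal sketch, with a gradient-boosting model standing in for the black box (the datasets and depth limit are illustrative):

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_wine()
X, y = data.data, data.target

# The opaque model we want to explain.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's *outputs*, not the true labels,
# so the readable tree approximates the black box's behavior.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = surrogate.score(X, black_box.predict(X))
print(f"Surrogate matches the black box on {fidelity:.1%} of inputs")
print(export_text(surrogate, feature_names=list(data.feature_names)))
```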
Next steps:
Sign up for a free ActiveState Platform account and start experimenting with black box and white box models using Python.
Read Similar Stories
Learn about the top 10 machine learning algorithms that can save a developer's day. Follow along to build with Python's scikit-learn and more.
XGBoost and Random Forest are two popular decision tree algorithms for machine learning. We compare their features and suggest the best use cases for each.
Learn how to use saliency maps to understand which parts of a photo neural networks consider important when classifying images.