Possibility and Injustice/Bias in AI Transfer Learning


Original CCSWG Discussion

This is a "code critique" posted in the 2020 Critical Code Studies Working Group, on how transfer learning in artificial intelligence opens up philosophical conversations on ontological emergence while dramatizing the social and cultural costs of using pre-trained neural networks. As a case in point, it looks at a simple implementation of transfer learning using Google's Tensor Flow framework.

Author/s: TensorFlow

Language/s: Python (with NumPy, TensorFlow, Keras, and TensorFlow Hub)

Year/s of development: Current Learning Materials at tensorflow.org

Location of code: https://www.tensorflow.org/tutorials/images/transfer_learning_with_hub

Overview:

In machine learning with neural networks, the practice of transfer learning reuses a network trained on one task for a related but different task, without fully retraining the original network. This approach works well in practice, and it seems to mirror how humans use previous experience to learn new tasks. The learning of the new task could then be thought of as “emergent” behavior, as the network classifies new input data under new categories.

There are many possibilities for philosophical reflection on this emergent behavior; at the same time, transfer learning demonstrates especially clearly how machine learning can be biased in potentially dangerous or unjust ways. In fact, in some of the early papers on multi-task and transfer learning, learning outcomes improve when the system learns with “bias,” a fact wholly accepted by the authors (see, for example, R. Caruana, “Multitask Learning,” Machine Learning, vol. 28, 1997, p. 44).

That some bias is necessary in learning or knowledge production is an insight philosophers have also come to understand through science studies and much contemporary thought. But this does not remove the potential danger or injustice. Consider, as an example of transfer learning’s possibility, multilingual language learning; similarly, consider transfer learning’s injustice in facial recognition. These possibilities and injustices are present in neural networks in general. The code I would like to consider, however, dramatizes this bias more fully.

The code to consider comes from an ML tutorial on the TensorFlow.org website. TensorFlow is a higher-level programming framework for neural-network-based ML. Interestingly, this tutorial uses TensorFlow Hub, a repository of reusable, pre-trained models. In some ways, this repository is the central gesture of transfer learning made into a new software platform.

To demonstrate the disconnect, and the potential bias, between the classifying task of the originally trained network and the transferred network applied to a new, related task, consider first of all that the pre-trained model from this repository is loaded and configured with just three statements (the tutorial has already imported tensorflow as tf and tensorflow_hub as hub):

classifier_url ="https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/2" #@param {type:"string"}

IMAGE_SHAPE = (224, 224)

classifier = tf.keras.Sequential([
    hub.KerasLayer(classifier_url, input_shape=IMAGE_SHAPE+(3,))
])
    

Secondly, near the beginning of the tutorial, a photograph of Grace Hopper, the famous woman in technology, is fed in as test data once the pre-trained network has been set up. The bias of the original network is shown by the fact that the resulting classifier labels the image “military uniform” (which Dr. Hopper is wearing) rather than “Grace Hopper” or “important early computer scientist,” etc.
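
In code, this test looks roughly like the sketch below. It is a reconstruction of the step just described rather than a verbatim quotation from the tutorial: the image and label-file URLs and the exact preprocessing are recalled from the tutorial and should be treated as illustrative. It assumes the classifier and IMAGE_SHAPE defined above.

import numpy as np
import PIL.Image as Image
import tensorflow as tf

# Download and preprocess the Grace Hopper test photograph
grace_hopper = tf.keras.utils.get_file(
    'grace_hopper.jpg',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg')
grace_hopper = Image.open(grace_hopper).resize(IMAGE_SHAPE)
grace_hopper = np.array(grace_hopper) / 255.0   # scale pixel values to [0, 1]

# Run the pre-trained classifier and take the highest-scoring ImageNet class
result = classifier.predict(grace_hopper[np.newaxis, ...])   # add a batch dimension
predicted_class = np.argmax(result[0], axis=-1)

# Decode the class index with the ImageNet label list; the winning label is
# "military uniform", not anything identifying Grace Hopper herself
labels_path = tf.keras.utils.get_file(
    'ImageNetLabels.txt',
    'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
imagenet_labels = np.array(open(labels_path).read().splitlines())
print(imagenet_labels[predicted_class])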

Continuing through the tutorial, a pre-trained network is loaded for a second example, and its weights are explicitly set not to be trained further (the whole point of transfer learning is not having to retrain them):

feature_extractor_layer.trainable = False
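
For context, feature_extractor_layer is itself created from a second TensorFlow Hub model, the feature-vector variant of the same MobileNet V2 network, which outputs the activations of the layer just before the original classification head instead of ImageNet labels. A minimal sketch of that setup follows; the Hub URL is recalled from the tutorial and should be treated as illustrative.

import tensorflow_hub as hub

# Feature-vector variant of MobileNet V2: same transferred weights, but it
# returns a feature vector rather than ImageNet class scores
feature_extractor_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/2"

feature_extractor_layer = hub.KerasLayer(feature_extractor_url,
                                         input_shape=(224, 224, 3))

# The line quoted above then freezes these weights, so that only the new
# classification head will be trained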

Only the original network’s final classifying layer is removed (the feature-vector model exports the penultimate layer’s activations instead of ImageNet classifications), and a new classifying layer (a “classification head”) for recognizing flower species is added on top as the final set of nodes:

# "layers" here is tf.keras.layers; image_data is the tutorial's flower-photo
# dataset, so num_classes is the number of flower species to recognize
model = tf.keras.Sequential([
  feature_extractor_layer,                                     # frozen, pre-trained feature extractor
  layers.Dense(image_data.num_classes, activation='softmax')   # new classification head
])

Until the network is modified further in the tutorial, by adding and training the classification head, the flower detection is not as “accurate.” So in this case, the neural network is potentially unjust until it is accurate. That the recognition is at first inaccurate shows the limitations of transfer learning; after the classification head is added and trained, the network identifies most of the flowers accurately. Herein lies transfer learning’s philosophical possibility, but this possibility has its own limitations.
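
The modification the paragraph above refers to is, in effect, a short compile-and-fit step: only the new classification head’s weights are updated, while the frozen, transferred features are reused as-is. The sketch below illustrates this; the optimizer, loss, and epoch count are assumptions for illustration, not necessarily the tutorial’s exact choices.

# Train only the new classification head; feature_extractor_layer.trainable
# is False, so the transferred weights stay exactly as they were
model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss='categorical_crossentropy',
    metrics=['acc'])

model.fit(image_data, epochs=2)

# After a few epochs the head has adapted the transferred features to the
# flower classes, and most test images are now labeled with the right species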

Questions: