As mentioned in our submitted paper, PromptExplainer requires only a few lines of code and can be seamlessly integrated into existing prompt-based learning models without requiring any additional parameters.

Taking OpenPrompt (https://github.com/thunlp/OpenPrompt), which provides reference implementations of multiple SOTA prompt-based models, as an example, PromptExplainer can be implemented with a few lines of work by modifying pipeline_base.py (https://github.com/thunlp/OpenPrompt/blob/main/openprompt/pipeline_base.py).

The following code can be inserted at line 301 of pipeline_base.py to implement PromptExplainer:


        outputs = self.prompt_model(batch)  # equation 4: forward pass through the PLM
        outputs = self.verbalizer.gather_outputs(outputs)
        if isinstance(outputs, tuple):
            outputs_at_mask = [self.extract_at_mask(output, batch) for output in outputs]
        else:
            outputs_at_mask = self.extract_at_mask(outputs, batch)
        label_words_logits = self.verbalizer.process_outputs(outputs_at_mask, batch=batch)
        # PromptExplainer: apply the verbalizer to every token position, not only the <mask>.
        os1, os2, os3 = outputs.shape  # (batch_size, seq_len, vocab_size)
        output_all = outputs.reshape(os1 * os2, os3)
        all_token_logits = self.verbalizer.process_outputs(output_all, batch=batch)  # equation 5
        all_token_logits = all_token_logits.reshape(os1, os2, label_words_logits.shape[1])
        all_token_softmax = F.softmax(all_token_logits, dim=-1)  # equation 6
        E = all_token_softmax[:, :, i]  # equation 7, i is the class number

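To illustrate the tensor manipulations above in isolation, here is a self-contained sketch of the PromptExplainer steps (equations 5-7) using dummy tensors. The toy dimensions, the `label_word_ids` mapping, and the index-based stand-in for OpenPrompt's verbalizer are all illustrative assumptions, not part of the actual implementation:

```python
import torch
import torch.nn.functional as F

batch_size, seq_len, vocab_size = 2, 8, 50  # toy dimensions (assumed)
num_classes = 3
# One label word id per class; a stand-in for the verbalizer's label-word mapping.
label_word_ids = torch.tensor([7, 19, 33])

# Stand-in for the PLM's vocabulary logits at every token position (equation 4).
outputs = torch.randn(batch_size, seq_len, vocab_size)

# Equation 5: map every token's vocabulary logits to class logits.
os1, os2, os3 = outputs.shape
output_all = outputs.reshape(os1 * os2, os3)
all_token_logits = output_all[:, label_word_ids]  # (os1 * os2, num_classes)
all_token_logits = all_token_logits.reshape(os1, os2, num_classes)

# Equation 6: normalize the class logits at each token position.
all_token_softmax = F.softmax(all_token_logits, dim=-1)

# Equation 7: the explanation for class i is the per-token probability of class i.
i = 0
E = all_token_softmax[:, :, i]  # shape (batch_size, seq_len)
print(E.shape)
```

Each row of `E` assigns one score per input token, which is what makes the result usable as a token-level explanation for the predicted class.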

The code for the evaluation experiments will be released on GitHub.