@inproceedings{jin-etal-2022-probing,
    title = "Probing Script Knowledge from Pre-Trained Models",
    author = "Jin, Zijia  and
      Zhang, Xingyu  and
      Yu, Mo  and
      Huang, Lifu",
    editor = "Han, Wenjuan  and
      Zheng, Zilong  and
      Lin, Zhouhan  and
      Jin, Lifeng  and
      Shen, Yikang  and
      Kim, Yoon  and
      Tu, Kewei",
    booktitle = "Proceedings of the Workshop on Unimodal and Multimodal Induction of Linguistic Structures (UM-IoS)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2022.umios-1.10/",
    doi = "10.18653/v1/2022.umios-1.10",
    pages = "87--93",
    abstract = "Adversarial attack of structured prediction models faces various challenges such as the difficulty of perturbing discrete words, the sentence quality issue, and the sensitivity of outputs to small perturbations. In this work, we introduce SHARP, a new attack method that formulates the black-box adversarial attack as a search-based optimization problem with a specially designed objective function considering sentence fluency, meaning preservation and attacking effectiveness. Additionally, three different searching strategies are analyzed and compared, i.e., Beam Search, Metropolis-Hastings Sampling, and Hybrid Search. We demonstrate the effectiveness of our attacking strategies on two challenging structured prediction tasks: part-of-speech (POS) tagging and dependency parsing. Through automatic and human evaluations, we show that our method performs a more potent attack compared with pioneer arts. Moreover, the generated adversarial examples can be used to successfully boost the robustness and performance of the victim model via adversarial training."
}