Explaining Recurrent Neural Network Predictions in Sentiment Analysis

    1. [1] Fraunhofer Heinrich Hertz Institute, Berlin, Germany
    2. [2] Technische Universität Berlin, Berlin, Germany
  • Published in: 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, WASSA 2017: Proceedings of the Workshop / Alexandra Balahur Dobrescu (ed.), Saif M. Mohammad (ed.), Erik van der Goot (ed.), 2017, ISBN 978-1-945626-95-1, pp. 159-168
  • Language: English
  • Abstract
    • Recently, a technique called Layer-wise Relevance Propagation (LRP) was shown to deliver insightful explanations in the form of input space relevances for understanding feed-forward neural network classification decisions. In the present work, we extend the usage of LRP to recurrent neural networks. We propose a specific propagation rule applicable to multiplicative connections as they arise in recurrent network architectures such as LSTMs and GRUs. We apply our technique to a word-based bi-directional LSTM model on a five-class sentiment prediction task, and evaluate the resulting LRP relevances both qualitatively and quantitatively, obtaining better results than a gradient-based related method which was used in previous work.
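The abstract's two ingredients can be sketched in a few lines of NumPy. This is a hedged illustration, not the authors' released code: the epsilon-stabilized rule for linear layers is the standard LRP formulation, and the multiplicative rule shown here (gate receives zero relevance, the signal neuron keeps all of it) is one common reading of how such two-way products in LSTM/GRU cells can be handled. The function names `lrp_linear` and `lrp_multiplicative` are illustrative choices.

```python
import numpy as np

def lrp_linear(x, W, b, R_out, eps=1e-3):
    """Epsilon-rule for a linear layer z = W @ x + b.

    Redistributes the output relevances R_out onto the inputs x in
    proportion to each input's contribution W[j, i] * x[i] to the
    pre-activation z[j], with an epsilon stabilizer in the denominator.
    """
    z = W @ x + b                                   # pre-activations, shape (out,)
    denom = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilized, sign-matched
    # R_in[i] = sum_j W[j, i] * x[i] * R_out[j] / denom[j]
    return (W * x).T @ (R_out / denom)

def lrp_multiplicative(R_out):
    """Gated product z = gate * signal (as in LSTM/GRU cells).

    Under the rule sketched here, the signal neuron inherits all of the
    relevance and the gate neuron receives none.
    """
    return R_out.copy(), np.zeros_like(R_out)       # (R_signal, R_gate)
```

With a small bias and epsilon, the linear rule approximately conserves relevance: summing `lrp_linear(x, W, b, R_out)` recovers roughly `R_out.sum()`, which is the property that makes the resulting word-level scores interpretable as shares of the classifier's decision.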
