Throughout the history of natural language processing, there have been many attempts to represent the meaning of text. A good solution to this problem could spur research on question answering, text summarization, text simplification, and more. One of the recent advances is Abstract Meaning Representation, or AMR. In this talk, we will discuss algorithms for building AMR graphs and the NLP applications that can benefit from them.
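To give a flavor of what an AMR graph looks like, here is the canonical example from the AMR specification for the sentence "The boy wants to go", followed by a minimal sketch (not from the talk) of the same graph as (source, role, target) triples, the form most AMR parsers output. The variable names `w`, `b`, `g` follow the spec's example; the triple encoding itself is an illustrative assumption.

```python
# PENMAN notation for "The boy wants to go" (canonical AMR spec example):
#
#   (w / want-01
#      :ARG0 (b / boy)
#      :ARG1 (g / go-02
#               :ARG0 b))
#
# Sketch: the same graph as (source, role, target) triples.
from collections import defaultdict

triples = [
    ("w", ":instance", "want-01"),
    ("b", ":instance", "boy"),
    ("g", ":instance", "go-02"),
    ("w", ":ARG0", "b"),
    ("w", ":ARG1", "g"),
    ("g", ":ARG0", "b"),  # re-entrancy: the boy is both the wanter and the goer
]

# Group each node's outgoing role edges to inspect the graph structure.
edges = defaultdict(list)
for src, role, tgt in triples:
    if role != ":instance":
        edges[src].append((role, tgt))

print(edges["w"])  # [(':ARG0', 'b'), (':ARG1', 'g')]
```

Note the re-entrancy on `b`: unlike a syntax tree, an AMR is a graph, so a single node can fill roles in several predicates.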

    Meaning Representation for Natural Language Understanding by Mariana Romanyshyn

    • On-Demand Recording


Instructor Bio:

Mariana Romanyshyn is a computational linguist passionate about building natural language processing applications. She has professional experience with syntactic parsing, sentiment analysis, named entity recognition, fact extraction, text anonymization, and more. For the last four years, Mariana has been working on error correction and text improvement algorithms at Grammarly. She cares deeply about computational linguistics, constantly looks for talented linguists, and spreads the word about the field of NLP by speaking at AI conferences, collaborating with Ukrainian universities, and organizing educational events. Mariana's main interest is structural linguistics as a method of formalizing natural language.

Mariana Romanyshyn

Technical Lead at Grammarly Inc.