I am currently a Ph.D. student in the Cognitive Science department at Johns Hopkins University, where I work on computational semantics. As an Ertegun Scholar I began working with vector representations of word and phrase meanings, and my current computational work continues that line. It focuses on representing aspects of meaning that are compositional—i.e. derived relatively straightforwardly from the meanings of the parts—as well as those that are noncompositional, arising from the linguistic context and from idiosyncratic relations between elements of that context. For instance, why does a vampire stake denote a stake used to kill a vampire, and a vampire cape a cape that a vampire wears, while a tent stake denotes a stake used to secure a tent, and tent cape leads only to confusion? Presumably the answer has something to do with the kinds of relationships that things like vampires and stakes tend to have with one another—so the question is how those item-wise relationships are represented, and how a final meaning for the phrase is selected on their basis. My current work draws on ideas and models from Harmonic Grammar, developing neural networks that compute over vector representations of semantic objects in order to learn and derive these sorts of semantic subtleties automatically.
Right now (2018), I am preparing a dissertation on how structured semantic objects (as opposed to unstructured “bags of words”) are represented in the brain. How do you know that a house dog is a kind of dog, while a dog house is a kind of house? Though several theoretical proposals exist, most models that have received empirical attention in neurolinguistics provide no answer to this basic question, which is the main object of my dissertation.
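To make the house dog / dog house puzzle concrete, here is a minimal sketch (not drawn from the dissertation itself) contrasting a bag-of-words representation with one of the structured proposals in the literature, tensor-product role-filler binding. The word and role vectors below are random toy vectors chosen purely for illustration; the point is that summing word vectors collapses the two phrases into one representation, while binding each word to a structural role (modifier vs. head) keeps them distinct.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# Toy word vectors (random, for illustration only).
house = rng.normal(size=dim)
dog = rng.normal(size=dim)

# Bag-of-words: an unordered sum of word vectors.
# Addition is commutative, so the two phrases come out identical.
bow_house_dog = house + dog
bow_dog_house = dog + house

# Tensor-product binding: each word is bound to its structural
# role (modifier vs. head) via an outer product, and the bindings
# are summed. Swapping the words now yields a different tensor.
modifier = rng.normal(size=dim)
head = rng.normal(size=dim)

tpr_house_dog = np.outer(modifier, house) + np.outer(head, dog)
tpr_dog_house = np.outer(modifier, dog) + np.outer(head, house)

print(np.allclose(bow_house_dog, bow_dog_house))  # True: bag-of-words confuses them
print(np.allclose(tpr_house_dog, tpr_dog_house))  # False: role binding distinguishes them
```

The sketch shows only why unstructured sums cannot answer the question; which structured scheme the brain actually uses is exactly what is at issue.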