Learning With NLP by John LaVaMe
More information: Get Learning With NLP by John LaVaMe at Salaedu.com
Description
Hi! I’m a second-year PhD student in computer science, conducting research in natural language processing at Stanford University. I am grateful to be co-advised by Chris Manning and Percy Liang.
My goals are to design systems that robustly and efficiently learn to understand human languages, with the aim of advancing human communication and education, and to teach others.
Feel free to look me up on Google Scholar or Twitter, or to take a look at my CV.
I’m broadly interested in research topics relating to the skills and abilities neural models are theoretically capable of, the subset they empirically acquire through (self-)supervision, the subset we can interpret, and expanding all of these sets. In particular, I tend to work in (interpreting) representation learning, induction of latent hierarchical structure, settings with small data, and multilinguality. As an undergraduate at Penn, I worked in the lab of Chris Callison-Burch.
News
[November 2019] I’m giving a talk at Berkeley on Probing Neural NLP: Ideas and Problems!
[November 2019] Designing and Interpreting Probes with Control Tasks was named Best Paper Runner-Up at EMNLP 2019! Slides from the talk are available here.
[August 2019] My work with Percy Liang on designing and understanding neural probes with random control tasks has been accepted to EMNLP! The paper and a blog post are both available.
[July 2019] I’m giving a talk at Amazon AI on finding and understanding emergent linguistic structure in neural NLP!
[June 2019] I gave a talk at NAACL 2019 on structural probes! A PDF of the slides is now available.
[April 2019] I had a great time being interviewed by Waleed Ammar and Matt Gardner about structural probes! You can find the NLP Highlights podcast episode on the topic here.
[March 2019] I’m presenting a poster on syntax in unsupervised representations of language at the Stanford Human-Centered Artificial Intelligence Institute Symposium.
[February 2019] My work with Chris Manning on methods for finding syntax trees embedded in contextual representations of language has been accepted to NAACL 2019!
[October 2018] I’ve started offering office hours for research-interested Stanford undergraduates!
Research Office Hours for Undergraduates (ROHU)
An open time for undergraduates looking for advice and discussions on natural language processing research. Learn more.
ROHU is on hiatus over winter break and will return in updated form in Winter Quarter 2019! Until then, feel free, as always, to reach out to me.
When: Wednesdays, 6:00-7:00 PM (starting Oct 9!)
Where: Gates Building, 2nd floor, A-wing, big middle room (219). (Or 219a, in the back, if 219 is busy.)
Who: You, Stanford undergraduates wanting to chat about research; me, wanting to help.
NLP online course
So what is NLP?
NLP stands for Neuro-Linguistic Programming: Neuro refers to your neurology, Linguistic refers to language, and Programming refers to how that neural language functions.
In other words, learning NLP is like learning the language of your own mind!
NLP is the study of excellent communication, both with yourself and with others.
It was developed by modeling excellent communicators and therapists who got results with their clients.
NLP is a set of tools and techniques, but it is so much more than that.
It is an attitude and a methodology of knowing how to achieve your goals and get results.
We encourage you to check the Content Proof carefully before paying. The following contents are excepted: online coaching, software, the Facebook group, and Skype and email support from the author. If you can afford it, we encourage you to buy this product from the original author to receive these “excepted” contents in full. Thank you!